msclap

- Name: msclap
- Version: 1.3.3 (PyPI)
- Summary: CLAP (Contrastive Language-Audio Pretraining) is a model that learns acoustic concepts from natural language supervision and enables “Zero-Shot” inference. The model has been extensively evaluated on 26 audio downstream tasks, achieving SoTA in several of them, including classification, retrieval, and captioning.
- Upload time: 2023-10-20 21:18:51
- Author: Benjamin Elizalde
- Requires Python: >=3.8,<4.0
- License: MIT
###### [Overview](#CLAP) | [Setup](#Setup) | [CLAP weights](#CLAP-weights) | [Usage](#Usage) | [Examples](#Examples) | [Citation](#Citation)

# CLAP

CLAP (Contrastive Language-Audio Pretraining) is a model that learns acoustic concepts from natural language supervision and enables “Zero-Shot” inference. The model has been extensively evaluated on 26 audio downstream tasks, achieving SoTA in several of them, including classification, retrieval, and captioning.

<img width="832" alt="clap_diagrams" src="docs/clap2_diagram.png">

## Setup

First, install Python 3.8 or higher (3.11 recommended). Then install CLAP using either of the following:

```shell
# Install the PyPI package
pip install msclap

# Or install the latest (unstable) version from source
pip install git+https://github.com/microsoft/CLAP.git
```

## CLAP weights
CLAP weights are downloaded automatically (choose between versions _2022_, _2023_, and _clapcap_). They are also available from [Zenodo](https://zenodo.org/record/8378278) and [Hugging Face](https://huggingface.co/microsoft/msclap).

_clapcap_ is the audio captioning model that uses the 2023 encoders.

## Usage

- Zero-Shot Classification and Retrieval
```python
from msclap import CLAP

# Load the model (choose between versions '2022' and '2023').
# The model weights are downloaded automatically if `model_fp` is not specified.
clap_model = CLAP(version='2023', use_cuda=False)

# Candidate class labels and audio files (example values)
class_labels = ['dog bark', 'rain', 'siren']          # List[str]
file_paths = ['audio/dog.wav', 'audio/rain.wav']      # List[str]

# Extract text embeddings
text_embeddings = clap_model.get_text_embeddings(class_labels)

# Extract audio embeddings
audio_embeddings = clap_model.get_audio_embeddings(file_paths)

# Compute similarity between audio and text embeddings
similarities = clap_model.compute_similarity(audio_embeddings, text_embeddings)
```
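The similarity matrix can then be turned into zero-shot predictions by taking a softmax over the candidate labels and picking the top-scoring one per clip. A minimal standalone sketch of that step, using a synthetic NumPy matrix in place of the tensor returned by `compute_similarity` (the scores, labels, and shapes below are hypothetical):

```python
import numpy as np

# Hypothetical similarity scores: 2 audio clips x 3 candidate labels.
# In msclap, compute_similarity returns such an (audio x text) matrix;
# a synthetic array stands in so this sketch runs standalone.
similarities = np.array([
    [0.1, 2.3, 0.4],
    [1.8, 0.2, 0.3],
])
class_labels = ['dog bark', 'rain', 'siren']

# Softmax over the label axis turns raw scores into per-clip probabilities
shifted = similarities - similarities.max(axis=1, keepdims=True)
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

# Top-1 prediction for each clip
predictions = [class_labels[i] for i in probs.argmax(axis=1)]
print(predictions)  # ['rain', 'dog bark']
```

In practice the same softmax/argmax is applied to the matrix returned by `compute_similarity`.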

- Audio Captioning
```python
from msclap import CLAP

# Load the audio captioning model (version 'clapcap')
clap_model = CLAP(version='clapcap', use_cuda=False)

# Audio files to caption (example paths)
file_paths = ['audio/dog.wav', 'audio/rain.wav']      # List[str]

# Generate audio captions
captions = clap_model.generate_caption(file_paths)
```

## Examples
Take a look at [examples](./examples/) for usage examples. 

To run Zero-Shot Classification on the ESC50 dataset, try the following:

```bash
> cd examples && python zero_shot_classification.py
```
Output (version _2023_):
```bash
ESC50 Accuracy: 93.9%
```
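The figure above is top-1 accuracy: the percentage of clips whose highest-scoring label matches the ground truth. A minimal sketch of that metric (the label lists below are hypothetical, not ESC50 data):

```python
def top1_accuracy(predicted, true):
    """Percentage of clips whose top-scoring label matches the ground truth."""
    assert predicted and len(predicted) == len(true), "label lists must be non-empty and aligned"
    correct = sum(p == t for p, t in zip(predicted, true))
    return 100.0 * correct / len(true)

# Toy example: 2 of 3 hypothetical predictions are correct
print(top1_accuracy(['rain', 'dog bark', 'siren'],
                    ['rain', 'dog bark', 'wind']))
```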

## Citation

Kindly cite our work if you find it useful.

[CLAP: Learning Audio Concepts from Natural Language Supervision](https://ieeexplore.ieee.org/abstract/document/10095889)
```
@inproceedings{CLAP2022,
  title={{CLAP}: Learning Audio Concepts from Natural Language Supervision},
  author={Elizalde, Benjamin and Deshmukh, Soham and Al Ismail, Mahmoud and Wang, Huaming},
  booktitle={ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2023},
  organization={IEEE}
}
```

[Natural Language Supervision for General-Purpose Audio Representations](https://arxiv.org/abs/2309.05767)
```
@misc{CLAP2023,
      title={Natural Language Supervision for General-Purpose Audio Representations}, 
      author={Benjamin Elizalde and Soham Deshmukh and Huaming Wang},
      year={2023},
      eprint={2309.05767},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2309.05767}
}
```

## Contributing

This project welcomes contributions and suggestions.  Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft 
trademarks or logos is subject to and must follow 
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.

            
