openl3

Name: openl3
Version: 0.4.2
Home page: https://github.com/marl/openl3
Summary: Deep audio and image embeddings, based on the Look, Listen, and Learn approach
Upload time: 2023-05-03 18:19:43
Author: Aurora Cramer, Ho-Hsiang Wu, Bea Steers, and Justin Salamon
License: MIT
Keywords: deep audio embeddings, machine listening, learning, tensorflow, keras
# OpenL3

OpenL3 is an open-source Python library for computing deep audio and image embeddings.

[![PyPI](https://img.shields.io/badge/python-3.6%2C%203.7%2C%203.8-blue.svg)](https://pypi.python.org/pypi/openl3)
[![MIT license](https://img.shields.io/badge/License-MIT-blue.svg)](https://choosealicense.com/licenses/mit/)
[![Build Status](https://travis-ci.com/marl/openl3.svg?branch=main)](https://travis-ci.com/marl/openl3)
[![Coverage Status](https://coveralls.io/repos/github/marl/openl3/badge.svg?branch=main)](https://coveralls.io/github/marl/openl3?branch=main)
[![Documentation Status](https://readthedocs.org/projects/openl3/badge/?version=latest)](http://openl3.readthedocs.io/en/latest/?badge=latest)
[![Downloads](https://pepy.tech/badge/openl3)](https://pepy.tech/project/openl3)

Please refer to the [documentation](https://openl3.readthedocs.io/en/latest/) for detailed instructions and examples.

> **UPDATE:** OpenL3 now has TensorFlow 2 support!

> **NOTE:** Whoops! A bug was reported in the [training code](https://github.com/marl/l3embedding): positive audio-image pairs drawn from the same video do not necessarily overlap in time. Nonetheless, the embeddings still appear to capture useful semantic information.

The audio and image embedding models provided here are published as part of [1], and are based on the Look, Listen and Learn approach [2]. For details about the embedding models and how they were trained, please see:

[Look, Listen and Learn More: Design Choices for Deep Audio Embeddings](http://www.justinsalamon.com/uploads/4/3/9/4/4394963/cramer_looklistenlearnmore_icassp_2019.pdf)<br/>
Aurora Cramer, Ho-Hsiang Wu, Justin Salamon, and Juan Pablo Bello.<br/>
IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pages 3852–3856, Brighton, UK, May 2019.


# Installing OpenL3

Dependencies
------------

#### libsndfile
OpenL3 depends on the `pysoundfile` module to load audio files, which in turn depends on the non-Python library
``libsndfile``. On Windows and macOS, ``libsndfile`` is installed automatically via ``pip``, so you can skip this step.
However, on Linux it must be installed manually via your platform's package manager.
For Debian-based distributions (such as Ubuntu), this can be done by simply running

    apt-get install libsndfile1

Alternatively, if you are using `conda`, you can install `libsndfile` simply by running

    conda install -c conda-forge libsndfile

For more detailed information, please consult the
[`pysoundfile` installation documentation](https://pysoundfile.readthedocs.io/en/0.9.0/#installation).


#### Tensorflow
Starting with `openl3>=0.4.0`, OpenL3 has been upgraded to use TensorFlow 2. Because TensorFlow 2 includes GPU support out of the box, `tensorflow>=2.0.0` is included as a dependency and no longer needs to be installed separately.

If you want to use TensorFlow 1.x, install an older release with `pip install 'openl3<=0.3.1'`.

##### TensorFlow 1.x & OpenL3 <= v0.3.1
Because TensorFlow 1.x comes in separate CPU-only and GPU variants, we leave it up to the user to install the version that best fits
their use case.

On most platforms, either of the following commands should properly install TensorFlow:

```bash
pip install "tensorflow<1.14" # CPU-only version
pip install "tensorflow-gpu<1.14" # GPU version
```

For more detailed information, please consult the
[Tensorflow installation documentation](https://www.tensorflow.org/install/).


Installing OpenL3
-----------------
The simplest way to install OpenL3 is by using ``pip``, which will also install the additional required dependencies
if needed. To install OpenL3 using ``pip``, simply run

    pip install openl3

To install the latest version of OpenL3 from source:

1. Clone or pull the latest version, only retrieving the ``main`` branch to avoid downloading the branch where we store the model weight files (these will be properly downloaded during installation).

        git clone git@github.com:marl/openl3.git --branch main --single-branch

2. Install using pip to handle Python dependencies. The installation also downloads the model files, **which requires a stable network connection**.

        cd openl3
        pip install -e .

# Using OpenL3

To help you get started with OpenL3 please see the
[tutorial](http://openl3.readthedocs.io/en/latest/tutorial.html).
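The tutorial covers the full API. One detail worth internalizing up front: openl3 computes one embedding per one-second window, taken every `hop_size` seconds (0.1 s by default). The helper below estimates the resulting frame count using integer sample arithmetic; the window/hop values are assumptions based on the library's documented defaults, and the optional centering (`center=True`) pads the input and adds frames beyond this estimate.

```python
def n_embedding_frames(duration_s, sr=48000, window_s=1.0, hop_s=0.1):
    """Estimate how many embedding frames openl3 yields for a clip,
    ignoring the optional centering/padding."""
    n = int(round(duration_s * sr))      # total samples
    win = int(round(window_s * sr))      # samples per analysis window
    hop = int(round(hop_s * sr))         # samples between window starts
    if n < win:
        return 0
    return (n - win) // hop + 1

# A 2-second clip yields 11 frames at the default 0.1 s hop:
# window starts at 0.0, 0.1, ..., 1.0 seconds.
print(n_embedding_frames(2.0))  # → 11
```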


# Acknowledging OpenL3

Please cite the following papers when using OpenL3 in your work:

[1] [Look, Listen and Learn More: Design Choices for Deep Audio Embeddings](http://www.justinsalamon.com/uploads/4/3/9/4/4394963/cramer_looklistenlearnmore_icassp_2019.pdf)<br/>
Aurora Cramer, Ho-Hsiang Wu, Justin Salamon, and Juan Pablo Bello.<br/>
IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pages 3852–3856, Brighton, UK, May 2019.

[2] [Look, Listen and Learn](http://openaccess.thecvf.com/content_ICCV_2017/papers/Arandjelovic_Look_Listen_and_ICCV_2017_paper.pdf)<br/>
Relja Arandjelović and Andrew Zisserman.<br/>
IEEE International Conference on Computer Vision (ICCV), Venice, Italy, Oct. 2017.

# Model Weights License
The model weights are made available under a [Creative Commons Attribution 4.0 International (CC BY 4.0) License](https://creativecommons.org/licenses/by/4.0/).

            
