whisper.ai

Name: whisper.ai
Version: 1.0.0.1
Home page: https://github.com/openai/whisper
Summary: Robust Speech Recognition via Large-Scale Weak Supervision
Upload time: 2022-12-02 01:32:14
Author: OpenAI
Requires Python: >=3.7
License: MIT
Requirements: numpy, torch, tqdm, more-itertools, transformers (>=4.19.0), ffmpeg-python (==0.2.0)
# Whisper

This is an UNOFFICIAL distribution of whisper.ai.

[[Blog]](https://openai.com/blog/whisper)
[[Paper]](https://cdn.openai.com/papers/whisper.pdf)
[[Model card]](model-card.md)
[[Colab example]](https://colab.research.google.com/github/openai/whisper/blob/master/notebooks/LibriSpeech.ipynb)

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.


## Approach

![Approach](approach.png)

A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. All of these tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing for a single model to replace many different stages of a traditional speech processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
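
The task and language are signaled to the decoder through those special tokens. As a rough illustration, the snippet below inspects the start-of-transcript sequence; it assumes the installed package exposes `whisper.tokenizer.get_tokenizer` as in the upstream repository, so treat it as a sketch rather than a guaranteed API.

```python
from whisper.tokenizer import get_tokenizer  # assumed to match the upstream module

# multilingual tokenizer configured for Japanese speech translated into English
tokenizer = get_tokenizer(multilingual=True, language="ja", task="translate")

# the decoder is primed with a sequence like <|startoftranscript|><|ja|><|translate|>
print(tokenizer.sot_sequence)
print(tokenizer.decode(list(tokenizer.sot_sequence)))
```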


## Setup

We used Python 3.9.9 and [PyTorch](https://pytorch.org/) 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.7 or later and recent PyTorch versions. The codebase also depends on a few Python packages, most notably [HuggingFace Transformers](https://huggingface.co/docs/transformers/index) for their fast tokenizer implementation and [ffmpeg-python](https://github.com/kkroening/ffmpeg-python) for reading audio files. The following command will pull and install the latest commit from this repository, along with its Python dependencies:

    pip install git+https://github.com/openai/whisper.git 

To update the package to the latest version of this repository, please run:

    pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git

It also requires the command-line tool [`ffmpeg`](https://ffmpeg.org/) to be installed on your system, which is available from most package managers:

```bash
# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on MacOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg
```

You may need [`rust`](http://rust-lang.org) installed as well, in case [tokenizers](https://pypi.org/project/tokenizers/) does not provide a pre-built wheel for your platform. If you see installation errors during the `pip install` command above, please follow the [Getting started page](https://www.rust-lang.org/learn/get-started) to install the Rust development environment. Additionally, you may need to configure the `PATH` environment variable, e.g. `export PATH="$HOME/.cargo/bin:$PATH"`. If the installation fails with `No module named 'setuptools_rust'`, you need to install `setuptools_rust`, e.g. by running:

```bash
pip install setuptools-rust
```


## Available models and languages

There are five model sizes, four with English-only versions, offering speed and accuracy tradeoffs. Below are the names of the available models and their approximate memory requirements and relative speed. 


|  Size  | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
|:------:|:----------:|:------------------:|:------------------:|:-------------:|:--------------:|
|  tiny  |    39 M    |     `tiny.en`      |       `tiny`       |     ~1 GB     |      ~32x      |
|  base  |    74 M    |     `base.en`      |       `base`       |     ~1 GB     |      ~16x      |
| small  |   244 M    |     `small.en`     |      `small`       |     ~2 GB     |      ~6x       |
| medium |   769 M    |    `medium.en`     |      `medium`      |     ~5 GB     |      ~2x       |
| large  |   1550 M   |        N/A         |      `large`       |    ~10 GB     |       1x       |

For English-only applications, the `.en` models tend to perform better, especially for the `tiny.en` and `base.en` models. We observed that the difference becomes less significant for the `small.en` and `medium.en` models.
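
Any name from the table can be passed to `whisper.load_model()`. Below is a minimal sketch of picking an English-only model and placing it on a GPU when one is available; the `device` argument is assumed to behave as in the upstream `load_model`.

```python
import torch
import whisper

# prefer the GPU if one is available; the table above gives a rough idea
# of how much VRAM each model size needs
device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("small.en", device=device)
print(model.device)
```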

Whisper's performance varies widely depending on the language. The figure below shows a WER breakdown by language on the Fleurs dataset, using the `large` model. More WER and BLEU scores corresponding to the other models and datasets can be found in Appendix D of [the paper](https://cdn.openai.com/papers/whisper.pdf).

![WER breakdown by language](language-breakdown.svg)



## Command-line usage

The following command will transcribe speech in audio files, using the `medium` model:

    whisper audio.flac audio.mp3 audio.wav --model medium

The default setting (which selects the `small` model) works well for transcribing English. To transcribe an audio file containing non-English speech, you can specify the language using the `--language` option:

    whisper japanese.wav --language Japanese

Adding `--task translate` will translate the speech into English:

    whisper japanese.wav --language Japanese --task translate

Run the following to view all available options:

    whisper --help

See [tokenizer.py](whisper/tokenizer.py) for the list of all available languages.
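
The language names can also be listed programmatically; a minimal sketch, assuming the package exposes the `LANGUAGES` mapping from the upstream `whisper/tokenizer.py`:

```python
from whisper.tokenizer import LANGUAGES  # assumed: {code: name}, e.g. "ja" -> "japanese"

# print every language code and name the tokenizer knows about
for code, name in sorted(LANGUAGES.items()):
    print(f"{code}: {name}")
```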


## Python usage

Transcription can also be performed within Python: 

```python
import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])
```

Internally, the `transcribe()` method reads the entire file and processes the audio with a sliding 30-second window, performing autoregressive sequence-to-sequence predictions on each window.
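
Besides the concatenated text, the returned dictionary also carries per-window results. Below is a minimal sketch of printing segment-level timestamps, assuming the result includes a `segments` list with `start`, `end`, and `text` fields as in the upstream implementation:

```python
import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")

# each segment corresponds to a stretch of decoded audio, with timing in seconds
for segment in result["segments"]:
    print(f"[{segment['start']:7.2f} -> {segment['end']:7.2f}] {segment['text']}")
```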

Below is an example usage of `whisper.detect_language()` and `whisper.decode()`, which provide lower-level access to the model.

```python
import whisper

model = whisper.load_model("base")

# load audio and pad/trim it to fit 30 seconds
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# make log-Mel spectrogram and move to the same device as the model
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# detect the spoken language
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# decode the audio
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)

# print the recognized text
print(result.text)
```
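
`DecodingOptions` exposes several fields that control decoding; here is a minimal sketch using a few of them (the field names assume the upstream `DecodingOptions` dataclass, so check your installed version):

```python
import whisper

model = whisper.load_model("base")
audio = whisper.pad_or_trim(whisper.load_audio("audio.mp3"))
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# force English transcription, decode with beam search, and stay in fp32
# (fp16=False is useful when running on CPU)
options = whisper.DecodingOptions(task="transcribe", language="en", beam_size=5, fp16=False)
result = whisper.decode(model, mel, options)
print(result.text)
```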

## More examples

Please use the [🙌 Show and tell](https://github.com/openai/whisper/discussions/categories/show-and-tell) category in Discussions for sharing more example usages of Whisper and third-party extensions such as web demos, integrations with other tools, ports for different platforms, etc.


## License

The code and the model weights of Whisper are released under the MIT License. See [LICENSE](LICENSE) for further details.

            
