# Whisper
[[Blog]](https://openai.com/blog/whisper)
[[Paper]](https://arxiv.org/abs/2212.04356)
[[Model card]](https://github.com/openai/whisper/blob/main/model-card.md)
[[Colab example]](https://colab.research.google.com/github/openai/whisper/blob/master/notebooks/LibriSpeech.ipynb)
Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.
## Approach
![Approach](https://raw.githubusercontent.com/openai/whisper/main/approach.png)
A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. These tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing a single model to replace many stages of a traditional speech-processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
## Setup
We used Python 3.9.9 and [PyTorch](https://pytorch.org/) 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.8-3.11 and recent PyTorch versions. The codebase also depends on a few Python packages, most notably [OpenAI's tiktoken](https://github.com/openai/tiktoken) for their fast tokenizer implementation. You can download and install (or update to) the latest release of Whisper with the following command:
    pip install -U openai-whisper
Alternatively, the following command will pull and install the latest commit from this repository, along with its Python dependencies:
    pip install git+https://github.com/openai/whisper.git
To update the package to the latest version of this repository, please run:
    pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git
It also requires the command-line tool [`ffmpeg`](https://ffmpeg.org/) to be installed on your system, which is available from most package managers:
```bash
# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg
# on Arch Linux
sudo pacman -S ffmpeg
# on macOS using Homebrew (https://brew.sh/)
brew install ffmpeg
# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg
# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg
```
You may need [`rust`](http://rust-lang.org) installed as well, in case [tiktoken](https://github.com/openai/tiktoken) does not provide a pre-built wheel for your platform. If you see installation errors during the `pip install` command above, please follow the [Getting started page](https://www.rust-lang.org/learn/get-started) to install the Rust development environment. Additionally, you may need to configure the `PATH` environment variable, e.g. `export PATH="$HOME/.cargo/bin:$PATH"`. If the installation fails with `No module named 'setuptools_rust'`, you need to install `setuptools_rust`, e.g. by running:
```bash
pip install setuptools-rust
```
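After installation, a quick sanity check can confirm that both the Python package and `ffmpeg` are reachable. The snippet below is a minimal sketch, assuming only the steps above; it uses `whisper.available_models()` to list the model names that can be loaded.

```python
# Minimal post-install check (a sketch; assumes the setup steps above succeeded).
import shutil

import whisper

# whisper shells out to ffmpeg to decode audio, so it must be on PATH
assert shutil.which("ffmpeg") is not None, "ffmpeg was not found on PATH"

# list the model names accepted by whisper.load_model()
print(whisper.available_models())
```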
## Available models and languages
There are five model sizes, four of which also have English-only versions, offering tradeoffs between speed and accuracy. Below are the names of the available models and their approximate memory requirements and inference speed relative to the large model; actual speed may vary depending on many factors including the available hardware.
| Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
|:------:|:----------:|:------------------:|:------------------:|:-------------:|:--------------:|
| tiny | 39 M | `tiny.en` | `tiny` | ~1 GB | ~32x |
| base | 74 M | `base.en` | `base` | ~1 GB | ~16x |
| small | 244 M | `small.en` | `small` | ~2 GB | ~6x |
| medium | 769 M | `medium.en` | `medium` | ~5 GB | ~2x |
| large | 1550 M | N/A | `large` | ~10 GB | 1x |
The `.en` models for English-only applications tend to perform better, especially for the `tiny.en` and `base.en` models. We observed that the difference becomes less significant for the `small.en` and `medium.en` models.
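Any name from the table can be passed to `whisper.load_model()`. As a short illustration (the choice of `base` here is arbitrary), the English-only and multilingual variants of the same size are loaded like this:

```python
import whisper

# English-only variant; tends to perform better for English audio at this size
model_en = whisper.load_model("base.en")

# multilingual variant of the same size, usable for all supported languages
model_multilingual = whisper.load_model("base")
```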
Whisper's performance varies widely depending on the language. The figure below shows a performance breakdown of the `large-v3` and `large-v2` models by language, using WERs (word error rates) or CERs (character error rates, shown in *italics*) evaluated on the Common Voice 15 and Fleurs datasets. Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of [the paper](https://arxiv.org/abs/2212.04356), as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.
![WER breakdown by language](https://github.com/openai/whisper/assets/266841/f4619d66-1058-4005-8f67-a9d811b77c62)
## Command-line usage
The following command will transcribe speech in audio files, using the `medium` model:
    whisper audio.flac audio.mp3 audio.wav --model medium
The default setting (which selects the `small` model) works well for transcribing English. To transcribe an audio file containing non-English speech, you can specify the language using the `--language` option:
    whisper japanese.wav --language Japanese
Adding `--task translate` will translate the speech into English:
    whisper japanese.wav --language Japanese --task translate
Run the following to view all available options:
    whisper --help
See [tokenizer.py](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py) for the list of all available languages.
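The same list is also available programmatically via the `LANGUAGES` mapping in `whisper.tokenizer`; the short sketch below prints every language code and name (codes like `ja` or names like `Japanese` are both accepted by `--language`):

```python
from whisper.tokenizer import LANGUAGES

# LANGUAGES maps ISO language codes to lowercase language names, e.g. "ja" -> "japanese"
for code, name in sorted(LANGUAGES.items()):
    print(f"{code}: {name}")
```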
## Python usage
Transcription can also be performed within Python:
```python
import whisper
model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])
```
Internally, the `transcribe()` method reads the entire file and processes the audio with a sliding 30-second window, performing autoregressive sequence-to-sequence predictions on each window.
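The keyword arguments of `transcribe()` mirror the command-line options. The sketch below is illustrative (the file name is a placeholder): it fixes the language and task, disables fp16 for CPU use, and prints segment-level timestamps from the returned dictionary.

```python
import whisper

model = whisper.load_model("medium")

# language/task mirror the --language/--task command-line options;
# fp16=False avoids the half-precision warning when running on CPU
result = model.transcribe(
    "japanese.wav", language="Japanese", task="translate", fp16=False
)

# result["segments"] lists the decoded segments with start/end times in seconds
for segment in result["segments"]:
    print(f"[{segment['start']:7.2f} -> {segment['end']:7.2f}] {segment['text']}")
```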
Below is an example usage of `whisper.detect_language()` and `whisper.decode()`, which provide lower-level access to the model.
```python
import whisper
model = whisper.load_model("base")
# load audio and pad/trim it to fit 30 seconds
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)
# make log-Mel spectrogram and move to the same device as the model
mel = whisper.log_mel_spectrogram(audio).to(model.device)
# detect the spoken language
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")
# decode the audio
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)
# print the recognized text
print(result.text)
```
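`DecodingOptions` exposes the decoding parameters directly. Continuing the example above (the values shown are illustrative), the language and task can be fixed instead of detected, and fp16 can be disabled when decoding on CPU:

```python
# fix the language and task instead of relying on detection,
# and disable half precision when decoding on CPU
options = whisper.DecodingOptions(language="en", task="transcribe", fp16=False)
result = whisper.decode(model, mel, options)
print(result.text)
```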
## More examples
Please use the [🙌 Show and tell](https://github.com/openai/whisper/discussions/categories/show-and-tell) category in Discussions for sharing more example usages of Whisper and third-party extensions such as web demos, integrations with other tools, ports for different platforms, etc.
## License
Whisper's code and model weights are released under the MIT License. See [LICENSE](https://github.com/openai/whisper/blob/main/LICENSE) for further details.