[![CI](https://github.com/SYSTRAN/faster-whisper/workflows/CI/badge.svg)](https://github.com/SYSTRAN/faster-whisper/actions?query=workflow%3ACI) [![PyPI version](https://badge.fury.io/py/faster-whisper.svg)](https://badge.fury.io/py/faster-whisper)
# Faster Whisper transcription with CTranslate2
**faster-whisper** is a reimplementation of OpenAI's Whisper model using [CTranslate2](https://github.com/OpenNMT/CTranslate2/), which is a fast inference engine for Transformer models.
This implementation is up to 4 times faster than [openai/whisper](https://github.com/openai/whisper) for the same accuracy while using less memory. The efficiency can be further improved with 8-bit quantization on both CPU and GPU.
## Benchmark
### Whisper
For reference, here's the time and memory usage required to transcribe [**13 minutes**](https://www.youtube.com/watch?v=0u7tTptBo9I) of audio using different implementations:
* [openai/whisper](https://github.com/openai/whisper)@[6dea21fd](https://github.com/openai/whisper/commit/6dea21fd7f7253bfe450f1e2512a0fe47ee2d258)
* [whisper.cpp](https://github.com/ggerganov/whisper.cpp)@[3b010f9](https://github.com/ggerganov/whisper.cpp/commit/3b010f9bed9a6068609e9faf52383aea792b0362)
* [faster-whisper](https://github.com/SYSTRAN/faster-whisper)@[cce6b53e](https://github.com/SYSTRAN/faster-whisper/commit/cce6b53e4554f71172dad188c45f10fb100f6e3e)
### Large-v2 model on GPU
| Implementation | Precision | Beam size | Time | Max. GPU memory | Max. CPU memory |
| --- | --- | --- | --- | --- | --- |
| openai/whisper | fp16 | 5 | 4m30s | 11325MB | 9439MB |
| faster-whisper | fp16 | 5 | 54s | 4755MB | 3244MB |
| faster-whisper | int8 | 5 | 59s | 3091MB | 3117MB |
*Executed with CUDA 11.7.1 on an NVIDIA Tesla V100S.*
### Small model on CPU
| Implementation | Precision | Beam size | Time | Max. memory |
| --- | --- | --- | --- | --- |
| openai/whisper | fp32 | 5 | 10m31s | 3101MB |
| whisper.cpp | fp32 | 5 | 17m42s | 1581MB |
| whisper.cpp | fp16 | 5 | 12m39s | 873MB |
| faster-whisper | fp32 | 5 | 2m44s | 1675MB |
| faster-whisper | int8 | 5 | 2m04s | 995MB |
*Executed with 8 threads on an Intel(R) Xeon(R) Gold 6226R.*
### Distil-whisper
| Implementation | Precision | Beam size | Time | Gigaspeech WER |
| --- | --- | --- | --- | --- |
| distil-whisper/distil-large-v2 | fp16 | 4 | - | 10.36 |
| [faster-distil-large-v2](https://huggingface.co/Systran/faster-distil-whisper-large-v2) | fp16 | 5 | - | 10.28 |
| distil-whisper/distil-medium.en | fp16 | 4 | - | 11.21 |
| [faster-distil-medium.en](https://huggingface.co/Systran/faster-distil-whisper-medium.en) | fp16 | 5 | - | 11.21 |
*Executed with CUDA 11.4 on an NVIDIA 3090.*
<details>
<summary>testing details (click to expand)</summary>
For `distil-whisper/distil-large-v2`, the WER is measured with the code sample from the [model card's evaluation section](https://huggingface.co/distil-whisper/distil-large-v2#evaluation). For `faster-distil-whisper`, the WER is measured with the following settings:
```python
from faster_whisper import WhisperModel
model_size = "distil-large-v2"
# model_size = "distil-medium.en"
# Run on GPU with FP16
model = WhisperModel(model_size, device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5, language="en")
```
</details>
## Requirements
* Python 3.8 or greater
Unlike openai-whisper, FFmpeg does **not** need to be installed on the system. The audio is decoded with the Python library [PyAV](https://github.com/PyAV-Org/PyAV) which bundles the FFmpeg libraries in its package.
### GPU
GPU execution requires the following NVIDIA libraries to be installed:
* [cuBLAS for CUDA 12](https://developer.nvidia.com/cublas)
* [cuDNN 8 for CUDA 12](https://developer.nvidia.com/cudnn)
**Note**: The latest versions of `ctranslate2` only support CUDA 12. For CUDA 11, the current workaround is to downgrade to version `3.24.0` of `ctranslate2` (this can be done with `pip install --force-reinstall ctranslate2==3.24.0` or by pinning the version in a `requirements.txt`).
There are multiple ways to install the NVIDIA libraries mentioned above. The recommended way is described in the official NVIDIA documentation, but we also suggest other installation methods below.
<details>
<summary>Other installation methods (click to expand)</summary>
**Note:** For all these methods below, keep in mind the above note regarding CUDA versions. Depending on your setup, you may need to install the _CUDA 11_ versions of libraries that correspond to the CUDA 12 libraries listed in the instructions below.
#### Use Docker
The libraries (cuBLAS, cuDNN) are installed in these official NVIDIA CUDA Docker images: `nvidia/cuda:12.0.0-runtime-ubuntu20.04` or `nvidia/cuda:12.0.0-runtime-ubuntu22.04`.
#### Install with `pip` (Linux only)
On Linux these libraries can be installed with `pip`. Note that `LD_LIBRARY_PATH` must be set before launching Python.
```bash
pip install nvidia-cublas-cu12 nvidia-cudnn-cu12
export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
```
**Note**: Version 9+ of `nvidia-cudnn-cu12` appears to cause issues due to its reliance on cuDNN 9 (Faster-Whisper does not currently support cuDNN 9). Ensure your version of the Python package is for cuDNN 8.
#### Download the libraries from Purfview's repository (Windows & Linux)
Purfview's [whisper-standalone-win](https://github.com/Purfview/whisper-standalone-win) provides the required NVIDIA libraries for Windows & Linux in a [single archive](https://github.com/Purfview/whisper-standalone-win/releases/tag/libs). Decompress the archive and place the libraries in a directory included in the `PATH`.
</details>
## Installation
The module can be installed from [PyPI](https://pypi.org/project/faster-whisper/):
```bash
pip install faster-whisper
```
<details>
<summary>Other installation methods (click to expand)</summary>
### Install the master branch
```bash
pip install --force-reinstall "faster-whisper @ https://github.com/SYSTRAN/faster-whisper/archive/refs/heads/master.tar.gz"
```
### Install a specific commit
```bash
pip install --force-reinstall "faster-whisper @ https://github.com/SYSTRAN/faster-whisper/archive/a4f1cc8f11433e454c3934442b5e1a4ed5e865c3.tar.gz"
```
</details>
## Usage
### Faster-whisper
```python
from faster_whisper import WhisperModel
model_size = "large-v3"
# Run on GPU with FP16
model = WhisperModel(model_size, device="cuda", compute_type="float16")
# or run on GPU with INT8
# model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
# model = WhisperModel(model_size, device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.mp3", beam_size=5)
print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
**Warning:** `segments` is a *generator*, so the transcription only starts when you iterate over it. The transcription can be run to completion by gathering the segments in a list or iterating over them in a `for` loop:
```python
segments, _ = model.transcribe("audio.mp3")
segments = list(segments) # The transcription will actually run here.
```
### Faster Distil-Whisper
The Distil-Whisper checkpoints are compatible with the Faster-Whisper package. In particular, the latest [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3)
checkpoint is intrinsically designed to work with the Faster-Whisper transcription algorithm. The following code snippet
demonstrates how to run inference with distil-large-v3 on a specified audio file:
```python
from faster_whisper import WhisperModel
model_size = "distil-large-v3"
model = WhisperModel(model_size, device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5, language="en", condition_on_previous_text=False)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
For more information about the distil-large-v3 model, refer to the original [model card](https://huggingface.co/distil-whisper/distil-large-v3).
### Word-level timestamps
```python
segments, _ = model.transcribe("audio.mp3", word_timestamps=True)
for segment in segments:
    for word in segment.words:
        print("[%.2fs -> %.2fs] %s" % (word.start, word.end, word.word))
```
### VAD filter
The library integrates the [Silero VAD](https://github.com/snakers4/silero-vad) model to filter out parts of the audio without speech:
```python
segments, _ = model.transcribe("audio.mp3", vad_filter=True)
```
The default behavior is conservative and only removes silence longer than 2 seconds. See the available VAD parameters and default values in the [source code](https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/vad.py). They can be customized with the dictionary argument `vad_parameters`:
```python
segments, _ = model.transcribe(
"audio.mp3",
vad_filter=True,
vad_parameters=dict(min_silence_duration_ms=500),
)
```
### Logging
The library logging level can be configured like this:
```python
import logging
logging.basicConfig()
logging.getLogger("faster_whisper").setLevel(logging.DEBUG)
```
### Going further
See more model and transcription options in the [`WhisperModel`](https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/transcribe.py) class implementation.
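As a quick illustration, here is a sketch of a few options that are commonly tuned (the audio path and prompt below are placeholders; refer to `transcribe` for the full list and default values):
```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

# A few commonly tuned options; see WhisperModel.transcribe for the full list.
segments, info = model.transcribe(
    "audio.mp3",                       # placeholder audio file
    language="en",                     # skip language detection when the language is known
    task="transcribe",                 # or "translate" to translate into English
    beam_size=5,
    temperature=0.0,                   # decode without temperature fallback
    condition_on_previous_text=False,  # can reduce repetition on long recordings
    initial_prompt="Vocabulary: CTranslate2, Whisper.",  # hypothetical domain prompt
)

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```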
## Community integrations
Here is a non-exhaustive list of open-source projects using faster-whisper. Feel free to add your project to the list!
* [faster-whisper-server](https://github.com/fedirz/faster-whisper-server) is an OpenAI-compatible server using `faster-whisper`. It's easily deployable with Docker, works with the OpenAI SDKs/CLI, and supports streaming and live transcription.
* [WhisperX](https://github.com/m-bain/whisperX) is an award-winning Python library that offers speaker diarization and accurate word-level timestamps using wav2vec2 alignment.
* [whisper-ctranslate2](https://github.com/Softcatala/whisper-ctranslate2) is a command line client based on faster-whisper and compatible with the original client from openai/whisper.
* [whisper-diarization](https://github.com/MahmoudAshraf97/whisper-diarization) is a speaker diarization tool based on faster-whisper and NVIDIA NeMo.
* [whisper-standalone-win](https://github.com/Purfview/whisper-standalone-win) provides standalone CLI executables of faster-whisper for Windows, Linux & macOS.
* [asr-sd-pipeline](https://github.com/hedrergudene/asr-sd-pipeline) provides a scalable, modular, end-to-end multi-speaker speech-to-text solution implemented using AzureML pipelines.
* [Open-Lyrics](https://github.com/zh-plus/Open-Lyrics) is a Python library that transcribes voice files using faster-whisper, and translates/polishes the resulting text into `.lrc` files in the desired language using OpenAI-GPT.
* [wscribe](https://github.com/geekodour/wscribe) is a flexible transcript generation tool supporting faster-whisper. It can export word-level transcripts, which can then be edited with [wscribe-editor](https://github.com/geekodour/wscribe-editor).
* [aTrain](https://github.com/BANDAS-Center/aTrain) is a graphical user interface implementation of faster-whisper developed at the BANDAS-Center at the University of Graz for transcription and diarization in Windows ([Windows Store App](https://apps.microsoft.com/detail/atrain/9N15Q44SZNS2)) and Linux.
* [Whisper-Streaming](https://github.com/ufal/whisper_streaming) implements real-time mode for offline Whisper-like speech-to-text models with faster-whisper as the most recommended back-end. It implements a streaming policy with self-adaptive latency based on the actual source complexity, and demonstrates the state of the art.
* [WhisperLive](https://github.com/collabora/WhisperLive) is a nearly-live implementation of OpenAI's Whisper which uses faster-whisper as the backend to transcribe audio in real-time.
* [Faster-Whisper-Transcriber](https://github.com/BBC-Esq/ctranslate2-faster-whisper-transcriber) is a simple but reliable voice transcriber that provides a user-friendly interface.
## Model conversion
When loading a model from its size such as `WhisperModel("large-v3")`, the corresponding CTranslate2 model is automatically downloaded from the [Hugging Face Hub](https://huggingface.co/Systran).
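If you want control over where the converted weights are stored, here is a minimal sketch (assuming the `download_root` and `local_files_only` arguments of `WhisperModel`):
```python
from faster_whisper import WhisperModel

# Download (or reuse) the CTranslate2 "large-v3" weights under a custom cache directory.
model = WhisperModel(
    "large-v3",
    device="cuda",
    compute_type="float16",
    download_root="./models",   # hypothetical cache directory for the converted weights
    local_files_only=False,     # set to True to use the cache only and never download
)
```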
We also provide a script to convert any Whisper models compatible with the Transformers library. They could be the original OpenAI models or user fine-tuned models.
For example, the command below converts the [original "large-v3" Whisper model](https://huggingface.co/openai/whisper-large-v3) and saves the weights in FP16:
```bash
pip install "transformers[torch]>=4.23"

ct2-transformers-converter --model openai/whisper-large-v3 --output_dir whisper-large-v3-ct2 \
    --copy_files tokenizer.json preprocessor_config.json --quantization float16
```
* The option `--model` accepts a model name on the Hub or a path to a model directory.
* If the option `--copy_files tokenizer.json` is not used, the tokenizer configuration is automatically downloaded when the model is loaded later.
Models can also be converted from the code. See the [conversion API](https://opennmt.net/CTranslate2/python/ctranslate2.converters.TransformersConverter.html).
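For example, here is a minimal sketch that mirrors the CLI command above with the converter class:
```python
import ctranslate2

# Convert the Transformers "large-v3" checkpoint to CTranslate2 format with FP16 weights.
converter = ctranslate2.converters.TransformersConverter(
    "openai/whisper-large-v3",
    copy_files=["tokenizer.json", "preprocessor_config.json"],
)
converter.convert("whisper-large-v3-ct2", quantization="float16")
```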
### Load a converted model
1. Directly load the model from a local directory:
```python
model = faster_whisper.WhisperModel("whisper-large-v3-ct2")
```
2. [Upload your model to the Hugging Face Hub](https://huggingface.co/docs/transformers/model_sharing#upload-with-the-web-interface) and load it from its name:
```python
model = faster_whisper.WhisperModel("username/whisper-large-v3-ct2")
```
## Comparing performance against other implementations
If you are comparing the performance against other Whisper implementations, you should make sure to run the comparison with similar settings. In particular:
* Verify that the same transcription options are used, especially the same beam size. For example, in openai/whisper, `model.transcribe` uses a default beam size of 1, but here we use a default beam size of 5.
* When running on CPU, make sure to set the same number of threads. Many frameworks will read the environment variable `OMP_NUM_THREADS`, which can be set when running your script:
```bash
OMP_NUM_THREADS=4 python3 my_script.py
```
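In addition to `OMP_NUM_THREADS`, the number of CPU threads can also be set directly on the model; a small sketch (assuming the `cpu_threads` argument of `WhisperModel`):
```python
from faster_whisper import WhisperModel

# Pin the number of CPU threads used by CTranslate2 for a reproducible CPU benchmark.
model = WhisperModel("small", device="cpu", compute_type="int8", cpu_threads=4)

segments, _ = model.transcribe("audio.mp3", beam_size=5)
segments = list(segments)  # run the transcription to completion
```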