| Field | Value |
|-------|-------|
| Name | coqui-tts |
| Version | 0.25.3 |
| Summary | Deep learning for Text to Speech. |
| upload_time | 2025-01-16 11:01:37 |
| home_page | None |
| author | None |
| maintainer | None |
| docs_url | None |
| requires_python | <3.13,>=3.9 |
| license | MPL-2.0 |
| keywords | None |
# <img src="https://raw.githubusercontent.com/idiap/coqui-ai-TTS/main/images/coqui-log-green-TTS.png" height="56"/>
**🐸 Coqui TTS is a library for advanced Text-to-Speech generation.**

🚀 Pretrained models in 1100+ languages.

🛠️ Tools for training new models and fine-tuning existing models in any language.

📚 Utilities for dataset analysis and curation.
[![Discord](https://img.shields.io/discord/1037326658807533628?color=%239B59B6&label=chat%20on%20discord)](https://discord.gg/5eXr5seRrv)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/coqui-tts)](https://pypi.org/project/coqui-tts/)
[![License](<https://img.shields.io/badge/License-MPL%202.0-brightgreen.svg>)](https://opensource.org/licenses/MPL-2.0)
[![PyPI version](https://badge.fury.io/py/coqui-tts.svg)](https://pypi.org/project/coqui-tts/)
[![Downloads](https://pepy.tech/badge/coqui-tts)](https://pepy.tech/project/coqui-tts)
[![DOI](https://zenodo.org/badge/265612440.svg)](https://zenodo.org/badge/latestdoi/265612440)
[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/tests.yml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/tests.yml)
[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/docker.yaml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/docker.yaml)
[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/style_check.yml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/style_check.yml)
[![Docs](<https://readthedocs.org/projects/coqui-tts/badge/?version=latest&style=plastic>)](https://coqui-tts.readthedocs.io/en/latest/)
## 📣 News
- **Fork of the [original, unmaintained repository](https://github.com/coqui-ai/TTS). New PyPI package: [coqui-tts](https://pypi.org/project/coqui-tts)**
- 0.25.0: [OpenVoice](https://github.com/myshell-ai/OpenVoice) models now available for voice conversion.
- 0.24.2: Prebuilt wheels are now also published for Mac and Windows (in addition to Linux as before) for easier installation across platforms.
- 0.20.0: XTTSv2 is here with 17 languages and better performance across the board. XTTS can stream with <200ms latency.
- 0.19.0: XTTS fine-tuning code is out. Check the [example recipes](https://github.com/idiap/coqui-ai-TTS/tree/dev/recipes/ljspeech).
- 0.14.1: You can use [Fairseq models in ~1100 languages](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.
## 💬 Where to ask questions
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.
| Type | Platforms |
| -------------------------------------------- | ----------------------------------- |
| 🚨 **Bug Reports, Feature Requests & Ideas** | [GitHub Issue Tracker] |
| 👩‍💻 **Usage Questions** | [GitHub Discussions] |
| 🗯 **General Discussion** | [GitHub Discussions] or [Discord] |
[github issue tracker]: https://github.com/idiap/coqui-ai-TTS/issues
[github discussions]: https://github.com/idiap/coqui-ai-TTS/discussions
[discord]: https://discord.gg/5eXr5seRrv
[Tutorials and Examples]: https://github.com/coqui-ai/TTS/wiki/TTS-Notebooks-and-Tutorials
The [issues](https://github.com/coqui-ai/TTS/issues) and
[discussions](https://github.com/coqui-ai/TTS/discussions) in the original
repository are also still a useful source of information.
## 🔗 Links and Resources
| Type | Links |
| ------------------------------- | --------------------------------------- |
| 💼 **Documentation** | [ReadTheDocs](https://coqui-tts.readthedocs.io/en/latest/) |
| 💾 **Installation** | [TTS/README.md](https://github.com/idiap/coqui-ai-TTS/tree/dev#installation) |
| 👩‍💻 **Contributing** | [CONTRIBUTING.md](https://github.com/idiap/coqui-ai-TTS/blob/main/CONTRIBUTING.md) |
| 🚀 **Released Models** | [Standard models](https://github.com/idiap/coqui-ai-TTS/blob/dev/TTS/.models.json) and [Fairseq models in ~1100 languages](https://github.com/idiap/coqui-ai-TTS#example-text-to-speech-using-fairseq-models-in-1100-languages-) |
## Features
- High-performance text-to-speech and voice conversion models; see the list below.
- Fast and efficient model training with detailed training logs in the terminal and on TensorBoard.
- Support for multi-speaker and multilingual TTS.
- Released and ready-to-use models.
- Tools to curate TTS datasets under `dataset_analysis/`.
- Command line and Python APIs to use and test your models.
- Modular (but not too much) code base enabling easy implementation of new ideas.
## Model Implementations
### Spectrogram models
- [Tacotron](https://arxiv.org/abs/1703.10135), [Tacotron2](https://arxiv.org/abs/1712.05884)
- [Glow-TTS](https://arxiv.org/abs/2005.11129), [SC-GlowTTS](https://arxiv.org/abs/2104.05557)
- [Speedy-Speech](https://arxiv.org/abs/2008.03802)
- [Align-TTS](https://arxiv.org/abs/2003.01950)
- [FastPitch](https://arxiv.org/pdf/2006.06873.pdf)
- [FastSpeech](https://arxiv.org/abs/1905.09263), [FastSpeech2](https://arxiv.org/abs/2006.04558)
- [Capacitron](https://arxiv.org/abs/1906.03402)
- [OverFlow](https://arxiv.org/abs/2211.06892)
- [Neural HMM TTS](https://arxiv.org/abs/2108.13320)
- [Delightful TTS](https://arxiv.org/abs/2110.12612)
### End-to-End Models
- [XTTS](https://arxiv.org/abs/2406.04904)
- [VITS](https://arxiv.org/pdf/2106.06103)
- 🐸[YourTTS](https://arxiv.org/abs/2112.02418)
- 🐢[Tortoise](https://github.com/neonbjb/tortoise-tts)
- 🐶[Bark](https://github.com/suno-ai/bark)
### Vocoders
- [MelGAN](https://arxiv.org/abs/1910.06711)
- [MultiBandMelGAN](https://arxiv.org/abs/2005.05106)
- [ParallelWaveGAN](https://arxiv.org/abs/1910.11480)
- [GAN-TTS discriminators](https://arxiv.org/abs/1909.11646)
- [WaveRNN](https://github.com/fatchord/WaveRNN/)
- [WaveGrad](https://arxiv.org/abs/2009.00713)
- [HiFiGAN](https://arxiv.org/abs/2010.05646)
- [UnivNet](https://arxiv.org/abs/2106.07889)
### Voice Conversion
- [FreeVC](https://arxiv.org/abs/2210.15418)
- [kNN-VC](https://doi.org/10.21437/Interspeech.2023-419)
- [OpenVoice](https://arxiv.org/abs/2312.01479)
### Others
- Attention methods: [Guided Attention](https://arxiv.org/abs/1710.08969),
[Forward Backward Decoding](https://arxiv.org/abs/1907.09006),
[Graves Attention](https://arxiv.org/abs/1910.10288),
[Double Decoder Consistency](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/),
[Dynamic Convolutional Attention](https://arxiv.org/pdf/1910.10288.pdf),
[Alignment Network](https://arxiv.org/abs/2108.10447)
- Speaker encoders: [GE2E](https://arxiv.org/abs/1710.10467),
[Angular Loss](https://arxiv.org/pdf/2003.11982.pdf)
You can also help us implement more models.
<!-- start installation -->
## Installation
🐸TTS is tested on Ubuntu 24.04 with **python >= 3.9, < 3.13**, but should also
work on Mac and Windows.

If you are only interested in [synthesizing speech](https://coqui-tts.readthedocs.io/en/latest/inference.html) with the pretrained 🐸TTS models, installing from PyPI is the easiest option.
```bash
pip install coqui-tts
```
If you plan to code or train models, clone 🐸TTS and install it locally.
```bash
git clone https://github.com/idiap/coqui-ai-TTS
cd coqui-ai-TTS
pip install -e .
```
### Optional dependencies
The following extras allow the installation of optional dependencies:
| Name | Description |
|------|-------------|
| `all` | All optional dependencies |
| `notebooks` | Dependencies only used in notebooks |
| `server` | Dependencies to run the TTS server |
| `bn` | Bangla G2P |
| `ja` | Japanese G2P |
| `ko` | Korean G2P |
| `zh` | Chinese G2P |
| `languages` | All language-specific dependencies |
You can install extras with one of the following commands:
```bash
pip install coqui-tts[server,ja]
pip install -e .[server,ja]
```
### Platforms
If you are on Ubuntu (or another Debian-based system), you can also install with the following commands.
```bash
make system-deps
make install
```
<!-- end installation -->
## Docker Image
You can also try out Coqui TTS without installing it by using the Docker image.
Simply run the following commands and you will be able to run TTS:
```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/idiap/coqui-tts-cpu
python3 TTS/server/server.py --list_models # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server
```
You can then enjoy the TTS server [here](http://[::1]:5002/).
More details about the Docker images (such as GPU support) can be found
[here](https://coqui-tts.readthedocs.io/en/latest/docker_images.html).
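As a minimal sketch of talking to the running demo server from Python, the snippet below assumes the server exposes a `GET /api/tts` endpoint that takes a `text` query parameter and returns WAV bytes; check the server documentation if your version differs.

```python
# Hypothetical client for the demo server started above.
# Assumption: GET /api/tts?text=... returns WAV audio bytes.
import requests

response = requests.get(
    "http://localhost:5002/api/tts",
    params={"text": "Hello from the Coqui TTS demo server!"},
    timeout=60,
)
response.raise_for_status()

with open("server_output.wav", "wb") as f:
    f.write(response.content)  # save the synthesized audio to disk
```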
## Synthesizing speech by 🐸TTS
<!-- start inference -->
### 🐍 Python API
#### Multi-speaker and multi-lingual model
```python
import torch
from TTS.api import TTS
# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"
# List available 🐸TTS models
print(TTS().list_models())
# Initialize TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
# List speakers
print(tts.speakers)
# Run TTS
# ❗ XTTS supports both, but many models allow only one of the `speaker` and
# `speaker_wav` arguments
# TTS with list of amplitude values as output, clone the voice from `speaker_wav`
wav = tts.tts(
text="Hello world!",
speaker_wav="my/cloning/audio.wav",
language="en"
)
# TTS to a file, use a preset speaker
tts.tts_to_file(
text="Hello world!",
speaker="Craig Gutsy",
language="en",
file_path="output.wav"
)
```
#### Single speaker model
```python
# Initialize TTS with the target model name
tts = TTS("tts_models/de/thorsten/tacotron2-DDC").to(device)
# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path="output.wav")
```
#### Voice conversion (VC)
Converting the voice in `source_wav` to the voice of `target_wav`:
```python
tts = TTS("voice_conversion_models/multilingual/vctk/freevc24").to("cuda")
tts.voice_conversion_to_file(
source_wav="my/source.wav",
target_wav="my/target.wav",
file_path="output.wav"
)
```
Other available voice conversion models:
- `voice_conversion_models/multilingual/multi-dataset/knnvc`
- `voice_conversion_models/multilingual/multi-dataset/openvoice_v1`
- `voice_conversion_models/multilingual/multi-dataset/openvoice_v2`
For more details, see the
[documentation](https://coqui-tts.readthedocs.io/en/latest/vc.html).
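As a quick sketch, the kNN-VC model listed above can be swapped in; this assumes it accepts the same `voice_conversion_to_file()` call as FreeVC (the audio paths are placeholders):

```python
from TTS.api import TTS

# Sketch: same voice-conversion API as the FreeVC example above,
# assuming the kNN-VC model accepts identical arguments.
tts = TTS("voice_conversion_models/multilingual/multi-dataset/knnvc").to("cuda")
tts.voice_conversion_to_file(
    source_wav="my/source.wav",   # voice to convert
    target_wav="my/target.wav",   # voice to imitate
    file_path="knnvc_output.wav",
)
```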
#### Voice cloning by combining a single-speaker TTS model with the default VC model
This way, you can clone voices using any model in 🐸TTS. The FreeVC model is
used for voice conversion after synthesizing speech.
```python
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
"Wie sage ich auf Italienisch, dass ich dich liebe?",
speaker_wav="target/speaker.wav",
file_path="output.wav"
)
```
#### TTS using Fairseq models in ~1100 languages 🤯
For Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.
You can find the language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html)
and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).
```python
# TTS with fairseq models
api = TTS("tts_models/deu/fairseq/vits")
api.tts_to_file(
"Wie sage ich auf Italienisch, dass ich dich liebe?",
file_path="output.wav"
)
```
### Command-line interface `tts`
<!-- begin-tts-readme -->
Synthesize speech on the command line.
You can either use your trained model or choose a model from the provided list.
- List provided models:
```sh
tts --list_models
```
- Get model information. Use the names obtained from `--list_models`.
```sh
tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```
For example:
```sh
tts --model_info_by_name tts_models/tr/common-voice/glow-tts
tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
```
#### Single speaker models
- Run TTS with the default model (`tts_models/en/ljspeech/tacotron2-DDC`):
```sh
tts --text "Text for TTS" --out_path output/path/speech.wav
```
- Run TTS and pipe out the generated TTS wav file data:
```sh
tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
```
- Run a TTS model with its default vocoder model:
```sh
tts --text "Text for TTS" \
--model_name "<model_type>/<language>/<dataset>/<model_name>" \
--out_path output/path/speech.wav
```
For example:
```sh
tts --text "Text for TTS" \
--model_name "tts_models/en/ljspeech/glow-tts" \
--out_path output/path/speech.wav
```
- Run with specific TTS and vocoder models from the list. Note that not every vocoder is compatible with every TTS model.
```sh
tts --text "Text for TTS" \
--model_name "<model_type>/<language>/<dataset>/<model_name>" \
--vocoder_name "<model_type>/<language>/<dataset>/<model_name>" \
--out_path output/path/speech.wav
```
For example:
```sh
tts --text "Text for TTS" \
--model_name "tts_models/en/ljspeech/glow-tts" \
--vocoder_name "vocoder_models/en/ljspeech/univnet" \
--out_path output/path/speech.wav
```
- Run your own TTS model (using the Griffin-Lim vocoder):
```sh
tts --text "Text for TTS" \
--model_path path/to/model.pth \
--config_path path/to/config.json \
--out_path output/path/speech.wav
```
- Run your own TTS and Vocoder models:
```sh
tts --text "Text for TTS" \
--model_path path/to/model.pth \
--config_path path/to/config.json \
--out_path output/path/speech.wav \
--vocoder_path path/to/vocoder.pth \
--vocoder_config_path path/to/vocoder_config.json
```
#### Multi-speaker models
- List the available speakers and choose a `<speaker_id>` among them:
```sh
tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
```
- Run the multi-speaker TTS model with the target speaker ID:
```sh
tts --text "Text for TTS." --out_path output/path/speech.wav \
--model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
```
- Run your own multi-speaker TTS model:
```sh
tts --text "Text for TTS" --out_path output/path/speech.wav \
--model_path path/to/model.pth --config_path path/to/config.json \
--speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
```
#### Voice conversion models
```sh
tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" \
--source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>
```
<!-- end-tts-readme -->
## Raw data

```json
{
"_id": null,
"home_page": null,
"name": "coqui-tts",
"maintainer": null,
"docs_url": null,
"requires_python": "<3.13,>=3.9",
"maintainer_email": "Enno Hermann <enno.hermann@gmail.com>",
"keywords": null,
"author": null,
"author_email": "Eren G\u00f6lge <egolge@coqui.ai>",
"download_url": "https://files.pythonhosted.org/packages/99/d2/df6eca958e06d1bbf3825a07f446b7ade6b303b05ae36af4374cfb163d31/coqui_tts-0.25.3.tar.gz",
"platform": null,
"description": "# <img src=\"https://raw.githubusercontent.com/idiap/coqui-ai-TTS/main/images/coqui-log-green-TTS.png\" height=\"56\"/>\n\n\n**\ud83d\udc38 Coqui TTS is a library for advanced Text-to-Speech generation.**\n\n\ud83d\ude80 Pretrained models in +1100 languages.\n\n\ud83d\udee0\ufe0f Tools for training new models and fine-tuning existing models in any language.\n\n\ud83d\udcda Utilities for dataset analysis and curation.\n\n[![Discord](https://img.shields.io/discord/1037326658807533628?color=%239B59B6&label=chat%20on%20discord)](https://discord.gg/5eXr5seRrv)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/coqui-tts)](https://pypi.org/project/coqui-tts/)\n[![License](<https://img.shields.io/badge/License-MPL%202.0-brightgreen.svg>)](https://opensource.org/licenses/MPL-2.0)\n[![PyPI version](https://badge.fury.io/py/coqui-tts.svg)](https://pypi.org/project/coqui-tts/)\n[![Downloads](https://pepy.tech/badge/coqui-tts)](https://pepy.tech/project/coqui-tts)\n[![DOI](https://zenodo.org/badge/265612440.svg)](https://zenodo.org/badge/latestdoi/265612440)\n[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/tests.yml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/tests.yml)\n[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/docker.yaml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/docker.yaml)\n[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/style_check.yml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/style_check.yml)\n[![Docs](<https://readthedocs.org/projects/coqui-tts/badge/?version=latest&style=plastic>)](https://coqui-tts.readthedocs.io/en/latest/)\n\n</div>\n\n## \ud83d\udce3 News\n- **Fork of the [original, unmaintained repository](https://github.com/coqui-ai/TTS). New PyPI package: [coqui-tts](https://pypi.org/project/coqui-tts)**\n- 0.25.0: [OpenVoice](https://github.com/myshell-ai/OpenVoice) models now available for voice conversion.\n- 0.24.2: Prebuilt wheels are now also published for Mac and Windows (in addition to Linux as before) for easier installation across platforms.\n- 0.20.0: XTTSv2 is here with 17 languages and better performance across the board. XTTS can stream with <200ms latency.\n- 0.19.0: XTTS fine-tuning code is out. Check the [example recipes](https://github.com/idiap/coqui-ai-TTS/tree/dev/recipes/ljspeech).\n- 0.14.1: You can use [Fairseq models in ~1100 languages](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with \ud83d\udc38TTS.\n\n## \ud83d\udcac Where to ask questions\nPlease use our dedicated channels for questions and discussion. 
Help is much more valuable if it's shared publicly so that more people can benefit from it.\n\n| Type | Platforms |\n| -------------------------------------------- | ----------------------------------- |\n| \ud83d\udea8 **Bug Reports, Feature Requests & Ideas** | [GitHub Issue Tracker] |\n| \ud83d\udc69\u200d\ud83d\udcbb **Usage Questions** | [GitHub Discussions] |\n| \ud83d\uddef **General Discussion** | [GitHub Discussions] or [Discord] |\n\n[github issue tracker]: https://github.com/idiap/coqui-ai-TTS/issues\n[github discussions]: https://github.com/idiap/coqui-ai-TTS/discussions\n[discord]: https://discord.gg/5eXr5seRrv\n[Tutorials and Examples]: https://github.com/coqui-ai/TTS/wiki/TTS-Notebooks-and-Tutorials\n\nThe [issues](https://github.com/coqui-ai/TTS/issues) and\n[discussions](https://github.com/coqui-ai/TTS/discussions) in the original\nrepository are also still a useful source of information.\n\n\n## \ud83d\udd17 Links and Resources\n| Type | Links |\n| ------------------------------- | --------------------------------------- |\n| \ud83d\udcbc **Documentation** | [ReadTheDocs](https://coqui-tts.readthedocs.io/en/latest/)\n| \ud83d\udcbe **Installation** | [TTS/README.md](https://github.com/idiap/coqui-ai-TTS/tree/dev#installation)|\n| \ud83d\udc69\u200d\ud83d\udcbb **Contributing** | [CONTRIBUTING.md](https://github.com/idiap/coqui-ai-TTS/blob/main/CONTRIBUTING.md)|\n| \ud83d\ude80 **Released Models** | [Standard models](https://github.com/idiap/coqui-ai-TTS/blob/dev/TTS/.models.json) and [Fairseq models in ~1100 languages](https://github.com/idiap/coqui-ai-TTS#example-text-to-speech-using-fairseq-models-in-1100-languages-)|\n\n## Features\n- High-performance text-to-speech and voice conversion models, see list below.\n- Fast and efficient model training with detailed training logs on the terminal and Tensorboard.\n- Support for multi-speaker and multilingual TTS.\n- Released and ready-to-use models.\n- Tools to curate TTS datasets under ```dataset_analysis/```.\n- Command line and Python APIs to use and test your models.\n- Modular (but not too much) code base enabling easy implementation of new ideas.\n\n## Model Implementations\n### Spectrogram models\n- [Tacotron](https://arxiv.org/abs/1703.10135), [Tacotron2](https://arxiv.org/abs/1712.05884)\n- [Glow-TTS](https://arxiv.org/abs/2005.11129), [SC-GlowTTS](https://arxiv.org/abs/2104.05557)\n- [Speedy-Speech](https://arxiv.org/abs/2008.03802)\n- [Align-TTS](https://arxiv.org/abs/2003.01950)\n- [FastPitch](https://arxiv.org/pdf/2006.06873.pdf)\n- [FastSpeech](https://arxiv.org/abs/1905.09263), [FastSpeech2](https://arxiv.org/abs/2006.04558)\n- [Capacitron](https://arxiv.org/abs/1906.03402)\n- [OverFlow](https://arxiv.org/abs/2211.06892)\n- [Neural HMM TTS](https://arxiv.org/abs/2108.13320)\n- [Delightful TTS](https://arxiv.org/abs/2110.12612)\n\n### End-to-End Models\n- [XTTS](https://arxiv.org/abs/2406.04904)\n- [VITS](https://arxiv.org/pdf/2106.06103)\n- \ud83d\udc38[YourTTS](https://arxiv.org/abs/2112.02418)\n- \ud83d\udc22[Tortoise](https://github.com/neonbjb/tortoise-tts)\n- \ud83d\udc36[Bark](https://github.com/suno-ai/bark)\n\n### Vocoders\n- [MelGAN](https://arxiv.org/abs/1910.06711)\n- [MultiBandMelGAN](https://arxiv.org/abs/2005.05106)\n- [ParallelWaveGAN](https://arxiv.org/abs/1910.11480)\n- [GAN-TTS discriminators](https://arxiv.org/abs/1909.11646)\n- [WaveRNN](https://github.com/fatchord/WaveRNN/)\n- [WaveGrad](https://arxiv.org/abs/2009.00713)\n- [HiFiGAN](https://arxiv.org/abs/2010.05646)\n- 
[UnivNet](https://arxiv.org/abs/2106.07889)\n\n### Voice Conversion\n- [FreeVC](https://arxiv.org/abs/2210.15418)\n- [kNN-VC](https://doi.org/10.21437/Interspeech.2023-419)\n- [OpenVoice](https://arxiv.org/abs/2312.01479)\n\n### Others\n- Attention methods: [Guided Attention](https://arxiv.org/abs/1710.08969),\n [Forward Backward Decoding](https://arxiv.org/abs/1907.09006),\n [Graves Attention](https://arxiv.org/abs/1910.10288),\n [Double Decoder Consistency](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/),\n [Dynamic Convolutional Attention](https://arxiv.org/pdf/1910.10288.pdf),\n [Alignment Network](https://arxiv.org/abs/2108.10447)\n- Speaker encoders: [GE2E](https://arxiv.org/abs/1710.10467),\n [Angular Loss](https://arxiv.org/pdf/2003.11982.pdf)\n\nYou can also help us implement more models.\n\n<!-- start installation -->\n## Installation\n\n\ud83d\udc38TTS is tested on Ubuntu 24.04 with **python >= 3.9, < 3.13**, but should also\nwork on Mac and Windows.\n\nIf you are only interested in [synthesizing speech](https://coqui-tts.readthedocs.io/en/latest/inference.html) with the pretrained \ud83d\udc38TTS models, installing from PyPI is the easiest option.\n\n```bash\npip install coqui-tts\n```\n\nIf you plan to code or train models, clone \ud83d\udc38TTS and install it locally.\n\n```bash\ngit clone https://github.com/idiap/coqui-ai-TTS\ncd coqui-ai-TTS\npip install -e .\n```\n\n### Optional dependencies\n\nThe following extras allow the installation of optional dependencies:\n\n| Name | Description |\n|------|-------------|\n| `all` | All optional dependencies |\n| `notebooks` | Dependencies only used in notebooks |\n| `server` | Dependencies to run the TTS server |\n| `bn` | Bangla G2P |\n| `ja` | Japanese G2P |\n| `ko` | Korean G2P |\n| `zh` | Chinese G2P |\n| `languages` | All language-specific dependencies |\n\nYou can install extras with one of the following commands:\n\n```bash\npip install coqui-tts[server,ja]\npip install -e .[server,ja]\n```\n\n### Platforms\n\nIf you are on Ubuntu (Debian), you can also run the following commands for installation.\n\n```bash\nmake system-deps\nmake install\n```\n\n<!-- end installation -->\n\n## Docker Image\nYou can also try out Coqui TTS without installation with the docker image.\nSimply run the following command and you will be able to run TTS:\n\n```bash\ndocker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/idiap/coqui-tts-cpu\npython3 TTS/server/server.py --list_models #To get the list of available models\npython3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server\n```\n\nYou can then enjoy the TTS server [here](http://[::1]:5002/)\nMore details about the docker images (like GPU support) can be found\n[here](https://coqui-tts.readthedocs.io/en/latest/docker_images.html)\n\n\n## Synthesizing speech by \ud83d\udc38TTS\n<!-- start inference -->\n### \ud83d\udc0d Python API\n\n#### Multi-speaker and multi-lingual model\n\n```python\nimport torch\nfrom TTS.api import TTS\n\n# Get device\ndevice = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n\n# List available \ud83d\udc38TTS models\nprint(TTS().list_models())\n\n# Initialize TTS\ntts = TTS(\"tts_models/multilingual/multi-dataset/xtts_v2\").to(device)\n\n# List speakers\nprint(tts.speakers)\n\n# Run TTS\n# \u2757 XTTS supports both, but many models allow only one of the `speaker` and\n# `speaker_wav` arguments\n\n# TTS with list of amplitude values as output, clone the voice from `speaker_wav`\nwav = 
tts.tts(\n text=\"Hello world!\",\n speaker_wav=\"my/cloning/audio.wav\",\n language=\"en\"\n)\n\n# TTS to a file, use a preset speaker\ntts.tts_to_file(\n text=\"Hello world!\",\n speaker=\"Craig Gutsy\",\n language=\"en\",\n file_path=\"output.wav\"\n)\n```\n\n#### Single speaker model\n\n```python\n# Initialize TTS with the target model name\ntts = TTS(\"tts_models/de/thorsten/tacotron2-DDC\").to(device)\n\n# Run TTS\ntts.tts_to_file(text=\"Ich bin eine Testnachricht.\", file_path=OUTPUT_PATH)\n```\n\n#### Voice conversion (VC)\n\nConverting the voice in `source_wav` to the voice of `target_wav`:\n\n```python\ntts = TTS(\"voice_conversion_models/multilingual/vctk/freevc24\").to(\"cuda\")\ntts.voice_conversion_to_file(\n source_wav=\"my/source.wav\",\n target_wav=\"my/target.wav\",\n file_path=\"output.wav\"\n)\n```\n\nOther available voice conversion models:\n- `voice_conversion_models/multilingual/multi-dataset/knnvc`\n- `voice_conversion_models/multilingual/multi-dataset/openvoice_v1`\n- `voice_conversion_models/multilingual/multi-dataset/openvoice_v2`\n\nFor more details, see the\n[documentation](https://coqui-tts.readthedocs.io/en/latest/vc.html).\n\n#### Voice cloning by combining single speaker TTS model with the default VC model\n\nThis way, you can clone voices by using any model in \ud83d\udc38TTS. The FreeVC model is\nused for voice conversion after synthesizing speech.\n\n```python\n\ntts = TTS(\"tts_models/de/thorsten/tacotron2-DDC\")\ntts.tts_with_vc_to_file(\n \"Wie sage ich auf Italienisch, dass ich dich liebe?\",\n speaker_wav=\"target/speaker.wav\",\n file_path=\"output.wav\"\n)\n```\n\n#### TTS using Fairseq models in ~1100 languages \ud83e\udd2f\nFor Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.\nYou can find the language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html)\nand learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).\n\n```python\n# TTS with fairseq models\napi = TTS(\"tts_models/deu/fairseq/vits\")\napi.tts_to_file(\n \"Wie sage ich auf Italienisch, dass ich dich liebe?\",\n file_path=\"output.wav\"\n)\n```\n\n### Command-line interface `tts`\n\n<!-- begin-tts-readme -->\n\nSynthesize speech on the command line.\n\nYou can either use your trained model or choose a model from the provided list.\n\n- List provided models:\n\n ```sh\n tts --list_models\n ```\n\n- Get model information. Use the names obtained from `--list_models`.\n ```sh\n tts --model_info_by_name \"<model_type>/<language>/<dataset>/<model_name>\"\n ```\n For example:\n ```sh\n tts --model_info_by_name tts_models/tr/common-voice/glow-tts\n tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2\n ```\n\n#### Single speaker models\n\n- Run TTS with the default model (`tts_models/en/ljspeech/tacotron2-DDC`):\n\n ```sh\n tts --text \"Text for TTS\" --out_path output/path/speech.wav\n ```\n\n- Run TTS and pipe out the generated TTS wav file data:\n\n ```sh\n tts --text \"Text for TTS\" --pipe_out --out_path output/path/speech.wav | aplay\n ```\n\n- Run a TTS model with its default vocoder model:\n\n ```sh\n tts --text \"Text for TTS\" \\\n --model_name \"<model_type>/<language>/<dataset>/<model_name>\" \\\n --out_path output/path/speech.wav\n ```\n\n For example:\n\n ```sh\n tts --text \"Text for TTS\" \\\n --model_name \"tts_models/en/ljspeech/glow-tts\" \\\n --out_path output/path/speech.wav\n ```\n\n- Run with specific TTS and vocoder models from the list. 
Note that not every vocoder is compatible with every TTS model.\n\n ```sh\n tts --text \"Text for TTS\" \\\n --model_name \"<model_type>/<language>/<dataset>/<model_name>\" \\\n --vocoder_name \"<model_type>/<language>/<dataset>/<model_name>\" \\\n --out_path output/path/speech.wav\n ```\n\n For example:\n\n ```sh\n tts --text \"Text for TTS\" \\\n --model_name \"tts_models/en/ljspeech/glow-tts\" \\\n --vocoder_name \"vocoder_models/en/ljspeech/univnet\" \\\n --out_path output/path/speech.wav\n ```\n\n- Run your own TTS model (using Griffin-Lim Vocoder):\n\n ```sh\n tts --text \"Text for TTS\" \\\n --model_path path/to/model.pth \\\n --config_path path/to/config.json \\\n --out_path output/path/speech.wav\n ```\n\n- Run your own TTS and Vocoder models:\n\n ```sh\n tts --text \"Text for TTS\" \\\n --model_path path/to/model.pth \\\n --config_path path/to/config.json \\\n --out_path output/path/speech.wav \\\n --vocoder_path path/to/vocoder.pth \\\n --vocoder_config_path path/to/vocoder_config.json\n ```\n\n#### Multi-speaker models\n\n- List the available speakers and choose a `<speaker_id>` among them:\n\n ```sh\n tts --model_name \"<language>/<dataset>/<model_name>\" --list_speaker_idxs\n ```\n\n- Run the multi-speaker TTS model with the target speaker ID:\n\n ```sh\n tts --text \"Text for TTS.\" --out_path output/path/speech.wav \\\n --model_name \"<language>/<dataset>/<model_name>\" --speaker_idx <speaker_id>\n ```\n\n- Run your own multi-speaker TTS model:\n\n ```sh\n tts --text \"Text for TTS\" --out_path output/path/speech.wav \\\n --model_path path/to/model.pth --config_path path/to/config.json \\\n --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>\n ```\n\n#### Voice conversion models\n\n```sh\ntts --out_path output/path/speech.wav --model_name \"<language>/<dataset>/<model_name>\" \\\n --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>\n```\n\n<!-- end-tts-readme -->\n",
"bugtrack_url": null,
"license": "MPL-2.0",
"summary": "Deep learning for Text to Speech.",
"version": "0.25.3",
"project_urls": {
"Discussions": "https://github.com/idiap/coqui-ai-TTS/discussions",
"Documentation": "https://coqui-tts.readthedocs.io",
"Homepage": "https://github.com/idiap/coqui-ai-TTS",
"Issues": "https://github.com/idiap/coqui-ai-TTS/issues",
"Repository": "https://github.com/idiap/coqui-ai-TTS"
},
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "6634cf321773e7ac1432de207da10d2f8a42b94357cb989e122f431c3a536d8b",
"md5": "e2411876516d44eb1f70cb96d58cb982",
"sha256": "1a4b2eb47137e40bfba7cfaedf4ccf4e13f73bca3428922edbb25e5cb47f674d"
},
"downloads": -1,
"filename": "coqui_tts-0.25.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "e2411876516d44eb1f70cb96d58cb982",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<3.13,>=3.9",
"size": 863485,
"upload_time": "2025-01-16T11:01:33",
"upload_time_iso_8601": "2025-01-16T11:01:33.375437Z",
"url": "https://files.pythonhosted.org/packages/66/34/cf321773e7ac1432de207da10d2f8a42b94357cb989e122f431c3a536d8b/coqui_tts-0.25.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "99d2df6eca958e06d1bbf3825a07f446b7ade6b303b05ae36af4374cfb163d31",
"md5": "e2615b34e00cb7092a67467ba4ad0dca",
"sha256": "7d9f7a9f41d8ca8e3f2dda28e3902b2ed1283930cc5bd004882173d302d20035"
},
"downloads": -1,
"filename": "coqui_tts-0.25.3.tar.gz",
"has_sig": false,
"md5_digest": "e2615b34e00cb7092a67467ba4ad0dca",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<3.13,>=3.9",
"size": 1853480,
"upload_time": "2025-01-16T11:01:37",
"upload_time_iso_8601": "2025-01-16T11:01:37.517652Z",
"url": "https://files.pythonhosted.org/packages/99/d2/df6eca958e06d1bbf3825a07f446b7ade6b303b05ae36af4374cfb163d31/coqui_tts-0.25.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-01-16 11:01:37",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "idiap",
"github_project": "coqui-ai-TTS",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "coqui-tts"
}
```