[**🇨🇳中文**](https://github.com/shibing624/parrots/blob/master/README.md) | [**🌐English**](https://github.com/shibing624/parrots/blob/master/README_EN.md) | [**📖文档/Docs**](https://github.com/shibing624/parrots/wiki) | [**🤖模型/Models**](https://huggingface.co/shibing624)
<div align="center">
<a href="https://github.com/shibing624/parrots">
<img src="https://github.com/shibing624/parrots/blob/master/docs/parrots_icon.png" alt="Logo" height="156">
</a>
<br/>
<br/>
<a href="https://huggingface.co/spaces/shibing624/parrots" target="_blank"> Online Demo </a>
<br/>
<img width="100%" src="https://github.com/shibing624/parrots/blob/master/docs/hf.jpg">
</div>
-----------------
# Parrots: ASR and TTS toolkit
[PyPI](https://badge.fury.io/py/parrots)
[Downloads](https://pepy.tech/project/parrots)
[Contributing](CONTRIBUTING.md)
[Contributors](https://github.com/shibing624/parrots/graphs/contributors)
[License](LICENSE)
[Requirements](requirements.txt)
[Issues](https://github.com/shibing624/parrots/issues)
[Contact](#Contact)
## Introduction
Parrots is an Automatic Speech Recognition (**ASR**) and Text-To-Speech (**TTS**) toolkit supporting Chinese, English, Japanese, and more.

**parrots** lets you call speech recognition and speech synthesis models with a single line of code. It works out of the box and supports Chinese and English.
## Features
1. **ASR**: Chinese speech recognition (ASR) model based on `distilwhisper`, supporting Chinese, English, and other languages
2. **TTS**: speech synthesis (TTS) model trained with `GPT-SoVITS`, supporting Chinese, English, Japanese, and other languages
3. **IndexTTS2**: integrates the IndexTTS2 model for zero-shot speech synthesis with emotional expression and duration control
   - Precise control of speech duration
   - Emotion decoupled from speaker identity, so timbre and emotion can be controlled independently
   - Multiple emotion-control modes: reference audio, emotion vector, text description
   - Highly expressive emotional speech synthesis
4. **Streaming TTS**: streaming speech synthesis for low-latency, real-time audio output
## Install
```shell
pip install torch # or conda install pytorch
pip install -r requirements.txt
pip install parrots
```
or
```shell
pip install torch # or conda install pytorch
git clone https://github.com/shibing624/parrots.git
cd parrots
python setup.py install
```
## Demo
- Official Demo: https://www.mulanai.com/product/tts/
- HuggingFace Demo: https://huggingface.co/spaces/shibing624/parrots
<img width="85%" src="https://github.com/shibing624/parrots/blob/master/docs/hf.png">
Run the example [examples/tts_gradio_demo.py](https://github.com/shibing624/parrots/blob/master/examples/tts_gradio_demo.py) to see the demo:
```shell
python examples/tts_gradio_demo.py
```
## Usage
### ASR(Speech Recognition)
example: [examples/demo_asr.py](https://github.com/shibing624/parrots/blob/master/examples/demo_asr.py)
```python
import os
import sys

sys.path.append('..')
from parrots import SpeechRecognition

pwd_path = os.path.abspath(os.path.dirname(__file__))

if __name__ == '__main__':
    m = SpeechRecognition()
    r = m.recognize_speech_from_file(os.path.join(pwd_path, 'tushuguan.wav'))
    print('[提示] 语音识别结果:', r)
```
output:
```
{'text': '北京图书馆'}
```
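
To transcribe several files, the same `recognize_speech_from_file` call shown above can simply be looped. A minimal sketch, assuming a folder of WAV files (the `examples/*.wav` pattern is illustrative):

```python
import glob
import os

from parrots import SpeechRecognition

m = SpeechRecognition()
for wav_path in sorted(glob.glob("examples/*.wav")):  # illustrative folder
    result = m.recognize_speech_from_file(wav_path)
    # result is a dict like {'text': '...'} as shown in the output above
    print(os.path.basename(wav_path), "->", result.get("text", ""))
```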
### TTS(Speech Synthesis)
#### GPT-SoVITS basic usage
example: [examples/demo_tts.py](https://github.com/shibing624/parrots/blob/master/examples/demo_tts.py)
```python
from parrots import TextToSpeech

# Initialize the TTS model (no manual path configuration needed)
m = TextToSpeech(
    speaker_model_path="shibing624/parrots-gpt-sovits-speaker-maimai",
    speaker_name="MaiMai",
    device="cpu",  # or "cuda" to run on GPU
    half=False  # set to True for half-precision acceleration
)

# Generate speech
m.predict(
    text="你好,欢迎来到北京。这是一个合成录音文件的演示。Welcome to Beijing!",
    text_language="auto",  # auto-detect the language, or specify "zh", "en", "ja"
    output_path="output_audio.wav"
)
```
output:
```
Save audio to output_audio.wav
```
#### Streaming TTS (low latency)

Streaming speech synthesis is supported, which suits real-time conversation scenarios:
```python
from parrots import TextToSpeech
import soundfile as sf
import numpy as np

m = TextToSpeech(
    speaker_model_path="shibing624/parrots-gpt-sovits-speaker-maimai",
    speaker_name="MaiMai",
)

# Generate speech in streaming mode
audio_chunks = []
for audio_chunk in m.predict_stream(
    text="这是一段较长的文本,将会被流式合成为语音。",
    text_language="zh",
    stream_chunk_size=20  # controls latency; smaller values mean lower latency
):
    audio_chunks.append(audio_chunk)
    # audio_chunk can be played back in real time here

# Save the full audio
full_audio = np.concatenate(audio_chunks)
sf.write("streaming_output.wav", full_audio, m.sampling_rate)
```
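
For real-time playback instead of collecting chunks, each chunk can be written straight to the sound card. A minimal sketch, assuming the optional `sounddevice` package is installed (it is not a parrots dependency) and that chunks are NumPy arrays at `m.sampling_rate`; int16-looking chunks are rescaled before playback:

```python
import numpy as np
import sounddevice as sd  # assumption: optional extra dependency, used only for playback

from parrots import TextToSpeech

m = TextToSpeech(
    speaker_model_path="shibing624/parrots-gpt-sovits-speaker-maimai",
    speaker_name="MaiMai",
)

with sd.OutputStream(samplerate=m.sampling_rate, channels=1, dtype="float32") as stream:
    for audio_chunk in m.predict_stream(text="这是一段较长的文本,将会被流式合成为语音。", text_language="zh"):
        chunk = np.asarray(audio_chunk, dtype=np.float32)
        if np.abs(chunk).max() > 1.0:  # chunk looks like int16 PCM; rescale to [-1, 1]
            chunk = chunk / 32768.0
        stream.write(chunk.reshape(-1, 1))  # play each chunk as soon as it is synthesized
```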
#### Logging

Control the log output level:
```python
from parrots import TextToSpeech
from parrots.log import set_log_level, logger

# Set the log level
set_log_level("INFO")  # options: DEBUG, INFO, WARNING, ERROR

m = TextToSpeech(
    speaker_model_path="shibing624/parrots-gpt-sovits-speaker-maimai",
    speaker_name="MaiMai",
)

# Use the logger
logger.info("开始语音合成...")
m.predict(
    text="你好,世界!",
    text_language="zh",
    output_path="output.wav"
)
```
#### IndexTTS2 advanced usage

IndexTTS2 is a breakthrough autoregressive zero-shot speech synthesis model with emotional expression and duration control.
example: [examples/demo_indextts.py](https://github.com/shibing624/parrots/blob/master/examples/demo_indextts.py)
**1. Basic voice cloning (single reference audio)**
```python
from parrots.indextts import IndexTTS2
tts = IndexTTS2()
text = "你好,欢迎来到北京。这是一个合成录音文件的演示。"
tts.infer(spk_audio_prompt='examples/voice_01.wav', text=text, output_path="gen.wav", verbose=True)
```
**2. Emotional speech synthesis (with an emotion reference audio)**

Use a separate emotion reference audio to control the emotional expression of the synthesized speech:
```python
from parrots.indextts import IndexTTS2

tts = IndexTTS2()
text = "酒楼丧尽天良,开始借机竞拍房间,哎,一群蠢货。"
tts.infer(
    spk_audio_prompt='examples/voice_07.wav',  # speaker timbre reference
    text=text,
    output_path="gen.wav",
    emo_audio_prompt="examples/emo_sad.wav",  # emotion reference audio
    verbose=True
)
```
**3. Adjusting emotion strength**

Use the `emo_alpha` parameter (range 0.0-1.0) to adjust how strongly the emotion affects the output:
```python
from parrots.indextts import IndexTTS2

tts = IndexTTS2()
text = "酒楼丧尽天良,开始借机竞拍房间,哎,一群蠢货。"
tts.infer(
    spk_audio_prompt='examples/voice_07.wav',
    text=text,
    output_path="gen.wav",
    emo_audio_prompt="examples/emo_sad.wav",
    emo_alpha=0.6,  # 60% emotion strength
    verbose=True
)
```
**4. Emotion vector control**

Provide an 8-dimensional emotion vector directly for precise emotion control. The order is:
`[happy, angry, sad, afraid, disgusted, melancholic, surprised, calm]`
```python
from parrots.indextts import IndexTTS2

tts = IndexTTS2()
text = "哇塞!这个爆率也太高了!欧皇附体了!"
tts.infer(
    spk_audio_prompt='examples/voice_10.wav',
    text=text,
    output_path="gen.wav",
    emo_vector=[0, 0, 0, 0, 0, 0, 0.45, 0],  # surprised
    use_random=False,
    verbose=True
)
```
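
As a small convenience, the vector can also be built by emotion name instead of by index position. This helper is purely illustrative and not part of the parrots or IndexTTS2 API; the name order simply mirrors the 8-dim vector documented above:

```python
# Illustrative helper only; the order matches the documented 8-dim emotion vector.
EMOTIONS = ["happy", "angry", "sad", "afraid", "disgusted", "melancholic", "surprised", "calm"]

def make_emo_vector(**weights):
    """Build an 8-dim emotion vector, e.g. make_emo_vector(surprised=0.45)."""
    vec = [0.0] * len(EMOTIONS)
    for name, value in weights.items():
        vec[EMOTIONS.index(name)] = value  # raises ValueError for unknown emotion names
    return vec

# Equivalent to the emo_vector used in the example above
assert make_emo_vector(surprised=0.45) == [0, 0, 0, 0, 0, 0, 0.45, 0]
```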
**5. Text-based emotion control**

Enable `use_emo_text` to infer the emotion automatically from the text content:
```python
from parrots.indextts import IndexTTS2

tts = IndexTTS2()
text = "快躲起来!是他要来了!他要来抓我们了!"
tts.infer(
    spk_audio_prompt='examples/voice_12.wav',
    text=text,
    output_path="gen.wav",
    emo_alpha=0.6,
    use_emo_text=True,  # enable text-based emotion analysis
    use_random=False,
    verbose=True
)
```
**6. Separate emotion text description**

Use the `emo_text` parameter to supply a standalone emotion description text:
```python
from parrots.indextts import IndexTTS2

tts = IndexTTS2()
text = "快躲起来!是他要来了!他要来抓我们了!"
emo_text = "你吓死我了!你是鬼吗?"  # standalone emotion description
tts.infer(
    spk_audio_prompt='examples/voice_12.wav',
    text=text,
    output_path="gen.wav",
    emo_alpha=0.6,
    use_emo_text=True,
    emo_text=emo_text,
    use_random=False,
    verbose=True
)
```
**Pinyin control:**

IndexTTS2 supports mixed modeling of Chinese characters and pinyin. When precise pronunciation control is needed, provide text annotated with the desired pinyin.
Note: pinyin control does not cover every possible initial-final combination; only valid Chinese pinyin syllables are supported.

Example:
```python
text = "之前你做DE5很好,所以这一次也DEI3做DE2很好才XING2,如果这次目标完成得不错的话,我们就直接打DI1去银行取钱。"
```
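
Put together, a full call looks like the basic voice-cloning example above, just with the pinyin-annotated text; the reference audio path and output filename below are illustrative:

```python
from parrots.indextts import IndexTTS2

tts = IndexTTS2()
# Mixed Chinese characters and pinyin syllables (tone number appended) for precise pronunciation
text = "之前你做DE5很好,所以这一次也DEI3做DE2很好才XING2,如果这次目标完成得不错的话,我们就直接打DI1去银行取钱。"
tts.infer(spk_audio_prompt='examples/voice_01.wav', text=text, output_path="gen_pinyin.wav", verbose=True)
```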
### Command-line interface (CLI)

ASR and TTS tasks can also be run from the command line; see [cli.py](https://github.com/shibing624/parrots/blob/master/parrots/cli.py).
```
> parrots -h

NAME
    parrots

SYNOPSIS
    parrots COMMAND

COMMANDS
    COMMAND is one of the following:

     asr
       Entry point of asr, recognize speech from file

     tts
       Entry point of tts, generate speech audio from text
```
run:
```shell
pip install parrots -U
# asr example
parrots asr -h
parrots asr examples/tushuguan.wav
# tts example
parrots tts -h
parrots tts "你好,欢迎来北京。welcome to the city." output_audio.wav
```
- `asr` and `tts` are subcommands: `asr` does speech recognition and `tts` does speech synthesis; the Chinese models are used by default
- See `parrots asr -h` for the usage of each subcommand
- In the example above, `examples/tushuguan.wav` is the `audio_file_path` argument of the `asr` command, i.e. the required input audio file (see the batch-processing sketch below)
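
For batch jobs, the same CLI commands can be driven from a script. A minimal sketch (not part of the package) that transcribes every WAV file in a folder by shelling out to `parrots asr`; the folder pattern is illustrative:

```python
import glob
import subprocess

for wav_path in sorted(glob.glob("examples/*.wav")):
    # Equivalent to running `parrots asr <file>` by hand for each file
    subprocess.run(["parrots", "asr", wav_path], check=True)
```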
## Release Models
### ASR
- [BELLE-2/Belle-distilwhisper-large-v2-zh](https://huggingface.co/BELLE-2/Belle-distilwhisper-large-v2-zh)
### IndexTTS2
- [IndexTeam/IndexTTS-2](https://huggingface.co/IndexTeam/IndexTTS-2) - latest model with emotional expression and duration control
- [IndexTeam/IndexTTS-1.5](https://huggingface.co/IndexTeam/IndexTTS-1.5) - improved stability and English performance
- [IndexTeam/Index-TTS](https://huggingface.co/IndexTeam/Index-TTS) - initial release

Related papers:
- [IndexTTS2 Paper](https://arxiv.org/abs/2506.21619) - breakthrough in emotional expression and duration control
- [IndexTTS Paper](https://arxiv.org/abs/2502.05512) - industrial-grade controllable zero-shot TTS
### GPT-SoVITS TTS
- [shibing624/parrots-gpt-sovits-speaker](https://huggingface.co/shibing624/parrots-gpt-sovits-speaker)

| speaker name | Chinese name | character | voice | language |
|--|--|--|--|--|
| KuileBlanc | 葵·勒布朗 | lady | standard American female voice | en |
| LongShouRen | 龙守仁 | gentleman | standard American male voice | en |
| MaiMai | 卖卖 | singing female anchor | singing female streamer voice | zh |
| XingTong | 星瞳 | singing AI girl | lively female voice | zh |
| XuanShen | 炫神 | game male anchor | gaming male streamer voice | zh |
| KusanagiNene | 草薙寧々 | loli | young female student voice | ja |
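
A hedged sketch of switching speakers, assuming the multi-speaker repo above can be loaded through the same `TextToSpeech` API shown earlier and that `speaker_name` takes one of the names from the table (e.g. "XingTong"); this is an assumption, not documented usage:

```python
from parrots import TextToSpeech

# Assumption: the multi-speaker repo accepts the same constructor arguments as the
# MaiMai model used above; "XingTong" is one speaker name from the table.
m = TextToSpeech(
    speaker_model_path="shibing624/parrots-gpt-sovits-speaker",
    speaker_name="XingTong",
)
m.predict(text="你好,欢迎来到北京。", text_language="zh", output_path="xingtong.wav")
```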
- [shibing624/parrots-gpt-sovits-speaker-maimai](https://huggingface.co/shibing624/parrots-gpt-sovits-speaker-maimai)

| speaker name | Chinese name | character | voice | language |
|--|--|--|--|--|
| MaiMai | 卖卖 | singing female anchor | singing female streamer voice | zh |
## Changelog

### v0.3.0 (2025-11)
- 🔥 Integrated the IndexTTS2 model for zero-shot speech synthesis with emotional expression and duration control
- ✨ Multiple emotion-control modes: reference audio, emotion vector, text description
- ✨ Decoupled emotion from speaker identity so timbre and emotion can be controlled independently
- ✨ Mixed pinyin modeling for precise pronunciation control
- 🐛 Fixed compatibility issues with transformers 4.50+
- 🐛 Fixed a dictionary parameter access error
- 📝 Added IndexTTS2 usage examples and documentation

### v0.2.0 (2025-10)
- ✨ Added streaming TTS for low-latency, real-time speech synthesis
- ✨ Added a unified logging system (based on loguru)
- 🐛 Fixed the `weight_norm` deprecation warning on PyTorch 2.0+
- 🐛 Fixed the `return_complex=False` deprecation warning for `torch.stft`
- 🐛 Fixed librosa `resample` and `time_stretch` warnings
- 🔧 Improved model loading so manual `sys.path` additions are no longer needed
- 📝 Improved documentation and example code

### v0.1.0 (2024-12)
- 🎉 Initial release
- ✨ ASR (speech recognition) support
- ✨ TTS (speech synthesis) support
- ✨ Support for Chinese, English, and Japanese
## Contact
- Issues (suggestions): [GitHub issues](https://github.com/shibing624/parrots/issues)
- Email me: xuming624@qq.com
- WeChat: add WeChat ID *xuming624* to join the Python-NLP discussion group; please note *Name-Company-NLP* in your request
<img src="https://github.com/shibing624/parrots/blob/master/docs/wechat.jpeg" width="200" />
## Citation
If you use parrots in your research, please cite it as follows:
```latex
@misc{parrots,
  title={parrots: ASR and TTS Tool},
  author={Ming Xu},
  year={2024},
  howpublished={\url{https://github.com/shibing624/parrots}},
}
```
## License
The project is licensed under [The Apache License 2.0](/LICENSE) and is free for commercial use. Please include a link to parrots and the license text in your product documentation.
## Contribute
The code is still rough. Improvements are welcome; before submitting a PR, please note two points:

- Add corresponding unit tests under `tests`
- Run all unit tests with `python -m pytest` and make sure they pass

Then submit your PR.
## Reference
#### ASR(Speech Recognition)
- [EAT: Enhanced ASR-TTS for Self-supervised Speech Recognition](https://arxiv.org/abs/2104.07474)
- [PaddlePaddle/PaddleSpeech](https://github.com/PaddlePaddle/PaddleSpeech)
- [NVIDIA/NeMo](https://github.com/NVIDIA/NeMo)
#### TTS(Speech Synthesis)
- [IndexTeam/IndexTTS](https://github.com/index-tts/index-tts) - IndexTTS2, emotional expression and duration control
- [coqui-ai/TTS](https://github.com/coqui-ai/TTS)
- [keonlee9420/Expressive-FastSpeech2](https://github.com/keonlee9420/Expressive-FastSpeech2)
- [TensorSpeech/TensorflowTTS](https://github.com/TensorSpeech/TensorflowTTS)
- [RVC-Boss/GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS)