# ppasr

 - **Name**: ppasr
 - **Version**: 2.4.8
 - **Home page**: https://github.com/yeyupiaoling/PPASR
 - **Summary**: Automatic speech recognition toolkit on PaddlePaddle
 - **Upload time**: 2024-05-01 03:15:47
 - **Author**: yeyupiaoling
 - **License**: Apache License 2.0
 - **Keywords**: asr, paddle
            ![python version](https://img.shields.io/badge/python-3.8+-orange.svg)
![GitHub forks](https://img.shields.io/github/forks/yeyupiaoling/PPASR)
![GitHub Repo stars](https://img.shields.io/github/stars/yeyupiaoling/PPASR)
![GitHub](https://img.shields.io/github/license/yeyupiaoling/PPASR)
![Supported systems](https://img.shields.io/badge/支持系统-Win/Linux/MAC-9cf)

# PPASR: Streaming and Non-Streaming Speech Recognition

This project is split into three stage branches: [Beginner](https://github.com/yeyupiaoling/PPASR/tree/%E5%85%A5%E9%97%A8%E7%BA%A7), [Intermediate](https://github.com/yeyupiaoling/PPASR/tree/%E8%BF%9B%E9%98%B6%E7%BA%A7), and [Final](https://github.com/yeyupiaoling/PPASR). The current branch is the V2 version of the final stage; if you want the V1 version of the final stage, use the [r1.x](https://github.com/yeyupiaoling/PPASR/tree/r1.x) branch. PPASR (PaddlePaddle Automatic Speech Recognition) is a speech recognition framework built on PaddlePaddle that aims to be a simple, practical speech recognition project. It can be deployed on servers and Nvidia Jetson devices, and support for Android and other mobile devices is planned. **Don't forget to star the repo!**

**You are welcome to scan the QR codes below to join the Knowledge Planet (知识星球) or QQ group for discussion. The Knowledge Planet provides this project's model files, model files for the author's other related projects, and some other resources.**

<div align="center">
  <img src="https://yeyupiaoling.cn/zsxq.png" alt="Knowledge Planet" width="400">
  <img src="https://yeyupiaoling.cn/qq.png" alt="QQ group" width="400">
</div>


## Online Usage

**1. [Train and predict on the AI Studio platform](https://aistudio.baidu.com/aistudio/projectdetail/3290199)**

**2. [Online demo](https://www.doiduoyi.com/?app=SPEECHRECOG)**

**3. [inscode](https://inscode.csdn.net/@yeyupiaoling/ppasr)**

<br/>

**Environment used by this project:**
 - Anaconda 3
 - Python 3.8
 - PaddlePaddle 2.5.1
 - Windows 10 or Ubuntu 18.04


## Project at a Glance

 1. This project supports the models `deepspeech2`, `conformer`, `squeezeformer`, and `efficient_conformer`. Every model supports both streaming and non-streaming recognition, controlled by the `streaming` parameter in the configuration file.
 2. This project supports two decoders: the beam search decoder `ctc_beam_search` and the greedy decoder `ctc_greedy`. The beam search decoder `ctc_beam_search` is more accurate.
 3. A series of pretrained models is available for download below. After downloading a pretrained model, copy all of its files to the project root directory and export the model before it can be used for speech recognition.

## Changelog

 - 2023.01.28: Restructured the configuration files; added support for the efficient_conformer model.
 - 2022.12.05: Added automatic mixed precision training and quantized model export.
 - 2022.11.26: Added support for the Squeezeformer model.
 - 2022.11.01: Changed the Conformer model's decoder to BiTransformerDecoder; added the SpecSubAugmentor data augmentor.
 - 2022.10.29: Official release of the final-stage V2 version.

## Video Tutorials

 - [Concept walkthrough (Bilibili)](https://www.bilibili.com/video/BV1Rr4y1D7iZ)
 - [Using streaming recognition (Bilibili)](https://www.bilibili.com/video/BV1Te4y1h7KK)


# Quick Start

This section shows how to run speech recognition with PPASR. PPASR must be installed first; see [Quick Installation](./docs/install.md). No manual model download is required; everything is handled automatically.

1. Short-audio recognition
```python
from ppasr.predict import PPASRPredictor

predictor = PPASRPredictor(model_tag='conformer_streaming_fbank_wenetspeech')

wav_path = 'dataset/test.wav'
result = predictor.predict(audio_data=wav_path, use_pun=False)
score, text = result['score'], result['text']
print(f"Result: {text}, score: {int(score)}")
```

2. Long-audio recognition
```python
from ppasr.predict import PPASRPredictor

predictor = PPASRPredictor(model_tag='conformer_streaming_fbank_wenetspeech')

wav_path = 'dataset/test_long.wav'
result = predictor.predict_long(audio_data=wav_path, use_pun=False)
score, text = result['score'], result['text']
print(f"Result: {text}, score: {score}")
```

3. Simulated streaming recognition
```python
import time
import wave

from ppasr.predict import PPASRPredictor

predictor = PPASRPredictor(model_tag='conformer_streaming_fbank_wenetspeech')

# Interval between recognition calls, in seconds
interval_time = 0.5
CHUNK = int(16000 * interval_time)  # frames per chunk at a 16 kHz sample rate
# Read the audio data
wav_path = 'dataset/test.wav'
wf = wave.open(wav_path, 'rb')
data = wf.readframes(CHUNK)
# Feed the audio to the streaming predictor chunk by chunk
while data != b'':
    start = time.time()
    d = wf.readframes(CHUNK)
    result = predictor.predict_stream(audio_data=data, use_pun=False, is_end=d == b'')
    data = d
    if result is None: continue
    score, text = result['score'], result['text']
    print(f"[Partial result] elapsed: {int((time.time() - start) * 1000)}ms, result: {text}, score: {int(score)}")
# Reset the streaming recognition state
predictor.reset_stream()
```
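The loop above reads fixed-duration chunks from a WAV file; to adapt it to other audio sources, the same chunking can be applied to any raw PCM byte buffer. Below is a small hypothetical helper (the name `pcm_chunks` and its defaults are illustrative, assuming the 16 kHz, 16-bit mono audio used in the example) that yields chunks together with an end-of-stream flag, the same shape of data the `is_end` argument of `predict_stream` expects.

```python
# Split a raw PCM byte buffer into fixed-duration chunks, flagging the
# final chunk. Illustrative sketch; not part of the PPASR API.
def pcm_chunks(pcm: bytes, sample_rate=16000, sample_width=2, interval=0.5):
    # bytes per chunk = frames per chunk * bytes per frame (mono audio)
    chunk_bytes = int(sample_rate * interval) * sample_width
    for i in range(0, len(pcm), chunk_bytes):
        yield pcm[i:i + chunk_bytes], i + chunk_bytes >= len(pcm)

# 1.25 s of silent 16 kHz mono 16-bit audio -> three chunks, last flagged.
data = b'\x00' * 40000
flags = [is_end for _, is_end in pcm_chunks(data)]
print(flags)  # [False, False, True]
```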


## 模型下载

1. Pretrained models trained on [WenetSpeech](./docs/wenetspeech.md) (10,000 hours):

|    Model    | Streaming | Features | Language |                            Test set CER                             |       Download       |
|:-----------:|:---------:|:--------:|:--------:|:-------------------------------------------------------------------:|:--------------------:|
|  conformer  |   True    |  fbank   | Mandarin | 0.03579(aishell_test)<br>0.11081(test_net)<br>0.16031(test_meeting) | Via Knowledge Planet |
| deepspeech2 |   True    |  fbank   | Mandarin |                        0.05379(aishell_test)                        | Via Knowledge Planet |



2. Pretrained models trained on [WenetSpeech](./docs/wenetspeech.md) (10,000 hours) + the [Chinese speech dataset](https://download.csdn.net/download/qq_33200967/87003964) (3,000+ hours):

|   Model   | Streaming | Features | Language |                            Test set CER                             |       Download       |
|:---------:|:---------:|:--------:|:--------:|:-------------------------------------------------------------------:|:--------------------:|
| conformer |   True    |  fbank   | Mandarin | 0.02923(aishell_test)<br>0.11876(test_net)<br>0.18346(test_meeting) | Via Knowledge Planet |



3. Pretrained models trained on [AIShell](https://openslr.magicdatatech.com/resources/33) (179 hours):

|        Model        | Streaming | Features | Language | Test set CER |       Download       |
|:-------------------:|:---------:|:--------:|:--------:|:------------:|:--------------------:|
|    squeezeformer    |   True    |  fbank   | Mandarin |   0.04675    | Via Knowledge Planet |
|      conformer      |   True    |  fbank   | Mandarin |   0.04178    | Via Knowledge Planet |
| efficient_conformer |   True    |  fbank   | Mandarin |   0.04143    | Via Knowledge Planet |
|     deepspeech2     |   True    |  fbank   | Mandarin |   0.09732    | Via Knowledge Planet |


4. Pretrained models trained on [Librispeech](https://openslr.magicdatatech.com/resources/12) (960 hours):

|        Model        | Streaming | Features | Language | Test set WER |       Download       |
|:-------------------:|:---------:|:--------:|:--------:|:------------:|:--------------------:|
|    squeezeformer    |   True    |  fbank   | English  |   0.13033    | Via Knowledge Planet |
|      conformer      |   True    |  fbank   | English  |   0.08109    | Via Knowledge Planet |
| efficient_conformer |   True    |  fbank   | English  |              | Via Knowledge Planet |
|     deepspeech2     |   True    |  fbank   | English  |   0.15294    | Via Knowledge Planet |


**Notes:**
1. The character/word error rates above were computed with the `eval.py` script using the beam search decoder `ctc_beam_search`.
2. Inference models are not provided; copy all downloaded files to the project root directory and run `export_model.py` to export an inference model.
3. Due to limited compute, only streaming models are provided, but every model supports both streaming and non-streaming modes via the `streaming` parameter in the configuration file.
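For reference, the character error rate reported in the tables above is the edit distance between the reference and hypothesis transcripts divided by the reference length. A minimal sketch of that computation (illustrative only; the `eval.py` script in the repository is the authoritative implementation):

```python
# Character error rate (CER) via Levenshtein edit distance.
# Illustrative sketch, not the project's eval.py implementation.
def edit_distance(ref, hyp):
    # Rolling-array dynamic-programming Levenshtein distance.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution
    return dp[-1]

def cer(reference: str, hypothesis: str) -> float:
    return edit_distance(reference, hypothesis) / len(reference)

print(round(cer("speech", "speach"), 3))  # 0.167
```

Word error rate (WER), used for the English Librispeech models, is the same computation applied to word lists instead of character strings.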

> If you run into problems, feel free to open an [issue](https://github.com/yeyupiaoling/PPASR/issues) to discuss.

## Documentation

- [Quick Installation](./docs/install.md)
- [Getting Started](./docs/GETTING_STARTED.md)
- [Data Preparation](./docs/dataset.md)
- [WenetSpeech Dataset](./docs/wenetspeech.md)
- [Synthesizing Speech Data](./docs/generate_audio.md)
- [Data Augmentation](./docs/augment.md)
- [Training Models](./docs/train.md)
- [Beam Search Decoding](./docs/beam_search.md)
- [Running Evaluation](./docs/eval.md)
- [Exporting Models](./docs/export_model.md)
- [Using the Punctuation Model](./docs/punctuation.md)
- [Using Voice Activity Detection (VAD)](./docs/vad.md)
- Inference
   - [Local Inference](./docs/infer.md)
   - [Long-Audio Inference](./docs/infer.md)
   - [Web Deployment](./docs/infer.md)
   - [GUI Inference](./docs/infer.md)
   - [Nvidia Jetson Deployment](./docs/nvidia-jetson.md)

## Related Projects
 - Voiceprint recognition on PaddlePaddle: [VoiceprintRecognition-PaddlePaddle](https://github.com/yeyupiaoling/VoiceprintRecognition-PaddlePaddle)
 - Speech recognition on PaddlePaddle static graphs: [PaddlePaddle-DeepSpeech](https://github.com/yeyupiaoling/PaddlePaddle-DeepSpeech)
 - Speech recognition on Pytorch: [MASR](https://github.com/yeyupiaoling/MASR)


## Special Thanks

 - Thanks to <img src="docs/images/PyCharm_icon.png" height="25" width="25" >[the JetBrains open source community](https://jb.gg/OpenSourceSupport) for providing development tools.

## Support the Author

<br/>
<div align="center">
<p>Tip the author a small amount to show your support</p>
<img src="https://yeyupiaoling.cn/reward.png" alt="Support the author" width="400">
</div>

## References
 - https://github.com/PaddlePaddle/PaddleSpeech
 - https://github.com/jiwidi/DeepSpeech-pytorch
 - https://github.com/wenet-e2e/WenetSpeech

            
