masr

- Name: masr
- Version: 2.3.8
- Home page: https://github.com/yeyupiaoling/MASR
- Summary: Automatic speech recognition toolkit on Pytorch
- Upload time: 2024-05-01 03:17:04
- Author: yeyupiaoling
- License: Apache License 2.0
- Keywords: asr, pytorch
- Requirements: none recorded
            ![python version](https://img.shields.io/badge/python-3.8+-orange.svg)
![GitHub forks](https://img.shields.io/github/forks/yeyupiaoling/MASR)
![GitHub Repo stars](https://img.shields.io/github/stars/yeyupiaoling/MASR)
![GitHub](https://img.shields.io/github/license/yeyupiaoling/MASR)
![Supported OS](https://img.shields.io/badge/支持系统-Win/Linux/MAC-9cf)

# MASR: Streaming and Non-Streaming Speech Recognition

MASR is an automatic speech recognition framework implemented in Pytorch. The name stands for Magical Automatic Speech Recognition. The current release is V2; if you want to use V1, see the [r1.x](https://github.com/yeyupiaoling/MASR/tree/r1.x) branch. MASR aims to be a simple, practical speech recognition project. It can be deployed on servers and Nvidia Jetson devices, and support for Android and other mobile devices is planned.


**You are welcome to scan a QR code below to join the Knowledge Planet group or the QQ group for discussion. The Knowledge Planet group provides the model files for this project and for the author's other related projects, as well as other resources.**

<div align="center">
  <img src="https://yeyupiaoling.cn/zsxq.png" alt="Knowledge Planet" width="400">
  <img src="https://yeyupiaoling.cn/qq.png" alt="QQ group" width="400">
</div>


The environment used by this project:
 - Anaconda 3
 - Python 3.8
 - PyTorch 1.13.1
 - Windows 10 or Ubuntu 18.04


## Project Overview

 1. This project supports the models `deepspeech2`, `conformer`, `squeezeformer`, and `efficient_conformer`. Each model supports both streaming and non-streaming recognition, selected via the `streaming` parameter in the configuration file.
 2. Two decoders are supported: the beam search decoder `ctc_beam_search` and the greedy decoder `ctc_greedy`. The beam search decoder `ctc_beam_search` is more accurate.
 3. A series of pretrained models is available for download below. After downloading a pretrained model, copy all of its files into the project root directory and run the model export step before using it for speech recognition.
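To illustrate what the greedy decoder mentioned above does conceptually, here is a minimal, self-contained sketch of CTC greedy decoding (collapse repeated tokens, then drop blanks). The token IDs and vocabulary are hypothetical placeholders, not MASR's actual implementation:

```python
# Hypothetical blank ID and vocabulary for illustration only.
BLANK = 0
VOCAB = {1: "你", 2: "好", 3: "吗"}

def ctc_greedy_decode(frame_ids):
    """Greedy CTC decoding: collapse consecutive repeats, then remove blanks."""
    out = []
    prev = None
    for t in frame_ids:
        if t != prev and t != BLANK:
            out.append(VOCAB[t])
        prev = t
    return "".join(out)

# Per-frame argmax token IDs for a short utterance:
print(ctc_greedy_decode([1, 1, 0, 2, 2, 0, 0, 3]))  # → 你好吗
```

Beam search keeps multiple candidate transcriptions per frame instead of only the argmax, which is why `ctc_beam_search` tends to reach a lower error rate than this greedy approach.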


## Changelog

 - 2023.01.28: Restructured the configuration files and added support for the efficient_conformer model.
 - 2022.11: Officially released the final V2 version.


## Video Tutorials

These video tutorials were made for PPASR, but the projects are largely the same, so they can be used as a reference.

 - [Core concepts (Bilibili)](https://www.bilibili.com/video/BV1Rr4y1D7iZ)
 - [Using streaming recognition (Bilibili)](https://www.bilibili.com/video/BV1Te4y1h7KK)

## Online Demo

**- [Online demo](https://www.doiduoyi.com/?app=SPEECHRECOG)**

# Quick Start

This section shows how to run speech recognition with MASR. MASR must be installed first; see [Quick Installation](./docs/install.md). All models are downloaded automatically during execution; no manual download is required.

1. Short-audio recognition
```python
from masr.predict import MASRPredictor

predictor = MASRPredictor(model_tag='conformer_streaming_fbank_aishell')

wav_path = 'dataset/test.wav'
result = predictor.predict(audio_data=wav_path, use_pun=False)
score, text = result['score'], result['text']
print(f"Result: {text}, score: {int(score)}")
```

2. Long-audio recognition
```python
from masr.predict import MASRPredictor

predictor = MASRPredictor(model_tag='conformer_streaming_fbank_aishell')

wav_path = 'dataset/test_long.wav'
result = predictor.predict_long(audio_data=wav_path, use_pun=False)
score, text = result['score'], result['text']
print(f"Result: {text}, score: {score}")
```

3. Simulated streaming recognition
```python
import time
import wave

from masr.predict import MASRPredictor

predictor = MASRPredictor(model_tag='conformer_streaming_fbank_aishell')

# Interval between recognition calls, in seconds
interval_time = 0.5
CHUNK = int(16000 * interval_time)
# Open the audio file
wav_path = 'dataset/test.wav'
wf = wave.open(wav_path, 'rb')
data = wf.readframes(CHUNK)
# Feed the audio to the recognizer chunk by chunk
while data != b'':
    start = time.time()
    # Read the next chunk ahead of time so we know when the stream ends
    d = wf.readframes(CHUNK)
    result = predictor.predict_stream(audio_data=data, use_pun=False, is_end=d == b'')
    data = d
    if result is None:
        continue
    score, text = result['score'], result['text']
    print(f"[Partial result] elapsed: {int((time.time() - start) * 1000)}ms, result: {text}, score: {int(score)}")
# Reset the streaming recognizer state
predictor.reset_stream()
```


## Model Downloads


1. Pretrained models on [WenetSpeech](./docs/wenetspeech.md) (10,000 hours):

|   Model   | Streaming | Features | Language | Test-set CER | Download |
|:---------:|:---------:|:--------:|:--------:|:------------:|:--------:|
| conformer |   True    |  fbank   | Mandarin |              |          |


2. Pretrained models on [WenetSpeech](./docs/wenetspeech.md) (10,000 hours) + [Chinese speech dataset](https://download.csdn.net/download/qq_33200967/87003964) (3,000+ hours):

|   Model   | Streaming | Features | Language |                            Test-set CER                             |            Download             |
|:---------:|:---------:|:--------:|:--------:|:-------------------------------------------------------------------:|:-------------------------------:|
| conformer |   True    |  fbank   | Mandarin | 0.03179 (aishell_test)<br>0.16722 (test_net)<br>0.20317 (test_meeting) | Join Knowledge Planet to obtain |


3. Pretrained models on [AIShell](https://openslr.magicdatatech.com/resources/33) (179 hours):

|        Model        | Streaming | Features | Language | Test-set CER |            Download             |
|:-------------------:|:---------:|:--------:|:--------:|:------------:|:-------------------------------:|
|    squeezeformer    |   True    |  fbank   | Mandarin |   0.04137    | Join Knowledge Planet to obtain |
|      conformer      |   True    |  fbank   | Mandarin |   0.04491    | Join Knowledge Planet to obtain |
| efficient_conformer |   True    |  fbank   | Mandarin |   0.04073    | Join Knowledge Planet to obtain |
|     deepspeech2     |   True    |  fbank   | Mandarin |   0.06907    | Join Knowledge Planet to obtain |


4. Pretrained models on [Librispeech](https://openslr.magicdatatech.com/resources/12) (960 hours):

|        Model        | Streaming | Features | Language | Test-set WER |            Download             |
|:-------------------:|:---------:|:--------:|:--------:|:------------:|:-------------------------------:|
|    squeezeformer    |   True    |  fbank   | English  |   0.09715    | Join Knowledge Planet to obtain |
|      conformer      |   True    |  fbank   | English  |   0.09265    | Join Knowledge Planet to obtain |
| efficient_conformer |   True    |  fbank   | English  |              | Join Knowledge Planet to obtain |
|     deepspeech2     |   True    |  fbank   | English  |   0.19423    | Join Knowledge Planet to obtain |


**Notes:**
1. The character and word error rates here were computed with the `eval.py` program using the `ctc_beam_search` beam search decoder.
2. Inference models are not included; copy all files into the project root directory and run `export_model.py` to export an inference model.
3. Due to limited compute, only streaming models are provided here, but every model supports both streaming and non-streaming recognition via the `streaming` parameter in the configuration file.
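For reference, error-rate figures like those in the tables above are conventionally computed as edit distance divided by reference length. Below is a minimal character error rate (CER) sketch based on Levenshtein distance; it is a generic illustration, not the actual `eval.py` code:

```python
def edit_distance(ref, hyp):
    """Minimum number of substitutions, insertions, and deletions
    needed to turn ref into hyp (single-row dynamic programming)."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                              # deletion
                        dp[j - 1] + 1,                          # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))      # substitution
            prev = cur
    return dp[n]

def cer(ref, hyp):
    """Character error rate: edit distance normalized by reference length."""
    return edit_distance(ref, hyp) / len(ref)

# One substituted character out of six:
print(round(cer("今天天气很好", "今天天汽很好"), 4))  # → 0.1667
```

Word error rate (WER), used for the English Librispeech models, is the same computation applied to whitespace-split word lists instead of characters.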

>If you run into problems, feel free to open an [issue](https://github.com/yeyupiaoling/MASR/issues) to discuss.


## Documentation

- [Quick Installation](./docs/install.md)
- [Getting Started](./docs/GETTING_STARTED.md)
- [Data Preparation](./docs/dataset.md)
- [WenetSpeech Dataset](./docs/wenetspeech.md)
- [Synthesizing Speech Data](./docs/generate_audio.md)
- [Data Augmentation](./docs/augment.md)
- [Training Models](./docs/train.md)
- [Beam Search Decoding](./docs/beam_search.md)
- [Running Evaluation](./docs/eval.md)
- [Exporting Models](./docs/export_model.md)
- [Using the Punctuation Model](./docs/punctuation.md)
- [Using Voice Activity Detection (VAD)](./docs/vad.md)
- Inference
   - [Local Inference](./docs/infer.md)
   - [Long-Audio Inference](./docs/infer.md)
   - [Web Deployment](./docs/infer.md)
   - [GUI Inference](./docs/infer.md)


## Related Projects
 - Voiceprint recognition in Pytorch: [VoiceprintRecognition-Pytorch](https://github.com/yeyupiaoling/VoiceprintRecognition-Pytorch)
 - Audio classification in Pytorch: [AudioClassification-Pytorch](https://github.com/yeyupiaoling/AudioClassification-Pytorch)
 - Speech recognition in PaddlePaddle: [PPASR](https://github.com/yeyupiaoling/PPASR)


## Support the Author

<br/>
<div align="center">
<p>Support the author with a one-yuan donation</p>
<img src="https://yeyupiaoling.cn/reward.png" alt="Donate" width="400">
</div>


## References
 - https://github.com/yeyupiaoling/PPASR
 - https://github.com/jiwidi/DeepSpeech-pytorch
 - https://github.com/wenet-e2e/WenetSpeech
 - https://github.com/SeanNaren/deepspeech.pytorch

            
