# sherpa-onnx

- **Version**: 1.10.34
- **Home page**: <https://github.com/k2-fsa/sherpa-onnx>
- **Author**: The sherpa-onnx development team
- **Uploaded**: 2024-12-10 12:35:54
- **Requires Python**: >=3.6
- **License**: Apache, as found in the LICENSE file

### Supported functions

|Speech recognition| Speech synthesis |
|------------------|------------------|
|   ✔️              |         ✔️        |

|Speaker identification| Speaker diarization | Speaker verification |
|----------------------|-------------------- |------------------------|
|   ✔️                  |         ✔️           |            ✔️           |

| Spoken Language identification | Audio tagging | Voice activity detection |
|--------------------------------|---------------|--------------------------|
|                 ✔️              |          ✔️    |                ✔️         |

| Keyword spotting | Add punctuation |
|------------------|-----------------|
|     ✔️            |       ✔️         |
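Speaker verification, one of the functions above, typically boils down to comparing two fixed-size speaker embeddings and thresholding their cosine similarity. The sketch below is illustrative only (it is not the sherpa-onnx API; real embeddings come from a speaker model, and the threshold is tuned on held-out data):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two speaker embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def same_speaker(emb1, emb2, threshold=0.6):
    # The 0.6 threshold is an illustrative assumption, not a recommended value.
    return cosine_similarity(emb1, emb2) >= threshold

enrolled = [0.1, 0.9, 0.3]          # embedding from enrollment audio (made up)
test_utt = [0.12, 0.88, 0.31]       # embedding from a test utterance (made up)
print(same_speaker(enrolled, test_utt))  # nearly parallel vectors -> True
```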

### Supported platforms

|Architecture| Android | iOS     | Windows    | macOS | Linux | HarmonyOS |
|------------|---------|---------|------------|-------|-------|-----------|
|   x64      |  ✔️      |         |   ✔️        | ✔️     |  ✔️    |   ✔️       |
|   x86      |  ✔️      |         |   ✔️        |       |       |           |
|   arm64    |  ✔️      | ✔️       |   ✔️        | ✔️     |  ✔️    |   ✔️       |
|   arm32    |  ✔️      |         |            |       |  ✔️    |   ✔️       |
|   riscv64  |         |         |            |       |  ✔️    |           |

### Supported programming languages

| 1. C++ | 2. C  | 3. Python | 4. JavaScript |
|--------|-------|-----------|---------------|
|   ✔️    | ✔️     | ✔️         |    ✔️          |

|5. Java | 6. C# | 7. Kotlin | 8. Swift |
|--------|-------|-----------|----------|
| ✔️      |  ✔️    | ✔️         |  ✔️       |

| 9. Go | 10. Dart | 11. Rust | 12. Pascal |
|-------|----------|----------|------------|
| ✔️     |  ✔️       |   ✔️      |    ✔️       |

For Rust support, please see [sherpa-rs][sherpa-rs].

It also supports WebAssembly.

## Introduction

This repository supports running the following functions **locally**

  - Speech-to-text (i.e., ASR); both streaming and non-streaming are supported
  - Text-to-speech (i.e., TTS)
  - Speaker diarization
  - Speaker identification
  - Speaker verification
  - Spoken language identification
  - Audio tagging
  - VAD (e.g., [silero-vad][silero-vad])
  - Keyword spotting

on the following platforms and operating systems:

  - x86, ``x86_64``, 32-bit ARM, 64-bit ARM (arm64, aarch64), RISC-V (riscv64)
  - Linux, macOS, Windows, openKylin
  - Android, WearOS
  - iOS
  - HarmonyOS
  - NodeJS
  - WebAssembly
  - [Raspberry Pi][Raspberry Pi]
  - [RV1126][RV1126]
  - [LicheePi4A][LicheePi4A]
  - [VisionFive 2][VisionFive 2]
  - [旭日X3派][旭日X3派]
  - [爱芯派][爱芯派]
  - etc.

with the following APIs

  - C++, C, Python, Go, ``C#``
  - Java, Kotlin, JavaScript
  - Swift, Rust
  - Dart, Object Pascal
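The streaming vs. non-streaming distinction above comes down to when audio is fed and when results are read. The sketch below illustrates the two control flows with a toy stand-in; `ToyRecognizer` and its methods are invented for illustration and are not the sherpa-onnx API (a real recognizer runs a neural model instead of this word-count stub):

```python
class ToyRecognizer:
    """Stand-in recognizer: pretends each audio chunk decodes to one word."""

    def __init__(self):
        self._words = []

    def accept_waveform(self, chunk):
        self._words.append(f"w{len(self._words)}")

    def partial_result(self):
        # Streaming property: the result grows as audio arrives.
        return " ".join(self._words)

audio_chunks = [b"\x00" * 160] * 3  # e.g. three 10 ms chunks at 16 kHz

# Streaming: feed chunks as they arrive; read partial results at any time.
rec = ToyRecognizer()
for chunk in audio_chunks:
    rec.accept_waveform(chunk)
print(rec.partial_result())  # "w0 w1 w2"

# Non-streaming: buffer the whole utterance, decode once at the end.
rec2 = ToyRecognizer()
rec2.accept_waveform(b"".join(audio_chunks))
print(rec2.partial_result())  # "w0"
```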

### Links for Hugging Face Spaces

<details>
<summary>You can visit the following Hugging Face Spaces to try sherpa-onnx without
installing anything. All you need is a browser.</summary>

| Description                                           | URL                                     |
|-------------------------------------------------------|-----------------------------------------|
| Speaker diarization                                   | [Click me][hf-space-speaker-diarization]|
| Speech recognition                                    | [Click me][hf-space-asr]                |
| Speech recognition with [Whisper][Whisper]            | [Click me][hf-space-asr-whisper]        |
| Speech synthesis                                      | [Click me][hf-space-tts]                |
| Generate subtitles                                    | [Click me][hf-space-subtitle]           |
| Audio tagging                                         | [Click me][hf-space-audio-tagging]      |
| Spoken language identification with [Whisper][Whisper]| [Click me][hf-space-slid-whisper]       |

We also have spaces built using WebAssembly. They are listed below:

| Description                                                                              | Huggingface space| ModelScope space|
|------------------------------------------------------------------------------------------|------------------|-----------------|
|Voice activity detection with [silero-vad][silero-vad]                                    | [Click me][wasm-hf-vad]|[地址][wasm-ms-vad]|
|Real-time speech recognition (Chinese + English) with Zipformer                           | [Click me][wasm-hf-streaming-asr-zh-en-zipformer]|[地址][wasm-ms-streaming-asr-zh-en-zipformer]|
|Real-time speech recognition (Chinese + English) with Paraformer                          |[Click me][wasm-hf-streaming-asr-zh-en-paraformer]| [地址][wasm-ms-streaming-asr-zh-en-paraformer]|
|Real-time speech recognition (Chinese + English + Cantonese) with [Paraformer-large][Paraformer-large]|[Click me][wasm-hf-streaming-asr-zh-en-yue-paraformer]| [地址][wasm-ms-streaming-asr-zh-en-yue-paraformer]|
|Real-time speech recognition (English) |[Click me][wasm-hf-streaming-asr-en-zipformer]    |[地址][wasm-ms-streaming-asr-en-zipformer]|
|VAD + speech recognition (Chinese + English + Korean + Japanese + Cantonese) with [SenseVoice][SenseVoice]|[Click me][wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]| [地址][wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]|
|VAD + speech recognition (English) with [Whisper][Whisper] tiny.en|[Click me][wasm-hf-vad-asr-en-whisper-tiny-en]| [地址][wasm-ms-vad-asr-en-whisper-tiny-en]|
|VAD + speech recognition (English) with [Moonshine tiny][Moonshine tiny]|[Click me][wasm-hf-vad-asr-en-moonshine-tiny-en]| [地址][wasm-ms-vad-asr-en-moonshine-tiny-en]|
|VAD + speech recognition (English) with Zipformer trained with [GigaSpeech][GigaSpeech]    |[Click me][wasm-hf-vad-asr-en-zipformer-gigaspeech]| [地址][wasm-ms-vad-asr-en-zipformer-gigaspeech]|
|VAD + speech recognition (Chinese) with Zipformer trained with [WenetSpeech][WenetSpeech]  |[Click me][wasm-hf-vad-asr-zh-zipformer-wenetspeech]| [地址][wasm-ms-vad-asr-zh-zipformer-wenetspeech]|
|VAD + speech recognition (Japanese) with Zipformer trained with [ReazonSpeech][ReazonSpeech]|[Click me][wasm-hf-vad-asr-ja-zipformer-reazonspeech]| [地址][wasm-ms-vad-asr-ja-zipformer-reazonspeech]|
|VAD + speech recognition (Thai) with Zipformer trained with [GigaSpeech2][GigaSpeech2]      |[Click me][wasm-hf-vad-asr-th-zipformer-gigaspeech2]| [地址][wasm-ms-vad-asr-th-zipformer-gigaspeech2]|
|VAD + speech recognition (Chinese, multiple dialects) with a [TeleSpeech-ASR][TeleSpeech-ASR] CTC model|[Click me][wasm-hf-vad-asr-zh-telespeech]| [地址][wasm-ms-vad-asr-zh-telespeech]|
|VAD + speech recognition (English + Chinese, including multiple Chinese dialects) with Paraformer-large|[Click me][wasm-hf-vad-asr-zh-en-paraformer-large]| [地址][wasm-ms-vad-asr-zh-en-paraformer-large]|
|VAD + speech recognition (English + Chinese, including multiple Chinese dialects) with Paraformer-small|[Click me][wasm-hf-vad-asr-zh-en-paraformer-small]| [地址][wasm-ms-vad-asr-zh-en-paraformer-small]|
|Speech synthesis (English)                                                                  |[Click me][wasm-hf-tts-piper-en]| [地址][wasm-ms-tts-piper-en]|
|Speech synthesis (German)                                                                   |[Click me][wasm-hf-tts-piper-de]| [地址][wasm-ms-tts-piper-de]|
|Speaker diarization                                                                         |[Click me][wasm-hf-speaker-diarization]|[地址][wasm-ms-speaker-diarization]|

</details>

### Links for pre-built Android APKs

<details>

<summary>You can find pre-built Android APKs for this repository in the following table</summary>

| Description                            | URL                                | 中国用户                          |
|----------------------------------------|------------------------------------|-----------------------------------|
| Speaker diarization                    | [Address][apk-speaker-diarization] | [点此][apk-speaker-diarization-cn]|
| Streaming speech recognition           | [Address][apk-streaming-asr]       | [点此][apk-streaming-asr-cn]      |
| Text-to-speech                         | [Address][apk-tts]                 | [点此][apk-tts-cn]                |
| Voice activity detection (VAD)         | [Address][apk-vad]                 | [点此][apk-vad-cn]                |
| VAD + non-streaming speech recognition | [Address][apk-vad-asr]             | [点此][apk-vad-asr-cn]            |
| Two-pass speech recognition            | [Address][apk-2pass]               | [点此][apk-2pass-cn]              |
| Audio tagging                          | [Address][apk-at]                  | [点此][apk-at-cn]                 |
| Audio tagging (WearOS)                 | [Address][apk-at-wearos]           | [点此][apk-at-wearos-cn]          |
| Speaker identification                 | [Address][apk-sid]                 | [点此][apk-sid-cn]                |
| Spoken language identification         | [Address][apk-slid]                | [点此][apk-slid-cn]               |
| Keyword spotting                       | [Address][apk-kws]                 | [点此][apk-kws-cn]                |

</details>

### Links for pre-built Flutter APPs

<details>

#### Real-time speech recognition

| Description                    | URL                                 | 中国用户                            |
|--------------------------------|-------------------------------------|-------------------------------------|
| Streaming speech recognition   | [Address][apk-flutter-streaming-asr]| [点此][apk-flutter-streaming-asr-cn]|

#### Text-to-speech

| Description                              | URL                                | 中国用户                           |
|------------------------------------------|------------------------------------|------------------------------------|
| Android (arm64-v8a, armeabi-v7a, x86_64) | [Address][flutter-tts-android]     | [点此][flutter-tts-android-cn]     |
| Linux (x64)                              | [Address][flutter-tts-linux]       | [点此][flutter-tts-linux-cn]       |
| macOS (x64)                              | [Address][flutter-tts-macos-x64]   | [点此][flutter-tts-macos-arm64-cn] |
| macOS (arm64)                            | [Address][flutter-tts-macos-arm64] | [点此][flutter-tts-macos-x64-cn]   |
| Windows (x64)                            | [Address][flutter-tts-win-x64]     | [点此][flutter-tts-win-x64-cn]     |

> Note: You need to build from source for iOS.

</details>

### Links for pre-built Lazarus APPs

<details>

#### Generating subtitles

| Description                    | URL                        | 中国用户                   |
|--------------------------------|----------------------------|----------------------------|
| Generate subtitles             | [Address][lazarus-subtitle]| [点此][lazarus-subtitle-cn]|

</details>

### Links for pre-trained models

<details>

| Description                                 | URL                                                                                   |
|---------------------------------------------|---------------------------------------------------------------------------------------|
| Speech recognition (speech to text, ASR)    | [Address][asr-models]                                                                 |
| Text-to-speech (TTS)                        | [Address][tts-models]                                                                 |
| VAD                                         | [Address][vad-models]                                                                 |
| Keyword spotting                            | [Address][kws-models]                                                                 |
| Audio tagging                               | [Address][at-models]                                                                  |
| Speaker identification (Speaker ID)         | [Address][sid-models]                                                                 |
| Spoken language identification (Language ID)| See the multilingual [Whisper][Whisper] ASR models under [Speech recognition][asr-models]|
| Punctuation                                 | [Address][punct-models]                                                               |
| Speaker segmentation                        | [Address][speaker-segmentation-models]                                                |

</details>

#### Some pre-trained ASR models (Streaming)

<details>

Please see

  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-paraformer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-ctc/index.html>

for more models. The following table lists only **SOME** of them.


|Name | Supported Languages| Description|
|-----|-----|----|
|[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20][sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#csukuangfj-sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20-bilingual-chinese-english)|
|[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16][sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16-bilingual-chinese-english)|
|[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23][sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]|Chinese| Suitable for Cortex-A7 CPUs. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-zh-14m-2023-02-23)|
|[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17][sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]|English|Suitable for Cortex-A7 CPUs. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-en-20m-2023-02-17)|
|[sherpa-onnx-streaming-zipformer-korean-2024-06-16][sherpa-onnx-streaming-zipformer-korean-2024-06-16]|Korean| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-korean-2024-06-16-korean)|
|[sherpa-onnx-streaming-zipformer-fr-2023-04-14][sherpa-onnx-streaming-zipformer-fr-2023-04-14]|French| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#shaojieli-sherpa-onnx-streaming-zipformer-fr-2023-04-14-french)|
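The streaming models above consume audio incrementally, so application code usually slices a waveform (or microphone buffer) into small fixed-duration chunks before feeding them in. The helper below is a minimal sketch of that slicing; the sample rate and chunk duration are illustrative assumptions, and real recognizers buffer whatever chunk sizes you feed them:

```python
SAMPLE_RATE = 16000   # Hz; a common rate for the models listed above
CHUNK_SECONDS = 0.1   # feed roughly 100 ms of audio at a time

def chunk_waveform(samples, sample_rate=SAMPLE_RATE, chunk_seconds=CHUNK_SECONDS):
    """Split a list of samples into consecutive fixed-size chunks."""
    step = int(sample_rate * chunk_seconds)
    return [samples[i:i + step] for i in range(0, len(samples), step)]

one_second = [0.0] * SAMPLE_RATE
chunks = chunk_waveform(one_second)
print(len(chunks), len(chunks[0]))  # 10 chunks of 1600 samples each
```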

</details>


#### Some pre-trained ASR models (Non-Streaming)

<details>

Please see

  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/index.html>

for more models. The following table lists only **SOME** of them.

|Name | Supported Languages| Description|
|-----|-----|----|
|[Whisper tiny.en](https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-whisper-tiny.en.tar.bz2)|English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/tiny.en.html)|
|[Moonshine tiny][Moonshine tiny]|English|See [also](https://github.com/usefulsensors/moonshine)|
|[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17][sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]|Chinese, Cantonese, English, Korean, Japanese| Supports multiple Chinese dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/sense-voice/index.html)|
|[sherpa-onnx-paraformer-zh-2024-03-09][sherpa-onnx-paraformer-zh-2024-03-09]|Chinese, English| Also supports multiple Chinese dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/paraformer-models.html#csukuangfj-sherpa-onnx-paraformer-zh-2024-03-09-chinese-english)|
|[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01][sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]|Japanese|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01-japanese)|
|[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24][sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/nemo-transducer-models.html#sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24-russian)|
|[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24][sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]|Russian| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/nemo/russian.html#sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24)|
|[sherpa-onnx-zipformer-ru-2024-09-18][sherpa-onnx-zipformer-ru-2024-09-18]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ru-2024-09-18-russian)|
|[sherpa-onnx-zipformer-korean-2024-06-24][sherpa-onnx-zipformer-korean-2024-06-24]|Korean|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-korean-2024-06-24-korean)|
|[sherpa-onnx-zipformer-thai-2024-06-20][sherpa-onnx-zipformer-thai-2024-06-20]|Thai| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-thai-2024-06-20-thai)|
|[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04][sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]|Chinese| Supports multiple dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/models.html#sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04)|
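Each model above is distributed as a `.tar.bz2` archive attached to the `asr-models` GitHub release, as the links show. The sketch below builds such a download URL from a model name and unpacks the archive with the standard library; `model_url` and `fetch_and_extract` are hypothetical helper names, not part of sherpa-onnx, and the URL pattern is simply the one used by the links in this table:

```python
import tarfile
import urllib.request

RELEASE_BASE = "https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models"

def model_url(name):
    """Download URL for a pre-trained ASR model archive, per the links above."""
    return f"{RELEASE_BASE}/{name}.tar.bz2"

def fetch_and_extract(name, dest="."):
    # Downloads the archive and unpacks it; the extracted directory
    # is usually named after the model.
    path, _ = urllib.request.urlretrieve(model_url(name), f"{name}.tar.bz2")
    with tarfile.open(path, "r:bz2") as tar:
        tar.extractall(dest)

print(model_url("sherpa-onnx-zipformer-thai-2024-06-20"))
```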

</details>

### Useful links

- Documentation: https://k2-fsa.github.io/sherpa/onnx/
- Bilibili demo videos: https://search.bilibili.com/all?keyword=%E6%96%B0%E4%B8%80%E4%BB%A3Kaldi

### How to reach us

Please see
https://k2-fsa.github.io/sherpa/social-groups.html
for the 新一代 Kaldi (Next-gen Kaldi) **WeChat group** and **QQ group**.

## Projects using sherpa-onnx

### [voiceapi](https://github.com/ruzhila/voiceapi)

<details>
  <summary>Streaming ASR and TTS based on FastAPI</summary>


It shows how to use the ASR and TTS Python APIs with FastAPI.
</details>

### [腾讯会议摸鱼工具 TMSpeech](https://github.com/jxlpzqc/TMSpeech)

It uses streaming ASR in C# with a graphical user interface.

Video demo in Chinese: [【开源】Windows实时字幕软件(网课/开会必备)](https://www.bilibili.com/video/BV1rX4y1p7Nx)

### [lol互动助手](https://github.com/l1veIn/lol-wom-electron)

It uses the JavaScript API of sherpa-onnx along with [Electron](https://electronjs.org/).

Video demo in Chinese: [爆了!炫神教你开打字挂!真正影响胜率的英雄联盟工具!英雄联盟的最后一块拼图!和游戏中的每个人无障碍沟通!](https://www.bilibili.com/video/BV142tje9E74)


[sherpa-rs]: https://github.com/thewh1teagle/sherpa-rs
[silero-vad]: https://github.com/snakers4/silero-vad
[Raspberry Pi]: https://www.raspberrypi.com/
[RV1126]: https://www.rock-chips.com/uploads/pdf/2022.8.26/191/RV1126%20Brief%20Datasheet.pdf
[LicheePi4A]: https://sipeed.com/licheepi4a
[VisionFive 2]: https://www.starfivetech.com/en/site/boards
[旭日X3派]: https://developer.horizon.ai/api/v1/fileData/documents_pi/index.html
[爱芯派]: https://wiki.sipeed.com/hardware/zh/maixIII/ax-pi/axpi.html
[hf-space-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/speaker-diarization
[hf-space-asr]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition
[Whisper]: https://github.com/openai/whisper
[hf-space-asr-whisper]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition-with-whisper
[hf-space-tts]: https://huggingface.co/spaces/k2-fsa/text-to-speech
[hf-space-subtitle]: https://huggingface.co/spaces/k2-fsa/generate-subtitles-for-videos
[hf-space-audio-tagging]: https://huggingface.co/spaces/k2-fsa/audio-tagging
[hf-space-slid-whisper]: https://huggingface.co/spaces/k2-fsa/spoken-language-identification
[wasm-hf-vad]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-sherpa-onnx
[wasm-ms-vad]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-sherpa-onnx
[wasm-hf-streaming-asr-zh-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en
[wasm-ms-streaming-asr-zh-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en
[wasm-hf-streaming-asr-zh-en-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer
[wasm-ms-streaming-asr-zh-en-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer
[Paraformer-large]: https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary
[wasm-hf-streaming-asr-zh-en-yue-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer
[wasm-ms-streaming-asr-zh-en-yue-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer
[wasm-hf-streaming-asr-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-en
[wasm-ms-streaming-asr-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-en
[SenseVoice]: https://github.com/FunAudioLLM/SenseVoice
[wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-ja-ko-cantonese-sense-voice
[wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-zh-en-jp-ko-cantonese-sense-voice
[wasm-hf-vad-asr-en-whisper-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny
[wasm-ms-vad-asr-en-whisper-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny
[wasm-hf-vad-asr-en-moonshine-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny
[wasm-ms-vad-asr-en-moonshine-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny
[wasm-hf-vad-asr-en-zipformer-gigaspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech
[wasm-ms-vad-asr-en-zipformer-gigaspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech
[wasm-hf-vad-asr-zh-zipformer-wenetspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech
[wasm-ms-vad-asr-zh-zipformer-wenetspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech
[ReazonSpeech]: https://research.reazon.jp/_static/reazonspeech_nlp2023.pdf
[wasm-hf-vad-asr-ja-zipformer-reazonspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-ja-zipformer
[wasm-ms-vad-asr-ja-zipformer-reazonspeech]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-ja-zipformer
[GigaSpeech2]: https://github.com/SpeechColab/GigaSpeech2
[wasm-hf-vad-asr-th-zipformer-gigaspeech2]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-th-zipformer
[wasm-ms-vad-asr-th-zipformer-gigaspeech2]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-th-zipformer
[TeleSpeech-ASR]: https://github.com/Tele-AI/TeleSpeech-ASR
[wasm-hf-vad-asr-zh-telespeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech
[wasm-ms-vad-asr-zh-telespeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech
[wasm-hf-vad-asr-zh-en-paraformer-large]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer
[wasm-ms-vad-asr-zh-en-paraformer-large]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer
[wasm-hf-vad-asr-zh-en-paraformer-small]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small
[wasm-ms-vad-asr-zh-en-paraformer-small]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small
[wasm-hf-tts-piper-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-en
[wasm-ms-tts-piper-en]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-en
[wasm-hf-tts-piper-de]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-de
[wasm-ms-tts-piper-de]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-de
[wasm-hf-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/web-assembly-speaker-diarization-sherpa-onnx
[wasm-ms-speaker-diarization]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-speaker-diarization-sherpa-onnx
[apk-speaker-diarization]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk.html
[apk-speaker-diarization-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk-cn.html
[apk-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/android/apk.html
[apk-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-cn.html
[apk-tts]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
[apk-tts-cn]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine-cn.html
[apk-vad]: https://k2-fsa.github.io/sherpa/onnx/vad/apk.html
[apk-vad-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-cn.html
[apk-vad-asr]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr.html
[apk-vad-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr-cn.html
[apk-2pass]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass.html
[apk-2pass-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass-cn.html
[apk-at]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk.html
[apk-at-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-cn.html
[apk-at-wearos]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos.html
[apk-at-wearos-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos-cn.html
[apk-sid]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk.html
[apk-sid-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk-cn.html
[apk-slid]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk.html
[apk-slid-cn]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk-cn.html
[apk-kws]: https://k2-fsa.github.io/sherpa/onnx/kws/apk.html
[apk-kws-cn]: https://k2-fsa.github.io/sherpa/onnx/kws/apk-cn.html
[apk-flutter-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app.html
[apk-flutter-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app-cn.html
[flutter-tts-android]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android.html
[flutter-tts-android-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android-cn.html
[flutter-tts-linux]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux.html
[flutter-tts-linux-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux-cn.html
[flutter-tts-macos-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64.html
[flutter-tts-macos-arm64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64-cn.html
[flutter-tts-macos-arm64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64.html
[flutter-tts-macos-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64-cn.html
[flutter-tts-win-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win.html
[flutter-tts-win-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win-cn.html
[lazarus-subtitle]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles.html
[lazarus-subtitle-cn]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles-cn.html
[asr-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models
[tts-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/tts-models
[vad-models]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/silero_vad.onnx
[kws-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/kws-models
[at-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/audio-tagging-models
[sid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models
[slid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models
[punct-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/punctuation-models
[speaker-segmentation-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-segmentation-models
[GigaSpeech]: https://github.com/SpeechColab/GigaSpeech
[WenetSpeech]: https://github.com/wenet-e2e/WenetSpeech
[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20.tar.bz2
[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16.tar.bz2
[sherpa-onnx-streaming-zipformer-korean-2024-06-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-korean-2024-06-16.tar.bz2
[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23.tar.bz2
[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-en-20M-2023-02-17.tar.bz2
[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01.tar.bz2
[sherpa-onnx-zipformer-ru-2024-09-18]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ru-2024-09-18.tar.bz2
[sherpa-onnx-zipformer-korean-2024-06-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-korean-2024-06-24.tar.bz2
[sherpa-onnx-zipformer-thai-2024-06-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-thai-2024-06-20.tar.bz2
[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24.tar.bz2
[sherpa-onnx-paraformer-zh-2024-03-09]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-paraformer-zh-2024-03-09.tar.bz2
[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24.tar.bz2
[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04.tar.bz2
[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17.tar.bz2
[sherpa-onnx-streaming-zipformer-fr-2023-04-14]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-fr-2023-04-14.tar.bz2
[Moonshine tiny]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-moonshine-tiny-en-int8.tar.bz2


### Raw data
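The metadata below records an `md5` and a `sha256` digest for each uploaded wheel. A generic way to check a downloaded file against the recorded `sha256`, using only the standard library (the file name in the comment is the wheel listed below):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the hex sha256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Compare against the "sha256" value recorded in the metadata, e.g.:
# sha256_of("sherpa_onnx-1.10.34-cp310-cp310-macosx_11_0_universal2.whl") \
#     == "57d80b176773e8adbc5e2380139f5aec2186aea53af4278b9c093860a1aaa80e"
```

Reading in chunks keeps memory use constant, which matters for multi-megabyte wheels.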

{
    "_id": null,
    "home_page": "https://github.com/k2-fsa/sherpa-onnx",
    "name": "sherpa-onnx",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.6",
    "maintainer_email": null,
    "keywords": null,
    "author": "The sherpa-onnx development team",
    "author_email": "dpovey@gmail.com",
    "download_url": null,
    "platform": null,
    "bugtrack_url": null,
    "license": "Apache licensed, as found in the LICENSE file",
    "summary": null,
    "version": "1.10.34",
    "project_urls": {
        "Homepage": "https://github.com/k2-fsa/sherpa-onnx"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "bf1ac555bce710af866237ab96e8b1f4dc77f9133b477615ef8573d2ecf547a3",
                "md5": "aeed5a6169a930d48f4445a6f2fc4f6b",
                "sha256": "57d80b176773e8adbc5e2380139f5aec2186aea53af4278b9c093860a1aaa80e"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp310-cp310-macosx_11_0_universal2.whl",
            "has_sig": false,
            "md5_digest": "aeed5a6169a930d48f4445a6f2fc4f6b",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.6",
            "size": 34145507,
            "upload_time": "2024-12-10T12:35:54",
            "upload_time_iso_8601": "2024-12-10T12:35:54.071174Z",
            "url": "https://files.pythonhosted.org/packages/bf/1a/c555bce710af866237ab96e8b1f4dc77f9133b477615ef8573d2ecf547a3/sherpa_onnx-1.10.34-cp310-cp310-macosx_11_0_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "82a24c91deb662d61b06dc355607887b8c5b6649eaf8c6f90d495aeb12318aeb",
                "md5": "e68e1c4f6c5665f6e5f8bbd49d339af2",
                "sha256": "cf00f74abe0f3843e8e13aadd0407f96a651baa203f4f64a56260d6df4feaa09"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp310-cp310-macosx_11_0_x86_64.whl",
            "has_sig": false,
            "md5_digest": "e68e1c4f6c5665f6e5f8bbd49d339af2",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.6",
            "size": 18524488,
            "upload_time": "2024-12-10T12:48:10",
            "upload_time_iso_8601": "2024-12-10T12:48:10.384228Z",
            "url": "https://files.pythonhosted.org/packages/82/a2/4c91deb662d61b06dc355607887b8c5b6649eaf8c6f90d495aeb12318aeb/sherpa_onnx-1.10.34-cp310-cp310-macosx_11_0_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "0003286378a988830141b9d85a4ae4498b294ee6cc2bf3924657944bbd3df039",
                "md5": "addfbbd8e0b3a12b303b837e7f8655b6",
                "sha256": "f4392a31478cef9808f3ef4eca2e0f91fb0a1b8a695ffb1ee1a37b085a51a5fe"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "addfbbd8e0b3a12b303b837e7f8655b6",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.6",
            "size": 21362280,
            "upload_time": "2024-12-10T13:13:46",
            "upload_time_iso_8601": "2024-12-10T13:13:46.568172Z",
            "url": "https://files.pythonhosted.org/packages/00/03/286378a988830141b9d85a4ae4498b294ee6cc2bf3924657944bbd3df039/sherpa_onnx-1.10.34-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "411c984d6940a9e0c48f9dda64bfbedddcb938fe9916d23fbf671ab46c75ddb4",
                "md5": "42017a86d76c80c9d539a2f1b9bf0eaa",
                "sha256": "0b120db4e4529d2a64397fe09d41f6f89a29223f983c7997e8a83f8620b000af"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp310-cp310-win32.whl",
            "has_sig": false,
            "md5_digest": "42017a86d76c80c9d539a2f1b9bf0eaa",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.6",
            "size": 18859931,
            "upload_time": "2024-12-10T12:42:08",
            "upload_time_iso_8601": "2024-12-10T12:42:08.821737Z",
            "url": "https://files.pythonhosted.org/packages/41/1c/984d6940a9e0c48f9dda64bfbedddcb938fe9916d23fbf671ab46c75ddb4/sherpa_onnx-1.10.34-cp310-cp310-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "df1ea589f2590a01a6fcf3dd76bb609e0b609c17eadfb0094b0699d9277c4741",
                "md5": "4eaeebf3f941a3d84ff64a2cc1a8442a",
                "sha256": "00ccfe814caf1dc161433d4619e940012d553dbe9122c76a9278afa473ec2575"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp311-cp311-macosx_11_0_universal2.whl",
            "has_sig": false,
            "md5_digest": "4eaeebf3f941a3d84ff64a2cc1a8442a",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.6",
            "size": 34149115,
            "upload_time": "2024-12-10T12:36:19",
            "upload_time_iso_8601": "2024-12-10T12:36:19.968999Z",
            "url": "https://files.pythonhosted.org/packages/df/1e/a589f2590a01a6fcf3dd76bb609e0b609c17eadfb0094b0699d9277c4741/sherpa_onnx-1.10.34-cp311-cp311-macosx_11_0_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "677a322baccd95380235e11da85168b036febd68cec22fc48d53c4380c28a46a",
                "md5": "82497e325fdb4cf4b93e1e6cf6192411",
                "sha256": "d752bf3a3198172935b36f1c13826fa009e3e71f92545b3995ac63464e28f4f7"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp311-cp311-macosx_11_0_x86_64.whl",
            "has_sig": false,
            "md5_digest": "82497e325fdb4cf4b93e1e6cf6192411",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.6",
            "size": 18525972,
            "upload_time": "2024-12-10T13:12:46",
            "upload_time_iso_8601": "2024-12-10T13:12:46.499793Z",
            "url": "https://files.pythonhosted.org/packages/67/7a/322baccd95380235e11da85168b036febd68cec22fc48d53c4380c28a46a/sherpa_onnx-1.10.34-cp311-cp311-macosx_11_0_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "652fa75506d436b2f29cdf26392e03b8a84e6bb45623f17278ad0d1e707396e4",
                "md5": "1270e03860b42a392f9457aaf5462ebc",
                "sha256": "9d458f8fb78c3b607cdc9d527e3569816322aca406f73239cbba70a264d84e69"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "1270e03860b42a392f9457aaf5462ebc",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.6",
            "size": 21363233,
            "upload_time": "2024-12-10T13:09:09",
            "upload_time_iso_8601": "2024-12-10T13:09:09.365845Z",
            "url": "https://files.pythonhosted.org/packages/65/2f/a75506d436b2f29cdf26392e03b8a84e6bb45623f17278ad0d1e707396e4/sherpa_onnx-1.10.34-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "89a6da4830fc00aec1281e1d6b3258d178624fc73a3948f5bcb774c060aac2d2",
                "md5": "fdce9549ecb0bb81ce42950fd5d21c3e",
                "sha256": "86ee860a6c4903fb9f6d2f250c21c6796ac812635e964800fafe845a793ae926"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp311-cp311-win32.whl",
            "has_sig": false,
            "md5_digest": "fdce9549ecb0bb81ce42950fd5d21c3e",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.6",
            "size": 18860894,
            "upload_time": "2024-12-10T12:41:05",
            "upload_time_iso_8601": "2024-12-10T12:41:05.637590Z",
            "url": "https://files.pythonhosted.org/packages/89/a6/da4830fc00aec1281e1d6b3258d178624fc73a3948f5bcb774c060aac2d2/sherpa_onnx-1.10.34-cp311-cp311-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "886a1ad895d5b4916fdc6b2ccb8613332d5166eeb25ab01e152b114513d7a295",
                "md5": "a631152593dff309a98450466d2dcc7b",
                "sha256": "316461ff26f679304ba9ddc0e9ab9b3ba9ea9649a3601f68da3b93f365fd2034"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp312-cp312-macosx_11_0_universal2.whl",
            "has_sig": false,
            "md5_digest": "a631152593dff309a98450466d2dcc7b",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.6",
            "size": 34163632,
            "upload_time": "2024-12-10T12:36:07",
            "upload_time_iso_8601": "2024-12-10T12:36:07.410396Z",
            "url": "https://files.pythonhosted.org/packages/88/6a/1ad895d5b4916fdc6b2ccb8613332d5166eeb25ab01e152b114513d7a295/sherpa_onnx-1.10.34-cp312-cp312-macosx_11_0_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3a7c0ae6408be8cac7d0ee0c921f8c4117289b9fd76055aebcce5b318602c3ce",
                "md5": "fbf55356e94887b13db48281dde65185",
                "sha256": "fb61a3736bd96650bfeefba3b9d38dd41ef79b57b23460786839682fec56616c"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp312-cp312-macosx_11_0_x86_64.whl",
            "has_sig": false,
            "md5_digest": "fbf55356e94887b13db48281dde65185",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.6",
            "size": 18536705,
            "upload_time": "2024-12-10T13:05:18",
            "upload_time_iso_8601": "2024-12-10T13:05:18.741115Z",
            "url": "https://files.pythonhosted.org/packages/3a/7c/0ae6408be8cac7d0ee0c921f8c4117289b9fd76055aebcce5b318602c3ce/sherpa_onnx-1.10.34-cp312-cp312-macosx_11_0_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "709903c88f2d50bf1f03fc6c32b3ee7f0c664c7462c8edf063a4ab4fe501a966",
                "md5": "3045aca0864bdb93895a85bc6535551b",
                "sha256": "d4b09764266c61d445c2e411e0284c94f83c03b27aa0a9a3772cebea9d0a2f44"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "has_sig": false,
            "md5_digest": "3045aca0864bdb93895a85bc6535551b",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.6",
            "size": 21363456,
            "upload_time": "2024-12-10T13:15:13",
            "upload_time_iso_8601": "2024-12-10T13:15:13.657312Z",
            "url": "https://files.pythonhosted.org/packages/70/99/03c88f2d50bf1f03fc6c32b3ee7f0c664c7462c8edf063a4ab4fe501a966/sherpa_onnx-1.10.34-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "52e296a228b8cecae4a9b0179b3328eebcb21bc9a8240b1be3074a066f8b9aaf",
                "md5": "caec8dc9259715e564919d8011126781",
                "sha256": "2f343a058bee6b3e1222ba0cbf9b914e92d6d4aba00c38488e5e1605d350ea4a"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp312-cp312-win32.whl",
            "has_sig": false,
            "md5_digest": "caec8dc9259715e564919d8011126781",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.6",
            "size": 18861548,
            "upload_time": "2024-12-10T12:40:50",
            "upload_time_iso_8601": "2024-12-10T12:40:50.180544Z",
            "url": "https://files.pythonhosted.org/packages/52/e2/96a228b8cecae4a9b0179b3328eebcb21bc9a8240b1be3074a066f8b9aaf/sherpa_onnx-1.10.34-cp312-cp312-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "dd15fcb876b28901d32d108d234f2f42b13408f93cf4dcb21213ff8863488a12",
                "md5": "5f439d24d4396e294e1d4d9d4b6aafcc",
                "sha256": "bd4d2777eb97a3319d7c4c2b4962d44f158a8f24f09cc88295a7c2d74a592dc8"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp313-cp313-macosx_11_0_universal2.whl",
            "has_sig": false,
            "md5_digest": "5f439d24d4396e294e1d4d9d4b6aafcc",
            "packagetype": "bdist_wheel",
            "python_version": "cp313",
            "requires_python": ">=3.6",
            "size": 34163588,
            "upload_time": "2024-12-10T12:34:51",
            "upload_time_iso_8601": "2024-12-10T12:34:51.874363Z",
            "url": "https://files.pythonhosted.org/packages/dd/15/fcb876b28901d32d108d234f2f42b13408f93cf4dcb21213ff8863488a12/sherpa_onnx-1.10.34-cp313-cp313-macosx_11_0_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "50da6b0982719a1d7012d3e50cbe7d34aab64f3679e2f76449419bafdb697b7a",
                "md5": "142d9022850b067bdc309fe8ab9bd275",
                "sha256": "c0fe034d0477e0e398911f66b59296089a19f6c2ff33fe29b34d7636602ff3a6"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp313-cp313-macosx_11_0_x86_64.whl",
            "has_sig": false,
            "md5_digest": "142d9022850b067bdc309fe8ab9bd275",
            "packagetype": "bdist_wheel",
            "python_version": "cp313",
            "requires_python": ">=3.6",
            "size": 18536685,
            "upload_time": "2024-12-10T13:13:49",
            "upload_time_iso_8601": "2024-12-10T13:13:49.365353Z",
            "url": "https://files.pythonhosted.org/packages/50/da/6b0982719a1d7012d3e50cbe7d34aab64f3679e2f76449419bafdb697b7a/sherpa_onnx-1.10.34-cp313-cp313-macosx_11_0_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "e12d1c3a762d8d0fc1026a198c658a72334ce9f35c50a5ab037c26aa7a397330",
                "md5": "ec654f6332386f205d08e78f6774eb78",
                "sha256": "25cf5266d237c1abef98ed45a7a89097f89ea56ccf89503f3f2d2f2137d186a8"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp313-cp313-win32.whl",
            "has_sig": false,
            "md5_digest": "ec654f6332386f205d08e78f6774eb78",
            "packagetype": "bdist_wheel",
            "python_version": "cp313",
            "requires_python": ">=3.6",
            "size": 18862051,
            "upload_time": "2024-12-10T12:41:26",
            "upload_time_iso_8601": "2024-12-10T12:41:26.391890Z",
            "url": "https://files.pythonhosted.org/packages/e1/2d/1c3a762d8d0fc1026a198c658a72334ce9f35c50a5ab037c26aa7a397330/sherpa_onnx-1.10.34-cp313-cp313-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "97c09df037679005bf5a6761abd357327a7c4a2dccefd2be845f0893200c58e8",
                "md5": "d7487bff61a176025661daf1778fee50",
                "sha256": "e8589e47a6d8806231eadcdd45e7192a6b80785eeb37298bbf4ba5a48994f0cc"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp37-cp37m-macosx_11_0_x86_64.whl",
            "has_sig": false,
            "md5_digest": "d7487bff61a176025661daf1778fee50",
            "packagetype": "bdist_wheel",
            "python_version": "cp37",
            "requires_python": ">=3.6",
            "size": 18516526,
            "upload_time": "2024-12-10T13:14:48",
            "upload_time_iso_8601": "2024-12-10T13:14:48.305755Z",
            "url": "https://files.pythonhosted.org/packages/97/c0/9df037679005bf5a6761abd357327a7c4a2dccefd2be845f0893200c58e8/sherpa_onnx-1.10.34-cp37-cp37m-macosx_11_0_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "a432db35a6f5ef0f1ec49becb3a51653b07ccb54258f91489c1cd24f64b657d9",
                "md5": "ef0e98a211ef2897e5c955f3daa52459",
                "sha256": "3a5a5a7b8514977f6dd5205e8690ebd1ceb9481d1f00af1f1922295ac0905bfd"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp37-cp37m-win32.whl",
            "has_sig": false,
            "md5_digest": "ef0e98a211ef2897e5c955f3daa52459",
            "packagetype": "bdist_wheel",
            "python_version": "cp37",
            "requires_python": ">=3.6",
            "size": 18862587,
            "upload_time": "2024-12-10T12:40:16",
            "upload_time_iso_8601": "2024-12-10T12:40:16.394962Z",
            "url": "https://files.pythonhosted.org/packages/a4/32/db35a6f5ef0f1ec49becb3a51653b07ccb54258f91489c1cd24f64b657d9/sherpa_onnx-1.10.34-cp37-cp37m-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "48535bd7c51c1aa8bc7b09d849b8907f284b8eec0716c8a306530204c2685c07",
                "md5": "f51669bb9311be1c69a5b7cc73714b33",
                "sha256": "478cec43fb2d3eae0f4b477c530fd496d9070d68a2c45bbbedf3b0aa63018814"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp38-cp38-macosx_11_0_universal2.whl",
            "has_sig": false,
            "md5_digest": "f51669bb9311be1c69a5b7cc73714b33",
            "packagetype": "bdist_wheel",
            "python_version": "cp38",
            "requires_python": ">=3.6",
            "size": 34145055,
            "upload_time": "2024-12-10T12:49:57",
            "upload_time_iso_8601": "2024-12-10T12:49:57.105507Z",
            "url": "https://files.pythonhosted.org/packages/48/53/5bd7c51c1aa8bc7b09d849b8907f284b8eec0716c8a306530204c2685c07/sherpa_onnx-1.10.34-cp38-cp38-macosx_11_0_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "d5f5b39734a47a5c9ab89dbfde400e74df087588d4efb6aae7451d35e12df675",
                "md5": "02ebb074a9c3244320e00bd715c02ae5",
                "sha256": "1924467bac143cde7bb971e60918413241f8eafd46491a799a05f55ffd851b5f"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp38-cp38-win32.whl",
            "has_sig": false,
            "md5_digest": "02ebb074a9c3244320e00bd715c02ae5",
            "packagetype": "bdist_wheel",
            "python_version": "cp38",
            "requires_python": ">=3.6",
            "size": 18860476,
            "upload_time": "2024-12-10T12:40:34",
            "upload_time_iso_8601": "2024-12-10T12:40:34.558201Z",
            "url": "https://files.pythonhosted.org/packages/d5/f5/b39734a47a5c9ab89dbfde400e74df087588d4efb6aae7451d35e12df675/sherpa_onnx-1.10.34-cp38-cp38-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "239e8dcd185ad1805d898b9aca4b2c439f958f788ad17c87ea998c2795768583",
                "md5": "5c585c0d6336ad506b002d3ffd8c6c23",
                "sha256": "de5e6680e4b61f6651c817f7740b732f976ff0841372b62b2706447bba556862"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp39-cp39-macosx_11_0_universal2.whl",
            "has_sig": false,
            "md5_digest": "5c585c0d6336ad506b002d3ffd8c6c23",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.6",
            "size": 34145662,
            "upload_time": "2024-12-10T12:43:36",
            "upload_time_iso_8601": "2024-12-10T12:43:36.963346Z",
            "url": "https://files.pythonhosted.org/packages/23/9e/8dcd185ad1805d898b9aca4b2c439f958f788ad17c87ea998c2795768583/sherpa_onnx-1.10.34-cp39-cp39-macosx_11_0_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3dd20d0391189ebc7e59dadc41eb7fe81a337219a0b05f2bedd5ce40afcab3e3",
                "md5": "c85303dc2b8d7acce42a97dcc85ec41d",
                "sha256": "ca8f81e088fe016b54b843d7dedaad576f6c7442e5315883cd51e3a4a005db60"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.10.34-cp39-cp39-win32.whl",
            "has_sig": false,
            "md5_digest": "c85303dc2b8d7acce42a97dcc85ec41d",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.6",
            "size": 18861760,
            "upload_time": "2024-12-10T12:40:22",
            "upload_time_iso_8601": "2024-12-10T12:40:22.018192Z",
            "url": "https://files.pythonhosted.org/packages/3d/d2/0d0391189ebc7e59dadc41eb7fe81a337219a0b05f2bedd5ce40afcab3e3/sherpa_onnx-1.10.34-cp39-cp39-win32.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-12-10 12:35:54",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "k2-fsa",
    "github_project": "sherpa-onnx",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "sherpa-onnx"
}
        
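Each wheel entry above records `sha256` and `md5` digests, which can be used to check that a downloaded file is the exact artifact PyPI indexed before installing it. A minimal sketch, assuming a wheel has already been downloaded locally (the file path is hypothetical; the expected hash is the recorded `sha256` of the cp310 macOS universal2 wheel from the metadata above):

```python
import hashlib

# sha256 recorded in the "digests" entry for
# sherpa_onnx-1.10.34-cp310-cp310-macosx_11_0_universal2.whl
EXPECTED = "57d80b176773e8adbc5e2380139f5aec2186aea53af4278b9c093860a1aaa80e"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so a ~34 MB wheel never has to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# After downloading the wheel (path is illustrative):
# if sha256_of("sherpa_onnx-1.10.34-cp310-cp310-macosx_11_0_universal2.whl") != EXPECTED:
#     raise SystemExit("digest mismatch: refusing to install")
```

`pip install sherpa-onnx==1.10.34` performs an equivalent hash check automatically against the index metadata; manual verification like this is mainly useful when wheels are mirrored or fetched out of band.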