### Supported functions
|Speech recognition| [Speech synthesis][tts-url] | [Source separation][ss-url] |
|------------------|------------------|-------------------|
|   ✔️              |         ✔️        |       ✔️           |

|Speaker identification| [Speaker diarization][sd-url] | Speaker verification |
|----------------------|-------------------- |------------------------|
|   ✔️                  |         ✔️           |            ✔️           |

| [Spoken language identification][slid-url] | [Audio tagging][at-url] | [Voice activity detection][vad-url] |
|--------------------------------|---------------|--------------------------|
|                 ✔️              |          ✔️    |                ✔️         |

| [Keyword spotting][kws-url] | [Add punctuation][punct-url] | [Speech enhancement][se-url] |
|------------------|-----------------|--------------------|
|     ✔️            |       ✔️         |      ✔️             |
### Supported platforms
|Architecture| Android | iOS     | Windows    | macOS | Linux | HarmonyOS |
|------------|---------|---------|------------|-------|-------|-----------|
|   x64      |  ✔️      |         |   ✔️      | ✔️    |  ✔️    |   ✔️   |
|   x86      |  ✔️      |         |   ✔️      |       |        |        |
|   arm64    |  ✔️      | ✔️      |   ✔️      | ✔️    |  ✔️    |   ✔️   |
|   arm32    |  ✔️      |         |           |       |  ✔️    |   ✔️   |
|   riscv64  |          |         |           |       |  ✔️    |        |
### Supported programming languages
| 1. C++ | 2. C  | 3. Python | 4. JavaScript |
|--------|-------|-----------|---------------|
|   ✔️    | ✔️     | ✔️         |    ✔️          |

|5. Java | 6. C# | 7. Kotlin | 8. Swift |
|--------|-------|-----------|----------|
| ✔️      |  ✔️    | ✔️         |  ✔️       |

| 9. Go | 10. Dart | 11. Rust | 12. Pascal |
|-------|----------|----------|------------|
| ✔️     |  ✔️       |   ✔️      |    ✔️       |
For Rust support, please see [sherpa-rs][sherpa-rs].
It also supports WebAssembly.
[Join our Discord](https://discord.gg/fJdxzg2VbG)
## Introduction
This repository supports running the following functions **locally**
  - Speech-to-text (i.e., ASR); both streaming and non-streaming are supported
  - Text-to-speech (i.e., TTS)
  - Speaker diarization
  - Speaker identification
  - Speaker verification
  - Spoken language identification
  - Audio tagging
  - VAD (e.g., [silero-vad][silero-vad])
  - Speech enhancement (e.g., [gtcrn][gtcrn])
  - Keyword spotting
  - Source separation (e.g., [spleeter][spleeter], [UVR][UVR])
on the following platforms and operating systems:
  - x86, ``x86_64``, 32-bit ARM, 64-bit ARM (arm64, aarch64), RISC-V (riscv64), **RK NPU**, **Ascend NPU**
  - Linux, macOS, Windows, openKylin
  - Android, WearOS
  - iOS
  - HarmonyOS
  - Node.js
  - WebAssembly
  - [NVIDIA Jetson Orin NX][NVIDIA Jetson Orin NX] (supports running on both CPU and GPU)
  - [NVIDIA Jetson Nano B01][NVIDIA Jetson Nano B01] (supports running on both CPU and GPU)
  - [Raspberry Pi][Raspberry Pi]
  - [RV1126][RV1126]
  - [LicheePi4A][LicheePi4A]
  - [VisionFive 2][VisionFive 2]
  - [旭日X3派][旭日X3派]
  - [爱芯派][爱芯派]
  - [RK3588][RK3588]
  - etc.
with the following APIs:
  - C++, C, Python, Go, ``C#``
  - Java, Kotlin, JavaScript
  - Swift, Rust
  - Dart, Object Pascal
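Whichever language binding you use, the recognizers consume raw audio as a sequence of floating-point samples. As a minimal sketch of that common preprocessing step (pure Python standard library; `read_wav_as_floats` is an illustrative helper, not part of the sherpa-onnx API), here is how a 16-bit mono PCM WAV file can be turned into normalized samples in `[-1, 1]`:

```python
import struct
import wave


def read_wav_as_floats(path):
    """Read a 16-bit mono PCM WAV file and return (sample_rate, samples in [-1, 1])."""
    with wave.open(path, "rb") as f:
        if f.getsampwidth() != 2 or f.getnchannels() != 1:
            raise ValueError("expected 16-bit mono PCM")
        sample_rate = f.getframerate()
        raw = f.readframes(f.getnframes())
    # Unpack little-endian signed 16-bit integers and scale to [-1, 1]
    ints = struct.unpack("<%dh" % (len(raw) // 2), raw)
    return sample_rate, [s / 32768.0 for s in ints]
```

The resulting `(sample_rate, samples)` pair is the kind of input the recognizer streams expect; see the Python examples in this repository for the exact API calls, which differ per task.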
### Links for Huggingface Spaces
<details>
<summary>You can visit the following Huggingface spaces to try sherpa-onnx without
installing anything. All you need is a browser.</summary>
| Description                                           | URL                                     | 中国镜像                               |
|-------------------------------------------------------|-----------------------------------------|----------------------------------------|
| Speaker diarization                                   | [Click me][hf-space-speaker-diarization]| [镜像][hf-space-speaker-diarization-cn]|
| Speech recognition                                    | [Click me][hf-space-asr]                | [镜像][hf-space-asr-cn]                |
| Speech recognition with [Whisper][Whisper]            | [Click me][hf-space-asr-whisper]        | [镜像][hf-space-asr-whisper-cn]        |
| Speech synthesis                                      | [Click me][hf-space-tts]                | [镜像][hf-space-tts-cn]                |
| Generate subtitles                                    | [Click me][hf-space-subtitle]           | [镜像][hf-space-subtitle-cn]           |
| Audio tagging                                         | [Click me][hf-space-audio-tagging]      | [镜像][hf-space-audio-tagging-cn]      |
| Source separation                                     | [Click me][hf-space-source-separation]  | [镜像][hf-space-source-separation-cn]  |
| Spoken language identification with [Whisper][Whisper]| [Click me][hf-space-slid-whisper]       | [镜像][hf-space-slid-whisper-cn]       |
We also have spaces built using WebAssembly. They are listed below:
| Description                                                                              | Huggingface space| ModelScope space|
|------------------------------------------------------------------------------------------|------------------|-----------------|
|Voice activity detection with [silero-vad][silero-vad]                                    | [Click me][wasm-hf-vad]|[地址][wasm-ms-vad]|
|Real-time speech recognition (Chinese + English) with Zipformer                           | [Click me][wasm-hf-streaming-asr-zh-en-zipformer]|[地址][wasm-hf-streaming-asr-zh-en-zipformer]|
|Real-time speech recognition (Chinese + English) with Paraformer                          |[Click me][wasm-hf-streaming-asr-zh-en-paraformer]| [地址][wasm-ms-streaming-asr-zh-en-paraformer]|
|Real-time speech recognition (Chinese + English + Cantonese) with [Paraformer-large][Paraformer-large]|[Click me][wasm-hf-streaming-asr-zh-en-yue-paraformer]| [地址][wasm-ms-streaming-asr-zh-en-yue-paraformer]|
|Real-time speech recognition (English) |[Click me][wasm-hf-streaming-asr-en-zipformer]    |[地址][wasm-ms-streaming-asr-en-zipformer]|
|VAD + speech recognition (Chinese) with [Zipformer CTC](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/icefall/zipformer.html#sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03-chinese)|[Click me][wasm-hf-vad-asr-zh-zipformer-ctc-07-03]| [地址][wasm-ms-vad-asr-zh-zipformer-ctc-07-03]|
|VAD + speech recognition (Chinese + English + Korean + Japanese + Cantonese) with [SenseVoice][SenseVoice]|[Click me][wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]| [地址][wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]|
|VAD + speech recognition (English) with [Whisper][Whisper] tiny.en|[Click me][wasm-hf-vad-asr-en-whisper-tiny-en]| [地址][wasm-ms-vad-asr-en-whisper-tiny-en]|
|VAD + speech recognition (English) with [Moonshine tiny][Moonshine tiny]|[Click me][wasm-hf-vad-asr-en-moonshine-tiny-en]| [地址][wasm-ms-vad-asr-en-moonshine-tiny-en]|
|VAD + speech recognition (English) with Zipformer trained with [GigaSpeech][GigaSpeech]    |[Click me][wasm-hf-vad-asr-en-zipformer-gigaspeech]| [地址][wasm-ms-vad-asr-en-zipformer-gigaspeech]|
|VAD + speech recognition (Chinese) with Zipformer trained with [WenetSpeech][WenetSpeech]  |[Click me][wasm-hf-vad-asr-zh-zipformer-wenetspeech]| [地址][wasm-ms-vad-asr-zh-zipformer-wenetspeech]|
|VAD + speech recognition (Japanese) with Zipformer trained with [ReazonSpeech][ReazonSpeech]|[Click me][wasm-hf-vad-asr-ja-zipformer-reazonspeech]| [地址][wasm-ms-vad-asr-ja-zipformer-reazonspeech]|
|VAD + speech recognition (Thai) with Zipformer trained with [GigaSpeech2][GigaSpeech2]      |[Click me][wasm-hf-vad-asr-th-zipformer-gigaspeech2]| [地址][wasm-ms-vad-asr-th-zipformer-gigaspeech2]|
|VAD + speech recognition (Chinese, various dialects) with a [TeleSpeech-ASR][TeleSpeech-ASR] CTC model|[Click me][wasm-hf-vad-asr-zh-telespeech]| [地址][wasm-ms-vad-asr-zh-telespeech]|
|VAD + speech recognition (English + Chinese, plus various Chinese dialects) with Paraformer-large          |[Click me][wasm-hf-vad-asr-zh-en-paraformer-large]| [地址][wasm-ms-vad-asr-zh-en-paraformer-large]|
|VAD + speech recognition (English + Chinese, plus various Chinese dialects) with Paraformer-small          |[Click me][wasm-hf-vad-asr-zh-en-paraformer-small]| [地址][wasm-ms-vad-asr-zh-en-paraformer-small]|
|VAD + speech recognition (multilingual, plus various Chinese dialects) with [Dolphin][Dolphin]-base          |[Click me][wasm-hf-vad-asr-multi-lang-dolphin-base]| [地址][wasm-ms-vad-asr-multi-lang-dolphin-base]|
|Speech synthesis (English)                                                                  |[Click me][wasm-hf-tts-piper-en]| [地址][wasm-ms-tts-piper-en]|
|Speech synthesis (German)                                                                   |[Click me][wasm-hf-tts-piper-de]| [地址][wasm-ms-tts-piper-de]|
|Speaker diarization                                                                         |[Click me][wasm-hf-speaker-diarization]|[地址][wasm-ms-speaker-diarization]|
</details>
### Links for pre-built Android APKs
<details>
<summary>You can find pre-built Android APKs for this repository in the following table</summary>
| Description                            | URL                                | 中国用户                          |
|----------------------------------------|------------------------------------|-----------------------------------|
| Speaker diarization                    | [Address][apk-speaker-diarization] | [点此][apk-speaker-diarization-cn]|
| Streaming speech recognition           | [Address][apk-streaming-asr]       | [点此][apk-streaming-asr-cn]      |
| Simulated-streaming speech recognition | [Address][apk-simula-streaming-asr]| [点此][apk-simula-streaming-asr-cn]|
| Text-to-speech                         | [Address][apk-tts]                 | [点此][apk-tts-cn]                |
| Voice activity detection (VAD)         | [Address][apk-vad]                 | [点此][apk-vad-cn]                |
| VAD + non-streaming speech recognition | [Address][apk-vad-asr]             | [点此][apk-vad-asr-cn]            |
| Two-pass speech recognition            | [Address][apk-2pass]               | [点此][apk-2pass-cn]              |
| Audio tagging                          | [Address][apk-at]                  | [点此][apk-at-cn]                 |
| Audio tagging (WearOS)                 | [Address][apk-at-wearos]           | [点此][apk-at-wearos-cn]          |
| Speaker identification                 | [Address][apk-sid]                 | [点此][apk-sid-cn]                |
| Spoken language identification         | [Address][apk-slid]                | [点此][apk-slid-cn]               |
| Keyword spotting                       | [Address][apk-kws]                 | [点此][apk-kws-cn]                |
</details>
### Links for pre-built Flutter APPs
<details>
#### Real-time speech recognition
| Description                    | URL                                 | 中国用户                            |
|--------------------------------|-------------------------------------|-------------------------------------|
| Streaming speech recognition   | [Address][apk-flutter-streaming-asr]| [点此][apk-flutter-streaming-asr-cn]|
#### Text-to-speech
| Description                              | URL                                | 中国用户                           |
|------------------------------------------|------------------------------------|------------------------------------|
| Android (arm64-v8a, armeabi-v7a, x86_64) | [Address][flutter-tts-android]     | [点此][flutter-tts-android-cn]     |
| Linux (x64)                              | [Address][flutter-tts-linux]       | [点此][flutter-tts-linux-cn]       |
| macOS (x64)                              | [Address][flutter-tts-macos-x64]   | [点此][flutter-tts-macos-x64-cn]   |
| macOS (arm64)                            | [Address][flutter-tts-macos-arm64] | [点此][flutter-tts-macos-arm64-cn] |
| Windows (x64)                            | [Address][flutter-tts-win-x64]     | [点此][flutter-tts-win-x64-cn]     |
> Note: You need to build from source for iOS.
</details>
### Links for pre-built Lazarus APPs
<details>
#### Generating subtitles
| Description                    | URL                        | 中国用户                   |
|--------------------------------|----------------------------|----------------------------|
| Generate subtitles             | [Address][lazarus-subtitle]| [点此][lazarus-subtitle-cn]|
</details>
### Links for pre-trained models
<details>
| Description                                 | URL                                                                                   |
|---------------------------------------------|---------------------------------------------------------------------------------------|
| Speech recognition (speech to text, ASR)    | [Address][asr-models]                                                                 |
| Text-to-speech (TTS)                        | [Address][tts-models]                                                                 |
| VAD                                         | [Address][vad-models]                                                                 |
| Keyword spotting                            | [Address][kws-models]                                                                 |
| Audio tagging                               | [Address][at-models]                                                                  |
| Speaker identification (Speaker ID)         | [Address][sid-models]                                                                 |
| Spoken language identification (Language ID)| See multilingual [Whisper][Whisper] ASR models from [Speech recognition][asr-models]|
| Punctuation                                 | [Address][punct-models]                                                               |
| Speaker segmentation                        | [Address][speaker-segmentation-models]                                                |
| Speech enhancement                          | [Address][speech-enhancement-models]                                                  |
| Source separation                           | [Address][source-separation-models]                                                  |
</details>
#### Some pre-trained ASR models (Streaming)
<details>
Please see
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-paraformer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-ctc/index.html>
for more models. The following table lists only **SOME** of them.
|Name | Supported Languages| Description|
|-----|-----|----|
|[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20][sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#csukuangfj-sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20-bilingual-chinese-english)|
|[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16][sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16-bilingual-chinese-english)|
|[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23][sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]|Chinese| Suitable for Cortex A7 CPU. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-zh-14m-2023-02-23)|
|[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17][sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]|English|Suitable for Cortex A7 CPU. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-en-20m-2023-02-17)|
|[sherpa-onnx-streaming-zipformer-korean-2024-06-16][sherpa-onnx-streaming-zipformer-korean-2024-06-16]|Korean| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-korean-2024-06-16-korean)|
|[sherpa-onnx-streaming-zipformer-fr-2023-04-14][sherpa-onnx-streaming-zipformer-fr-2023-04-14]|French| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#shaojieli-sherpa-onnx-streaming-zipformer-fr-2023-04-14-french)|
</details>
#### Some pre-trained ASR models (Non-Streaming)
<details>
Please see
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/index.html>
for more models. The following table lists only **SOME** of them.
|Name | Supported Languages| Description|
|-----|-----|----|
|[sherpa-onnx-nemo-parakeet-tdt-0.6b-v2-int8](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/nemo-transducer-models.html#sherpa-onnx-nemo-parakeet-tdt-0-6b-v2-int8-english)| English | It is converted from <https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2>|
|[Whisper tiny.en](https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-whisper-tiny.en.tar.bz2)|English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/tiny.en.html)|
|[Moonshine tiny][Moonshine tiny]|English|See [also](https://github.com/usefulsensors/moonshine)|
|[sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/icefall/zipformer.html#sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03-chinese)|Chinese| A Zipformer CTC model|
|[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17][sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]|Chinese, Cantonese, English, Korean, Japanese| Supports various Chinese dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/sense-voice/index.html)|
|[sherpa-onnx-paraformer-zh-2024-03-09][sherpa-onnx-paraformer-zh-2024-03-09]|Chinese, English| Also supports various Chinese dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/paraformer-models.html#csukuangfj-sherpa-onnx-paraformer-zh-2024-03-09-chinese-english)|
|[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01][sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]|Japanese|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01-japanese)|
|[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24][sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/nemo-transducer-models.html#sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24-russian)|
|[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24][sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]|Russian| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/nemo/russian.html#sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24)|
|[sherpa-onnx-zipformer-ru-2024-09-18][sherpa-onnx-zipformer-ru-2024-09-18]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ru-2024-09-18-russian)|
|[sherpa-onnx-zipformer-korean-2024-06-24][sherpa-onnx-zipformer-korean-2024-06-24]|Korean|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-korean-2024-06-24-korean)|
|[sherpa-onnx-zipformer-thai-2024-06-20][sherpa-onnx-zipformer-thai-2024-06-20]|Thai| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-thai-2024-06-20-thai)|
|[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04][sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]|Chinese| Supports various dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/models.html#sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04)|
</details>
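As a concrete example, the Whisper tiny.en archive linked in the table above can be fetched and unpacked from the command line (a sketch assuming `wget` and a `tar` with bzip2 support are available; it needs network access):

```shell
# Download the pre-built Whisper tiny.en archive from the sherpa-onnx release page
wget https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-whisper-tiny.en.tar.bz2

# Unpack it; this typically creates a sherpa-onnx-whisper-tiny.en/ directory
# containing the ONNX model files
tar xvf sherpa-onnx-whisper-tiny.en.tar.bz2
ls sherpa-onnx-whisper-tiny.en/
```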
### Useful links
- Documentation: https://k2-fsa.github.io/sherpa/onnx/
- Demo videos on Bilibili: https://search.bilibili.com/all?keyword=%E6%96%B0%E4%B8%80%E4%BB%A3Kaldi
### How to reach us
Please see
https://k2-fsa.github.io/sherpa/social-groups.html
for the 新一代 Kaldi (Next-gen Kaldi) **WeChat group** and **QQ group**.
## Projects using sherpa-onnx
### [BreezeApp](https://github.com/mtkresearch/BreezeApp) from [MediaTek Research](https://github.com/mtkresearch)
> BreezeApp is a mobile AI application developed for both Android and iOS platforms.
> Users can download it directly from the App Store and enjoy a variety of features
> offline, including speech-to-text, text-to-speech, text-based chatbot interactions,
> and image question-answering.
  - [Download APK for BreezeAPP](https://huggingface.co/MediaTek-Research/BreezeApp/resolve/main/BreezeApp.apk)
  - [APK 中国镜像](https://hf-mirror.com/MediaTek-Research/BreezeApp/blob/main/BreezeApp.apk)
### [Open-LLM-VTuber](https://github.com/t41372/Open-LLM-VTuber)
Talk to any LLM with hands-free voice interaction, voice interruption, and a Live2D talking
face, running locally across platforms.
See also <https://github.com/t41372/Open-LLM-VTuber/pull/50>
### [voiceapi](https://github.com/ruzhila/voiceapi)
<details>
  <summary>Streaming ASR and TTS based on FastAPI</summary>
It shows how to use the ASR and TTS Python APIs with FastAPI.
</details>
### [腾讯会议摸鱼工具 TMSpeech](https://github.com/jxlpzqc/TMSpeech)
It uses streaming ASR in C# with a graphical user interface.
Video demo in Chinese: [【开源】Windows实时字幕软件(网课/开会必备)](https://www.bilibili.com/video/BV1rX4y1p7Nx)
### [lol互动助手](https://github.com/l1veIn/lol-wom-electron)
It uses the JavaScript API of sherpa-onnx along with [Electron](https://electronjs.org/).
Video demo in Chinese: [爆了!炫神教你开打字挂!真正影响胜率的英雄联盟工具!英雄联盟的最后一块拼图!和游戏中的每个人无障碍沟通!](https://www.bilibili.com/video/BV142tje9E74)
### [Sherpa-ONNX 语音识别服务器](https://github.com/hfyydd/sherpa-onnx-server)
A Node.js-based server providing a RESTful API for speech recognition.
### [QSmartAssistant](https://github.com/xinhecuican/QSmartAssistant)
A modular chatbot/smart speaker that runs fully offline with a low resource footprint.
It uses Qt. Both [ASR](https://github.com/xinhecuican/QSmartAssistant/blob/master/doc/%E5%AE%89%E8%A3%85.md#asr)
and [TTS](https://github.com/xinhecuican/QSmartAssistant/blob/master/doc/%E5%AE%89%E8%A3%85.md#tts)
are used.
### [Flutter-EasySpeechRecognition](https://github.com/Jason-chen-coder/Flutter-EasySpeechRecognition)
It extends [./flutter-examples/streaming_asr](./flutter-examples/streaming_asr) by
downloading models inside the app to reduce the size of the app.
Note: [[Team B] Sherpa AI backend](https://github.com/umgc/spring2025/pull/82) also uses
sherpa-onnx in a Flutter app.
### [sherpa-onnx-unity](https://github.com/xue-fei/sherpa-onnx-unity)
sherpa-onnx in Unity. See also [#1695](https://github.com/k2-fsa/sherpa-onnx/issues/1695),
[#1892](https://github.com/k2-fsa/sherpa-onnx/issues/1892), and [#1859](https://github.com/k2-fsa/sherpa-onnx/issues/1859)
### [xiaozhi-esp32-server](https://github.com/xinnan-tech/xiaozhi-esp32-server)
Backend service for xiaozhi-esp32 that helps you quickly build an ESP32 device control server.
See also
  - [ASR新增轻量级sherpa-onnx-asr](https://github.com/xinnan-tech/xiaozhi-esp32-server/issues/315)
  - [feat: ASR增加sherpa-onnx模型](https://github.com/xinnan-tech/xiaozhi-esp32-server/pull/379)
### [KaithemAutomation](https://github.com/EternityForest/KaithemAutomation)
Pure Python, GUI-focused home automation/consumer-grade SCADA.
It uses TTS from sherpa-onnx. See also [✨ Speak command that uses the new globally configured TTS model.](https://github.com/EternityForest/KaithemAutomation/commit/8e64d2b138725e426532f7d66bb69dd0b4f53693)
### [Open-XiaoAI KWS](https://github.com/idootop/open-xiaoai-kws)
Enables custom wake words for XiaoAI speakers.
Video demo in Chinese: [小爱同学启动~˶╹ꇴ╹˶!](https://www.bilibili.com/video/BV1YfVUz5EMj)
### [C++ WebSocket ASR Server](https://github.com/mawwalker/stt-server)
It provides a WebSocket server based on C++ for ASR using sherpa-onnx.
### [Go WebSocket Server](https://github.com/bbeyondllove/asr_server)
It provides a WebSocket server based on the Go programming language for sherpa-onnx.
### [Making robot Paimon, Ep10 "The AI Part 1"](https://www.youtube.com/watch?v=KxPKkwxGWZs)
It is a [YouTube video](https://www.youtube.com/watch?v=KxPKkwxGWZs)
showing how the author used AI to have a conversation with Paimon.
It uses sherpa-onnx for speech-to-text and text-to-speech.
### [TtsReader - Desktop application](https://github.com/ys-pro-duction/TtsReader)
A desktop text-to-speech application built using Kotlin Multiplatform.
### [MentraOS](https://github.com/Mentra-Community/MentraOS)
> Smart glasses OS, with dozens of built-in apps. Users get AI assistant, notifications,
> translation, screen mirror, captions, and more. Devs get to write 1 app that runs on
> any pair of smart glasses.
It uses sherpa-onnx for real-time speech recognition on iOS and Android devices.
See also <https://github.com/Mentra-Community/MentraOS/pull/861>
It uses Swift for iOS and Java for Android.
### [flet_sherpa_onnx](https://github.com/SamYuan1990/flet_sherpa_onnx)
Flet ASR/STT component based on sherpa-onnx.
See [a chat box agent](https://github.com/SamYuan1990/i18n-agent-action) for an example.
### [elderly-companion](https://github.com/SearocIsMe/elderly-companion)
It uses sherpa-onnx's Python API for real-time speech recognition in ROS2 with RK NPU.
### [achatbot-go](https://github.com/ai-bot-pro/achatbot-go)
A multimodal chatbot written in Go using sherpa-onnx's speech library API.
[sherpa-rs]: https://github.com/thewh1teagle/sherpa-rs
[silero-vad]: https://github.com/snakers4/silero-vad
[Raspberry Pi]: https://www.raspberrypi.com/
[RV1126]: https://www.rock-chips.com/uploads/pdf/2022.8.26/191/RV1126%20Brief%20Datasheet.pdf
[LicheePi4A]: https://sipeed.com/licheepi4a
[VisionFive 2]: https://www.starfivetech.com/en/site/boards
[旭日X3派]: https://developer.horizon.ai/api/v1/fileData/documents_pi/index.html
[爱芯派]: https://wiki.sipeed.com/hardware/zh/maixIII/ax-pi/axpi.html
[hf-space-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/speaker-diarization
[hf-space-speaker-diarization-cn]: https://hf.qhduan.com/spaces/k2-fsa/speaker-diarization
[hf-space-asr]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition
[hf-space-asr-cn]: https://hf.qhduan.com/spaces/k2-fsa/automatic-speech-recognition
[Whisper]: https://github.com/openai/whisper
[hf-space-asr-whisper]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition-with-whisper
[hf-space-asr-whisper-cn]: https://hf.qhduan.com/spaces/k2-fsa/automatic-speech-recognition-with-whisper
[hf-space-tts]: https://huggingface.co/spaces/k2-fsa/text-to-speech
[hf-space-tts-cn]: https://hf.qhduan.com/spaces/k2-fsa/text-to-speech
[hf-space-subtitle]: https://huggingface.co/spaces/k2-fsa/generate-subtitles-for-videos
[hf-space-subtitle-cn]: https://hf.qhduan.com/spaces/k2-fsa/generate-subtitles-for-videos
[hf-space-audio-tagging]: https://huggingface.co/spaces/k2-fsa/audio-tagging
[hf-space-audio-tagging-cn]: https://hf.qhduan.com/spaces/k2-fsa/audio-tagging
[hf-space-source-separation]: https://huggingface.co/spaces/k2-fsa/source-separation
[hf-space-source-separation-cn]: https://hf.qhduan.com/spaces/k2-fsa/source-separation
[hf-space-slid-whisper]: https://huggingface.co/spaces/k2-fsa/spoken-language-identification
[hf-space-slid-whisper-cn]: https://hf.qhduan.com/spaces/k2-fsa/spoken-language-identification
[wasm-hf-vad]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-sherpa-onnx
[wasm-ms-vad]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-sherpa-onnx
[wasm-hf-streaming-asr-zh-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en
[wasm-ms-streaming-asr-zh-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en
[wasm-hf-streaming-asr-zh-en-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer
[wasm-ms-streaming-asr-zh-en-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer
[Paraformer-large]: https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary
[wasm-hf-streaming-asr-zh-en-yue-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer
[wasm-ms-streaming-asr-zh-en-yue-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer
[wasm-hf-streaming-asr-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-en
[wasm-ms-streaming-asr-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-en
[SenseVoice]: https://github.com/FunAudioLLM/SenseVoice
[wasm-hf-vad-asr-zh-zipformer-ctc-07-03]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-ctc
[wasm-ms-vad-asr-zh-zipformer-ctc-07-03]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-ctc/summary
[wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-ja-ko-cantonese-sense-voice
[wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-zh-en-jp-ko-cantonese-sense-voice
[wasm-hf-vad-asr-en-whisper-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny
[wasm-ms-vad-asr-en-whisper-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny
[wasm-hf-vad-asr-en-moonshine-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny
[wasm-ms-vad-asr-en-moonshine-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny
[wasm-hf-vad-asr-en-zipformer-gigaspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech
[wasm-ms-vad-asr-en-zipformer-gigaspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech
[wasm-hf-vad-asr-zh-zipformer-wenetspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech
[wasm-ms-vad-asr-zh-zipformer-wenetspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech
[reazonspeech]: https://research.reazon.jp/_static/reazonspeech_nlp2023.pdf
[wasm-hf-vad-asr-ja-zipformer-reazonspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-ja-zipformer
[wasm-ms-vad-asr-ja-zipformer-reazonspeech]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-ja-zipformer
[gigaspeech2]: https://github.com/speechcolab/gigaspeech2
[wasm-hf-vad-asr-th-zipformer-gigaspeech2]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-th-zipformer
[wasm-ms-vad-asr-th-zipformer-gigaspeech2]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-th-zipformer
[telespeech-asr]: https://github.com/tele-ai/telespeech-asr
[wasm-hf-vad-asr-zh-telespeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech
[wasm-ms-vad-asr-zh-telespeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech
[wasm-hf-vad-asr-zh-en-paraformer-large]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer
[wasm-ms-vad-asr-zh-en-paraformer-large]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer
[wasm-hf-vad-asr-zh-en-paraformer-small]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small
[wasm-ms-vad-asr-zh-en-paraformer-small]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small
[dolphin]: https://github.com/dataoceanai/dolphin
[wasm-ms-vad-asr-multi-lang-dolphin-base]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-multi-lang-dophin-ctc
[wasm-hf-vad-asr-multi-lang-dolphin-base]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-multi-lang-dophin-ctc
[wasm-hf-tts-piper-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-en
[wasm-ms-tts-piper-en]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-en
[wasm-hf-tts-piper-de]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-de
[wasm-ms-tts-piper-de]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-de
[wasm-hf-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/web-assembly-speaker-diarization-sherpa-onnx
[wasm-ms-speaker-diarization]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-speaker-diarization-sherpa-onnx
[apk-speaker-diarization]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk.html
[apk-speaker-diarization-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk-cn.html
[apk-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/android/apk.html
[apk-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-cn.html
[apk-simula-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/android/apk-simulate-streaming-asr.html
[apk-simula-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-simulate-streaming-asr-cn.html
[apk-tts]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
[apk-tts-cn]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine-cn.html
[apk-vad]: https://k2-fsa.github.io/sherpa/onnx/vad/apk.html
[apk-vad-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-cn.html
[apk-vad-asr]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr.html
[apk-vad-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr-cn.html
[apk-2pass]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass.html
[apk-2pass-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass-cn.html
[apk-at]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk.html
[apk-at-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-cn.html
[apk-at-wearos]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos.html
[apk-at-wearos-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos-cn.html
[apk-sid]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk.html
[apk-sid-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk-cn.html
[apk-slid]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk.html
[apk-slid-cn]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk-cn.html
[apk-kws]: https://k2-fsa.github.io/sherpa/onnx/kws/apk.html
[apk-kws-cn]: https://k2-fsa.github.io/sherpa/onnx/kws/apk-cn.html
[apk-flutter-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app.html
[apk-flutter-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app-cn.html
[flutter-tts-android]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android.html
[flutter-tts-android-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android-cn.html
[flutter-tts-linux]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux.html
[flutter-tts-linux-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux-cn.html
[flutter-tts-macos-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64.html
[flutter-tts-macos-arm64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64-cn.html
[flutter-tts-macos-arm64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64.html
[flutter-tts-macos-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64-cn.html
[flutter-tts-win-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win.html
[flutter-tts-win-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win-cn.html
[lazarus-subtitle]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles.html
[lazarus-subtitle-cn]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles-cn.html
[asr-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models
[tts-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/tts-models
[vad-models]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/silero_vad.onnx
[kws-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/kws-models
[at-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/audio-tagging-models
[sid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models
[slid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models
[punct-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/punctuation-models
[speaker-segmentation-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-segmentation-models
[GigaSpeech]: https://github.com/SpeechColab/GigaSpeech
[WenetSpeech]: https://github.com/wenet-e2e/WenetSpeech
[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20.tar.bz2
[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16.tar.bz2
[sherpa-onnx-streaming-zipformer-korean-2024-06-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-korean-2024-06-16.tar.bz2
[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23.tar.bz2
[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-en-20M-2023-02-17.tar.bz2
[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01.tar.bz2
[sherpa-onnx-zipformer-ru-2024-09-18]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ru-2024-09-18.tar.bz2
[sherpa-onnx-zipformer-korean-2024-06-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-korean-2024-06-24.tar.bz2
[sherpa-onnx-zipformer-thai-2024-06-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-thai-2024-06-20.tar.bz2
[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24.tar.bz2
[sherpa-onnx-paraformer-zh-2024-03-09]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-paraformer-zh-2024-03-09.tar.bz2
[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24.tar.bz2
[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04.tar.bz2
[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17.tar.bz2
[sherpa-onnx-streaming-zipformer-fr-2023-04-14]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-fr-2023-04-14.tar.bz2
[Moonshine tiny]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-moonshine-tiny-en-int8.tar.bz2
[NVIDIA Jetson Orin NX]: https://developer.download.nvidia.com/assets/embedded/secure/jetson/orin_nx/docs/Jetson_Orin_NX_DS-10712-001_v0.5.pdf?RCPGu9Q6OVAOv7a7vgtwc9-BLScXRIWq6cSLuditMALECJ_dOj27DgnqAPGVnT2VpiNpQan9SyFy-9zRykR58CokzbXwjSA7Gj819e91AXPrWkGZR3oS1VLxiDEpJa_Y0lr7UT-N4GnXtb8NlUkP4GkCkkF_FQivGPrAucCUywL481GH_WpP_p7ziHU1Wg==&t=eyJscyI6ImdzZW8iLCJsc2QiOiJodHRwczovL3d3dy5nb29nbGUuY29tLmhrLyJ9
[NVIDIA Jetson Nano B01]: https://www.seeedstudio.com/blog/2020/01/16/new-revision-of-jetson-nano-dev-kit-now-supports-new-jetson-nano-module/
[speech-enhancement-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speech-enhancement-models
[source-separation-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/source-separation-models
[RK3588]: https://www.rock-chips.com/uploads/pdf/2022.8.26/192/RK3588%20Brief%20Datasheet.pdf
[spleeter]: https://github.com/deezer/spleeter
[UVR]: https://github.com/Anjok07/ultimatevocalremovergui
[gtcrn]: https://github.com/Xiaobin-Rong/gtcrn
[tts-url]: https://k2-fsa.github.io/sherpa/onnx/tts/all-in-one.html
[ss-url]: https://k2-fsa.github.io/sherpa/onnx/source-separation/index.html
[sd-url]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/index.html
[slid-url]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/index.html
[at-url]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/index.html
[vad-url]: https://k2-fsa.github.io/sherpa/onnx/vad/index.html
[kws-url]: https://k2-fsa.github.io/sherpa/onnx/kws/index.html
[punct-url]: https://k2-fsa.github.io/sherpa/onnx/punctuation/index.html
[se-url]: https://k2-fsa.github.io/sherpa/onnx/speech-enhancment/index.html
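As a minimal sketch of how the Python API listed under "Supported programming languages" is typically driven: the helper below reads a 16-bit mono WAV into float samples normalized to `[-1, 1]`, which is the format `accept_waveform()` expects. The model file names in the commented decode step are placeholders, not real files; download actual models from the pre-trained model links above.

```python
# Sketch: preparing 16 kHz mono float32 samples for sherpa-onnx.
# Uses only the Python standard library; the sherpa-onnx calls are shown
# commented out because they require downloaded model files.
import struct
import wave


def read_wave(path):
    """Return (samples, sample_rate); samples are floats in [-1, 1]."""
    with wave.open(path, "rb") as f:
        assert f.getsampwidth() == 2, "expect 16-bit PCM"
        assert f.getnchannels() == 1, "expect mono"
        raw = f.readframes(f.getnframes())
        # 16-bit little-endian PCM -> floats in [-1, 1]
        samples = [s / 32768.0 for s in struct.unpack(f"<{len(raw) // 2}h", raw)]
        return samples, f.getframerate()


# Hypothetical non-streaming decode, assuming a downloaded transducer model
# (encoder.onnx / decoder.onnx / joiner.onnx / tokens.txt are placeholders):
#
#   import sherpa_onnx
#   recognizer = sherpa_onnx.OfflineRecognizer.from_transducer(
#       encoder="encoder.onnx", decoder="decoder.onnx",
#       joiner="joiner.onnx", tokens="tokens.txt")
#   samples, rate = read_wave("test.wav")
#   stream = recognizer.create_stream()
#   stream.accept_waveform(rate, samples)
#   recognizer.decode_stream(stream)
#   print(stream.result.text)
```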
            
         
        Raw data
        
            {
    "_id": null,
    "home_page": "https://github.com/k2-fsa/sherpa-onnx",
    "name": "sherpa-onnx",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": null,
    "keywords": null,
    "author": "The sherpa-onnx development team",
    "author_email": "dpovey@gmail.com",
    "download_url": null,
    "platform": null,
    "description": "### Supported functions\r\n\r\n|Speech recognition| [Speech synthesis][tts-url] | [Source separation][ss-url] |\r\n|------------------|------------------|-------------------|\r\n|   \u2714\ufe0f              |         \u2714\ufe0f        |       \u2714\ufe0f           |\r\n\r\n|Speaker identification| [Speaker diarization][sd-url] | Speaker verification |\r\n|----------------------|-------------------- |------------------------|\r\n|   \u2714\ufe0f                  |         \u2714\ufe0f           |            \u2714\ufe0f           |\r\n\r\n| [Spoken Language identification][slid-url] | [Audio tagging][at-url] | [Voice activity detection][vad-url] |\r\n|--------------------------------|---------------|--------------------------|\r\n|                 \u2714\ufe0f              |          \u2714\ufe0f    |                \u2714\ufe0f         |\r\n\r\n| [Keyword spotting][kws-url] | [Add punctuation][punct-url] | [Speech enhancement][se-url] |\r\n|------------------|-----------------|--------------------|\r\n|     \u2714\ufe0f            |       \u2714\ufe0f         |      \u2714\ufe0f             |\r\n\r\n\r\n### Supported platforms\r\n\r\n|Architecture| Android | iOS     | Windows    | macOS | linux | HarmonyOS |\r\n|------------|---------|---------|------------|-------|-------|-----------|\r\n|   x64      |  \u2714\ufe0f      |         |   \u2714\ufe0f      | \u2714\ufe0f    |  \u2714\ufe0f    |   \u2714\ufe0f   |\r\n|   x86      |  \u2714\ufe0f      |         |   \u2714\ufe0f      |       |        |        |\r\n|   arm64    |  \u2714\ufe0f      | \u2714\ufe0f      |   \u2714\ufe0f      | \u2714\ufe0f    |  \u2714\ufe0f    |   \u2714\ufe0f   |\r\n|   arm32    |  \u2714\ufe0f      |         |           |       |  \u2714\ufe0f    |   \u2714\ufe0f   |\r\n|   riscv64  |          |         |           |       |  \u2714\ufe0f    |        |\r\n\r\n### Supported programming languages\r\n\r\n| 1. C++ | 2. C  | 3. Python | 4. 
JavaScript |\r\n|--------|-------|-----------|---------------|\r\n|   \u2714\ufe0f    | \u2714\ufe0f     | \u2714\ufe0f         |    \u2714\ufe0f          |\r\n\r\n|5. Java | 6. C# | 7. Kotlin | 8. Swift |\r\n|--------|-------|-----------|----------|\r\n| \u2714\ufe0f      |  \u2714\ufe0f    | \u2714\ufe0f         |  \u2714\ufe0f       |\r\n\r\n| 9. Go | 10. Dart | 11. Rust | 12. Pascal |\r\n|-------|----------|----------|------------|\r\n| \u2714\ufe0f     |  \u2714\ufe0f       |   \u2714\ufe0f      |    \u2714\ufe0f       |\r\n\r\nFor Rust support, please see [sherpa-rs][sherpa-rs]\r\n\r\nIt also supports WebAssembly.\r\n\r\n[Join our discord](https://discord.gg/fJdxzg2VbG)\r\n\r\n\r\n## Introduction\r\n\r\nThis repository supports running the following functions **locally**\r\n\r\n  - Speech-to-text (i.e., ASR); both streaming and non-streaming are supported\r\n  - Text-to-speech (i.e., TTS)\r\n  - Speaker diarization\r\n  - Speaker identification\r\n  - Speaker verification\r\n  - Spoken language identification\r\n  - Audio tagging\r\n  - VAD (e.g., [silero-vad][silero-vad])\r\n  - Speech enhancement (e.g., [gtcrn][gtcrn])\r\n  - Keyword spotting\r\n  - Source separation (e.g., [spleeter][spleeter], [UVR][UVR])\r\n\r\non the following platforms and operating systems:\r\n\r\n  - x86, ``x86_64``, 32-bit ARM, 64-bit ARM (arm64, aarch64), RISC-V (riscv64), **RK NPU**, **Ascend NPU**\r\n  - Linux, macOS, Windows, openKylin\r\n  - Android, WearOS\r\n  - iOS\r\n  - HarmonyOS\r\n  - NodeJS\r\n  - WebAssembly\r\n  - [NVIDIA Jetson Orin NX][NVIDIA Jetson Orin NX] (Support running on both CPU and GPU)\r\n  - [NVIDIA Jetson Nano B01][NVIDIA Jetson Nano B01] (Support running on both CPU and GPU)\r\n  - [Raspberry Pi][Raspberry Pi]\r\n  - [RV1126][RV1126]\r\n  - [LicheePi4A][LicheePi4A]\r\n  - [VisionFive 2][VisionFive 2]\r\n  - [\u65ed\u65e5X3\u6d3e][\u65ed\u65e5X3\u6d3e]\r\n  - [\u7231\u82af\u6d3e][\u7231\u82af\u6d3e]\r\n  - [RK3588][RK3588]\r\n  - etc\r\n\r\nwith the 
following APIs\r\n\r\n  - C++, C, Python, Go, ``C#``\r\n  - Java, Kotlin, JavaScript\r\n  - Swift, Rust\r\n  - Dart, Object Pascal\r\n\r\n### Links for Huggingface Spaces\r\n\r\n<details>\r\n<summary>You can visit the following Huggingface spaces to try sherpa-onnx without\r\ninstalling anything. All you need is a browser.</summary>\r\n\r\n| Description                                           | URL                                     | \u4e2d\u56fd\u955c\u50cf                               |\r\n|-------------------------------------------------------|-----------------------------------------|----------------------------------------|\r\n| Speaker diarization                                   | [Click me][hf-space-speaker-diarization]| [\u955c\u50cf][hf-space-speaker-diarization-cn]|\r\n| Speech recognition                                    | [Click me][hf-space-asr]                | [\u955c\u50cf][hf-space-asr-cn]                |\r\n| Speech recognition with [Whisper][Whisper]            | [Click me][hf-space-asr-whisper]        | [\u955c\u50cf][hf-space-asr-whisper-cn]        |\r\n| Speech synthesis                                      | [Click me][hf-space-tts]                | [\u955c\u50cf][hf-space-tts-cn]                |\r\n| Generate subtitles                                    | [Click me][hf-space-subtitle]           | [\u955c\u50cf][hf-space-subtitle-cn]           |\r\n| Audio tagging                                         | [Click me][hf-space-audio-tagging]      | [\u955c\u50cf][hf-space-audio-tagging-cn]      |\r\n| Source separation                                     | [Click me][hf-space-source-separation]  | [\u955c\u50cf][hf-space-source-separation-cn]  |\r\n| Spoken language identification with [Whisper][Whisper]| [Click me][hf-space-slid-whisper]       | [\u955c\u50cf][hf-space-slid-whisper-cn]       |\r\n\r\nWe also have spaces built using WebAssembly. 
They are listed below:\r\n\r\n| Description                                                                              | Huggingface space| ModelScope space|\r\n|------------------------------------------------------------------------------------------|------------------|-----------------|\r\n|Voice activity detection with [silero-vad][silero-vad]                                    | [Click me][wasm-hf-vad]|[\u5730\u5740][wasm-ms-vad]|\r\n|Real-time speech recognition (Chinese + English) with Zipformer                           | [Click me][wasm-hf-streaming-asr-zh-en-zipformer]|[\u5730\u5740][wasm-hf-streaming-asr-zh-en-zipformer]|\r\n|Real-time speech recognition (Chinese + English) with Paraformer                          |[Click me][wasm-hf-streaming-asr-zh-en-paraformer]| [\u5730\u5740][wasm-ms-streaming-asr-zh-en-paraformer]|\r\n|Real-time speech recognition (Chinese + English + Cantonese) with [Paraformer-large][Paraformer-large]|[Click me][wasm-hf-streaming-asr-zh-en-yue-paraformer]| [\u5730\u5740][wasm-ms-streaming-asr-zh-en-yue-paraformer]|\r\n|Real-time speech recognition (English) |[Click me][wasm-hf-streaming-asr-en-zipformer]    |[\u5730\u5740][wasm-ms-streaming-asr-en-zipformer]|\r\n|VAD + speech recognition (Chinese) with [Zipformer CTC](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/icefall/zipformer.html#sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03-chinese)|[Click me][wasm-hf-vad-asr-zh-zipformer-ctc-07-03]| [\u5730\u5740][wasm-ms-vad-asr-zh-zipformer-ctc-07-03]|\r\n|VAD + speech recognition (Chinese + English + Korean + Japanese + Cantonese) with [SenseVoice][SenseVoice]|[Click me][wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]| [\u5730\u5740][wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]|\r\n|VAD + speech recognition (English) with [Whisper][Whisper] tiny.en|[Click me][wasm-hf-vad-asr-en-whisper-tiny-en]| [\u5730\u5740][wasm-ms-vad-asr-en-whisper-tiny-en]|\r\n|VAD + speech recognition (English) with [Moonshine tiny][Moonshine 
tiny]|[Click me][wasm-hf-vad-asr-en-moonshine-tiny-en]| [\u5730\u5740][wasm-ms-vad-asr-en-moonshine-tiny-en]|\r\n|VAD + speech recognition (English) with Zipformer trained with [GigaSpeech][GigaSpeech]    |[Click me][wasm-hf-vad-asr-en-zipformer-gigaspeech]| [\u5730\u5740][wasm-ms-vad-asr-en-zipformer-gigaspeech]|\r\n|VAD + speech recognition (Chinese) with Zipformer trained with [WenetSpeech][WenetSpeech]  |[Click me][wasm-hf-vad-asr-zh-zipformer-wenetspeech]| [\u5730\u5740][wasm-ms-vad-asr-zh-zipformer-wenetspeech]|\r\n|VAD + speech recognition (Japanese) with Zipformer trained with [ReazonSpeech][ReazonSpeech]|[Click me][wasm-hf-vad-asr-ja-zipformer-reazonspeech]| [\u5730\u5740][wasm-ms-vad-asr-ja-zipformer-reazonspeech]|\r\n|VAD + speech recognition (Thai) with Zipformer trained with [GigaSpeech2][GigaSpeech2]      |[Click me][wasm-hf-vad-asr-th-zipformer-gigaspeech2]| [\u5730\u5740][wasm-ms-vad-asr-th-zipformer-gigaspeech2]|\r\n|VAD + speech recognition (Chinese \u591a\u79cd\u65b9\u8a00) with a [TeleSpeech-ASR][TeleSpeech-ASR] CTC model|[Click me][wasm-hf-vad-asr-zh-telespeech]| [\u5730\u5740][wasm-ms-vad-asr-zh-telespeech]|\r\n|VAD + speech recognition (English + Chinese, \u53ca\u591a\u79cd\u4e2d\u6587\u65b9\u8a00) with Paraformer-large          |[Click me][wasm-hf-vad-asr-zh-en-paraformer-large]| [\u5730\u5740][wasm-ms-vad-asr-zh-en-paraformer-large]|\r\n|VAD + speech recognition (English + Chinese, \u53ca\u591a\u79cd\u4e2d\u6587\u65b9\u8a00) with Paraformer-small          |[Click me][wasm-hf-vad-asr-zh-en-paraformer-small]| [\u5730\u5740][wasm-ms-vad-asr-zh-en-paraformer-small]|\r\n|VAD + speech recognition (\u591a\u8bed\u79cd\u53ca\u591a\u79cd\u4e2d\u6587\u65b9\u8a00) with [Dolphin][Dolphin]-base          |[Click me][wasm-hf-vad-asr-multi-lang-dolphin-base]| [\u5730\u5740][wasm-ms-vad-asr-multi-lang-dolphin-base]|\r\n|Speech synthesis (English)                                                                  |[Click me][wasm-hf-tts-piper-en]| 
[\u5730\u5740][wasm-ms-tts-piper-en]|\r\n|Speech synthesis (German)                                                                   |[Click me][wasm-hf-tts-piper-de]| [\u5730\u5740][wasm-ms-tts-piper-de]|\r\n|Speaker diarization                                                                         |[Click me][wasm-hf-speaker-diarization]|[\u5730\u5740][wasm-ms-speaker-diarization]|\r\n\r\n</details>\r\n\r\n### Links for pre-built Android APKs\r\n\r\n<details>\r\n\r\n<summary>You can find pre-built Android APKs for this repository in the following table</summary>\r\n\r\n| Description                            | URL                                | \u4e2d\u56fd\u7528\u6237                          |\r\n|----------------------------------------|------------------------------------|-----------------------------------|\r\n| Speaker diarization                    | [Address][apk-speaker-diarization] | [\u70b9\u6b64][apk-speaker-diarization-cn]|\r\n| Streaming speech recognition           | [Address][apk-streaming-asr]       | [\u70b9\u6b64][apk-streaming-asr-cn]      |\r\n| Simulated-streaming speech recognition | [Address][apk-simula-streaming-asr]| [\u70b9\u6b64][apk-simula-streaming-asr-cn]|\r\n| Text-to-speech                         | [Address][apk-tts]                 | [\u70b9\u6b64][apk-tts-cn]                |\r\n| Voice activity detection (VAD)         | [Address][apk-vad]                 | [\u70b9\u6b64][apk-vad-cn]                |\r\n| VAD + non-streaming speech recognition | [Address][apk-vad-asr]             | [\u70b9\u6b64][apk-vad-asr-cn]            |\r\n| Two-pass speech recognition            | [Address][apk-2pass]               | [\u70b9\u6b64][apk-2pass-cn]              |\r\n| Audio tagging                          | [Address][apk-at]                  | [\u70b9\u6b64][apk-at-cn]                 |\r\n| Audio tagging (WearOS)                 | [Address][apk-at-wearos]           | [\u70b9\u6b64][apk-at-wearos-cn]          |\r\n| Speaker 
identification                 | [Address][apk-sid]                 | [\u70b9\u6b64][apk-sid-cn]                |\r\n| Spoken language identification         | [Address][apk-slid]                | [\u70b9\u6b64][apk-slid-cn]               |\r\n| Keyword spotting                       | [Address][apk-kws]                 | [\u70b9\u6b64][apk-kws-cn]                |\r\n\r\n</details>\r\n\r\n### Links for pre-built Flutter APPs\r\n\r\n<details>\r\n\r\n#### Real-time speech recognition\r\n\r\n| Description                    | URL                                 | \u4e2d\u56fd\u7528\u6237                            |\r\n|--------------------------------|-------------------------------------|-------------------------------------|\r\n| Streaming speech recognition   | [Address][apk-flutter-streaming-asr]| [\u70b9\u6b64][apk-flutter-streaming-asr-cn]|\r\n\r\n#### Text-to-speech\r\n\r\n| Description                              | URL                                | \u4e2d\u56fd\u7528\u6237                           |\r\n|------------------------------------------|------------------------------------|------------------------------------|\r\n| Android (arm64-v8a, armeabi-v7a, x86_64) | [Address][flutter-tts-android]     | [\u70b9\u6b64][flutter-tts-android-cn]     |\r\n| Linux (x64)                              | [Address][flutter-tts-linux]       | [\u70b9\u6b64][flutter-tts-linux-cn]       |\r\n| macOS (x64)                              | [Address][flutter-tts-macos-x64]   | [\u70b9\u6b64][flutter-tts-macos-arm64-cn] |\r\n| macOS (arm64)                            | [Address][flutter-tts-macos-arm64] | [\u70b9\u6b64][flutter-tts-macos-x64-cn]   |\r\n| Windows (x64)                            | [Address][flutter-tts-win-x64]     | [\u70b9\u6b64][flutter-tts-win-x64-cn]     |\r\n\r\n> Note: You need to build from source for iOS.\r\n\r\n</details>\r\n\r\n### Links for pre-built Lazarus APPs\r\n\r\n<details>\r\n\r\n#### Generating subtitles\r\n\r\n| Description               
     | URL                        | \u4e2d\u56fd\u7528\u6237                   |\r\n|--------------------------------|----------------------------|----------------------------|\r\n| Generate subtitles (\u751f\u6210\u5b57\u5e55)  | [Address][lazarus-subtitle]| [\u70b9\u6b64][lazarus-subtitle-cn]|\r\n\r\n</details>\r\n\r\n### Links for pre-trained models\r\n\r\n<details>\r\n\r\n| Description                                 | URL                                                                                   |\r\n|---------------------------------------------|---------------------------------------------------------------------------------------|\r\n| Speech recognition (speech to text, ASR)    | [Address][asr-models]                                                                 |\r\n| Text-to-speech (TTS)                        | [Address][tts-models]                                                                 |\r\n| VAD                                         | [Address][vad-models]                                                                 |\r\n| Keyword spotting                            | [Address][kws-models]                                                                 |\r\n| Audio tagging                               | [Address][at-models]                                                                  |\r\n| Speaker identification (Speaker ID)         | [Address][sid-models]                                                                 |\r\n| Spoken language identification (Language ID)| See multi-lingual [Whisper][Whisper] ASR models from  [Speech recognition][asr-models]|\r\n| Punctuation                                 | [Address][punct-models]                                                               |\r\n| Speaker segmentation                        | [Address][speaker-segmentation-models]                                                |\r\n| Speech enhancement                          | [Address][speech-enhancement-models]        
                                          |\r\n| Source separation                           | [Address][source-separation-models]                                                  |\r\n\r\n</details>\r\n\r\n#### Some pre-trained ASR models (Streaming)\r\n\r\n<details>\r\n\r\nPlease see\r\n\r\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/index.html>\r\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-paraformer/index.html>\r\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-ctc/index.html>\r\n\r\nfor more models. The following table lists only **SOME** of them.\r\n\r\n\r\n|Name | Supported Languages| Description|\r\n|-----|-----|----|\r\n|[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20][sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#csukuangfj-sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20-bilingual-chinese-english)|\r\n|[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16][sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16-bilingual-chinese-english)|\r\n|[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23][sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]|Chinese| Suitable for Cortex A7 CPU. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-zh-14m-2023-02-23)|\r\n|[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17][sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]|English|Suitable for Cortex A7 CPU. 
See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-en-20m-2023-02-17)|\r\n|[sherpa-onnx-streaming-zipformer-korean-2024-06-16][sherpa-onnx-streaming-zipformer-korean-2024-06-16]|Korean| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-korean-2024-06-16-korean)|\r\n|[sherpa-onnx-streaming-zipformer-fr-2023-04-14][sherpa-onnx-streaming-zipformer-fr-2023-04-14]|French| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#shaojieli-sherpa-onnx-streaming-zipformer-fr-2023-04-14-french)|\r\n\r\n</details>\r\n\r\n\r\n#### Some pre-trained ASR models (Non-Streaming)\r\n\r\n<details>\r\n\r\nPlease see\r\n\r\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/index.html>\r\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/index.html>\r\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/index.html>\r\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/index.html>\r\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/index.html>\r\n\r\nfor more models. 
The following table lists only **SOME** of them.\r\n\r\n|Name | Supported Languages| Description|\r\n|-----|-----|----|\r\n|[sherpa-onnx-nemo-parakeet-tdt-0.6b-v2-int8](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/nemo-transducer-models.html#sherpa-onnx-nemo-parakeet-tdt-0-6b-v2-int8-english)| English | It is converted from <https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2>|\r\n|[Whisper tiny.en](https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-whisper-tiny.en.tar.bz2)|English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/tiny.en.html)|\r\n|[Moonshine tiny][Moonshine tiny]|English|See [also](https://github.com/usefulsensors/moonshine)|\r\n|[sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/icefall/zipformer.html#sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03-chinese)|Chinese| A Zipformer CTC model|\r\n|[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17][sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]|Chinese, Cantonese, English, Korean, Japanese| \u652f\u6301\u591a\u79cd\u4e2d\u6587\u65b9\u8a00. See [also](https://k2-fsa.github.io/sherpa/onnx/sense-voice/index.html)|\r\n|[sherpa-onnx-paraformer-zh-2024-03-09][sherpa-onnx-paraformer-zh-2024-03-09]|Chinese, English| \u4e5f\u652f\u6301\u591a\u79cd\u4e2d\u6587\u65b9\u8a00. 
See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/paraformer-models.html#csukuangfj-sherpa-onnx-paraformer-zh-2024-03-09-chinese-english)|\r\n|[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01][sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]|Japanese|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01-japanese)|\r\n|[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24][sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/nemo-transducer-models.html#sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24-russian)|\r\n|[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24][sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]|Russian| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/nemo/russian.html#sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24)|\r\n|[sherpa-onnx-zipformer-ru-2024-09-18][sherpa-onnx-zipformer-ru-2024-09-18]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ru-2024-09-18-russian)|\r\n|[sherpa-onnx-zipformer-korean-2024-06-24][sherpa-onnx-zipformer-korean-2024-06-24]|Korean|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-korean-2024-06-24-korean)|\r\n|[sherpa-onnx-zipformer-thai-2024-06-20][sherpa-onnx-zipformer-thai-2024-06-20]|Thai| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-thai-2024-06-20-thai)|\r\n|[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04][sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]|Chinese| \u652f\u6301\u591a\u79cd\u65b9\u8a00. 
See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/models.html#sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04)|\r\n\r\n</details>\r\n\r\n### Useful links\r\n\r\n- Documentation: https://k2-fsa.github.io/sherpa/onnx/\r\n- Bilibili demo videos: https://search.bilibili.com/all?keyword=%E6%96%B0%E4%B8%80%E4%BB%A3Kaldi\r\n\r\n### How to reach us\r\n\r\nPlease see\r\nhttps://k2-fsa.github.io/sherpa/social-groups.html\r\nfor the Next-gen Kaldi **WeChat group** and **QQ group**.\r\n\r\n## Projects using sherpa-onnx\r\n\r\n### [BreezeApp](https://github.com/mtkresearch/BreezeApp) from [MediaTek Research](https://github.com/mtkresearch)\r\n\r\n> BreezeApp is a mobile AI application developed for both Android and iOS platforms.\r\n> Users can download it directly from the App Store and enjoy a variety of features\r\n> offline, including speech-to-text, text-to-speech, text-based chatbot interactions,\r\n> and image question-answering.\r\n\r\n  - [Download APK for BreezeApp](https://huggingface.co/MediaTek-Research/BreezeApp/resolve/main/BreezeApp.apk)\r\n  - [APK mirror in China](https://hf-mirror.com/MediaTek-Research/BreezeApp/blob/main/BreezeApp.apk)\r\n\r\n### [Open-LLM-VTuber](https://github.com/t41372/Open-LLM-VTuber)\r\n\r\nTalk to any LLM with hands-free voice interaction, voice interruption, and a Live2D talking\r\nface, running locally across platforms.\r\n\r\nSee also <https://github.com/t41372/Open-LLM-VTuber/pull/50>\r\n\r\n### [voiceapi](https://github.com/ruzhila/voiceapi)\r\n\r\n<details>\r\n  <summary>Streaming ASR and TTS based on FastAPI</summary>\r\n\r\nIt shows how to use the ASR and TTS Python APIs with FastAPI.\r\n</details>\r\n\r\n### [TMSpeech, a real-time captioning tool for Tencent Meeting](https://github.com/jxlpzqc/TMSpeech)\r\n\r\nUses streaming ASR in C# with a graphical user interface.\r\n\r\nVideo demo in Chinese: 
[[Open-source] Real-time captioning software for Windows (for online classes and meetings)](https://www.bilibili.com/video/BV1rX4y1p7Nx)\r\n\r\n### [lol\u4e92\u52a8\u52a9\u624b](https://github.com/l1veIn/lol-wom-electron)\r\n\r\nIt uses the JavaScript API of sherpa-onnx along with [Electron](https://electronjs.org/).\r\n\r\nVideo demo in Chinese: [A League of Legends typing tool that lets you communicate with everyone in the game](https://www.bilibili.com/video/BV142tje9E74)\r\n\r\n### [Sherpa-ONNX speech recognition server](https://github.com/hfyydd/sherpa-onnx-server)\r\n\r\nA Node.js-based server providing a RESTful API for speech recognition.\r\n\r\n### [QSmartAssistant](https://github.com/xinhecuican/QSmartAssistant)\r\n\r\nA modular, fully offline, low-resource-usage chatbot/smart speaker.\r\n\r\nIt uses Qt. Both [ASR](https://github.com/xinhecuican/QSmartAssistant/blob/master/doc/%E5%AE%89%E8%A3%85.md#asr)\r\nand [TTS](https://github.com/xinhecuican/QSmartAssistant/blob/master/doc/%E5%AE%89%E8%A3%85.md#tts)\r\nare used.\r\n\r\n### [Flutter-EasySpeechRecognition](https://github.com/Jason-chen-coder/Flutter-EasySpeechRecognition)\r\n\r\nIt extends [./flutter-examples/streaming_asr](./flutter-examples/streaming_asr) by\r\ndownloading models inside the app to reduce the app size.\r\n\r\nNote: [[Team B] Sherpa AI backend](https://github.com/umgc/spring2025/pull/82) also uses\r\nsherpa-onnx in a Flutter app.\r\n\r\n### [sherpa-onnx-unity](https://github.com/xue-fei/sherpa-onnx-unity)\r\n\r\nsherpa-onnx in Unity. 
See also [#1695](https://github.com/k2-fsa/sherpa-onnx/issues/1695),\r\n[#1892](https://github.com/k2-fsa/sherpa-onnx/issues/1892), and [#1859](https://github.com/k2-fsa/sherpa-onnx/issues/1859)\r\n\r\n### [xiaozhi-esp32-server](https://github.com/xinnan-tech/xiaozhi-esp32-server)\r\n\r\nA backend service for xiaozhi-esp32 that helps you quickly build an ESP32 device-control server.\r\n\r\nSee also\r\n\r\n  - [Add lightweight sherpa-onnx-asr for ASR](https://github.com/xinnan-tech/xiaozhi-esp32-server/issues/315)\r\n  - [feat: add sherpa-onnx models for ASR](https://github.com/xinnan-tech/xiaozhi-esp32-server/pull/379)\r\n\r\n### [KaithemAutomation](https://github.com/EternityForest/KaithemAutomation)\r\n\r\nPure-Python, GUI-focused home automation/consumer-grade SCADA.\r\n\r\nIt uses TTS from sherpa-onnx. See also [\u2728 Speak command that uses the new globally configured TTS model.](https://github.com/EternityForest/KaithemAutomation/commit/8e64d2b138725e426532f7d66bb69dd0b4f53693)\r\n\r\n### [Open-XiaoAI KWS](https://github.com/idootop/open-xiaoai-kws)\r\n\r\nEnables custom wake words for XiaoAi speakers. 
\r\nVideo demo in Chinese: [XiaoAi, start up~!](https://www.bilibili.com/video/BV1YfVUz5EMj)\r\n\r\n### [C++ WebSocket ASR Server](https://github.com/mawwalker/stt-server)\r\n\r\nIt provides a C++-based WebSocket server for ASR using sherpa-onnx.\r\n\r\n### [Go WebSocket Server](https://github.com/bbeyondllove/asr_server)\r\n\r\nIt provides a Go-based WebSocket server for sherpa-onnx.\r\n\r\n### [Making robot Paimon, Ep10 "The AI Part 1"](https://www.youtube.com/watch?v=KxPKkwxGWZs)\r\n\r\nA [YouTube video](https://www.youtube.com/watch?v=KxPKkwxGWZs) showing how the author uses AI to hold a conversation with Paimon.\r\n\r\nIt uses sherpa-onnx for speech-to-text and text-to-speech.\r\n\r\n### [TtsReader - Desktop application](https://github.com/ys-pro-duction/TtsReader)\r\n\r\nA desktop text-to-speech application built using Kotlin Multiplatform.\r\n\r\n### [MentraOS](https://github.com/Mentra-Community/MentraOS)\r\n\r\n> Smart glasses OS, with dozens of built-in apps. Users get AI assistant, notifications,\r\n> translation, screen mirror, captions, and more. 
Devs get to write 1 app that runs on\r\n> any pair of smart glasses.\r\n\r\nIt uses sherpa-onnx for real-time speech recognition on iOS and Android devices.\r\nSee also <https://github.com/Mentra-Community/MentraOS/pull/861>\r\n\r\nIt uses Swift for iOS and Java for Android.\r\n\r\n### [flet_sherpa_onnx](https://github.com/SamYuan1990/flet_sherpa_onnx)\r\n\r\nA Flet ASR/STT component based on sherpa-onnx.\r\nExample: [a chat box agent](https://github.com/SamYuan1990/i18n-agent-action)\r\n\r\n### [elderly-companion](https://github.com/SearocIsMe/elderly-companion)\r\n\r\nIt uses sherpa-onnx's Python API for real-time speech recognition in ROS2 with RK NPU.\r\n\r\n### [achatbot-go](https://github.com/ai-bot-pro/achatbot-go)\r\n\r\nA multimodal chatbot written in Go using sherpa-onnx's speech library API.\r\n\r\n[sherpa-rs]: https://github.com/thewh1teagle/sherpa-rs\r\n[silero-vad]: https://github.com/snakers4/silero-vad\r\n[Raspberry Pi]: https://www.raspberrypi.com/\r\n[RV1126]: https://www.rock-chips.com/uploads/pdf/2022.8.26/191/RV1126%20Brief%20Datasheet.pdf\r\n[LicheePi4A]: https://sipeed.com/licheepi4a\r\n[VisionFive 2]: https://www.starfivetech.com/en/site/boards\r\n[\u65ed\u65e5X3\u6d3e]: https://developer.horizon.ai/api/v1/fileData/documents_pi/index.html\r\n[\u7231\u82af\u6d3e]: https://wiki.sipeed.com/hardware/zh/maixIII/ax-pi/axpi.html\r\n[hf-space-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/speaker-diarization\r\n[hf-space-speaker-diarization-cn]: https://hf.qhduan.com/spaces/k2-fsa/speaker-diarization\r\n[hf-space-asr]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition\r\n[hf-space-asr-cn]: https://hf.qhduan.com/spaces/k2-fsa/automatic-speech-recognition\r\n[Whisper]: https://github.com/openai/whisper\r\n[hf-space-asr-whisper]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition-with-whisper\r\n[hf-space-asr-whisper-cn]: https://hf.qhduan.com/spaces/k2-fsa/automatic-speech-recognition-with-whisper\r\n[hf-space-tts]: 
https://huggingface.co/spaces/k2-fsa/text-to-speech\r\n[hf-space-tts-cn]: https://hf.qhduan.com/spaces/k2-fsa/text-to-speech\r\n[hf-space-subtitle]: https://huggingface.co/spaces/k2-fsa/generate-subtitles-for-videos\r\n[hf-space-subtitle-cn]: https://hf.qhduan.com/spaces/k2-fsa/generate-subtitles-for-videos\r\n[hf-space-audio-tagging]: https://huggingface.co/spaces/k2-fsa/audio-tagging\r\n[hf-space-audio-tagging-cn]: https://hf.qhduan.com/spaces/k2-fsa/audio-tagging\r\n[hf-space-source-separation]: https://huggingface.co/spaces/k2-fsa/source-separation\r\n[hf-space-source-separation-cn]: https://hf.qhduan.com/spaces/k2-fsa/source-separation\r\n[hf-space-slid-whisper]: https://huggingface.co/spaces/k2-fsa/spoken-language-identification\r\n[hf-space-slid-whisper-cn]: https://hf.qhduan.com/spaces/k2-fsa/spoken-language-identification\r\n[wasm-hf-vad]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-sherpa-onnx\r\n[wasm-ms-vad]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-sherpa-onnx\r\n[wasm-hf-streaming-asr-zh-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en\r\n[wasm-ms-streaming-asr-zh-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en\r\n[wasm-hf-streaming-asr-zh-en-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer\r\n[wasm-ms-streaming-asr-zh-en-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer\r\n[Paraformer-large]: https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary\r\n[wasm-hf-streaming-asr-zh-en-yue-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer\r\n[wasm-ms-streaming-asr-zh-en-yue-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer\r\n[wasm-hf-streaming-asr-en-zipformer]: 
https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-en\r\n[wasm-ms-streaming-asr-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-en\r\n[SenseVoice]: https://github.com/FunAudioLLM/SenseVoice\r\n[wasm-hf-vad-asr-zh-zipformer-ctc-07-03]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-ctc\r\n[wasm-ms-vad-asr-zh-zipformer-ctc-07-03]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-ctc/summary\r\n[wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-ja-ko-cantonese-sense-voice\r\n[wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-zh-en-jp-ko-cantonese-sense-voice\r\n[wasm-hf-vad-asr-en-whisper-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny\r\n[wasm-ms-vad-asr-en-whisper-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny\r\n[wasm-hf-vad-asr-en-moonshine-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny\r\n[wasm-ms-vad-asr-en-moonshine-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny\r\n[wasm-hf-vad-asr-en-zipformer-gigaspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech\r\n[wasm-ms-vad-asr-en-zipformer-gigaspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech\r\n[wasm-hf-vad-asr-zh-zipformer-wenetspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech\r\n[wasm-ms-vad-asr-zh-zipformer-wenetspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech\r\n[reazonspeech]: 
https://research.reazon.jp/_static/reazonspeech_nlp2023.pdf\r\n[wasm-hf-vad-asr-ja-zipformer-reazonspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-ja-zipformer\r\n[wasm-ms-vad-asr-ja-zipformer-reazonspeech]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-ja-zipformer\r\n[gigaspeech2]: https://github.com/speechcolab/gigaspeech2\r\n[wasm-hf-vad-asr-th-zipformer-gigaspeech2]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-th-zipformer\r\n[wasm-ms-vad-asr-th-zipformer-gigaspeech2]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-th-zipformer\r\n[telespeech-asr]: https://github.com/tele-ai/telespeech-asr\r\n[wasm-hf-vad-asr-zh-telespeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech\r\n[wasm-ms-vad-asr-zh-telespeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech\r\n[wasm-hf-vad-asr-zh-en-paraformer-large]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer\r\n[wasm-ms-vad-asr-zh-en-paraformer-large]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer\r\n[wasm-hf-vad-asr-zh-en-paraformer-small]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small\r\n[wasm-ms-vad-asr-zh-en-paraformer-small]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small\r\n[dolphin]: https://github.com/dataoceanai/dolphin\r\n[wasm-ms-vad-asr-multi-lang-dolphin-base]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-multi-lang-dophin-ctc\r\n[wasm-hf-vad-asr-multi-lang-dolphin-base]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-multi-lang-dophin-ctc\r\n\r\n[wasm-hf-tts-piper-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-en\r\n[wasm-ms-tts-piper-en]: 
https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-en\r\n[wasm-hf-tts-piper-de]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-de\r\n[wasm-ms-tts-piper-de]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-de\r\n[wasm-hf-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/web-assembly-speaker-diarization-sherpa-onnx\r\n[wasm-ms-speaker-diarization]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-speaker-diarization-sherpa-onnx\r\n[apk-speaker-diarization]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk.html\r\n[apk-speaker-diarization-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk-cn.html\r\n[apk-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/android/apk.html\r\n[apk-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-cn.html\r\n[apk-simula-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/android/apk-simulate-streaming-asr.html\r\n[apk-simula-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-simulate-streaming-asr-cn.html\r\n[apk-tts]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html\r\n[apk-tts-cn]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine-cn.html\r\n[apk-vad]: https://k2-fsa.github.io/sherpa/onnx/vad/apk.html\r\n[apk-vad-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-cn.html\r\n[apk-vad-asr]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr.html\r\n[apk-vad-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr-cn.html\r\n[apk-2pass]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass.html\r\n[apk-2pass-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass-cn.html\r\n[apk-at]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk.html\r\n[apk-at-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-cn.html\r\n[apk-at-wearos]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos.html\r\n[apk-at-wearos-cn]: 
https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos-cn.html\r\n[apk-sid]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk.html\r\n[apk-sid-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk-cn.html\r\n[apk-slid]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk.html\r\n[apk-slid-cn]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk-cn.html\r\n[apk-kws]: https://k2-fsa.github.io/sherpa/onnx/kws/apk.html\r\n[apk-kws-cn]: https://k2-fsa.github.io/sherpa/onnx/kws/apk-cn.html\r\n[apk-flutter-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app.html\r\n[apk-flutter-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app-cn.html\r\n[flutter-tts-android]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android.html\r\n[flutter-tts-android-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android-cn.html\r\n[flutter-tts-linux]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux.html\r\n[flutter-tts-linux-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux-cn.html\r\n[flutter-tts-macos-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64.html\r\n[flutter-tts-macos-arm64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64-cn.html\r\n[flutter-tts-macos-arm64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64.html\r\n[flutter-tts-macos-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64-cn.html\r\n[flutter-tts-win-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win.html\r\n[flutter-tts-win-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win-cn.html\r\n[lazarus-subtitle]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles.html\r\n[lazarus-subtitle-cn]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles-cn.html\r\n[asr-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models\r\n[tts-models]: 
https://github.com/k2-fsa/sherpa-onnx/releases/tag/tts-models\r\n[vad-models]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/silero_vad.onnx\r\n[kws-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/kws-models\r\n[at-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/audio-tagging-models\r\n[sid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models\r\n[slid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models\r\n[punct-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/punctuation-models\r\n[speaker-segmentation-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-segmentation-models\r\n[GigaSpeech]: https://github.com/SpeechColab/GigaSpeech\r\n[WenetSpeech]: https://github.com/wenet-e2e/WenetSpeech\r\n[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20.tar.bz2\r\n[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16.tar.bz2\r\n[sherpa-onnx-streaming-zipformer-korean-2024-06-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-korean-2024-06-16.tar.bz2\r\n[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23.tar.bz2\r\n[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-en-20M-2023-02-17.tar.bz2\r\n[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]: 
https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01.tar.bz2\r\n[sherpa-onnx-zipformer-ru-2024-09-18]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ru-2024-09-18.tar.bz2\r\n[sherpa-onnx-zipformer-korean-2024-06-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-korean-2024-06-24.tar.bz2\r\n[sherpa-onnx-zipformer-thai-2024-06-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-thai-2024-06-20.tar.bz2\r\n[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24.tar.bz2\r\n[sherpa-onnx-paraformer-zh-2024-03-09]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-paraformer-zh-2024-03-09.tar.bz2\r\n[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24.tar.bz2\r\n[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04.tar.bz2\r\n[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17.tar.bz2\r\n[sherpa-onnx-streaming-zipformer-fr-2023-04-14]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-fr-2023-04-14.tar.bz2\r\n[Moonshine tiny]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-moonshine-tiny-en-int8.tar.bz2\r\n[NVIDIA Jetson Orin NX]: 
https://developer.download.nvidia.com/assets/embedded/secure/jetson/orin_nx/docs/Jetson_Orin_NX_DS-10712-001_v0.5.pdf?RCPGu9Q6OVAOv7a7vgtwc9-BLScXRIWq6cSLuditMALECJ_dOj27DgnqAPGVnT2VpiNpQan9SyFy-9zRykR58CokzbXwjSA7Gj819e91AXPrWkGZR3oS1VLxiDEpJa_Y0lr7UT-N4GnXtb8NlUkP4GkCkkF_FQivGPrAucCUywL481GH_WpP_p7ziHU1Wg==&t=eyJscyI6ImdzZW8iLCJsc2QiOiJodHRwczovL3d3dy5nb29nbGUuY29tLmhrLyJ9\r\n[NVIDIA Jetson Nano B01]: https://www.seeedstudio.com/blog/2020/01/16/new-revision-of-jetson-nano-dev-kit-now-supports-new-jetson-nano-module/\r\n[speech-enhancement-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speech-enhancement-models\r\n[source-separation-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/source-separation-models\r\n[RK3588]: https://www.rock-chips.com/uploads/pdf/2022.8.26/192/RK3588%20Brief%20Datasheet.pdf\r\n[spleeter]: https://github.com/deezer/spleeter\r\n[UVR]: https://github.com/Anjok07/ultimatevocalremovergui\r\n[gtcrn]: https://github.com/Xiaobin-Rong/gtcrn\r\n[tts-url]: https://k2-fsa.github.io/sherpa/onnx/tts/all-in-one.html\r\n[ss-url]: https://k2-fsa.github.io/sherpa/onnx/source-separation/index.html\r\n[sd-url]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/index.html\r\n[slid-url]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/index.html\r\n[at-url]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/index.html\r\n[vad-url]: https://k2-fsa.github.io/sherpa/onnx/vad/index.html\r\n[kws-url]: https://k2-fsa.github.io/sherpa/onnx/kws/index.html\r\n[punct-url]: https://k2-fsa.github.io/sherpa/onnx/punctuation/index.html\r\n[se-url]: https://k2-fsa.github.io/sherpa/onnx/speech-enhancment/index.html\r\n",
    "bugtrack_url": null,
    "license": "Apache licensed, as found in the LICENSE file",
    "summary": null,
    "version": "1.12.15",
    "project_urls": {
        "Homepage": "https://github.com/k2-fsa/sherpa-onnx"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "06312467e3158bfcb2f894d4c1119e76335ee193151011c1cfffeb2fb9306383",
                "md5": "55e8d708c891de70f5913374ed8b4d98",
                "sha256": "0f1d008441520b53fb3530be7784fc7a4e5b601edd5e50e199830916cc7b257a"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp310-cp310-macosx_10_15_x86_64.whl",
            "has_sig": false,
            "md5_digest": "55e8d708c891de70f5913374ed8b4d98",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.7",
            "size": 2022041,
            "upload_time": "2025-10-22T05:16:41",
            "upload_time_iso_8601": "2025-10-22T05:16:41.610316Z",
            "url": "https://files.pythonhosted.org/packages/06/31/2467e3158bfcb2f894d4c1119e76335ee193151011c1cfffeb2fb9306383/sherpa_onnx-1.12.15-cp310-cp310-macosx_10_15_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "b9e4431ef214eda90306f8ee636e98c559431c9de5a3ac8db07acc2b8a1a6f08",
                "md5": "73da2cc804133ce66822284c8269644c",
                "sha256": "3c324dfb084ad5d1a9904e0017e7cd44358de67a6c68a2abeb21af94cb2b4fd2"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl",
            "has_sig": false,
            "md5_digest": "73da2cc804133ce66822284c8269644c",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.7",
            "size": 3898943,
            "upload_time": "2025-10-22T05:16:27",
            "upload_time_iso_8601": "2025-10-22T05:16:27.726048Z",
            "url": "https://files.pythonhosted.org/packages/b9/e4/431ef214eda90306f8ee636e98c559431c9de5a3ac8db07acc2b8a1a6f08/sherpa_onnx-1.12.15-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "240b8b3fe94535997a02d398e30b22a22eef6f08101ba81e858b00f482a15435",
                "md5": "0ccb4500cd86460a5f9ec0ce6f847153",
                "sha256": "92091f0fb158d452c5abc9140e08687a904d037b4d998b992e88fbaef59a80c3"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp310-cp310-win32.whl",
            "has_sig": false,
            "md5_digest": "0ccb4500cd86460a5f9ec0ce6f847153",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.7",
            "size": 1617681,
            "upload_time": "2025-10-22T05:12:32",
            "upload_time_iso_8601": "2025-10-22T05:12:32.950127Z",
            "url": "https://files.pythonhosted.org/packages/24/0b/8b3fe94535997a02d398e30b22a22eef6f08101ba81e858b00f482a15435/sherpa_onnx-1.12.15-cp310-cp310-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "b6b5f2fbad5ed0ca8159e75014a12599c1b0ccb57b7f80d8dc1b6c9fa9ffaa00",
                "md5": "283ff557a31f381fa12950dc08a95028",
                "sha256": "b16ce5337cad7cd788d52643fe181f8dedbcc1fd83dcaa873f9ffe3a47261a6b"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp311-cp311-macosx_10_15_universal2.whl",
            "has_sig": false,
            "md5_digest": "283ff557a31f381fa12950dc08a95028",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.7",
            "size": 3829341,
            "upload_time": "2025-10-22T05:09:18",
            "upload_time_iso_8601": "2025-10-22T05:09:18.453346Z",
            "url": "https://files.pythonhosted.org/packages/b6/b5/f2fbad5ed0ca8159e75014a12599c1b0ccb57b7f80d8dc1b6c9fa9ffaa00/sherpa_onnx-1.12.15-cp311-cp311-macosx_10_15_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c1cd32b5a9337754b598a29f92a8f5c8d852bc0274b1043749e285d670339802",
                "md5": "70fe22b561b178d0207dd4e44fa81650",
                "sha256": "43fbfa1ef680454bfb9ff6f29529dda93879c772333fc1f245a828eedfaf99be"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp311-cp311-macosx_10_15_x86_64.whl",
            "has_sig": false,
            "md5_digest": "70fe22b561b178d0207dd4e44fa81650",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.7",
            "size": 2023565,
            "upload_time": "2025-10-22T04:54:36",
            "upload_time_iso_8601": "2025-10-22T04:54:36.306980Z",
            "url": "https://files.pythonhosted.org/packages/c1/cd/32b5a9337754b598a29f92a8f5c8d852bc0274b1043749e285d670339802/sherpa_onnx-1.12.15-cp311-cp311-macosx_10_15_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "bf1df32f5c47f3265ce47f51e801c785332c126409bc5407e633f3ea85de3ebf",
                "md5": "ca7f3d101c6f392a47fb2a79039c3044",
                "sha256": "cea4b98c38abcf539e638844a6cd87a9f93bfec1bf1261badf1e7ab94731353a"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp311-cp311-win32.whl",
            "has_sig": false,
            "md5_digest": "ca7f3d101c6f392a47fb2a79039c3044",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.7",
            "size": 1616138,
            "upload_time": "2025-10-22T05:12:40",
            "upload_time_iso_8601": "2025-10-22T05:12:40.390547Z",
            "url": "https://files.pythonhosted.org/packages/bf/1d/f32f5c47f3265ce47f51e801c785332c126409bc5407e633f3ea85de3ebf/sherpa_onnx-1.12.15-cp311-cp311-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "b077c5427046e2478b5cceb17ccf114c9a1c50b668ca22baf5d95341f0d616a1",
                "md5": "e64b3c6b62dd15491ef666a3c98630e3",
                "sha256": "afc2dfe9c39d7e3880e2108ee68db0487f02de22a3490f7ed090fd05ae1c8e94"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp312-cp312-macosx_10_15_universal2.whl",
            "has_sig": false,
            "md5_digest": "e64b3c6b62dd15491ef666a3c98630e3",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.7",
            "size": 3846083,
            "upload_time": "2025-10-22T05:14:28",
            "upload_time_iso_8601": "2025-10-22T05:14:28.287716Z",
            "url": "https://files.pythonhosted.org/packages/b0/77/c5427046e2478b5cceb17ccf114c9a1c50b668ca22baf5d95341f0d616a1/sherpa_onnx-1.12.15-cp312-cp312-macosx_10_15_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "763b14896d8b5360b5ca539ae6e925ad18525437db9aab89faa8d5fbf973f1e7",
                "md5": "3f9e6d4581b7504ade0cc708b8edb3a1",
                "sha256": "e1aabb9b8e1ccb54caed30204dc72379006bf8c609db7e514204ac0546d809a3"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl",
            "has_sig": false,
            "md5_digest": "3f9e6d4581b7504ade0cc708b8edb3a1",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.7",
            "size": 3898710,
            "upload_time": "2025-10-22T05:09:42",
            "upload_time_iso_8601": "2025-10-22T05:09:42.746364Z",
            "url": "https://files.pythonhosted.org/packages/76/3b/14896d8b5360b5ca539ae6e925ad18525437db9aab89faa8d5fbf973f1e7/sherpa_onnx-1.12.15-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3c48263bd202f8616e3641b3c551666e02fd9e4829e38e0f122deb54f359cff7",
                "md5": "a727232a23f08ea83628146cd7e19d1f",
                "sha256": "ec60570f7aec0f77d6ba28b0b4a850ef8caf7e6316d36d912951d3c77386b885"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp312-cp312-win32.whl",
            "has_sig": false,
            "md5_digest": "a727232a23f08ea83628146cd7e19d1f",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.7",
            "size": 1620533,
            "upload_time": "2025-10-22T05:05:52",
            "upload_time_iso_8601": "2025-10-22T05:05:52.981255Z",
            "url": "https://files.pythonhosted.org/packages/3c/48/263bd202f8616e3641b3c551666e02fd9e4829e38e0f122deb54f359cff7/sherpa_onnx-1.12.15-cp312-cp312-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "555489aa6f4d1845e064e610d2f6d45bbae129d4db179941d9a9d98cef600d75",
                "md5": "0bd48e1fe940de7bc1bfed612e2ba3b7",
                "sha256": "030018aa898014f2cb04468fcdf746de1577e3c63b5820b4021a4aeb28e2f50e"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp313-cp313-macosx_10_15_x86_64.whl",
            "has_sig": false,
            "md5_digest": "0bd48e1fe940de7bc1bfed612e2ba3b7",
            "packagetype": "bdist_wheel",
            "python_version": "cp313",
            "requires_python": ">=3.7",
            "size": 2038671,
            "upload_time": "2025-10-22T05:14:53",
            "upload_time_iso_8601": "2025-10-22T05:14:53.434179Z",
            "url": "https://files.pythonhosted.org/packages/55/54/89aa6f4d1845e064e610d2f6d45bbae129d4db179941d9a9d98cef600d75/sherpa_onnx-1.12.15-cp313-cp313-macosx_10_15_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "ffd85f2669800a0a44bf03f750c9517d3f55638ed69d0865ea5a4a6c9bceb764",
                "md5": "9a12c45caa7cbcc89eacb0257353750f",
                "sha256": "d993420fda528e8e94178058040921bfe58934a5b6b56a871f4224af46b67ab8"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp313-cp313-win32.whl",
            "has_sig": false,
            "md5_digest": "9a12c45caa7cbcc89eacb0257353750f",
            "packagetype": "bdist_wheel",
            "python_version": "cp313",
            "requires_python": ">=3.7",
            "size": 1620773,
            "upload_time": "2025-10-22T04:53:13",
            "upload_time_iso_8601": "2025-10-22T04:53:13.306016Z",
            "url": "https://files.pythonhosted.org/packages/ff/d8/5f2669800a0a44bf03f750c9517d3f55638ed69d0865ea5a4a6c9bceb764/sherpa_onnx-1.12.15-cp313-cp313-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "620d26c02d41a3e02c0a40e0ae99661c2215cc80bd4dbce981a78f1bbf833c14",
                "md5": "64f397fafba4c4e710f7b240cc692243",
                "sha256": "313f0e74fe12eae4f25eb0e5badeba6f4317d6d6f3d3114a31b9f5831e5bdc14"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp314-cp314-macosx_10_15_x86_64.whl",
            "has_sig": false,
            "md5_digest": "64f397fafba4c4e710f7b240cc692243",
            "packagetype": "bdist_wheel",
            "python_version": "cp314",
            "requires_python": ">=3.7",
            "size": 2038849,
            "upload_time": "2025-10-22T05:00:13",
            "upload_time_iso_8601": "2025-10-22T05:00:13.814241Z",
            "url": "https://files.pythonhosted.org/packages/62/0d/26c02d41a3e02c0a40e0ae99661c2215cc80bd4dbce981a78f1bbf833c14/sherpa_onnx-1.12.15-cp314-cp314-macosx_10_15_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "a3d29b67c12df61cbd0a51faa8cf8770d9b61c59291ead6494aea992ce0bec81",
                "md5": "7548711dc8527a2eb6224b4646536f82",
                "sha256": "e18f40fc41f89fda0b731f056e314eeeba84f149271d16badd05603fae2055ef"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp314-cp314-win32.whl",
            "has_sig": false,
            "md5_digest": "7548711dc8527a2eb6224b4646536f82",
            "packagetype": "bdist_wheel",
            "python_version": "cp314",
            "requires_python": ">=3.7",
            "size": 1656290,
            "upload_time": "2025-10-22T05:06:19",
            "upload_time_iso_8601": "2025-10-22T05:06:19.800786Z",
            "url": "https://files.pythonhosted.org/packages/a3/d2/9b67c12df61cbd0a51faa8cf8770d9b61c59291ead6494aea992ce0bec81/sherpa_onnx-1.12.15-cp314-cp314-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c484a73642db9af88efe5ca55f5c8d43f59b7d16cc62a68633816ba5b4e88201",
                "md5": "00b7ba29c0d2facc97e13f811ba98a96",
                "sha256": "9c684ef250f8901a076941b885a43809526e85bc19bcd613e67b831b24e993fe"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp38-cp38-win32.whl",
            "has_sig": false,
            "md5_digest": "00b7ba29c0d2facc97e13f811ba98a96",
            "packagetype": "bdist_wheel",
            "python_version": "cp38",
            "requires_python": ">=3.7",
            "size": 1615848,
            "upload_time": "2025-10-22T05:06:32",
            "upload_time_iso_8601": "2025-10-22T05:06:32.835279Z",
            "url": "https://files.pythonhosted.org/packages/c4/84/a73642db9af88efe5ca55f5c8d43f59b7d16cc62a68633816ba5b4e88201/sherpa_onnx-1.12.15-cp38-cp38-win32.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "67451a88ca40ff6955352266232f8ec01fdcc29ecf9a8ad89e66209778c7d66f",
                "md5": "24203e5de82eae4c9739e1b45bbf9c8c",
                "sha256": "488ef9018ae784e60bf1ddf3c6a5302c380408529c55336eb8bdca87e1c7dec1"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp39-cp39-macosx_10_15_universal2.whl",
            "has_sig": false,
            "md5_digest": "24203e5de82eae4c9739e1b45bbf9c8c",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.7",
            "size": 3826074,
            "upload_time": "2025-10-22T05:01:16",
            "upload_time_iso_8601": "2025-10-22T05:01:16.058929Z",
            "url": "https://files.pythonhosted.org/packages/67/45/1a88ca40ff6955352266232f8ec01fdcc29ecf9a8ad89e66209778c7d66f/sherpa_onnx-1.12.15-cp39-cp39-macosx_10_15_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f4b0ad01cb28fa9c923d895143828d14a33889c28ac79d2ae24ac67a87976920",
                "md5": "7246dd18c28b1e73f7cae9a347a94b0f",
                "sha256": "79b524571b52962db04d9b3617141ded24eee37f6beab82d4a6cb7b27af7e924"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl",
            "has_sig": false,
            "md5_digest": "7246dd18c28b1e73f7cae9a347a94b0f",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.7",
            "size": 3899086,
            "upload_time": "2025-10-22T05:13:00",
            "upload_time_iso_8601": "2025-10-22T05:13:00.386397Z",
            "url": "https://files.pythonhosted.org/packages/f4/b0/ad01cb28fa9c923d895143828d14a33889c28ac79d2ae24ac67a87976920/sherpa_onnx-1.12.15-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "5bbddd38f0c573777513e290333d0037ad61a1ce2b8f5d03ce7d28c2a04ff58a",
                "md5": "a07c5ac9f624a824a541d9a1c2b08d9e",
                "sha256": "77c713ac7c684a6450099eacb846327c957c5f79f2f1c4beb31ac6221373996b"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.15-cp39-cp39-win32.whl",
            "has_sig": false,
            "md5_digest": "a07c5ac9f624a824a541d9a1c2b08d9e",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.7",
            "size": 1617894,
            "upload_time": "2025-10-22T05:02:20",
            "upload_time_iso_8601": "2025-10-22T05:02:20.014966Z",
            "url": "https://files.pythonhosted.org/packages/5b/bd/dd38f0c573777513e290333d0037ad61a1ce2b8f5d03ce7d28c2a04ff58a/sherpa_onnx-1.12.15-cp39-cp39-win32.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-10-22 05:16:41",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "k2-fsa",
    "github_project": "sherpa-onnx",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "sherpa-onnx"
}