sherpa-onnx


Namesherpa-onnx JSON
Version 1.12.7 PyPI version JSON
download
home_pagehttps://github.com/k2-fsa/sherpa-onnx
SummaryNone
upload_time2025-07-27 17:17:57
maintainerNone
docs_urlNone
authorThe sherpa-onnx development team
requires_python>=3.6
licenseApache licensed, as found in the LICENSE file
keywords
VCS
bugtrack_url
requirements No requirements were recorded.
Travis-CI No Travis.
coveralls test coverage No coveralls.
            ### Supported functions

|Speech recognition| [Speech synthesis][tts-url] | [Source separation][ss-url] |
|------------------|------------------|-------------------|
|   ✔️              |         ✔️        |       ✔️           |

|Speaker identification| [Speaker diarization][sd-url] | Speaker verification |
|----------------------|-------------------- |------------------------|
|   ✔️                  |         ✔️           |            ✔️           |

| [Spoken Language identification][slid-url] | [Audio tagging][at-url] | [Voice activity detection][vad-url] |
|--------------------------------|---------------|--------------------------|
|                 ✔️              |          ✔️    |                ✔️         |

| [Keyword spotting][kws-url] | [Add punctuation][punct-url] | [Speech enhancement][se-url] |
|------------------|-----------------|--------------------|
|     ✔️            |       ✔️         |      ✔️             |


### Supported platforms

|Architecture| Android | iOS     | Windows    | macOS | linux | HarmonyOS |
|------------|---------|---------|------------|-------|-------|-----------|
|   x64      |  ✔️      |         |   ✔️        | ✔️     |  ✔️    |   ✔️       |
|   x86      |  ✔️      |         |   ✔️        |       |       |           |
|   arm64    |  ✔️      | ✔️       |   ✔️        | ✔️     |  ✔️    |   ✔️       |
|   arm32    |  ✔️      |         |            |       |  ✔️    |   ✔️       |
|   riscv64  |         |         |            |       |  ✔️    |           |

### Supported programming languages

| 1. C++ | 2. C  | 3. Python | 4. JavaScript |
|--------|-------|-----------|---------------|
|   ✔️    | ✔️     | ✔️         |    ✔️          |

|5. Java | 6. C# | 7. Kotlin | 8. Swift |
|--------|-------|-----------|----------|
| ✔️      |  ✔️    | ✔️         |  ✔️       |

| 9. Go | 10. Dart | 11. Rust | 12. Pascal |
|-------|----------|----------|------------|
| ✔️     |  ✔️       |   ✔️      |    ✔️       |

For Rust support, please see [sherpa-rs][sherpa-rs]

It also supports WebAssembly.

## Introduction

This repository supports running the following functions **locally**

  - Speech-to-text (i.e., ASR); both streaming and non-streaming are supported
  - Text-to-speech (i.e., TTS)
  - Speaker diarization
  - Speaker identification
  - Speaker verification
  - Spoken language identification
  - Audio tagging
  - VAD (e.g., [silero-vad][silero-vad])
  - Speech enhancement (e.g., [gtcrn][gtcrn])
  - Keyword spotting
  - Source separation (e.g., [spleeter][spleeter], [UVR][UVR])

on the following platforms and operating systems:

  - x86, ``x86_64``, 32-bit ARM, 64-bit ARM (arm64, aarch64), RISC-V (riscv64), **RK NPU**
  - Linux, macOS, Windows, openKylin
  - Android, WearOS
  - iOS
  - HarmonyOS
  - NodeJS
  - WebAssembly
  - [NVIDIA Jetson Orin NX][NVIDIA Jetson Orin NX] (Support running on both CPU and GPU)
  - [NVIDIA Jetson Nano B01][NVIDIA Jetson Nano B01] (Support running on both CPU and GPU)
  - [Raspberry Pi][Raspberry Pi]
  - [RV1126][RV1126]
  - [LicheePi4A][LicheePi4A]
  - [VisionFive 2][VisionFive 2]
  - [旭日X3派][旭日X3派]
  - [爱芯派][爱芯派]
  - [RK3588][RK3588]
  - etc

with the following APIs

  - C++, C, Python, Go, ``C#``
  - Java, Kotlin, JavaScript
  - Swift, Rust
  - Dart, Object Pascal

### Links for Huggingface Spaces

<details>
<summary>You can visit the following Huggingface spaces to try sherpa-onnx without
installing anything. All you need is a browser.</summary>

| Description                                           | URL                                     | 中国镜像                               |
|-------------------------------------------------------|-----------------------------------------|----------------------------------------|
| Speaker diarization                                   | [Click me][hf-space-speaker-diarization]| [镜像][hf-space-speaker-diarization-cn]|
| Speech recognition                                    | [Click me][hf-space-asr]                | [镜像][hf-space-asr-cn]                |
| Speech recognition with [Whisper][Whisper]            | [Click me][hf-space-asr-whisper]        | [镜像][hf-space-asr-whisper-cn]        |
| Speech synthesis                                      | [Click me][hf-space-tts]                | [镜像][hf-space-tts-cn]                |
| Generate subtitles                                    | [Click me][hf-space-subtitle]           | [镜像][hf-space-subtitle-cn]           |
| Audio tagging                                         | [Click me][hf-space-audio-tagging]      | [镜像][hf-space-audio-tagging-cn]      |
| Source separation                                     | [Click me][hf-space-source-separation]  | [镜像][hf-space-source-separation-cn]  |
| Spoken language identification with [Whisper][Whisper]| [Click me][hf-space-slid-whisper]       | [镜像][hf-space-slid-whisper-cn]       |

We also have spaces built using WebAssembly. They are listed below:

| Description                                                                              | Huggingface space| ModelScope space|
|------------------------------------------------------------------------------------------|------------------|-----------------|
|Voice activity detection with [silero-vad][silero-vad]                                    | [Click me][wasm-hf-vad]|[地址][wasm-ms-vad]|
|Real-time speech recognition (Chinese + English) with Zipformer                           | [Click me][wasm-hf-streaming-asr-zh-en-zipformer]|[地址][wasm-hf-streaming-asr-zh-en-zipformer]|
|Real-time speech recognition (Chinese + English) with Paraformer                          |[Click me][wasm-hf-streaming-asr-zh-en-paraformer]| [地址][wasm-ms-streaming-asr-zh-en-paraformer]|
|Real-time speech recognition (Chinese + English + Cantonese) with [Paraformer-large][Paraformer-large]|[Click me][wasm-hf-streaming-asr-zh-en-yue-paraformer]| [地址][wasm-ms-streaming-asr-zh-en-yue-paraformer]|
|Real-time speech recognition (English) |[Click me][wasm-hf-streaming-asr-en-zipformer]    |[地址][wasm-ms-streaming-asr-en-zipformer]|
|VAD + speech recognition (Chinese) with [Zipformer CTC](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/icefall/zipformer.html#sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03-chinese)|[Click me][wasm-hf-vad-asr-zh-zipformer-ctc-07-03]| [地址][wasm-ms-vad-asr-zh-zipformer-ctc-07-03]|
|VAD + speech recognition (Chinese + English + Korean + Japanese + Cantonese) with [SenseVoice][SenseVoice]|[Click me][wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]| [地址][wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]|
|VAD + speech recognition (English) with [Whisper][Whisper] tiny.en|[Click me][wasm-hf-vad-asr-en-whisper-tiny-en]| [地址][wasm-ms-vad-asr-en-whisper-tiny-en]|
|VAD + speech recognition (English) with [Moonshine tiny][Moonshine tiny]|[Click me][wasm-hf-vad-asr-en-moonshine-tiny-en]| [地址][wasm-ms-vad-asr-en-moonshine-tiny-en]|
|VAD + speech recognition (English) with Zipformer trained with [GigaSpeech][GigaSpeech]    |[Click me][wasm-hf-vad-asr-en-zipformer-gigaspeech]| [地址][wasm-ms-vad-asr-en-zipformer-gigaspeech]|
|VAD + speech recognition (Chinese) with Zipformer trained with [WenetSpeech][WenetSpeech]  |[Click me][wasm-hf-vad-asr-zh-zipformer-wenetspeech]| [地址][wasm-ms-vad-asr-zh-zipformer-wenetspeech]|
|VAD + speech recognition (Japanese) with Zipformer trained with [ReazonSpeech][ReazonSpeech]|[Click me][wasm-hf-vad-asr-ja-zipformer-reazonspeech]| [地址][wasm-ms-vad-asr-ja-zipformer-reazonspeech]|
|VAD + speech recognition (Thai) with Zipformer trained with [GigaSpeech2][GigaSpeech2]      |[Click me][wasm-hf-vad-asr-th-zipformer-gigaspeech2]| [地址][wasm-ms-vad-asr-th-zipformer-gigaspeech2]|
|VAD + speech recognition (Chinese 多种方言) with a [TeleSpeech-ASR][TeleSpeech-ASR] CTC model|[Click me][wasm-hf-vad-asr-zh-telespeech]| [地址][wasm-ms-vad-asr-zh-telespeech]|
|VAD + speech recognition (English + Chinese, 及多种中文方言) with Paraformer-large          |[Click me][wasm-hf-vad-asr-zh-en-paraformer-large]| [地址][wasm-ms-vad-asr-zh-en-paraformer-large]|
|VAD + speech recognition (English + Chinese, 及多种中文方言) with Paraformer-small          |[Click me][wasm-hf-vad-asr-zh-en-paraformer-small]| [地址][wasm-ms-vad-asr-zh-en-paraformer-small]|
|VAD + speech recognition (多语种及多种中文方言) with [Dolphin][Dolphin]-base          |[Click me][wasm-hf-vad-asr-multi-lang-dolphin-base]| [地址][wasm-ms-vad-asr-multi-lang-dolphin-base]|
|Speech synthesis (English)                                                                  |[Click me][wasm-hf-tts-piper-en]| [地址][wasm-ms-tts-piper-en]|
|Speech synthesis (German)                                                                   |[Click me][wasm-hf-tts-piper-de]| [地址][wasm-ms-tts-piper-de]|
|Speaker diarization                                                                         |[Click me][wasm-hf-speaker-diarization]|[地址][wasm-ms-speaker-diarization]|

</details>

### Links for pre-built Android APKs

<details>

<summary>You can find pre-built Android APKs for this repository in the following table</summary>

| Description                            | URL                                | 中国用户                          |
|----------------------------------------|------------------------------------|-----------------------------------|
| Speaker diarization                    | [Address][apk-speaker-diarization] | [点此][apk-speaker-diarization-cn]|
| Streaming speech recognition           | [Address][apk-streaming-asr]       | [点此][apk-streaming-asr-cn]      |
| Simulated-streaming speech recognition | [Address][apk-simula-streaming-asr]| [点此][apk-simula-streaming-asr-cn]|
| Text-to-speech                         | [Address][apk-tts]                 | [点此][apk-tts-cn]                |
| Voice activity detection (VAD)         | [Address][apk-vad]                 | [点此][apk-vad-cn]                |
| VAD + non-streaming speech recognition | [Address][apk-vad-asr]             | [点此][apk-vad-asr-cn]            |
| Two-pass speech recognition            | [Address][apk-2pass]               | [点此][apk-2pass-cn]              |
| Audio tagging                          | [Address][apk-at]                  | [点此][apk-at-cn]                 |
| Audio tagging (WearOS)                 | [Address][apk-at-wearos]           | [点此][apk-at-wearos-cn]          |
| Speaker identification                 | [Address][apk-sid]                 | [点此][apk-sid-cn]                |
| Spoken language identification         | [Address][apk-slid]                | [点此][apk-slid-cn]               |
| Keyword spotting                       | [Address][apk-kws]                 | [点此][apk-kws-cn]                |

</details>

### Links for pre-built Flutter APPs

<details>

#### Real-time speech recognition

| Description                    | URL                                 | 中国用户                            |
|--------------------------------|-------------------------------------|-------------------------------------|
| Streaming speech recognition   | [Address][apk-flutter-streaming-asr]| [点此][apk-flutter-streaming-asr-cn]|

#### Text-to-speech

| Description                              | URL                                | 中国用户                           |
|------------------------------------------|------------------------------------|------------------------------------|
| Android (arm64-v8a, armeabi-v7a, x86_64) | [Address][flutter-tts-android]     | [点此][flutter-tts-android-cn]     |
| Linux (x64)                              | [Address][flutter-tts-linux]       | [点此][flutter-tts-linux-cn]       |
| macOS (x64)                              | [Address][flutter-tts-macos-x64]   | [点此][flutter-tts-macos-arm64-cn] |
| macOS (arm64)                            | [Address][flutter-tts-macos-arm64] | [点此][flutter-tts-macos-x64-cn]   |
| Windows (x64)                            | [Address][flutter-tts-win-x64]     | [点此][flutter-tts-win-x64-cn]     |

> Note: You need to build from source for iOS.

</details>

### Links for pre-built Lazarus APPs

<details>

#### Generating subtitles

| Description                    | URL                        | 中国用户                   |
|--------------------------------|----------------------------|----------------------------|
| Generate subtitles (生成字幕)  | [Address][lazarus-subtitle]| [点此][lazarus-subtitle-cn]|

</details>

### Links for pre-trained models

<details>

| Description                                 | URL                                                                                   |
|---------------------------------------------|---------------------------------------------------------------------------------------|
| Speech recognition (speech to text, ASR)    | [Address][asr-models]                                                                 |
| Text-to-speech (TTS)                        | [Address][tts-models]                                                                 |
| VAD                                         | [Address][vad-models]                                                                 |
| Keyword spotting                            | [Address][kws-models]                                                                 |
| Audio tagging                               | [Address][at-models]                                                                  |
| Speaker identification (Speaker ID)         | [Address][sid-models]                                                                 |
| Spoken language identification (Language ID)| See multi-lingual [Whisper][Whisper] ASR models from  [Speech recognition][asr-models]|
| Punctuation                                 | [Address][punct-models]                                                               |
| Speaker segmentation                        | [Address][speaker-segmentation-models]                                                |
| Speech enhancement                          | [Address][speech-enhancement-models]                                                  |
| Source separation                           | [Address][source-separation-models]                                                  |

</details>

#### Some pre-trained ASR models (Streaming)

<details>

Please see

  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-paraformer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-ctc/index.html>

for more models. The following table lists only **SOME** of them.


|Name | Supported Languages| Description|
|-----|-----|----|
|[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20][sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#csukuangfj-sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20-bilingual-chinese-english)|
|[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16][sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16-bilingual-chinese-english)|
|[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23][sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]|Chinese| Suitable for Cortex A7 CPU. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-zh-14m-2023-02-23)|
|[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17][sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]|English|Suitable for Cortex A7 CPU. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-en-20m-2023-02-17)|
|[sherpa-onnx-streaming-zipformer-korean-2024-06-16][sherpa-onnx-streaming-zipformer-korean-2024-06-16]|Korean| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-korean-2024-06-16-korean)|
|[sherpa-onnx-streaming-zipformer-fr-2023-04-14][sherpa-onnx-streaming-zipformer-fr-2023-04-14]|French| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#shaojieli-sherpa-onnx-streaming-zipformer-fr-2023-04-14-french)|

</details>


#### Some pre-trained ASR models (Non-Streaming)

<details>

Please see

  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/index.html>

for more models. The following table lists only **SOME** of them.

|Name | Supported Languages| Description|
|-----|-----|----|
|[sherpa-onnx-nemo-parakeet-tdt-0.6b-v2-int8](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/nemo-transducer-models.html#sherpa-onnx-nemo-parakeet-tdt-0-6b-v2-int8-english)| English | It is converted from <https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2>|
|[Whisper tiny.en](https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-whisper-tiny.en.tar.bz2)|English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/tiny.en.html)|
|[Moonshine tiny][Moonshine tiny]|English|See [also](https://github.com/usefulsensors/moonshine)|
|[sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/icefall/zipformer.html#sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03-chinese)|Chinese| A Zipformer CTC model|
|[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17][sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]|Chinese, Cantonese, English, Korean, Japanese| 支持多种中文方言. See [also](https://k2-fsa.github.io/sherpa/onnx/sense-voice/index.html)|
|[sherpa-onnx-paraformer-zh-2024-03-09][sherpa-onnx-paraformer-zh-2024-03-09]|Chinese, English| 也支持多种中文方言. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/paraformer-models.html#csukuangfj-sherpa-onnx-paraformer-zh-2024-03-09-chinese-english)|
|[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01][sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]|Japanese|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01-japanese)|
|[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24][sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/nemo-transducer-models.html#sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24-russian)|
|[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24][sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]|Russian| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/nemo/russian.html#sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24)|
|[sherpa-onnx-zipformer-ru-2024-09-18][sherpa-onnx-zipformer-ru-2024-09-18]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ru-2024-09-18-russian)|
|[sherpa-onnx-zipformer-korean-2024-06-24][sherpa-onnx-zipformer-korean-2024-06-24]|Korean|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-korean-2024-06-24-korean)|
|[sherpa-onnx-zipformer-thai-2024-06-20][sherpa-onnx-zipformer-thai-2024-06-20]|Thai| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-thai-2024-06-20-thai)|
|[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04][sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]|Chinese| 支持多种方言. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/models.html#sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04)|

</details>

### Useful links

- Documentation: https://k2-fsa.github.io/sherpa/onnx/
- Bilibili 演示视频: https://search.bilibili.com/all?keyword=%E6%96%B0%E4%B8%80%E4%BB%A3Kaldi

### How to reach us

Please see
https://k2-fsa.github.io/sherpa/social-groups.html
for 新一代 Kaldi **微信交流群** and **QQ 交流群**.

## Projects using sherpa-onnx

### [BreezeApp](https://github.com/mtkresearch/BreezeApp) from [MediaTek Research](https://github.com/mtkresearch)

> BreezeAPP is a mobile AI application developed for both Android and iOS platforms.
> Users can download it directly from the App Store and enjoy a variety of features
> offline, including speech-to-text, text-to-speech, text-based chatbot interactions,
> and image question-answering

  - [Download APK for BreezeAPP](https://huggingface.co/MediaTek-Research/BreezeApp/resolve/main/BreezeApp.apk)
  - [APK 中国镜像](https://hf-mirror.com/MediaTek-Research/BreezeApp/blob/main/BreezeApp.apk)

| 1 | 2 | 3 |
|---|---|---|
|![](https://github.com/user-attachments/assets/1cdbc057-b893-4de6-9e9c-f1d7dfd1d992)|![](https://github.com/user-attachments/assets/d77cd98e-b057-442f-860d-d5befd5c769b)|![](https://github.com/user-attachments/assets/57e546bf-3d39-45b9-b392-b48ca4fb3c58)|

### [Open-LLM-VTuber](https://github.com/t41372/Open-LLM-VTuber)

Talk to any LLM with hands-free voice interaction, voice interruption, and Live2D taking
face running locally across platforms

See also <https://github.com/t41372/Open-LLM-VTuber/pull/50>

### [voiceapi](https://github.com/ruzhila/voiceapi)

<details>
  <summary>Streaming ASR and TTS based on FastAPI</summary>


It shows how to use the ASR and TTS Python APIs with FastAPI.
</details>

### [腾讯会议摸鱼工具 TMSpeech](https://github.com/jxlpzqc/TMSpeech)

Uses streaming ASR in C# with graphical user interface.

Video demo in Chinese: [【开源】Windows实时字幕软件(网课/开会必备)](https://www.bilibili.com/video/BV1rX4y1p7Nx)

### [lol互动助手](https://github.com/l1veIn/lol-wom-electron)

It uses the JavaScript API of sherpa-onnx along with [Electron](https://electronjs.org/)

Video demo in Chinese: [爆了!炫神教你开打字挂!真正影响胜率的英雄联盟工具!英雄联盟的最后一块拼图!和游戏中的每个人无障碍沟通!](https://www.bilibili.com/video/BV142tje9E74)

### [Sherpa-ONNX 语音识别服务器](https://github.com/hfyydd/sherpa-onnx-server)

A server based on nodejs providing Restful API for speech recognition.

### [QSmartAssistant](https://github.com/xinhecuican/QSmartAssistant)

一个模块化,全过程可离线,低占用率的对话机器人/智能音箱

It uses QT. Both [ASR](https://github.com/xinhecuican/QSmartAssistant/blob/master/doc/%E5%AE%89%E8%A3%85.md#asr)
and [TTS](https://github.com/xinhecuican/QSmartAssistant/blob/master/doc/%E5%AE%89%E8%A3%85.md#tts)
are used.

### [Flutter-EasySpeechRecognition](https://github.com/Jason-chen-coder/Flutter-EasySpeechRecognition)

It extends [./flutter-examples/streaming_asr](./flutter-examples/streaming_asr) by
downloading models inside the app to reduce the size of the app.

Note: [[Team B] Sherpa AI backend](https://github.com/umgc/spring2025/pull/82) also uses
sherpa-onnx in a Flutter APP.

### [sherpa-onnx-unity](https://github.com/xue-fei/sherpa-onnx-unity)

sherpa-onnx in Unity. See also [#1695](https://github.com/k2-fsa/sherpa-onnx/issues/1695),
[#1892](https://github.com/k2-fsa/sherpa-onnx/issues/1892), and [#1859](https://github.com/k2-fsa/sherpa-onnx/issues/1859)

### [xiaozhi-esp32-server](https://github.com/xinnan-tech/xiaozhi-esp32-server)

本项目为xiaozhi-esp32提供后端服务,帮助您快速搭建ESP32设备控制服务器
Backend service for xiaozhi-esp32, helps you quickly build an ESP32 device control server.

See also

  - [ASR新增轻量级sherpa-onnx-asr](https://github.com/xinnan-tech/xiaozhi-esp32-server/issues/315)
  - [feat: ASR增加sherpa-onnx模型](https://github.com/xinnan-tech/xiaozhi-esp32-server/pull/379)

### [KaithemAutomation](https://github.com/EternityForest/KaithemAutomation)

Pure Python, GUI-focused home automation/consumer grade SCADA.

It uses TTS from sherpa-onnx. See also [✨ Speak command that uses the new globally configured TTS model.](https://github.com/EternityForest/KaithemAutomation/commit/8e64d2b138725e426532f7d66bb69dd0b4f53693)

### [Open-XiaoAI KWS](https://github.com/idootop/open-xiaoai-kws)

Enable custom wake word for XiaoAi Speakers. 让小爱音箱支持自定义唤醒词。

Video demo in Chinese: [小爱同学启动~˶╹ꇴ╹˶!](https://www.bilibili.com/video/BV1YfVUz5EMj)

### [C++ WebSocket ASR Server](https://github.com/mawwalker/stt-server)

It provides a WebSocket server based on C++ for ASR using sherpa-onnx.

### [Go WebSocket Server](https://github.com/bbeyondllove/asr_server)

It provides a WebSocket server based on the Go programming language for sherpa-onnx.

### [Making robot Paimon, Ep10 "The AI Part 1"](https://www.youtube.com/watch?v=KxPKkwxGWZs)

It is a [YouTube video](https://www.youtube.com/watch?v=KxPKkwxGWZs),
showing how the author tried to use AI so he can have a conversation with Paimon.

It uses sherpa-onnx for speech-to-text and text-to-speech.
|1|
|---|
|![](https://github.com/user-attachments/assets/f6eea2d5-1807-42cb-9160-be8da2971e1f)|

[sherpa-rs]: https://github.com/thewh1teagle/sherpa-rs
[silero-vad]: https://github.com/snakers4/silero-vad
[Raspberry Pi]: https://www.raspberrypi.com/
[RV1126]: https://www.rock-chips.com/uploads/pdf/2022.8.26/191/RV1126%20Brief%20Datasheet.pdf
[LicheePi4A]: https://sipeed.com/licheepi4a
[VisionFive 2]: https://www.starfivetech.com/en/site/boards
[旭日X3派]: https://developer.horizon.ai/api/v1/fileData/documents_pi/index.html
[爱芯派]: https://wiki.sipeed.com/hardware/zh/maixIII/ax-pi/axpi.html
[hf-space-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/speaker-diarization
[hf-space-speaker-diarization-cn]: https://hf.qhduan.com/spaces/k2-fsa/speaker-diarization
[hf-space-asr]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition
[hf-space-asr-cn]: https://hf.qhduan.com/spaces/k2-fsa/automatic-speech-recognition
[Whisper]: https://github.com/openai/whisper
[hf-space-asr-whisper]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition-with-whisper
[hf-space-asr-whisper-cn]: https://hf.qhduan.com/spaces/k2-fsa/automatic-speech-recognition-with-whisper
[hf-space-tts]: https://huggingface.co/spaces/k2-fsa/text-to-speech
[hf-space-tts-cn]: https://hf.qhduan.com/spaces/k2-fsa/text-to-speech
[hf-space-subtitle]: https://huggingface.co/spaces/k2-fsa/generate-subtitles-for-videos
[hf-space-subtitle-cn]: https://hf.qhduan.com/spaces/k2-fsa/generate-subtitles-for-videos
[hf-space-audio-tagging]: https://huggingface.co/spaces/k2-fsa/audio-tagging
[hf-space-audio-tagging-cn]: https://hf.qhduan.com/spaces/k2-fsa/audio-tagging
[hf-space-source-separation]: https://huggingface.co/spaces/k2-fsa/source-separation
[hf-space-source-separation-cn]: https://hf.qhduan.com/spaces/k2-fsa/source-separation
[hf-space-slid-whisper]: https://huggingface.co/spaces/k2-fsa/spoken-language-identification
[hf-space-slid-whisper-cn]: https://hf.qhduan.com/spaces/k2-fsa/spoken-language-identification
[wasm-hf-vad]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-sherpa-onnx
[wasm-ms-vad]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-sherpa-onnx
[wasm-hf-streaming-asr-zh-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en
[wasm-ms-streaming-asr-zh-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en
[wasm-hf-streaming-asr-zh-en-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer
[wasm-ms-streaming-asr-zh-en-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer
[Paraformer-large]: https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary
[wasm-hf-streaming-asr-zh-en-yue-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer
[wasm-ms-streaming-asr-zh-en-yue-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer
[wasm-hf-streaming-asr-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-en
[wasm-ms-streaming-asr-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-en
[SenseVoice]: https://github.com/FunAudioLLM/SenseVoice
[wasm-hf-vad-asr-zh-zipformer-ctc-07-03]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-ctc
[wasm-ms-vad-asr-zh-zipformer-ctc-07-03]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-ctc/summary
[wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-ja-ko-cantonese-sense-voice
[wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-zh-en-jp-ko-cantonese-sense-voice
[wasm-hf-vad-asr-en-whisper-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny
[wasm-ms-vad-asr-en-whisper-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny
[wasm-hf-vad-asr-en-moonshine-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny
[wasm-ms-vad-asr-en-moonshine-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny
[wasm-hf-vad-asr-en-zipformer-gigaspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech
[wasm-ms-vad-asr-en-zipformer-gigaspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech
[wasm-hf-vad-asr-zh-zipformer-wenetspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech
[wasm-ms-vad-asr-zh-zipformer-wenetspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech
[reazonspeech]: https://research.reazon.jp/_static/reazonspeech_nlp2023.pdf
[wasm-hf-vad-asr-ja-zipformer-reazonspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-ja-zipformer
[wasm-ms-vad-asr-ja-zipformer-reazonspeech]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-ja-zipformer
[gigaspeech2]: https://github.com/speechcolab/gigaspeech2
[wasm-hf-vad-asr-th-zipformer-gigaspeech2]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-th-zipformer
[wasm-ms-vad-asr-th-zipformer-gigaspeech2]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-th-zipformer
[telespeech-asr]: https://github.com/tele-ai/telespeech-asr
[wasm-hf-vad-asr-zh-telespeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech
[wasm-ms-vad-asr-zh-telespeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech
[wasm-hf-vad-asr-zh-en-paraformer-large]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer
[wasm-ms-vad-asr-zh-en-paraformer-large]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer
[wasm-hf-vad-asr-zh-en-paraformer-small]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small
[wasm-ms-vad-asr-zh-en-paraformer-small]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small
[dolphin]: https://github.com/dataoceanai/dolphin
[wasm-ms-vad-asr-multi-lang-dolphin-base]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-multi-lang-dophin-ctc
[wasm-hf-vad-asr-multi-lang-dolphin-base]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-multi-lang-dophin-ctc

[wasm-hf-tts-piper-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-en
[wasm-ms-tts-piper-en]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-en
[wasm-hf-tts-piper-de]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-de
[wasm-ms-tts-piper-de]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-de
[wasm-hf-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/web-assembly-speaker-diarization-sherpa-onnx
[wasm-ms-speaker-diarization]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-speaker-diarization-sherpa-onnx
[apk-speaker-diarization]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk.html
[apk-speaker-diarization-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk-cn.html
[apk-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/android/apk.html
[apk-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-cn.html
[apk-simula-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/android/apk-simulate-streaming-asr.html
[apk-simula-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-simulate-streaming-asr-cn.html
[apk-tts]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
[apk-tts-cn]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine-cn.html
[apk-vad]: https://k2-fsa.github.io/sherpa/onnx/vad/apk.html
[apk-vad-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-cn.html
[apk-vad-asr]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr.html
[apk-vad-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr-cn.html
[apk-2pass]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass.html
[apk-2pass-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass-cn.html
[apk-at]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk.html
[apk-at-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-cn.html
[apk-at-wearos]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos.html
[apk-at-wearos-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos-cn.html
[apk-sid]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk.html
[apk-sid-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk-cn.html
[apk-slid]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk.html
[apk-slid-cn]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk-cn.html
[apk-kws]: https://k2-fsa.github.io/sherpa/onnx/kws/apk.html
[apk-kws-cn]: https://k2-fsa.github.io/sherpa/onnx/kws/apk-cn.html
[apk-flutter-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app.html
[apk-flutter-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app-cn.html
[flutter-tts-android]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android.html
[flutter-tts-android-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android-cn.html
[flutter-tts-linux]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux.html
[flutter-tts-linux-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux-cn.html
[flutter-tts-macos-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64.html
[flutter-tts-macos-arm64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64-cn.html
[flutter-tts-macos-arm64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64.html
[flutter-tts-macos-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64-cn.html
[flutter-tts-win-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win.html
[flutter-tts-win-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win-cn.html
[lazarus-subtitle]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles.html
[lazarus-subtitle-cn]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles-cn.html
[asr-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models
[tts-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/tts-models
[vad-models]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/silero_vad.onnx
[kws-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/kws-models
[at-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/audio-tagging-models
[sid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models
[slid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models
[punct-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/punctuation-models
[speaker-segmentation-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-segmentation-models
[GigaSpeech]: https://github.com/SpeechColab/GigaSpeech
[WenetSpeech]: https://github.com/wenet-e2e/WenetSpeech
[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20.tar.bz2
[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16.tar.bz2
[sherpa-onnx-streaming-zipformer-korean-2024-06-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-korean-2024-06-16.tar.bz2
[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23.tar.bz2
[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-en-20M-2023-02-17.tar.bz2
[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01.tar.bz2
[sherpa-onnx-zipformer-ru-2024-09-18]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ru-2024-09-18.tar.bz2
[sherpa-onnx-zipformer-korean-2024-06-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-korean-2024-06-24.tar.bz2
[sherpa-onnx-zipformer-thai-2024-06-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-thai-2024-06-20.tar.bz2
[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24.tar.bz2
[sherpa-onnx-paraformer-zh-2024-03-09]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-paraformer-zh-2024-03-09.tar.bz2
[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24.tar.bz2
[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04.tar.bz2
[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17.tar.bz2
[sherpa-onnx-streaming-zipformer-fr-2023-04-14]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-fr-2023-04-14.tar.bz2
[Moonshine tiny]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-moonshine-tiny-en-int8.tar.bz2
[NVIDIA Jetson Orin NX]: https://developer.download.nvidia.com/assets/embedded/secure/jetson/orin_nx/docs/Jetson_Orin_NX_DS-10712-001_v0.5.pdf?RCPGu9Q6OVAOv7a7vgtwc9-BLScXRIWq6cSLuditMALECJ_dOj27DgnqAPGVnT2VpiNpQan9SyFy-9zRykR58CokzbXwjSA7Gj819e91AXPrWkGZR3oS1VLxiDEpJa_Y0lr7UT-N4GnXtb8NlUkP4GkCkkF_FQivGPrAucCUywL481GH_WpP_p7ziHU1Wg==&t=eyJscyI6ImdzZW8iLCJsc2QiOiJodHRwczovL3d3dy5nb29nbGUuY29tLmhrLyJ9
[NVIDIA Jetson Nano B01]: https://www.seeedstudio.com/blog/2020/01/16/new-revision-of-jetson-nano-dev-kit-now-supports-new-jetson-nano-module/
[speech-enhancement-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speech-enhancement-models
[source-separation-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/source-separation-models
[RK3588]: https://www.rock-chips.com/uploads/pdf/2022.8.26/192/RK3588%20Brief%20Datasheet.pdf
[spleeter]: https://github.com/deezer/spleeter
[UVR]: https://github.com/Anjok07/ultimatevocalremovergui
[gtcrn]: https://github.com/Xiaobin-Rong/gtcrn
[tts-url]: https://k2-fsa.github.io/sherpa/onnx/tts/all-in-one.html
[ss-url]: https://k2-fsa.github.io/sherpa/onnx/source-separation/index.html
[sd-url]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/index.html
[slid-url]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/index.html
[at-url]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/index.html
[vad-url]: https://k2-fsa.github.io/sherpa/onnx/vad/index.html
[kws-url]: https://k2-fsa.github.io/sherpa/onnx/kws/index.html
[punct-url]: https://k2-fsa.github.io/sherpa/onnx/punctuation/index.html
[se-url]: https://k2-fsa.github.io/sherpa/onnx/speech-enhancment/index.html

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/k2-fsa/sherpa-onnx",
    "name": "sherpa-onnx",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.6",
    "maintainer_email": null,
    "keywords": null,
    "author": "The sherpa-onnx development team",
    "author_email": "dpovey@gmail.com",
    "download_url": null,
    "platform": null,
    "description": "### Supported functions\n\n|Speech recognition| [Speech synthesis][tts-url] | [Source separation][ss-url] |\n|------------------|------------------|-------------------|\n|   \u2714\ufe0f              |         \u2714\ufe0f        |       \u2714\ufe0f           |\n\n|Speaker identification| [Speaker diarization][sd-url] | Speaker verification |\n|----------------------|-------------------- |------------------------|\n|   \u2714\ufe0f                  |         \u2714\ufe0f           |            \u2714\ufe0f           |\n\n| [Spoken Language identification][slid-url] | [Audio tagging][at-url] | [Voice activity detection][vad-url] |\n|--------------------------------|---------------|--------------------------|\n|                 \u2714\ufe0f              |          \u2714\ufe0f    |                \u2714\ufe0f         |\n\n| [Keyword spotting][kws-url] | [Add punctuation][punct-url] | [Speech enhancement][se-url] |\n|------------------|-----------------|--------------------|\n|     \u2714\ufe0f            |       \u2714\ufe0f         |      \u2714\ufe0f             |\n\n\n### Supported platforms\n\n|Architecture| Android | iOS     | Windows    | macOS | linux | HarmonyOS |\n|------------|---------|---------|------------|-------|-------|-----------|\n|   x64      |  \u2714\ufe0f      |         |   \u2714\ufe0f        | \u2714\ufe0f     |  \u2714\ufe0f    |   \u2714\ufe0f       |\n|   x86      |  \u2714\ufe0f      |         |   \u2714\ufe0f        |       |       |           |\n|   arm64    |  \u2714\ufe0f      | \u2714\ufe0f       |   \u2714\ufe0f        | \u2714\ufe0f     |  \u2714\ufe0f    |   \u2714\ufe0f       |\n|   arm32    |  \u2714\ufe0f      |         |            |       |  \u2714\ufe0f    |   \u2714\ufe0f       |\n|   riscv64  |         |         |            |       |  \u2714\ufe0f    |           |\n\n### Supported programming languages\n\n| 1. C++ | 2. C  | 3. Python | 4. JavaScript |\n|--------|-------|-----------|---------------|\n|   \u2714\ufe0f    | \u2714\ufe0f     | \u2714\ufe0f         |    \u2714\ufe0f          |\n\n|5. Java | 6. C# | 7. Kotlin | 8. Swift |\n|--------|-------|-----------|----------|\n| \u2714\ufe0f      |  \u2714\ufe0f    | \u2714\ufe0f         |  \u2714\ufe0f       |\n\n| 9. Go | 10. Dart | 11. Rust | 12. 
Pascal |\n|-------|----------|----------|------------|\n| \u2714\ufe0f     |  \u2714\ufe0f       |   \u2714\ufe0f      |    \u2714\ufe0f       |\n\nFor Rust support, please see [sherpa-rs][sherpa-rs]\n\nIt also supports WebAssembly.\n\n## Introduction\n\nThis repository supports running the following functions **locally**\n\n  - Speech-to-text (i.e., ASR); both streaming and non-streaming are supported\n  - Text-to-speech (i.e., TTS)\n  - Speaker diarization\n  - Speaker identification\n  - Speaker verification\n  - Spoken language identification\n  - Audio tagging\n  - VAD (e.g., [silero-vad][silero-vad])\n  - Speech enhancement (e.g., [gtcrn][gtcrn])\n  - Keyword spotting\n  - Source separation (e.g., [spleeter][spleeter], [UVR][UVR])\n\non the following platforms and operating systems:\n\n  - x86, ``x86_64``, 32-bit ARM, 64-bit ARM (arm64, aarch64), RISC-V (riscv64), **RK NPU**\n  - Linux, macOS, Windows, openKylin\n  - Android, WearOS\n  - iOS\n  - HarmonyOS\n  - NodeJS\n  - WebAssembly\n  - [NVIDIA Jetson Orin NX][NVIDIA Jetson Orin NX] (Support running on both CPU and GPU)\n  - [NVIDIA Jetson Nano B01][NVIDIA Jetson Nano B01] (Support running on both CPU and GPU)\n  - [Raspberry Pi][Raspberry Pi]\n  - [RV1126][RV1126]\n  - [LicheePi4A][LicheePi4A]\n  - [VisionFive 2][VisionFive 2]\n  - [\u65ed\u65e5X3\u6d3e][\u65ed\u65e5X3\u6d3e]\n  - [\u7231\u82af\u6d3e][\u7231\u82af\u6d3e]\n  - [RK3588][RK3588]\n  - etc\n\nwith the following APIs\n\n  - C++, C, Python, Go, ``C#``\n  - Java, Kotlin, JavaScript\n  - Swift, Rust\n  - Dart, Object Pascal\n\n### Links for Huggingface Spaces\n\n<details>\n<summary>You can visit the following Huggingface spaces to try sherpa-onnx without\ninstalling anything. All you need is a browser.</summary>\n\n| Description                                           | URL                                     | \u4e2d\u56fd\u955c\u50cf                               |\n|-------------------------------------------------------|-----------------------------------------|----------------------------------------|\n| Speaker diarization                                   | [Click me][hf-space-speaker-diarization]| [\u955c\u50cf][hf-space-speaker-diarization-cn]|\n| Speech recognition                                    | [Click me][hf-space-asr]                | [\u955c\u50cf][hf-space-asr-cn]                |\n| Speech recognition with [Whisper][Whisper]            | [Click me][hf-space-asr-whisper]        | [\u955c\u50cf][hf-space-asr-whisper-cn]        |\n| Speech synthesis                                      | [Click me][hf-space-tts]                | [\u955c\u50cf][hf-space-tts-cn]                |\n| Generate subtitles                                    | [Click me][hf-space-subtitle]           | [\u955c\u50cf][hf-space-subtitle-cn]           |\n| Audio tagging                                         | [Click me][hf-space-audio-tagging]      | [\u955c\u50cf][hf-space-audio-tagging-cn]      |\n| Source separation                                     | [Click me][hf-space-source-separation]  | [\u955c\u50cf][hf-space-source-separation-cn]  |\n| Spoken language identification with [Whisper][Whisper]| [Click me][hf-space-slid-whisper]       | [\u955c\u50cf][hf-space-slid-whisper-cn]       |\n\nWe also have spaces built using WebAssembly. 
They are listed below:\n\n| Description                                                                              | Huggingface space| ModelScope space|\n|------------------------------------------------------------------------------------------|------------------|-----------------|\n|Voice activity detection with [silero-vad][silero-vad]                                    | [Click me][wasm-hf-vad]|[\u5730\u5740][wasm-ms-vad]|\n|Real-time speech recognition (Chinese + English) with Zipformer                           | [Click me][wasm-hf-streaming-asr-zh-en-zipformer]|[\u5730\u5740][wasm-hf-streaming-asr-zh-en-zipformer]|\n|Real-time speech recognition (Chinese + English) with Paraformer                          |[Click me][wasm-hf-streaming-asr-zh-en-paraformer]| [\u5730\u5740][wasm-ms-streaming-asr-zh-en-paraformer]|\n|Real-time speech recognition (Chinese + English + Cantonese) with [Paraformer-large][Paraformer-large]|[Click me][wasm-hf-streaming-asr-zh-en-yue-paraformer]| [\u5730\u5740][wasm-ms-streaming-asr-zh-en-yue-paraformer]|\n|Real-time speech recognition (English) |[Click me][wasm-hf-streaming-asr-en-zipformer]    |[\u5730\u5740][wasm-ms-streaming-asr-en-zipformer]|\n|VAD + speech recognition (Chinese) with [Zipformer CTC](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/icefall/zipformer.html#sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03-chinese)|[Click me][wasm-hf-vad-asr-zh-zipformer-ctc-07-03]| [\u5730\u5740][wasm-ms-vad-asr-zh-zipformer-ctc-07-03]|\n|VAD + speech recognition (Chinese + English + Korean + Japanese + Cantonese) with [SenseVoice][SenseVoice]|[Click me][wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]| [\u5730\u5740][wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]|\n|VAD + speech recognition (English) with [Whisper][Whisper] tiny.en|[Click me][wasm-hf-vad-asr-en-whisper-tiny-en]| [\u5730\u5740][wasm-ms-vad-asr-en-whisper-tiny-en]|\n|VAD + speech recognition (English) with [Moonshine tiny][Moonshine tiny]|[Click me][wasm-hf-vad-asr-en-moonshine-tiny-en]| [\u5730\u5740][wasm-ms-vad-asr-en-moonshine-tiny-en]|\n|VAD + speech recognition (English) with Zipformer trained with [GigaSpeech][GigaSpeech]    |[Click me][wasm-hf-vad-asr-en-zipformer-gigaspeech]| [\u5730\u5740][wasm-ms-vad-asr-en-zipformer-gigaspeech]|\n|VAD + speech recognition (Chinese) with Zipformer trained with [WenetSpeech][WenetSpeech]  |[Click me][wasm-hf-vad-asr-zh-zipformer-wenetspeech]| [\u5730\u5740][wasm-ms-vad-asr-zh-zipformer-wenetspeech]|\n|VAD + speech recognition (Japanese) with Zipformer trained with [ReazonSpeech][ReazonSpeech]|[Click me][wasm-hf-vad-asr-ja-zipformer-reazonspeech]| [\u5730\u5740][wasm-ms-vad-asr-ja-zipformer-reazonspeech]|\n|VAD + speech recognition (Thai) with Zipformer trained with [GigaSpeech2][GigaSpeech2]      |[Click me][wasm-hf-vad-asr-th-zipformer-gigaspeech2]| [\u5730\u5740][wasm-ms-vad-asr-th-zipformer-gigaspeech2]|\n|VAD + speech recognition (Chinese \u591a\u79cd\u65b9\u8a00) with a [TeleSpeech-ASR][TeleSpeech-ASR] CTC model|[Click me][wasm-hf-vad-asr-zh-telespeech]| [\u5730\u5740][wasm-ms-vad-asr-zh-telespeech]|\n|VAD + speech recognition (English + Chinese, \u53ca\u591a\u79cd\u4e2d\u6587\u65b9\u8a00) with Paraformer-large          |[Click me][wasm-hf-vad-asr-zh-en-paraformer-large]| [\u5730\u5740][wasm-ms-vad-asr-zh-en-paraformer-large]|\n|VAD + speech recognition (English + Chinese, \u53ca\u591a\u79cd\u4e2d\u6587\u65b9\u8a00) with Paraformer-small          |[Click me][wasm-hf-vad-asr-zh-en-paraformer-small]| 
[\u5730\u5740][wasm-ms-vad-asr-zh-en-paraformer-small]|\n|VAD + speech recognition (\u591a\u8bed\u79cd\u53ca\u591a\u79cd\u4e2d\u6587\u65b9\u8a00) with [Dolphin][Dolphin]-base          |[Click me][wasm-hf-vad-asr-multi-lang-dolphin-base]| [\u5730\u5740][wasm-ms-vad-asr-multi-lang-dolphin-base]|\n|Speech synthesis (English)                                                                  |[Click me][wasm-hf-tts-piper-en]| [\u5730\u5740][wasm-ms-tts-piper-en]|\n|Speech synthesis (German)                                                                   |[Click me][wasm-hf-tts-piper-de]| [\u5730\u5740][wasm-ms-tts-piper-de]|\n|Speaker diarization                                                                         |[Click me][wasm-hf-speaker-diarization]|[\u5730\u5740][wasm-ms-speaker-diarization]|\n\n</details>\n\n### Links for pre-built Android APKs\n\n<details>\n\n<summary>You can find pre-built Android APKs for this repository in the following table</summary>\n\n| Description                            | URL                                | \u4e2d\u56fd\u7528\u6237                          |\n|----------------------------------------|------------------------------------|-----------------------------------|\n| Speaker diarization                    | [Address][apk-speaker-diarization] | [\u70b9\u6b64][apk-speaker-diarization-cn]|\n| Streaming speech recognition           | [Address][apk-streaming-asr]       | [\u70b9\u6b64][apk-streaming-asr-cn]      |\n| Simulated-streaming speech recognition | [Address][apk-simula-streaming-asr]| [\u70b9\u6b64][apk-simula-streaming-asr-cn]|\n| Text-to-speech                         | [Address][apk-tts]                 | [\u70b9\u6b64][apk-tts-cn]                |\n| Voice activity detection (VAD)         | [Address][apk-vad]                 | [\u70b9\u6b64][apk-vad-cn]                |\n| VAD + non-streaming speech recognition | [Address][apk-vad-asr]             | [\u70b9\u6b64][apk-vad-asr-cn]            |\n| Two-pass speech recognition            | [Address][apk-2pass]               | [\u70b9\u6b64][apk-2pass-cn]              |\n| Audio tagging                          | [Address][apk-at]                  | [\u70b9\u6b64][apk-at-cn]                 |\n| Audio tagging (WearOS)                 | [Address][apk-at-wearos]           | [\u70b9\u6b64][apk-at-wearos-cn]          |\n| Speaker identification                 | [Address][apk-sid]                 | [\u70b9\u6b64][apk-sid-cn]                |\n| Spoken language identification         | [Address][apk-slid]                | [\u70b9\u6b64][apk-slid-cn]               |\n| Keyword spotting                       | [Address][apk-kws]                 | [\u70b9\u6b64][apk-kws-cn]                |\n\n</details>\n\n### Links for pre-built Flutter APPs\n\n<details>\n\n#### Real-time speech recognition\n\n| Description                    | URL                                 | \u4e2d\u56fd\u7528\u6237                            |\n|--------------------------------|-------------------------------------|-------------------------------------|\n| Streaming speech recognition   | [Address][apk-flutter-streaming-asr]| [\u70b9\u6b64][apk-flutter-streaming-asr-cn]|\n\n#### Text-to-speech\n\n| Description                              | URL                                | \u4e2d\u56fd\u7528\u6237                           |\n|------------------------------------------|------------------------------------|------------------------------------|\n| Android (arm64-v8a, armeabi-v7a, x86_64) | [Address][flutter-tts-android]  
   | [\u70b9\u6b64][flutter-tts-android-cn]     |\n| Linux (x64)                              | [Address][flutter-tts-linux]       | [\u70b9\u6b64][flutter-tts-linux-cn]       |\n| macOS (x64)                              | [Address][flutter-tts-macos-x64]   | [\u70b9\u6b64][flutter-tts-macos-arm64-cn] |\n| macOS (arm64)                            | [Address][flutter-tts-macos-arm64] | [\u70b9\u6b64][flutter-tts-macos-x64-cn]   |\n| Windows (x64)                            | [Address][flutter-tts-win-x64]     | [\u70b9\u6b64][flutter-tts-win-x64-cn]     |\n\n> Note: You need to build from source for iOS.\n\n</details>\n\n### Links for pre-built Lazarus APPs\n\n<details>\n\n#### Generating subtitles\n\n| Description                    | URL                        | \u4e2d\u56fd\u7528\u6237                   |\n|--------------------------------|----------------------------|----------------------------|\n| Generate subtitles (\u751f\u6210\u5b57\u5e55)  | [Address][lazarus-subtitle]| [\u70b9\u6b64][lazarus-subtitle-cn]|\n\n</details>\n\n### Links for pre-trained models\n\n<details>\n\n| Description                                 | URL                                                                                   |\n|---------------------------------------------|---------------------------------------------------------------------------------------|\n| Speech recognition (speech to text, ASR)    | [Address][asr-models]                                                                 |\n| Text-to-speech (TTS)                        | [Address][tts-models]                                                                 |\n| VAD                                         | [Address][vad-models]                                                                 |\n| Keyword spotting                            | [Address][kws-models]                                                                 |\n| Audio tagging                               | [Address][at-models]                                                                  |\n| Speaker identification (Speaker ID)         | [Address][sid-models]                                                                 |\n| Spoken language identification (Language ID)| See multi-lingual [Whisper][Whisper] ASR models from  [Speech recognition][asr-models]|\n| Punctuation                                 | [Address][punct-models]                                                               |\n| Speaker segmentation                        | [Address][speaker-segmentation-models]                                                |\n| Speech enhancement                          | [Address][speech-enhancement-models]                                                  |\n| Source separation                           | [Address][source-separation-models]                                                  |\n\n</details>\n\n#### Some pre-trained ASR models (Streaming)\n\n<details>\n\nPlease see\n\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/index.html>\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-paraformer/index.html>\n  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-ctc/index.html>\n\nfor more models. 
#### Some pre-trained ASR models (Streaming)

<details>

Please see

  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-paraformer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-ctc/index.html>

for more models. The following table lists only **SOME** of them.

|Name | Supported Languages| Description|
|-----|-----|----|
|[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20][sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#csukuangfj-sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20-bilingual-chinese-english)|
|[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16][sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16-bilingual-chinese-english)|
|[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23][sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]|Chinese| Suitable for Cortex A7 CPU. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-zh-14m-2023-02-23)|
|[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17][sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]|English|Suitable for Cortex A7 CPU. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-en-20m-2023-02-17)|
|[sherpa-onnx-streaming-zipformer-korean-2024-06-16][sherpa-onnx-streaming-zipformer-korean-2024-06-16]|Korean| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-korean-2024-06-16-korean)|
|[sherpa-onnx-streaming-zipformer-fr-2023-04-14][sherpa-onnx-streaming-zipformer-fr-2023-04-14]|French| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#shaojieli-sherpa-onnx-streaming-zipformer-fr-2023-04-14-french)|

</details>
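As a rough sketch, decoding a file with the first model in the table via the Python API looks like this. The model file names follow the unpacked archive; `test.wav` is a placeholder for a 16 kHz, 16-bit mono recording of your own.

```python
import wave

import numpy as np
import sherpa_onnx

# Assumes the archive linked in the first table row has been
# downloaded and unpacked into the current directory.
d = "./sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20"
recognizer = sherpa_onnx.OnlineRecognizer.from_transducer(
    tokens=f"{d}/tokens.txt",
    encoder=f"{d}/encoder-epoch-99-avg-1.onnx",
    decoder=f"{d}/decoder-epoch-99-avg-1.onnx",
    joiner=f"{d}/joiner-epoch-99-avg-1.onnx",
    num_threads=2,
)

# Read a 16 kHz, 16-bit mono wav file and scale it to [-1, 1]
with wave.open("test.wav") as f:
    assert f.getframerate() == 16000, f.getframerate()
    assert f.getnchannels() == 1, f.getnchannels()
    samples = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)
samples = samples.astype(np.float32) / 32768

stream = recognizer.create_stream()
stream.accept_waveform(16000, samples)

# Add ~0.5 s of tail padding, then flush the remaining frames
stream.accept_waveform(16000, np.zeros(8000, dtype=np.float32))
stream.input_finished()
while recognizer.is_ready(stream):
    recognizer.decode_stream(stream)
print(recognizer.get_result(stream))
```

In a real-time setting you would call `accept_waveform()` repeatedly with small chunks from the microphone and poll `get_result()` as decoding progresses.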
#### Some pre-trained ASR models (Non-Streaming)

<details>

Please see

  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/index.html>
  - <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/index.html>

for more models. The following table lists only **SOME** of them.

|Name | Supported Languages| Description|
|-----|-----|----|
|[sherpa-onnx-nemo-parakeet-tdt-0.6b-v2-int8](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/nemo-transducer-models.html#sherpa-onnx-nemo-parakeet-tdt-0-6b-v2-int8-english)| English | It is converted from <https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2>|
|[Whisper tiny.en](https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-whisper-tiny.en.tar.bz2)|English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/tiny.en.html)|
|[Moonshine tiny][Moonshine tiny]|English|See [also](https://github.com/usefulsensors/moonshine)|
|[sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/icefall/zipformer.html#sherpa-onnx-zipformer-ctc-zh-int8-2025-07-03-chinese)|Chinese| A Zipformer CTC model|
|[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17][sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]|Chinese, Cantonese, English, Korean, Japanese| Supports multiple Chinese dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/sense-voice/index.html)|
|[sherpa-onnx-paraformer-zh-2024-03-09][sherpa-onnx-paraformer-zh-2024-03-09]|Chinese, English| Also supports multiple Chinese dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/paraformer-models.html#csukuangfj-sherpa-onnx-paraformer-zh-2024-03-09-chinese-english)|
|[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01][sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]|Japanese|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01-japanese)|
|[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24][sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/nemo-transducer-models.html#sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24-russian)|
|[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24][sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]|Russian| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/nemo/russian.html#sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24)|
|[sherpa-onnx-zipformer-ru-2024-09-18][sherpa-onnx-zipformer-ru-2024-09-18]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ru-2024-09-18-russian)|
|[sherpa-onnx-zipformer-korean-2024-06-24][sherpa-onnx-zipformer-korean-2024-06-24]|Korean|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-korean-2024-06-24-korean)|
|[sherpa-onnx-zipformer-thai-2024-06-20][sherpa-onnx-zipformer-thai-2024-06-20]|Thai| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-thai-2024-06-20-thai)|
|[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04][sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]|Chinese| Supports multiple dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/models.html#sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04)|

</details>
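A similar sketch for non-streaming decoding, using the SenseVoice model from the table above. `model.onnx` and `tokens.txt` are the file names inside the unpacked archive, and the wave-reading snippet again assumes a 16-bit mono file.

```python
import wave

import numpy as np
import sherpa_onnx

# Assumes the SenseVoice archive from the table has been unpacked here
d = "./sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17"
recognizer = sherpa_onnx.OfflineRecognizer.from_sense_voice(
    model=f"{d}/model.onnx",
    tokens=f"{d}/tokens.txt",
    num_threads=2,
    use_itn=True,  # apply inverse text normalization to the output
)

# Read a 16-bit mono wav file and scale it to [-1, 1]
with wave.open("test.wav") as f:
    sample_rate = f.getframerate()
    samples = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)
samples = samples.astype(np.float32) / 32768

stream = recognizer.create_stream()
stream.accept_waveform(sample_rate, samples)
recognizer.decode_stream(stream)
print(stream.result.text)
```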
### Useful links

- Documentation: https://k2-fsa.github.io/sherpa/onnx/
- Bilibili demo videos: https://search.bilibili.com/all?keyword=%E6%96%B0%E4%B8%80%E4%BB%A3Kaldi

### How to reach us

Please see
https://k2-fsa.github.io/sherpa/social-groups.html
for the Next-gen Kaldi **WeChat group** and **QQ group**.

## Projects using sherpa-onnx

### [BreezeApp](https://github.com/mtkresearch/BreezeApp) from [MediaTek Research](https://github.com/mtkresearch)

> BreezeApp is a mobile AI application developed for both Android and iOS platforms.
> Users can download it directly from the App Store and enjoy a variety of features
> offline, including speech-to-text, text-to-speech, text-based chatbot interactions,
> and image question-answering.

  - [Download APK for BreezeApp](https://huggingface.co/MediaTek-Research/BreezeApp/resolve/main/BreezeApp.apk)
  - [APK mirror in China](https://hf-mirror.com/MediaTek-Research/BreezeApp/blob/main/BreezeApp.apk)

| 1 | 2 | 3 |
|---|---|---|
|![](https://github.com/user-attachments/assets/1cdbc057-b893-4de6-9e9c-f1d7dfd1d992)|![](https://github.com/user-attachments/assets/d77cd98e-b057-442f-860d-d5befd5c769b)|![](https://github.com/user-attachments/assets/57e546bf-3d39-45b9-b392-b48ca4fb3c58)|

### [Open-LLM-VTuber](https://github.com/t41372/Open-LLM-VTuber)

Talk to any LLM with hands-free voice interaction, voice interruption, and a Live2D talking
face, running locally across platforms.

See also <https://github.com/t41372/Open-LLM-VTuber/pull/50>

### [voiceapi](https://github.com/ruzhila/voiceapi)

<details>
  <summary>Streaming ASR and TTS based on FastAPI</summary>

It shows how to use the ASR and TTS Python APIs with FastAPI.
</details>

### [腾讯会议摸鱼工具 TMSpeech](https://github.com/jxlpzqc/TMSpeech)

It uses streaming ASR in C# with a graphical user interface.

Video demo in Chinese: [【开源】Windows实时字幕软件（网课/开会必备）](https://www.bilibili.com/video/BV1rX4y1p7Nx)

### [lol互动助手](https://github.com/l1veIn/lol-wom-electron)

It uses the JavaScript API of sherpa-onnx along with [Electron](https://electronjs.org/).

Video demo in Chinese: [爆了！炫神教你开打字挂！真正影响胜率的英雄联盟工具！英雄联盟的最后一块拼图！和游戏中的每个人无障碍沟通！](https://www.bilibili.com/video/BV142tje9E74)

### [Sherpa-ONNX 语音识别服务器](https://github.com/hfyydd/sherpa-onnx-server)

A Node.js-based server providing a RESTful API for speech recognition.

### [QSmartAssistant](https://github.com/xinhecuican/QSmartAssistant)

A modular, fully offline, low-footprint chatbot / smart speaker.

It uses Qt. Both [ASR](https://github.com/xinhecuican/QSmartAssistant/blob/master/doc/%E5%AE%89%E8%A3%85.md#asr)
and [TTS](https://github.com/xinhecuican/QSmartAssistant/blob/master/doc/%E5%AE%89%E8%A3%85.md#tts)
are used.

### [Flutter-EasySpeechRecognition](https://github.com/Jason-chen-coder/Flutter-EasySpeechRecognition)

It extends [./flutter-examples/streaming_asr](./flutter-examples/streaming_asr) by
downloading models inside the app to reduce the app size.

Note: [[Team B] Sherpa AI backend](https://github.com/umgc/spring2025/pull/82) also uses
sherpa-onnx in a Flutter APP.

### [sherpa-onnx-unity](https://github.com/xue-fei/sherpa-onnx-unity)

sherpa-onnx in Unity. See also [#1695](https://github.com/k2-fsa/sherpa-onnx/issues/1695),
[#1892](https://github.com/k2-fsa/sherpa-onnx/issues/1892), and [#1859](https://github.com/k2-fsa/sherpa-onnx/issues/1859).

### [xiaozhi-esp32-server](https://github.com/xinnan-tech/xiaozhi-esp32-server)

Backend service for xiaozhi-esp32; it helps you quickly build a server to control ESP32 devices.

See also

  - [ASR: add lightweight sherpa-onnx-asr](https://github.com/xinnan-tech/xiaozhi-esp32-server/issues/315)
  - [feat: add sherpa-onnx models to ASR](https://github.com/xinnan-tech/xiaozhi-esp32-server/pull/379)

### [KaithemAutomation](https://github.com/EternityForest/KaithemAutomation)

Pure Python, GUI-focused home automation / consumer-grade SCADA.

It uses TTS from sherpa-onnx. See also [✨ Speak command that uses the new globally configured TTS model.](https://github.com/EternityForest/KaithemAutomation/commit/8e64d2b138725e426532f7d66bb69dd0b4f53693)

### [Open-XiaoAI KWS](https://github.com/idootop/open-xiaoai-kws)

Enables custom wake words for XiaoAi speakers.
Video demo in Chinese: [小爱同学启动～˶╹ꇴ╹˶！](https://www.bilibili.com/video/BV1YfVUz5EMj)

### [C++ WebSocket ASR Server](https://github.com/mawwalker/stt-server)

It provides a WebSocket server based on C++ for ASR using sherpa-onnx.

### [Go WebSocket Server](https://github.com/bbeyondllove/asr_server)

It provides a WebSocket server based on the Go programming language for sherpa-onnx.

### [Making robot Paimon, Ep10 "The AI Part 1"](https://www.youtube.com/watch?v=KxPKkwxGWZs)

It is a [YouTube video](https://www.youtube.com/watch?v=KxPKkwxGWZs)
showing how the author tried to use AI to have a conversation with Paimon.

It uses sherpa-onnx for speech-to-text and text-to-speech.

|1|
|---|
|![](https://github.com/user-attachments/assets/f6eea2d5-1807-42cb-9160-be8da2971e1f)|

[sherpa-rs]: https://github.com/thewh1teagle/sherpa-rs
[silero-vad]: https://github.com/snakers4/silero-vad
[Raspberry Pi]: https://www.raspberrypi.com/
[RV1126]: https://www.rock-chips.com/uploads/pdf/2022.8.26/191/RV1126%20Brief%20Datasheet.pdf
[LicheePi4A]: https://sipeed.com/licheepi4a
[VisionFive 2]: https://www.starfivetech.com/en/site/boards
[旭日X3派]: https://developer.horizon.ai/api/v1/fileData/documents_pi/index.html
[爱芯派]: https://wiki.sipeed.com/hardware/zh/maixIII/ax-pi/axpi.html
[hf-space-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/speaker-diarization
[hf-space-speaker-diarization-cn]: https://hf.qhduan.com/spaces/k2-fsa/speaker-diarization
[hf-space-asr]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition
[hf-space-asr-cn]: https://hf.qhduan.com/spaces/k2-fsa/automatic-speech-recognition
[Whisper]: https://github.com/openai/whisper
[hf-space-asr-whisper]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition-with-whisper
[hf-space-asr-whisper-cn]: https://hf.qhduan.com/spaces/k2-fsa/automatic-speech-recognition-with-whisper
[hf-space-tts]: https://huggingface.co/spaces/k2-fsa/text-to-speech
[hf-space-tts-cn]: https://hf.qhduan.com/spaces/k2-fsa/text-to-speech
[hf-space-subtitle]: https://huggingface.co/spaces/k2-fsa/generate-subtitles-for-videos
[hf-space-subtitle-cn]: https://hf.qhduan.com/spaces/k2-fsa/generate-subtitles-for-videos
[hf-space-audio-tagging]: https://huggingface.co/spaces/k2-fsa/audio-tagging
[hf-space-audio-tagging-cn]: https://hf.qhduan.com/spaces/k2-fsa/audio-tagging
[hf-space-source-separation]: https://huggingface.co/spaces/k2-fsa/source-separation
[hf-space-source-separation-cn]: https://hf.qhduan.com/spaces/k2-fsa/source-separation
[hf-space-slid-whisper]: https://huggingface.co/spaces/k2-fsa/spoken-language-identification
[hf-space-slid-whisper-cn]: https://hf.qhduan.com/spaces/k2-fsa/spoken-language-identification
[wasm-hf-vad]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-sherpa-onnx
[wasm-ms-vad]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-sherpa-onnx
[wasm-hf-streaming-asr-zh-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en
[wasm-ms-streaming-asr-zh-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en
[wasm-hf-streaming-asr-zh-en-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer
[wasm-ms-streaming-asr-zh-en-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer
[Paraformer-large]: https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary
[wasm-hf-streaming-asr-zh-en-yue-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer
[wasm-ms-streaming-asr-zh-en-yue-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer
[wasm-hf-streaming-asr-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-en
[wasm-ms-streaming-asr-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-en
[SenseVoice]: https://github.com/FunAudioLLM/SenseVoice
[wasm-hf-vad-asr-zh-zipformer-ctc-07-03]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-ctc
[wasm-ms-vad-asr-zh-zipformer-ctc-07-03]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-ctc/summary
[wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-ja-ko-cantonese-sense-voice
[wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-zh-en-jp-ko-cantonese-sense-voice
[wasm-hf-vad-asr-en-whisper-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny
[wasm-ms-vad-asr-en-whisper-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny
[wasm-hf-vad-asr-en-moonshine-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny
[wasm-ms-vad-asr-en-moonshine-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny
[wasm-hf-vad-asr-en-zipformer-gigaspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech
[wasm-ms-vad-asr-en-zipformer-gigaspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech
[wasm-hf-vad-asr-zh-zipformer-wenetspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech
[wasm-ms-vad-asr-zh-zipformer-wenetspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech
[reazonspeech]: https://research.reazon.jp/_static/reazonspeech_nlp2023.pdf
[wasm-hf-vad-asr-ja-zipformer-reazonspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-ja-zipformer
[wasm-ms-vad-asr-ja-zipformer-reazonspeech]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-ja-zipformer
[gigaspeech2]: https://github.com/speechcolab/gigaspeech2
[wasm-hf-vad-asr-th-zipformer-gigaspeech2]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-th-zipformer
[wasm-ms-vad-asr-th-zipformer-gigaspeech2]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-th-zipformer
[telespeech-asr]: https://github.com/tele-ai/telespeech-asr
[wasm-hf-vad-asr-zh-telespeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech
[wasm-ms-vad-asr-zh-telespeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech
[wasm-hf-vad-asr-zh-en-paraformer-large]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer
[wasm-ms-vad-asr-zh-en-paraformer-large]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer
[wasm-hf-vad-asr-zh-en-paraformer-small]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small
[wasm-ms-vad-asr-zh-en-paraformer-small]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small
[dolphin]: https://github.com/dataoceanai/dolphin
[wasm-ms-vad-asr-multi-lang-dolphin-base]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-multi-lang-dophin-ctc
[wasm-hf-vad-asr-multi-lang-dolphin-base]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-multi-lang-dophin-ctc

[wasm-hf-tts-piper-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-en
[wasm-ms-tts-piper-en]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-en
[wasm-hf-tts-piper-de]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-de
[wasm-ms-tts-piper-de]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-de
[wasm-hf-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/web-assembly-speaker-diarization-sherpa-onnx
[wasm-ms-speaker-diarization]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-speaker-diarization-sherpa-onnx
[apk-speaker-diarization]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk.html
[apk-speaker-diarization-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk-cn.html
[apk-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/android/apk.html
[apk-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-cn.html
[apk-simula-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/android/apk-simulate-streaming-asr.html
[apk-simula-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-simulate-streaming-asr-cn.html
[apk-tts]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
[apk-tts-cn]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine-cn.html
[apk-vad]: https://k2-fsa.github.io/sherpa/onnx/vad/apk.html
[apk-vad-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-cn.html
[apk-vad-asr]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr.html
[apk-vad-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr-cn.html
[apk-2pass]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass.html
[apk-2pass-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass-cn.html
[apk-at]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk.html
[apk-at-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-cn.html
[apk-at-wearos]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos.html
[apk-at-wearos-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos-cn.html
[apk-sid]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk.html
[apk-sid-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk-cn.html
[apk-slid]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk.html
[apk-slid-cn]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk-cn.html
[apk-kws]: https://k2-fsa.github.io/sherpa/onnx/kws/apk.html
[apk-kws-cn]: https://k2-fsa.github.io/sherpa/onnx/kws/apk-cn.html
[apk-flutter-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app.html
[apk-flutter-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app-cn.html
[flutter-tts-android]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android.html
[flutter-tts-android-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android-cn.html
[flutter-tts-linux]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux.html
[flutter-tts-linux-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux-cn.html
[flutter-tts-macos-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64.html
[flutter-tts-macos-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64-cn.html
[flutter-tts-macos-arm64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64.html
[flutter-tts-macos-arm64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64-cn.html
[flutter-tts-win-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win.html
[flutter-tts-win-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win-cn.html
[lazarus-subtitle]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles.html
[lazarus-subtitle-cn]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles-cn.html
[asr-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models
[tts-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/tts-models
[vad-models]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/silero_vad.onnx
[kws-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/kws-models
[at-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/audio-tagging-models
[sid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models
[slid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models
[punct-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/punctuation-models
[speaker-segmentation-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-segmentation-models
[GigaSpeech]: https://github.com/SpeechColab/GigaSpeech
[WenetSpeech]: https://github.com/wenet-e2e/WenetSpeech
[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20.tar.bz2
[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16.tar.bz2
[sherpa-onnx-streaming-zipformer-korean-2024-06-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-korean-2024-06-16.tar.bz2
[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23.tar.bz2
[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-en-20M-2023-02-17.tar.bz2
[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01.tar.bz2
[sherpa-onnx-zipformer-ru-2024-09-18]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ru-2024-09-18.tar.bz2
[sherpa-onnx-zipformer-korean-2024-06-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-korean-2024-06-24.tar.bz2
[sherpa-onnx-zipformer-thai-2024-06-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-thai-2024-06-20.tar.bz2
[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24.tar.bz2
[sherpa-onnx-paraformer-zh-2024-03-09]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-paraformer-zh-2024-03-09.tar.bz2
[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24.tar.bz2
[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04.tar.bz2
[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17.tar.bz2
[sherpa-onnx-streaming-zipformer-fr-2023-04-14]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-fr-2023-04-14.tar.bz2
[Moonshine tiny]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-moonshine-tiny-en-int8.tar.bz2
[NVIDIA Jetson Orin NX]: https://developer.download.nvidia.com/assets/embedded/secure/jetson/orin_nx/docs/Jetson_Orin_NX_DS-10712-001_v0.5.pdf?RCPGu9Q6OVAOv7a7vgtwc9-BLScXRIWq6cSLuditMALECJ_dOj27DgnqAPGVnT2VpiNpQan9SyFy-9zRykR58CokzbXwjSA7Gj819e91AXPrWkGZR3oS1VLxiDEpJa_Y0lr7UT-N4GnXtb8NlUkP4GkCkkF_FQivGPrAucCUywL481GH_WpP_p7ziHU1Wg==&t=eyJscyI6ImdzZW8iLCJsc2QiOiJodHRwczovL3d3dy5nb29nbGUuY29tLmhrLyJ9
[NVIDIA Jetson Nano B01]: https://www.seeedstudio.com/blog/2020/01/16/new-revision-of-jetson-nano-dev-kit-now-supports-new-jetson-nano-module/
[speech-enhancement-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speech-enhancement-models
[source-separation-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/source-separation-models
[RK3588]: https://www.rock-chips.com/uploads/pdf/2022.8.26/192/RK3588%20Brief%20Datasheet.pdf
[spleeter]: https://github.com/deezer/spleeter
[UVR]: https://github.com/Anjok07/ultimatevocalremovergui
[gtcrn]: https://github.com/Xiaobin-Rong/gtcrn
[tts-url]: https://k2-fsa.github.io/sherpa/onnx/tts/all-in-one.html
[ss-url]: https://k2-fsa.github.io/sherpa/onnx/source-separation/index.html
[sd-url]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/index.html
[slid-url]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/index.html
[at-url]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/index.html
[vad-url]: https://k2-fsa.github.io/sherpa/onnx/vad/index.html
[kws-url]: https://k2-fsa.github.io/sherpa/onnx/kws/index.html
[punct-url]: https://k2-fsa.github.io/sherpa/onnx/punctuation/index.html
[se-url]: https://k2-fsa.github.io/sherpa/onnx/speech-enhancment/index.html
    "bugtrack_url": null,
    "license": "Apache licensed, as found in the LICENSE file",
    "summary": null,
    "version": "1.12.7",
    "project_urls": {
        "Homepage": "https://github.com/k2-fsa/sherpa-onnx"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "7a260fc94624ae04a7f3011ae9cdccf929b193523bd6ea94f741a41b1d1eae06",
                "md5": "b935f33bf3bc4679b5d2837964eec643",
                "sha256": "35e014585d6e63256f1fe76ebc21a1dee80ca384fbb16de55e7b89a489afb0d5"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp310-cp310-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "b935f33bf3bc4679b5d2837964eec643",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.6",
            "size": 19308658,
            "upload_time": "2025-07-27T17:17:57",
            "upload_time_iso_8601": "2025-07-27T17:17:57.707767Z",
            "url": "https://files.pythonhosted.org/packages/7a/26/0fc94624ae04a7f3011ae9cdccf929b193523bd6ea94f741a41b1d1eae06/sherpa_onnx-1.12.7-cp310-cp310-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "532729b32e59d4e189469f7e638454bfbabfc21a57275ee22dfefce8a734241b",
                "md5": "e0e7fa1d340eecb923717e06e79dc6bb",
                "sha256": "df1962784eb76a0a10082c83387ad7e021307efde7917f4df94834085a5e326e"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl",
            "has_sig": false,
            "md5_digest": "e0e7fa1d340eecb923717e06e79dc6bb",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.6",
            "size": 26111582,
            "upload_time": "2025-07-27T17:24:46",
            "upload_time_iso_8601": "2025-07-27T17:24:46.096835Z",
            "url": "https://files.pythonhosted.org/packages/53/27/29b32e59d4e189469f7e638454bfbabfc21a57275ee22dfefce8a734241b/sherpa_onnx-1.12.7-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "4b85c371e34869ddfc6bad14e2c27f31ae018627e6016a417388dec271975377",
                "md5": "56a9142b27ad5356a2ed40839d904cfa",
                "sha256": "4cd87319a7c5199fe0f35652169efc412ec26b2d6520c008e636bc5934adbf1f"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp310-cp310-win_amd64.whl",
            "has_sig": false,
            "md5_digest": "56a9142b27ad5356a2ed40839d904cfa",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.6",
            "size": 25390842,
            "upload_time": "2025-07-27T17:35:24",
            "upload_time_iso_8601": "2025-07-27T17:35:24.068497Z",
            "url": "https://files.pythonhosted.org/packages/4b/85/c371e34869ddfc6bad14e2c27f31ae018627e6016a417388dec271975377/sherpa_onnx-1.12.7-cp310-cp310-win_amd64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "e49c97770901a9dd7acc33511f4b7c5737d7fb449d6cf7313502895b752db2b9",
                "md5": "448a5228c955ca1b38a25fe72b75c3c4",
                "sha256": "92abb9de4205a9475c557a7f86dfeceac1cee8a3e5b8086178b7ffcab977e98e"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp311-cp311-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "448a5228c955ca1b38a25fe72b75c3c4",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.6",
            "size": 19309922,
            "upload_time": "2025-07-27T17:24:49",
            "upload_time_iso_8601": "2025-07-27T17:24:49.125003Z",
            "url": "https://files.pythonhosted.org/packages/e4/9c/97770901a9dd7acc33511f4b7c5737d7fb449d6cf7313502895b752db2b9/sherpa_onnx-1.12.7-cp311-cp311-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f86b4d9a593f3faa625b645cc719706844e6e85c0234dedb64a9ccacc61629bb",
                "md5": "3732828c9ab4ef91ec0a712ac08ef982",
                "sha256": "392caaafec88bb5a740b80b90834777ad7ab596bcd92b5768daf597f1db7123c"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp312-cp312-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "3732828c9ab4ef91ec0a712ac08ef982",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.6",
            "size": 19316398,
            "upload_time": "2025-07-27T17:17:54",
            "upload_time_iso_8601": "2025-07-27T17:17:54.996200Z",
            "url": "https://files.pythonhosted.org/packages/f8/6b/4d9a593f3faa625b645cc719706844e6e85c0234dedb64a9ccacc61629bb/sherpa_onnx-1.12.7-cp312-cp312-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "be824c076d5d6d73dd5caad8cc8f6387a1c128579ae6129de493dcba0dec848a",
                "md5": "69ddad043c6c426f69ae277432a874bf",
                "sha256": "04f031fe3eeb8e54a05c2e02d8f51cf92a323b501e35b7da57c91ed7a5782c9e"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp312-cp312-macosx_11_0_x86_64.whl",
            "has_sig": false,
            "md5_digest": "69ddad043c6c426f69ae277432a874bf",
            "packagetype": "bdist_wheel",
            "python_version": "cp312",
            "requires_python": ">=3.6",
            "size": 21935351,
            "upload_time": "2025-07-27T17:37:39",
            "upload_time_iso_8601": "2025-07-27T17:37:39.666747Z",
            "url": "https://files.pythonhosted.org/packages/be/82/4c076d5d6d73dd5caad8cc8f6387a1c128579ae6129de493dcba0dec848a/sherpa_onnx-1.12.7-cp312-cp312-macosx_11_0_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "0b00bc2ea246a78d5fd5ef9a52c631d116c4b8bfcf526268094206e0d23d0f43",
                "md5": "2472229b224d580830fc00c6b544979a",
                "sha256": "b89fb23684df56e93773be922c47aec7015e6fcb7543d5f991121059c94d25b3"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp313-cp313-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "2472229b224d580830fc00c6b544979a",
            "packagetype": "bdist_wheel",
            "python_version": "cp313",
            "requires_python": ">=3.6",
            "size": 19316342,
            "upload_time": "2025-07-27T17:39:00",
            "upload_time_iso_8601": "2025-07-27T17:39:00.604369Z",
            "url": "https://files.pythonhosted.org/packages/0b/00/bc2ea246a78d5fd5ef9a52c631d116c4b8bfcf526268094206e0d23d0f43/sherpa_onnx-1.12.7-cp313-cp313-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "06d2c606467dfb9cb562b7148a4a8962c7d2897a270bac386546a797131c8d1b",
                "md5": "13e0f44632cc8edd4c2afe0f29f170bd",
                "sha256": "9bed7e5425d846c839d2531ee5f6a7d486b385a5d3022b86a9f4f41286f183ed"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl",
            "has_sig": false,
            "md5_digest": "13e0f44632cc8edd4c2afe0f29f170bd",
            "packagetype": "bdist_wheel",
            "python_version": "cp313",
            "requires_python": ">=3.6",
            "size": 26109991,
            "upload_time": "2025-07-27T17:38:15",
            "upload_time_iso_8601": "2025-07-27T17:38:15.055345Z",
            "url": "https://files.pythonhosted.org/packages/06/d2/c606467dfb9cb562b7148a4a8962c7d2897a270bac386546a797131c8d1b/sherpa_onnx-1.12.7-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "faacc6ceb65511690dd97f9c33ff5fb3171a752277aecde46fa6e9de2cb37fce",
                "md5": "100388694a6ed104696290761f592fee",
                "sha256": "f4bfe9c38b82c78458117b9e3e4d04663a1d2049fc990873af17ce7f07cb08a7"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp38-cp38-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "100388694a6ed104696290761f592fee",
            "packagetype": "bdist_wheel",
            "python_version": "cp38",
            "requires_python": ">=3.6",
            "size": 19307961,
            "upload_time": "2025-07-27T17:22:19",
            "upload_time_iso_8601": "2025-07-27T17:22:19.644611Z",
            "url": "https://files.pythonhosted.org/packages/fa/ac/c6ceb65511690dd97f9c33ff5fb3171a752277aecde46fa6e9de2cb37fce/sherpa_onnx-1.12.7-cp38-cp38-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "ff43e56428b9bb273f88929f88666287498fc2476415677944dc14736e9bc4a0",
                "md5": "14afca48707502276531f492fe54a621",
                "sha256": "68d48fb1675500b1c1a46f98ef0f42fcbe7764cab8640d729842462666e11128"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp38-cp38-macosx_11_0_x86_64.whl",
            "has_sig": false,
            "md5_digest": "14afca48707502276531f492fe54a621",
            "packagetype": "bdist_wheel",
            "python_version": "cp38",
            "requires_python": ">=3.6",
            "size": 21920763,
            "upload_time": "2025-07-27T17:30:27",
            "upload_time_iso_8601": "2025-07-27T17:30:27.118956Z",
            "url": "https://files.pythonhosted.org/packages/ff/43/e56428b9bb273f88929f88666287498fc2476415677944dc14736e9bc4a0/sherpa_onnx-1.12.7-cp38-cp38-macosx_11_0_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "953df46a9e75762eedf204a04eee49ae8fb19a1677516f7257e0aead8807b2d0",
                "md5": "e5e9bf1aad3463267ff28459dad5bb1e",
                "sha256": "e40cca46dee357bc08b5816a02ad41ed8a338bcacc4d6fb6b77886e3b4c271e3"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp39-cp39-macosx_11_0_arm64.whl",
            "has_sig": false,
            "md5_digest": "e5e9bf1aad3463267ff28459dad5bb1e",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.6",
            "size": 19308243,
            "upload_time": "2025-07-27T17:18:34",
            "upload_time_iso_8601": "2025-07-27T17:18:34.544480Z",
            "url": "https://files.pythonhosted.org/packages/95/3d/f46a9e75762eedf204a04eee49ae8fb19a1677516f7257e0aead8807b2d0/sherpa_onnx-1.12.7-cp39-cp39-macosx_11_0_arm64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3457a9110120ab52e4d4f82e4560f173717e82ee96fa43eef2c6b66277af7466",
                "md5": "62bd31dd59e411a6d7f2be384d3d4c40",
                "sha256": "7ef357629bf578e7fac8db4f19150d416b24a5735061ad78e6980a3ad5aa7f3b"
            },
            "downloads": -1,
            "filename": "sherpa_onnx-1.12.7-cp39-cp39-macosx_11_0_universal2.whl",
            "has_sig": false,
            "md5_digest": "62bd31dd59e411a6d7f2be384d3d4c40",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.6",
            "size": 41183719,
            "upload_time": "2025-07-27T17:36:57",
            "upload_time_iso_8601": "2025-07-27T17:36:57.716080Z",
            "url": "https://files.pythonhosted.org/packages/34/57/a9110120ab52e4d4f82e4560f173717e82ee96fa43eef2c6b66277af7466/sherpa_onnx-1.12.7-cp39-cp39-macosx_11_0_universal2.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-27 17:17:57",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "k2-fsa",
    "github_project": "sherpa-onnx",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "sherpa-onnx"
}