### Supported functions

|Speech recognition| Speech synthesis |
|------------------|------------------|
| ✔️ | ✔️ |

|Speaker identification| Speaker diarization | Speaker verification |
|----------------------|---------------------|----------------------|
| ✔️ | ✔️ | ✔️ |

| Spoken language identification | Audio tagging | Voice activity detection |
|--------------------------------|---------------|--------------------------|
| ✔️ | ✔️ | ✔️ |

| Keyword spotting | Add punctuation |
|------------------|-----------------|
| ✔️ | ✔️ |
### Supported platforms

|Architecture| Android | iOS | Windows | macOS | Linux | HarmonyOS |
|------------|---------|-----|---------|-------|-------|-----------|
| x64 | ✔️ | | ✔️ | ✔️ | ✔️ | ✔️ |
| x86 | ✔️ | | ✔️ | | | |
| arm64 | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| arm32 | ✔️ | | | | ✔️ | ✔️ |
| riscv64 | | | | | ✔️ | |
### Supported programming languages

| 1. C++ | 2. C | 3. Python | 4. JavaScript |
|--------|------|-----------|---------------|
| ✔️ | ✔️ | ✔️ | ✔️ |

| 5. Java | 6. C# | 7. Kotlin | 8. Swift |
|---------|-------|-----------|----------|
| ✔️ | ✔️ | ✔️ | ✔️ |

| 9. Go | 10. Dart | 11. Rust | 12. Pascal |
|-------|----------|----------|------------|
| ✔️ | ✔️ | ✔️ | ✔️ |
For Rust support, please see [sherpa-rs][sherpa-rs].

It also supports WebAssembly.
## Introduction
This repository supports running the following functions **locally**:
- Speech-to-text (i.e., ASR); both streaming and non-streaming are supported
- Text-to-speech (i.e., TTS)
- Speaker diarization
- Speaker identification
- Speaker verification
- Spoken language identification
- Audio tagging
- VAD (e.g., [silero-vad][silero-vad])
- Keyword spotting
on the following platforms and operating systems:
- x86, ``x86_64``, 32-bit ARM, 64-bit ARM (arm64, aarch64), RISC-V (riscv64)
- Linux, macOS, Windows, openKylin
- Android, WearOS
- iOS
- HarmonyOS
- NodeJS
- WebAssembly
- [NVIDIA Jetson Orin NX][NVIDIA Jetson Orin NX] (supports running on both CPU and GPU)
- [NVIDIA Jetson Nano B01][NVIDIA Jetson Nano B01] (supports running on both CPU and GPU)
- [Raspberry Pi][Raspberry Pi]
- [RV1126][RV1126]
- [LicheePi4A][LicheePi4A]
- [VisionFive 2][VisionFive 2]
- [旭日X3派][旭日X3派]
- [爱芯派][爱芯派]
- etc.

with the following APIs (a minimal Python example follows the list):
- C++, C, Python, Go, ``C#``
- Java, Kotlin, JavaScript
- Swift, Rust
- Dart, Object Pascal
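
As a quick taste of the Python API, the following sketch decodes a single wave file with a non-streaming transducer model. It is a minimal sketch, not a complete recipe: the model file names are placeholders for one of the pre-trained models linked later in this README, and the exact constructor arguments should be verified against the documentation.

```python
import wave

import numpy as np
import sherpa_onnx

# Placeholder paths: download and unpack one of the non-streaming
# (offline) transducer models linked in this README first.
recognizer = sherpa_onnx.OfflineRecognizer.from_transducer(
    encoder="encoder.onnx",
    decoder="decoder.onnx",
    joiner="joiner.onnx",
    tokens="tokens.txt",
)

# Read a 16-bit mono wave file and normalize it to [-1, 1].
with wave.open("test.wav") as f:
    samples = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)
    samples = samples.astype(np.float32) / 32768
    sample_rate = f.getframerate()

stream = recognizer.create_stream()
stream.accept_waveform(sample_rate, samples)
recognizer.decode_stream(stream)
print(stream.result.text)
```

Install the Python package with `pip install sherpa-onnx`; the other APIs follow the same create-recognizer/feed-audio/decode pattern.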
### Links for Huggingface Spaces
<details>
<summary>You can visit the following Huggingface spaces to try sherpa-onnx without
installing anything. All you need is a browser.</summary>

| Description | URL |
|-------------------------------------------------------|-----------------------------------------|
| Speaker diarization | [Click me][hf-space-speaker-diarization]|
| Speech recognition | [Click me][hf-space-asr] |
| Speech recognition with [Whisper][Whisper] | [Click me][hf-space-asr-whisper] |
| Speech synthesis | [Click me][hf-space-tts] |
| Generate subtitles | [Click me][hf-space-subtitle] |
| Audio tagging | [Click me][hf-space-audio-tagging] |
| Spoken language identification with [Whisper][Whisper]| [Click me][hf-space-slid-whisper] |

We also have spaces built using WebAssembly. They are listed below:

| Description | Huggingface space| ModelScope space|
|------------------------------------------------------------------------------------------|------------------|-----------------|
|Voice activity detection with [silero-vad][silero-vad] | [Click me][wasm-hf-vad]|[Address][wasm-ms-vad]|
|Real-time speech recognition (Chinese + English) with Zipformer | [Click me][wasm-hf-streaming-asr-zh-en-zipformer]|[Address][wasm-ms-streaming-asr-zh-en-zipformer]|
|Real-time speech recognition (Chinese + English) with Paraformer |[Click me][wasm-hf-streaming-asr-zh-en-paraformer]| [Address][wasm-ms-streaming-asr-zh-en-paraformer]|
|Real-time speech recognition (Chinese + English + Cantonese) with [Paraformer-large][Paraformer-large]|[Click me][wasm-hf-streaming-asr-zh-en-yue-paraformer]| [Address][wasm-ms-streaming-asr-zh-en-yue-paraformer]|
|Real-time speech recognition (English) |[Click me][wasm-hf-streaming-asr-en-zipformer] |[Address][wasm-ms-streaming-asr-en-zipformer]|
|VAD + speech recognition (Chinese + English + Korean + Japanese + Cantonese) with [SenseVoice][SenseVoice]|[Click me][wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]| [Address][wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]|
|VAD + speech recognition (English) with [Whisper][Whisper] tiny.en|[Click me][wasm-hf-vad-asr-en-whisper-tiny-en]| [Address][wasm-ms-vad-asr-en-whisper-tiny-en]|
|VAD + speech recognition (English) with [Moonshine tiny][Moonshine tiny]|[Click me][wasm-hf-vad-asr-en-moonshine-tiny-en]| [Address][wasm-ms-vad-asr-en-moonshine-tiny-en]|
|VAD + speech recognition (English) with Zipformer trained on [GigaSpeech][GigaSpeech] |[Click me][wasm-hf-vad-asr-en-zipformer-gigaspeech]| [Address][wasm-ms-vad-asr-en-zipformer-gigaspeech]|
|VAD + speech recognition (Chinese) with Zipformer trained on [WenetSpeech][WenetSpeech] |[Click me][wasm-hf-vad-asr-zh-zipformer-wenetspeech]| [Address][wasm-ms-vad-asr-zh-zipformer-wenetspeech]|
|VAD + speech recognition (Japanese) with Zipformer trained on [ReazonSpeech][ReazonSpeech]|[Click me][wasm-hf-vad-asr-ja-zipformer-reazonspeech]| [Address][wasm-ms-vad-asr-ja-zipformer-reazonspeech]|
|VAD + speech recognition (Thai) with Zipformer trained on [GigaSpeech2][GigaSpeech2] |[Click me][wasm-hf-vad-asr-th-zipformer-gigaspeech2]| [Address][wasm-ms-vad-asr-th-zipformer-gigaspeech2]|
|VAD + speech recognition (Chinese, multiple dialects) with a [TeleSpeech-ASR][TeleSpeech-ASR] CTC model|[Click me][wasm-hf-vad-asr-zh-telespeech]| [Address][wasm-ms-vad-asr-zh-telespeech]|
|VAD + speech recognition (English + Chinese, including various Chinese dialects) with Paraformer-large |[Click me][wasm-hf-vad-asr-zh-en-paraformer-large]| [Address][wasm-ms-vad-asr-zh-en-paraformer-large]|
|VAD + speech recognition (English + Chinese, including various Chinese dialects) with Paraformer-small |[Click me][wasm-hf-vad-asr-zh-en-paraformer-small]| [Address][wasm-ms-vad-asr-zh-en-paraformer-small]|
|Speech synthesis (English) |[Click me][wasm-hf-tts-piper-en]| [Address][wasm-ms-tts-piper-en]|
|Speech synthesis (German) |[Click me][wasm-hf-tts-piper-de]| [Address][wasm-ms-tts-piper-de]|
|Speaker diarization |[Click me][wasm-hf-speaker-diarization]|[Address][wasm-ms-speaker-diarization]|
</details>
### Links for pre-built Android APKs
<details>
<summary>You can find pre-built Android APKs for this repository in the following table</summary>

| Description | URL | For Chinese users |
|----------------------------------------|------------------------------------|-----------------------------------|
| Speaker diarization | [Address][apk-speaker-diarization] | [Click here][apk-speaker-diarization-cn]|
| Streaming speech recognition | [Address][apk-streaming-asr] | [Click here][apk-streaming-asr-cn] |
| Text-to-speech | [Address][apk-tts] | [Click here][apk-tts-cn] |
| Voice activity detection (VAD) | [Address][apk-vad] | [Click here][apk-vad-cn] |
| VAD + non-streaming speech recognition | [Address][apk-vad-asr] | [Click here][apk-vad-asr-cn] |
| Two-pass speech recognition | [Address][apk-2pass] | [Click here][apk-2pass-cn] |
| Audio tagging | [Address][apk-at] | [Click here][apk-at-cn] |
| Audio tagging (WearOS) | [Address][apk-at-wearos] | [Click here][apk-at-wearos-cn] |
| Speaker identification | [Address][apk-sid] | [Click here][apk-sid-cn] |
| Spoken language identification | [Address][apk-slid] | [Click here][apk-slid-cn] |
| Keyword spotting | [Address][apk-kws] | [Click here][apk-kws-cn] |
</details>
### Links for pre-built Flutter apps
<details>
#### Real-time speech recognition

| Description | URL | For Chinese users |
|--------------------------------|-------------------------------------|-------------------------------------|
| Streaming speech recognition | [Address][apk-flutter-streaming-asr]| [Click here][apk-flutter-streaming-asr-cn]|

#### Text-to-speech

| Description | URL | For Chinese users |
|------------------------------------------|------------------------------------|------------------------------------|
| Android (arm64-v8a, armeabi-v7a, x86_64) | [Address][flutter-tts-android] | [Click here][flutter-tts-android-cn] |
| Linux (x64) | [Address][flutter-tts-linux] | [Click here][flutter-tts-linux-cn] |
| macOS (x64) | [Address][flutter-tts-macos-x64] | [Click here][flutter-tts-macos-x64-cn] |
| macOS (arm64) | [Address][flutter-tts-macos-arm64] | [Click here][flutter-tts-macos-arm64-cn] |
| Windows (x64) | [Address][flutter-tts-win-x64] | [Click here][flutter-tts-win-x64-cn] |
> Note: You need to build from source for iOS.
</details>
### Links for pre-built Lazarus apps
<details>
#### Generating subtitles

| Description | URL | For Chinese users |
|--------------------------------|----------------------------|----------------------------|
| Generate subtitles | [Address][lazarus-subtitle]| [Click here][lazarus-subtitle-cn]|
</details>
### Links for pre-trained models
<details>

| Description | URL |
|---------------------------------------------|---------------------------------------------------------------------------------------|
| Speech recognition (speech to text, ASR) | [Address][asr-models] |
| Text-to-speech (TTS) | [Address][tts-models] |
| VAD | [Address][vad-models] |
| Keyword spotting | [Address][kws-models] |
| Audio tagging | [Address][at-models] |
| Speaker identification (Speaker ID) | [Address][sid-models] |
| Spoken language identification (Language ID)| See the multilingual [Whisper][Whisper] ASR models under [Speech recognition][asr-models]|
| Punctuation | [Address][punct-models] |
| Speaker segmentation | [Address][speaker-segmentation-models] |
</details>
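
As a taste of the TTS side, the sketch below synthesizes one sentence with a VITS model downloaded from the TTS link above. This is a minimal sketch: the file paths are placeholders, some models need additional fields (e.g., a data directory for espeak-ng data), and the config class names follow the project's Python examples, so check them against the documentation. The third-party `soundfile` package is used only to write the result.

```python
import soundfile as sf

import sherpa_onnx

# Placeholder paths: download and unpack one of the VITS models
# from the TTS link above; the required fields vary per model.
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="model.onnx",
            lexicon="lexicon.txt",
            tokens="tokens.txt",
        ),
    ),
)
tts = sherpa_onnx.OfflineTts(config)

# sid selects the speaker for multi-speaker models.
audio = tts.generate("Hello from sherpa-onnx!", sid=0, speed=1.0)
sf.write("generated.wav", audio.samples, samplerate=audio.sample_rate)
```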
#### Some pre-trained ASR models (Streaming)
<details>

Please see

- <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/index.html>
- <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-paraformer/index.html>
- <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-ctc/index.html>

for more models. The following table lists only **SOME** of them.

|Name | Supported Languages| Description|
|-----|-----|----|
|[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20][sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#csukuangfj-sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20-bilingual-chinese-english)|
|[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16][sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]| Chinese, English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16-bilingual-chinese-english)|
|[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23][sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]|Chinese| Suitable for Cortex-A7 CPUs. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-zh-14m-2023-02-23)|
|[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17][sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]|English|Suitable for Cortex-A7 CPUs. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-en-20m-2023-02-17)|
|[sherpa-onnx-streaming-zipformer-korean-2024-06-16][sherpa-onnx-streaming-zipformer-korean-2024-06-16]|Korean| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#sherpa-onnx-streaming-zipformer-korean-2024-06-16-korean)|
|[sherpa-onnx-streaming-zipformer-fr-2023-04-14][sherpa-onnx-streaming-zipformer-fr-2023-04-14]|French| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/online-transducer/zipformer-transducer-models.html#shaojieli-sherpa-onnx-streaming-zipformer-fr-2023-04-14-french)|
</details>
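
To show how the streaming models above are driven, the sketch below feeds audio to an online recognizer in 200 ms chunks and prints the partial result after each chunk, as one would with live microphone input. The model paths are placeholders, the zero-filled buffer stands in for real audio, and the API names follow the project's Python examples, so treat them as assumptions to verify against the documentation.

```python
import numpy as np

import sherpa_onnx

# Placeholder paths: use one of the streaming (online) transducer
# models from the table above.
recognizer = sherpa_onnx.OnlineRecognizer.from_transducer(
    encoder="encoder.onnx",
    decoder="decoder.onnx",
    joiner="joiner.onnx",
    tokens="tokens.txt",
)

stream = recognizer.create_stream()
sample_rate = 16000
chunk_size = int(0.2 * sample_rate)  # 200 ms of audio per chunk

# Stand-in for audio arriving from a microphone or a network socket.
audio = np.zeros(5 * sample_rate, dtype=np.float32)

for start in range(0, len(audio), chunk_size):
    stream.accept_waveform(sample_rate, audio[start : start + chunk_size])
    while recognizer.is_ready(stream):
        recognizer.decode_stream(stream)
    print(recognizer.get_result(stream))  # partial result so far
```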
#### Some pre-trained ASR models (Non-Streaming)
<details>

Please see

- <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/index.html>
- <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/index.html>
- <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/index.html>
- <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/index.html>
- <https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/index.html>

for more models. The following table lists only **SOME** of them.

|Name | Supported Languages| Description|
|-----|-----|----|
|[Whisper tiny.en](https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-whisper-tiny.en.tar.bz2)|English| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/whisper/tiny.en.html)|
|[Moonshine tiny][Moonshine tiny]|English|See [also](https://github.com/usefulsensors/moonshine)|
|[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17][sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]|Chinese, Cantonese, English, Korean, Japanese| Supports various Chinese dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/sense-voice/index.html)|
|[sherpa-onnx-paraformer-zh-2024-03-09][sherpa-onnx-paraformer-zh-2024-03-09]|Chinese, English| Also supports various Chinese dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-paraformer/paraformer-models.html#csukuangfj-sherpa-onnx-paraformer-zh-2024-03-09-chinese-english)|
|[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01][sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]|Japanese|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01-japanese)|
|[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24][sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/nemo-transducer-models.html#sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24-russian)|
|[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24][sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]|Russian| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-ctc/nemo/russian.html#sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24)|
|[sherpa-onnx-zipformer-ru-2024-09-18][sherpa-onnx-zipformer-ru-2024-09-18]|Russian|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-ru-2024-09-18-russian)|
|[sherpa-onnx-zipformer-korean-2024-06-24][sherpa-onnx-zipformer-korean-2024-06-24]|Korean|See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-korean-2024-06-24-korean)|
|[sherpa-onnx-zipformer-thai-2024-06-20][sherpa-onnx-zipformer-thai-2024-06-20]|Thai| See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/offline-transducer/zipformer-transducer-models.html#sherpa-onnx-zipformer-thai-2024-06-20-thai)|
|[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04][sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]|Chinese| Supports various dialects. See [also](https://k2-fsa.github.io/sherpa/onnx/pretrained_models/telespeech/models.html#sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04)|
</details>
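
Many of the "VAD + speech recognition" combinations above follow the same pattern: silero-vad first cuts the input into speech segments, and each segment is then decoded by a non-streaming recognizer. The sketch below shows only the VAD half, assuming the `silero_vad.onnx` model from the VAD row of the pre-trained-models table; the class and field names follow the project's Python examples and should be double-checked against the documentation.

```python
import numpy as np

import sherpa_onnx

# Placeholder path: download silero_vad.onnx from the VAD link
# in the pre-trained models table above.
config = sherpa_onnx.VadModelConfig()
config.silero_vad.model = "silero_vad.onnx"
config.sample_rate = 16000

vad = sherpa_onnx.VoiceActivityDetector(config, buffer_size_in_seconds=30)

# Stand-in for real 16 kHz mono audio.
audio = np.zeros(10 * config.sample_rate, dtype=np.float32)

window = config.silero_vad.window_size  # samples consumed per call
for start in range(0, len(audio) - window + 1, window):
    vad.accept_waveform(audio[start : start + window])
    while not vad.empty():
        seg = vad.front  # a finished speech segment
        start_s = seg.start / config.sample_rate
        dur_s = len(seg.samples) / config.sample_rate
        print(f"speech at {start_s:.2f}s, {dur_s:.2f}s long")
        vad.pop()
```

Each segment's `samples` can then be passed to an offline recognizer stream exactly as in the non-streaming sketch near the top of this README.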
### Useful links
- Documentation: https://k2-fsa.github.io/sherpa/onnx/
- Demo videos on Bilibili: https://search.bilibili.com/all?keyword=%E6%96%B0%E4%B8%80%E4%BB%A3Kaldi
### How to reach us
Please see
https://k2-fsa.github.io/sherpa/social-groups.html
for the next-generation Kaldi **WeChat** and **QQ** groups.
## Projects using sherpa-onnx
### [Open-LLM-VTuber](https://github.com/t41372/Open-LLM-VTuber)
Talk to any LLM with hands-free voice interaction, voice interruption, and a Live2D talking
face, running locally across platforms.

See also <https://github.com/t41372/Open-LLM-VTuber/pull/50>
### [voiceapi](https://github.com/ruzhila/voiceapi)
<details>
<summary>Streaming ASR and TTS based on FastAPI</summary>
It shows how to use the ASR and TTS Python APIs with FastAPI.
</details>
### [腾讯会议摸鱼工具 TMSpeech](https://github.com/jxlpzqc/TMSpeech)
It uses streaming ASR in C# with a graphical user interface.
Video demo in Chinese: [【开源】Windows实时字幕软件(网课/开会必备)](https://www.bilibili.com/video/BV1rX4y1p7Nx)
### [lol互动助手](https://github.com/l1veIn/lol-wom-electron)
It uses the JavaScript API of sherpa-onnx along with [Electron](https://electronjs.org/).
Video demo in Chinese: [爆了!炫神教你开打字挂!真正影响胜率的英雄联盟工具!英雄联盟的最后一块拼图!和游戏中的每个人无障碍沟通!](https://www.bilibili.com/video/BV142tje9E74)
### [Sherpa-ONNX speech recognition server](https://github.com/hfyydd/sherpa-onnx-server)
A Node.js-based server providing a RESTful API for speech recognition.
### [QSmartAssistant](https://github.com/xinhecuican/QSmartAssistant)
A modular, fully offline, low-resource-usage chatbot/smart speaker.

It is built with Qt and uses both [ASR](https://github.com/xinhecuican/QSmartAssistant/blob/master/doc/%E5%AE%89%E8%A3%85.md#asr)
and [TTS](https://github.com/xinhecuican/QSmartAssistant/blob/master/doc/%E5%AE%89%E8%A3%85.md#tts).
[sherpa-rs]: https://github.com/thewh1teagle/sherpa-rs
[silero-vad]: https://github.com/snakers4/silero-vad
[Raspberry Pi]: https://www.raspberrypi.com/
[RV1126]: https://www.rock-chips.com/uploads/pdf/2022.8.26/191/RV1126%20Brief%20Datasheet.pdf
[LicheePi4A]: https://sipeed.com/licheepi4a
[VisionFive 2]: https://www.starfivetech.com/en/site/boards
[旭日X3派]: https://developer.horizon.ai/api/v1/fileData/documents_pi/index.html
[爱芯派]: https://wiki.sipeed.com/hardware/zh/maixIII/ax-pi/axpi.html
[hf-space-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/speaker-diarization
[hf-space-asr]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition
[Whisper]: https://github.com/openai/whisper
[hf-space-asr-whisper]: https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition-with-whisper
[hf-space-tts]: https://huggingface.co/spaces/k2-fsa/text-to-speech
[hf-space-subtitle]: https://huggingface.co/spaces/k2-fsa/generate-subtitles-for-videos
[hf-space-audio-tagging]: https://huggingface.co/spaces/k2-fsa/audio-tagging
[hf-space-slid-whisper]: https://huggingface.co/spaces/k2-fsa/spoken-language-identification
[wasm-hf-vad]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-sherpa-onnx
[wasm-ms-vad]: https://modelscope.cn/studios/csukuangfj/web-assembly-vad-sherpa-onnx
[wasm-hf-streaming-asr-zh-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en
[wasm-ms-streaming-asr-zh-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en
[wasm-hf-streaming-asr-zh-en-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer
[wasm-ms-streaming-asr-zh-en-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-en-paraformer
[Paraformer-large]: https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary
[wasm-hf-streaming-asr-zh-en-yue-paraformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer
[wasm-ms-streaming-asr-zh-en-yue-paraformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-zh-cantonese-en-paraformer
[wasm-hf-streaming-asr-en-zipformer]: https://huggingface.co/spaces/k2-fsa/web-assembly-asr-sherpa-onnx-en
[wasm-ms-streaming-asr-en-zipformer]: https://modelscope.cn/studios/k2-fsa/web-assembly-asr-sherpa-onnx-en
[SenseVoice]: https://github.com/FunAudioLLM/SenseVoice
[wasm-hf-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-ja-ko-cantonese-sense-voice
[wasm-ms-vad-asr-zh-en-ko-ja-yue-sense-voice]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-zh-en-jp-ko-cantonese-sense-voice
[wasm-hf-vad-asr-en-whisper-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny
[wasm-ms-vad-asr-en-whisper-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-whisper-tiny
[wasm-hf-vad-asr-en-moonshine-tiny-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny
[wasm-ms-vad-asr-en-moonshine-tiny-en]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-en-moonshine-tiny
[wasm-hf-vad-asr-en-zipformer-gigaspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech
[wasm-ms-vad-asr-en-zipformer-gigaspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-en-zipformer-gigaspeech
[wasm-hf-vad-asr-zh-zipformer-wenetspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech
[wasm-ms-vad-asr-zh-zipformer-wenetspeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-zipformer-wenetspeech
[ReazonSpeech]: https://research.reazon.jp/_static/reazonspeech_nlp2023.pdf
[wasm-hf-vad-asr-ja-zipformer-reazonspeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-ja-zipformer
[wasm-ms-vad-asr-ja-zipformer-reazonspeech]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-ja-zipformer
[GigaSpeech2]: https://github.com/SpeechColab/GigaSpeech2
[wasm-hf-vad-asr-th-zipformer-gigaspeech2]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-th-zipformer
[wasm-ms-vad-asr-th-zipformer-gigaspeech2]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-vad-asr-sherpa-onnx-th-zipformer
[TeleSpeech-ASR]: https://github.com/Tele-AI/TeleSpeech-ASR
[wasm-hf-vad-asr-zh-telespeech]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech
[wasm-ms-vad-asr-zh-telespeech]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-telespeech
[wasm-hf-vad-asr-zh-en-paraformer-large]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer
[wasm-ms-vad-asr-zh-en-paraformer-large]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer
[wasm-hf-vad-asr-zh-en-paraformer-small]: https://huggingface.co/spaces/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small
[wasm-ms-vad-asr-zh-en-paraformer-small]: https://www.modelscope.cn/studios/k2-fsa/web-assembly-vad-asr-sherpa-onnx-zh-en-paraformer-small
[wasm-hf-tts-piper-en]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-en
[wasm-ms-tts-piper-en]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-en
[wasm-hf-tts-piper-de]: https://huggingface.co/spaces/k2-fsa/web-assembly-tts-sherpa-onnx-de
[wasm-ms-tts-piper-de]: https://modelscope.cn/studios/k2-fsa/web-assembly-tts-sherpa-onnx-de
[wasm-hf-speaker-diarization]: https://huggingface.co/spaces/k2-fsa/web-assembly-speaker-diarization-sherpa-onnx
[wasm-ms-speaker-diarization]: https://www.modelscope.cn/studios/csukuangfj/web-assembly-speaker-diarization-sherpa-onnx
[apk-speaker-diarization]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk.html
[apk-speaker-diarization-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-diarization/apk-cn.html
[apk-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/android/apk.html
[apk-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-cn.html
[apk-tts]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
[apk-tts-cn]: https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine-cn.html
[apk-vad]: https://k2-fsa.github.io/sherpa/onnx/vad/apk.html
[apk-vad-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-cn.html
[apk-vad-asr]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr.html
[apk-vad-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/vad/apk-asr-cn.html
[apk-2pass]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass.html
[apk-2pass-cn]: https://k2-fsa.github.io/sherpa/onnx/android/apk-2pass-cn.html
[apk-at]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk.html
[apk-at-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-cn.html
[apk-at-wearos]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos.html
[apk-at-wearos-cn]: https://k2-fsa.github.io/sherpa/onnx/audio-tagging/apk-wearos-cn.html
[apk-sid]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk.html
[apk-sid-cn]: https://k2-fsa.github.io/sherpa/onnx/speaker-identification/apk-cn.html
[apk-slid]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk.html
[apk-slid-cn]: https://k2-fsa.github.io/sherpa/onnx/spoken-language-identification/apk-cn.html
[apk-kws]: https://k2-fsa.github.io/sherpa/onnx/kws/apk.html
[apk-kws-cn]: https://k2-fsa.github.io/sherpa/onnx/kws/apk-cn.html
[apk-flutter-streaming-asr]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app.html
[apk-flutter-streaming-asr-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/asr/app-cn.html
[flutter-tts-android]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android.html
[flutter-tts-android-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-android-cn.html
[flutter-tts-linux]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux.html
[flutter-tts-linux-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-linux-cn.html
[flutter-tts-macos-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64.html
[flutter-tts-macos-arm64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64-cn.html
[flutter-tts-macos-arm64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-arm64.html
[flutter-tts-macos-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-macos-x64-cn.html
[flutter-tts-win-x64]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win.html
[flutter-tts-win-x64-cn]: https://k2-fsa.github.io/sherpa/onnx/flutter/tts-win-cn.html
[lazarus-subtitle]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles.html
[lazarus-subtitle-cn]: https://k2-fsa.github.io/sherpa/onnx/lazarus/download-generated-subtitles-cn.html
[asr-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models
[tts-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/tts-models
[vad-models]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/silero_vad.onnx
[kws-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/kws-models
[at-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/audio-tagging-models
[sid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models
[slid-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-recongition-models
[punct-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/punctuation-models
[speaker-segmentation-models]: https://github.com/k2-fsa/sherpa-onnx/releases/tag/speaker-segmentation-models
[GigaSpeech]: https://github.com/SpeechColab/GigaSpeech
[WenetSpeech]: https://github.com/wenet-e2e/WenetSpeech
[sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-bilingual-zh-en-2023-02-20.tar.bz2
[sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-small-bilingual-zh-en-2023-02-16.tar.bz2
[sherpa-onnx-streaming-zipformer-korean-2024-06-16]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-korean-2024-06-16.tar.bz2
[sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-zh-14M-2023-02-23.tar.bz2
[sherpa-onnx-streaming-zipformer-en-20M-2023-02-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-en-20M-2023-02-17.tar.bz2
[sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ja-reazonspeech-2024-08-01.tar.bz2
[sherpa-onnx-zipformer-ru-2024-09-18]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-ru-2024-09-18.tar.bz2
[sherpa-onnx-zipformer-korean-2024-06-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-korean-2024-06-24.tar.bz2
[sherpa-onnx-zipformer-thai-2024-06-20]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-thai-2024-06-20.tar.bz2
[sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-transducer-giga-am-russian-2024-10-24.tar.bz2
[sherpa-onnx-paraformer-zh-2024-03-09]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-paraformer-zh-2024-03-09.tar.bz2
[sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-nemo-ctc-giga-am-russian-2024-10-24.tar.bz2
[sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-telespeech-ctc-int8-zh-2024-06-04.tar.bz2
[sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17.tar.bz2
[sherpa-onnx-streaming-zipformer-fr-2023-04-14]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-streaming-zipformer-fr-2023-04-14.tar.bz2
[Moonshine tiny]: https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-moonshine-tiny-en-int8.tar.bz2
[NVIDIA Jetson Orin NX]: https://developer.download.nvidia.com/assets/embedded/secure/jetson/orin_nx/docs/Jetson_Orin_NX_DS-10712-001_v0.5.pdf?RCPGu9Q6OVAOv7a7vgtwc9-BLScXRIWq6cSLuditMALECJ_dOj27DgnqAPGVnT2VpiNpQan9SyFy-9zRykR58CokzbXwjSA7Gj819e91AXPrWkGZR3oS1VLxiDEpJa_Y0lr7UT-N4GnXtb8NlUkP4GkCkkF_FQivGPrAucCUywL481GH_WpP_p7ziHU1Wg==&t=eyJscyI6ImdzZW8iLCJsc2QiOiJodHRwczovL3d3dy5nb29nbGUuY29tLmhrLyJ9
[NVIDIA Jetson Nano B01]: https://www.seeedstudio.com/blog/2020/01/16/new-revision-of-jetson-nano-dev-kit-now-supports-new-jetson-nano-module/
"has_sig": false,
"md5_digest": "49cdb8b31684ab7b3b7d678cd821c326",
"packagetype": "bdist_wheel",
"python_version": "cp310",
"requires_python": ">=3.6",
"size": 16981048,
"upload_time": "2025-02-13T11:09:39",
"upload_time_iso_8601": "2025-02-13T11:09:39.332685Z",
"url": "https://files.pythonhosted.org/packages/12/be/27dcb43014fc0256e4318c1242fa88b059baf67bbd92f039c7b4f80ef440/sherpa_onnx-1.10.44-cp310-cp310-macosx_11_0_arm64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "8b9f6ffc202ab498b89a0c30ea4af6ca2635b7e8cebc6a87f79e1279b8587978",
"md5": "f9667e01b56cbf5d81c9f69f62520a43",
"sha256": "8776df539016ac342ac9c33188586e7f5bf5986854b0146b8fa3912fd7eb9314"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp310-cp310-macosx_11_0_universal2.whl",
"has_sig": false,
"md5_digest": "f9667e01b56cbf5d81c9f69f62520a43",
"packagetype": "bdist_wheel",
"python_version": "cp310",
"requires_python": ">=3.6",
"size": 35141937,
"upload_time": "2025-02-13T10:48:42",
"upload_time_iso_8601": "2025-02-13T10:48:42.191099Z",
"url": "https://files.pythonhosted.org/packages/8b/9f/6ffc202ab498b89a0c30ea4af6ca2635b7e8cebc6a87f79e1279b8587978/sherpa_onnx-1.10.44-cp310-cp310-macosx_11_0_universal2.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "2c75525d11c713db43c33bb64a8fea7fd1e8b9e2a6cdcc78e7dfbdadfc42946d",
"md5": "4d16e0bad883960633f8030a7953ea3d",
"sha256": "000bd76548f4c30846615de302f4c33de6435376b3829d35603330230b2e890c"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp310-cp310-macosx_11_0_x86_64.whl",
"has_sig": false,
"md5_digest": "4d16e0bad883960633f8030a7953ea3d",
"packagetype": "bdist_wheel",
"python_version": "cp310",
"requires_python": ">=3.6",
"size": 19247910,
"upload_time": "2025-02-13T10:55:44",
"upload_time_iso_8601": "2025-02-13T10:55:44.832939Z",
"url": "https://files.pythonhosted.org/packages/2c/75/525d11c713db43c33bb64a8fea7fd1e8b9e2a6cdcc78e7dfbdadfc42946d/sherpa_onnx-1.10.44-cp310-cp310-macosx_11_0_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "a95445f0b0abe8289e32d01ed1cf74053a6fe155598c7679ed13607813b932b5",
"md5": "9f4a6786f750768a6dd2058077c26b22",
"sha256": "b3eab902849ff57579ac3f507f08053c10810bd9a8c76e57b5157745af23a48e"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"has_sig": false,
"md5_digest": "9f4a6786f750768a6dd2058077c26b22",
"packagetype": "bdist_wheel",
"python_version": "cp310",
"requires_python": ">=3.6",
"size": 20422800,
"upload_time": "2025-02-13T10:47:45",
"upload_time_iso_8601": "2025-02-13T10:47:45.917111Z",
"url": "https://files.pythonhosted.org/packages/a9/54/45f0b0abe8289e32d01ed1cf74053a6fe155598c7679ed13607813b932b5/sherpa_onnx-1.10.44-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "7e9e2c5a4eb2c0bd5ba81124062c3717a5ad605fca0d31c974efde5f83d4025f",
"md5": "398c6da5a1f2858957423386780a9c29",
"sha256": "12d15133605e3fa81e39a90df103b32d3dba20153aef2e35e1f2c231f73d96cf"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp310-cp310-win32.whl",
"has_sig": false,
"md5_digest": "398c6da5a1f2858957423386780a9c29",
"packagetype": "bdist_wheel",
"python_version": "cp310",
"requires_python": ">=3.6",
"size": 19274254,
"upload_time": "2025-02-13T10:51:02",
"upload_time_iso_8601": "2025-02-13T10:51:02.223642Z",
"url": "https://files.pythonhosted.org/packages/7e/9e/2c5a4eb2c0bd5ba81124062c3717a5ad605fca0d31c974efde5f83d4025f/sherpa_onnx-1.10.44-cp310-cp310-win32.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "78498669d63f790ed3bba10db33a05ee226daeb4c7619414ae49508c30d20069",
"md5": "8738e3a8dec4a7bd7e7dea467d7f4255",
"sha256": "11a4ee4b10fc358d1eb88486bd704fa57c3b2984fcafd8436ef2392138a929c0"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp310-cp310-win_amd64.whl",
"has_sig": false,
"md5_digest": "8738e3a8dec4a7bd7e7dea467d7f4255",
"packagetype": "bdist_wheel",
"python_version": "cp310",
"requires_python": ">=3.6",
"size": 21907358,
"upload_time": "2025-02-13T11:01:05",
"upload_time_iso_8601": "2025-02-13T11:01:05.856249Z",
"url": "https://files.pythonhosted.org/packages/78/49/8669d63f790ed3bba10db33a05ee226daeb4c7619414ae49508c30d20069/sherpa_onnx-1.10.44-cp310-cp310-win_amd64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "b603d463e45cdec0cbbd6d3dbe973da55d5051e60320f7772fe9a291809b2f83",
"md5": "d6a717682eebbbaf7117ff0f0b63e008",
"sha256": "ffb9939f9f8b5d342d7e1cb89205abb140a30273dda220db51b0bd98a4798b63"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp311-cp311-macosx_11_0_universal2.whl",
"has_sig": false,
"md5_digest": "d6a717682eebbbaf7117ff0f0b63e008",
"packagetype": "bdist_wheel",
"python_version": "cp311",
"requires_python": ">=3.6",
"size": 35145330,
"upload_time": "2025-02-13T10:45:31",
"upload_time_iso_8601": "2025-02-13T10:45:31.735618Z",
"url": "https://files.pythonhosted.org/packages/b6/03/d463e45cdec0cbbd6d3dbe973da55d5051e60320f7772fe9a291809b2f83/sherpa_onnx-1.10.44-cp311-cp311-macosx_11_0_universal2.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "f802e7bfe6f12996359bdd386f4245e1c2d85938fea8100060d4d4614ad7626f",
"md5": "558ba67c42567936d7734957198d1c02",
"sha256": "84dd22d8add16de20c31332796c4d903dfcbb9675f54c6270f4b98ec9359250f"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp311-cp311-macosx_11_0_x86_64.whl",
"has_sig": false,
"md5_digest": "558ba67c42567936d7734957198d1c02",
"packagetype": "bdist_wheel",
"python_version": "cp311",
"requires_python": ">=3.6",
"size": 19249397,
"upload_time": "2025-02-13T10:55:59",
"upload_time_iso_8601": "2025-02-13T10:55:59.829938Z",
"url": "https://files.pythonhosted.org/packages/f8/02/e7bfe6f12996359bdd386f4245e1c2d85938fea8100060d4d4614ad7626f/sherpa_onnx-1.10.44-cp311-cp311-macosx_11_0_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "7af1880e7ff65ce8b13f4fe415faec6ff200bcf76f5723aeaf96de3e403781a4",
"md5": "483ae1450b5ac78aca7f692c32477b48",
"sha256": "7898e25ad1d0babdc06b9bd7100e51f7c98f6aa179727c676e88037fd35c00d3"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"has_sig": false,
"md5_digest": "483ae1450b5ac78aca7f692c32477b48",
"packagetype": "bdist_wheel",
"python_version": "cp311",
"requires_python": ">=3.6",
"size": 20420004,
"upload_time": "2025-02-13T10:47:29",
"upload_time_iso_8601": "2025-02-13T10:47:29.645028Z",
"url": "https://files.pythonhosted.org/packages/7a/f1/880e7ff65ce8b13f4fe415faec6ff200bcf76f5723aeaf96de3e403781a4/sherpa_onnx-1.10.44-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "95fcb8e81b602ddddf850a6f6406fca81a9b7a9c8eecb7f2cd9dd9849d36334c",
"md5": "ce15aecc34623f0f55471649e5188224",
"sha256": "45e8214c3e3742ad3c16941cead1d0003ac45fde496f9ee5276559afc4e28212"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp311-cp311-win32.whl",
"has_sig": false,
"md5_digest": "ce15aecc34623f0f55471649e5188224",
"packagetype": "bdist_wheel",
"python_version": "cp311",
"requires_python": ">=3.6",
"size": 19272191,
"upload_time": "2025-02-13T10:51:21",
"upload_time_iso_8601": "2025-02-13T10:51:21.560690Z",
"url": "https://files.pythonhosted.org/packages/95/fc/b8e81b602ddddf850a6f6406fca81a9b7a9c8eecb7f2cd9dd9849d36334c/sherpa_onnx-1.10.44-cp311-cp311-win32.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "4431fe67edc45d729387d1c79eb33a4b94ca2336a0befac7c4285f45201126e4",
"md5": "a0209ace79f859fb50c146891a7d3dd4",
"sha256": "f4901e61245ab5ce9bf390178d7c3be5bb1f8f0136625c1bc1ba3414e97face6"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp312-cp312-macosx_11_0_universal2.whl",
"has_sig": false,
"md5_digest": "a0209ace79f859fb50c146891a7d3dd4",
"packagetype": "bdist_wheel",
"python_version": "cp312",
"requires_python": ">=3.6",
"size": 35161317,
"upload_time": "2025-02-13T10:45:49",
"upload_time_iso_8601": "2025-02-13T10:45:49.199472Z",
"url": "https://files.pythonhosted.org/packages/44/31/fe67edc45d729387d1c79eb33a4b94ca2336a0befac7c4285f45201126e4/sherpa_onnx-1.10.44-cp312-cp312-macosx_11_0_universal2.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "d1215c3f064694f474be047ae379a09aa11a6fa3ddfe619723ad3297d400a09e",
"md5": "5221adda36802bd6dde5cca4b712af45",
"sha256": "d29b641a426889cd03b0ee36f42cd6dce2379f350564109855aa39d29b0b551f"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp312-cp312-macosx_11_0_x86_64.whl",
"has_sig": false,
"md5_digest": "5221adda36802bd6dde5cca4b712af45",
"packagetype": "bdist_wheel",
"python_version": "cp312",
"requires_python": ">=3.6",
"size": 19268203,
"upload_time": "2025-02-13T11:04:18",
"upload_time_iso_8601": "2025-02-13T11:04:18.508207Z",
"url": "https://files.pythonhosted.org/packages/d1/21/5c3f064694f474be047ae379a09aa11a6fa3ddfe619723ad3297d400a09e/sherpa_onnx-1.10.44-cp312-cp312-macosx_11_0_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "43c0df4281d3cd8da88c610169b3d248bfd26a4fd349cca8175242947b0e3471",
"md5": "aa261b732b5a0a64ae6a893ddf656d81",
"sha256": "9802a562af0592197608068f29e8bf6647301f90f98b07137e243b198e171c2b"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"has_sig": false,
"md5_digest": "aa261b732b5a0a64ae6a893ddf656d81",
"packagetype": "bdist_wheel",
"python_version": "cp312",
"requires_python": ">=3.6",
"size": 20422466,
"upload_time": "2025-02-13T10:48:10",
"upload_time_iso_8601": "2025-02-13T10:48:10.992230Z",
"url": "https://files.pythonhosted.org/packages/43/c0/df4281d3cd8da88c610169b3d248bfd26a4fd349cca8175242947b0e3471/sherpa_onnx-1.10.44-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "3174f24dc0f388da10a540dce38d11ec29e14b7604787bf32f6c14849e52295b",
"md5": "5d1c3be0afebea1082d32fdaf9259716",
"sha256": "107aeb475564007a74773990118be50baee520bdfaf3dcd5f8cf8632aeab8f1f"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp312-cp312-win32.whl",
"has_sig": false,
"md5_digest": "5d1c3be0afebea1082d32fdaf9259716",
"packagetype": "bdist_wheel",
"python_version": "cp312",
"requires_python": ">=3.6",
"size": 19273557,
"upload_time": "2025-02-13T10:50:28",
"upload_time_iso_8601": "2025-02-13T10:50:28.742807Z",
"url": "https://files.pythonhosted.org/packages/31/74/f24dc0f388da10a540dce38d11ec29e14b7604787bf32f6c14849e52295b/sherpa_onnx-1.10.44-cp312-cp312-win32.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "dab958345f93246d5496494819ce5379ceadb9bcf245d60b35368d8a29af57b9",
"md5": "665ef702df518326058a5da4e58c5a59",
"sha256": "1b5b215e82b6f25ae01be9c4d4d942bb095c5c72c6f94ee884d477b7a12c3ced"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp313-cp313-macosx_11_0_universal2.whl",
"has_sig": false,
"md5_digest": "665ef702df518326058a5da4e58c5a59",
"packagetype": "bdist_wheel",
"python_version": "cp313",
"requires_python": ">=3.6",
"size": 35161351,
"upload_time": "2025-02-13T10:43:31",
"upload_time_iso_8601": "2025-02-13T10:43:31.067363Z",
"url": "https://files.pythonhosted.org/packages/da/b9/58345f93246d5496494819ce5379ceadb9bcf245d60b35368d8a29af57b9/sherpa_onnx-1.10.44-cp313-cp313-macosx_11_0_universal2.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "295a4a519ee7eed3941680b6207818b3fea2bc299e819ddafc67999bf6a60255",
"md5": "923cf1e5f1f3ecc6c5116968144ae488",
"sha256": "22885c19c9533fbdac8ead814ff294c01b4a5da7928ae43525a311cfb111309f"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp313-cp313-macosx_11_0_x86_64.whl",
"has_sig": false,
"md5_digest": "923cf1e5f1f3ecc6c5116968144ae488",
"packagetype": "bdist_wheel",
"python_version": "cp313",
"requires_python": ">=3.6",
"size": 19268343,
"upload_time": "2025-02-13T10:59:11",
"upload_time_iso_8601": "2025-02-13T10:59:11.023313Z",
"url": "https://files.pythonhosted.org/packages/29/5a/4a519ee7eed3941680b6207818b3fea2bc299e819ddafc67999bf6a60255/sherpa_onnx-1.10.44-cp313-cp313-macosx_11_0_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "484a67efa3ba72e6f149f397f4ffe6143aada8a755394d6ecde02fee65442518",
"md5": "f6e8c5b510cac232d258b1c7c20883a3",
"sha256": "5070733e1c3fdb3de6cfda673b1bd83692cdd50d7e73819276d984d4fe2ca569"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"has_sig": false,
"md5_digest": "f6e8c5b510cac232d258b1c7c20883a3",
"packagetype": "bdist_wheel",
"python_version": "cp313",
"requires_python": ">=3.6",
"size": 20422886,
"upload_time": "2025-02-13T11:00:14",
"upload_time_iso_8601": "2025-02-13T11:00:14.279454Z",
"url": "https://files.pythonhosted.org/packages/48/4a/67efa3ba72e6f149f397f4ffe6143aada8a755394d6ecde02fee65442518/sherpa_onnx-1.10.44-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "f3609fa92b763e352e2166e656d1c2c912b679d7530e808b162c4d1bc6cb2655",
"md5": "2742f962fde4d909508dfe1d919a7ab6",
"sha256": "60c83538b14757c6cbf53cad148a5d612bf09e8df1e02501c24e3de19e59747f"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp313-cp313-win32.whl",
"has_sig": false,
"md5_digest": "2742f962fde4d909508dfe1d919a7ab6",
"packagetype": "bdist_wheel",
"python_version": "cp313",
"requires_python": ">=3.6",
"size": 19273625,
"upload_time": "2025-02-13T10:49:52",
"upload_time_iso_8601": "2025-02-13T10:49:52.626098Z",
"url": "https://files.pythonhosted.org/packages/f3/60/9fa92b763e352e2166e656d1c2c912b679d7530e808b162c4d1bc6cb2655/sherpa_onnx-1.10.44-cp313-cp313-win32.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "fe28a10d0a7265f9054a2b142760eb52ff9a64ee31b1c80f8d3adec9d4af298d",
"md5": "feb7195c2c7f789d7cce5862ef27b90c",
"sha256": "41b6bee66c7976730b23bb69cf853f1e9471f301bee324a95ac4c9008ff035b7"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"has_sig": false,
"md5_digest": "feb7195c2c7f789d7cce5862ef27b90c",
"packagetype": "bdist_wheel",
"python_version": "cp37",
"requires_python": ">=3.6",
"size": 20457023,
"upload_time": "2025-02-13T11:00:35",
"upload_time_iso_8601": "2025-02-13T11:00:35.723449Z",
"url": "https://files.pythonhosted.org/packages/fe/28/a10d0a7265f9054a2b142760eb52ff9a64ee31b1c80f8d3adec9d4af298d/sherpa_onnx-1.10.44-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "81668f32d2587ecd8d6485183968cb21b4e52e07bfcd9d3182a2b0cdbedffda4",
"md5": "802ec28ec6c0da62206a513da7744044",
"sha256": "8df38253175b7ee43045d1ed57d3f48c3859cb8b194035ec7387b16bf81f6bf2"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp37-cp37m-win32.whl",
"has_sig": false,
"md5_digest": "802ec28ec6c0da62206a513da7744044",
"packagetype": "bdist_wheel",
"python_version": "cp37",
"requires_python": ">=3.6",
"size": 19274707,
"upload_time": "2025-02-13T10:53:12",
"upload_time_iso_8601": "2025-02-13T10:53:12.123945Z",
"url": "https://files.pythonhosted.org/packages/81/66/8f32d2587ecd8d6485183968cb21b4e52e07bfcd9d3182a2b0cdbedffda4/sherpa_onnx-1.10.44-cp37-cp37m-win32.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "0f0aa557419507649093f17673113a28cc3e4d00b5af764ada5a4a41318b02fc",
"md5": "e4844414992223fb408f79e090fa85f8",
"sha256": "f71f1c784783490e80abdd9367086062024f2118deb864a1403353cc8ea726a9"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp38-cp38-macosx_11_0_universal2.whl",
"has_sig": false,
"md5_digest": "e4844414992223fb408f79e090fa85f8",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.6",
"size": 35141404,
"upload_time": "2025-02-13T10:58:03",
"upload_time_iso_8601": "2025-02-13T10:58:03.025008Z",
"url": "https://files.pythonhosted.org/packages/0f/0a/a557419507649093f17673113a28cc3e4d00b5af764ada5a4a41318b02fc/sherpa_onnx-1.10.44-cp38-cp38-macosx_11_0_universal2.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "169590460d282448484acbdc1d0e20f8c97e9ec292e049bc41e3a65e0b1b4d76",
"md5": "cdf40277173eacb738b5390696f4a680",
"sha256": "79252072d009acee86eb38c463bda4eb5266791859f793bb54d68d7b1338d65e"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp38-cp38-macosx_11_0_x86_64.whl",
"has_sig": false,
"md5_digest": "cdf40277173eacb738b5390696f4a680",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.6",
"size": 19247499,
"upload_time": "2025-02-13T11:10:01",
"upload_time_iso_8601": "2025-02-13T11:10:01.482894Z",
"url": "https://files.pythonhosted.org/packages/16/95/90460d282448484acbdc1d0e20f8c97e9ec292e049bc41e3a65e0b1b4d76/sherpa_onnx-1.10.44-cp38-cp38-macosx_11_0_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "40206c83b88cb919ba0314f24bdd973be08c1cb7135986ea150b85df2ef92233",
"md5": "842e319b4383da58080f9b51a4ad48f5",
"sha256": "2c4f97337cae29636880054220ae16623ccd092d16fdbc755e067f16b76d2040"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"has_sig": false,
"md5_digest": "842e319b4383da58080f9b51a4ad48f5",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.6",
"size": 20422962,
"upload_time": "2025-02-13T11:01:10",
"upload_time_iso_8601": "2025-02-13T11:01:10.859096Z",
"url": "https://files.pythonhosted.org/packages/40/20/6c83b88cb919ba0314f24bdd973be08c1cb7135986ea150b85df2ef92233/sherpa_onnx-1.10.44-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "1842f9084859d3dd23fb333863118166be5d8f21eb3aea16ad3f1b461e74f1d6",
"md5": "d5a4b8d4a27946fa84d15202d9df7b30",
"sha256": "288dca08d13dafe55015b8a5e26145a3a639588941955026d9d256c0f508cf1f"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp38-cp38-win32.whl",
"has_sig": false,
"md5_digest": "d5a4b8d4a27946fa84d15202d9df7b30",
"packagetype": "bdist_wheel",
"python_version": "cp38",
"requires_python": ">=3.6",
"size": 19274151,
"upload_time": "2025-02-13T10:47:27",
"upload_time_iso_8601": "2025-02-13T10:47:27.614875Z",
"url": "https://files.pythonhosted.org/packages/18/42/f9084859d3dd23fb333863118166be5d8f21eb3aea16ad3f1b461e74f1d6/sherpa_onnx-1.10.44-cp38-cp38-win32.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "e4d17a6dca3093bbd2245ad8ffad4352ccd791009b2735da128c3d19e187aa15",
"md5": "3f4539641ce88e07fa260552d2ed41dd",
"sha256": "01c9d79c9186ed8b248d79ae67d56c0e3fd7090fd867c543f703b5e2e2cbbc88"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp39-cp39-macosx_11_0_universal2.whl",
"has_sig": false,
"md5_digest": "3f4539641ce88e07fa260552d2ed41dd",
"packagetype": "bdist_wheel",
"python_version": "cp39",
"requires_python": ">=3.6",
"size": 35141951,
"upload_time": "2025-02-13T10:55:35",
"upload_time_iso_8601": "2025-02-13T10:55:35.029488Z",
"url": "https://files.pythonhosted.org/packages/e4/d1/7a6dca3093bbd2245ad8ffad4352ccd791009b2735da128c3d19e187aa15/sherpa_onnx-1.10.44-cp39-cp39-macosx_11_0_universal2.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "be5ec51ee0fa60ea3afabdba8a1b1318a99eb0dc5424357694b166a095cdd00d",
"md5": "a30aff9a37dc3b2b387fb20c74821134",
"sha256": "670edb719031ea120c2c0fffa5706e2219b036bb38dcf55cc26aba0f985136a0"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp39-cp39-macosx_11_0_x86_64.whl",
"has_sig": false,
"md5_digest": "a30aff9a37dc3b2b387fb20c74821134",
"packagetype": "bdist_wheel",
"python_version": "cp39",
"requires_python": ">=3.6",
"size": 19247821,
"upload_time": "2025-02-13T11:07:25",
"upload_time_iso_8601": "2025-02-13T11:07:25.045410Z",
"url": "https://files.pythonhosted.org/packages/be/5e/c51ee0fa60ea3afabdba8a1b1318a99eb0dc5424357694b166a095cdd00d/sherpa_onnx-1.10.44-cp39-cp39-macosx_11_0_x86_64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "3c50c8f817c95aaca49f9cf484466314841527673e8cc244f30065e412806693",
"md5": "5ee2955c6fe903c9c16a9089b157095d",
"sha256": "affa68b3d41bc90de67b98fb1289f420365ebd9405e6f7c25ba8340f20d54640"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"has_sig": false,
"md5_digest": "5ee2955c6fe903c9c16a9089b157095d",
"packagetype": "bdist_wheel",
"python_version": "cp39",
"requires_python": ">=3.6",
"size": 20423168,
"upload_time": "2025-02-13T11:01:58",
"upload_time_iso_8601": "2025-02-13T11:01:58.361688Z",
"url": "https://files.pythonhosted.org/packages/3c/50/c8f817c95aaca49f9cf484466314841527673e8cc244f30065e412806693/sherpa_onnx-1.10.44-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "10c7ba7d3f393c7b174122fb2b8f83cfb7d2f7c6d73c82f1c143d6561970935f",
"md5": "5a456b56fd7a7c2b97b1caadafa70522",
"sha256": "da5bd17a98906001bb3b6bf9254e9bcec8ce690e22c040bd4e2a1901a1f57080"
},
"downloads": -1,
"filename": "sherpa_onnx-1.10.44-cp39-cp39-win32.whl",
"has_sig": false,
"md5_digest": "5a456b56fd7a7c2b97b1caadafa70522",
"packagetype": "bdist_wheel",
"python_version": "cp39",
"requires_python": ">=3.6",
"size": 19274289,
"upload_time": "2025-02-13T10:48:59",
"upload_time_iso_8601": "2025-02-13T10:48:59.962901Z",
"url": "https://files.pythonhosted.org/packages/10/c7/ba7d3f393c7b174122fb2b8f83cfb7d2f7c6d73c82f1c143d6561970935f/sherpa_onnx-1.10.44-cp39-cp39-win32.whl",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-02-13 11:09:39",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "k2-fsa",
"github_project": "sherpa-onnx",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "sherpa-onnx"
}
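
The metadata above is the record PyPI serves for release 1.10.44, and each entry in `urls` carries the published `digests` for one wheel. As a minimal sketch of how to use it, assuming the standard PyPI JSON endpoint `https://pypi.org/pypi/<project>/<version>/json` and a hypothetical local file path passed on the command line, the snippet below fetches this record and checks a downloaded wheel against its published `sha256`:

```python
# Minimal sketch: verify a downloaded sherpa-onnx wheel against the
# "digests.sha256" values published in the PyPI JSON metadata above.
# Assumes the standard endpoint https://pypi.org/pypi/<name>/<version>/json;
# the wheel path given on the command line is a hypothetical local file.
import hashlib
import json
import sys
from pathlib import Path
from urllib.request import urlopen


def expected_sha256(project: str, version: str) -> dict:
    """Map each released filename to its published sha256 digest."""
    url = f"https://pypi.org/pypi/{project}/{version}/json"
    with urlopen(url) as resp:
        meta = json.load(resp)
    return {f["filename"]: f["digests"]["sha256"] for f in meta["urls"]}


def verify(wheel_path: Path, digests: dict) -> bool:
    """Hash the local file and compare with the digest listed for its name."""
    actual = hashlib.sha256(wheel_path.read_bytes()).hexdigest()
    return digests.get(wheel_path.name) == actual


if __name__ == "__main__":
    digests = expected_sha256("sherpa-onnx", "1.10.44")
    wheel = Path(sys.argv[1])  # e.g. sherpa_onnx-1.10.44-cp310-cp310-win_amd64.whl
    print("OK" if verify(wheel, digests) else "MISMATCH")
```

Run it as, say, `python check_wheel.py sherpa_onnx-1.10.44-cp310-cp310-win_amd64.whl`; the digests it compares against are exactly the `sha256` values listed in the `urls` entries above.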