# Rhino Speech-to-Intent Engine
Made in Vancouver, Canada by [Picovoice](https://picovoice.ai)
Rhino is Picovoice's Speech-to-Intent engine. It directly infers intent from spoken commands within a given context of
interest, in real-time. For example, given a spoken command:
> Can I have a small double-shot espresso?
Rhino infers that the user would like to order a drink and emits the following inference result:
```json
{
  "isUnderstood": "true",
  "intent": "orderBeverage",
  "slots": {
    "beverage": "espresso",
    "size": "small",
    "numberOfShots": "2"
  }
}
```
Rhino is:
* using deep neural networks trained in real-world environments.
* compact and computationally-efficient, making it perfect for IoT.
* self-service. Developers and designers can train custom models using [Picovoice Console](https://console.picovoice.ai/).
## Compatibility
- Python 3.8+
- Runs on Linux (x86_64), macOS (x86_64, arm64), Windows (x86_64), and Raspberry Pi (Zero, 3, 4, 5).
## Installation
```console
pip3 install pvrhino
```
## AccessKey
Rhino requires a valid Picovoice `AccessKey` at initialization. `AccessKey` acts as your credentials when using Rhino SDKs.
You can get your `AccessKey` for free. Make sure to keep your `AccessKey` secret.
Sign up or log in to [Picovoice Console](https://console.picovoice.ai/) to get your `AccessKey`.
## Usage
Create an instance of the engine:
```python
import pvrhino
access_key = "${ACCESS_KEY}" # AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)
handle = pvrhino.create(access_key=access_key, context_path='/absolute/path/to/context')
```
Where `context_path` is the absolute path to a Speech-to-Intent context, either one created using
[Picovoice Console](https://console.picovoice.ai/) or one of the default contexts available in Rhino's GitHub repository.
The sensitivity of the engine can be tuned using the `sensitivity` parameter. It is a floating-point number within
[0, 1]. A higher sensitivity value results in fewer misses at the cost of (potentially) increasing the erroneous
inference rate.
```python
import pvrhino
access_key = "${ACCESS_KEY}" # AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)
handle = pvrhino.create(access_key=access_key, context_path='/absolute/path/to/context', sensitivity=0.25)
```
Once initialized, the required sample rate is given by `handle.sample_rate` and the expected frame length (number of
audio samples in an input array) by `handle.frame_length`. The engine accepts 16-bit linearly-encoded PCM and operates
on single-channel audio.
```python
def get_next_audio_frame():
    # return the next frame of `handle.frame_length` 16-bit PCM samples
    pass

while True:
    is_finalized = handle.process(get_next_audio_frame())

    if is_finalized:
        inference = handle.get_inference()
        if not inference.is_understood:
            # add code to handle unsupported commands
            pass
        else:
            intent = inference.intent
            slots = inference.slots
            # add code to take action based on inferred intent and slot values
```
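One way to supply audio frames is Picovoice's `pvrecorder` package (installed separately with `pip3 install pvrecorder`). The sketch below assumes `pvrecorder` is available and stops after the first finalized inference; it is an illustration of feeding frames to the engine, not part of the `pvrhino` API itself:

```python
from pvrecorder import PvRecorder

# PvRecorder captures single-channel, 16-bit PCM at 16 kHz, which matches
# Rhino's requirements; the frame length must match the engine's expectation.
recorder = PvRecorder(frame_length=handle.frame_length)
recorder.start()

while True:
    is_finalized = handle.process(recorder.read())
    if is_finalized:
        inference = handle.get_inference()
        if inference.is_understood:
            print(inference.intent, inference.slots)
        break

recorder.stop()
recorder.delete()
```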
When done, resources have to be released explicitly:
```python
handle.delete()
```
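In a long-running application it is good practice to wrap processing in `try`/`finally` so that native resources are released even if an exception interrupts the loop. A minimal sketch, reusing the hypothetical `get_next_audio_frame` from above:

```python
handle = pvrhino.create(access_key=access_key, context_path='/absolute/path/to/context')

try:
    while True:
        if handle.process(get_next_audio_frame()):
            inference = handle.get_inference()
            # add code to act on the inference here
finally:
    # release native resources regardless of how the loop exits
    handle.delete()
```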
## Non-English Contexts
In order to run inference on non-English contexts, you need to use the corresponding model file. The model files for all supported languages are available [here](../../lib/common).
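For example, assuming a German context and the corresponding German model from `lib/common`, the model file is passed through the `model_path` argument of `pvrhino.create` (the file names below are illustrative placeholders):

```python
import pvrhino

access_key = "${ACCESS_KEY}"  # AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)

# use the model file that matches the language of the context
handle = pvrhino.create(
    access_key=access_key,
    context_path='/absolute/path/to/german_context.rhn',
    model_path='/absolute/path/to/rhino_params_de.pv')
```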
## Demos
[pvrhinodemo](https://pypi.org/project/pvrhinodemo/) provides command-line utilities for processing real-time
audio (i.e. microphone) and files using Rhino.
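For example, after installing the demo package with `pip3 install pvrhinodemo`, the microphone demo can be launched along these lines (check `rhino_demo_mic --help` for the exact options in your installed version):

```console
rhino_demo_mic --access_key ${ACCESS_KEY} --context_path /absolute/path/to/context
```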