# `modelly_client`: Use a Modelly app as an API -- in 3 lines of Python
This directory contains the source code for `modelly_client`, a lightweight Python library that makes it very easy to use any Modelly app as an API.
As an example, consider this [Hugging Face Space that transcribes audio files](https://huggingface.co/spaces/abidlabs/whisper) that are recorded from the microphone.
![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/modelly-guides/whisper-screenshot.jpg)
Using the `modelly_client` library, we can easily use this Space as an API to transcribe audio files programmatically.
Here's the entire code to do it:
```python
from modelly_client import Client
client = Client("abidlabs/whisper")
client.predict("audio_sample.wav")
>> "This is a test of the whisper speech recognition model."
```
The Modelly client works with any Modelly Space, whether it be an image generator, a stateful chatbot, or a tax calculator.
## Installation
If you already have a recent version of `modelly` installed, then `modelly_client` is included with it as a dependency.
Otherwise, the lightweight `modelly_client` package can be installed from pip (or pip3) and works with Python versions 3.10 or higher:
```bash
$ pip install modelly_client
```
## Basic Usage
### Connecting to a Space or a Modelly app
Start by instantiating a `Client` object and connecting it to a Modelly app that is running on Spaces (or anywhere else)!
**Connecting to a Space**
```python
from modelly_client import Client
client = Client("abidlabs/en2fr") # a Space that translates from English to French
```
You can also connect to private Spaces by passing in your HF token with the `hf_token` parameter. You can get your HF token here: https://huggingface.co/settings/tokens
```python
from modelly_client import Client
client = Client("abidlabs/my-private-space", hf_token="...")
```
**Duplicating a Space for private use**
While you can use any public Space as an API, Hugging Face may rate-limit you if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private copy, and then use it to make as many requests as you'd like!
The `modelly_client` library includes a class method, `Client.duplicate()`, to make this process simple:
```python
from modelly_client import Client
client = Client.duplicate("abidlabs/whisper")
client.predict("audio_sample.wav")
>> "This is a test of the whisper speech recognition model."
```
If you have previously duplicated a Space, re-running `duplicate()` will _not_ create a new Space. Instead, the Client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate()` method multiple times.
**Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 1 hour of inactivity. You can also set the hardware using the `hardware` parameter of `duplicate()`.
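To get a feel for the billing note above, here is a rough back-of-envelope sketch. The hourly rate below is an illustrative assumption, not a published Hugging Face price; check the current hardware pricing for your account before duplicating.

```python
# Hypothetical cost estimate for a duplicated GPU Space.
# The $0.60/hour rate is an ASSUMPTION for illustration only.
hourly_rate = 0.60            # assumed GPU price per hour (hypothetical)
active_hours_per_day = 2      # hours you actually send requests
sleep_buffer_hours = 1        # Space stays awake ~1 hour after the last request

daily_cost = hourly_rate * (active_hours_per_day + sleep_buffer_hours)
monthly_cost = daily_cost * 30
print(f"roughly ${monthly_cost:.2f}/month")
```

Because the Space sleeps after an hour of inactivity, short bursts of usage cost far less than keeping the hardware running continuously.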
**Connecting to a general Modelly app**
If your app is running somewhere else, just provide the full URL instead, including the "http://" or "https://". Here's an example of making predictions with a Modelly app that is running at a share URL:
```python
from modelly_client import Client
client = Client("https://bec81a83-5b5c-471e.modelly.live")
```
### Inspecting the API endpoints
Once you have connected to a Modelly app, you can view the APIs that are available to you by calling the `.view_api()` method. For the Whisper Space, we see the following:
```
Client.predict() Usage Info
---------------------------
Named API endpoints: 1
- predict(input_audio, api_name="/predict") -> value_0
Parameters:
- [Audio] input_audio: str (filepath or URL)
Returns:
- [Textbox] value_0: str (value)
```
This shows us that this Space has 1 API endpoint and how to use it to make a prediction: we should call the `.predict()` method, providing a parameter `input_audio` of type `str`, which is a `filepath or URL`.
We should also provide the `api_name='/predict'` argument. Although this isn't necessary if a Modelly app has a single named endpoint, it does allow us to call different endpoints in a single app if they are available. If an app has unnamed API endpoints, these can also be displayed by running `.view_api(all_endpoints=True)`.
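Since named endpoints always start with a slash, code that builds `api_name` values dynamically may want to normalize user-supplied names first. A minimal sketch (the helper name is ours, not part of the library):

```python
def normalize_api_name(name: str) -> str:
    """Hypothetical convenience helper: accept "predict" or "/predict"
    and always return the slash-prefixed form that endpoints use."""
    return name if name.startswith("/") else "/" + name

# e.g. client.predict("audio_sample.wav", api_name=normalize_api_name("predict"))
print(normalize_api_name("predict"))   # "/predict"
print(normalize_api_name("/predict"))  # "/predict"
```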
### Making a prediction
The simplest way to make a prediction is to call the `.predict()` method with the appropriate arguments:
```python
from modelly_client import Client
client = Client("abidlabs/en2fr")
client.predict("Hello")
>> "Bonjour"
```
If there are multiple parameters, then you should pass them as separate arguments to `.predict()`, like this:
```python
from modelly_client import Client
client = Client("modelly/calculator")
client.predict(4, "add", 5)
>> 9.0
```
For certain inputs, such as images, you should pass in the filepath or URL to the file. Likewise, for the corresponding output types, you will get a filepath or URL returned.
```python
from modelly_client import Client
client = Client("abidlabs/whisper")
client.predict("https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3")
>> "My thought I have nobody by a beauty and will as you poured. Mr. Rochester is serve in that so don't find simpus, and devoted abode, to at might in a r—"
```
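Because a file-typed output may come back either as a local filepath or as a URL, downstream code often needs to tell the two apart before opening or downloading the result. A small sketch, assuming the return value is a string as described above (the helper name is ours, not part of `modelly_client`):

```python
from urllib.parse import urlparse

def is_remote(result: str) -> bool:
    """Hypothetical helper: True when a predict() result is an http(s)
    URL rather than a local filepath."""
    return urlparse(result).scheme in ("http", "https")

print(is_remote("https://example.com/out.wav"))  # True
print(is_remote("/tmp/out.wav"))                 # False
```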
## Advanced Usage
For more ways to use the Modelly Python Client, check out our dedicated Guide on the Python client, available here: https://www.modelly.khulnasoft.com/guides/getting-started-with-the-python-client