# llm-requesty
[PyPI](https://pypi.org/project/llm-requesty/)
[Changelog](https://github.com/rajashekar/llm-requesty/releases)
[Tests](https://github.com/rajashekar/llm-requesty/actions?query=workflow%3ATest)
[License](https://github.com/rajashekar/llm-requesty/blob/main/LICENSE)
[LLM](https://llm.datasette.io/) plugin for models hosted by [Requesty](https://requesty.ai/)
## Installation
First, [install the LLM command-line utility](https://llm.datasette.io/en/stable/setup.html).
Now install this plugin in the same environment as LLM:
```bash
llm install llm-requesty
```
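Once installed, the plugin should show up in the output of `llm plugins`:
```bash
llm plugins
```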
## Configuration
You will need an API key from Requesty. You can [obtain one here](https://app.requesty.ai/analytics).
You can set that as an environment variable called `REQUESTY_KEY`, or add it to the `llm` set of saved keys using:
```bash
llm keys set requesty
```
```
Enter key: <paste key here>
```
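If you prefer the environment-variable route, a minimal sketch (assuming the variable name is `REQUESTY_KEY`, matching the key name above):
```bash
# Assumes the plugin reads the key from REQUESTY_KEY
export REQUESTY_KEY='<paste key here>'
```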
## Usage
To list available models, run:
```bash
llm models list
```
You should see a list that looks something like this:
```
requesty: requesty/deepinfra/meta-llama/Meta-Llama-3.1-405B-Instruct
requesty: requesty/deepinfra/Qwen/Qwen2.5-72B-Instruct
requesty: requesty/deepinfra/meta-llama/Llama-3.3-70B-Instruct
...
```
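To show only the Requesty models, you can filter the listing with `-q`:
```bash
llm models list -q requesty
```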
In Requesty, you need to approve the models you want to use before you can prompt them. To do this, open the [Admin Panel](https://app.requesty.ai/admin-panel?tab=models) and use "Add Model" to add the models you want to use.
To run a prompt against a model, pass its full model ID to the `-m` option, like this:
```bash
llm -m requesty/google/gemini-2.5-flash-lite-preview-06-17 "Five spooky names for a pet tarantula"
```
You can set a shorter alias for a model using the `llm aliases` command like so:
```bash
llm aliases set llama3.3 requesty/deepinfra/meta-llama/Llama-3.3-70B-Instruct
```
Now you can prompt the model using:
```bash
cat llm_requesty.py | llm -m llama3.3 -s 'write some pytest tests for this'
```
### Vision models
Some Requesty models can accept image attachments. Run this command:
```bash
llm models --options -q requesty
```
And look for models that list these attachment types:
```
  Attachment types:
    application/pdf, image/gif, image/jpeg, image/png, image/webp
```
You can feed these models images as URLs or file paths, for example:
```bash
curl https://static.simonwillison.net/static/2024/pelicans.jpg | llm \
-m requesty/google/gemini-2.5-pro 'describe this image' -a -
```
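Attachments can also be passed directly rather than piped via stdin; for example, assuming a local file named `pelicans.jpg`:
```bash
# -a also accepts a local file path (or a URL)
llm -m requesty/google/gemini-2.5-pro 'describe this image' -a pelicans.jpg
```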
### Auto caching
Requesty supports auto caching to improve response times and reduce costs for repeated requests. Enable this feature using the `-o cache 1` option:
```bash
llm -m requesty/deepinfra/meta-llama/Llama-3.3-70B-Instruct -o cache 1 'explain quantum computing'
```
### Listing models
The `llm models -q requesty` command lists all available models. For more detailed information, use:
```bash
llm requesty models
```
Output starts like this:
```yaml
- id: deepinfra/meta-llama/Meta-Llama-3.1-405B-Instruct
  name: A lightweight and ultra-fast variant of Llama 3.3 70B, for use when quick response times are needed most.
  context_length: 130,815
  supports_schema: True
  pricing: input $0.8/M, output $0.8/M

- id: deepinfra/Qwen/Qwen2.5-72B-Instruct
  name: Qwen3, the latest generation in the Qwen large language model series...
  context_length: 131,072
  supports_schema: True
  pricing: input $0.23/M, output $0.4/M
```
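Models that report `supports_schema: True` should work with llm's schema support; a minimal sketch (assumes llm 0.23 or later, with an invented example schema):
```bash
# Concise llm schema syntax: fields default to string type
llm -m requesty/deepinfra/Qwen/Qwen2.5-72B-Instruct \
  --schema 'name, habitat, fun_fact' \
  'Invent a sea creature'
```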
Add `--json` to get back JSON instead:
```bash
llm requesty models --json
```
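The JSON output is convenient for scripting; for example, assuming each entry carries an `id` field as the YAML output above suggests:
```bash
# Extract just the model IDs (assumes an array of objects with "id")
llm requesty models --json | jq -r '.[].id'
```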
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-requesty
python3 -m venv venv
source venv/bin/activate
```
Now install the dependencies and test dependencies:
```bash
llm install -e '.[test]'
```
To run the tests:
```bash
pytest
```
To update recordings and snapshots, run:
```bash
PYTEST_REQUESTY_KEY="$(llm keys get requesty)" \
pytest --record-mode=rewrite --inline-snapshot=fix
```