| Field | Value |
| ----- | ----- |
| Name | llmaid |
| Version | 0.1.1 |
| Summary | A zero‑dependency wrapper that turns any OpenAI‑compatible endpoint into a hackable one‑liner. |
| home_page | None |
| upload_time | 2025-07-23 00:30:27 |
| maintainer | None |
| docs_url | None |
| author | ant-strudel |
| requires_python | >=3.8 |
| license | MIT |
| keywords | llm, ai, client, openai, anthropic |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

*A zero‑dependency wrapper that turns any OpenAI‑compatible endpoint into a one‑liner.*
---
This README features a quick-start guide and simple examples of the main features. Check out the other documentation as needed:
- [Full API reference](https://github.com/ant-strudl/llmaid/blob/main/docs/Public%20API%20Reference.md)
- [Test specifications](https://github.com/ant-strudl/llmaid/tree/main/specs) (Gherkin-style exhaustive tests for all public API features)
- [Contributing guide for all](CONTRIBUTING.md) (how to run tests, add new features, etc.)
- [Contributing guide for humans ONLY. ANY AI IS FORBIDDEN.](CONTRIBUTING_HUMAN.md) (how to effectively collaborate with AI tools while ensuring code quality and project integrity)
## Installation
```bash
pip install llmaid
```
### Environment‑variable fallback
LLMAid will look for these variables at import time—pass arguments only when you want to override them.
| Variable | Purpose | Default |
| ------------------------ | -------------------------------------------------- | ------------------------- |
| `LLMAID_BASE_URL` | Backend URL | `http://127.0.0.1:17434` |
| `LLMAID_SECRET` | Auth key / token | *(none)* |
| `LLMAID_MODEL` | Model name | `mistral-large-v0.1` |
More in [Public API Reference](https://github.com/ant-strudl/llmaid/blob/main/docs/Public%20API%20Reference.md).
---
## Quick start
### One‑liner
```python
# set your environment variables before importing llmaid
from llmaid import llmaid
llmaid().completion("You are a hello machine, only say hello!")
# -> "hello!"
```
### Manual configuration
```python
from llmaid import llmaid
hello_machine = llmaid(
    base_url="https://openrouter.ai/api/v1",
    secret="<your-secret>",
    model="mistral-large-v0.1",
)
hello_machine.completion("You are a hello machine, only say hello!")
# -> "hello!"
```
## Cloning instances
Need a new instance with a different config? Every `llmaid` instance is **callable**:
- settings such as `base_url`, `secret`, and `model` can be overridden at call time
- the call returns a *clone* whose settings are merged with those of the instance it was called on
```python
# point the same instance at another model just for this call
hello_machine(
    model="qwen-2.5b-instruct"
).completion("You are a hello machine, only say hello!")
# -> "hello!"

# or save a derived clone for later
another_machine = hello_machine(
    base_url="https://another-backend.ai/api/v1",
    secret="<another-secret>",
    model="deepseek-2.7b-instruct"
)
```
---
## Prompt templating
You can use prompt templates to create reusable prompts with placeholders for parameters. This allows you to define a prompt once and fill in the parameters dynamically when making completions.
```python
# set your environment variables before importing llmaid
from llmaid import llmaid
anything_machine = llmaid().prompt_template("You are a {{role}} machine, only say {{action}}!")
# you can also use llmaid().system_prompt(...) as an alias for prompt_template
anything_machine.completion(role="hello", action="hello") # -> "hello!"
anything_machine.completion(role="goodbye", action="goodbye") # -> "goodbye!"
# Derive a new independent llmaid instance with a different prompt and the same settings on the fly
doer = anything_machine.prompt_template(
    "You are a {{role}}. Do your best to {{action}}!"
)
doer.completion("Why is the sky blue?", role="scientist", action="explain")
```
`llmaid.completion()` supports any number of positional and keyword arguments.
- Positional arguments are joined with a newline character and appended to the end of the prompt template.
- Keyword arguments are used to fill in the placeholders in the prompt template.
Here is an example to illustrate this:
```python
doer.completion(
    "Why is the sky blue?",
    "No really, why?",
    role="scientist",
    action="explain"
)
# results in the following prompt being passed to the LLM backend:
"""
You are a scientist. Do your best to explain!
Why is the sky blue?
No really, why?
"""
```
Note from the author: I designed llmaid with text completion in mind rather than chat completion, which explains the logic illustrated above. I plan to add chat completion support in the future. Also check out the [chat history handling example](#handling-chat-history) below.
---
## Streaming responses
A minimal working example using asyncio:
```python
# set your environment variables before importing llmaid
import asyncio
from llmaid import llmaid

async def main():
    anything_machine = llmaid().prompt_template("You are a {{role}} machine, only say {{action}}!")
    async for token in anything_machine.stream(role="hello", action="hello"):
        print(token, end="", flush=True)

asyncio.run(main())
```
## Async support
LLMAid offers synchronous, asynchronous, and streaming completion methods (example below):
- `completion(input: str, **kwargs) -> str` - synchronous completion method that returns the full response as a string.
- `acompletion(input: str, **kwargs) -> str` - asynchronous completion method that returns the full response as a string. You can use it with `await`.
- `stream(input: str, **kwargs) -> AsyncIterator[str]` - asynchronous streaming method that returns an async iterator yielding response tokens as they arrive. You can use it with `async for`.
---
## More Examples
### Advanced prompt templating
You don't **have** to pass `llmaid.completion` any arguments at all; it all depends on how you set up your prompt template.
```python
# set your environment variables before importing llmaid
from llmaid import llmaid
spanish_translator = llmaid().prompt_template("""You are a master Spanish translator.
Translate any text you're given without any explanation or comments.

The text that follows right after the triple dash is the text you should translate to Spanish.
---""")
spanish_translator.completion("The sky is blue because of the way the Earth's atmosphere scatters light from the sun.")
# -> "El cielo es azul debido a la forma en que la atmósfera de la Tierra dispersa la luz del sol."
```
You can also use only keyword arguments while still including the user input in the prompt template.
```python
spanish_translator2 = spanish_translator.prompt_template(
    "Translate this text to Spanish: {{user_input}}",
)
spanish_translator2.completion(user_input="The sky is blue because of the way the Earth's atmosphere scatters light from the sun.")
# -> "El cielo es azul debido a la forma en que la atmósfera de la Tierra dispersa la luz del sol."
```
### Handling chat history
As a text-completion-first module, LLMAid does not have built-in chat history management (yet), but you can easily implement it using the prompt templating feature. Here's a simple example:
```python
# set your environment variables before importing llmaid
from llmaid import llmaid
helpful_assistant = llmaid().prompt_template("""
    You are a helpful assistant. You will be given a chat history and the next user input.
    Your task is to respond to the user input based on your knowledge and taking into account the chat history.

    Chat history:
    {{chat_history}}
    End of chat history.

    User input:
    {{user_input}}
    End of user input.

    Your response should be concise and relevant to the user input.
    Only refer to the chat history if it is relevant to the user input.
""")
chat_history = "\n".join([
    "User: What is the capital of France?",
    "Assistant: The capital of France is Paris.",
])
response = helpful_assistant.completion(
    chat_history=chat_history,
    user_input="How old are you?"
)
# -> "I am an AI and do not have an age like humans do, but I can provide more information about Paris or any other topic you are interested in."
```
### Advanced prompt management
Prompt files can become lengthy and cumbersome to manage. LLMAid supports concatenating multiple prompt files together, allowing you to split your prompts into smaller, manageable pieces.
```python
# set your environment variables before importing llmaid
from pathlib import Path
from llmaid import llmaid
scientist_summary = llmaid(prompt_template_dir=Path("./prompts")).prompt_template(
    "scientist_prompt.txt",  # roles/contexts can be split up
    "summary_prompt.txt"     # and concatenated in order
)
scientist_summary.completion("Why is the sky blue?")
# -> "The sky appears blue due to the scattering of sunlight by the Earth's atmosphere. This phenomenon is known as Rayleigh scattering, which causes shorter wavelengths of light (blue) to be scattered more than longer wavelengths (red)."
```
## Feature matrix
| Feature | Status | Notes |
| ---------------------------------- | ------ | --------------------------- |
| Synchronous completion | ✅ | `completion()` |
| Asynchronous completion | ✅ | `await acompletion()` |
| Streaming tokens | ✅ | `async for ... in stream()` |
| Prompt templating (Jinja‑like) | ✅ | File or inline strings |
| Prompt directories & concatenation | ✅ | Pass multiple paths |
| Environment‑variable config | ✅ | Zero‑code setup |
| Exponential back‑off & retry | ✅ | Built‑in, configurable |
| Easy logging and debugging | 🕗 | Planned |
| Output‑format enforcement | 🕗 | Planned (JSON/YAML/XML) |
| Output critic and validation | 🕗 | Planned (guards, quality control...) |
| Built-in chat history handling | 🕗 | Planned |
| Pydantic template validation | 🕗 | Planned |
| File attachments | 🕗 | Planned |
| Tools / agent actions | 🕗 | Planned |
| MCP support | ❓ | Considering |
## Other todos for later
- quickstart video in the terminal
- auto-deploy to PyPI with GitHub Actions
---
## License
LLMAid is released under the **MIT License**—see the [`LICENSE`](LICENSE.md) file for full text.
---
## Building for PyPI
To build LLMAid for PyPI, run the following commands in the root directory of the project:
```bash
rm -rf dist
.venv/bin/pip install --upgrade build
.venv/bin/python -m build
.venv/bin/pip install --upgrade twine
.venv/bin/twine upload dist/*
```
## About
I built LLMAid because I kept rewriting the same boilerplate for different OpenAI‑compatible backends. The goals are:
* **Simplicity** – no runtime dependencies, no hidden magic.
* **Flexibility** – override anything at call‑time.
* **Speed** – prototype in one import.
[Full reference documentation](https://github.com/ant-strudl/llmaid/blob/main/docs/Public%20API%20Reference.md) is available.
## Raw data
```json
{
"_id": null,
"home_page": null,
"name": "llmaid",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "llm, ai, client, openai, anthropic",
"author": "ant-strudel",
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/99/00/96d1ac7e01a3e1f88bdf31e7755ebd826d8d0edd677087cadc95e7cc7758/llmaid-0.1.1.tar.gz",
"platform": null,
"description": "\n\n*A zero\u2011dependency wrapper that turns any OpenAI\u2011compatible endpoint into a one\u2011liner.*\n\n---\n\nThis README features a quick start guide, and simple examples for main features. Do checkout other documentation as needed:\n- [Full API reference](https://github.com/ant-strudl/llmaid/blob/main/docs/Public%20API%20Reference.md)\n- [Test specifications](https://github.com/ant-strudl/llmaid/tree/main/specs) (Gherkin-style exhaustive tests for all public API features)\n- [Contributing guide for all](CONTRIBUTING.md) (how to run tests, add new features, etc.)\n- [Contributing guide for humans ONLY. ANY AI IS FORBIDDEN.](CONTRIBUTING_HUMAN.md) (how to effectively collaborate with AI tools while ensuring code quality and project integrity)\n\n##\u202fInstallation\n\n```bash\npip install llmaid\n```\n\n### Environment\u2011variable fallback\n\nLLMAid will look for these variables at import time\u2014pass arguments only when you want to override them.\n\n| Variable | Purpose | Default |\n| ------------------------ | -------------------------------------------------- | ------------------------- |\n| `LLMAID_BASE_URL` | Backend URL | `http://127.0.0.1:17434` |\n| `LLMAID_SECRET` | Auth key / token | *(none)* |\n| `LLMAID_MODEL` | Model name | `mistral-large-v0.1` |\n\nMore in [Public API Reference](https://github.com/ant-strudl/llmaid/blob/main/docs/Public%20API%20Reference.md).\n\n---\n\n## Quick start\n\n### One\u2011liner\n\n```python\n# import your environment variables before using llmaid\nfrom llmaid import llmaid\n\nllmaid().completion(\"You are a hello machine, only say hello!\")\n# -> \"hello!\"\n```\n\n### Manual configuration\n\n```python\nfrom llmaid import llmaid\n\nhello_machine = llmaid(\n base_url=\"https://openrouter.ai/api/v1\",\n secret=\"<your-secret>\",\n model=\"mistral-large-v0.1\",\n)\n\nhello_machine.completion(\"You are a hello machine, only say hello!\")\n# -> \"hello!\"\n```\n\n## Cloning instances \n\nNeed a new instance with different config? `llmaid` is **callable**:\n- settings can be overridden at call time, e.g. `base_url`, `secret`, `model`, etc.\n- the call returns a *clone* with merged settings from the instance that was called.\n\n```python\n# point the same instance at another model just for this call\nhello_machine(\n model=\"qwen-2.5b-instruct\"\n).completion(\"You are a hello machine, only say hello!\")\n# -> \"hello!\"\n\n# or save a derived clone for later\nanother_machine = hello_machine(\n base_url=\"https://another-backend.ai/api/v1\",\n secret=\"<another-secret>\",\n model=\"deepseek-2.7b-instruct\"\n)\n```\n\n---\n\n## Prompt templating\n\nYou can use prompt templates to create reusable prompts with placeholders for parameters. This allows you to define a prompt once and fill in the parameters dynamically when making completions.\n\n```python\n# import your environment variables before using llmaid\nfrom llmaid import llmaid\n\nanything_machine = llmaid().prompt_template(\"You are a {{role}} machine, only say {{action}}!\")\n# you can also use llmaid().system_prompt(...) as an alias for prompt_template\n\nanything_machine.completion(role=\"hello\", action=\"hello\") # -> \"hello!\"\nanything_machine.completion(role=\"goodbye\", action=\"goodbye\") # -> \"goodbye!\"\n\n# Derive a new independant llmaid instance with a different prompt and same settings on the fly\ndoer = anything_machine.prompt_template(\n \"You are a {{role}}. 
Do your best to {{action}}!\"\n)\n\ndoer.completion(\"Why is the sky blue?\", role=\"scientist\", action=\"explain\")\n```\n\n`llmaid.completion()` supports any number of positional and keyword arguments.\n- Positional arguments are joined with a newline character and appended to the end of the prompt template.\n- Keyword arguments are used to fill in the placeholders in the prompt template.\n\nHere is an example to illustrate this:\n\n```python\ndoer.completion(\n \"Why is the sky blue?\",\n \"No really, why?\",\n role=\"scientist\",\n action=\"explain\"\n)\n# will result in following prompt being passed to the LLM backend:\n\"\"\"\nYou are a scientist. Do your best to explain!\nWhy is the sky blue?\nNo really, why?\n\"\"\"\n```\n\nNote from the author: I have designed llmaid with text completion in mind rather than chat completion. Which explains the logic illustrated above. I do plan to add chat completion support in the future. Also checkout the [chat history handling example](#handling-chat-history) below.\n\n---\n\n## Streaming responses\n\nMinimum working example using asyncio.\n\n```python\n# prepare your environment variables before using llmaid\nimport asyncio, llmaid\n\nasync def main():\n anything_machine = llmaid().prompt_template(\"You are a {{role}} machine, only say {{action}}!\")\n\n async for token in anything_machine.stream(role=\"hello\", action=\"hello\"):\n print(token, end=\"\", flush=True)\n\nasyncio.run(main())\n```\n\n## Async support\n\nAll LLMAid completion methods have an async counterpart that can be used with `await`:\n- `completion(input: str, **kwargs) -> str` - synchronous completion method that returns the full response as a string.\n- `acompletion(input: str, **kwargs) -> str` - asynchronous completion method that returns the full response as a string. You can use it with `await`.\n- `stream(input: str, **kwargs) -> AsyncIterator[str]` - asynchronous streaming method that returns an async iterator yielding response tokens as they arrive. You can use it with `async for`.\n\n---\n\n---\n\n## More Examples\n\n### Advance prompt templating\n\nYou don't **have** to pass `llmaid.completion` any arguments at all, it all depends on how you set up your prompt template.\n\n```python\n# import your environment variables before using llmaid\nfrom llmaid import llmaid\n\nspanish_translator = llmaid().prompt_template(\"\"\"You are a master spanish translator.\nTranslate Any text you're given without any explanation or comments.\n\nThe text that follow right after the triple dash is the text you should translate to spanish.\n---\"\"\")\nspanish_translator.completion(\"The sky is blue because of the way the Earth's atmosphere scatters light from the sun.\")\n# -> \"El cielo es azul debido a la forma en que la atm\u00f3sfera de la Tierra dispersa la luz del sol.\"\n```\n\nAlso, you can use only keyword arguments, but still have a user input in the prompt template.\n\n```python\nspanish_translator2 = spanish_translator.prompt_template(\n \"Translate this text to spanish: {{user_input}}\",\n)\n\nspanish_translator2.completion(user_input=\"The sky is blue because of the way the Earth's atmosphere scatters light from the sun.\")\n# -> \"El cielo es azul debido a la forma en que la atm\u00f3sfera de la Tierra dispersa la luz del sol.\"\n```\n\n### Handling chat history\n\nAs a text-completion first module, LLMAid does not have built-in chat history management (yet), but you can easily implement it using the prompt templating feature. 
Here's a simple example:\n\n```python\n# import your environment variables before using llmaid\nfrom llmaid import llmaid\n\nhelpful_assistant = llmaid().prompt_template(\"\"\"\n You are a helpful assistant. You will be given a chat history and the next user input.\n Your task is to respond to the user input based on your knowledge and taking into account the chat history.\n\n Chat history:\n {{chat_history}}\n End of chat history.\n\n User input:\n {{user_input}}\n End of user input.\n\n Your response should be concise and relevant to the user input.\n Only refer to the chat history if it is relevant to the user input.\n\"\"\")\n\nchat_history = [\n \"User: What is the capital of France?\",\n \"Assistant: The capital of France is Paris.\",\n].join(\"\\n\")\n\nresponse = helpful_assistant.completion(\n chat_history=chat_history,\n user_input=\"How old are you?\"\n)\n# -> \"I am an AI and do not have an age like humans do, but I can provide more information about Paris or any other topic you are interested in.\"\n```\n\n### Advanced prompt management\n\nPrompt files can become lengthy and cumbersome to manage when modularizing your prompts. LLMAid supports concatenating multiple prompt files together, allowing you to split your prompts into smaller, manageable pieces.\n\n```python\n# import your environment variables before using llmaid\nfrom pathlib import Path\nfrom llmaid import llmaid\n\nscientist_summary = llmaid(prompt_template_dir=Path(\"./prompts\")).prompt_template(\n \"scientist_prompt.txt\", # roles/contexts can be split up\n \"summary_prompt.txt\" # and concatenated in order\n)\n\nscientist_summary.completion(\"Why is the sky blue?\")\n# -> \"The sky appears blue due to the scattering of sunlight by the Earth's atmosphere. This phenomenon is known as Rayleigh scattering, which causes shorter wavelengths of light (blue) to be scattered more than longer wavelengths (red).\"\n```\n\n## Feature matrix\n\n| Feature | Status | Notes |\n| ---------------------------------- | ------ | --------------------------- |\n| Synchronous completion | \u2705 | `completion()` |\n| Asynchronous completion | \u2705 | `await acompletion()` |\n| Streaming tokens | \u2705 | `async for ... in stream()` |\n| Prompt templating (Jinja\u2011like) | \u2705 | File or inline strings |\n| Prompt directories & concatenation | \u2705 | Pass multiple paths |\n| Environment\u2011variable config | \u2705 | Zero\u2011code setup |\n| Exponential back\u2011off & retry | \u2705 | Built\u2011in, configurable |\n| Easy logging and debugging | \ud83d\udd57 | Planned |\n| Output\u2011format enforcement | \ud83d\udd57 | Planned (JSON/YAML/XML) |\n| Output critic and validation | \ud83d\udd57 | Planned (guards, quality control...) 
|\n| Built-in chat history handling | \ud83d\udd57 | Planned |\n| Pydantic template validation | \ud83d\udd57 | Planned |\n| File attachments | \ud83d\udd57 | Planned |\n| Tools / agent actions | \ud83d\udd57 | Planned |\n| MCP support | \u2753 | Considering |\n\n## Other todos for later\n\n- quickstart video in terminal\n- auto deploy to PyPI with github actions\n\n---\n\n## License\n\nLLMAid is released under the **MIT License**\u2014see the [`LICENSE`](LICENSE.md) file for full text.\n\n---\n\n## Building for PyPI\n\nTo build LLMAid for PyPI, run the following commands in the root directory of the project:\n\n```bash\nrm -rf dist\n.venv/bin/pip install --upgrade build\n.venv/bin/python -m build\n.venv/bin/pip install --upgrade twine\n.venv/bin/twine upload dist/*\n```\n\n## About\n\nI built LLMAid because I kept rewriting the same boilerplate for different OpenAI\u2011compatible backends. The goals are:\n\n* **Simplicity** \u2013 no runtime dependencies, no hidden magic.\n* **Flexibility** \u2013 override anything at call\u2011time.\n* **Speed** \u2013 prototype in one import.\n\n[Full reference documentation](https://github.com/ant-strudl/llmaid/blob/main/docs/Public%20API%20Reference.md) is available \n",
"bugtrack_url": null,
"license": "MIT",
"summary": "A zero\u2011dependency wrapper that turns any OpenAI\u2011compatible endpoint into a hackable one\u2011liner.",
"version": "0.1.1",
"project_urls": {
"Bug Tracker": "https://github.com/ant-strudl/llmaid/issues",
"Documentation": "https://github.com/ant-strudl/llmaid/blob/main/docs/Public%20API%20Reference.md",
"Homepage": "https://github.com/ant-strudl/llmaid",
"Repository": "https://github.com/ant-strudl/llmaid"
},
"split_keywords": [
"llm",
" ai",
" client",
" openai",
" anthropic"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "3e7695b94ba2495d6d0f2f8208f2e5e35799d695b14f41fbc99c0ec97fd97181",
"md5": "553be0b8fd0ec815505f8995ae0884af",
"sha256": "884bec4888c58a990a69b51b66d361a3d22f87c677774a69f6fe4e0e5baef298"
},
"downloads": -1,
"filename": "llmaid-0.1.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "553be0b8fd0ec815505f8995ae0884af",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 18282,
"upload_time": "2025-07-23T00:30:26",
"upload_time_iso_8601": "2025-07-23T00:30:26.930364Z",
"url": "https://files.pythonhosted.org/packages/3e/76/95b94ba2495d6d0f2f8208f2e5e35799d695b14f41fbc99c0ec97fd97181/llmaid-0.1.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "990096d1ac7e01a3e1f88bdf31e7755ebd826d8d0edd677087cadc95e7cc7758",
"md5": "899d032280203e0bb206203bb0447e1f",
"sha256": "b530c5fd4afb60669ffa6410f6a1c0ff9f6b0945a7b7e2c5c58cda1d341c4242"
},
"downloads": -1,
"filename": "llmaid-0.1.1.tar.gz",
"has_sig": false,
"md5_digest": "899d032280203e0bb206203bb0447e1f",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 42446,
"upload_time": "2025-07-23T00:30:27",
"upload_time_iso_8601": "2025-07-23T00:30:27.967853Z",
"url": "https://files.pythonhosted.org/packages/99/00/96d1ac7e01a3e1f88bdf31e7755ebd826d8d0edd677087cadc95e7cc7758/llmaid-0.1.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-23 00:30:27",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "ant-strudl",
"github_project": "llmaid",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "llmaid"
}
```