| Field | Value |
| --- | --- |
| Name | smolllm |
| Version | 0.3.1 |
| Summary | A minimal Python library for interacting with various LLM providers |
| Upload time | 2025-08-12 01:10:31 |
| Home page | None |
| Maintainer | None |
| Docs URL | None |
| Author | None |
| Requires Python | >=3.12 |
| License | MIT |
| Keywords | ai, anthropic, gemini, llm, openai |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
# SmolLLM
A minimal Python library for interacting with various LLM providers, featuring automatic API key load balancing and streaming responses.
## Installation
```bash
# Install from PyPI
pip install smolllm

# Or, with uv, add it as a local path dependency
uv add "smolllm @ ../smolllm"
```
## Quick Start
```python
from dotenv import load_dotenv
import asyncio
from smolllm import ask_llm

# Load environment variables at your application startup
load_dotenv()

async def main():
    response = await ask_llm(
        "Say hello world",
        model="gemini/gemini-2.0-flash",
    )
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
```
## Provider Configuration
Format: `provider/model_name` (e.g., `openai/gpt-4`, `gemini/gemini-2.0-flash`)
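The spec splits on the first `/`, so model names containing dots or dashes pass through unchanged. A minimal parsing sketch (`parse_model` is a hypothetical helper for illustration, not part of smolllm's API):

```python
def parse_model(spec: str) -> tuple[str, str]:
    """Split a "provider/model_name" spec into its two parts."""
    provider, sep, model = spec.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider/model_name', got {spec!r}")
    return provider, model
```

For example, `parse_model("gemini/gemini-2.0-flash")` yields `("gemini", "gemini-2.0-flash")`.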
### API Keys
The library looks for API keys in environment variables following the pattern: `{PROVIDER}_API_KEY`
Example:
```bash
# .env
OPENAI_API_KEY=sk-xxx
# Multiple comma-separated keys are supported
GEMINI_API_KEY=key1,key2
```
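Load balancing across multiple keys can be pictured as picking one key at random from the comma-separated list. A sketch of how such a lookup might work (`pick_api_key` is illustrative, not smolllm's actual implementation):

```python
import os
import random

def pick_api_key(provider: str) -> str:
    """Read {PROVIDER}_API_KEY and choose one comma-separated key at random."""
    raw = os.environ.get(f"{provider.upper()}_API_KEY", "")
    keys = [k.strip() for k in raw.split(",") if k.strip()]
    if not keys:
        raise KeyError(f"no API key found in {provider.upper()}_API_KEY")
    return random.choice(keys)  # naive load balancing across keys
```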
### Custom Base URLs
Override default API endpoints using: `{PROVIDER}_BASE_URL`
Example:
```bash
OPENAI_BASE_URL=https://custom.openai.com/v1
OLLAMA_BASE_URL=http://localhost:11434/v1
```
### Advanced Configuration
You can combine multiple keys and base URLs in several ways:
1. One key with multiple base URLs:
```bash
OLLAMA_API_KEY=ollama
OLLAMA_BASE_URL=http://localhost:11434/v1,http://other-server:11434/v1
```
2. Multiple keys with one base URL:
```bash
GEMINI_API_KEY=key1,key2
GEMINI_BASE_URL=https://api.gemini.com/v1
```
3. Paired keys and base URLs:
```bash
# Must have equal number of keys and URLs
# The library will randomly select matching pairs
GEMINI_API_KEY=key1,key2
GEMINI_BASE_URL=https://api.gemini.com/v1,https://api.gemini.com/v2
```
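The three cases above reduce to: one URL fans out to all keys, one key fans out to all URLs, and equal-length lists are matched index-by-index. A hedged sketch of that selection logic (illustrative only; smolllm's internals may differ):

```python
import os
import random

def pick_endpoint(provider: str) -> tuple[str, str]:
    """Choose a (key, base_url) pair from comma-separated env vars."""
    prefix = provider.upper()
    keys = [k for k in os.environ.get(f"{prefix}_API_KEY", "").split(",") if k]
    urls = [u for u in os.environ.get(f"{prefix}_BASE_URL", "").split(",") if u]
    if not keys:
        raise KeyError(f"{prefix}_API_KEY is not set")
    if len(keys) > 1 and len(urls) > 1 and len(keys) != len(urls):
        raise ValueError("paired keys and base URLs must have equal counts")
    if len(urls) <= 1:
        # One (or no) base URL shared by every key
        return random.choice(keys), urls[0] if urls else ""
    if len(keys) == 1:
        # One key shared by every base URL
        return keys[0], random.choice(urls)
    # Equal-length lists: select a matching (key, URL) pair by index
    i = random.randrange(len(keys))
    return keys[i], urls[i]
```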
## Environment Setup Best Practices
When using SmolLLM in your project, you should handle environment variables at your application level:
1. Create a `.env` file:
```bash
# .env
OPENAI_API_KEY=sk-xxx
GEMINI_API_KEY=xxx,xxx2
ANTHROPIC_API_KEY=sk-xxx
```
2. Load environment variables before using SmolLLM:
```python
from dotenv import load_dotenv

# Load at your application startup
load_dotenv()

# Now you can use SmolLLM
from smolllm import ask_llm
```
## Tips
- Keep sensitive API keys in `.env` (add it to `.gitignore`)
- Create `.env.example` for documentation
- For production, consider using your platform's secret management system
- When using multiple keys, separate with commas (no spaces)
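Putting the tips together, an `.env.example` that documents the expected variables without leaking real secrets might look like this (the variable set shown is illustrative):

```bash
# .env.example — commit this file; keep the real .env out of version control
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=key1,key2
ANTHROPIC_API_KEY=sk-...
# Optional endpoint overrides
OLLAMA_BASE_URL=http://localhost:11434/v1
```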
## Raw data
{
"_id": null,
"home_page": null,
"name": "smolllm",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.12",
"maintainer_email": null,
"keywords": "ai, anthropic, gemini, llm, openai",
"author": null,
"author_email": "RoCry <crysheen@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/e8/b5/74bfda57937e4bfc5807f6a9d0a4b7051ee4d1dfae1a02826e088c054b02/smolllm-0.3.1.tar.gz",
"platform": null,
"description": "# SmolLLM\n\nA minimal Python library for interacting with various LLM providers, featuring automatic API key load balancing and streaming responses.\n\n## Installation\n\n```bash\npip install smolllm\nuv add \"smolllm @ ../smolllm\"\n```\n\n## Quick Start\n\n```python\nfrom dotenv import load_dotenv\nimport asyncio\nfrom smolllm import ask_llm\n\n# Load environment variables at your application startup\nload_dotenv()\n\nasync def main():\n response = await ask_llm(\n \"Say hello world\",\n model=\"gemini/gemini-2.0-flash\"\n )\n print(response)\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n```\n\n## Provider Configuration\n\nFormat: `provider/model_name` (e.g., `openai/gpt-4`, `gemini/gemini-2.0-flash`)\n\n### API Keys\n\nThe library looks for API keys in environment variables following the pattern: `{PROVIDER}_API_KEY`\n\nExample:\n```bash\n# .env\nOPENAI_API_KEY=sk-xxx\nGEMINI_API_KEY=key1,key2 # Multiple keys supported\n```\n\n### Custom Base URLs\n\nOverride default API endpoints using: `{PROVIDER}_BASE_URL`\n\nExample:\n```bash\nOPENAI_BASE_URL=https://custom.openai.com/v1\nOLLAMA_BASE_URL=http://localhost:11434/v1\n```\n\n### Advanced Configuration\n\nYou can combine multiple keys and base URLs in several ways:\n\n1. One key with multiple base URLs:\n```bash\nOLLAMA_API_KEY=ollama\nOLLAMA_BASE_URL=http://localhost:11434/v1,http://other-server:11434/v1\n```\n\n2. Multiple keys with one base URL:\n```bash\nGEMINI_API_KEY=key1,key2\nGEMINI_BASE_URL=https://api.gemini.com/v1\n```\n\n3. Paired keys and base URLs:\n```bash\n# Must have equal number of keys and URLs\n# The library will randomly select matching pairs\nGEMINI_API_KEY=key1,key2\nGEMINI_BASE_URL=https://api.gemini.com/v1,https://api.gemini.com/v2\n```\n\n## Environment Setup Best Practices\n\nWhen using SmolLLM in your project, you should handle environment variables at your application level:\n\n1. 
Create a `.env` file:\n```bash\n# .env\nOPENAI_API_KEY=sk-xxx\nGEMINI_API_KEY=xxx,xxx2\nANTHROPIC_API_KEY=sk-xxx\n```\n\n2. Load environment variables before using SmolLLM:\n```python\nfrom dotenv import load_dotenv\nimport os\n\n# Load at your application startup\nload_dotenv()\n\n# Now you can use SmolLLM\nfrom smolllm import ask_llm\n```\n\n## Tips\n\n- Keep sensitive API keys in `.env` (add to .gitignore)\n- Create `.env.example` for documentation\n- For production, consider using your platform's secret management system\n- When using multiple keys, separate with commas (no spaces)\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "A minimal Python library for interacting with various LLM providers",
"version": "0.3.1",
"project_urls": {
"Homepage": "https://github.com/RoCry/smolllm",
"Repository": "https://github.com/RoCry/smolllm.git"
},
"split_keywords": [
"ai",
" anthropic",
" gemini",
" llm",
" openai"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "7cd53b4da2e17b5ab7fcd221ec0efffef059725832b31105f33991fd5c4dd6be",
"md5": "3ede0c795a8f0d9c64f6e6eca0d2c3fe",
"sha256": "1c0d8b04c0f4c0d3b34a20dd0b08d4030a51ec08b89b92e07f7eef5c2b4ea12a"
},
"downloads": -1,
"filename": "smolllm-0.3.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "3ede0c795a8f0d9c64f6e6eca0d2c3fe",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.12",
"size": 12698,
"upload_time": "2025-08-12T01:10:29",
"upload_time_iso_8601": "2025-08-12T01:10:29.783913Z",
"url": "https://files.pythonhosted.org/packages/7c/d5/3b4da2e17b5ab7fcd221ec0efffef059725832b31105f33991fd5c4dd6be/smolllm-0.3.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "e8b574bfda57937e4bfc5807f6a9d0a4b7051ee4d1dfae1a02826e088c054b02",
"md5": "df0b7356b20bdbd03e474ff83cde9cb8",
"sha256": "21b3fcb83996c16de549f6f8c3a6f3146cc55642b97ebd04986edcff3745f4c8"
},
"downloads": -1,
"filename": "smolllm-0.3.1.tar.gz",
"has_sig": false,
"md5_digest": "df0b7356b20bdbd03e474ff83cde9cb8",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.12",
"size": 36535,
"upload_time": "2025-08-12T01:10:31",
"upload_time_iso_8601": "2025-08-12T01:10:31.181844Z",
"url": "https://files.pythonhosted.org/packages/e8/b5/74bfda57937e4bfc5807f6a9d0a4b7051ee4d1dfae1a02826e088c054b02/smolllm-0.3.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-12 01:10:31",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "RoCry",
"github_project": "smolllm",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "smolllm"
}