| Field | Value |
|---|---|
| Name | smolllm |
| Version | 0.1.8 |
| Summary | A minimal Python library for interacting with various LLM providers |
| Upload time | 2025-02-14 03:15:59 |
| Requires Python | >=3.12 |
| License | MIT |
| Keywords | ai, anthropic, gemini, llm, openai |
# SmolLLM
A minimal Python library for interacting with various LLM providers, featuring automatic API key load balancing and streaming responses.
## Installation
```bash
pip install smolllm
# Or, for local development from a sibling checkout:
uv add "smolllm @ ../smolllm"
```
## Quick Start
```python
import asyncio

from dotenv import load_dotenv

from smolllm import ask_llm

# Load environment variables at your application startup
load_dotenv()


async def main():
    response = await ask_llm(
        "Say hello world",
        model="gemini/gemini-2.0-flash",
    )
    print(response)


if __name__ == "__main__":
    asyncio.run(main())
```
## Provider Configuration
Format: `provider/model_name` (e.g., `openai/gpt-4`, `gemini/gemini-2.0-flash`)
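A model spec splits on the first slash into a provider and a model name. A minimal sketch of that parsing (`parse_model` is a hypothetical helper, not the library's actual API):

```python
def parse_model(model: str) -> tuple[str, str]:
    """Split a 'provider/model_name' spec into its two parts."""
    provider, _, model_name = model.partition("/")
    if not model_name:
        raise ValueError(f"expected 'provider/model_name', got {model!r}")
    return provider, model_name

print(parse_model("gemini/gemini-2.0-flash"))  # ('gemini', 'gemini-2.0-flash')
```

Splitting on the first slash keeps any further slashes inside the model name itself.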
### API Keys
The library looks for API keys in environment variables named `{PROVIDER}_API_KEY`, where `{PROVIDER}` is the uppercased provider name.
Example:
```bash
# .env
OPENAI_API_KEY=sk-xxx
GEMINI_API_KEY=key1,key2 # Multiple keys supported
```
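The lookup convention above can be sketched as follows (`get_api_keys` is a hypothetical helper for illustration, not the library's actual function):

```python
import os

def get_api_keys(provider: str) -> list[str]:
    """Read {PROVIDER}_API_KEY and split comma-separated keys."""
    raw = os.environ.get(f"{provider.upper()}_API_KEY", "")
    return [k.strip() for k in raw.split(",") if k.strip()]

os.environ["GEMINI_API_KEY"] = "key1,key2"
print(get_api_keys("gemini"))  # ['key1', 'key2']
```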
### Custom Base URLs
Override default API endpoints using: `{PROVIDER}_BASE_URL`
Example:
```bash
OPENAI_BASE_URL=https://custom.openai.com/v1
OLLAMA_BASE_URL=http://localhost:11434/v1
```
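The override falls back to a built-in default when the variable is unset. A sketch, assuming an illustrative default table (the library's real default endpoints may differ):

```python
import os

# Illustrative defaults only; not the library's actual endpoint table.
DEFAULT_BASE_URLS = {"openai": "https://api.openai.com/v1"}

def get_base_url(provider: str) -> str:
    """Return {PROVIDER}_BASE_URL if set, else a built-in default."""
    return os.environ.get(f"{provider.upper()}_BASE_URL",
                          DEFAULT_BASE_URLS.get(provider, ""))

os.environ["OPENAI_BASE_URL"] = "https://custom.openai.com/v1"
print(get_base_url("openai"))  # https://custom.openai.com/v1
```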
### Advanced Configuration
You can combine multiple keys and base URLs in several ways:
1. One key with multiple base URLs:
```bash
OLLAMA_API_KEY=ollama
OLLAMA_BASE_URL=http://localhost:11434/v1,http://other-server:11434/v1
```
2. Multiple keys with one base URL:
```bash
GEMINI_API_KEY=key1,key2
GEMINI_BASE_URL=https://api.gemini.com/v1
```
3. Paired keys and base URLs:
```bash
# Must have equal number of keys and URLs
# The library will randomly select matching pairs
GEMINI_API_KEY=key1,key2
GEMINI_BASE_URL=https://api.gemini.com/v1,https://api.gemini.com/v2
```
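The three combinations above amount to broadcasting a single key or URL across the other list, then randomly selecting one matching pair. A sketch of that behaviour (not the library's actual code):

```python
import os
import random

def pick_credentials(provider: str) -> tuple[str, str]:
    """Randomly pick one matching (api_key, base_url) pair."""
    name = provider.upper()
    keys = os.environ[f"{name}_API_KEY"].split(",")
    urls = os.environ[f"{name}_BASE_URL"].split(",")
    # Broadcast a single key or URL across the longer list (cases 1 and 2)
    if len(keys) == 1:
        keys *= len(urls)
    if len(urls) == 1:
        urls *= len(keys)
    if len(keys) != len(urls):
        raise ValueError(f"{name}: key count and base URL count must match")
    # Case 3: randomly select one index-matched (key, url) pair
    return random.choice(list(zip(keys, urls)))

os.environ["GEMINI_API_KEY"] = "key1,key2"
os.environ["GEMINI_BASE_URL"] = "https://api.gemini.com/v1,https://api.gemini.com/v2"
key, url = pick_credentials("gemini")
```

Pairing by index (rather than combining every key with every URL) is what keeps each key tied to its own endpoint in case 3.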
## Environment Setup Best Practices
When using SmolLLM in your project, you should handle environment variables at your application level:
1. Create a `.env` file:
```bash
# .env
OPENAI_API_KEY=sk-xxx
GEMINI_API_KEY=xxx,xxx2
ANTHROPIC_API_KEY=sk-xxx
```
2. Load environment variables before using SmolLLM:
```python
from dotenv import load_dotenv

# Load at your application startup
load_dotenv()

# Now you can use SmolLLM
from smolllm import ask_llm
```
## Tips
- Keep sensitive API keys in `.env` (and add `.env` to `.gitignore`)
- Commit a `.env.example` file to document the required variables
- For production, consider using your platform's secret management system
- When using multiple keys, separate with commas (no spaces)
Raw data
{
"_id": null,
"home_page": null,
"name": "smolllm",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.12",
"maintainer_email": null,
"keywords": "ai, anthropic, gemini, llm, openai",
"author": null,
"author_email": "RoCry <crysheen@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/e3/70/7522846a45c470d82fe29cc55af6146fce50aff22ab0faed46eff15fb5d7/smolllm-0.1.8.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": "MIT",
"summary": "A minimal Python library for interacting with various LLM providers",
"version": "0.1.8",
"project_urls": {
"Homepage": "https://github.com/RoCry/smolllm",
"Repository": "https://github.com/RoCry/smolllm.git"
},
"split_keywords": [
"ai",
" anthropic",
" gemini",
" llm",
" openai"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "9f9ba531d334919b41d08b5fd728f20cf0eb0cb095cddd6290fee2ffb5abd159",
"md5": "d128cf9d1c69dcb39d010f1f77da530b",
"sha256": "75f21f7d9bf99d173b4813c06f3502378f2c68e9e67fe00d0d5e7bd4e1f85718"
},
"downloads": -1,
"filename": "smolllm-0.1.8-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d128cf9d1c69dcb39d010f1f77da530b",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.12",
"size": 10749,
"upload_time": "2025-02-14T03:15:57",
"upload_time_iso_8601": "2025-02-14T03:15:57.434728Z",
"url": "https://files.pythonhosted.org/packages/9f/9b/a531d334919b41d08b5fd728f20cf0eb0cb095cddd6290fee2ffb5abd159/smolllm-0.1.8-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "e3707522846a45c470d82fe29cc55af6146fce50aff22ab0faed46eff15fb5d7",
"md5": "55242c0ea7ed6cba23902dedd156e3cc",
"sha256": "faebbc32e7ae47c797bd3d135e3196aac5c277be55c01d31c818334863eb38f4"
},
"downloads": -1,
"filename": "smolllm-0.1.8.tar.gz",
"has_sig": false,
"md5_digest": "55242c0ea7ed6cba23902dedd156e3cc",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.12",
"size": 29691,
"upload_time": "2025-02-14T03:15:59",
"upload_time_iso_8601": "2025-02-14T03:15:59.384446Z",
"url": "https://files.pythonhosted.org/packages/e3/70/7522846a45c470d82fe29cc55af6146fce50aff22ab0faed46eff15fb5d7/smolllm-0.1.8.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-02-14 03:15:59",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "RoCry",
"github_project": "smolllm",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "smolllm"
}