# Free LLM Toolbox 🚀
A Python package that provides easy-to-use utilities for working with various Language Models (LLMs) and vision models. 🎯 And everything is free: the library runs on the generous free tiers of several AI platforms.
## Features
- Text generation with support for multiple LLM providers
- Image analysis and description capabilities
- Support for providers such as Groq, GitHub, SambaNova, and Google (Gemini), and models such as Llama
- Streaming responses
- Tool integration support
- JSON output formatting
- Customizable system prompts
## Installation 💻
```bash
uv pip install free-llm-toolbox
# or, with standard pip:
pip install free-llm-toolbox
```
## Configuration ⚙️
Before using the library, you need to configure your API keys in a `.env` file:
```env
GROQ_API_KEY=your_groq_key
GITHUB_TOKEN=your_github_token
GOOGLE_API_KEY=your_google_key
SAMBANOVA_API_KEY=your_sambanova_key
CEREBRAS_API_KEY=your_cerebras_key
```
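The library presumably loads these variables itself (e.g. via `python-dotenv` or similar — an assumption, not documented above). For illustration, here is a minimal standard-library sketch of what loading a `.env` file into the process environment looks like:

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: one KEY=value pair per line.
    Illustrative sketch only; the library may handle this internally."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()
```

Once loaded, each provider's client can read its key from `os.environ` as usual.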
## Quick Start
### Text Generation
```python
from free_llm_toolbox import LanguageModel

# Initialize a session with your preferred model
session = LanguageModel(
    model_name="gemini-2.0-flash",
    provider="google",
    temperature=0.7,
)

# Generate a response
response = session.answer("What is the capital of France?")
print(response)
```
### Image Analysis
```python
from free_llm_toolbox import ImageAnalyzerAgent

analyzer = ImageAnalyzerAgent()
description = analyzer.describe(
    "path/to/image.jpg",
    prompt="Describe the image",
    vllm_provider="groq",
    vllm_name="llama-3.2-90b-vision-preview",
)
print(description)
```
## Usage 🎮
### Text Models 📚
```python
from free_llm_toolbox import LanguageModel

# Initialize a session with your preferred model
session = LanguageModel(
    model_name="llama-3-70b",
    provider="groq",
    temperature=0.7,
    top_k=45,
    top_p=0.95,
)

# Simple text generation
response = session.answer("What is the capital of France?")

# JSON-formatted response with Pydantic validation
from pydantic import BaseModel

class LocationInfo(BaseModel):
    city: str
    country: str
    description: str

response = session.answer(
    "What is the capital of France?",
    json_formatting=True,
    pydantic_object=LocationInfo,
)

# Using custom tools
# ('get_weather' is any callable you define; it is not provided by the library)
tools = [
    {
        "name": "weather",
        "description": "Get current weather",
        "function": get_weather,
    }
]
response, tool_calls = session.answer(
    "What's the weather in Paris?",
    tool_list=tools,
)

# Streaming responses
for chunk in session.answer(
    "Tell me a long story.",
    stream=True,
):
    print(chunk, end="", flush=True)
```
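The tools example above references a `get_weather` function that you must supply yourself. A hypothetical sketch of such a callable (the dict fields `name`/`description`/`function` follow the example above; the function's signature and return shape are our assumptions, not a library contract):

```python
def get_weather(city: str) -> dict:
    """Hypothetical tool: look up current weather for a city.
    A real implementation would call a weather API; canned data
    here purely illustrates the expected callable shape."""
    fake_db = {"Paris": {"temp_c": 18, "conditions": "partly cloudy"}}
    return fake_db.get(city, {"temp_c": None, "conditions": "unknown"})

# Registered exactly as in the usage example above
tools = [
    {
        "name": "weather",
        "description": "Get current weather for a city",
        "function": get_weather,
    }
]
```

The model decides when to invoke the tool; `tool_calls` in the return value lets you inspect which tools were triggered and with what arguments.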
### Vision Models 👁️
```python
from free_llm_toolbox import ImageAnalyzerAgent

# Initialize the agent
analyzer = ImageAnalyzerAgent()

# Analyze an image
description = analyzer.describe(
    image_path="path/to/image.jpg",
    prompt="Describe this image in detail",
    vllm_provider="groq",
)
print(description)
```
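Vision APIs generally expect the image as base64-encoded bytes in the request payload. A minimal sketch of the kind of preprocessing the analyzer presumably performs internally (the helper name is ours, not part of the library's API):

```python
import base64

def encode_image(image_path: str) -> str:
    """Read an image file and return its contents as a base64 string,
    the format most vision model APIs accept for inline images."""
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
```

This is why a plain filesystem path is enough for `describe`: the encoding step happens for you.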
## Available Models 📊
> Note: This list is not exhaustive. The library supports any new model ID released by these providers - you just need to get the correct model ID from your provider's documentation.
### Text Models
| Provider | Model | LLM Provider ID | Model ID | Price | Rate Limit (per min) | Context Window | Speed |
|------------|--------------------------------|----------------|---------------------------------------|-------|---------------------|----------------|------------|
| Google | Gemini Pro Exp | google | gemini-2.0-pro-exp-02-05 | Free | 60 | 32,768 | Ultra Fast |
| Google | Gemini Flash | google | gemini-2.0-flash | Free | 60 | 32,768 | Ultra Fast |
| Google | Gemini Flash Thinking | google | gemini-2.0-flash-thinking-exp-01-21 | Free | 60 | 32,768 | Ultra Fast |
| Google | Gemini Flash Lite | google | gemini-2.0-flash-lite-preview-02-05 | Free | 60 | 32,768 | Ultra Fast |
| GitHub | O3 Mini | github | o3-mini | Free | 50 | 8,192 | Fast |
| GitHub | GPT-4o | github | gpt-4o | Free | 50 | 8,192 | Fast |
| GitHub | GPT-4o Mini | github | gpt-4o-mini | Free | 50 | 8,192 | Fast |
| GitHub | O1 Mini | github | o1-mini | Free | 50 | 8,192 | Fast |
| GitHub | O1 Preview | github | o1-preview | Free | 50 | 8,192 | Fast |
| GitHub | Meta Llama 3.1 405B | github | meta-Llama-3.1-405B-Instruct | Free | 50 | 8,192 | Fast |
| GitHub | DeepSeek R1 | github | DeepSeek-R1 | Free | 50 | 8,192 | Fast |
| Groq | DeepSeek R1 Distill Llama 70B | groq | deepseek-r1-distill-llama-70b | Free | 100 | 131,072 | Ultra Fast |
| Groq | Llama 3.3 70B Versatile | groq | llama-3.3-70b-versatile | Free | 100 | 131,072 | Ultra Fast |
| Groq | Llama 3.1 8B Instant | groq | llama-3.1-8b-instant | Free | 100 | 131,072 | Ultra Fast |
| Groq | Llama 3.2 3B Preview | groq | llama-3.2-3b-preview | Free | 100 | 131,072 | Ultra Fast |
| SambaNova | Llama3 405B | sambanova | llama3-405b | Free | 60 | 8,000 | Fast |
### Vision Models
| Provider | Model | Vision Provider ID | Model ID | Price | Rate Limit (per min) | Speed |
|-----------|--------------------------|-------------------|----------------------|-------|---------------------|------------|
| Google | Gemini Vision Exp | gemini | gemini-exp-1206 | Free | 60 | Ultra Fast |
| Google | Gemini Vision Flash | gemini | gemini-2.0-flash | Free | 60 | Ultra Fast |
| GitHub | GPT-4o Vision | github | gpt-4o | Free | 50 | Fast |
| GitHub | GPT-4o Mini Vision | github | gpt-4o-mini | Free | 50 | Fast |
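The free tiers above all enforce per-minute rate limits. Whether the library throttles requests internally is not documented here, so a simple client-side limiter can help you stay under quota when batching many calls. A standard-library sketch (the limit values come from the tables above):

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `max_calls` calls per sliding window of `period` seconds."""

    def __init__(self, max_calls: int, period: float = 60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls: deque[float] = deque()

    def wait(self) -> None:
        """Block until another call is permitted, then record it."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())

# e.g. Groq's free tier allows 100 requests/min per the table above
limiter = RateLimiter(max_calls=100, period=60.0)
```

Call `limiter.wait()` before each `session.answer(...)` to avoid 429 responses from the provider.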
### Usage Example with Provider ID and Model ID
```python
from free_llm_toolbox import LanguageModel

# Initialize a session with specific provider and model IDs
session = LanguageModel(
    model_name="llama-3.3-70b-versatile",  # Model ID from the table above
    provider="groq",                       # Provider ID from the table above
    temperature=0.7,
)
```
## Requirements
- Python 3.10 or higher
- Required dependencies will be automatically installed
## Key Features ⭐
- Simple and intuitive session-based interface
- Support for both vision and text models
- Simple configuration with .env file
- Automatic context management
- Tool support for compatible models
- JSON output formatting with Pydantic validation
- Response streaming support
- Smart caching system
- CPU and GPU support
## Contributing 🤝
Contributions are welcome! Feel free to:
1. Fork the project
2. Create your feature branch
3. Commit your changes
4. Push to the branch
5. Open a Pull Request
## License 📄
This project is licensed under the MIT License. See the LICENSE file for details.