Name | alea-llm-client |
Version | 0.2.3 |
home_page | None |
Summary | ALEA LLM client abstraction library for Python |
upload_time | 2025-08-20 19:45:00 |
maintainer | None |
docs_url | None |
author | None |
requires_python | <4.0.0,>=3.9 |
license | None |
keywords | alea, api, client, llm |
requirements | No requirements were recorded. |
# ALEA LLM Client
[PyPI version](https://badge.fury.io/py/alea-llm-client)
[License: MIT](https://opensource.org/licenses/MIT)
[PyPI project page](https://pypi.org/project/alea-llm-client/)
This is a simple, two-dependency (`httpx`, `pydantic`) LLM client for OpenAI-style and related LLM APIs, including:
* OpenAI (GPT-4, GPT-5, o-series)
* Anthropic (Claude 3.5, Claude 4)
* Google (Vertex AI, Gemini API)
* xAI (Grok)
* VLLM
### Supported Patterns
It provides the following patterns for all endpoints:
* `complete` and `complete_async` -> str via `ModelResponse`
* `chat` and `chat_async` -> str via `ModelResponse`
* `json` and `json_async` -> dict via `JSONModelResponse`
* `pydantic` and `pydantic_async` -> pydantic models
* `responses` and `responses_async` -> structured output with tool use, grammar constraints, and reasoning modes
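For orientation, here is a minimal sketch of the `chat` and `json` patterns; the model name and prompts are placeholders, and the response attributes (`.text`, `.data`) follow the examples later in this README.

```python
from alea_llm_client import OpenAIModel

# Placeholder model name; assumes OPENAI_API_KEY is set (see Authentication below)
model = OpenAIModel(model="gpt-4o")

# chat / chat_async -> str via ModelResponse
chat_response = model.chat(
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(chat_response.text)

# json / json_async -> dict via JSONModelResponse
json_response = model.json(
    messages=[{"role": "user", "content": "Return a JSON object with a single key 'ok' set to true."}],
    system="Respond in JSON.",
)
print(json_response.data)
```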
### Model Registry & Capabilities
Version 0.2.1 introduces a comprehensive model registry with detailed capability tracking for 97 real models sourced from live API calls:
- **OpenAI**: 72 models (GPT-4, GPT-5, o-series, computer-use, realtime, audio models)
- **Anthropic**: 9 models (Claude 3.5, Claude 4, various tiers and dates)
- **Google**: 7 models (Gemini 1.5, Gemini 2.0, flash and pro variants)
- **xAI**: 9 models (Grok 2, Grok 3, with vision support)
```python
from alea_llm_client.llms import (
    get_models_with_context_window_gte,
    filter_models,
    compare_models,
    get_model_details
)

# Find models with large context windows
large_context = get_models_with_context_window_gte(1000000)

# Filter by multiple criteria
efficient = filter_models(
    min_context=100000,
    capabilities=["tools", "vision"],
    tiers=["mini", "flash"],  # Can also use ModelTier.MINI, ModelTier.FLASH
    exclude_deprecated=True
)

# Compare specific models
comparison = compare_models(["gpt-5", "claude-sonnet-4-20250514", "gemini-2.5-pro"])
```
#### Dynamic Model Configuration
The model registry is powered by a dynamic JSON configuration system that automatically updates from live API calls:
- **Real API Data**: All 97 models are discovered and configured from actual provider APIs
- **Automatic Updates**: Model configurations stay current with provider releases
- **Capability Detection**: Supports tools, vision, computer use, thinking modes, and more
- **Fallback System**: Maintains backward compatibility with Python constants
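As a quick illustration, a single-model lookup through the registry helpers imported above might look like this; the structure of the returned details object is not documented here, so it is simply printed.

```python
from alea_llm_client.llms import get_model_details

# Look up one model's registry entry (field names on the result are intentionally
# not assumed here; inspect the object to see what the registry tracks)
details = get_model_details("gpt-5")
print(details)
```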
### Advanced Features
#### Grammar Constraints (GPT-5)
```python
from alea_llm_client import OpenAIModel

model = OpenAIModel(model="gpt-5")
response = model.responses(
    input="Answer yes or no: Is 2+2=4?",
    grammar='start: "yes" | "no"',
    grammar_syntax="lark"
)
```
#### Thinking Mode (Claude 4+)
```python
from alea_llm_client import AnthropicModel

model = AnthropicModel(model="claude-sonnet-4-20250514")
response = model.chat(
    messages=[{"role": "user", "content": "Solve this complex problem..."}],
    thinking={"enabled": True, "budget_tokens": 2000}
)
print(response.thinking)  # Access thinking content
```
#### Reasoning Tokens (o-series)
```python
from alea_llm_client import OpenAIModel

model = OpenAIModel(model="o3-mini")
response = model.chat(
    messages=[{"role": "user", "content": "Think through this step by step..."}],
    max_completion_tokens=50000
)
print(f"Used {response.reasoning_tokens} reasoning tokens")
```
### Response Caching
**Result caching is disabled by default for predictable API client behavior.**
To enable caching for better performance, you can either:
* set `ignore_cache=False` for each method call (`complete`, `chat`, `json`, `pydantic`)
* set `ignore_cache=False` as a kwarg at model construction
```python
# Enable caching at model level
model = OpenAIModel(ignore_cache=False)
# Enable caching for specific calls
response = model.chat("Hello", ignore_cache=False)
```
Cached objects are stored in `~/.alea/cache/{provider}/{endpoint_model_hash}/{call_hash}.json`
in compressed `.json.gz` format. You can delete these files to clear the cache.
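If you would rather clear the cache programmatically than delete files by hand, a minimal sketch (assuming the cache root described above and a hypothetical helper name) is:

```python
import shutil
from pathlib import Path
from typing import Optional

def clear_alea_cache(provider: Optional[str] = None) -> None:
    """Delete cached responses under ~/.alea/cache, optionally for a single provider."""
    cache_root = Path.home() / ".alea" / "cache"
    target = cache_root / provider if provider else cache_root
    if target.exists():
        shutil.rmtree(target)

# Clear only the OpenAI cache (hypothetical helper, not part of the library)
clear_alea_cache("openai")
```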
### Authentication
Authentication is handled in the following priority order:
* an `api_key` provided at model construction
* a standard environment variable (e.g., `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`)
* a key stored in `~/.alea/keys/{provider}` (e.g., `openai`, `anthropic`, `gemini`, `grok`)
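For example, the highest-priority option looks like this (the other two need no code, only the environment variable or key file); the key value is a placeholder:

```python
from alea_llm_client import AnthropicModel, OpenAIModel

# Highest priority: pass the key explicitly at construction
openai_model = OpenAIModel(api_key="sk-placeholder")

# Otherwise the client falls back to OPENAI_API_KEY / ANTHROPIC_API_KEY,
# and finally to ~/.alea/keys/openai or ~/.alea/keys/anthropic
anthropic_model = AnthropicModel()
```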
### Streaming
Given the research focus of this library, streaming generation is not supported. However,
you can access the underlying `httpx` objects on `.client` and `.async_client` and stream
responses yourself if you prefer.
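A rough sketch of manual streaming against an OpenAI-compatible server via the model's underlying `httpx` client; the URL path, the payload shape, and the assumption that `.client` is configured with the server's base URL describe your deployment, not documented library behavior:

```python
import json

from alea_llm_client import VLLMModel

model = VLLMModel(
    endpoint="http://my.vllm.server:8000",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
)

payload = {
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Stream a two-line poem."}],
    "stream": True,
}

# model.client is the underlying httpx.Client; the relative path assumes it was
# constructed with the endpoint as its base URL
with model.client.stream("POST", "/v1/chat/completions", json=payload) as response:
    for line in response.iter_lines():
        if line.startswith("data: ") and line.strip() != "data: [DONE]":
            chunk = json.loads(line[len("data: "):])
            print(chunk["choices"][0]["delta"].get("content", ""), end="")
```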
## Installation
```bash
pip install alea-llm-client
```
## Examples
### Basic JSON Example
```python
from alea_llm_client import VLLMModel

if __name__ == "__main__":
    model = VLLMModel(
        endpoint="http://my.vllm.server:8000",
        model="meta-llama/Meta-Llama-3.1-8B-Instruct"
    )

    messages = [
        {
            "role": "user",
            "content": "Give me a JSON object with keys 'name' and 'age' for a person named Alice who is 30 years old.",
        },
    ]

    print(model.json(messages=messages, system="Respond in JSON.").data)

# Output: {'name': 'Alice', 'age': 30}
```
### Basic Completion Example with KL3M
```python
from alea_llm_client import VLLMModel

if __name__ == "__main__":
    model = VLLMModel(
        model="kl3m-1.7b", ignore_cache=True
    )

    prompt = "My name is "
    print(model.complete(prompt=prompt, temperature=0.5).text)

# Output: Dr. Hermann Kamenzi, and
```
### Pydantic Example
```python
from pydantic import BaseModel
from alea_llm_client import AnthropicModel, format_prompt, format_instructions

class Person(BaseModel):
    name: str
    age: int

if __name__ == "__main__":
    model = AnthropicModel(ignore_cache=True)

    instructions = [
        "Provide one random record based on the SCHEMA below.",
    ]
    prompt = format_prompt(
        {
            "instructions": format_instructions(instructions),
            "schema": Person,
        }
    )

    person = model.pydantic(prompt, system="Respond in JSON.", pydantic_model=Person)
    print(person)

# Output: name='Olivia Chen' age=29
```
## Design
### Class Inheritance
```mermaid
classDiagram
    BaseAIModel <|-- OpenAICompatibleModel
    OpenAICompatibleModel <|-- AnthropicModel
    OpenAICompatibleModel <|-- OpenAIModel
    OpenAICompatibleModel <|-- VLLMModel
    OpenAICompatibleModel <|-- GrokModel
    BaseAIModel <|-- GoogleModel

    class BaseAIModel {
        <<abstract>>
    }
    class OpenAICompatibleModel
    class AnthropicModel
    class OpenAIModel
    class VLLMModel
    class GrokModel
    class GoogleModel
```
### Example Call Flow
```mermaid
sequenceDiagram
    participant Client
    participant BaseAIModel
    participant OpenAICompatibleModel
    participant SpecificModel
    participant API

    Client->>BaseAIModel: json()
    BaseAIModel->>BaseAIModel: _retry_wrapper()
    BaseAIModel->>OpenAICompatibleModel: _json()
    OpenAICompatibleModel->>OpenAICompatibleModel: format()
    OpenAICompatibleModel->>OpenAICompatibleModel: _make_request()
    OpenAICompatibleModel->>API: HTTP POST
    API-->>OpenAICompatibleModel: Response
    OpenAICompatibleModel->>OpenAICompatibleModel: _handle_json_response()
    OpenAICompatibleModel-->>BaseAIModel: JSONModelResponse
    BaseAIModel-->>Client: JSONModelResponse
```
## Testing
The library includes comprehensive test coverage with intelligent rate limiting for all 97 models:
### Test Features
* **All model providers**: OpenAI (72 models), Anthropic (9 models), Google (7 models), xAI (9 models), VLLM
* **Complete API coverage**: Sync/async operations, JSON/Pydantic responses, error handling, retry logic
* **Real API integration**: Tests use actual provider APIs with intelligent rate limiting
* **Cache functionality**: Response caching with configurable ignore options
### Rate Limiting Configuration
Prevent API quota exhaustion with configurable delays:
```bash
# Google API (most restrictive)
export GOOGLE_API_DELAY=2.0 # Seconds between calls (default: 2.0)
export GOOGLE_API_CONCURRENT=1 # Max concurrent calls (default: 1)
# Anthropic API
export ANTHROPIC_API_DELAY=0.5 # Seconds between calls (default: 0.5)
export ANTHROPIC_API_CONCURRENT=3 # Max concurrent calls (default: 3)
# OpenAI API
export OPENAI_API_DELAY=0.2 # Seconds between calls (default: 0.2)
export OPENAI_API_CONCURRENT=5 # Max concurrent calls (default: 5)
# xAI/Grok API
export XAI_API_DELAY=1.0 # Seconds between calls (default: 1.0)
export XAI_API_CONCURRENT=2 # Max concurrent calls (default: 2)
# VLLM (local servers)
export VLLM_API_DELAY=0.1 # Seconds between calls (default: 0.1)
export VLLM_API_CONCURRENT=10 # Max concurrent calls (default: 10)
```
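These variables control the delay between calls and the per-provider concurrency cap. As a rough sketch of that mechanism, a hypothetical helper might look like the following; the actual fixtures in the repository may differ:

```python
import asyncio
import os

# Hypothetical helper, not the library's actual test fixture.
_SEMAPHORES = {}

def provider_limits(provider: str):
    """Read <PROVIDER>_API_DELAY and <PROVIDER>_API_CONCURRENT with safe defaults."""
    delay = float(os.environ.get(f"{provider}_API_DELAY", "1.0"))
    concurrent = int(os.environ.get(f"{provider}_API_CONCURRENT", "1"))
    return delay, concurrent

async def rate_limited(provider: str, coro):
    """Await one API call under the provider's concurrency cap, then pause."""
    delay, concurrent = provider_limits(provider)
    sem = _SEMAPHORES.setdefault(provider, asyncio.Semaphore(concurrent))
    async with sem:
        result = await coro
        await asyncio.sleep(delay)
        return result
```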
### Running Tests
```bash
# Run all tests with rate limiting
uv run pytest tests/
# Run specific provider tests
uv run pytest tests/test_openai.py
uv run pytest tests/test_anthropic.py
# Custom VLLM server testing
export VLLM_ENDPOINT="http://192.168.1.118:8080/"
export VLLM_MODEL="Qwen/Qwen3-4B-Instruct-2507"
uv run pytest tests/test_vllm.py
```
## Migration Guide
### Upgrading from v0.1.x to v0.2.x
**⚠️ Important Changes:**
1. **Google Model Key Path**: The Google API key path changed from `~/.alea/keys/google` to `~/.alea/keys/gemini`
2. **Model Registry**: Now uses dynamic JSON configuration with 97 real models (was 50+ theoretical models)
3. **Test Configuration**: Added a rate limiting system; tests may run slower but avoid exhausting API quotas
**Migration Steps:**
```bash
# 1. Update Google API key path if you use Google models
mv ~/.alea/keys/google ~/.alea/keys/gemini # If the file exists
# 2. Update to latest version
pip install --upgrade alea-llm-client
# 3. No code changes required - all existing APIs remain compatible
```
**What's New in v0.2.x:**
- **97 Real Models**: All models now sourced from live API calls (vs theoretical documentation)
- **Enhanced Capabilities**: Tool use, vision, computer use, thinking modes, reasoning tokens
- **Better Testing**: Intelligent rate limiting prevents API quota issues
- **Dynamic Configuration**: Model registry updates automatically from provider APIs
**Breaking Changes (minimal impact):**
- **Google key path**: `~/.alea/keys/google` → `~/.alea/keys/gemini`
- **ModelResponse.text**: Changed from `Optional[str]` to `str` (empty string default)
- **Test timing**: Rate limiting may slow test execution (configurable via environment variables)
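The `ModelResponse.text` change mainly means defensive `None` handling can be dropped; a minimal illustration (the v0.1.x guard shown is an assumption about typical calling code):

```python
response = model.complete(prompt="My name is ")

# v0.1.x: text was Optional[str], so callers often guarded against None
text = response.text or ""

# v0.2.x: text is always a str (empty string when nothing was returned)
text = response.text
```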
## License
The ALEA LLM client is released under the MIT License. See the [LICENSE](LICENSE) file for details.
## Support
If you encounter any issues or have questions about using the ALEA LLM client library, please [open an issue](https://github.com/alea-institute/alea-llm-client/issues) on GitHub.
## Learn More
To learn more about ALEA and its software and research projects like KL3M and leeky, visit the [ALEA website](https://aleainstitute.ai/).