| Field | Value |
| --- | --- |
| Name | llm-interface |
| Version | 0.1.5 |
| Summary | A flexible interface for working with various LLM providers |
| Author | Niels Provos |
| Maintainer | None |
| Home page | None |
| Docs URL | None |
| Requires Python | <4.0,>=3.10 |
| License | Apache-2.0 |
| Keywords | None |
| Upload time | 2025-01-30 00:41:57 |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
# LLM Interface
[PyPI version](https://badge.fury.io/py/llm-interface)
[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0)
[Python versions](https://pypi.org/project/llm-interface/)
[Code coverage](https://codecov.io/gh/provos/llm-interface)
A flexible Python interface for working with various Language Model providers, including OpenAI, Anthropic, and Ollama. This library provides a unified way to interact with different LLM providers while supporting features like structured outputs, tool execution, and response caching.
## Features
- **Multiple Provider Support** (see the provider-switching sketch after this list)
  - OpenAI (GPT models)
  - Anthropic (Claude models)
  - Ollama (local and remote)
  - Remote Ollama via SSH

- **Advanced Capabilities**
  - Structured output parsing with Pydantic models
  - Function/tool calling support
  - Response caching
  - Comprehensive logging
  - JSON mode support
  - System prompt handling

- **Developer-Friendly**
  - Type hints throughout
  - Extensive test coverage
  - Flexible configuration options
  - Error handling and retries
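
All of these backends are constructed through the same `llm_from_config` factory described under Configuration below, so switching providers is a one-line change. A minimal sketch of that idea — the model names are illustrative placeholders, and it assumes `chat` returns the reply as a string (the README does not spell out the return type):

```python
from llm_interface import llm_from_config

# The same chat call works across providers; only the factory arguments differ.
# Model names here are illustrative placeholders, not recommendations.
for provider, model in [
    ("openai", "gpt-4"),
    ("anthropic", "claude-3-5-sonnet-20241022"),
    ("ollama", "llama2"),
]:
    llm = llm_from_config(provider=provider, model_name=model)
    print(llm.chat([{"role": "user", "content": "What is the capital of France?"}]))
```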
## Installation
Install using pip:
```bash
pip install llm-interface
```
Or using Poetry:
```bash
poetry add llm-interface
```
## Basic Usage
### Simple Chat Completion
```python
from llm_interface import llm_from_config
# Create an OpenAI interface
llm = llm_from_config(
    provider="openai",
    model_name="gpt-4",
)

# Simple chat
response = llm.chat([
    {"role": "user", "content": "What is the capital of France?"}
])
```
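
Conversations are plain lists of role/content dictionaries, so multi-turn chat is a matter of appending each turn to the history and calling `chat` again. A minimal sketch, under the assumption (not stated in this README) that `chat` returns the assistant's reply as a plain string:

```python
# Build up a conversation by appending each turn to the message list.
# Assumption: llm.chat returns the assistant's reply as a plain string.
history = [{"role": "user", "content": "What is the capital of France?"}]
answer = llm.chat(history)

history.append({"role": "assistant", "content": answer})
history.append({"role": "user", "content": "What is its population?"})
follow_up = llm.chat(history)
```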
### Structured Output with Pydantic
```python
from pydantic import BaseModel
class LocationInfo(BaseModel):
    city: str
    country: str
    population: int

response = llm.generate_pydantic(
    prompt_template="Provide information about Paris",
    output_schema=LocationInfo,
    system="You are a helpful geography assistant"
)
```
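
Because `output_schema` is a Pydantic model class, the returned value is presumably a validated `LocationInfo` instance, so its fields are available as typed attributes:

```python
# Assumption: generate_pydantic returns an instance of the given schema.
print(response.city)        # "Paris"
print(response.country)     # "France"
print(response.population)  # an int, e.g. 2100000
```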
### Tool/Function Calling
```python
from llm_interface.llm_tool import tool
@tool(name="get_weather")
def get_weather(location: str, units: str = "celsius") -> str:
    """Get weather information for a location.

    Args:
        location: City or location name
        units: Temperature units (celsius/fahrenheit)
    """
    # Implementation here
    return f"Weather in {location}"

response = llm.chat(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[get_weather]
)
```
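
The `Args:` section of the docstring, together with the type hints, presumably supplies the parameter schema the model sees, so keeping both accurate directly affects how reliably the model fills in arguments.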
### Remote Ollama Setup
```python
llm = llm_from_config(
    provider="remote_ollama",
    model_name="llama2",
    hostname="example.com",
    username="user"
)
```
## Configuration
The library supports various configuration options through the `llm_from_config` function:
```python
llm = llm_from_config(
    provider="openai",    # "openai", "anthropic", "ollama", or "remote_ollama"
    model_name="gpt-4",   # Model name
    max_tokens=4096,      # Maximum tokens in response
    host=None,            # Local Ollama host
    hostname=None,        # Remote SSH hostname
    username=None,        # Remote SSH username
    log_dir="logs",       # Directory for logs
    use_cache=True        # Enable response caching
)
```
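
With `use_cache=True`, an identical request should be answered from the response cache instead of hitting the provider again, which is handy in tests and notebooks. A minimal sketch, assuming the cache is keyed on the request payload (the exact keying is not documented here):

```python
llm = llm_from_config(provider="openai", model_name="gpt-4", use_cache=True)

messages = [{"role": "user", "content": "What is the capital of France?"}]
first = llm.chat(messages)   # goes to the OpenAI API
second = llm.chat(messages)  # identical request; expected to come from the cache
```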
## Environment Variables
Required environment variables based on provider:
- OpenAI: `OPENAI_API_KEY`
- Anthropic: `ANTHROPIC_API_KEY`
- Remote Ollama: requires an SSH key to be loaded in the SSH agent
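
For local development these are typically exported in the shell before running any of the examples above. The values and the key path below are placeholders:

```bash
export OPENAI_API_KEY="sk-..."          # placeholder
export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder

# For remote Ollama, load an SSH key into the agent first:
ssh-add ~/.ssh/id_ed25519
```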
## Development
This project uses Poetry for dependency management:
```bash
# Install dependencies
poetry install
# Run tests
poetry run pytest
# Format code
poetry run black .
# Run linter
poetry run flake8
```
## License
Apache License 2.0 - See LICENSE file for details.
## Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "llm-interface",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4.0,>=3.10",
    "maintainer_email": null,
    "keywords": null,
    "author": "Niels Provos",
    "author_email": "provos@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/ff/97/bad03e392843a391b7bec9656086fdfbc780df528ff4b996ba7d1f183cfa/llm_interface-0.1.5.tar.gz",
    "platform": null,
    "description": "# LLM Interface\n\n[](https://badge.fury.io/py/llm-interface)\n[](https://opensource.org/licenses/Apache-2.0)\n[](https://pypi.org/project/llm-interface/)\n[](https://codecov.io/gh/provos/llm-interface)\n\nA flexible Python interface for working with various Language Model providers, including OpenAI, Anthropic, and Ollama. This library provides a unified way to interact with different LLM providers while supporting features like structured outputs, tool execution, and response caching.\n\n## Features\n\n- **Multiple Provider Support**\n - OpenAI (GPT models)\n - Anthropic (Claude models)\n - Ollama (local and remote)\n - Remote Ollama via SSH\n\n- **Advanced Capabilities**\n - Structured output parsing with Pydantic models\n - Function/tool calling support\n - Response caching\n - Comprehensive logging\n - JSON mode support\n - System prompt handling\n\n- **Developer-Friendly**\n - Type hints throughout\n - Extensive test coverage\n - Flexible configuration options\n - Error handling and retries\n\n## Installation\n\nInstall using pip:\n\n```bash\npip install llm-interface\n```\n\nOr using Poetry:\n\n```bash\npoetry add llm-interface\n```\n\n## Basic Usage\n\n### Simple Chat Completion\n\n```python\nfrom llm_interface import llm_from_config\n\n# Create an OpenAI interface\nllm = llm_from_config(\n provider=\"openai\",\n model_name=\"gpt-4\",\n)\n\n# Simple chat\nresponse = llm.chat([\n {\"role\": \"user\", \"content\": \"What is the capital of France?\"}\n])\n```\n\n### Structured Output with Pydantic\n\n```python\nfrom pydantic import BaseModel\n\nclass LocationInfo(BaseModel):\n city: str\n country: str\n population: int\n\nresponse = llm.generate_pydantic(\n prompt_template=\"Provide information about Paris\",\n output_schema=LocationInfo,\n system=\"You are a helpful geography assistant\"\n)\n```\n\n### Tool/Function Calling\n\n```python\nfrom llm_interface.llm_tool import tool\n\n@tool(name=\"get_weather\")\ndef get_weather(location: str, units: str = \"celsius\") -> str:\n \"\"\"Get weather information for a location.\n \n Args:\n location: City or location name\n units: Temperature units (celsius/fahrenheit)\n \"\"\"\n # Implementation here\n return f\"Weather in {location}\"\n\nresponse = llm.chat(\n messages=[{\"role\": \"user\", \"content\": \"What's the weather in Paris?\"}],\n tools=[get_weather]\n)\n```\n\n### Remote Ollama Setup\n\n```python\nllm = llm_from_config(\n provider=\"remote_ollama\",\n model_name=\"llama2\",\n hostname=\"example.com\",\n username=\"user\"\n)\n```\n\n## Configuration\n\nThe library supports various configuration options through the `llm_from_config` function:\n\n```python\nllm = llm_from_config(\n provider=\"openai\", # \"openai\", \"anthropic\", \"ollama\", or \"remote_ollama\"\n model_name=\"gpt-4\", # Model name\n max_tokens=4096, # Maximum tokens in response\n host=None, # Local Ollama host\n hostname=None, # Remote SSH hostname\n username=None, # Remote SSH username\n log_dir=\"logs\", # Directory for logs\n use_cache=True # Enable response caching\n)\n```\n\n## Environment Variables\n\nRequired environment variables based on provider:\n\n- OpenAI: `OPENAI_API_KEY`\n- Anthropic: `ANTHROPIC_API_KEY`\n- Remote Ollama: requires an SSH key to be loaded in SSH agent\n\n## Development\n\nThis project uses Poetry for dependency management:\n\n```bash\n# Install dependencies\npoetry install\n\n# Run tests\npoetry run pytest\n\n# Format code\npoetry run black .\n\n# Run linter\npoetry run flake8\n```\n\n## License\n\nApache License 2.0 - See LICENSE file for details.",
    "bugtrack_url": null,
    "license": "Apache-2.0",
    "summary": "A flexible interface for working with various LLM providers",
    "version": "0.1.5",
    "project_urls": null,
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "e13c8303eb94c633040782f906a0d98bcaebf2c80378052cfe4b35d3269c7c04",
                "md5": "a2379c31ab748a977209fbf7a20a6e4b",
                "sha256": "84506e02c8449cf506d8a8baede84dd2dc700a14486a401d3339b30d2de099fa"
            },
            "downloads": -1,
            "filename": "llm_interface-0.1.5-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "a2379c31ab748a977209fbf7a20a6e4b",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<4.0,>=3.10",
            "size": 29739,
            "upload_time": "2025-01-30T00:41:55",
            "upload_time_iso_8601": "2025-01-30T00:41:55.313127Z",
            "url": "https://files.pythonhosted.org/packages/e1/3c/8303eb94c633040782f906a0d98bcaebf2c80378052cfe4b35d3269c7c04/llm_interface-0.1.5-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "ff97bad03e392843a391b7bec9656086fdfbc780df528ff4b996ba7d1f183cfa",
                "md5": "89ccbb55bbef2e5c98bf37bb2e7865e2",
                "sha256": "5d003130c790627ba9e7b9cf061c399573824632cfe829cd8e583c6b311f891b"
            },
            "downloads": -1,
            "filename": "llm_interface-0.1.5.tar.gz",
            "has_sig": false,
            "md5_digest": "89ccbb55bbef2e5c98bf37bb2e7865e2",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<4.0,>=3.10",
            "size": 23517,
            "upload_time": "2025-01-30T00:41:57",
            "upload_time_iso_8601": "2025-01-30T00:41:57.526300Z",
            "url": "https://files.pythonhosted.org/packages/ff/97/bad03e392843a391b7bec9656086fdfbc780df528ff4b996ba7d1f183cfa/llm_interface-0.1.5.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-01-30 00:41:57",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "llm-interface"
}
```