llm-api-adapter

Name: llm-api-adapter
Version: 0.2.0
Summary: Lightweight, pluggable adapter for multiple LLM APIs (OpenAI, Anthropic, Google)
Author: Sergey Inozemtsev
Requires-Python: >=3.9
Keywords: llm, adapter, lightweight, api
Requirements: requests
Repository: https://github.com/Inozem/llm_api_adapter/
Upload time: 2025-08-11 13:41:36

# LLM API Adapter SDK for Python

## Overview

This Python SDK lets you work with LLM APIs from multiple providers and models through a unified interface. The project currently supports the APIs of OpenAI, Anthropic, and Google. At this stage, only the chat functionality is implemented.

### Version

Current version: 0.2.0

## Features

- **Unified Interface**: Work seamlessly with different LLM providers using a single, consistent API.
- **Multiple Provider Support**: Currently supports OpenAI, Anthropic, and Google APIs, allowing easy switching between them.
- **Chat Functionality**: Provides an easy way to interact with chat-based LLMs.
- **Extensible Design**: Built to easily extend support for additional providers and new functionalities in the future.
- **Error Handling**: Standardized error messages across all supported LLMs, simplifying integration and debugging.
- **Flexible Configuration**: Manage request parameters like temperature, max tokens, and other settings for fine-tuned control.

## Installation

To install the SDK, you can use pip:

```bash
pip install llm_api_adapter
```

**Note:** You will need to obtain API keys from each LLM provider you wish to use (OpenAI, Anthropic, Google). Refer to their respective documentation for instructions on obtaining API keys.
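
For example, one common pattern (not specific to this SDK) is to keep keys out of your source code and read them from environment variables; the variable names below are illustrative and match the examples that follow:

```python
import os

# Illustrative environment-variable names -- set whichever providers you use.
openai_api_key = os.environ.get("OPENAI_API_KEY")
anthropic_api_key = os.environ.get("ANTHROPIC_API_KEY")
google_api_key = os.environ.get("GOOGLE_API_KEY")
```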


## Getting Started

### Importing and Setting Up the Adapter

To start using the adapter, you need to import the necessary components:

```python
from llm_api_adapter.models.messages.chat_message import (
    AIMessage, Prompt, UserMessage
)
from llm_api_adapter.universal_adapter import UniversalLLMAPIAdapter
```

### Sending a Simple Request

The SDK supports three types of messages for interacting with the LLM:

- **Prompt**: Use `Prompt` to set the context or initial prompt for the model.
- **UserMessage**: Use `UserMessage` to send messages from the user during a conversation.
- **AIMessage**: Use `AIMessage` to simulate responses from the assistant during a conversation.

Here is an example of how to send a simple request to the adapter:

```python
messages = [
    UserMessage("Hi! Can you explain how artificial intelligence works?")
]

adapter = UniversalLLMAPIAdapter(
    organization="openai",
    model="gpt-3.5-turbo",
    api_key=openai_api_key  # your OpenAI API key (see Installation above)
)

response = adapter.generate_chat_answer(
    messages=messages,
    max_tokens=256,    # limit on response length (default: 256)
    temperature=1.0,   # randomness, range 0-2 (default: 1.0)
    top_p=1.0          # nucleus sampling, range 0-1 (default: 1.0)
)
print(response.content)
```

### Parameters

- **max\_tokens**: The maximum number of tokens to generate in the response. This limits the length of the output. Default value: `256`.

- **temperature**: Controls the randomness of the response. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic. Default value: `1.0` (range: 0 to 2).

- **top\_p**: Nucleus sampling: the model considers only the smallest set of highest-probability tokens whose cumulative probability reaches `top_p`, which tends to produce more focused and coherent responses. Default value: `1.0` (range: 0 to 1).
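
For example, to get a shorter and more deterministic answer, you might tighten all three parameters (a sketch reusing the `adapter` and `messages` from the example above):

```python
# Shorter, more focused output: fewer tokens, low temperature, narrower nucleus.
response = adapter.generate_chat_answer(
    messages=messages,
    max_tokens=128,
    temperature=0.2,
    top_p=0.9
)
print(response.content)
```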

## Handling Errors

### Common Errors

The SDK provides a set of standardized errors for easier debugging and integration:

- **LLMAPIError**: Base class for all API-related errors. This error is also used for any unexpected LLM API errors.

- **LLMAPIAuthorizationError**: Raised when authentication or authorization fails.

- **LLMAPIRateLimitError**: Raised when rate limits are exceeded.

- **LLMAPITokenLimitError**: Raised when token limits are exceeded.

- **LLMAPIClientError**: Raised when the client makes an invalid request.

- **LLMAPIServerError**: Raised when the server encounters an error.

- **LLMAPITimeoutError**: Raised when a request times out.

- **LLMAPIUsageLimitError**: Raised when usage limits are exceeded.
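
A minimal sketch of handling these errors around a call; note that the import path below is an assumption (the README does not state which module exposes the error classes), so verify it against the installed package:

```python
# Assumed import path -- check the package source for the actual module.
from llm_api_adapter.errors import (
    LLMAPIError,
    LLMAPIRateLimitError,
    LLMAPITimeoutError,
)

try:
    response = adapter.generate_chat_answer(messages=messages)
    print(response.content)
except LLMAPIRateLimitError:
    print("Rate limit exceeded; retry after a backoff.")
except LLMAPITimeoutError:
    print("Request timed out; consider retrying.")
except LLMAPIError as exc:
    # LLMAPIError is the base class, so this catches any other API error.
    print(f"LLM API error: {exc}")
```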

## Configuration and Management

### Using Different Providers and Models

The SDK allows you to easily switch between LLM providers and specify the model you want to use. Currently supported providers are OpenAI, Anthropic, and Google.

- **OpenAI**: You can use models like `gpt-4o-mini`, `gpt-4o`, `gpt-4-turbo`, `gpt-4`, `gpt-4-turbo-preview`, `gpt-3.5-turbo`. Set the `organization` parameter to `openai` and specify the `model` name.

- **Anthropic**: Available models include `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307`. Set the `organization` parameter to `anthropic` and specify the desired `model`.

- **Google**: Models such as `gemini-1.5-flash`, `gemini-1.5-flash-8b`, `gemini-1.5-pro` can be used. Set the `organization` parameter to `google` and specify the `model`.

Example:

```python
adapter = UniversalLLMAPIAdapter(
    organization="openai",
    model="gpt-3.5-turbo",
    api_key=openai_api_key
)
```

To switch to another provider, simply change the `organization` and `model` parameters.

### Switching Providers

Here is an example of how to switch between different LLM providers using the SDK:

**Note**: Each instance of `UniversalLLMAPIAdapter` is tied to a specific provider and model. You cannot change the `organization` parameter for an existing adapter object. To use a different provider, you must create a new instance.

```python
gpt = UniversalLLMAPIAdapter(
    organization="openai",
    model="gpt-3.5-turbo",
    api_key=openai_api_key
)
gpt_response = gpt.generate_chat_answer(messages=messages)
print(gpt_response.content)

claude = UniversalLLMAPIAdapter(
    organization="anthropic",
    model="claude-3-haiku-20240307",
    api_key=anthropic_api_key
)
claude_response = claude.generate_chat_answer(messages=messages)
print(claude_response.content)

google = UniversalLLMAPIAdapter(
    organization="google",
    model="gemini-1.5-flash",
    api_key=google_api_key
)
google_response = google.generate_chat_answer(messages=messages)
print(google_response.content)
```

## Example Use Case

Here is a comprehensive example that showcases all possible message types and interactions:

```python
from llm_api_adapter.models.messages.chat_message import (
    AIMessage, Prompt, UserMessage
)
from llm_api_adapter.universal_adapter import UniversalLLMAPIAdapter

messages = [
    Prompt(
        "You are a friendly assistant who explains complex concepts "
        "in simple terms."
    ),
    UserMessage("Hi! Can you explain how artificial intelligence works?"),
    AIMessage(
        "Sure! Artificial intelligence (AI) is a system that can perform "
        "tasks requiring human-like intelligence, such as recognizing images "
        "or understanding language. It learns by analyzing large amounts of "
        "data, finding patterns, and making predictions."
    ),
    UserMessage("How does AI learn?"),
]

adapter = UniversalLLMAPIAdapter(
    organization="openai",
    model="gpt-3.5-turbo",
    api_key=openai_api_key
)

response = adapter.generate_chat_answer(
    messages=messages,
    max_tokens=256,
    temperature=1.0,
    top_p=1.0
)
print(response.content)
```

The `ChatResponse` object returned by `generate_chat_answer` includes several attributes that provide additional details about the response. Each attribute is populated only if the LLM's response includes the corresponding data:

- **model**: The model that generated the response.
- **response_id**: A unique identifier for the response.
- **timestamp**: The time at which the response was generated.
- **tokens_used**: The number of tokens used for the response.
- **content**: The actual text content generated by the model.
- **finish_reason**: The reason why the generation was finished (e.g., "stop" or "length").
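
A quick way to inspect this metadata after a call (reusing the `adapter` and `messages` from above; any attribute may be empty if the provider did not return the corresponding field):

```python
response = adapter.generate_chat_answer(messages=messages)

print(response.content)
print(response.model)          # model that generated the response
print(response.response_id)    # unique identifier for the response
print(response.timestamp)      # when the response was generated
print(response.tokens_used)    # tokens consumed by the response
print(response.finish_reason)  # e.g. "stop" or "length"
```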

## Testing

This project uses `pytest` for testing. Tests are located in the `tests/` directory.

### Running Tests

To run all tests, use the following command:

```bash
pytest
```

Alternatively, you can run the tests using the `tests_runner.py` script:

```bash
python tests/tests_runner.py
```

### Dependencies

Ensure you have the required dependencies installed. You can install them using:

```bash
pip install -r requirements-test.txt
```

### Test Structure

*   `unit/`: Contains unit tests for individual components.
*   `integration/`: Contains integration tests to verify the interaction between different parts of the system.
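
Since `pytest` accepts directory paths, you can also run a single suite, for example:

```bash
pytest tests/unit
```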

            
