llm-wrapper-testing

Name: llm-wrapper-testing
Version: 1.0.3
Summary: A comprehensive Python wrapper for Large Language Models with database integration and usage tracking
Author email: Akilan R M <akilan@hibizsolutions.com>
Upload time: 2025-07-24 06:32:27
Requires Python: >=3.8
License: none recorded
Keywords: llm, language-model, openai, azure, gpt, ai, machine-learning, database, postgresql, mysql, mongodb, usage-tracking, api-wrapper
Requirements: none recorded
# LLM Wrapper

A comprehensive Python wrapper for Azure OpenAI with built-in PostgreSQL integration and usage tracking. Provides detailed analytics for LLM usage with support for both text and JSON response formats.

## Features

- 🚀 **Easy Integration**: Simple API for interacting with Azure OpenAI services
- 📊 **Usage Tracking**: Comprehensive logging and analytics for all LLM requests
- 💾 **PostgreSQL Integration**: Built-in PostgreSQL database support with automatic table creation
- ⚡ **High Performance**: Optimized for concurrent requests and high throughput
- 🔒 **Secure**: Built-in security features and API key management
- 📈 **Analytics**: Detailed usage statistics and reporting
- 🎯 **Response Types**: Support for both text and JSON response formats
- 🐳 **Production Ready**: Robust error handling and logging

## Installation

### Basic Installation

```bash
pip install llm_wrapper_biz
```

## Quick Start

### Basic Usage

```python
from llm_wrapper_biz import LLMWrapper

# Initialize the wrapper (database connection is automatic)
wrapper = LLMWrapper(
    service_url="https://your-azure-openai-instance.openai.azure.com",
    api_key="your-azure-openai-api-key",
    deployment_name="your-deployment-name",
    api_version="2023-05-15",
    default_model='gpt-4'
)

# Send a text request
response = wrapper.send_request(
    input_text="What are the benefits of renewable energy?",
    customer_id=1,
    organization_id=1,
    response_type="text",  # "text" or "json"
    temperature=0.7,
    max_tokens=2000
)

print(f"Response: {response['processed_output']}")
print(f"Tokens used: {response['total_tokens']}")
print(f"Response type: {response['response_type']}")

# Send a JSON request
json_response = wrapper.send_request(
    input_text="Create a JSON object with information about Python programming including name, creator, and year_created.",
    customer_id=1,
    organization_id=1,
    response_type="json"
)

print(f"JSON Response: {json_response['processed_output']}")
print(f"Creator: {json_response['processed_output'].get('creator', 'N/A')}")

# Get usage statistics
stats = wrapper.get_usage_stats()
print(f"Total requests: {stats['total_requests']}")
print(f"Total tokens: {stats['total_tokens']}")

# Clean up
wrapper.close()
```

### Simplified Usage

For easier integration, use `send_request_simple()`, which returns only the processed output instead of the full response dictionary:

```python
# Get just the processed output (text)
text_result = wrapper.send_request_simple(
    input_text="Explain quantum computing",
    customer_id=1,
    organization_id=1,
    response_type="text"
)
print(text_result)  # plain string

# Get just the processed output (JSON)
json_result = wrapper.send_request_simple(
    input_text="Create JSON with weather data for London",
    customer_id=1,
    organization_id=1,
    response_type="json"
)
print(json_result)  # parsed dict
```

## Response Types

### Text Response (Default)

```python
response = wrapper.send_request(
    input_text="Explain artificial intelligence",
    customer_id=1,
    organization_id=1,
    response_type="text"
)

# Response structure:
{
    "output_text": "raw response from API",
    "processed_output": "same as output_text for text responses",
    "response_type": "text",
    "input_tokens": 10,
    "output_tokens": 150,
    "total_tokens": 160,
    "response_time_ms": 1200,
    "model": "gpt-4",
    "full_response": {...}
}
```

### JSON Response

```python
response = wrapper.send_request(
    input_text="Create a JSON object with user information including name, age, and skills array",
    customer_id=1,
    organization_id=1,
    response_type="json"
)

# Response structure:
{
    "output_text": '{"name": "John", "age": 30, "skills": ["Python", "AI"]}',
    "processed_output": {"name": "John", "age": 30, "skills": ["Python", "AI"]},
    "response_type": "json",
    "input_tokens": 15,
    "output_tokens": 25,
    "total_tokens": 40,
    "response_time_ms": 1500,
    "model": "gpt-4",
    "full_response": {...}
}
```
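For JSON responses, `processed_output` is the already-parsed dictionary, so you can use it directly. If you ever need to parse `output_text` yourself, note that models sometimes wrap JSON in a Markdown code fence; a defensive parse like this sketch (an illustrative helper, not part of the library) handles both cases:

```python
import json

def parse_json_output(output_text: str) -> dict:
    """Parse a model reply as JSON, tolerating a Markdown code fence."""
    text = output_text.strip()
    if text.startswith("```"):
        # Drop the opening fence line (``` or ```json) and the closing fence.
        lines = text.splitlines()
        if lines and lines[-1].strip() == "```":
            lines = lines[:-1]
        text = "\n".join(lines[1:])
    return json.loads(text)
```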


## Database Schema

The wrapper automatically creates the following PostgreSQL table:

```sql
CREATE TABLE token_usage_log (
    id SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    organization_id INTEGER NOT NULL,
    model_name VARCHAR(255) NOT NULL,
    request_params JSON,
    response_params JSON,
    input_tokens INTEGER NOT NULL,
    output_tokens INTEGER NOT NULL,
    total_tokens INTEGER NOT NULL,
    request_timestamp TIMESTAMP DEFAULT NOW(),
    response_time_ms INTEGER NOT NULL,
    status VARCHAR(50) DEFAULT 'success'
);
```
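For reference, the per-model breakdown that `get_usage_stats()` reports corresponds to a single `GROUP BY` over this table. The sketch below uses SQLite purely to illustrate the query shape (types adapted from the PostgreSQL DDL above, optional columns omitted); against PostgreSQL you would run the same `SELECT` through your driver of choice:

```python
import sqlite3

# Illustrative only: an in-memory SQLite stand-in for token_usage_log.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE token_usage_log (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        customer_id INTEGER NOT NULL,
        organization_id INTEGER NOT NULL,
        model_name TEXT NOT NULL,
        input_tokens INTEGER NOT NULL,
        output_tokens INTEGER NOT NULL,
        total_tokens INTEGER NOT NULL,
        response_time_ms INTEGER NOT NULL
    )
""")
rows = [
    (1, 1, "gpt-4", 100, 200, 300, 1200),
    (1, 1, "gpt-4", 50, 150, 200, 1000),
    (2, 1, "gpt-3.5-turbo", 40, 60, 100, 800),
]
conn.executemany(
    "INSERT INTO token_usage_log (customer_id, organization_id, model_name, "
    "input_tokens, output_tokens, total_tokens, response_time_ms) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

# Same shape as the per-model entries in get_usage_stats().
per_model = conn.execute("""
    SELECT model_name,
           COUNT(*)              AS requests,
           SUM(input_tokens)     AS input_tokens,
           SUM(output_tokens)    AS output_tokens,
           SUM(total_tokens)     AS total_tokens,
           AVG(response_time_ms) AS avg_response_time_ms
    FROM token_usage_log
    GROUP BY model_name
    ORDER BY total_tokens DESC
""").fetchall()
for row in per_model:
    print(row)
```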

## Usage Analytics

```python
# Get overall statistics
stats = wrapper.get_usage_stats()

# Get customer-specific statistics
customer_stats = wrapper.get_usage_stats(customer_id=1)

# Get organization-specific statistics
org_stats = wrapper.get_usage_stats(organization_id=1)

# Get statistics for a specific time period
period_stats = wrapper.get_usage_stats(
    start_date="2024-01-01T00:00:00",
    end_date="2024-01-31T23:59:59"
)

# Example stats output:
{
    "total_requests": 150,
    "total_tokens": 45000,
    "models": [
        {
            "model_name": "gpt-4",
            "requests": 100,
            "input_tokens": 15000,
            "output_tokens": 20000,
            "total_tokens": 35000,
            "avg_response_time_ms": 1200
        },
        {
            "model_name": "gpt-3.5-turbo",
            "requests": 50,
            "input_tokens": 5000,
            "output_tokens": 5000,
            "total_tokens": 10000,
            "avg_response_time_ms": 800
        }
    ]
}
```
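Because the stats separate input and output tokens per model, a rough cost estimate is one lookup away. The rates below are placeholders (check your actual Azure pricing tier), and the stats dict is assumed to have the shape shown above:

```python
# Hypothetical per-1K-token prices in USD; substitute your real Azure rates.
PRICE_PER_1K = {
    "gpt-4": {"input": 0.03, "output": 0.06},
    "gpt-3.5-turbo": {"input": 0.0015, "output": 0.002},
}

def estimate_cost(stats: dict) -> float:
    """Sum estimated cost over the per-model breakdown of a stats dict."""
    total = 0.0
    for model in stats["models"]:
        price = PRICE_PER_1K.get(model["model_name"])
        if price is None:
            continue  # skip models without a known rate
        total += model["input_tokens"] / 1000 * price["input"]
        total += model["output_tokens"] / 1000 * price["output"]
    return total
```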

## Configuration Options

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `service_url` | str | Required | Azure OpenAI service endpoint URL |
| `api_key` | str | Required | Azure OpenAI API key |
| `deployment_name` | str | Required | Azure OpenAI deployment name |
| `api_version` | str | Required | Azure OpenAI API version |
| `default_model` | str | 'gpt-4' | Default model identifier |
| `timeout` | int | 30 | Request timeout in seconds |

## API Reference

### Core Methods

#### `send_request(input_text, customer_id, organization_id, response_type="text", **kwargs)`

Send a request to the Azure OpenAI service.

**Parameters:**
- `input_text` (str): The prompt text
- `customer_id` (int): Customer identifier
- `organization_id` (int): Organization identifier
- `response_type` (str): Response format - "text" or "json"
- `model` (str, optional): Model to use for this request
- `temperature` (float, optional): Sampling temperature (0.0-1.0)
- `max_tokens` (int, optional): Maximum tokens in response

**Returns:**
- `dict`: Response containing output text, processed output, token counts, and metadata

#### `send_request_simple(input_text, customer_id, organization_id, response_type="text", **kwargs)`

Simplified method that returns only the processed output.

**Parameters:**
- Same as `send_request()`

**Returns:**
- `str` (for text) or `dict` (for JSON): Direct processed output

#### `get_usage_stats(**filters)`

Get usage statistics with optional filtering.

**Parameters:**
- `customer_id` (int, optional): Filter by customer
- `organization_id` (int, optional): Filter by organization
- `start_date` (str, optional): Start date in ISO format
- `end_date` (str, optional): End date in ISO format

**Returns:**
- `dict`: Usage statistics including request counts, token usage, and performance metrics

#### `close()`

Close database connections and clean up resources.
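Because the wrapper exposes a `close()` method, `contextlib.closing` from the standard library gives you with-statement cleanup for free. The stub class below merely stands in for `LLMWrapper` to show the pattern:

```python
from contextlib import closing

class FakeWrapper:
    """Stand-in for LLMWrapper; only close() matters for this pattern."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

# The with-block guarantees close() runs, even if an exception is raised.
with closing(FakeWrapper()) as wrapper:
    pass  # wrapper.send_request(...) calls would go here
print(wrapper.closed)  # → True
```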


## Requirements

- Python 3.8+

## License

This project is licensed under the MIT License.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Acknowledgments

- Thanks to all contributors who have helped shape this project
- Built with love for the AI/ML community

            
