paylm-sdk 1.0.0

Home page: https://github.com/paylm/paylm-sdk-python
Summary: Official Python SDK for Paylm - Track AI usage and costs across multiple providers (OpenAI, Anthropic, Google, DeepSeek, etc.)
Upload time: 2025-10-22 16:47:40
Author: Paylm
Requires Python: >=3.7
License: MIT
Keywords: paylm, sdk, ai, llm, usage, tracking, costs, openai, anthropic, claude, gpt, gemini, deepseek, token-counting, cost-calculation, api-integration
# Paylm SDK for Python

A Python SDK for integrating with the Paylm API to track usage and costs for AI models.

## Installation

```bash
pip install paylm-sdk
```

## Usage

### Basic Usage

```python
import logging
from paylm_sdk import Client, UsageData

def main():
    # Create a new client with your API key
    client = Client.new_client("your-paylm-api-key")
    
    # Set log level (optional)
    client.set_log_level(logging.INFO)
    
    # Define usage data
    usage_data = UsageData(
        model="llama",
        prompt_tokens=756,
        completion_tokens=244,
        total_tokens=1000
    )
    
    # Send usage data
    try:
        client.send_usage("agent-123", "customer-456", "email-sent", usage_data)
        print("Usage data sent successfully!")
    except Exception as e:
        print(f"Failed to send usage: {e}")

if __name__ == "__main__":
    main()
```

### Using send_usage_with_token_string

```python
import logging
from paylm_sdk import Client, UsageDataWithStrings

def main():
    # Create a new client
    client = Client.new_client_with_url("your-api-key", "http://localhost:8080")
    client.set_log_level(logging.INFO)
    
    # Define usage data with prompt and output strings
    usage_data = UsageDataWithStrings(
        service_provider="OpenAI",
        model="gpt-4",
        prompt_string="What is the capital of France? Please provide a detailed explanation.",
        output_string="The capital of France is Paris. Paris is located in the north-central part of France and is the country's largest city and economic center."
    )
    
    # Send usage data (tokens will be automatically counted)
    try:
        client.send_usage_with_token_string("agent-123", "customer-456", "question-answer", usage_data)
        print("Usage data sent successfully!")
    except Exception as e:
        print(f"Failed to send usage: {e}")

if __name__ == "__main__":
    main()
```

### Using Model Constants

The SDK provides predefined constants for all supported models and service providers:

```python
import logging
from paylm_sdk import (
    Client, 
    UsageData,
    OpenAIModels,
    AnthropicModels,
    ServiceProvider,
    is_model_supported
)

def main():
    client = Client.new_client("your-api-key")
    
    # Use model constants
    usage_data = UsageData(
        service_provider=ServiceProvider.OPENAI,
        model=OpenAIModels.GPT_4O,
        prompt_tokens=1000,
        completion_tokens=500,
        total_tokens=1500
    )
    
    client.send_usage("agent-123", "customer-456", "chat-completion", usage_data)
    
    # Check if a model is supported
    if is_model_supported(OpenAIModels.GPT_5):
        print("GPT-5 is supported!")

if __name__ == "__main__":
    main()
```

#### Available Model Constants

- **OpenAI**: `OpenAIModels.GPT_5`, `OpenAIModels.GPT_4O`, `OpenAIModels.GPT_4O_MINI`, `OpenAIModels.O1`, `OpenAIModels.O3`, etc.
- **Anthropic**: `AnthropicModels.SONNET_4_5`, `AnthropicModels.HAIKU_4_5`, `AnthropicModels.OPUS_4_1`, etc.
- **Google DeepMind**: `GoogleDeepMindModels.GEMINI_2_5_PRO`, `GoogleDeepMindModels.GEMINI_2_5_FLASH`, etc.
- **Meta**: `MetaModels.LLAMA_4_MAVERICK`, `MetaModels.LLAMA_3_1_405B_INSTRUCT_TURBO`, etc.
- **AWS**: `AWSModels.AMAZON_NOVA_PRO`, `AWSModels.AMAZON_NOVA_LITE`, etc.
- **Mistral AI**: `MistralAIModels.MISTRAL_LARGE`, `MistralAIModels.MISTRAL_MEDIUM`, etc.
- **Cohere**: `CohereModels.COMMAND_R_PLUS`, `CohereModels.COMMAND_R`, etc.
- **DeepSeek**: `DeepSeekModels.DEEPSEEK_R1_GLOBAL`, `DeepSeekModels.DEEPSEEK_REASONER`, etc.

### Advanced Usage

```python
import logging
from paylm_sdk import Client, UsageData, UsageDataWithStrings

def main():
    # Create client with custom base URL
    client = Client.new_client_with_url("your-api-key", "https://custom-api.paylm.com")
    
    # Set debug logging
    client.set_log_level(logging.DEBUG)
    
    # Get logger for custom logging
    logger = client.get_logger()
    logger.info("Starting usage tracking...")
    
    # Method 1: Send usage data with pre-calculated tokens
    usage_data = UsageData(
        model="gpt-4",
        prompt_tokens=1000,
        completion_tokens=500,
        total_tokens=1500
    )
    
    try:
        client.send_usage("agent-789", "customer-101", "chat-completion", usage_data)
        logger.info("Usage data sent successfully!")
    except Exception as e:
        logger.error(f"Failed to send usage: {e}")
    
    # Method 2: Send usage data with automatic token counting
    usage_data_strings = UsageDataWithStrings(
        service_provider="Anthropic",
        model="claude-3-sonnet",
        prompt_string="Hello, how are you?",
        output_string="I'm doing well, thank you for asking!"
    )
    
    try:
        client.send_usage_with_token_string("agent-789", "customer-101", "greeting", usage_data_strings)
        logger.info("Usage data with token strings sent successfully!")
    except Exception as e:
        logger.error(f"Failed to send usage with token strings: {e}")

if __name__ == "__main__":
    main()
```

## API Reference

### Client

#### `Client.new_client(api_key: str) -> Client`
Creates a new Paylm SDK client with the default API URL.

#### `Client.new_client_with_url(api_key: str, base_url: str) -> Client`
Creates a new Paylm SDK client with a custom base URL.

#### `send_usage(agent_id: str, customer_id: str, indicator: str, usage_data: UsageData) -> None`
Sends usage data to the Paylm API with pre-calculated token counts. Raises an exception if the request fails.

#### `send_usage_with_token_string(agent_id: str, customer_id: str, indicator: str, usage_data: UsageDataWithStrings) -> None`
Sends usage data to the Paylm API using prompt and output strings. The function automatically counts tokens using proper tokenizers for each model provider and calculates costs. Raises an exception if the request fails.

#### `set_log_level(level: int) -> None`
Sets the logging level for the client.

#### `get_logger() -> logging.Logger`
Returns the logger instance for custom logging.

### Types

#### `UsageData`
```python
@dataclass
class UsageData:
    model: str
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int
```

#### `UsageDataWithStrings`
```python
@dataclass
class UsageDataWithStrings:
    service_provider: str
    model: str
    prompt_string: str
    output_string: str
```

## Supported Models

The SDK includes built-in pricing for models from the following providers:

### OpenAI
- `gpt-3.5-turbo` - $1.50 prompt, $2.00 completion (per 1000 tokens)
- `gpt-3.5-turbo-16k` - $3.00 prompt, $4.00 completion (per 1000 tokens)
- `gpt-4` - $30.00 prompt, $60.00 completion (per 1000 tokens)
- `gpt-4-turbo` - $10.00 prompt, $30.00 completion (per 1000 tokens)
- `gpt-4o` - $5.00 prompt, $15.00 completion (per 1000 tokens)
- `gpt-4o-mini` - $0.15 prompt, $0.60 completion (per 1000 tokens)

### Anthropic
- `claude-3-haiku` - $0.25 prompt, $1.25 completion (per 1000 tokens)
- `claude-3-sonnet` - $3.00 prompt, $15.00 completion (per 1000 tokens)
- `claude-3-opus` - $15.00 prompt, $75.00 completion (per 1000 tokens)
- `claude-3.5-sonnet` - $3.00 prompt, $15.00 completion (per 1000 tokens)

### Google DeepMind
- `gemini-pro` - $0.50 prompt, $1.50 completion (per 1000 tokens)
- `gemini-1.5-pro` - $1.25 prompt, $5.00 completion (per 1000 tokens)
- `gemini-1.5-flash` - $0.075 prompt, $0.30 completion (per 1000 tokens)

### Meta
- `llama-2-7b` - $0.10 per 1000 tokens
- `llama-2-13b` - $0.20 per 1000 tokens
- `llama-2-70b` - $0.70 per 1000 tokens
- `llama-3-8b` - $0.10 per 1000 tokens
- `llama-3-70b` - $0.70 per 1000 tokens

### AWS
- `claude-3-haiku-aws` - $0.25 prompt, $1.25 completion (per 1000 tokens)
- `claude-3-sonnet-aws` - $3.00 prompt, $15.00 completion (per 1000 tokens)
- `titan-text-express` - $0.80 prompt, $1.60 completion (per 1000 tokens)

### Mistral AI
- `mistral-7b` - $0.10 per 1000 tokens
- `mistral-large` - $2.00 prompt, $6.00 completion (per 1000 tokens)

### Cohere
- `command` - $1.50 prompt, $2.00 completion (per 1000 tokens)
- `command-r-plus` - $3.00 prompt, $15.00 completion (per 1000 tokens)

### DeepSeek
- `deepseek-chat` - $0.10 prompt, $0.20 completion (per 1000 tokens)

For unknown models, the SDK will use default pricing of $0.10 per 1000 tokens.
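The cost arithmetic implied by these tables can be sketched as follows. This is an illustration of the per-1000-token rates listed above, not the SDK's actual internals; the `PRICING` subset, function name, and flat-rate handling for unknown models are assumptions for the example.

```python
# Illustrative subset of the per-1000-token rates from the tables above:
# (prompt rate, completion rate) in USD.
PRICING = {
    "gpt-4o": (5.00, 15.00),
    "claude-3-sonnet": (3.00, 15.00),
    "gemini-1.5-flash": (0.075, 0.30),
}

DEFAULT_RATE = 0.10  # unknown models: flat $0.10 per 1000 tokens


def calculate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost for a request under the rates above."""
    if model in PRICING:
        prompt_rate, completion_rate = PRICING[model]
        return (prompt_tokens / 1000) * prompt_rate + (completion_tokens / 1000) * completion_rate
    # Fall back to the flat default rate across all tokens.
    return ((prompt_tokens + completion_tokens) / 1000) * DEFAULT_RATE
```

For example, 1000 prompt tokens and 500 completion tokens on `gpt-4o` would come to $5.00 + $7.50 = $12.50 under these rates.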

## Token Counting

The SDK uses accurate token counting for different model providers:

- **OpenAI GPT models**: Uses the official tiktoken library with model-specific encodings
- **Anthropic Claude models**: Uses cl100k_base encoding (same as GPT-4)
- **Google Gemini models**: Uses cl100k_base encoding as approximation
- **Meta Llama models**: Uses cl100k_base encoding as approximation
- **Mistral models**: Uses cl100k_base encoding as approximation
- **Cohere models**: Uses cl100k_base encoding as approximation
- **DeepSeek models**: Uses cl100k_base encoding as approximation
- **AWS Titan models**: Uses cl100k_base encoding as approximation
- **Unknown models**: Falls back to word-based estimation (1.3 tokens per word)

The token counting is performed automatically when using `send_usage_with_token_string()`.
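The word-based fallback for unknown models can be sketched as a one-liner (the function name and rounding-up behavior are assumptions for illustration; for the providers listed above the SDK uses tiktoken encodings instead):

```python
import math


def estimate_tokens(text: str) -> int:
    # Word-based fallback for unknown models: roughly 1.3 tokens
    # per whitespace-separated word, rounded up to a whole token.
    return math.ceil(len(text.split()) * 1.3)
```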

## Logging

The SDK uses Python's built-in logging module. You can control the log level and access the logger for custom logging.

```python
import logging

# Set log level
client.set_log_level(logging.DEBUG)

# Get logger for custom logging
logger = client.get_logger()
logger.info("Custom log message")
```

## Authentication

The SDK uses the `paylm-api-key` header for authentication. Make sure to provide a valid API key when creating the client.
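For reference, the request the SDK sends looks roughly like the following sketch using the standard library. The endpoint path and payload shape are assumptions for illustration; only the `paylm-api-key` header name comes from the documentation above.

```python
import json
import urllib.request


def build_usage_request(api_key: str, base_url: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request authenticated with the
    `paylm-api-key` header. The /usage path is illustrative."""
    return urllib.request.Request(
        url=f"{base_url}/usage",
        data=json.dumps(payload).encode("utf-8"),
        method="POST",
        headers={
            "paylm-api-key": api_key,
            "Content-Type": "application/json",
        },
    )
```

Sending it would then be a matter of `urllib.request.urlopen(req)` (or the `requests` equivalent), wrapped in the error handling shown below.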

## Error Handling

The SDK raises appropriate exceptions for various failure scenarios:

- `requests.RequestException` - Network and HTTP errors
- `ValueError` - Invalid usage data or cost calculation errors

```python
try:
    client.send_usage("agent-123", "customer-456", "test", usage_data)
except requests.RequestException as e:
    print(f"Network error: {e}")
except ValueError as e:
    print(f"Invalid data: {e}")
```

## Development

### Running Tests

```bash
python -m pytest tests/
```

### Running Examples

```bash
# Basic usage
python examples/basic_usage.py

# Advanced usage
python examples/advanced_usage.py
```

## License

MIT

            
