llm-token-tracker

Name: llm-token-tracker
Version: 0.4.0
Summary: A package to track LLM token usage in context
Author: Laurin
Requires-Python: >=3.13
Keywords: llm, token, tracker, xai, grok
Uploaded: 2025-10-16 11:32:21
# llm-token-tracker

[![PyPI version](https://img.shields.io/pypi/v/llm-token-tracker.svg)](https://pypi.org/project/llm-token-tracker/)

A Python package to track token usage in LLM interactions.

## Features

- Track token usage in LLM conversations
- Support for detailed token breakdowns (prompt, completion, reasoning, cached, etc.)
- Cost estimation for input and output tokens
- Configurable verbosity levels for logging
- Integration with custom loggers
- Compatibility with xAI and OpenAI models

## Installation

```bash
pip install llm-token-tracker
```

## Usage

```python
from llm_token_tracker import wrap_llm
from xai_sdk import Client
from xai_sdk.chat import system, user

# Create a chat and wrap it for token tracking
client = Client()
chat = client.chat.create(model="grok-3")
wrapped_chat = wrap_llm(chat)

response = wrapped_chat.sample("Hello, how are you?")
print(response.content)
# Console will log: Total tokens used in context: X

# Build a multi-turn conversation context
wrapped_chat.append(system("You are Grok, a highly intelligent AI."))
wrapped_chat.append(user("What is the meaning of life?"))
response = wrapped_chat.sample()
print(response.content)
```

### OpenAI Compatibility

For OpenAI models using the Responses API:

```python
from openai import OpenAI
from llm_token_tracker import wrap_llm

client = OpenAI()
wrapped_client = wrap_llm(client, provider="openai")

response = wrapped_client.responses.create(
    model="gpt-4.1",
    input="Tell me a three sentence bedtime story about a unicorn."
)
print(response.output_text)
# Console will log: Total tokens used in context: X
```
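
The `provider` argument can be combined with any of the configuration options described in the next section.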

### Configuration Options

`wrap_llm` accepts several parameters to customize logging:

- `provider`: `"xai"` (default) or `"openai"` to specify the LLM provider.
- `verbosity`: `"minimum"` (default), `"detailed"`, or `"max"`:
  - `"minimum"`: Logs only total tokens used.
  - `"detailed"`: Logs a detailed usage summary.
  - `"max"`: Logs the full history of all token usages.
- `logger`: Optional `logging.Logger` instance. If provided, output is sent to this logger instead of being printed to the console.
- `log_level`: Logging level (default `logging.INFO`).
- `quiet`: If `True`, disables all logging.
- `max_tokens`: Maximum tokens allowed in context (default 132000).
- `input_pricing`: Price per 1 million input tokens (default 0.2).
- `output_pricing`: Price per 1 million output tokens (default 0.5).
- `calculate_pricing`: If `True`, calculates and logs cost estimates (default `False`). See the pricing sketch below.
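
For example, to enable cost estimates with custom rates (a minimal sketch reusing the `chat` object from the Usage section; the rates shown are placeholders, not real provider prices):

```python
wrapped_chat = wrap_llm(
    chat,
    verbosity="detailed",
    calculate_pricing=True,  # log cost estimates alongside token counts
    input_pricing=0.3,       # placeholder rate per 1M input tokens
    output_pricing=1.5,      # placeholder rate per 1M output tokens
)

response = wrapped_chat.sample("Summarize Hamlet in one sentence.")
print(response.content)
```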

Example with custom logger:

```python
import logging

logger = logging.getLogger("my_llm_logger")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
logger.addHandler(handler)

wrapped_chat = wrap_llm(chat, logger=logger, verbosity="detailed")
```
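
Because the wrapper uses standard `logging`, the same pattern can persist usage summaries to a file (plain standard-library logging, nothing specific to this package):

```python
import logging

logger = logging.getLogger("my_llm_logger")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("llm_usage.log"))  # write summaries to disk

wrapped_chat = wrap_llm(chat, logger=logger, verbosity="max")
```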

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

            
