# TokenWatcher Python SDK
Monitor your LLM API calls with zero code changes. Simply wrap your OpenAI or Anthropic client and get instant observability into tokens, costs, latency, and errors.
## Installation
```bash
pip install tokenwatcher
```
For OpenAI support:
```bash
pip install "tokenwatcher[openai]"
```
For Anthropic support:
```bash
pip install "tokenwatcher[anthropic]"
```
For both:
```bash
pip install "tokenwatcher[all]"
```
## Quick Start
### OpenAI
```python
from tokenwatcher import MonitoredOpenAI
from openai import OpenAI

# Wrap your client (that's it!)
client = MonitoredOpenAI(
    OpenAI(api_key="sk-..."),
    api_key="ym_your_monitoring_key"
)

# Use OpenAI exactly as before
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
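Because the wrapper is a drop-in replacement, the call returns the standard OpenAI response object:
```python
# Standard OpenAI ChatCompletion object, unchanged by the wrapper
print(response.choices[0].message.content)
print(response.usage.total_tokens)
```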
### Anthropic
```python
from tokenwatcher import MonitoredAnthropic
from anthropic import Anthropic

# Wrap your client
client = MonitoredAnthropic(
    Anthropic(api_key="sk-ant-..."),
    api_key="ym_your_monitoring_key"
)

# Use Anthropic exactly as before
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
```
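Likewise, the Anthropic wrapper returns the SDK's normal `Message` object:
```python
# Standard Anthropic Message object, unchanged by the wrapper
print(response.content[0].text)
print(response.usage.input_tokens, response.usage.output_tokens)
```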
## Configuration
```python
from tokenwatcher import MonitoredOpenAI

client = MonitoredOpenAI(
    openai_client,
    api_key="ym_your_key",                     # Required: your TokenWatcher API key
    base_url="https://api.token-watcher.com",  # Optional: custom backend URL
    context="production/chatbot",              # Optional: tag events with a context
    user_identifier="user-123",                # Optional: associate events with a user
    buffer_size=100,                           # Optional: max events before auto-flush (default: 100)
    flush_interval=10,                         # Optional: seconds between auto-flushes (default: 10)
    enabled=True,                              # Optional: set False to disable monitoring (default: True)
)
```
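A common pattern is to turn monitoring off outside production, for example in tests. A minimal sketch, assuming your app signals its environment via an `APP_ENV` variable (that variable is our convention here, not part of the SDK):
```python
import os

from openai import OpenAI
from tokenwatcher import MonitoredOpenAI

in_production = os.getenv("APP_ENV") == "production"  # hypothetical app convention

client = MonitoredOpenAI(
    OpenAI(),
    api_key=os.getenv("TOKENWATCHER_API_KEY", ""),
    context="production/chatbot" if in_production else "local-dev",
    enabled=in_production,  # LLM calls still work; events just aren't recorded
)
```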
## Features
- **Zero Code Changes**: Drop-in replacement for OpenAI and Anthropic clients
- **Automatic Monitoring**: Captures tokens, latency, costs, and errors
- **Auto-Context Detection**: Automatically detects calling module and function for better observability
- **Buffered & Async**: Events are buffered and sent in the background, keeping monitoring overhead off your request path
- **Silent Failures**: If monitoring fails, your LLM calls continue uninterrupted
- **Multi-tenant**: Each API key is isolated to your account
- **Type Safe**: Full type hints for Python 3.8+
## How It Works
The SDK wraps your LLM client and intercepts API calls to extract:
- Provider and model used
- Input and output token counts
- Request latency
- Success/error status
- Error details (if any)
- **Calling context** (automatically detects module and function name)
Events are buffered and sent to your TokenWatcher backend in batches, ensuring minimal overhead.
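Conceptually, the wrapper looks something like the sketch below. This is illustrative only, not the SDK's actual implementation; the class and helper names (`MonitoredClientSketch`, `_record`, `_flush`) are hypothetical:
```python
import threading
import time


class MonitoredClientSketch:
    """Illustrative only: buffers per-call events and flushes them in batches."""

    def __init__(self, inner, buffer_size=100, flush_interval=10):
        self.inner = inner
        self.buffer_size = buffer_size
        self._buffer = []
        self._lock = threading.Lock()
        # Background flusher so the request path never waits on the network
        t = threading.Thread(target=self._flush_loop, args=(flush_interval,), daemon=True)
        t.start()

    def create(self, **kwargs):
        start = time.monotonic()
        try:
            response = self.inner.create(**kwargs)  # the real LLM call
            self._record(kwargs, time.monotonic() - start, error=None)
            return response
        except Exception as exc:
            self._record(kwargs, time.monotonic() - start, error=exc)
            raise  # monitoring never swallows the caller's errors

    def _record(self, kwargs, latency, error):
        event = {
            "model": kwargs.get("model"),
            "latency_s": latency,
            "status": "error" if error else "success",
        }
        with self._lock:
            self._buffer.append(event)
            if len(self._buffer) >= self.buffer_size:
                self._flush()

    def _flush(self):
        batch, self._buffer = self._buffer, []
        # In the real SDK a batch like this is POSTed to the backend;
        # send failures are swallowed so user code is never interrupted.

    def _flush_loop(self, interval):
        while True:
            time.sleep(interval)
            with self._lock:
                self._flush()
```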
### Automatic Context Detection
When you don't provide an explicit `context` parameter, the SDK automatically detects the calling module and function:
```python
# In myapp/chatbot.py
def handle_user_message(message):
    client = MonitoredOpenAI(openai_client, api_key="ym_key")
    response = client.chat.completions.create(...)
    # Context will be auto-set to: "myapp.chatbot.handle_user_message"
```
This provides automatic code-level observability without any manual tagging! You can still override this by providing your own `context` parameter.
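For example, to tag a batch job explicitly instead of relying on detection (the context string here is illustrative):
```python
client = MonitoredOpenAI(
    openai_client,
    api_key="ym_key",
    context="nightly-batch/summarizer",  # explicit context overrides auto-detection
)
```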
## Requirements
- Python 3.8+
- requests >= 2.25.0
- openai >= 1.0.0 (optional, for OpenAI support)
- anthropic >= 0.18.0 (optional, for Anthropic support)
## Environment Variables
You can configure the SDK using environment variables:
```bash
export TOKENWATCHER_API_KEY=ym_your_key
export TOKENWATCHER_BASE_URL=https://api.token-watcher.com
export TOKENWATCHER_CONTEXT=production
```
Then read them at runtime instead of hard-coding values:
```python
import os

from openai import OpenAI
from tokenwatcher import MonitoredOpenAI

client = MonitoredOpenAI(
    OpenAI(api_key="sk-..."),
    api_key=os.getenv("TOKENWATCHER_API_KEY")
)
```
## Support
- Documentation: https://docs.token-watcher.com
- Issues: https://github.com/tokenwatcher/tokenwatcher-python/issues
- Email: hello@token-watcher.com
## License
MIT License - see LICENSE file for details.