# ohtell
## unreliable observability is worse than no observability
A simple, async-first OpenTelemetry decorator for tracing Python functions. Automatically captures traces, metrics, and logs with minimal setup.
## Disclaimer 1: Why ohtell exists
You wanted to try OpenTelemetry because it's cool, it's funky, and you want to observe your code like a l33t engineer. But then you start reading their [Getting Started guide](https://opentelemetry.io/docs/languages/python/getting-started/) and realize that OTEL becomes "Oh Hell!"
- **What you want**: "Just works"
- **What OpenTelemetry gives you**: 47 lines of boilerplate, 12 imports, and 100 backends that each work just a little bit differently
So ohtell was born of the frustration that, just to collect and send JSON (and traces are just objects), you apparently need a PhD in distributed tracing theory.
## Disclaimer 2: The Observability Ego Problem
Observability tools often have such an **inflated ego** that they crash your services when they can't authorize, hit rate limits, or parse your logs.
**Classic Catch-22**: You don't know that your service failed... because the observability wrapper killed your service... because it couldn't connect to the backend to report that your app was working just fine. 🤦
- **What you want**: "Hey, I can't connect to track your stuff right now, but your code keeps working fine. I'll show you a warning when I'm back online."
- **What you get**: `AuthenticationError: Service destroyed. Have a nice day! 💥`
## Disclaimer 3: The Performance Killer Problem
Most observability tools will **murder your performance** because they send logs and traces synchronously.
**Picture this**: You're processing a high-volume Kafka queue. Each message takes 50ms to process. But your observability tool runs three blocking calls around each message - authentication (300ms), span creation (200ms), and export (400ms). **Congratulations, your 50ms job now takes 950ms.**
Each worker goes from handling ~20 messages/sec to ~1 message/sec. Your observability tool just made your system ~19x slower. Great job! 👏
- **ohtell's approach**: Fire-and-forget in background threads. Your code runs at full speed while telemetry gets sent **when there's processing power available**
- **Philosophy**: Observability is important, but **not more important than your actual work**
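The fire-and-forget pattern above can be pictured as a bounded queue drained by a daemon thread. This is a minimal sketch of the idea, not ohtell's actual exporter:

```python
import queue
import threading

class BackgroundExporter:
    """Toy fire-and-forget exporter: callers enqueue and return immediately;
    a daemon thread ships items when there is spare capacity."""

    def __init__(self, send):
        self._send = send                       # backend call (may be slow or failing)
        self._queue = queue.Queue(maxsize=512)  # bounded so telemetry can't eat memory
        threading.Thread(target=self._drain, daemon=True).start()

    def emit(self, item):
        try:
            self._queue.put_nowait(item)  # never block the hot path
        except queue.Full:
            pass                          # drop telemetry rather than slow the app

    def _drain(self):
        while True:
            item = self._queue.get()
            try:
                self._send(item)          # failures must not propagate to the app
            except Exception:
                pass
            finally:
                self._queue.task_done()

    def flush(self):
        self._queue.join()  # block until everything queued was sent
```

Even if `send()` takes 400ms per call, `emit()` returns in microseconds; the slow work happens off the request path.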
## ⚠️ Experimental Software Warning
This is experimental software because OpenTelemetry is still evolving rapidly. Some issues may arise as the ecosystem changes.
**Found a bug? Something broken?** Use our [GitHub issue tracker](https://github.com/anthropics/claude-code/issues) and let's fix the shit out of it together!
## Features
- 🎯 **Async-first decorator API** - All functions become async when decorated
- 🖥️ **Console output by default** - No setup needed, outputs to console when no OTEL endpoint configured
- 🔄 **Automatic span hierarchy** - Nested function calls create proper parent-child relationships
- 📊 **Complete observability** - Traces, metrics, and logs in one package
- 📝 **Print capture** - Automatically captures print statements as events and logs
- 🏷️ **Dynamic naming** - Template-based span names with parameters
- ⚡ **Zero-block export** - Fire-and-forget telemetry that doesn't block your code
## Installation
```bash
pip install ohtell
```
## Quick Start
```python
import asyncio
from ohtell import task
@task(name="Hello World")
async def hello(name: str):
    print(f"Hello {name}!")
    return f"Greetings, {name}"
# Run it
result = asyncio.run(hello("World"))
```
When no OTLP endpoint is configured, ohtell automatically outputs all telemetry data to the console in a readable format. This includes:
- Structured trace spans with timing and hierarchy
- All print statements captured as events
- Function arguments and return values
- Error details and stack traces
- Metrics summaries
**Console output is enabled when:**
- No `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable is set
- No `endpoint` is specified in config.yaml
- `console: true` is explicitly set in config.yaml (this forces console output even if an endpoint is configured)
### Option 1: Environment Variables (OTLP Standard)
```bash
# OTLP endpoint (if not set, outputs to console)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
# Optional: Authentication headers
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your-token"
# Optional: Service identification
export OTEL_SERVICE_NAME="my-app"
export OTEL_RESOURCE_ATTRIBUTES="service.namespace=production,deployment.environment=prod"
# Optional: Protocol (defaults to http/protobuf)
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
```
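The `OTEL_RESOURCE_ATTRIBUTES` value is a comma-separated list of `key=value` pairs. A minimal parser for that format (an illustration of the convention, not ohtell's own code) looks like:

```python
def parse_resource_attributes(raw: str) -> dict:
    """Parse the OTLP 'key=value,key=value' resource attribute format."""
    attrs = {}
    for pair in raw.split(","):
        pair = pair.strip()
        if not pair:
            continue
        key, sep, value = pair.partition("=")
        if sep:  # skip malformed entries with no '='
            attrs[key.strip()] = value.strip()
    return attrs

print(parse_resource_attributes("service.namespace=production,deployment.environment=prod"))
# {'service.namespace': 'production', 'deployment.environment': 'prod'}
```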
### Option 2: Config File (config.yaml)
Create a `config.yaml` file in your project root:
```yaml
otel:
  endpoint: "http://localhost:4317"
  headers: "Authorization=Bearer your-token"
  protocol: "grpc"  # or "http/protobuf"
  resource_attributes: "service.namespace=production,deployment.environment=prod"

# Or explicitly enable console output instead. (A second top-level `otel:` key
# in the same file would be invalid YAML, so use one or the other.)
# otel:
#   console: true  # Forces console output even if endpoint is set
```
### Option 3: Programmatic Configuration
**⚠️ IMPORTANT**: Environment variables must be set **BEFORE** importing ohtell:
```python
import asyncio
import os
# Set environment variables BEFORE importing ohtell
os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'http://localhost:4317'
os.environ['OTEL_EXPORTER_OTLP_HEADERS'] = 'Authorization=Bearer your-token'
os.environ['OTEL_RESOURCE_ATTRIBUTES'] = 'service.name=my-app,service.namespace=production'
# Import AFTER setting environment variables
from ohtell import task
@task(name="Configured Task")
async def configured_task():
    return "configured"
asyncio.run(configured_task())
```
**Configuration Priority**: Environment variables → config.yaml → defaults
- Environment variables set **before import** work ✅
- Config.yaml files are always reliable ✅
- Environment variables set **after import** are ignored ❌
### Option 4: Enhanced Environment Variables
ohtell now supports additional environment variables for streamlined configuration:
```bash
# Service configuration with smart defaults
export ENV="production" # Sets deployment.environment (default: "dev")
export NAMESPACE="my-service" # Sets service.namespace
# Standard OTEL configuration
export OTEL_SERVICE_NAME="my-app"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
```
### Option 5: Programmatic Init Function
Use the enhanced `init()` function for runtime configuration:
```python
import ohtell
# Initialize with custom configuration
ohtell.init(
    app_name="my-application",       # Sets service.name
    service_namespace="production",  # Sets service.namespace
    deployment_env="prod",           # Sets deployment.environment
    service_version="1.2.3"          # Sets service.version
)
# Configuration priority: function params > ENV vars > pyproject.toml > defaults
# - service.name: from app_name parameter
# - service.namespace: from service_namespace parameter or NAMESPACE env var
# - deployment.environment: from deployment_env parameter or ENV env var or "dev"
# - service.version: from service_version parameter or pyproject.toml or "0.1.0"
# - service.hostname: automatically detected system hostname
# All parameters are optional - only set what you want to override
ohtell.init(app_name="my-app") # Just set the service name
# Calling with no arguments at all is also fine
ohtell.init() # Use defaults and environment variables
```
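The priority chain described in the comments above amounts to a first-non-empty lookup. The helper below is a sketch of that idea, not ohtell's implementation:

```python
import os

def resolve(param=None, env_var=None, fallback=None, default=None):
    """Return the first value present: explicit param > env var > fallback > default."""
    if param is not None:
        return param
    if env_var and os.environ.get(env_var):
        return os.environ[env_var]
    if fallback is not None:
        return fallback
    return default

# e.g. deployment.environment: deployment_env param > ENV var > "dev"
os.environ["ENV"] = "production"
print(resolve(env_var="ENV", default="dev"))          # production
print(resolve("prod", env_var="ENV", default="dev"))  # prod
```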
## Distributed Tracing
**Problem**: HTTP calls between services create separate traces.
**Solution**: Pass trace context in HTTP headers, use `__otel_context` to link spans.
### Helper Functions
#### `get_otel_context()`
Extracts current span context as a dictionary for distributed tracing:
```python
from ohtell import get_otel_context
context = get_otel_context()
# Returns: {'trace_id': '...', 'span_id': '...', 'trace_flags': 1, 'is_remote': True}
```
#### `set_trace_id(trace_id)`
Overrides the trace ID for testing or custom scenarios:
```python
from ohtell import set_trace_id
# Use a custom trace ID (32 hex characters)
set_trace_id('deadbeefcafebabe1234567890abcdef')
```
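A trace ID is 16 bytes, i.e. exactly 32 lowercase hex characters, and the all-zeros value is reserved as "invalid" in OpenTelemetry. A quick validator for values you pass to `set_trace_id` (a hypothetical helper, shown for illustration) could be:

```python
import re

_TRACE_ID_RE = re.compile(r"^[0-9a-f]{32}$")

def is_valid_trace_id(trace_id: str) -> bool:
    """True for a 32-char lowercase hex string that isn't the reserved all-zeros id."""
    return bool(_TRACE_ID_RE.match(trace_id)) and trace_id != "0" * 32

print(is_valid_trace_id("deadbeefcafebabe1234567890abcdef"))  # True
print(is_valid_trace_id("not-a-trace-id"))                    # False
```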
### HTTP Example
```python
import httpx
from ohtell import task, get_otel_context
@task(name="Client Service")
async def call_api():
    context = get_otel_context()
    headers = {"X-Trace-Id": context['trace_id'], "X-Span-Id": context['span_id']}

    async with httpx.AsyncClient() as client:
        response = await client.post("http://api/process", headers=headers, json={"data": "test"})

    return response.json()

@task
def process_data(data):  # sync functions become awaitable once decorated
    print(data)

async def handle_api_request(request_headers, data):
    if "X-Trace-Id" in request_headers:
        remote_context = {
            'trace_id': request_headers["X-Trace-Id"],
            'span_id': request_headers["X-Span-Id"],
            'trace_flags': 1,
            'is_remote': True
        }
        return await process_data(data, __otel_context=remote_context)

    return await process_data(data)
```
### Queue Example
```python
import asyncio
from ohtell import task, get_otel_context
queue = asyncio.Queue()  # shared queue for this example

@task(name="Producer")
async def send_to_queue(data):
    context = get_otel_context()

    # Add trace context to message
    message = {
        "data": data,
        "user_id": "123",
        "__otel_context": context
    }

    await queue.put(message)
    return message

@task(name="Consumer")
async def process_queue_message(message):
    # Extract context and pass to processor
    context = message.get("__otel_context")
    return await process_data(message["data"], message["user_id"], __otel_context=context)

@task(name="Process Data")
async def process_data(data, user_id):
    return f"processed {data} for {user_id}"

# Usage
async def main():
    # Producer creates message with trace context
    message = await send_to_queue("user_signup")

    # Consumer processes message - maintains same trace_id as producer
    result = await process_queue_message(message)
    print(result)  # "processed user_signup for 123"

asyncio.run(main())
```
**Result**: In the queue example, the producer and `process_data` spans share the same trace_id even though the consumer runs in its own trace; the message payload bridges the trace across the async boundary. In the HTTP example, client and server spans likewise share a trace_id. In both cases the `__otel_context` parameter is consumed by `@task`, so your functions never see it.
## Examples
### Basic API Workflow
```python
import asyncio
from ohtell import task, add_event
@task(name="API Endpoint")
async def api_handler(request_id: str):
    """Simulate an API endpoint."""
    print(f"Processing request {request_id}")
    add_event("request_received", {"request_id": request_id})

    result = await process_data(request_id, data_size=100)

    add_event("request_completed", {"request_id": request_id, "result_size": len(result)})
    return result

@task(name="Data Processing")
async def process_data(request_id: str, data_size: int):
    """Simulate data processing."""
    print(f"Processing {data_size} items for {request_id}")

    processed = []
    for i in range(data_size):
        item_result = await transform_item(f"item_{i}")
        processed.append(item_result)

    print(f"Processed {len(processed)} items")
    return processed

@task(name="Transform Item")
async def transform_item(item: str):
    """Simulate item transformation."""
    await asyncio.sleep(0.001)  # Simulate work
    return f"transformed_{item}"
# Execute the workflow
result = asyncio.run(api_handler("test_request_123"))
```
### Error Handling
```python
import asyncio
from ohtell import task
@task(name="Failing Task")
async def failing_task(should_fail: bool = True):
    """Task that can fail."""
    print("Starting task...")
    if should_fail:
        raise ValueError("Simulated failure")
    return "success"

@task(name="Error Handler")
async def error_handler():
    """Task that handles errors."""
    results = []

    # Try successful task
    try:
        success_result = await failing_task(should_fail=False)
        results.append(("success", success_result))
    except Exception as e:
        results.append(("error", str(e)))

    # Try failing task
    try:
        fail_result = await failing_task(should_fail=True)
        results.append(("success", fail_result))
    except Exception as e:
        results.append(("error", str(e)))

    return results
results = asyncio.run(error_handler())
# Results: [('success', 'success'), ('error', 'Simulated failure')]
```
### Dynamic Task Names
```python
import asyncio
from ohtell import task
@task(
    name="backup-{operation}-{priority}",
    description="Dynamic task name example"
)
async def scheduled_backup(operation: str, priority: str, size_mb: int):
    """Task with dynamic naming based on parameters."""
    print(f"Starting {operation} backup with {priority} priority")
    print(f"Backing up {size_mb}MB of data")

    # Simulate backup time proportional to size
    backup_time = size_mb * 0.0001  # 0.1ms per MB
    await asyncio.sleep(backup_time)

    print(f"Backup completed: {operation}")
    return {
        "operation": operation,
        "priority": priority,
        "size_mb": size_mb,
        "success": True
    }
# Creates spans named: "backup-database-high", "backup-files-medium"
result1 = asyncio.run(scheduled_backup("database", "high", 1000))
result2 = asyncio.run(scheduled_backup("files", "medium", 500))
```
### Nested Span Hierarchy
```python
import asyncio
from ohtell import task
@task(name="Level 1", description="Top level task")
async def level_1():
    """Top level function."""
    print("Entering level 1")
    result = await level_2()
    print("Exiting level 1")
    return f"level_1({result})"

@task(name="Level 2", description="Second level task")
async def level_2():
    """Second level function."""
    print("Entering level 2")
    result = await level_3()
    print("Exiting level 2")
    return f"level_2({result})"

@task(name="Level 3", description="Third level task")
async def level_3():
    """Third level function."""
    print("Entering level 3")
    await asyncio.sleep(0.001)  # Simulate work
    print("Exiting level 3")
    return "level_3()"
# Creates nested spans: Level 1 > Level 2 > Level 3
result = asyncio.run(level_1())
# Result: "level_1(level_2(level_3()))"
```
## What Gets Captured
Each decorated function automatically captures:
- **Traces**: Span hierarchy with timing, status, and relationships
- **Logs**: Print statements and structured logs, correlated with traces
- **Events**: Custom events with `add_event()` function
- **Errors**: Automatic exception recording with full tracebacks
- **I/O**: Function arguments and return values (safely serialized)
## Manual Metrics
ohtell provides a simple metrics API for custom counters. No automatic metrics are collected - you control what gets measured:
```python
from ohtell import metric
# Simple fluent interface - creates counters automatically
metric('api_calls').add(1)
metric('dice_rolls').add(1, {'roll_value': 6})
metric('errors').add(1, {'error_type': 'timeout'})
# With custom descriptions
metric('user_signups', 'Number of new user registrations').add(1)
```
**Features:**
- **Auto-creation**: Counters created on first use, cached for reuse
- **Fluent interface**: Returns OpenTelemetry counter with `.add()` method
- **Attributes support**: Add labels/dimensions to metrics
- **Custom descriptions**: Optional description parameter
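The auto-creation and caching behavior can be pictured as a dict keyed by metric name. This toy counter (a stand-in, not ohtell's real OpenTelemetry-backed one) shows the pattern:

```python
class _Counter:
    """Toy stand-in for an OpenTelemetry counter."""
    def __init__(self, name, description=""):
        self.name, self.description, self.total = name, description, 0

    def add(self, amount, attributes=None):
        self.total += amount

_counters = {}

def metric(name, description=""):
    """Create the counter on first use, then return the cached instance."""
    if name not in _counters:
        _counters[name] = _Counter(name, description)
    return _counters[name]

metric("api_calls").add(1)
metric("api_calls").add(2)
print(metric("api_calls").total)  # 3
```

Because the cache returns the same object for the same name, repeated `metric('api_calls')` calls accumulate into one counter rather than creating duplicates.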
## Adding Events and Span Data
### Custom Events
Add structured events to your traces with the `add_event` function:
```python
import asyncio
import time
from ohtell import task, add_event
@task(name="User Registration")
async def register_user(email: str, plan: str):
    """Register a new user with event tracking."""
    # Add event at the start
    add_event("registration_started", {
        "email": email,
        "plan": plan,
        "timestamp": time.time()
    })

    # Simulate validation
    if "@" not in email:
        add_event("validation_failed", {"reason": "invalid_email"})
        raise ValueError("Invalid email format")

    # Add event for successful validation
    add_event("validation_passed", {"email_domain": email.split("@")[1]})

    # Simulate database save
    await asyncio.sleep(0.1)

    # Add event for completion
    add_event("registration_completed", {
        "user_id": f"user_{hash(email) % 10000}",
        "plan": plan,
        "success": True
    })

    return {"user_id": f"user_{hash(email) % 10000}", "status": "active"}
# Run it
result = asyncio.run(register_user("user@example.com", "premium"))
```
### Span Attributes vs Events
- **Events** (`add_event`): Time-stamped log entries within a span. Use for discrete occurrences.
- **Attributes**: Key-value metadata about the entire span. Automatically captured from function arguments and return values.
```python
import asyncio
from ohtell import task, add_event
@task(name="Data Processing Pipeline")
async def process_data(dataset_id: str, batch_size: int = 100):
    """Example showing events vs automatic attributes."""
    # Function arguments become span attributes automatically:
    # - dataset_id: "customers_2024"
    # - batch_size: 100

    # Events capture specific moments in time
    add_event("pipeline_started", {
        "dataset_id": dataset_id,
        "batch_size": batch_size
    })

    processed_count = 0
    for batch_num in range(3):  # Simulate 3 batches
        add_event("batch_started", {"batch_number": batch_num + 1})

        await asyncio.sleep(0.01)  # Simulate processing
        batch_processed = min(batch_size, 250 - processed_count)
        processed_count += batch_processed

        add_event("batch_completed", {
            "batch_number": batch_num + 1,
            "records_processed": batch_processed,
            "total_processed": processed_count
        })

    add_event("pipeline_completed", {
        "total_records": processed_count,
        "batches_completed": 3
    })

    # Return value becomes a span attribute automatically
    return {"processed_records": processed_count, "status": "success"}
result = asyncio.run(process_data("customers_2024", batch_size=150))
```
### Event Best Practices
1. **Use descriptive names**: `user_login_attempt`, `payment_processed`, `cache_miss`
2. **Include relevant context**: user IDs, request IDs, error codes
3. **Add timestamps when relevant**: Custom timestamps for external events
4. **Keep attributes simple**: Strings, numbers, booleans work best
```python
# Good event examples
add_event("cache_miss", {"key": "user_123", "cache_type": "redis"})
add_event("api_call_started", {"endpoint": "/users", "method": "GET"})
add_event("validation_error", {"field": "email", "error": "format_invalid"})
# Avoid complex objects in events
add_event("user_data", {"user": user_object}) # Bad - complex object
add_event("user_registered", {"user_id": user.id}) # Good - simple ID
```
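The "keep attributes simple" rule can also be enforced mechanically: pass through strings, numbers, and booleans, and stringify everything else. A hedged sketch of such a sanitizer (not part of ohtell's API):

```python
def sanitize_attributes(attrs: dict) -> dict:
    """Keep str/int/float/bool values as-is; stringify everything else."""
    clean = {}
    for key, value in attrs.items():
        if isinstance(value, (str, int, float, bool)):
            clean[key] = value
        else:
            clean[key] = repr(value)  # complex objects become a readable string
    return clean

print(sanitize_attributes({"user_id": 123, "tags": ["a", "b"]}))
# {'user_id': 123, 'tags': "['a', 'b']"}
```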
## Error Handling and Span Status
### Automatic Exception Handling
ohtell automatically marks spans as failed when exceptions occur and captures full error details:
```python
import asyncio
from ohtell import task, add_event
@task(name="Database Operation")
async def save_user(user_id: str, email: str):
"""Function that may fail with automatic error handling."""
add_event("save_started", {"user_id": user_id})
# Simulate validation
if not email or "@" not in email:
# Exception automatically marks span as FAILED
# Records full traceback and error details
raise ValueError(f"Invalid email format: {email}")
# Simulate database error
if user_id == "user_999":
raise ConnectionError("Database connection failed")
add_event("save_completed", {"user_id": user_id})
return {"status": "saved", "user_id": user_id}
# Test successful case
try:
result = asyncio.run(save_user("user_123", "valid@example.com"))
print(f"Success: {result}") # Span marked as OK
except Exception as e:
print(f"Failed: {e}")
# Test failed case
try:
result = asyncio.run(save_user("user_999", "test@example.com"))
except Exception as e:
print(f"Failed: {e}") # Span marked as ERROR with full traceback
```
### What Gets Captured on Errors
When an exception occurs, ohtell automatically captures:
- **Span Status**: Set to `ERROR` with error message
- **Exception Recording**: Full exception details using OpenTelemetry's `record_exception()`
- **Error Traceback**: Complete stack trace in `error.traceback` attribute
- **Error Type**: Exception class name in metrics and logs
- **Error Message**: Exception message in span status
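The `error.traceback` attribute is just the formatted stack trace. Capturing the same details yourself in plain Python (a sketch of the idea, independent of ohtell) looks like this:

```python
import traceback

def capture_error_details(exc: BaseException) -> dict:
    """Build the kind of attributes an error span would carry."""
    return {
        "error.type": type(exc).__name__,
        "error.message": str(exc),
        "error.traceback": "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)
        ),
    }

try:
    raise ValueError("Invalid email format: bad@")
except ValueError as e:
    details = capture_error_details(e)

print(details["error.type"])  # ValueError
```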
### Error Propagation
Errors are automatically propagated up the span hierarchy:
```python
import asyncio
from ohtell import task, add_event
@task(name="Level 1 - API Handler")
async def api_handler(user_id: str):
    """Top level handler - will be marked as ERROR if any child fails."""
    add_event("api_call_started", {"user_id": user_id})

    try:
        result = await business_logic(user_id)
        add_event("api_call_completed", {"user_id": user_id})
        return result
    except Exception as e:
        # Even though we catch here, the span is already marked as ERROR
        add_event("api_call_failed", {"user_id": user_id, "error": str(e)})
        raise  # Re-raise to maintain error status

@task(name="Level 2 - Business Logic")
async def business_logic(user_id: str):
    """Middle layer - error here affects parent span."""
    add_event("processing_started", {"user_id": user_id})
    result = await database_save(user_id)
    return result

@task(name="Level 3 - Database Save")
async def database_save(user_id: str):
    """Lowest level - error originates here."""
    add_event("db_save_started", {"user_id": user_id})

    if user_id == "invalid":
        # This error marks ALL parent spans as ERROR too
        raise ValueError("Invalid user ID")

    return {"saved": user_id}

# This creates an error hierarchy:
# Level 1 - API Handler (ERROR due to child failure)
# └── Level 2 - Business Logic (ERROR due to child failure)
#     └── Level 3 - Database Save (ERROR - original source)

try:
    result = asyncio.run(api_handler("invalid"))
except ValueError as e:
    print(f"Caught: {e}")
```
### Custom Error Context
Add custom error context with events before exceptions:
```python
import asyncio
from ohtell import task, add_event
@task(name="File Processor")
async def process_file(file_path: str, max_size_mb: int = 10):
    """Process file with detailed error context."""
    add_event("processing_started", {
        "file_path": file_path,
        "max_size_mb": max_size_mb
    })

    # Check file existence
    import os
    if not os.path.exists(file_path):
        add_event("file_not_found", {"file_path": file_path})
        raise FileNotFoundError(f"File not found: {file_path}")

    # Check file size
    file_size_mb = os.path.getsize(file_path) / (1024 * 1024)
    add_event("file_size_checked", {
        "file_path": file_path,
        "size_mb": file_size_mb,
        "max_allowed_mb": max_size_mb
    })

    if file_size_mb > max_size_mb:
        add_event("file_too_large", {
            "file_path": file_path,
            "size_mb": file_size_mb,
            "max_allowed_mb": max_size_mb,
            "over_limit_by_mb": file_size_mb - max_size_mb
        })
        raise ValueError(f"File too large: {file_size_mb}MB > {max_size_mb}MB")

    add_event("processing_completed", {"file_path": file_path})
    return {"processed": file_path, "size_mb": file_size_mb}

# Test error cases with rich context
try:
    result = asyncio.run(process_file("/nonexistent/file.txt"))
except FileNotFoundError as e:
    print(f"File error: {e}")

try:
    result = asyncio.run(process_file("large_file.txt", max_size_mb=1))
except ValueError as e:
    print(f"Size error: {e}")
```
All error information is automatically captured in traces, metrics, and logs without any additional code.
## Export Control
```python
from ohtell import force_flush, trigger_export, shutdown
# Wait for all data to be exported (blocks)
force_flush()
# Trigger export in background (non-blocking)
trigger_export()
# Manual shutdown
shutdown()
```
## Configuration Options
### Environment Variables
**Core OTLP Configuration:**
| Variable | Default | Config YAML Key | Description |
|----------|---------|-----------------|-------------|
| `OTEL_EXPORTER_OTLP_ENDPOINT` | *(none)* | `endpoint` | OTLP endpoint URL. If not set, outputs to console |
| `OTEL_EXPORTER_OTLP_HEADERS` | *(none)* | `headers` | Authentication headers |
| `OTEL_EXPORTER_OTLP_PROTOCOL` | `http/protobuf` | `protocol` | Protocol: `grpc` or `http/protobuf` |
| `OTEL_SERVICE_NAME` | *(script filename)* | *(via resource_attributes)* | Service name (auto-detected from main script) |
| `OTEL_RESOURCE_ATTRIBUTES` | *(none)* | `resource_attributes` | Resource attributes (comma-separated key=value) |
**Enhanced Service Configuration:**
| Variable | Default | Description |
|----------|---------|-------------|
| `ENV` | `dev` | Sets `deployment.environment` resource attribute |
| `NAMESPACE` | *(none)* | Sets `service.namespace` resource attribute |
**Export Configuration:**
| Variable | Default | Config YAML Key | Description |
|----------|---------|-----------------|-------------|
| `OTEL_SPAN_EXPORT_INTERVAL_MS` | `500` | `span_export_interval_ms` | Trace export interval (milliseconds) |
| `OTEL_LOG_EXPORT_INTERVAL_MS` | `500` | `log_export_interval_ms` | Log export interval (milliseconds) |
| `OTEL_METRIC_EXPORT_INTERVAL_MS` | `30000` | `metric_export_interval_ms` | Metric export interval (milliseconds) |
| `OTEL_MAX_EXPORT_BATCH_SIZE` | `50` | `max_export_batch_size` | Maximum batch size for exports |
| `OTEL_MAX_QUEUE_SIZE` | `512` | `max_queue_size` | Maximum queue size |
**ohtell-Specific Configuration:**
| Variable | Default | Config YAML Key | Description |
|----------|---------|-----------------|-------------|
| `OTEL_WRAPPER_SKIP_CLEANUP` | `true` | `skip_cleanup` | Skip automatic cleanup on process exit |
**Environment variables always take precedence over config.yaml settings.**
### Automatic Service Attributes
ohtell automatically captures these service attributes:
| Attribute | Source | Description |
|-----------|--------|-------------|
| `service.name` | `OTEL_SERVICE_NAME` or `init()` parameter | Service name identifier (auto-detected from script filename) |
| `service.namespace` | `NAMESPACE` env var or `init()` parameter | Service namespace |
| `service.version` | `pyproject.toml` or `init()` parameter | Version from project metadata (auto-detected) |
| `service.hostname` | System hostname | Automatically detected server/container hostname |
| `deployment.environment` | `ENV` env var or `init()` parameter | Environment (default: "dev") |
**Auto-Detection Features**: ohtell automatically detects service information:
**Service Name**: Uses the filename of your main Python script:
```bash
python api_server.py # → service.name = "api_server"
python -m my_app.main # → service.name = "main"
python /path/to/worker.py # → service.name = "worker"
```
**Service Version**: Reads from your `pyproject.toml` file:
```toml
[project]
name = "my-app"
version = "1.2.3" # ← Automatically detected and used as service.version
```
**Configuration Priority** for each attribute:
1. `init()` function parameters (highest priority)
2. Environment variables (`ENV`, `NAMESPACE`, `OTEL_SERVICE_NAME`)
3. `pyproject.toml` (for version only)
4. Defaults (lowest priority)
### Config File Format (config.yaml)
```yaml
otel:
  # Core OTLP Configuration
  endpoint: "http://localhost:4317"               # OTLP endpoint (omit for console output)
  console: true                                   # Force console output (overrides endpoint)
  headers: "Authorization=Bearer token123"        # Auth headers
  protocol: "grpc"                                # grpc or http/protobuf
  resource_attributes: "key1=value1,key2=value2"  # Resource attributes

  # Export Intervals (milliseconds)
  span_export_interval_ms: 500      # Trace export interval (0.5 seconds)
  log_export_interval_ms: 500       # Log export interval (0.5 seconds)
  metric_export_interval_ms: 30000  # Metric export interval (30 seconds)

  # Batch Configuration
  max_export_batch_size: 50  # Maximum batch size for exports
  max_queue_size: 512        # Maximum queue size

  # Cleanup Configuration
  skip_cleanup: true  # Skip automatic cleanup on exit
```
The config file is automatically loaded from the project root if it exists. **Environment variables take precedence over config file values.**
## Testing
Run the comprehensive test suite:
```bash
# Run all tests
pytest tests/
# Run specific test categories
pytest tests/test_integration.py # Integration tests with real examples
pytest tests/test_config.py # Configuration tests
pytest tests/test_metrics.py # Metrics functionality tests
```
The integration tests in `tests/test_integration.py` contain realistic examples that demonstrate all features working together.
message.get(\"__otel_context\")\n return await process_data(message[\"data\"], message[\"user_id\"], __otel_context=context)\n\n@task(name=\"Process Data\")\nasync def process_data(data, user_id):\n return f\"processed {data} for {user_id}\"\n\n# Usage\nasync def main():\n # Producer creates message with trace context\n message = await send_to_queue(\"user_signup\")\n \n # Consumer processes message - maintains same trace_id as producer\n result = await process_queue_message(message)\n print(result) # \"processed user_signup for 123\"\n\nasyncio.run(main())\n```\n\n**Result**: Producer and process_data spans share the same trace_id, even though consumer has its own trace. The message payload bridges the trace across the async boundary.\n\n**Result**: Client and server spans share the same trace_id. The `__otel_context` parameter is consumed by `@task` - your functions never see it.\n\n## Examples\n\n### Basic API Workflow\n\n```python\nimport asyncio\nfrom ohtell import task, add_event\n\n@task(name=\"API Endpoint\")\nasync def api_handler(request_id: str):\n \"\"\"Simulate an API endpoint.\"\"\"\n print(f\"Processing request {request_id}\")\n add_event(\"request_received\", {\"request_id\": request_id})\n \n result = await process_data(request_id, data_size=100)\n \n add_event(\"request_completed\", {\"request_id\": request_id, \"result_size\": len(result)})\n return result\n\n@task(name=\"Data Processing\")\nasync def process_data(request_id: str, data_size: int):\n \"\"\"Simulate data processing.\"\"\"\n print(f\"Processing {data_size} items for {request_id}\")\n \n processed = []\n for i in range(data_size):\n item_result = await transform_item(f\"item_{i}\")\n processed.append(item_result)\n \n print(f\"Processed {len(processed)} items\")\n return processed\n\n@task(name=\"Transform Item\")\nasync def transform_item(item: str):\n \"\"\"Simulate item transformation.\"\"\"\n await asyncio.sleep(0.001) # Simulate work\n return f\"transformed_{item}\"\n\n# Execute 
the workflow\nresult = asyncio.run(api_handler(\"test_request_123\"))\n```\n\n### Error Handling\n\n```python\nimport asyncio\nfrom ohtell import task\n\n@task(name=\"Failing Task\")\nasync def failing_task(should_fail: bool = True):\n \"\"\"Task that can fail.\"\"\"\n print(\"Starting task...\")\n \n if should_fail:\n raise ValueError(\"Simulated failure\")\n \n return \"success\"\n\n@task(name=\"Error Handler\")\nasync def error_handler():\n \"\"\"Task that handles errors.\"\"\"\n results = []\n \n # Try successful task\n try:\n success_result = await failing_task(should_fail=False)\n results.append((\"success\", success_result))\n except Exception as e:\n results.append((\"error\", str(e)))\n \n # Try failing task \n try:\n fail_result = await failing_task(should_fail=True)\n results.append((\"success\", fail_result))\n except Exception as e:\n results.append((\"error\", str(e)))\n \n return results\n\nresults = asyncio.run(error_handler())\n# Results: [('success', 'success'), ('error', 'Simulated failure')]\n```\n\n### Dynamic Task Names\n\n```python\nimport asyncio\nfrom ohtell import task\n\n@task(\n name=\"backup-{operation}-{priority}\",\n description=\"Dynamic task name example\"\n)\nasync def scheduled_backup(operation: str, priority: str, size_mb: int):\n \"\"\"Task with dynamic naming based on parameters.\"\"\"\n print(f\"Starting {operation} backup with {priority} priority\")\n print(f\"Backing up {size_mb}MB of data\")\n \n # Simulate backup time proportional to size\n backup_time = size_mb * 0.0001 # 0.1ms per MB\n await asyncio.sleep(backup_time)\n \n print(f\"Backup completed: {operation}\")\n return {\n \"operation\": operation,\n \"priority\": priority, \n \"size_mb\": size_mb,\n \"success\": True\n }\n\n# Creates spans named: \"backup-database-high\", \"backup-files-medium\"\nresult1 = asyncio.run(scheduled_backup(\"database\", \"high\", 1000))\nresult2 = asyncio.run(scheduled_backup(\"files\", \"medium\", 500))\n```\n\n### Nested Span 
Hierarchy\n\n```python\nimport asyncio\nfrom ohtell import task\n\n@task(name=\"Level 1\", description=\"Top level task\")\nasync def level_1():\n \"\"\"Top level function.\"\"\"\n print(\"Entering level 1\")\n result = await level_2()\n print(\"Exiting level 1\")\n return f\"level_1({result})\"\n\n@task(name=\"Level 2\", description=\"Second level task\")\nasync def level_2():\n \"\"\"Second level function.\"\"\"\n print(\"Entering level 2\") \n result = await level_3()\n print(\"Exiting level 2\")\n return f\"level_2({result})\"\n\n@task(name=\"Level 3\", description=\"Third level task\")\nasync def level_3():\n \"\"\"Third level function.\"\"\"\n print(\"Entering level 3\")\n await asyncio.sleep(0.001) # Simulate work\n print(\"Exiting level 3\")\n return \"level_3()\"\n\n# Creates nested spans: Level 1 > Level 2 > Level 3\nresult = asyncio.run(level_1())\n# Result: \"level_1(level_2(level_3()))\"\n```\n\n## What Gets Captured\n\nEach decorated function automatically captures:\n\n- **Traces**: Span hierarchy with timing, status, and relationships\n- **Logs**: Print statements and structured logs, correlated with traces\n- **Events**: Custom events with `add_event()` function\n- **Errors**: Automatic exception recording with full tracebacks\n- **I/O**: Function arguments and return values (safely serialized)\n\n## Manual Metrics\n\nohtell provides a simple metrics API for custom counters. 
No automatic metrics are collected - you control what gets measured:\n\n```python\nfrom ohtell import metric\n\n# Simple fluent interface - creates counters automatically\nmetric('api_calls').add(1)\nmetric('dice_rolls').add(1, {'roll_value': 6})\nmetric('errors').add(1, {'error_type': 'timeout'})\n\n# With custom descriptions\nmetric('user_signups', 'Number of new user registrations').add(1)\n```\n\n**Features:**\n- **Auto-creation**: Counters created on first use, cached for reuse\n- **Fluent interface**: Returns OpenTelemetry counter with `.add()` method\n- **Attributes support**: Add labels/dimensions to metrics\n- **Custom descriptions**: Optional description parameter\n\n## Adding Events and Span Data\n\n### Custom Events\n\nAdd structured events to your traces with the `add_event` function:\n\n```python\nimport asyncio\nimport time\nfrom ohtell import task, add_event\n\n@task(name=\"User Registration\")\nasync def register_user(email: str, plan: str):\n \"\"\"Register a new user with event tracking.\"\"\"\n \n # Add event at the start\n add_event(\"registration_started\", {\n \"email\": email,\n \"plan\": plan,\n \"timestamp\": time.time()\n })\n \n # Simulate validation\n if \"@\" not in email:\n add_event(\"validation_failed\", {\"reason\": \"invalid_email\"})\n raise ValueError(\"Invalid email format\")\n \n # Add event for successful validation\n add_event(\"validation_passed\", {\"email_domain\": email.split(\"@\")[1]})\n \n # Simulate database save\n await asyncio.sleep(0.1)\n \n # Add event for completion\n add_event(\"registration_completed\", {\n \"user_id\": f\"user_{hash(email) % 10000}\",\n \"plan\": plan,\n \"success\": True\n })\n \n return {\"user_id\": f\"user_{hash(email) % 10000}\", \"status\": \"active\"}\n\n# Run it\nresult = asyncio.run(register_user(\"user@example.com\", \"premium\"))\n```\n\n### Span Attributes vs Events\n\n- **Events** (`add_event`): Time-stamped log entries within a span. 
Use for discrete occurrences.\n- **Attributes**: Key-value metadata about the entire span. Automatically captured from function arguments and return values.\n\n```python\nimport asyncio\nfrom ohtell import task, add_event\n\n@task(name=\"Data Processing Pipeline\")\nasync def process_data(dataset_id: str, batch_size: int = 100):\n \"\"\"Example showing events vs automatic attributes.\"\"\"\n \n # Function arguments become span attributes automatically:\n # - dataset_id: \"customers_2024\"\n # - batch_size: 100\n \n # Events capture specific moments in time\n add_event(\"pipeline_started\", {\n \"dataset_id\": dataset_id,\n \"batch_size\": batch_size\n })\n \n processed_count = 0\n for batch_num in range(3): # Simulate 3 batches\n add_event(\"batch_started\", {\"batch_number\": batch_num + 1})\n \n await asyncio.sleep(0.01) # Simulate processing\n batch_processed = min(batch_size, 250 - processed_count)\n processed_count += batch_processed\n \n add_event(\"batch_completed\", {\n \"batch_number\": batch_num + 1,\n \"records_processed\": batch_processed,\n \"total_processed\": processed_count\n })\n \n add_event(\"pipeline_completed\", {\n \"total_records\": processed_count,\n \"batches_completed\": 3\n })\n \n # Return value becomes a span attribute automatically\n return {\"processed_records\": processed_count, \"status\": \"success\"}\n\nresult = asyncio.run(process_data(\"customers_2024\", batch_size=150))\n```\n\n### Event Best Practices\n\n1. **Use descriptive names**: `user_login_attempt`, `payment_processed`, `cache_miss`\n2. **Include relevant context**: user IDs, request IDs, error codes\n3. **Add timestamps when relevant**: Custom timestamps for external events\n4. 
**Keep attributes simple**: Strings, numbers, booleans work best\n\n```python\n# Good event examples\nadd_event(\"cache_miss\", {\"key\": \"user_123\", \"cache_type\": \"redis\"})\nadd_event(\"api_call_started\", {\"endpoint\": \"/users\", \"method\": \"GET\"})\nadd_event(\"validation_error\", {\"field\": \"email\", \"error\": \"format_invalid\"})\n\n# Avoid complex objects in events\nadd_event(\"user_data\", {\"user\": user_object}) # Bad - complex object\nadd_event(\"user_registered\", {\"user_id\": user.id}) # Good - simple ID\n```\n\n## Error Handling and Span Status\n\n### Automatic Exception Handling\n\nohtell automatically marks spans as failed when exceptions occur and captures full error details:\n\n```python\nimport asyncio\nfrom ohtell import task, add_event\n\n@task(name=\"Database Operation\")\nasync def save_user(user_id: str, email: str):\n \"\"\"Function that may fail with automatic error handling.\"\"\"\n \n add_event(\"save_started\", {\"user_id\": user_id})\n \n # Simulate validation\n if not email or \"@\" not in email:\n # Exception automatically marks span as FAILED\n # Records full traceback and error details\n raise ValueError(f\"Invalid email format: {email}\")\n \n # Simulate database error\n if user_id == \"user_999\":\n raise ConnectionError(\"Database connection failed\")\n \n add_event(\"save_completed\", {\"user_id\": user_id})\n return {\"status\": \"saved\", \"user_id\": user_id}\n\n# Test successful case\ntry:\n result = asyncio.run(save_user(\"user_123\", \"valid@example.com\"))\n print(f\"Success: {result}\") # Span marked as OK\nexcept Exception as e:\n print(f\"Failed: {e}\")\n\n# Test failed case \ntry:\n result = asyncio.run(save_user(\"user_999\", \"test@example.com\"))\nexcept Exception as e:\n print(f\"Failed: {e}\") # Span marked as ERROR with full traceback\n```\n\n### What Gets Captured on Errors\n\nWhen an exception occurs, ohtell automatically captures:\n\n- **Span Status**: Set to `ERROR` with error message\n- 
**Exception Recording**: Full exception details using OpenTelemetry's `record_exception()`\n- **Error Traceback**: Complete stack trace in `error.traceback` attribute\n- **Error Type**: Exception class name in metrics and logs\n- **Error Message**: Exception message in span status\n\n### Error Propagation\n\nErrors are automatically propagated up the span hierarchy:\n\n```python\nimport asyncio\nfrom ohtell import task, add_event\n\n@task(name=\"Level 1 - API Handler\") \nasync def api_handler(user_id: str):\n \"\"\"Top level handler - will be marked as ERROR if any child fails.\"\"\"\n add_event(\"api_call_started\", {\"user_id\": user_id})\n \n try:\n result = await business_logic(user_id)\n add_event(\"api_call_completed\", {\"user_id\": user_id})\n return result\n except Exception as e:\n # Even though we catch here, the span is already marked as ERROR\n add_event(\"api_call_failed\", {\"user_id\": user_id, \"error\": str(e)})\n raise # Re-raise to maintain error status\n\n@task(name=\"Level 2 - Business Logic\")\nasync def business_logic(user_id: str):\n \"\"\"Middle layer - error here affects parent span.\"\"\"\n add_event(\"processing_started\", {\"user_id\": user_id})\n \n result = await database_save(user_id)\n return result\n\n@task(name=\"Level 3 - Database Save\") \nasync def database_save(user_id: str):\n \"\"\"Lowest level - error originates here.\"\"\"\n add_event(\"db_save_started\", {\"user_id\": user_id})\n \n if user_id == \"invalid\":\n # This error marks ALL parent spans as ERROR too\n raise ValueError(\"Invalid user ID\")\n \n return {\"saved\": user_id}\n\n# This creates an error hierarchy:\n# Level 1 - API Handler (ERROR due to child failure)\n# \u2514\u2500\u2500 Level 2 - Business Logic (ERROR due to child failure) \n# \u2514\u2500\u2500 Level 3 - Database Save (ERROR - original source)\ntry:\n result = asyncio.run(api_handler(\"invalid\"))\nexcept ValueError as e:\n print(f\"Caught: {e}\")\n```\n\n### Custom Error Context\n\nAdd custom 
error context with events before exceptions:\n\n```python\nimport asyncio\nfrom ohtell import task, add_event\n\n@task(name=\"File Processor\")\nasync def process_file(file_path: str, max_size_mb: int = 10):\n \"\"\"Process file with detailed error context.\"\"\"\n \n add_event(\"processing_started\", {\n \"file_path\": file_path,\n \"max_size_mb\": max_size_mb\n })\n \n # Check file existence\n import os\n if not os.path.exists(file_path):\n add_event(\"file_not_found\", {\"file_path\": file_path})\n raise FileNotFoundError(f\"File not found: {file_path}\")\n \n # Check file size\n file_size_mb = os.path.getsize(file_path) / (1024 * 1024)\n add_event(\"file_size_checked\", {\n \"file_path\": file_path, \n \"size_mb\": file_size_mb,\n \"max_allowed_mb\": max_size_mb\n })\n \n if file_size_mb > max_size_mb:\n add_event(\"file_too_large\", {\n \"file_path\": file_path,\n \"size_mb\": file_size_mb,\n \"max_allowed_mb\": max_size_mb,\n \"over_limit_by_mb\": file_size_mb - max_size_mb\n })\n raise ValueError(f\"File too large: {file_size_mb}MB > {max_size_mb}MB\")\n \n add_event(\"processing_completed\", {\"file_path\": file_path})\n return {\"processed\": file_path, \"size_mb\": file_size_mb}\n\n# Test error cases with rich context\ntry:\n result = asyncio.run(process_file(\"/nonexistent/file.txt\"))\nexcept FileNotFoundError as e:\n print(f\"File error: {e}\")\n\ntry: \n result = asyncio.run(process_file(\"large_file.txt\", max_size_mb=1))\nexcept ValueError as e:\n print(f\"Size error: {e}\")\n```\n\nAll error information is automatically captured in traces, metrics, and logs without any additional code.\n\n## Export Control\n\n```python\nfrom ohtell import force_flush, trigger_export, shutdown\n\n# Wait for all data to be exported (blocks)\nforce_flush()\n\n# Trigger export in background (non-blocking)\ntrigger_export()\n\n# Manual shutdown\nshutdown()\n```\n\n## Configuration Options\n\n### Environment Variables\n\n**Core OTLP Configuration:**\n| Variable | Default 
| Config YAML Key | Description |\n|----------|---------|-----------------|-------------|\n| `OTEL_EXPORTER_OTLP_ENDPOINT` | *(none)* | `endpoint` | OTLP endpoint URL. If not set, outputs to console |\n| `OTEL_EXPORTER_OTLP_HEADERS` | *(none)* | `headers` | Authentication headers |\n| `OTEL_EXPORTER_OTLP_PROTOCOL` | `http/protobuf` | `protocol` | Protocol: `grpc` or `http/protobuf` |\n| `OTEL_SERVICE_NAME` | *(script filename)* | *(via resource_attributes)* | Service name (auto-detected from main script) |\n| `OTEL_RESOURCE_ATTRIBUTES` | *(none)* | `resource_attributes` | Resource attributes (comma-separated key=value) |\n\n**Enhanced Service Configuration:**\n| Variable | Default | Description |\n|----------|---------|-------------|\n| `ENV` | `dev` | Sets `deployment.environment` resource attribute |\n| `NAMESPACE` | *(none)* | Sets `service.namespace` resource attribute |\n\n**Export Configuration:**\n| Variable | Default | Config YAML Key | Description |\n|----------|---------|-----------------|-------------|\n| `OTEL_SPAN_EXPORT_INTERVAL_MS` | `500` | `span_export_interval_ms` | Trace export interval (milliseconds) |\n| `OTEL_LOG_EXPORT_INTERVAL_MS` | `500` | `log_export_interval_ms` | Log export interval (milliseconds) |\n| `OTEL_METRIC_EXPORT_INTERVAL_MS` | `30000` | `metric_export_interval_ms` | Metric export interval (milliseconds) |\n| `OTEL_MAX_EXPORT_BATCH_SIZE` | `50` | `max_export_batch_size` | Maximum batch size for exports |\n| `OTEL_MAX_QUEUE_SIZE` | `512` | `max_queue_size` | Maximum queue size |\n\n**ohtell-Specific Configuration:**\n| Variable | Default | Config YAML Key | Description |\n|----------|---------|-----------------|-------------|\n| `OTEL_WRAPPER_SKIP_CLEANUP` | `true` | `skip_cleanup` | Skip automatic cleanup on process exit |\n\n**Environment variables always take precedence over config.yaml settings.**\n\n### Automatic Service Attributes\n\nohtell automatically captures these service attributes:\n\n| Attribute | Source | 
Description |\n|-----------|--------|-------------|\n| `service.name` | `OTEL_SERVICE_NAME` or `init()` parameter | Service name identifier (auto-detected from script filename) |\n| `service.namespace` | `NAMESPACE` env var or `init()` parameter | Service namespace | \n| `service.version` | `pyproject.toml` or `init()` parameter | Version from project metadata (auto-detected) |\n| `service.hostname` | System hostname | Automatically detected server/container hostname |\n| `deployment.environment` | `ENV` env var or `init()` parameter | Environment (default: \"dev\") |\n\n**Auto-Detection Features**: ohtell automatically detects service information:\n\n**Service Name**: Uses the filename of your main Python script:\n```bash\npython api_server.py # \u2192 service.name = \"api_server\" \npython -m my_app.main # \u2192 service.name = \"main\"\npython /path/to/worker.py # \u2192 service.name = \"worker\"\n```\n\n**Service Version**: Reads from your `pyproject.toml` file:\n```toml\n[project]\nname = \"my-app\"\nversion = \"1.2.3\" # \u2190 Automatically detected and used as service.version\n```\n\n**Configuration Priority** for each attribute:\n1. `init()` function parameters (highest priority)\n2. Environment variables (`ENV`, `NAMESPACE`, `OTEL_SERVICE_NAME`)\n3. `pyproject.toml` (for version only)\n4. 
Defaults (lowest priority)\n\n### Config File Format (config.yaml)\n\n```yaml\notel:\n # Core OTLP Configuration\n endpoint: \"http://localhost:4317\" # OTLP endpoint (omit for console output)\n console: true # Force console output (overrides endpoint)\n headers: \"Authorization=Bearer token123\" # Auth headers \n protocol: \"grpc\" # grpc or http/protobuf\n resource_attributes: \"key1=value1,key2=value2\" # Resource attributes\n \n # Export Intervals (milliseconds)\n span_export_interval_ms: 500 # Trace export interval (0.5 seconds)\n log_export_interval_ms: 500 # Log export interval (0.5 seconds) \n metric_export_interval_ms: 30000 # Metric export interval (30 seconds)\n \n # Batch Configuration\n max_export_batch_size: 50 # Maximum batch size for exports\n max_queue_size: 512 # Maximum queue size\n \n \n # Cleanup Configuration \n skip_cleanup: true # Skip automatic cleanup on exit\n```\n\nThe config file is automatically loaded from the project root if it exists. **Environment variables take precedence over config file values.**\n\n## Testing\n\nRun the comprehensive test suite:\n\n```bash\n# Run all tests\npytest tests/\n\n# Run specific test categories\npytest tests/test_integration.py # Integration tests with real examples\npytest tests/test_config.py # Configuration tests\npytest tests/test_metrics.py # Metrics functionality tests\n```\n\nThe integration tests in `tests/test_integration.py` contain realistic examples that demonstrate all features working together.\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "A simple, decorator-based OpenTelemetry wrapper for tracing Python functions",
"version": "0.2.19",
"project_urls": {
"Bug Tracker": "https://github.com/slavaganzin/ohtell/issues",
"Documentation": "https://github.com/slavaganzin/ohtell#readme",
"Homepage": "https://github.com/slavaganzin/ohtell",
"Repository": "https://github.com/slavaganzin/ohtell.git"
},
"split_keywords": [
"opentelemetry",
" tracing",
" observability",
" monitoring",
" decorator"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "7263a46420b895be660f51ac807763d0c95d0ac420b82374ba1a1c48455f69a2",
"md5": "94d2335328b52f22f5f365e058a76a59",
"sha256": "27150e90bfd83de4c84c93140fe8526dcaefb27bce32636294c172052e562c1f"
},
"downloads": -1,
"filename": "ohtell-0.2.19-py3-none-any.whl",
"has_sig": false,
"md5_digest": "94d2335328b52f22f5f365e058a76a59",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 27549,
"upload_time": "2025-09-04T11:48:47",
"upload_time_iso_8601": "2025-09-04T11:48:47.363705Z",
"url": "https://files.pythonhosted.org/packages/72/63/a46420b895be660f51ac807763d0c95d0ac420b82374ba1a1c48455f69a2/ohtell-0.2.19-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "6bd2994f263cef6bca7a1b815491b128e2f90f32e2e88babd90dafa7b99b2ebd",
"md5": "f1a2d8800881bf961ccd9b331c52360c",
"sha256": "6c3c4ef7bf009c387a398cb305e0acd58f07bdcd5be7b44c7f72bd5513f9bb85"
},
"downloads": -1,
"filename": "ohtell-0.2.19.tar.gz",
"has_sig": false,
"md5_digest": "f1a2d8800881bf961ccd9b331c52360c",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 49122,
"upload_time": "2025-09-04T11:48:48",
"upload_time_iso_8601": "2025-09-04T11:48:48.852278Z",
"url": "https://files.pythonhosted.org/packages/6b/d2/994f263cef6bca7a1b815491b128e2f90f32e2e88babd90dafa7b99b2ebd/ohtell-0.2.19.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-09-04 11:48:48",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "slavaganzin",
"github_project": "ohtell",
"github_not_found": true,
"lcname": "ohtell"
}