# Spyglass SDK
The Spyglass SDK provides client code for shipping telemetry data to the Spyglass AI platform.
## Installation
```bash
pip install spyglass-ai
```
## Configuration
Set the following environment variables to configure the SDK:
### Required
- `SPYGLASS_API_KEY`: Your Spyglass API key
- `SPYGLASS_DEPLOYMENT_ID`: Unique identifier for your deployment
  - **Note**: Used for both `service.name` and `deployment.id` attributes
### Optional
- `SPYGLASS_OTEL_EXPORTER_OTLP_ENDPOINT`: Custom OTLP exporter endpoint (useful for local development)
### Example Configuration
```bash
export SPYGLASS_API_KEY="your-api-key"
export SPYGLASS_DEPLOYMENT_ID="user-service-v1.2.0" # Required - used for both service.name and deployment.id
```
**Note**: `SPYGLASS_DEPLOYMENT_ID` is required and will be used for both the OpenTelemetry `service.name` and `deployment.id` resource attributes. This ensures consistency and simplifies dashboard queries.
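If you configure the process from Python rather than a shell (for example in a local script), you can set the same variables with `os.environ` before the SDK is imported. This is a minimal sketch, assuming the SDK reads these environment variables when it is first imported or used; the values and the local collector URL are placeholders.

```python
import os

# Same variables as the shell exports above (placeholder values).
os.environ["SPYGLASS_API_KEY"] = "your-api-key"
os.environ["SPYGLASS_DEPLOYMENT_ID"] = "user-service-v1.2.0"

# Optional: point the OTLP exporter at a local collector during development.
# Assumes the SDK honors this variable; the URL is an example only.
os.environ["SPYGLASS_OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4318"

from spyglass_ai import spyglass_trace  # import after the environment is set
```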
## Usage
### Basic Function Tracing
Use the `@spyglass_trace` decorator to automatically trace function calls:
```python
from spyglass_ai import spyglass_trace
@spyglass_trace()
def calculate_total(price, tax_rate):
    return price * (1 + tax_rate)
# Usage
result = calculate_total(100, 0.08) # This call will be traced
```
You can also provide a custom span name:
```python
from spyglass_ai import spyglass_trace

@spyglass_trace(name="payment_processing")
def process_payment(amount, card_info):
    # Payment processing logic goes here
    return {"status": "success", "transaction_id": "tx_123"}
```
### OpenAI Integration
Wrap your OpenAI client to automatically trace API calls:
```python
from openai import OpenAI
from spyglass_ai import spyglass_openai
# Create your OpenAI client
client = OpenAI(api_key="your-api-key")
# Wrap it with Spyglass tracing
client = spyglass_openai(client)
# Now all chat completions will be traced
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=100
)
```
### Complete Example
```python
from openai import OpenAI
from spyglass_ai import spyglass_trace, spyglass_openai
# Set up OpenAI client with tracing
client = OpenAI(api_key="your-api-key")
client = spyglass_openai(client)
@spyglass_trace(name="ai_conversation")
def have_conversation(user_message):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_message}],
        max_tokens=150
    )
    return response.choices[0].message.content
# This will create traces for both the function and the OpenAI API call
answer = have_conversation("What is the capital of France?")
print(answer)
```
## What Gets Traced
### Function Tracing (`@spyglass_trace`)
- Function name, module, and qualified name
- Input arguments (excluding `self` and `cls`)
- Return values
- Execution time
- Any exceptions that occur (see the sketch below)
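As a concrete illustration of the last point, an exception raised inside a traced function is recorded on the span and then propagates to the caller as usual. The sketch below assumes only that behavior; `risky_division` is a made-up span name and `divide` an example function.

```python
from spyglass_ai import spyglass_trace

@spyglass_trace(name="risky_division")
def divide(a, b):
    return a / b

try:
    divide(1, 0)  # ZeroDivisionError is recorded on the span, then re-raised
except ZeroDivisionError:
    print("The exception still reaches the caller after being traced.")
```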
### OpenAI Integration (`spyglass_openai`)
- Model used
- Number of messages
- Request parameters (max_tokens, temperature, etc.)
- Token usage (prompt, completion, total) (see the sketch below)
- Response model
- Any API errors
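The token counts correspond to the `usage` block the OpenAI API returns with each completion. The sketch below, which assumes a wrapped `client` as in the examples above, shows where those numbers come from; the SDK is expected to capture them automatically, so you do not need to log them yourself.

```python
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=100,
    temperature=0.2,  # request parameters like this are captured on the span
)

# The same token counts that Spyglass records are visible on the response:
usage = response.usage
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
```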
## Development
### Install Dependencies
```bash
uv sync --extra test
```
### Run Tests
```bash
# Run all tests
uv run pytest
# Run with coverage
uv run pytest --cov=src --cov-report=term-missing
# Run specific test file
uv run pytest tests/test_trace.py -v
```
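As a starting point for tests of your own code, the sketch below shows the kind of check that can live alongside the suite in `tests/`; it assumes only that `@spyglass_trace` returns the wrapped function's result unchanged, as the usage examples above show. The file name is illustrative.

```python
# tests/test_tracing_example.py (illustrative file name)
import pytest

from spyglass_ai import spyglass_trace

@spyglass_trace()
def calculate_total(price, tax_rate):
    return price * (1 + tax_rate)

def test_traced_function_returns_normal_result():
    # Tracing should be transparent to the caller: same result as an untraced call.
    assert calculate_total(100, 0.08) == pytest.approx(108.0)
```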