# Noveum Trace SDK
**Production-ready Python SDK for tracing LLM applications, multi-agent systems, and tool calls with OpenTelemetry compliance.**
[CI](https://github.com/Noveum/noveum-trace/actions/workflows/ci.yml)
[Release](https://github.com/Noveum/noveum-trace/actions/workflows/release.yml)
[Coverage](https://codecov.io/gh/Noveum/noveum-trace)
[PyPI](https://badge.fury.io/py/noveum-trace)
[Python](https://www.python.org/downloads/)
[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0)
## 🚀 Quick Start
```python
import noveum_trace

# Initialize with your project ID
noveum_trace.init(project_id="my-ai-project")

# Use the simple @trace decorator
@noveum_trace.trace
def process_data(data):
    return f"Processed: {data}"

@noveum_trace.trace(type="llm", model="gpt-4")
def call_llm(prompt):
    # Your LLM call here
    return "AI response"

@noveum_trace.trace(type="component", agent="data-processor")
def agent_task(task):
    return f"Agent completed: {task}"

# Your functions are now automatically traced!
result = process_data("user input")
```
## ✨ Key Features
### 🎯 **Simplified Decorator Approach**
- **Single `@trace` decorator** with parameters instead of multiple decorators
- **Backward compatibility** with `@observe` and `@llm_trace` aliases (see the sketch below)
- **Parameter-based specialization** for different operation types
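
A minimal sketch of those aliases; the import location and the parameters `@llm_trace` accepts are assumptions modeled on the `@trace` API shown in the Quick Start:

```python
# Assumption: the backward-compatible aliases are importable from the package root
from noveum_trace import observe, llm_trace

@observe  # assumed to behave like the plain @trace decorator
def load_data(source):
    return f"loaded: {source}"

@llm_trace(model="gpt-4")  # assumed to mirror @trace(type="llm", model=...)
def ask(prompt):
    return "AI response"
```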
### 🤖 **Multi-Agent Support**
- **Agent registry and management** with hierarchical relationships
- **Cross-agent correlation** and trace propagation
- **Agent-aware context management** for complex workflows
- **Thread-safe operations** for concurrent agent execution (see the sketch below)
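
The concurrency claim above can be exercised with a small sketch; it reuses only the `Agent`, `AgentConfig`, `AgentContext`, and `@trace` APIs shown later in this README, and assumes `AgentContext` tracks the active agent per thread:

```python
import threading

import noveum_trace
from noveum_trace import Agent, AgentConfig, AgentContext, trace

noveum_trace.init(project_id="my-ai-project")

@trace
def run_step(label):
    return f"done: {label}"

def run_agent(name):
    # Each thread enters its own agent context; this sketch assumes the
    # active agent is tracked per thread, as the thread-safety note implies.
    agent = Agent(AgentConfig(name=name, agent_type="worker", id=f"{name}-001"))
    with AgentContext(agent):
        print(run_step(name))

threads = [threading.Thread(target=run_agent, args=(n,)) for n in ("agent-a", "agent-b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```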
### 🔍 **LLM & Tool Call Tracing**
- **Auto-instrumentation** for OpenAI and Anthropic SDKs
- **Comprehensive LLM metrics** (tokens, latency, model info)
- **Tool call tracking** with arguments and results
- **OpenTelemetry semantic conventions** compliance
### 🏗️ **Project-Based Organization**
- **Required project ID** for proper trace organization
- **Custom headers support** (projectId, orgId, additional headers)
- **Environment-aware** (development, staging, production)
- **Proper trace ID generation** with UUID-based identifiers
## 📦 Installation
```bash
pip install noveum-trace
```
## 🔧 Configuration
### Basic Initialization
```python
import noveum_trace

# Minimal setup (project_id is required)
tracer = noveum_trace.init(project_id="my-project")

# Full configuration
tracer = noveum_trace.init(
    project_id="my-project",
    project_name="My AI Application",
    org_id="org-123",
    user_id="user-456",
    session_id="session-789",
    environment="production",
    api_key="your-noveum-api-key",  # For Noveum.ai platform
    file_logging=True,
    log_directory="./traces",
    auto_instrument=True,
    capture_content=True,
    custom_headers={"X-Custom-Header": "value"}
)
```
### Environment Variables
```bash
export NOVEUM_PROJECT_ID="my-project"
export NOVEUM_API_KEY="your-api-key"
export NOVEUM_ORG_ID="org-123"
export NOVEUM_USER_ID="user-456"
export NOVEUM_SESSION_ID="session-789"
export NOVEUM_ENVIRONMENT="production"
```
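
If you prefer not to depend on the SDK resolving these variables on its own, they can be passed through explicitly; a minimal sketch using only the `init()` parameters documented above:

```python
import os

import noveum_trace

# Read the documented NOVEUM_* variables and pass them explicitly, so this
# sketch does not assume init() resolves them automatically.
tracer = noveum_trace.init(
    project_id=os.environ["NOVEUM_PROJECT_ID"],
    api_key=os.environ.get("NOVEUM_API_KEY"),
    org_id=os.environ.get("NOVEUM_ORG_ID"),
    environment=os.environ.get("NOVEUM_ENVIRONMENT", "development"),
)
```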
## 🎨 Usage Examples
### Simple Function Tracing
```python
@noveum_trace.trace
def data_processing(data):
    # Your processing logic
    return processed_data

@noveum_trace.trace(name="custom-operation")
def custom_function():
    return "result"
```
### LLM Tracing
```python
import openai
import anthropic

anthropic_client = anthropic.Anthropic()

@noveum_trace.trace(type="llm", model="gpt-4", operation="chat")
def chat_completion(messages):
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=messages
    )
    return response

@noveum_trace.trace(type="llm", model="claude-3", operation="completion")
def text_completion(prompt):
    response = anthropic_client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    )
    return response
```
### Multi-Agent Workflows
```python
from noveum_trace import Agent, AgentConfig, AgentContext, trace

# Define agents
coordinator = Agent(AgentConfig(
    name="coordinator",
    agent_type="orchestrator",
    id="coord-001"
))

worker = Agent(AgentConfig(
    name="data-worker",
    agent_type="processor",
    id="worker-001"
))

# Use agent contexts
with AgentContext(coordinator):
    @trace
    def plan_task(task):
        # Coordinator planning
        return task_plan

    plan = plan_task("analyze data")

with AgentContext(worker):
    @trace
    def execute_task(plan):
        # Worker execution
        return results

    results = execute_task(plan)
```
### Tool Call Tracing
```python
@noveum_trace.trace(type="tool", tool_name="web_search")
def search_web(query):
    # Tool implementation
    return search_results

@noveum_trace.trace(type="tool", tool_name="calculator")
def calculate(expression):
    # Calculator implementation
    return result
```
### Dynamic Span Updates
```python
from noveum_trace import trace, update_current_span

@trace
def long_running_task():
    update_current_span(
        metadata={"step": "initialization"},
        progress=10
    )

    # Do some work
    initialize()

    update_current_span(
        metadata={"step": "processing"},
        progress=50
    )

    # More work
    process_data()

    update_current_span(
        metadata={"step": "completion"},
        progress=100
    )

    return "completed"
```
## 🔌 Auto-Instrumentation
The SDK automatically instruments popular LLM libraries:
```python
# Auto-instrumentation is enabled by default
noveum_trace.init(project_id="my-project", auto_instrument=True)

# Now all OpenAI and Anthropic calls are automatically traced
import openai
response = openai.chat.completions.create(...)  # Automatically traced!

import anthropic
client = anthropic.Anthropic()
response = client.messages.create(...)  # Automatically traced!
```
## 📊 Trace Data Structure
Each trace contains:
```json
{
  "trace_id": "uuid-v4",
  "span_id": "uuid-v4",
  "parent_span_id": "uuid-v4",
  "name": "operation-name",
  "kind": "internal|client|server",
  "status": "ok|error",
  "start_time": "2024-01-01T00:00:00Z",
  "end_time": "2024-01-01T00:00:01Z",
  "duration_ms": 1000,
  "project_id": "my-project",
  "project_name": "My AI Application",
  "org_id": "org-123",
  "user_id": "user-456",
  "session_id": "session-789",
  "environment": "production",
  "attributes": {
    "llm.model": "gpt-4",
    "llm.operation": "chat",
    "gen_ai.system": "openai",
    "gen_ai.usage.input_tokens": 100,
    "gen_ai.usage.output_tokens": 50
  },
  "llm_request": {
    "model": "gpt-4",
    "messages": [...],
    "temperature": 0.7
  },
  "llm_response": {
    "id": "response-id",
    "model": "gpt-4",
    "choices": [...],
    "usage": {
      "prompt_tokens": 100,
      "completion_tokens": 50,
      "total_tokens": 150
    }
  },
  "agent": {
    "name": "data-processor",
    "type": "worker",
    "id": "agent-123"
  },
  "tool_calls": [
    {
      "id": "call-123",
      "name": "web_search",
      "arguments": {"query": "AI news"},
      "result": "search results",
      "duration_ms": 500
    }
  ]
}
```
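
With `file_logging=True`, traces in this shape are presumably written under `log_directory`; a hedged sketch for inspecting them offline (the file naming and one-JSON-object-per-line layout are assumptions, not a documented format):

```python
import json
from pathlib import Path

# Assumption: the file sink writes one JSON trace per line under ./traces
for path in Path("./traces").glob("*.json*"):
    for line in path.read_text().splitlines():
        if not line.strip():
            continue
        trace = json.loads(line)
        attrs = trace.get("attributes", {})
        print(
            trace.get("name"),
            trace.get("duration_ms"),
            attrs.get("gen_ai.usage.input_tokens"),
            attrs.get("gen_ai.usage.output_tokens"),
        )
```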
## 🏆 Competitive Advantages
| Feature | Noveum Trace | DeepEval | Phoenix | Braintrust |
|---------|--------------|----------|---------|------------|
| Multi-agent support | ✅ | ❌ | ⚠️ | ⚠️ |
| Simplified decorators | ✅ | ✅ | ✅ | ✅ |
| Auto agent resolution | ✅ | ❌ | ❌ | ❌ |
| OpenTelemetry compliant | ✅ | ❌ | ✅ | ❌ |
| Project-based organization | ✅ | ❌ | ❌ | ❌ |
| Custom headers | ✅ | ❌ | ❌ | ❌ |
| Trace ID management | ✅ | ⚠️ | ✅ | ⚠️ |
| Tool call tracing | ✅ | ❌ | ⚠️ | ⚠️ |
## 📁 Project Structure
```
noveum-trace/
├── src/noveum_trace/
│   ├── __init__.py              # Main exports
│   ├── init.py                  # Simplified initialization
│   ├── types.py                 # Type definitions
│   ├── core/
│   │   ├── tracer.py            # Main tracer implementation
│   │   ├── span.py              # Span implementation
│   │   └── context.py           # Context management
│   ├── agents/
│   │   ├── agent.py             # Agent classes
│   │   ├── registry.py          # Agent registry
│   │   ├── context.py           # Agent context
│   │   └── decorators.py        # Simplified decorators
│   ├── sinks/
│   │   ├── base.py              # Base sink interface
│   │   ├── file.py              # File sink
│   │   ├── console.py           # Console sink
│   │   ├── noveum.py            # Noveum.ai sink
│   │   └── elasticsearch.py     # Elasticsearch sink
│   ├── instrumentation/
│   │   ├── openai.py            # OpenAI auto-instrumentation
│   │   └── anthropic.py         # Anthropic auto-instrumentation
│   └── utils/
│       └── exceptions.py        # Custom exceptions
├── examples/                    # Usage examples
├── tests/                       # Test suite
├── docs/                        # Documentation
└── README.md                    # This file
```
## 🧪 Testing
```bash
# Run all tests
python -m pytest tests/
# Run specific test categories
python -m pytest tests/unit/
python -m pytest tests/integration/
# Run with coverage
python -m pytest tests/ --cov=noveum_trace
```
## 📚 Documentation
- [Getting Started Guide](docs/getting-started/)
- [Configuration Guide](docs/guides/configuration.md)
- [Multi-Agent Tracing](docs/guides/multi-agent-tracing.md)
- [LLM Tracing Guide](docs/guides/llm-tracing.md)
- [API Reference](docs/api/)
## 🤝 Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## 🆘 Support
- **Documentation**: [docs/](docs/)
- **Issues**: [GitHub Issues](https://github.com/noveum/noveum-trace/issues)
- **Discussions**: [GitHub Discussions](https://github.com/noveum/noveum-trace/discussions)
## 🚀 Roadmap
- [ ] **JavaScript/TypeScript SDK** - Cross-platform support
- [ ] **Real-time evaluation** - Integration with NovaEval
- [ ] **Advanced analytics** - Performance insights and recommendations
- [ ] **Custom metrics** - User-defined metrics and alerts
- [ ] **Distributed tracing** - Cross-service trace correlation
- [ ] **Visual trace explorer** - Interactive trace visualization
---
**Built with ❤️ by the Noveum team**