hivetrace 1.3.8

- **Home page:** http://hivetrace.ai
- **Summary:** Hivetrace SDK for monitoring LLM applications
- **Upload time:** 2025-08-12 12:47:38
- **Author:** Raft
- **Requires Python:** >=3.8
- **Keywords:** sdk, monitoring, logging, llm, ai, hivetrace
# Hivetrace SDK

## Overview

The Hivetrace SDK lets you integrate with the Hivetrace service to monitor user prompts and LLM responses. It supports both synchronous and asynchronous workflows and can be configured via environment variables.

---

## Installation

Install from PyPI:

```bash
pip install "hivetrace[base]"
```

---

## Quick Start

```python
from hivetrace import SyncHivetraceSDK, AsyncHivetraceSDK
```

You can use either the synchronous client (`SyncHivetraceSDK`) or the asynchronous client (`AsyncHivetraceSDK`). Choose the one that fits your runtime.

---

## Synchronous Client

### Initialize (Sync)

```python
# The sync client reads configuration from environment variables or accepts an explicit config
client = SyncHivetraceSDK()
```

### Send a user prompt (input)

```python
response = client.input(
    application_id="your-application-id",  # Obtained after registering the application in the UI
    message="User prompt here",
)
```

### Send an LLM response (output)

```python
response = client.output(
    application_id="your-application-id",
    message="LLM response here",
)
```

---

## Asynchronous Client

### Initialize (Async)

```python
# The async client can be used as a context manager
client = AsyncHivetraceSDK()
```

### Send a user prompt (input)

```python
response = await client.input(
    application_id="your-application-id",
    message="User prompt here",
)
```

### Send an LLM response (output)

```python
response = await client.output(
    application_id="your-application-id",
    message="LLM response here",
)
```

---

## Example with Additional Parameters

```python
response = client.input(
    application_id="your-application-id",
    message="User prompt here",
    additional_parameters={
        "session_id": "your-session-id",
        "user_id": "your-user-id",
        "agents": {
            "agent-1-id": {"name": "Agent 1", "description": "Agent description"},
            "agent-2-id": {"name": "Agent 2"},
            "agent-3-id": {}
        }
    }
)
```

> **Note:** `session_id`, `user_id`, and all agent IDs must be valid UUIDs.
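Since invalid IDs are rejected by the service, it can help to generate and validate them locally before sending. A minimal sketch using only the standard library (the placeholder strings are, of course, not valid UUIDs):

```python
import uuid

def is_valid_uuid(value: str) -> bool:
    """Return True if value parses as a UUID, else False."""
    try:
        uuid.UUID(value)
        return True
    except ValueError:
        return False

# Generate fresh IDs for a new session/user
session_id = str(uuid.uuid4())
user_id = str(uuid.uuid4())

assert is_valid_uuid(session_id) and is_valid_uuid(user_id)
assert not is_valid_uuid("your-session-id")  # placeholder strings are rejected
```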

---

## API

### `input`

```python
# Sync
def input(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...

# Async
async def input(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...
```

Sends a **user prompt** to Hivetrace.

* `application_id` — Application identifier (must be a valid UUID, created in the UI)
* `message` — The user prompt
* `additional_parameters` — Optional dictionary with extra context (session, user, agents, etc.)

**Response example:**

```json
{
  "status": "processed",
  "monitoring_result": {
    "is_toxic": false,
    "type_of_violation": "benign",
    "token_count": 9,
    "token_usage_warning": false,
    "token_usage_unbounded": false
  }
}
```

---

### `output`

```python
# Sync
def output(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...

# Async
async def output(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...
```

Sends an **LLM response** to Hivetrace.

* `application_id` — Application identifier (must be a valid UUID, created in the UI)
* `message` — The LLM response
* `additional_parameters` — Optional dictionary with extra context (session, user, agents, etc.)

**Response example:**

```json
{
  "status": "processed",
  "monitoring_result": {
    "is_toxic": false,
    "type_of_violation": "safe",
    "token_count": 21,
    "token_usage_warning": false,
    "token_usage_unbounded": false
  }
}
```

---

## Sending Requests in Sync Mode

```python
def main():
    # option 1: context manager
    with SyncHivetraceSDK() as client:
        response = client.input(
            application_id="your-application-id",
            message="User prompt here",
        )

    # option 2: manual close
    client = SyncHivetraceSDK()
    try:
        response = client.input(
            application_id="your-application-id",
            message="User prompt here",
        )
    finally:
        client.close()

main()
```

---

## Sending Requests in Async Mode

```python
import asyncio

async def main():
    # option 1: context manager
    async with AsyncHivetraceSDK() as client:
        response = await client.input(
            application_id="your-application-id",
            message="User prompt here",
        )

    # option 2: manual close
    client = AsyncHivetraceSDK()
    try:
        response = await client.input(
            application_id="your-application-id",
            message="User prompt here",
        )
    finally:
        await client.close()

asyncio.run(main())
```

### Closing the Async Client

```python
await client.close()
```

---

## Configuration

The SDK reads configuration from environment variables:

* `HIVETRACE_URL` — Base URL of your Hivetrace instance.
* `HIVETRACE_ACCESS_TOKEN` — API token used for authentication.

These are loaded automatically when you create a client.


### Configuration Sources

Hivetrace SDK can retrieve configuration from the following sources:

**.env File:**

```bash
HIVETRACE_URL=https://your-hivetrace-instance.com
HIVETRACE_ACCESS_TOKEN=your-access-token  # obtained in the UI (API Tokens page)
```

The SDK will automatically load these settings.

You can also pass a config dict explicitly when creating a client instance.

```python
client = SyncHivetraceSDK(
    config={
        "HIVETRACE_URL": HIVETRACE_URL,
        "HIVETRACE_ACCESS_TOKEN": HIVETRACE_ACCESS_TOKEN,
    },
)
```

## Environment Variables

Set up your environment variables for easier configuration:

```bash
# .env file
HIVETRACE_URL=https://your-hivetrace-instance.com
HIVETRACE_ACCESS_TOKEN=your-access-token
HIVETRACE_APP_ID=your-application-id
```

# CrewAI Integration

**Demo repository**

[https://github.com/anntish/multiagents-crew-forge](https://github.com/anntish/multiagents-crew-forge)

## Step 1: Install the dependency

**What to do:** Add the HiveTrace SDK to your project

**Where:** In `requirements.txt` or via pip

```bash
# Via pip (for quick testing); quote the argument so the shell
# does not treat ">" as a redirect or "[]" as a glob
pip install "hivetrace[crewai]>=1.3.5"

# Or add to requirements.txt (recommended)
echo "hivetrace[crewai]>=1.3.5" >> requirements.txt
pip install -r requirements.txt
```

**Why:** The HiveTrace SDK provides decorators and clients for sending agent activity data to the monitoring platform.

---

## Step 2: **ADD** unique IDs for each agent

**Example:** In `src/config.py`

```python
PLANNER_ID = "333e4567-e89b-12d3-a456-426614174001"
WRITER_ID = "444e4567-e89b-12d3-a456-426614174002"
EDITOR_ID = "555e4567-e89b-12d3-a456-426614174003"
```

**Why agents need IDs:** HiveTrace tracks each agent individually. A UUID ensures the agent can be uniquely identified in the monitoring system.

---

## Step 3: Create an agent mapping

**What to do:** Map agent roles to their HiveTrace IDs

**Example:** In `src/agents.py` (where your agents are defined)

```python
from crewai import Agent
# ADD: import agent IDs
from src.config import EDITOR_ID, PLANNER_ID, WRITER_ID

# ADD: mapping for HiveTrace (REQUIRED!)
agent_id_mapping = {
    "Content Planner": {  # ← Exactly the same as Agent(role="Content Planner")
        "id": PLANNER_ID,
        "description": "Creates content plans"
    },
    "Content Writer": {   # ← Exactly the same as Agent(role="Content Writer")
        "id": WRITER_ID,
        "description": "Writes high-quality articles"
    },
    "Editor": {           # ← Exactly the same as Agent(role="Editor")
        "id": EDITOR_ID,
        "description": "Edits and improves articles"
    },
}

# Your existing agents (NO CHANGES)
planner = Agent(
    role="Content Planner",  # ← Must match key in agent_id_mapping
    goal="Create a structured content plan for the given topic",
    backstory="You are an experienced analyst...",
    verbose=True,
)

writer = Agent(
    role="Content Writer",   # ← Must match key in agent_id_mapping
    goal="Write an informative and engaging article",
    backstory="You are a talented writer...",
    verbose=True,
)

editor = Agent(
    role="Editor",           # ← Must match key in agent_id_mapping
    goal="Improve the article",
    backstory="You are an experienced editor...",
    verbose=True,
)
```

**Important:** The keys in `agent_id_mapping` must **exactly** match the `role` of your agents. Otherwise, HiveTrace will not be able to associate activity with the correct agent.
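Because such a mismatch fails silently, a small startup check can catch it early. This sketch uses plain strings standing in for the `role` values of your `Agent` objects (the IDs shown are placeholders):

```python
agent_id_mapping = {
    "Content Planner": {"id": "333e4567-e89b-12d3-a456-426614174001"},
    "Content Writer": {"id": "444e4567-e89b-12d3-a456-426614174002"},
    "Editor": {"id": "555e4567-e89b-12d3-a456-426614174003"},
}

# Roles as declared on the Agent(...) objects
agent_roles = ["Content Planner", "Content Writer", "Editor"]

missing = [r for r in agent_roles if r not in agent_id_mapping]
unused = [k for k in agent_id_mapping if k not in agent_roles]
assert not missing, f"Agents without a HiveTrace ID: {missing}"
assert not unused, f"Mapping keys with no matching agent role: {unused}"
```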

---

## Step 4: Integrate with tools (if used)

**What to do:** Add HiveTrace support to tools

**Example:** In `src/tools.py`

```python
from crewai.tools import BaseTool
from typing import Optional

class WordCountTool(BaseTool):
    name: str = "WordCountTool"
    description: str = "Count words, characters and sentences in text"
    # ADD: HiveTrace field (REQUIRED!)
    agent_id: Optional[str] = None
    
    def _run(self, text: str) -> str:
        word_count = len(text.split())
        return f"Word count: {word_count}"
```

**Example:** In `src/agents.py`

```python
from src.tools import WordCountTool
from src.config import PLANNER_ID, WRITER_ID, EDITOR_ID

# ADD: create tools for each agent
planner_tools = [WordCountTool()]
writer_tools = [WordCountTool()]
editor_tools = [WordCountTool()]

# ADD: assign tools to agents
for tool in planner_tools:
    tool.agent_id = PLANNER_ID

for tool in writer_tools:
    tool.agent_id = WRITER_ID

for tool in editor_tools:
    tool.agent_id = EDITOR_ID

# Use tools in agents
planner = Agent(
    role="Content Planner",
    tools=planner_tools,  # ← Agent-specific tools
    # ... other parameters
)
```

**Why:** HiveTrace tracks tool usage. The `agent_id` field in the tool class and its assignment let HiveTrace know which agent used which tool.

---

## Step 5: Initialize HiveTrace in FastAPI (if used)

**What to do:** Add the HiveTrace client to the application lifecycle

**Example:** In `main.py`

```python
from contextlib import asynccontextmanager
from fastapi import FastAPI
# ADD: import HiveTrace SDK
from hivetrace import SyncHivetraceSDK
from src.config import HIVETRACE_ACCESS_TOKEN, HIVETRACE_URL

@asynccontextmanager
async def lifespan(app: FastAPI):
    # ADD: initialize HiveTrace client
    hivetrace = SyncHivetraceSDK(
        config={
            "HIVETRACE_URL": HIVETRACE_URL,
            "HIVETRACE_ACCESS_TOKEN": HIVETRACE_ACCESS_TOKEN,
        }
    )
    # Store client in app state
    app.state.hivetrace = hivetrace
    try:
        yield
    finally:
        # IMPORTANT: close connection on shutdown
        hivetrace.close()

app = FastAPI(lifespan=lifespan)
```

---

## Step 6: Integrate into business logic

**What to do:** Wrap Crew creation with the HiveTrace decorator

**Example:** In `src/services/topic_service.py`

```python
import uuid
from typing import Optional
from crewai import Crew
# ADD: HiveTrace imports
from hivetrace import SyncHivetraceSDK
from hivetrace import crewai_trace as trace

from src.agents import agent_id_mapping, planner, writer, editor
from src.tasks import plan_task, write_task, edit_task
from src.config import HIVETRACE_APP_ID

def process_topic(
    topic: str,
    hivetrace: SyncHivetraceSDK,  # ← ADD parameter
    user_id: Optional[str] = None,
    session_id: Optional[str] = None,
):
    # ADD: generate unique conversation ID
    agent_conversation_id = str(uuid.uuid4())
    
    # ADD: common trace parameters
    common_params = {
        "agent_conversation_id": agent_conversation_id,
        "user_id": user_id,
        "session_id": session_id,
    }

    # ADD: log user request
    hivetrace.input(
        application_id=HIVETRACE_APP_ID,
        message=f"Requesting information from agents on topic: {topic}",
        additional_parameters={
            **common_params,
            "agents": agent_id_mapping,  # ← pass agent mapping
        },
    )

    # ADD: @trace decorator for monitoring Crew
    @trace(
        hivetrace=hivetrace,
        application_id=HIVETRACE_APP_ID,
        agent_id_mapping=agent_id_mapping,  # ← REQUIRED!
    )
    def create_crew():
        return Crew(
            agents=[planner, writer, editor],
            tasks=[plan_task, write_task, edit_task],
            verbose=True,
        )

    # Execute with monitoring
    crew = create_crew()
    result = crew.kickoff(
        inputs={"topic": topic},
        **common_params  # ← pass common parameters
    )

    return {
        "result": result.raw,
        "execution_details": {**common_params, "status": "completed"},
    }
```

**How it works:**

1. **`agent_conversation_id`** — unique ID for grouping all actions under a single request
2. **`hivetrace.input()`** — sends the user’s request to HiveTrace for inspection
3. **`@trace`**:

   * Intercepts all agent actions inside the Crew
   * Sends data about each step to HiveTrace
   * Associates actions with specific agents via `agent_id_mapping`
4. **`**common_params`** — passes metadata into `crew.kickoff()` so all events are linked

**Critical:** The `@trace` decorator must be applied to the function that creates and returns the `Crew`, **not** the function that calls `kickoff()`.

---

## Step 7: Update FastAPI endpoints (if used)

**What to do:** Pass the HiveTrace client to the business logic

**Example:** In `src/routers/topic_router.py`

```python
from fastapi import APIRouter, Body, Request
# ADD: import HiveTrace type
from hivetrace import SyncHivetraceSDK

from src.services.topic_service import process_topic
from src.config import SESSION_ID, USER_ID

router = APIRouter(prefix="/api")

@router.post("/process-topic")
async def api_process_topic(request: Request, request_body: dict = Body(...)):
    # ADD: get HiveTrace client from app state
    hivetrace: SyncHivetraceSDK = request.app.state.hivetrace
    
    return process_topic(
        topic=request_body["topic"],
        hivetrace=hivetrace,  # ← pass client
        user_id=USER_ID,
        session_id=SESSION_ID,
    )
```

**Why:** The API endpoint must pass the HiveTrace client to the business logic so monitoring data can be sent.

---

## 🚨 Common mistakes

1. **Role mismatch** — make sure keys in `agent_id_mapping` exactly match `role` in agents
2. **Missing `agent_id_mapping`** — the `@trace` decorator must receive the mapping
3. **Decorator on wrong function** — `@trace` must be applied to the Crew creation function, not `kickoff`
4. **Client not closed** — remember to call `hivetrace.close()` in the lifespan
5. **Invalid credentials** — check your HiveTrace environment variables


# LangChain Integration

**Demo repository**

[https://github.com/anntish/multiagents-langchain-forge](https://github.com/anntish/multiagents-langchain-forge)

This project implements monitoring of a multi-agent system in LangChain via the HiveTrace SDK.

### Step 1. Install Dependencies

```bash
# Quote the argument so the shell does not treat ">" as a redirect
pip install "hivetrace[langchain]>=1.3.5"
# optional: add to requirements.txt and install
echo "hivetrace[langchain]>=1.3.5" >> requirements.txt
pip install -r requirements.txt
```

What the package provides: SDK clients (sync/async), a universal callback for LangChain agents, and ready-to-use calls for sending inputs/logs/outputs to HiveTrace.

### Step 2. Configure Environment Variables

* `HIVETRACE_URL`: HiveTrace address
* `HIVETRACE_ACCESS_TOKEN`: HiveTrace access token
* `HIVETRACE_APP_ID`: your application ID in HiveTrace
* `OPENAI_API_KEY`: key for the LLM provider (example with OpenAI)
* Additionally: `OPENAI_MODEL`, `USER_ID`, `SESSION_ID`

### Step 3. Assign Fixed UUIDs to Your Agents

Create a dictionary of fixed UUIDs for all "agent nodes" (e.g., orchestrator, specialized agents). This ensures unambiguous identification in tracing.

Example: file `src/core/constants.py`:

```python
PREDEFINED_AGENT_IDS = {
    "MainHub": "111e1111-e89b-12d3-a456-426614174099",
    "text_agent": "222e2222-e89b-12d3-a456-426614174099",
    "math_agent": "333e3333-e89b-12d3-a456-426614174099",
    "pre_text_agent": "444e4444-e89b-12d3-a456-426614174099",
    "pre_math_agent": "555e5555-e89b-12d3-a456-426614174099",
}
```

Tip: dictionary keys must match the actual node names appearing in logs (`tool`/agent name in LangChain calls).
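A quick startup assertion can also catch a malformed or duplicated ID before any trace is sent; a sketch using only the standard library (abbreviated to two of the entries above):

```python
import uuid

PREDEFINED_AGENT_IDS = {
    "MainHub": "111e1111-e89b-12d3-a456-426614174099",
    "text_agent": "222e2222-e89b-12d3-a456-426614174099",
}

# Every value must parse as a UUID...
for name, agent_id in PREDEFINED_AGENT_IDS.items():
    uuid.UUID(agent_id)  # raises ValueError for a malformed ID

# ...and no two nodes may share an ID
assert len(set(PREDEFINED_AGENT_IDS.values())) == len(PREDEFINED_AGENT_IDS), \
    "duplicate agent IDs"
```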

### Step 4. Attach the Callback to Executors and Tools

Create and use `AgentLoggingCallback` — it should be passed:

* as a callback in `AgentExecutor` (orchestrator), and
* as `callback_handler` in your tools/agent wrappers (`BaseTool`).

Example: file `src/core/orchestrator.py` (fragment):

```python
from hivetrace.adapters.langchain import AgentLoggingCallback
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

class OrchestratorAgent:
    def __init__(self, llm, predefined_agent_ids=None):
        self.llm = llm
        self.logging_callback = AgentLoggingCallback(
            default_root_name="MainHub",
            predefined_agent_ids=predefined_agent_ids,
        )
        # Example: wrapper agents as tools
        # MathAgentTool/TextAgentTool internally pass self.logging_callback further
        agent = create_openai_tools_agent(self.llm, self.tools, ChatPromptTemplate.from_messages([
            ("system", "You are the orchestrator agent of a multi-agent system."),
            MessagesPlaceholder(variable_name="chat_history", optional=True),
            ("human", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]))
        self.executor = AgentExecutor(
            agent=agent,
            tools=self.tools,
            verbose=True,
            callbacks=[self.logging_callback],
        )
```

Important: all nested agents/tools that create their own `AgentExecutor` or inherit from `BaseTool` must also receive this `callback_handler` so their steps are included in tracing.

### Step 5. One-Line Integration in a Business Method

Use the `run_with_tracing` helper from `hivetrace/adapters/langchain/api.py`. It:

* logs the input with agent mapping and metadata;
* calls your orchestrator;
* collects and sends accumulated logs/final answer.

Minimal example (script):

```python
import os, uuid
from langchain_openai import ChatOpenAI
from src.core.orchestrator import OrchestratorAgent
from src.core.constants import PREDEFINED_AGENT_IDS
from hivetrace.adapters.langchain import run_with_tracing

llm = ChatOpenAI(model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"), temperature=0.2, streaming=False)
orchestrator = OrchestratorAgent(llm, predefined_agent_ids=PREDEFINED_AGENT_IDS)

result = run_with_tracing(
    orchestrator=orchestrator,
    query="Format this text and count the number of words",
    application_id=os.getenv("HIVETRACE_APP_ID"),
    user_id=os.getenv("USER_ID"),
    session_id=os.getenv("SESSION_ID"),
    conversation_id=str(uuid.uuid4()),
)
print(result)
```

FastAPI variant (handler fragment):

```python
from fastapi import APIRouter, Request
from hivetrace.adapters.langchain import run_with_tracing
import uuid

router = APIRouter()

@router.post("/query")
async def process_query(payload: dict, request: Request):
    orchestrator = request.app.state.orchestrator
    conv_id = str(uuid.uuid4()) # always create a new agent_conversation_id for each request to group agent work for the same question
    result = run_with_tracing(
        orchestrator=orchestrator,
        query=payload["query"],
        application_id=request.app.state.HIVETRACE_APP_ID,
        user_id=request.app.state.USER_ID,
        session_id=request.app.state.SESSION_ID,
        conversation_id=conv_id,
    )
    return {"status": "success", "result": result}
```

### Step 6. Reusing the HiveTrace Client (Optional)

Helpers automatically create a short-lived client if none is provided. If you want to reuse a client — create it once during the application's lifecycle and pass it to helpers.

FastAPI (lifespan):

```python
from contextlib import asynccontextmanager
from fastapi import FastAPI
from hivetrace import SyncHivetraceSDK

@asynccontextmanager
async def lifespan(app: FastAPI):
    hivetrace = SyncHivetraceSDK()
    app.state.hivetrace = hivetrace
    try:
        yield
    finally:
        hivetrace.close()

app = FastAPI(lifespan=lifespan)
```

Then:

```python
result = run_with_tracing(
    orchestrator=orchestrator,
    query=payload.query,
    hivetrace=request.app.state.hivetrace,  # pass your own client
    application_id=request.app.state.HIVETRACE_APP_ID,
)
```

### How Logs Look in HiveTrace

* **Agent nodes**: orchestrator nodes and specialized "agent wrappers" (`text_agent`, `math_agent`, etc.).
* **Actual tools**: low-level tools (e.g., `text_analyzer`, `text_formatter`) are logged on start/end events.
* **Service records**: automatically added `return_result` (returning result to parent) and `final_answer` (final answer of the root node) steps.

This gives a clear call graph with data flow direction and the final answer.

### Common Mistakes and How to Avoid Them

* **Name mismatch**: key in `PREDEFINED_AGENT_IDS` must match the node/tool name in logs.
* **No agent mapping**: either pass `agents_mapping` to `run_with_tracing` or define `predefined_agent_ids` in `AgentLoggingCallback` — the SDK will build the mapping automatically.
* **Callback not attached**: add `AgentLoggingCallback` to all `AgentExecutor` and `BaseTool` wrappers via the `callback_handler` parameter.
* **Client not closed**: use lifespan/context manager for `SyncHivetraceSDK`.


# OpenAI Agents Integration

**Demo repository**

[https://github.com/anntish/openai-agents-forge](https://github.com/anntish/openai-agents-forge)

### 1. Installation

```bash
pip install "hivetrace[openai_agents]==1.3.5"
```

---

### 2. Environment Setup

Set the environment variables (via `.env` or export):

```bash
HIVETRACE_URL=http://localhost:8000          # Your HiveTrace URL
HIVETRACE_ACCESS_TOKEN=ht_...                # Your HiveTrace access token
HIVETRACE_APPLICATION_ID=00000000-...-0000   # Your HiveTrace application ID

SESSION_ID=
USER_ID=

OPENAI_API_KEY=
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_MODEL=gpt-4o-mini
```

---

### 3. Attach the Trace Processor in Code

Add 3 lines before creating/using your agents:

```python
from agents import set_trace_processors
from hivetrace.adapters.openai_agents.tracing import HivetraceOpenAIAgentProcessor

set_trace_processors([
    HivetraceOpenAIAgentProcessor()  # will take config from env
])
```

Alternative (explicit configuration if you don’t want to rely on env):

```python
from agents import set_trace_processors
from hivetrace import SyncHivetraceSDK
from hivetrace.adapters.openai_agents.tracing import HivetraceOpenAIAgentProcessor

hivetrace = SyncHivetraceSDK(config={
    "HIVETRACE_URL": "http://localhost:8000",
    "HIVETRACE_ACCESS_TOKEN": "ht_...",
})

set_trace_processors([
    HivetraceOpenAIAgentProcessor(
        application_id="00000000-0000-0000-0000-000000000000",
        hivetrace_instance=hivetrace,
    )
])
```

Important:

* Register the processor only once at app startup.
* Attach it before the first agent run (`Runner.run(...)` / `Runner.run_sync(...)`).

---

### 4. Minimal "Before/After" Example

Before:

```python
from agents import Agent, Runner

assistant = Agent(name="Assistant", instructions="Be helpful.")
print(Runner.run_sync(assistant, "Hi!"))
```

After (with HiveTrace monitoring):

```python
from agents import Agent, Runner, set_trace_processors
from hivetrace.adapters.openai_agents.tracing import HivetraceOpenAIAgentProcessor

set_trace_processors([HivetraceOpenAIAgentProcessor()])

assistant = Agent(name="Assistant", instructions="Be helpful.")
print(Runner.run_sync(assistant, "Hi!"))
```

From this moment, all agent calls, handoffs, and tool invocations will be logged in HiveTrace.

---

### 5. Tool Tracing

If you use tools, decorate them with `@function_tool` so their calls are automatically traced:

```python
from agents import function_tool

@function_tool(description_override="Adds two numbers")
def calculate_sum(a: int, b: int) -> int:
    return a + b
```

Add this tool to your agent’s `tools=[...]` — and its calls will appear in HiveTrace with inputs/outputs.

---

License
========

This project is licensed under Apache License 2.0.

            

Raw data

            {
    "_id": null,
    "home_page": "http://hivetrace.ai",
    "name": "hivetrace",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": null,
    "keywords": "SDK, monitoring, logging, LLM, AI, Hivetrace",
    "author": "Raft",
    "author_email": "sales@raftds.com",
    "download_url": "https://files.pythonhosted.org/packages/0a/d6/65dd0761003885ed7b8567fb88300d244de4cc44c6e1451608d8b693f007/hivetrace-1.3.8.tar.gz",
    "platform": null,
    "description": "# Hivetrace SDK\n\n## Overview\n\nThe Hivetrace SDK lets you integrate with the Hivetrace service to monitor user prompts and LLM responses. It supports both synchronous and asynchronous workflows and can be configured via environment variables.\n\n---\n\n## Installation\n\nInstall from PyPI:\n\n```bash\npip install hivetrace[base]\n```\n\n---\n\n## Quick Start\n\n```python\nfrom hivetrace import SyncHivetraceSDK, AsyncHivetraceSDK\n```\n\nYou can use either the synchronous client (`SyncHivetraceSDK`) or the asynchronous client (`AsyncHivetraceSDK`). Choose the one that fits your runtime.\n\n---\n\n## Synchronous Client\n\n### Initialize (Sync)\n\n```python\n# The sync client reads configuration from environment variables or accepts an explicit config\nclient = SyncHivetraceSDK()\n```\n\n### Send a user prompt (input)\n\n```python\nresponse = client.input(\n    application_id=\"your-application-id\",  # Obtained after registering the application in the UI\n    message=\"User prompt here\",\n)\n```\n\n### Send an LLM response (output)\n\n```python\nresponse = client.output(\n    application_id=\"your-application-id\",\n    message=\"LLM response here\",\n)\n```\n\n---\n\n## Asynchronous Client\n\n### Initialize (Async)\n\n```python\n# The async client can be used as a context manager\nclient = AsyncHivetraceSDK()\n```\n\n### Send a user prompt (input)\n\n```python\nresponse = await client.input(\n    application_id=\"your-application-id\",\n    message=\"User prompt here\",\n)\n```\n\n### Send an LLM response (output)\n\n```python\nresponse = await client.output(\n    application_id=\"your-application-id\",\n    message=\"LLM response here\",\n)\n```\n\n---\n\n## Example with Additional Parameters\n\n```python\nresponse = client.input(\n    application_id=\"your-application-id\",\n    message=\"User prompt here\",\n    additional_parameters={\n        \"session_id\": \"your-session-id\",\n        \"user_id\": \"your-user-id\",\n        
\"agents\": {\n            \"agent-1-id\": {\"name\": \"Agent 1\", \"description\": \"Agent description\"},\n            \"agent-2-id\": {\"name\": \"Agent 2\"},\n            \"agent-3-id\": {}\n        }\n    }\n)\n```\n\n> **Note:** `session_id`, `user_id`, and all agent IDs must be valid UUIDs.\n\n---\n\n## API\n\n### `input`\n\n```python\n# Sync\ndef input(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...\n\n# Async\nasync def input(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...\n```\n\nSends a **user prompt** to Hivetrace.\n\n* `application_id` \u2014 Application identifier (must be a valid UUID, created in the UI)\n* `message` \u2014 The user prompt\n* `additional_parameters` \u2014 Optional dictionary with extra context (session, user, agents, etc.)\n\n**Response example:**\n\n```json\n{\n  \"status\": \"processed\",\n  \"monitoring_result\": {\n    \"is_toxic\": false,\n    \"type_of_violation\": \"benign\",\n    \"token_count\": 9,\n    \"token_usage_warning\": false,\n    \"token_usage_unbounded\": false\n  }\n}\n```\n\n---\n\n### `output`\n\n```python\n# Sync\ndef output(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...\n\n# Async\nasync def output(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...\n```\n\nSends an **LLM response** to Hivetrace.\n\n* `application_id` \u2014 Application identifier (must be a valid UUID, created in the UI)\n* `message` \u2014 The LLM response\n* `additional_parameters` \u2014 Optional dictionary with extra context (session, user, agents, etc.)\n\n**Response example:**\n\n```json\n{\n  \"status\": \"processed\",\n  \"monitoring_result\": {\n    \"is_toxic\": false,\n    \"type_of_violation\": \"safe\",\n    \"token_count\": 21,\n    \"token_usage_warning\": false,\n    \"token_usage_unbounded\": false\n  }\n}\n```\n\n---\n\n## Sending Requests in Sync 
Mode\n\n```python\ndef main():\n    # option 1: context manager\n    with SyncHivetraceSDK() as client:\n        response = client.input(\n            application_id=\"your-application-id\",\n            message=\"User prompt here\",\n        )\n\n    # option 2: manual close\n    client = SyncHivetraceSDK()\n    try:\n        response = client.input(\n            application_id=\"your-application-id\",\n            message=\"User prompt here\",\n        )\n    finally:\n        client.close()\n\nmain()\n```\n\n---\n\n## Sending Requests in Async Mode\n\n```python\nimport asyncio\n\nasync def main():\n    # option 1: context manager\n    async with AsyncHivetraceSDK() as client:\n        response = await client.input(\n            application_id=\"your-application-id\",\n            message=\"User prompt here\",\n        )\n\n    # option 2: manual close\n    client = AsyncHivetraceSDK()\n    try:\n        response = await client.input(\n            application_id=\"your-application-id\",\n            message=\"User prompt here\",\n        )\n    finally:\n        await client.close()\n\nasyncio.run(main())\n```\n\n### Closing the Async Client\n\n```python\nawait client.close()\n```\n\n---\n\n## Configuration\n\nThe SDK reads configuration from environment variables:\n\n* `HIVETRACE_URL` \u2014 Base URL allowed to call.\n* `HIVETRACE_ACCESS_TOKEN` \u2014 API token used for authentication.\n\nThese are loaded automatically when you create a client.\n\n\n### Configuration Sources\n\nHivetrace SDK can retrieve configuration from the following sources:\n\n**.env File:**\n\n```bash\nHIVETRACE_URL=https://your-hivetrace-instance.com\nHIVETRACE_ACCESS_TOKEN=your-access-token  # obtained in the UI (API Tokens page)\n```\n\nThe SDK will automatically load these settings.\n\nYou can also pass a config dict explicitly when creating a client instance.\n```bash\nclient = SyncHivetraceSDK(\n    config={\n        \"HIVETRACE_URL\": HIVETRACE_URL,\n        
\"HIVETRACE_ACCESS_TOKEN\": HIVETRACE_ACCESS_TOKEN,\n    },\n)\n```\n\n## Environment Variables\n\nSet up your environment variables for easier configuration:\n\n```bash\n# .env file\nHIVETRACE_URL=https://your-hivetrace-instance.com\nHIVETRACE_ACCESS_TOKEN=your-access-token\nHIVETRACE_APP_ID=your-application-id\n```\n\n# CrewAI Integration\n\n**Demo repository**\n\n[https://github.com/anntish/multiagents-crew-forge](https://github.com/anntish/multiagents-crew-forge)\n\n## Step 1: Install the dependency\n\n**What to do:** Add the HiveTrace SDK to your project\n\n**Where:** In `requirements.txt` or via pip\n\n```bash\n# Via pip (for quick testing)\npip install hivetrace[crewai]>=1.3.5\n\n# Or add to requirements.txt (recommended)\necho \"hivetrace[crewai]>=1.3.3\" >> requirements.txt\npip install -r requirements.txt\n```\n\n**Why:** The HiveTrace SDK provides decorators and clients for sending agent activity data to the monitoring platform.\n\n---\n\n## Step 2: **ADD** unique IDs for each agent\n\n**Example:** In `src/config.py`\n\n```python\nPLANNER_ID = \"333e4567-e89b-12d3-a456-426614174001\"\nWRITER_ID = \"444e4567-e89b-12d3-a456-426614174002\"\nEDITOR_ID = \"555e4567-e89b-12d3-a456-426614174003\"\n```\n\n**Why agents need IDs:** HiveTrace tracks each agent individually. 
A UUID ensures the agent can be uniquely identified in the monitoring system.

---

## Step 3: Create an agent mapping

**What to do:** Map agent roles to their HiveTrace IDs

**Example:** In `src/agents.py` (where your agents are defined)

```python
from crewai import Agent

# ADD: import agent IDs
from src.config import EDITOR_ID, PLANNER_ID, WRITER_ID

# ADD: mapping for HiveTrace (REQUIRED!)
agent_id_mapping = {
    "Content Planner": {  # ← exactly the same as Agent(role="Content Planner")
        "id": PLANNER_ID,
        "description": "Creates content plans",
    },
    "Content Writer": {   # ← exactly the same as Agent(role="Content Writer")
        "id": WRITER_ID,
        "description": "Writes high-quality articles",
    },
    "Editor": {           # ← exactly the same as Agent(role="Editor")
        "id": EDITOR_ID,
        "description": "Edits and improves articles",
    },
}

# Your existing agents (NO CHANGES)
planner = Agent(
    role="Content Planner",  # ← must match key in agent_id_mapping
    goal="Create a structured content plan for the given topic",
    backstory="You are an experienced analyst...",
    verbose=True,
)

writer = Agent(
    role="Content Writer",   # ← must match key in agent_id_mapping
    goal="Write an informative and engaging article",
    backstory="You are a talented writer...",
    verbose=True,
)

editor = Agent(
    role="Editor",           # ← must match key in agent_id_mapping
    goal="Improve the article",
    backstory="You are an experienced editor...",
    verbose=True,
)
```

**Important:** The keys in `agent_id_mapping` must **exactly** match the `role` of your agents.
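A role/key mismatch fails silently, so it can be worth asserting the invariant at startup. A minimal sketch (`check_mapping` is a hypothetical helper, not part of the SDK; the role list and mapping below are stand-ins for the objects defined above):

```python
def check_mapping(agent_roles, agent_id_mapping):
    """Raise early if any agent role has no entry in the HiveTrace mapping (sketch)."""
    missing = [role for role in agent_roles if role not in agent_id_mapping]
    if missing:
        raise ValueError(f"Roles missing from agent_id_mapping: {missing}")

# Stand-in data; in the real project use [a.role for a in (planner, writer, editor)].
roles = ["Content Planner", "Content Writer", "Editor"]
mapping = {
    "Content Planner": {"id": "333e4567-e89b-12d3-a456-426614174001"},
    "Content Writer": {"id": "444e4567-e89b-12d3-a456-426614174002"},
    "Editor": {"id": "555e4567-e89b-12d3-a456-426614174003"},
}
check_mapping(roles, mapping)  # passes silently when every role is covered
```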
Otherwise, HiveTrace will not be able to associate activity with the correct agent.

---

## Step 4: Integrate with tools (if used)

**What to do:** Add HiveTrace support to tools

**Example:** In `src/tools.py`

```python
from typing import Optional

from crewai.tools import BaseTool

class WordCountTool(BaseTool):
    name: str = "WordCountTool"
    description: str = "Count the words in a text"
    # ADD: HiveTrace field (REQUIRED!)
    agent_id: Optional[str] = None

    def _run(self, text: str) -> str:
        word_count = len(text.split())
        return f"Word count: {word_count}"
```

**Example:** In `src/agents.py`

```python
from src.config import EDITOR_ID, PLANNER_ID, WRITER_ID
from src.tools import WordCountTool

# ADD: create tools for each agent
planner_tools = [WordCountTool()]
writer_tools = [WordCountTool()]
editor_tools = [WordCountTool()]

# ADD: assign tools to agents
for tool in planner_tools:
    tool.agent_id = PLANNER_ID

for tool in writer_tools:
    tool.agent_id = WRITER_ID

for tool in editor_tools:
    tool.agent_id = EDITOR_ID

# Use tools in agents
planner = Agent(
    role="Content Planner",
    tools=planner_tools,  # ← agent-specific tools
    # ... other parameters
)
```

**Why:** HiveTrace tracks tool usage.
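The three assignment loops in Step 4 follow one pattern, so they can be collapsed into a small helper if you prefer (hypothetical, not part of the SDK; `SimpleNamespace` stands in for real `BaseTool` instances here):

```python
from types import SimpleNamespace

def bind_tools(tools, agent_id):
    """Attach one HiveTrace agent ID to every tool in the list (sketch)."""
    for tool in tools:
        tool.agent_id = agent_id
    return tools

# Stand-ins for WordCountTool(); in the real project these are BaseTool subclasses.
planner_tools = bind_tools([SimpleNamespace()], "333e4567-e89b-12d3-a456-426614174001")
print(planner_tools[0].agent_id)  # → 333e4567-e89b-12d3-a456-426614174001
```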
The `agent_id` field in the tool class and its assignment let HiveTrace know which agent used which tool.

---

## Step 5: Initialize HiveTrace in FastAPI (if used)

**What to do:** Add the HiveTrace client to the application lifecycle

**Example:** In `main.py`

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

# ADD: import HiveTrace SDK
from hivetrace import SyncHivetraceSDK
from src.config import HIVETRACE_ACCESS_TOKEN, HIVETRACE_URL

@asynccontextmanager
async def lifespan(app: FastAPI):
    # ADD: initialize HiveTrace client
    hivetrace = SyncHivetraceSDK(
        config={
            "HIVETRACE_URL": HIVETRACE_URL,
            "HIVETRACE_ACCESS_TOKEN": HIVETRACE_ACCESS_TOKEN,
        }
    )
    # Store client in app state
    app.state.hivetrace = hivetrace
    try:
        yield
    finally:
        # IMPORTANT: close connection on shutdown
        hivetrace.close()

app = FastAPI(lifespan=lifespan)
```

---

## Step 6: Integrate into business logic

**What to do:** Wrap Crew creation with the HiveTrace decorator

**Example:** In `src/services/topic_service.py`

```python
import uuid
from typing import Optional

from crewai import Crew

# ADD: HiveTrace imports
from hivetrace import SyncHivetraceSDK
from hivetrace import crewai_trace as trace

from src.agents import agent_id_mapping, planner, writer, editor
from src.tasks import plan_task, write_task, edit_task
from src.config import HIVETRACE_APP_ID

def process_topic(
    topic: str,
    hivetrace: SyncHivetraceSDK,  # ← ADD parameter
    user_id: Optional[str] = None,
    session_id: Optional[str] = None,
):
    # ADD: generate unique conversation ID
    agent_conversation_id = str(uuid.uuid4())

    # ADD: common trace parameters
    common_params = {
        "agent_conversation_id": agent_conversation_id,
        "user_id": user_id,
        "session_id": session_id,
    }

    # ADD: log user request
    hivetrace.input(
        application_id=HIVETRACE_APP_ID,
        message=f"Requesting information from agents on topic: {topic}",
        additional_parameters={
            **common_params,
            "agents": agent_id_mapping,  # ← pass agent mapping
        },
    )

    # ADD: @trace decorator for monitoring Crew
    @trace(
        hivetrace=hivetrace,
        application_id=HIVETRACE_APP_ID,
        agent_id_mapping=agent_id_mapping,  # ← REQUIRED!
    )
    def create_crew():
        return Crew(
            agents=[planner, writer, editor],
            tasks=[plan_task, write_task, edit_task],
            verbose=True,
        )

    # Execute with monitoring
    crew = create_crew()
    result = crew.kickoff(
        inputs={"topic": topic},
        **common_params  # ← pass common parameters
    )

    return {
        "result": result.raw,
        "execution_details": {**common_params, "status": "completed"},
    }
```

**How it works:**

1. **`agent_conversation_id`** — a unique ID that groups all actions under a single request
2. **`hivetrace.input()`** — sends the user's request to HiveTrace for inspection
3. **`@trace`**:

   * intercepts all agent actions inside the Crew
   * sends data about each step to HiveTrace
   * associates actions with specific agents via `agent_id_mapping`
4. **`**common_params`** — passes metadata into `crew.kickoff()` so all events are linked

**Critical:** The `@trace` decorator must be applied to the function that creates and returns the `Crew`, **not** the function that calls `kickoff()`.

---

## Step 7: Update FastAPI endpoints (if used)

**What to do:** Pass the HiveTrace client to the business logic

**Example:** In `src/routers/topic_router.py`

```python
from fastapi import APIRouter, Body, Request

# ADD: import HiveTrace type
from hivetrace import SyncHivetraceSDK

from src.services.topic_service import process_topic
from src.config import SESSION_ID, USER_ID

router = APIRouter(prefix="/api")

@router.post("/process-topic")
async def api_process_topic(request: Request, request_body: dict = Body(...)):
    # ADD: get HiveTrace client from app state
    hivetrace: SyncHivetraceSDK = request.app.state.hivetrace

    return process_topic(
        topic=request_body["topic"],
        hivetrace=hivetrace,  # ← pass client
        user_id=USER_ID,
        session_id=SESSION_ID,
    )
```

**Why:** The API endpoint must pass the HiveTrace client to the business logic so monitoring data can be sent.

---

## 🚨 Common mistakes

1. **Role mismatch** — make sure the keys in `agent_id_mapping` exactly match the `role` of your agents
2. **Missing `agent_id_mapping`** — the `@trace` decorator must receive the mapping
3. **Decorator on the wrong function** — `@trace` must be applied to the Crew-creation function, not to `kickoff`
4. **Client not closed** — remember to call `hivetrace.close()` in the lifespan
5. **Invalid credentials** — check your HiveTrace environment variables

# LangChain Integration

**Demo repository**

[https://github.com/anntish/multiagents-langchain-forge](https://github.com/anntish/multiagents-langchain-forge)

This project implements monitoring of a multi-agent system in LangChain via the HiveTrace SDK.

### Step 1. Install Dependencies

```bash
pip install "hivetrace[langchain]>=1.3.5"
# optional: add to requirements.txt and install
echo "hivetrace[langchain]>=1.3.5" >> requirements.txt
pip install -r requirements.txt
```

What the package provides: SDK clients (sync/async), a universal callback for LangChain agents, and ready-to-use calls for sending inputs/logs/outputs to HiveTrace.

### Step 2. Configure Environment Variables

* `HIVETRACE_URL`: HiveTrace address
* `HIVETRACE_ACCESS_TOKEN`: HiveTrace access token
* `HIVETRACE_APP_ID`: your application ID in HiveTrace
* `OPENAI_API_KEY`: key for the LLM provider (example with OpenAI)
* Optionally: `OPENAI_MODEL`, `USER_ID`, `SESSION_ID`

### Step 3. Assign Fixed UUIDs to Your Agents

Create a dictionary of fixed UUIDs for all "agent nodes" (e.g., the orchestrator and specialized agents). This ensures unambiguous identification in tracing.

Example: file `src/core/constants.py`:

```python
PREDEFINED_AGENT_IDS = {
    "MainHub": "111e1111-e89b-12d3-a456-426614174099",
    "text_agent": "222e2222-e89b-12d3-a456-426614174099",
    "math_agent": "333e3333-e89b-12d3-a456-426614174099",
    "pre_text_agent": "444e4444-e89b-12d3-a456-426614174099",
    "pre_math_agent": "555e5555-e89b-12d3-a456-426614174099",
}
```

Tip: the dictionary keys must match the actual node names that appear in logs (the `tool`/agent name in LangChain calls).

### Step 4. Attach the Callback to Executors and Tools

Create and use `AgentLoggingCallback`. It should be passed:

* as a callback in the `AgentExecutor` (orchestrator), and
* as `callback_handler` in your tools/agent wrappers (`BaseTool`).

Example: file `src/core/orchestrator.py` (fragment):

```python
from hivetrace.adapters.langchain import AgentLoggingCallback
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

class OrchestratorAgent:
    def __init__(self, llm, predefined_agent_ids=None):
        self.llm = llm
        self.logging_callback = AgentLoggingCallback(
            default_root_name="MainHub",
            predefined_agent_ids=predefined_agent_ids,
        )
        # Example: wrapper agents as tools
        # MathAgentTool/TextAgentTool internally pass self.logging_callback further
        agent = create_openai_tools_agent(
            self.llm,
            self.tools,
            ChatPromptTemplate.from_messages([
                ("system", "You are the orchestrator agent of a multi-agent system."),
                MessagesPlaceholder(variable_name="chat_history", optional=True),
                ("human", "{input}"),
                MessagesPlaceholder(variable_name="agent_scratchpad"),
            ]),
        )
        self.executor = AgentExecutor(
            agent=agent,
            tools=self.tools,
            verbose=True,
            callbacks=[self.logging_callback],
        )
```

Important: all nested agents/tools that create their own `AgentExecutor` or inherit from `BaseTool` must also receive this `callback_handler` so their steps are included in tracing.

### Step 5. One-Line Integration in a Business Method

Use the `run_with_tracing` helper from `hivetrace/adapters/langchain/api.py`. It:

* logs the input with the agent mapping and metadata;
* calls your orchestrator;
* collects and sends the accumulated logs and the final answer.

Minimal example (script):

```python
import os
import uuid

from langchain_openai import ChatOpenAI

from src.core.orchestrator import OrchestratorAgent
from src.core.constants import PREDEFINED_AGENT_IDS
from hivetrace.adapters.langchain import run_with_tracing

llm = ChatOpenAI(
    model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
    temperature=0.2,
    streaming=False,
)
orchestrator = OrchestratorAgent(llm, predefined_agent_ids=PREDEFINED_AGENT_IDS)

result = run_with_tracing(
    orchestrator=orchestrator,
    query="Format this text and count the number of words",
    application_id=os.getenv("HIVETRACE_APP_ID"),
    user_id=os.getenv("USER_ID"),
    session_id=os.getenv("SESSION_ID"),
    conversation_id=str(uuid.uuid4()),
)
print(result)
```

FastAPI variant (handler fragment):

```python
import uuid

from fastapi import APIRouter, Request

from hivetrace.adapters.langchain import run_with_tracing

router = APIRouter()

@router.post("/query")
async def process_query(payload: dict, request: Request):
    orchestrator = request.app.state.orchestrator
    # Always create a new agent_conversation_id per request to group
    # the agents' work on the same question
    conv_id = str(uuid.uuid4())
    result = run_with_tracing(
        orchestrator=orchestrator,
        query=payload["query"],
        application_id=request.app.state.HIVETRACE_APP_ID,
        user_id=request.app.state.USER_ID,
        session_id=request.app.state.SESSION_ID,
        conversation_id=conv_id,
    )
    return {"status": "success", "result": result}
```

### Step 6. Reusing the HiveTrace Client (Optional)

The helpers automatically create a short-lived client if none is provided. If you want to reuse a client, create it once during the application's lifecycle and pass it to the helpers.

FastAPI (lifespan):

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

from hivetrace import SyncHivetraceSDK

@asynccontextmanager
async def lifespan(app: FastAPI):
    hivetrace = SyncHivetraceSDK()
    app.state.hivetrace = hivetrace
    try:
        yield
    finally:
        hivetrace.close()

app = FastAPI(lifespan=lifespan)
```

Then:

```python
result = run_with_tracing(
    orchestrator=orchestrator,
    query=payload.query,
    hivetrace=request.app.state.hivetrace,  # pass your own client
    application_id=request.app.state.HIVETRACE_APP_ID,
)
```

### How Logs Look in HiveTrace

* **Agent nodes**: orchestrator nodes and specialized "agent wrappers" (`text_agent`, `math_agent`, etc.).
* **Actual tools**: low-level tools (e.g., `text_analyzer`, `text_formatter`) are logged on start/end events.
* **Service records**: automatically added `return_result` (returning a result to the parent) and `final_answer` (the final answer of the root node) steps.

This gives a clear call graph with the direction of data flow and the final answer.

### Common Mistakes and How to Avoid Them

* **Name mismatch**: each key in `PREDEFINED_AGENT_IDS` must match the node/tool name in the logs.
* **No agent mapping**: either pass `agents_mapping` to `run_with_tracing` or define `predefined_agent_ids` in `AgentLoggingCallback`; the SDK will build the mapping automatically.
* **Callback not attached**: add `AgentLoggingCallback` to all `AgentExecutor` and `BaseTool` wrappers via the `callback_handler` parameter.
* **Client not closed**: use a lifespan/context manager for `SyncHivetraceSDK`.

# OpenAI Agents Integration

**Demo repository**

[https://github.com/anntish/openai-agents-forge](https://github.com/anntish/openai-agents-forge)

### 1. Installation

```bash
pip install "hivetrace[openai_agents]==1.3.5"
```

---

### 2. Environment Setup

Set the environment variables (via `.env` or export):

```bash
HIVETRACE_URL=http://localhost:8000          # Your HiveTrace URL
HIVETRACE_ACCESS_TOKEN=ht_...                # Your HiveTrace access token
HIVETRACE_APPLICATION_ID=00000000-...-0000   # Your HiveTrace application ID

SESSION_ID=
USER_ID=

OPENAI_API_KEY=
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_MODEL=gpt-4o-mini
```

---

### 3. Attach the Trace Processor in Code

Add three lines before creating/using your agents:

```python
from agents import set_trace_processors
from hivetrace.adapters.openai_agents.tracing import HivetraceOpenAIAgentProcessor

set_trace_processors([
    HivetraceOpenAIAgentProcessor()  # takes its config from the environment
])
```

Alternative (explicit configuration if you don't want to rely on env):

```python
from agents import set_trace_processors
from hivetrace import SyncHivetraceSDK
from hivetrace.adapters.openai_agents.tracing import HivetraceOpenAIAgentProcessor

hivetrace = SyncHivetraceSDK(config={
    "HIVETRACE_URL": "http://localhost:8000",
    "HIVETRACE_ACCESS_TOKEN": "ht_...",
})

set_trace_processors([
    HivetraceOpenAIAgentProcessor(
        application_id="00000000-0000-0000-0000-000000000000",
        hivetrace_instance=hivetrace,
    )
])
```

Important:

* Register the processor only once, at app startup.
* Attach it before the first agent run (`Runner.run(...)` / `Runner.run_sync(...)`).

---

### 4. Minimal "Before/After" Example

Before:

```python
from agents import Agent, Runner

assistant = Agent(name="Assistant", instructions="Be helpful.")
print(Runner.run_sync(assistant, "Hi!"))
```

After (with HiveTrace monitoring):

```python
from agents import Agent, Runner, set_trace_processors
from hivetrace.adapters.openai_agents.tracing import HivetraceOpenAIAgentProcessor

set_trace_processors([HivetraceOpenAIAgentProcessor()])

assistant = Agent(name="Assistant", instructions="Be helpful.")
print(Runner.run_sync(assistant, "Hi!"))
```

From this moment on, all agent calls, handoffs, and tool invocations are logged in HiveTrace.

---

### 5. Tool Tracing

If you use tools, decorate them with `@function_tool` so their calls are traced automatically:

```python
from agents import function_tool

@function_tool(description_override="Adds two numbers")
def calculate_sum(a: int, b: int) -> int:
    return a + b
```

Add this tool to your agent's `tools=[...]` and its calls will appear in HiveTrace with inputs and outputs.

---

# License

This project is licensed under the Apache License 2.0.
    "bugtrack_url": null,
    "license": null,
    "summary": "Hivetrace SDK for monitoring LLM applications",
    "version": "1.3.8",
    "project_urls": {
        "Homepage": "http://hivetrace.ai"
    },
    "split_keywords": [
        "sdk",
        " monitoring",
        " logging",
        " llm",
        " ai",
        " hivetrace"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "d5de5cfd8512d9692223e0cb7ea24227bdf85484b9e18ad2e646245b575c659d",
                "md5": "2278b0e097aa8cc431ecdcb5cfb3b651",
                "sha256": "80a227dbbd65affdcdb38353f2ea1a0bbb101f806c41f7d02c753a11e8743df0"
            },
            "downloads": -1,
            "filename": "hivetrace-1.3.8-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "2278b0e097aa8cc431ecdcb5cfb3b651",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 52938,
            "upload_time": "2025-08-12T12:47:37",
            "upload_time_iso_8601": "2025-08-12T12:47:37.454862Z",
            "url": "https://files.pythonhosted.org/packages/d5/de/5cfd8512d9692223e0cb7ea24227bdf85484b9e18ad2e646245b575c659d/hivetrace-1.3.8-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "0ad665dd0761003885ed7b8567fb88300d244de4cc44c6e1451608d8b693f007",
                "md5": "417e6b01ad95621d3c21390a047f652f",
                "sha256": "c5cf566ada279635baa055fd5f7a2e14b419813763e8c51cc12593c8597d6898"
            },
            "downloads": -1,
            "filename": "hivetrace-1.3.8.tar.gz",
            "has_sig": false,
            "md5_digest": "417e6b01ad95621d3c21390a047f652f",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 46383,
            "upload_time": "2025-08-12T12:47:38",
            "upload_time_iso_8601": "2025-08-12T12:47:38.799616Z",
            "url": "https://files.pythonhosted.org/packages/0a/d6/65dd0761003885ed7b8567fb88300d244de4cc44c6e1451608d8b693f007/hivetrace-1.3.8.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-08-12 12:47:38",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "hivetrace"
}
        
Elapsed time: 1.49098s