# Trasor.io Python SDK
_The official Python SDK for Trasor.io – audit logs you can trust._
> **Also available:** [Node.js SDK](https://www.npmjs.com/package/@trasor/sdk) - `npm install @trasor/sdk`
📄 [Changelog](CHANGELOG.md) | 🚀 [v1.1.0 Release](CHANGELOG.md#110---2025-01-15)
## Installation
To install the official Trasor.io Python SDK:
```bash
pip install trasor-sdk
```
For framework integrations:
```bash
# For CrewAI integration
pip install trasor-sdk[crewai]
# For LangChain integration
pip install trasor-sdk[langchain]
# For all integrations
pip install trasor-sdk[all]
```
Trasor.io provides secure, immutable audit trails for AI agents using blockchain-style verification. Perfect for teams building with CrewAI, LangChain, AutoGPT, and other AI frameworks that need SOC 2 / ISO 27001 compliance.
## Features
- 🔐 **Secure audit logging** with automatic hash chaining
- 🚀 **Developer-friendly** - Get started in minutes
- 📊 **Chain verification** - Ensure data integrity
- 🔑 **Simple authentication** with API keys
- ⚡ **Async support** - High-performance async operations with `async_mode=True`
- 📦 **Batch logging** - Log multiple events in a single API call
- 🐍 **Python 3.7+** compatible
- 📦 **Minimal dependencies** - Only requires `requests` (aiohttp for async)
## Quick Start
```python
from trasor import TrasorClient
# Initialize client with your API key
client = TrasorClient(api_key="trasor_live_abc123...")
# Log an AI agent event
response = client.log_event(
agent_name="data_processor",
action="process_customer_data",
inputs={"customer_id": "cust_123", "data_type": "profile"},
outputs={"status": "processed", "record_count": 1},
metadata={"processing_time": "1.2s"},
workflow_id="workflow_456",
status="success"
)
print(f"Audit log created: {response['id']}")
print(f"Hash: {response['hash']}")
```
## Framework Integrations
### CrewAI Integration
Automatically log all CrewAI agent actions and task executions:
```python
from trasor import TrasorClient
from trasor.integrations.crewai import TrasorCrewAIHandler
from crewai import Agent, Task, Crew
# Initialize Trasor.io
client = TrasorClient(api_key="trasor_live_xxx")
handler = TrasorCrewAIHandler(client, workflow_id="research-crew-001")
# Create agents with automatic logging
researcher = Agent(
    role="Researcher",
    goal="Research the latest AI trends",
    backstory="You are an expert AI researcher",
    callbacks=[handler]
)

writer = Agent(
    role="Writer",
    goal="Write engaging content about AI",
    backstory="You are a technical writer",
    callbacks=[handler]
)

# Create tasks - all executions are logged
research_task = Task(
    description="Research GPT-4 capabilities",
    agent=researcher,
    callbacks=[handler]
)

write_task = Task(
    description="Write an article about the findings",
    agent=writer,
    callbacks=[handler]
)

# Run crew - all actions are tracked
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    callbacks=[handler]
)
results = crew.kickoff()
```
### LangChain Integration
Automatically log LangChain chains, tools, and LLM calls:
```python
from trasor import TrasorClient
from trasor.integrations.langchain import TrasorLangChainHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
# Initialize Trasor.io
client = TrasorClient(api_key="trasor_live_xxx")
handler = TrasorLangChainHandler(
    client,
    workflow_id="qa-chain-001",
    log_llm_calls=True  # Log individual LLM calls
)

# Create LLM with logging
llm = OpenAI(temperature=0.7, callbacks=[handler])

# Create chain with logging
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a short article about {topic}"
)

chain = LLMChain(
    llm=llm,
    prompt=prompt,
    callbacks=[handler]
)
# All executions are automatically logged
result = chain.run("artificial intelligence in healthcare")
```
## Advanced Features
### ⚡ Async Support
For high-throughput workloads, the Trasor.io SDK supports asynchronous logging built on Python's asyncio:
```python
import asyncio
from trasor import TrasorClient
async def main():
    # Initialize client with async_mode=True
    client = TrasorClient(api_key="trasor_live_xxx", async_mode=True)

    # All methods now return coroutines
    response = await client.log_event(
        agent_name="async-agent",
        action="process_data",
        inputs={"foo": "bar"},
        status="success"
    )
    print(f"Async log created: {response['id']}")

    # Clean up
    await client.close()
asyncio.run(main())
```
To enable async:
- Pass `async_mode=True` when creating the `TrasorClient`
- All client methods become awaitable when `async_mode` is enabled
- Install with: `pip install trasor-sdk[async]`
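Because every method returns a coroutine in async mode, independent log calls can also be dispatched concurrently. A minimal sketch using `asyncio.gather` (the agent names and inputs are illustrative):

```python
import asyncio
from trasor import TrasorClient

async def log_many():
    client = TrasorClient(api_key="trasor_live_xxx", async_mode=True)

    # Dispatch several log calls concurrently; gather awaits them all
    tasks = [
        client.log_event(
            agent_name=f"worker-{i}",
            action="process_item",
            inputs={"item_id": i},
            status="success"
        )
        for i in range(5)
    ]
    responses = await asyncio.gather(*tasks)
    print(f"Created {len(responses)} audit logs")

    await client.close()

asyncio.run(log_many())
```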
### 📦 Batch Logging
Batch mode allows efficient submission of multiple logs in a single API call:
```python
from trasor import TrasorClient
client = TrasorClient(api_key="trasor_live_xxx")
# Prepare multiple events
batch = [
    {
        "agent_name": "agent-1",
        "action": "task-a",
        "inputs": {"data": 1},
        "status": "success"
    },
    {
        "agent_name": "agent-2",
        "action": "task-b",
        "inputs": {"data": 2},
        "status": "error"
    }
]
# Submit all at once
response = client.log_batch(batch)
print(f"Batch logged {len(response)} events")
```
Benefits:
- Reduce API calls for bulk auditing scenarios
- Minimize HTTP overhead
- Maintain data consistency across related events
### Combined Async + Batch Example
```python
import asyncio
from trasor import TrasorClient

async def process_large_dataset():
    client = TrasorClient(api_key="trasor_live_xxx", async_mode=True)

    # Process data in batches asynchronously
    batch = [
        {"agent_name": f"worker-{i}", "action": "process", "status": "success"}
        for i in range(100)
    ]

    response = await client.log_batch(batch)
    print(f"Logged {len(response)} events asynchronously")

    await client.close()

asyncio.run(process_large_dataset())
```
## API Reference
### TrasorClient
#### `__init__(api_key, base_url="https://api.trasor.io/v1", timeout=30, async_mode=False)`
Create a new Trasor.io client instance.
**Parameters:**
- `api_key` (str): Your Trasor.io API key (format: `trasor_live_*`)
- `base_url` (str, optional): Base URL for the API
- `timeout` (int, optional): Request timeout in seconds
- `async_mode` (bool, optional): Enable async operation mode (default: False)
```python
# Sync mode (default)
client = TrasorClient(api_key="trasor_live_abc123...")
# Async mode
client = TrasorClient(api_key="trasor_live_abc123...", async_mode=True)
```
#### `log_event(agent_name, action, **kwargs)`
Create a new audit log entry.
**Parameters:**
- `agent_name` (str): Name/identifier of the AI agent or service
- `action` (str): The action that was performed
- `inputs` (dict, optional): Input data/parameters for the action
- `outputs` (dict, optional): Output data/results from the action
- `metadata` (dict, optional): Additional metadata about the event
- `workflow_id` (str, optional): Workflow or session identifier
- `status` (str, optional): Status of the action (e.g., "success", "error", "pending")
**Returns:** `dict` - The created audit log entry
```python
response = client.log_event(
agent_name="email_agent",
action="send_notification",
inputs={"recipient": "user@example.com"},
outputs={"message_id": "msg_123"},
status="success"
)
```
#### `get_logs(limit=50, offset=0, workflow_id=None)`
Retrieve audit logs with pagination.
**Parameters:**
- `limit` (int, optional): Number of logs to return (max 100)
- `offset` (int, optional): Number of logs to skip
- `workflow_id` (str, optional): Filter by workflow ID
**Returns:** `dict` - Paginated list of audit logs
```python
logs = client.get_logs(limit=20, offset=0)
for log in logs['logs']:
print(f"Agent: {log['agentId']}, Action: {log['action']}")
```
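To walk the full history, increase `offset` by `limit` until a short page comes back. A sketch built on the response shape shown above (the helper name and the stop condition are assumptions, not part of the SDK):

```python
def fetch_all_logs(client, workflow_id=None, page_size=100):
    """Page through all audit logs, assuming the response lists entries
    under the 'logs' key (as in the example above)."""
    offset = 0
    all_logs = []
    while True:
        page = client.get_logs(limit=page_size, offset=offset, workflow_id=workflow_id)
        entries = page["logs"]
        all_logs.extend(entries)
        if len(entries) < page_size:  # short page means we reached the end
            break
        offset += page_size
    return all_logs

all_logs = fetch_all_logs(client, workflow_id="workflow_456")
print(f"Fetched {len(all_logs)} logs in total")
```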
#### `verify_chain()`
Verify the integrity of the audit log chain.
**Returns:** `dict` - Verification results
```python
verification = client.verify_chain()
print(f"Chain integrity: {verification['isValid']}")
```
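For intuition only: hash chaining links each entry to its predecessor by hashing the entry's payload together with the previous hash, so tampering with any entry invalidates every hash after it. A simplified local illustration (this is not the service's actual hashing scheme, and the entry layout below is hypothetical):

```python
import hashlib
import json

def compute_hash(payload, previous_hash):
    # Combine the entry payload with the previous hash so that
    # changing any earlier entry breaks all later hashes.
    data = json.dumps(payload, sort_keys=True) + previous_hash
    return hashlib.sha256(data.encode()).hexdigest()

def verify_local_chain(entries):
    previous_hash = "0" * 64  # toy genesis value
    for entry in entries:
        if entry["hash"] != compute_hash(entry["payload"], previous_hash):
            return False
        previous_hash = entry["hash"]
    return True
```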
#### `get_stats()`
Get account statistics and metrics.
**Returns:** `dict` - Account statistics
```python
stats = client.get_stats()
print(f"Total logs: {stats['totalLogs']}")
print(f"Chain integrity: {stats['chainIntegrity']}%")
```
#### `log_batch(events)`
Log multiple events in a single API request.
**Parameters:**
- `events` (list): List of event dictionaries, each containing:
  - `agent_name` (str, required): Name of the agent
  - `action` (str, required): Action performed
  - `inputs` (dict, optional): Input data
  - `outputs` (dict, optional): Output data
  - `metadata` (dict, optional): Additional metadata
  - `workflow_id` (str, optional): Workflow identifier
  - `status` (str, optional): Status of the action
**Returns:** `list` - List of created log entries
```python
batch = [
{"agent_name": "agent1", "action": "process", "status": "success"},
{"agent_name": "agent2", "action": "validate", "status": "success"}
]
response = client.log_batch(batch)
```
**Note:** In async mode (`async_mode=True`), all methods return coroutines and must be awaited.
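For very large event lists it can help to submit them in fixed-size chunks. A sketch (the chunk size is illustrative, not a documented server limit):

```python
def log_in_chunks(client, events, chunk_size=100):
    """Submit events in fixed-size batches via log_batch()."""
    created = []
    for start in range(0, len(events), chunk_size):
        created.extend(client.log_batch(events[start:start + chunk_size]))
    return created

events = [
    {"agent_name": f"etl-worker-{i}", "action": "transform_row", "status": "success"}
    for i in range(1000)
]
created = log_in_chunks(client, events)
print(f"Created {len(created)} log entries")
```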
#### `__init__(api_key, base_url="https://api.trasor.io/v1")`
Create a new async Trasor.io client instance.
**Parameters:**
- `api_key` (str): Your Trasor.io API key (format: `trasor_live_*`)
- `base_url` (str, optional): Base URL for the API
#### `log_event_async(agent_name, action, ...)`
Asynchronously log a single audit event. Same parameters as `log_event()`.
**Returns:** `dict` - The created audit log entry
#### `log_events_async(events)`
Asynchronously log multiple audit events in a single batch request.
**Parameters:**
- `events` (list): List of event dictionaries, each containing:
  - `agent_name` (str, required): Name of the agent
  - `action` (str, required): Action performed
  - All other parameters from `log_event()` are optional
**Returns:** `dict` - Response containing all created log entries
```python
events = [
{"agent_name": "parser", "action": "parse_document", "status": "success"},
{"agent_name": "validator", "action": "validate_data", "status": "success"},
{"agent_name": "storage", "action": "save_results", "status": "success"}
]
response = await client.log_events_async(events)
```
#### `get_logs_async(limit=50, offset=0)`
Asynchronously retrieve audit logs. Same parameters and return as `get_logs()`.
#### `verify_chain_async()`
Asynchronously verify chain integrity. Same return as `verify_chain()`.
#### `get_stats_async()`
Asynchronously get account statistics. Same return as `get_stats()`.
#### `close()`
Close the underlying `aiohttp` session. Called automatically when the client is used as a context manager.
## Framework Examples
### CrewAI Integration
```python
from trasor import TrasorClient
from crewai import Agent, Task, Crew
# Initialize Trasor.io client
trasor = TrasorClient(api_key="trasor_live_abc123...")
# Create your CrewAI agents
researcher = Agent(
    role='Research Analyst',
    goal='Analyze market trends',
    backstory='Expert in market analysis'
)

# Custom callback to log CrewAI events
def log_crew_event(agent_name, action, inputs=None, outputs=None, status="success"):
    trasor.log_event(
        agent_name=agent_name,
        action=action,
        inputs=inputs,
        outputs=outputs,
        metadata={"framework": "crewai"},
        status=status
    )

# Log when agent starts task
log_crew_event("research_analyst", "start_analysis", {"topic": "AI market"})

# ... run your CrewAI workflow ...

# Log completion
log_crew_event("research_analyst", "complete_analysis",
               outputs={"findings": "Market growing 40% YoY"})
```
### LangChain Integration
```python
from trasor import TrasorClient
from langchain.chains import LLMChain
from langchain.callbacks.base import BaseCallbackHandler
trasor = TrasorClient(api_key="trasor_live_abc123...")
class TrasorCallback(BaseCallbackHandler):
    def on_chain_start(self, serialized, inputs, **kwargs):
        trasor.log_event(
            agent_name="langchain_agent",
            action="chain_start",
            inputs=inputs,
            metadata={"chain_type": serialized.get("name", "unknown")}
        )

    def on_chain_end(self, outputs, **kwargs):
        trasor.log_event(
            agent_name="langchain_agent",
            action="chain_end",
            outputs=outputs,
            status="success"
        )
# Use the callback in your LangChain
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[TrasorCallback()])
```
## Error Handling
The SDK raises typed exceptions so authentication, validation, and other API failures can be handled separately:
```python
from trasor import TrasorClient, AuthenticationError, ValidationError, APIError
client = TrasorClient(api_key="trasor_live_abc123...")
try:
    response = client.log_event(
        agent_name="test_agent",
        action="test_action"
    )
except AuthenticationError:
    print("Invalid API key")
except ValidationError as e:
    print(f"Invalid parameters: {e}")
except APIError as e:
    print(f"API error: {e}")
```
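Transient failures surfaced as `APIError` can be retried; authentication and validation errors will not succeed on a second attempt. A sketch with an illustrative retry count and backoff:

```python
import time
from trasor import TrasorClient, APIError

client = TrasorClient(api_key="trasor_live_abc123...")

def log_event_with_retry(retries=3, delay=1.0, **event):
    # Retry only on APIError; other exceptions propagate immediately.
    for attempt in range(retries):
        try:
            return client.log_event(**event)
        except APIError:
            if attempt == retries - 1:
                raise
            time.sleep(delay * (attempt + 1))  # linear backoff

log_event_with_retry(agent_name="retry_agent", action="flaky_call", status="success")
```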
## Context Manager Support
The client supports the context manager protocol for automatic cleanup:
```python
with TrasorClient(api_key="trasor_live_abc123...") as client:
    client.log_event(
        agent_name="context_agent",
        action="test_action"
    )
# Client session automatically closed
```
## Getting Your API Key
1. Sign up at [trasor.io](https://trasor.io)
2. Go to your Settings page
3. Generate a new API key
4. Copy the key (format: `trasor_live_...`)
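To keep the key out of source control, a common pattern is to read it from an environment variable. The variable name `TRASOR_API_KEY` below is a convention of this example, not something the SDK reads automatically:

```python
import os
from trasor import TrasorClient

# Set the key first, e.g. `export TRASOR_API_KEY="trasor_live_abc123..."`
client = TrasorClient(api_key=os.environ["TRASOR_API_KEY"])
```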
## Development
### Running Tests
```bash
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest tests/
# Run tests with coverage
pytest tests/ --cov=trasor
```
### Code Quality
```bash
# Format code
black trasor/
# Lint code
flake8 trasor/
# Type checking
mypy trasor/
```
## Contributing
1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Make your changes and add tests
4. Run the test suite: `pytest`
5. Submit a pull request
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Support
- 📧 **Email**: support@trasor.io
- 📖 **Documentation**: https://docs.trasor.io
- 🐛 **Bug Reports**: https://github.com/trasor-io/trasor-python/issues
- 💬 **Community**: https://discord.gg/trasor
## Changelog
### 1.0.0 (2024-01-14)
- Initial release
- Core audit logging functionality
- Chain verification
- Full API coverage
- Python 3.7+ support