# Neatlogs
A comprehensive LLM tracking system that automatically captures and logs all LLM API calls with detailed metrics.
[Python 3.8+](https://www.python.org/downloads/)
[License: MIT](https://opensource.org/licenses/MIT)
[PyPI](https://badge.fury.io/py/neatlogs)
## Features
- 🚀 **Automatic LLM Call Tracking**: Seamlessly tracks all LLM API calls without code changes
- 📊 **Comprehensive Metrics**: Token usage, costs, response times, and more
- 🔌 **Multi-Provider Support**: OpenAI, Anthropic, Google Gemini, Azure OpenAI, and LiteLLM
- 🔗 **LangChain Integration**: Seamless tracking for LangChain chains, agents, and tools
- 🧵 **Session Management**: Track conversations across multiple threads and agents
- 📝 **Structured Logging**: Detailed logs with OpenTelemetry support
- 🎯 **Easy Integration**: Simple one-line initialization
- 🔍 **Real-time Monitoring**: Live tracking and statistics
## Quick Start
### Installation
**Basic installation (LLM provider libraries are not included):**
```bash
pip install neatlogs
```
### Basic Usage
```python
import neatlogs
# Initialize tracking with your API key
neatlogs.init(
    api_key="your-api-key-here"
)

# Add tags to label this session
neatlogs.add_tags(["neatlogs"])
# Now all LLM calls are automatically tracked!
# Use any supported LLM library normally
# Get session statistics
stats = neatlogs.get_session_stats()
print(f"Total calls: {stats['total_calls']}")
print(f"Total cost: ${stats['total_cost']:.4f}")
```
## Supported Providers
- **OpenAI** (GPT models)
- **Anthropic** (Claude models)
- **Google Gemini** (Gemini models)
- **Azure OpenAI**
- **LiteLLM** (unified interface)
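Whichever provider a call goes through, Neatlogs reports its token usage and cost. As a rough sketch of how such a cost figure is computed from token counts (the per-1K-token prices below are illustrative placeholders, not Neatlogs' actual pricing table):

```python
# Illustrative per-1K-token prices in USD; real pricing varies by provider and model.
PRICES = {
    "gpt-4o": {"input": 0.005, "output": 0.015},
    "claude-3-5-sonnet": {"input": 0.003, "output": 0.015},
}

def estimate_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Estimate one call's cost from its input/output token counts."""
    p = PRICES[model]
    return tokens_in / 1000 * p["input"] + tokens_out / 1000 * p["output"]

print(round(estimate_cost("gpt-4o", 1200, 400), 4))  # → 0.012
```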
## Framework Integrations
Neatlogs provides comprehensive support for various AI frameworks and models:
- [LangChain Integration](#langchain-integration)
- [CrewAI Integration](#crewai-integration)
- [LangGraph Integration](#langgraph-integration)
### LangChain Integration
Neatlogs provides comprehensive tracking for all LangChain components and workflows:
- **LLM & Chat Models**: Track all LLM calls, token usage, costs, and response times
- **Chains**: Monitor chain execution, inputs, outputs, and performance metrics
- **Agents**: Capture agent actions, tool calls, decision-making processes, and reasoning
- **Tools**: Record tool usage, inputs, outputs, and execution times
- **RAG Systems**: Track retrieval-augmented generation workflows including vector searches and document retrieval
- **Async Workflows**: Full support for asynchronous LangChain pipelines and concurrent operations
- **Error Handling**: Capture and log errors across all LangChain components
- **Model Detection**: Automatic identification of underlying LLM models and providers
#### LangChain Callback Handler
Neatlogs provides a dedicated callback handler for LangChain to enable detailed tracking of your LangChain applications without modifying your existing code.
#### Usage
```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
import neatlogs
# Get the callback handler
handler = neatlogs.get_langchain_callback_handler(api_key="your-api-key")
# Use it with your LangChain components
llm = OpenAI()
chain = LLMChain(llm=llm, callbacks=[handler])
# Your chain calls will now be tracked automatically
result = chain.run("Hello world")
```
#### Features
- **LLM Tracking**: Captures all LLM calls with token usage, costs, and response times
- **Chain Monitoring**: Tracks chain executions, inputs, and outputs
- **Tool Call Tracking**: Monitors tool usage and performance
- **Agent Monitoring**: Records agent actions and decision processes
- **Automatic Detection**: Automatically detects model types and providers
- **Async Support**: Full support for both synchronous and asynchronous workflows
#### Asynchronous Usage
For asynchronous LangChain workflows:
```python
from neatlogs.integration.callbacks.langchain.callback import AsyncNeatlogsLangchainCallbackHandler
# Use the async handler for async workflows
async_handler = AsyncNeatlogsLangchainCallbackHandler(api_key="your-api-key")
# Use with async chains
result = await async_chain.arun(..., callbacks=[async_handler])
```
### CrewAI Integration
CrewAI is a framework for orchestrating role-playing AI agents. Neatlogs provides seamless integration with CrewAI through automatic instrumentation:
- **Agent Tracking**: Monitor all agent activities, tasks, and interactions
- **Crew Orchestration**: Track crew-level operations and agent coordination
- **Task Monitoring**: Capture task execution, delegation, and completion
- **Automatic Setup**: No code changes required - just initialize with `neatlogs.init()`
```python
import neatlogs
from crewai import Agent, Task, Crew
# Initialize Neatlogs (that's all you need!)
neatlogs.init(api_key="your-api-key")
# Your CrewAI code works normally and gets tracked automatically
agent = Agent(role="Researcher", goal="Research AI trends")
task = Task(description="Research latest AI developments")
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```
### LangGraph Integration
LangGraph is a library for building stateful, multi-actor applications with LLMs, using graphs to define the flow of execution.
Neatlogs provides seamless integration with LangGraph through automatic instrumentation:
- **Graph Execution Tracking**: Monitor graph execution, node transitions, and state changes
- **Node Monitoring**: Track individual node executions, inputs, outputs, and performance
- **Edge Tracking**: Capture edge traversals and conditional logic
- **Automatic Setup**: No code changes required - just initialize with `neatlogs.init()`
```python
import neatlogs
from langgraph.graph import StateGraph
# Initialize Neatlogs (that's all you need!)
neatlogs.init(api_key="your-api-key")
# Your LangGraph code works normally and gets tracked automatically
graph = StateGraph(...)
# ... define your graph
result = graph.invoke(...)
```
### Configuration Options
```python
neatlogs.init(
    api_key="your-api-key",
    tags=["tag1", "tag2"],
)
```
## Session Statistics
Get comprehensive insights into your LLM usage:
```python
stats = neatlogs.get_session_stats()
# Available metrics:
# - total_calls: Number of LLM API calls
# - total_tokens_input: Total input tokens
# - total_tokens_output: Total output tokens
# - total_cost: Total cost in USD
# - average_response_time: Average response time
# - provider_breakdown: Usage by provider
# - model_breakdown: Usage by model
```
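The two breakdown fields are nested mappings keyed by provider and model name. A small sketch of consuming the stats dictionary (the exact shape shown here is an assumption inferred from the field names above, not a documented schema):

```python
# Hypothetical stats payload, shaped like the fields listed above.
stats = {
    "total_calls": 3,
    "total_tokens_input": 2400,
    "total_tokens_output": 900,
    "total_cost": 0.0345,
    "average_response_time": 1.8,
    "provider_breakdown": {"openai": 2, "anthropic": 1},
    "model_breakdown": {"gpt-4o": 2, "claude-3-5-sonnet": 1},
}

def summarize(stats: dict) -> str:
    """Render a short per-provider usage summary from a stats dict."""
    lines = [f"{stats['total_calls']} calls, ${stats['total_cost']:.4f} total"]
    for provider, calls in sorted(stats["provider_breakdown"].items()):
        lines.append(f"  {provider}: {calls} call(s)")
    return "\n".join(lines)

print(summarize(stats))
```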
## Links

- [Documentation](https://docs.neatlogs.com/)
- [Homepage](https://github.com/NeatLogs/neatlogs)
- [Issues](https://github.com/NeatLogs/neatlogs/issues)