# Multi-Agent Debugger
A powerful Python package that uses multiple AI agents to debug API failures by analyzing logs, code, and user questions. Built with CrewAI, it supports LLM providers including OpenAI, Anthropic, Google, Ollama, and more.
## 🎥 Demo Video
Watch the multiagent-debugger in action:
[Watch the demo on YouTube](https://youtu.be/9VTe12iVQ-A?feature=shared)
## 🏗️ Architecture
The Multi-Agent Debugger uses a sophisticated architecture that combines multiple specialized AI agents working together to analyze and debug API failures.
### Core Agent Flow

### Detailed Architecture

## ✨ Features
### 🤖 Multi-Agent Architecture
- **Question Analyzer Agent**: Extracts key entities from natural language questions and classifies error types
- **Log Analyzer Agent**: Searches and filters logs for relevant information, extracts stack traces
- **Code Path Analyzer Agent**: Validates and analyzes code paths found in logs
- **Code Analyzer Agent**: Finds API handlers, dependencies, and error handling code
- **Root Cause Agent**: Synthesizes findings to determine failure causes and generates visual flowcharts
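These agents are orchestrated as a CrewAI crew. As a hedged sketch of what that wiring can look like (illustrative only, not the package's actual agent or task definitions), two of the agents above might be composed like this:

```python
from crewai import Agent, Crew, Task

# Illustrative only: simplified stand-ins for two of the agents described above.
question_analyzer = Agent(
    role="Question Analyzer",
    goal="Extract key entities (API routes, timestamps, error types) from the user's question",
    backstory="Turns natural-language debugging questions into structured queries.",
)
log_analyzer = Agent(
    role="Log Analyzer",
    goal="Search the configured log files for entries relevant to the structured query",
    backstory="Finds stack traces and error patterns in application logs.",
)

analyze_question = Task(
    description="Analyze: 'Why did my /api/users endpoint fail yesterday?'",
    expected_output="A structured query with route, time window, and error type",
    agent=question_analyzer,
)
analyze_logs = Task(
    description="Search the logs using the structured query produced by the previous task",
    expected_output="Relevant log excerpts and extracted stack traces",
    agent=log_analyzer,
)

# Tasks run sequentially by default; the real crew adds code and root-cause agents.
crew = Crew(agents=[question_analyzer, log_analyzer], tasks=[analyze_question, analyze_logs])
result = crew.kickoff()
```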
### 🔧 Comprehensive Analysis Tools
- **Log Analysis**: Enhanced grep, filtering, stack trace extraction, and error pattern analysis
- **Code Analysis**: API handler discovery, dependency mapping, error handler identification, multi-language support
- **Flowchart Generation**: Error flow, system architecture, decision trees, sequence diagrams, and debugging storyboards
- **Natural Language Processing**: Convert user questions into structured queries
### 🌐 Multi-Provider LLM Support
- **OpenAI**
- **Anthropic**
- **Google**
- **Ollama**
- **Azure OpenAI**
- **AWS Bedrock**
- **And 50+ more providers**
### Additional Features
- **Visual Flowcharts**: Mermaid diagrams for error propagation and system architecture
- **Copyable Output**: Clean, copyable flowchart code for easy sharing
- **Multi-language Support**: Python, JavaScript, Java, Go, Rust, and more
### 📊 Output Formats
- **Structured JSON**: Programmatic access to analysis results
- **Text Documents**: Human-readable reports saved to local files
- **Visual Flowcharts**: Mermaid diagrams for documentation and sharing
## 🚀 Installation
```bash
# From PyPI
pip install multiagent-debugger

# From source
git clone https://github.com/VishApp/multiagent-debugger.git
cd multiagent-debugger
pip install -e .
```
## ⚡ Quick Start
1. **Set up your configuration:**
```bash
multiagent-debugger setup
```
2. **Debug an API failure:**
```bash
multiagent-debugger debug "Why did my /api/users endpoint fail yesterday?"
```
3. **View generated files:**
- Analysis results in JSON format
- Text documents in current directory
- Visual flowcharts for documentation
## 🖥️ Command-Line Usage
### Debug Command
```
Usage: multiagent_debugger debug [OPTIONS] QUESTION

  Debug an API failure or error scenario with multi-agent assistance.

Arguments:
  QUESTION  The natural language question or debugging prompt.
            Example: 'find the common errors and the root-cause'

Options:
  -c, --config PATH             Path to config file (YAML)
  -v, --verbose                 Enable verbose output for detailed logs
  --mode [frequent|latest|all]  Log analysis mode:
                                  frequent: Find most common error patterns
                                  latest:   Focus on most recent errors
                                  all:      Analyze all available log lines
  --time-window-hours INT       Time window (hours) for log analysis
  --max-lines INT               Maximum log lines to analyze
  --code-path PATH              Path to source code directory/file for analysis
  -h, --help                    Show this message and exit

Examples:
  multiagent-debugger debug 'find the common errors and the root-cause' \
    --config ~/.config/multiagent-debugger/config.yaml --mode latest

  multiagent-debugger debug 'why did the upload to S3 fail?' \
    --mode frequent --time-window-hours 12 \
    --code-path /Users/myname/myproject/src

  multiagent-debugger debug 'analyze recent errors' \
    --code-path /path/to/specific/file.py
```
This command analyzes your logs, extracts error patterns and code paths, and provides root cause analysis with actionable solutions and flowcharts.
## ⚙️ Configuration
Create a `config.yaml` file (or use the setup command):
```yaml
# Paths to log files
log_paths:
  - "/var/log/myapp/app.log"
  - "/var/log/nginx/access.log"

# Path to source code directory or file for analysis (SECURITY FEATURE)
code_path: "/path/to/your/source/code"  # Restricts code analysis to this path only

# Log analysis options
analysis_mode: "frequent"  # frequent, latest, all
time_window_hours: 24      # analyze logs from last N hours
max_lines: 10000           # maximum log lines to analyze

# LLM configuration
llm:
  provider: openai         # or anthropic, google, ollama, etc.
  model_name: gpt-4
  temperature: 0.1
  # api_key: optional, can use environment variable

# Phoenix monitoring configuration (optional)
phoenix:
  enabled: true            # Enable/disable Phoenix monitoring
  host: "localhost"        # Phoenix host
  port: 6006               # Phoenix dashboard port
  endpoint: "http://localhost:6006/v1/traces"  # OTLP endpoint for traces
  launch_phoenix: true     # Launch Phoenix app locally
  headers: {}              # Additional headers for OTLP
```
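If you want to inspect or reuse the configuration programmatically, it can be loaded with PyYAML, which is already a dependency. A minimal sketch, assuming the keys shown above and the config path from the CLI example; the package's own config loader may differ:

```python
import os
import yaml  # PyYAML, listed in requirements.txt

# Path taken from the CLI example above; adjust to your own location.
CONFIG_PATH = os.path.expanduser("~/.config/multiagent-debugger/config.yaml")

with open(CONFIG_PATH) as fh:
    config = yaml.safe_load(fh)

# Fall back to the environment variable when no api_key is set in the file.
llm = config.get("llm", {})
api_key = llm.get("api_key") or os.environ.get("OPENAI_API_KEY")

print(llm.get("provider"), llm.get("model_name"), "api key set:", bool(api_key))
```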
### Code Path Security
The `code_path` configuration is a **security feature** that restricts code analysis to a specific directory or file:
```yaml
# Security: Only analyze code within this path
code_path: "/Users/myname/myproject/src"
```
**How it works:**
- When logs contain file paths (from stack traces or error messages), the system validates them against `code_path` (a sketch of this check follows this list)
- Files outside the configured path are **rejected** and not analyzed
- This prevents the system from analyzing sensitive system files or unrelated codebases
- Can be a directory (analyzes all source files within) or a specific file
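A minimal sketch of how such a containment check can work (illustrative only; the package's actual validation logic may differ):

```python
from pathlib import Path

def is_within_code_path(candidate: str, code_path: str) -> bool:
    """Return True if `candidate` resolves to a location inside `code_path`.

    `code_path` may be a directory or a single file, mirroring the config option.
    """
    target = Path(candidate).resolve()
    allowed = Path(code_path).resolve()
    if allowed.is_file():
        return target == allowed
    # Directory case: the target must be the directory itself or live under it.
    return target == allowed or allowed in target.parents

# A path pulled from a stack trace is rejected if it falls outside code_path.
print(is_within_code_path("/Users/myname/myproject/src/api/users.py",
                          "/Users/myname/myproject/src"))  # True
print(is_within_code_path("/etc/passwd",
                          "/Users/myname/myproject/src"))  # False
```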
**Use cases:**
- **Multi-project environments**: Restrict analysis to current project only
- **Security**: Prevent analysis of system files or sensitive directories
- **Focus**: Analyze only specific parts of large codebases
**CLI override:**
```bash
# Override config file code_path for this session
multiagent-debugger debug "question" --code-path /path/to/specific/project
```
### Custom Providers
The system supports various LLM providers including OpenRouter, Anthropic, Google, and others. See [Custom Providers Guide](docs/CUSTOM_PROVIDERS.md) for detailed configuration instructions.
### Environment Variables
Set the appropriate environment variable for your chosen provider:
- **OpenAI**: `OPENAI_API_KEY`
- **Anthropic**: `ANTHROPIC_API_KEY`
- **Google**: `GOOGLE_API_KEY`
- **Azure**: `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`
- **AWS**: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`
- See documentation for other providers
## 🔍 How It Works
### 1. Question Analysis
- Extracts key information like API routes, timestamps, and error types
- Classifies the error type (API, Database, File, Network, etc.)
- Structures the query for other agents (an illustrative example follows this list)
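For illustration only, the structured query handed to the downstream agents might look roughly like this; the field names here are hypothetical, not the package's documented schema:

```python
# Hypothetical structured query produced by the Question Analyzer agent.
structured_query = {
    "question": "Why did my /api/users endpoint fail yesterday?",
    "api_route": "/api/users",       # extracted entity
    "error_type": "API",             # classified category
    "time_window_hours": 24,         # derived from "yesterday"
    "keywords": ["fail", "users"],
}
print(structured_query)
```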
### 2. Log Analysis
- Searches through specified log files using enhanced grep
- Filters relevant log entries by time and pattern
- Extracts stack traces and error patterns
- **Dynamically extracts code paths** (file paths, line numbers, function names); see the extraction sketch after this list
- Validates code paths found in logs
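As an example of what this extraction step involves, a single regular expression can recover the file path, line number, and function name from a Python traceback line (a simplified sketch; real logs require more patterns and formats):

```python
import re

# Matches lines like:  File "/app/src/api/users.py", line 42, in get_users
TRACEBACK_LINE = re.compile(
    r'File "(?P<path>[^"]+)", line (?P<line>\d+), in (?P<func>\S+)'
)

log_line = '  File "/app/src/api/users.py", line 42, in get_users'
match = TRACEBACK_LINE.search(log_line)
if match:
    print(match.group("path"), match.group("line"), match.group("func"))
    # /app/src/api/users.py 42 get_users
```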
### 3. Code Analysis
- **Validates** that extracted file paths are within the configured `code_path` (security)
- Locates relevant API handlers and endpoints (see the discovery sketch after this list)
- Identifies dependencies and error handlers
- Maps the code structure and relationships
- Supports multiple programming languages (Python, JavaScript, Java, Go, Rust, etc.)
- **Rejects** analysis of files outside the configured code path
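As a rough illustration of the API handler discovery step, a scan for common Python route decorators (Flask or FastAPI style) could look like the following; the patterns are assumptions, and the package's actual multi-language analysis is more involved:

```python
import re
from pathlib import Path

# Matches decorators such as @app.route("/api/users") or @router.get("/api/users")
ROUTE_DECORATOR = re.compile(
    r'@\w+\.(route|get|post|put|delete)\(\s*["\'](?P<route>[^"\']+)'
)

def find_handlers(code_path: str, route: str):
    """Yield (file, line_no) pairs where a decorator registers the given route.

    Assumes `code_path` is a directory of Python sources.
    """
    for path in Path(code_path).rglob("*.py"):
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            m = ROUTE_DECORATOR.search(line)
            if m and m.group("route") == route:
                yield str(path), line_no
```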
### 4. Root Cause Analysis
- Synthesizes information from all previous agents
- Determines the most likely cause with confidence levels
- Generates creative narratives and metaphors
- Creates visual flowcharts for documentation
### 5. Output Generation
- Structured JSON for programmatic access (see the loading sketch after this list)
- Human-readable text documents
- Visual flowcharts in Mermaid format
- Copyable flowchart code for easy sharing
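Since the JSON report is meant for programmatic access, it can be consumed directly; a minimal sketch (the filename pattern and keys used here are assumptions, not a documented schema):

```python
import json
from pathlib import Path

# Pick up the most recent analysis report written to the current directory.
# The glob pattern and keys below are illustrative assumptions.
reports = sorted(Path(".").glob("*analysis*.json"))
if reports:
    report = json.loads(reports[-1].read_text())
    print(report.get("root_cause"))
    print(report.get("confidence"))
```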
## 📊 Phoenix Monitoring
The debugger includes built-in Phoenix monitoring for tracking agent execution, LLM usage, and performance metrics.
### View Monitoring Status
```bash
multiagent-debugger phoenix
```
This shows your Phoenix configuration and provides instructions for accessing the dashboard.
### Remote Server Access
When running the debugger on a remote server, use SSH port forwarding to access the Phoenix dashboard:
```bash
# On your local machine, create SSH tunnel
ssh -L 6006:localhost:6006 user@your-server
# Then visit http://localhost:6006 in your local browser
```
### Configuration
Phoenix monitoring is configured in your `config.yaml`:
```yaml
phoenix:
  enabled: true
  host: localhost
  port: 6006
  launch_phoenix: true
```
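For reference, this roughly corresponds to Arize Phoenix's standard setup, which the debugger wires up for you; a hedged sketch of doing the same by hand (API details may vary between Phoenix versions):

```python
# Requires arize-phoenix and arize-phoenix-otel (both in requirements.txt).
import phoenix as px
from phoenix.otel import register

session = px.launch_app()      # starts the local Phoenix dashboard (default port 6006)
tracer_provider = register(    # routes OpenTelemetry traces to Phoenix
    endpoint="http://localhost:6006/v1/traces",
)
print(session.url)
```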
### Features
- **Real-time Monitoring**: Track agent executions as they happen
- **LLM Usage Tracking**: Monitor token usage and costs across providers
- **Performance Metrics**: Analyze execution times and success rates
- **Visual Traces**: See the complete flow of agent interactions
- **Automatic Launch**: Starts automatically when you run debug commands
## 🛠️ Advanced Usage
### List Available Providers
```bash
multiagent-debugger list-providers
```
### List Models for a Provider
```bash
multiagent-debugger list-models openai
```
### Debug with Custom Config
```bash
multiagent-debugger debug "Question?" --config path/to/config.yaml
```
### Analyze Recent Errors Only
```bash
multiagent-debugger debug "What went wrong?" --mode latest --time-window-hours 2
```
### Analyze Large Log Files
```bash
multiagent-debugger debug "Find patterns" --max-lines 50000
```
### Restrict Code Analysis to Specific Path
```bash
# Only analyze code within /path/to/project directory
multiagent-debugger debug "What caused the error?" --code-path /path/to/project
# Analyze only a specific file
multiagent-debugger debug "Debug this file" --code-path /path/to/file.py
```
## 🧪 Development
```bash
# Create virtual environment
python package_builder.py venv
# Install development dependencies
python package_builder.py install
# Run tests
python package_builder.py test
# Build distribution
python package_builder.py dist
```
## 📋 Requirements
- **Python**: 3.8+
- **Dependencies**:
  - crewai>=0.28.0
  - pydantic>=2.0.0
  - And others (see requirements.txt)
## 📄 License
MIT License - see [LICENSE](LICENSE) for details.
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 🆘 Support
- **GitHub Issues**: [Report a bug](https://github.com/VishApp/multiagent-debugger/issues)
- **Documentation**: [Read more](https://github.com/VishApp/multiagent-debugger#readme)
## 🎯 Use Cases
- **API Debugging**: Quickly identify why API endpoints are failing
- **Production Issues**: Analyze logs and code to find root causes
- **Error Investigation**: Understand complex error chains and dependencies
- **Documentation**: Generate visual flowcharts for error propagation
- **Team Collaboration**: Share analysis results in multiple formats
- **Multi-language Projects**: Support for Python, JavaScript, Java, Go, Rust, and more
- **Time-based Analysis**: Focus on recent errors or specific time periods
- **Large Log Analysis**: Handle massive log files with configurable limits