| Field | Value |
|---|---|
| Name | result-parser-agent |
| Version | 0.2.2 |
| Summary | A deep agent for extracting metrics from raw result files using LangGraph and intelligent parsing |
| Homepage | https://github.com/Infobellit-Solutions-Pvt-Ltd/result-parser-agent |
| Upload time | 2025-08-05 14:26:30 |
| Requires Python | <3.13,>=3.11 |
| License | MIT |
| Keywords | agent, ai, langgraph, metrics, parsing |
# 🎯 Results Parser Agent
A powerful, intelligent agent for extracting metrics from raw result files using LangGraph and AI-powered parsing. The agent automatically analyzes unstructured result files and extracts specific metrics into structured JSON output with high accuracy.
## 🚀 Features
- **🤖 AI-Powered Parsing**: Uses advanced LLMs (GROQ, OpenAI, Anthropic, Google Gemini, Ollama) for intelligent metric extraction
- **📁 Flexible Input**: Process single files or entire directories of result files
- **🎯 Pattern Recognition**: Automatically detects and adapts to different file formats and structures
- **⚙️ Rich Configuration**: YAML/JSON configuration with environment variable support
- **📊 Structured Output**: Direct output in Pydantic schemas for easy integration
- **🛠️ Professional CLI**: Feature-rich command-line interface with comprehensive options
- **🔧 Python API**: Easy integration into existing Python applications
- **🔄 Error Recovery**: Robust error handling and retry mechanisms
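The structured output described above can be pictured as a small JSON document keyed by metric. A sketch of one plausible shape — the field names here are illustrative only, not the package's actual Pydantic schema:

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical shape for one extracted metric. The real package returns
# Pydantic models; these field names are illustrative assumptions.
@dataclass
class ExtractedMetric:
    name: str          # metric that was requested, e.g. "RPS"
    value: float       # value found in the raw result file
    source_file: str   # file the value was extracted from

@dataclass
class ParseResult:
    metrics: list = field(default_factory=list)

    def to_json(self, indent: int = 2) -> str:
        return json.dumps(
            {"metrics": [asdict(m) for m in self.metrics]}, indent=indent
        )

result = ParseResult(metrics=[ExtractedMetric("RPS", 1250.0, "run1.txt")])
print(result.to_json())
```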
## 📦 Installation
### Quick Install (Recommended)
```bash
pip install result-parser-agent
```
### Development Install
```bash
# Clone the repository
git clone https://github.com/Infobellit-Solutions-Pvt-Ltd/result-parser-agent.git
cd result-parser-agent
# Install with uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync
uv pip install -e .
# Or install with pip
pip install -e .
```
## 🎯 Quick Start
### 1. Set up your API key
```bash
# For GROQ (default - recommended for speed and reliability)
export GROQ_API_KEY="your-groq-api-key-here"
# For OpenAI
export OPENAI_API_KEY="your-openai-api-key-here"
# For Anthropic
export ANTHROPIC_API_KEY="your-anthropic-api-key-here"
# For Google Gemini
export GOOGLE_API_KEY="your-google-api-key-here"
```
### 2. Use the CLI
```bash
# Parse a directory of result files
result-parser --dir ./benchmark_results --metrics "RPS,latency,throughput" --output results.json
# Parse a single file with custom LLM
result-parser --file ./specific_result.txt --metrics "accuracy,precision" --provider openai --model gpt-4
# Use YAML configuration file
result-parser --config ./my_config.yaml --file ./results.txt --metrics "RPS,throughput"
# Override specific settings
result-parser --dir ./results --metrics "RPS" --provider groq --model llama3.1-70b-versatile --temperature 0.2
# Verbose output with debug info
result-parser --dir ./results --metrics "RPS" --verbose
# Custom output file
result-parser --file ./results.txt --metrics "throughput,latency" --output my_results.json
```
### 3. Use the Python API
```python
from result_parser_agent import (
    ResultsParserAgent,
    get_groq_config,
    get_openai_config,
    load_config_from_file,
    modify_config,
)

# Method 1: Use pre-configured provider configs
config = get_groq_config(
    model="llama3.1-8b-instant",
    metrics=["RPS", "latency", "throughput"],
)

# Method 2: Load from YAML file
config = load_config_from_file("./my_config.yaml")

# Method 3: Modify default config
config = modify_config(
    provider="openai",
    model="gpt-4",
    temperature=0.2,
    metrics=["accuracy", "precision", "recall"],
)

# Initialize agent
agent = ResultsParserAgent(config)

# Parse results (file or directory). parse_results is a coroutine, so
# call it inside an async function or drive it with asyncio.run(...).
result_update = await agent.parse_results(
    input_path="./benchmark_results",  # or "./results.txt"
    metrics=["RPS", "latency", "throughput"],
)

# Output structured data
print(result_update.json(indent=2))
```
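Since `parse_results` is a coroutine, scripts that are not already running inside an event loop must drive it with `asyncio.run`. A minimal sketch of that pattern, using a stand-in coroutine in place of the real agent call:

```python
import asyncio

# Stand-in for agent.parse_results -- the real call is
# `await agent.parse_results(input_path=..., metrics=...)` and returns
# a Pydantic model rather than this plain dict.
async def parse_results(input_path: str, metrics: list) -> dict:
    return {"input_path": input_path, "metrics": metrics}

async def main() -> dict:
    return await parse_results("./benchmark_results", ["RPS", "latency"])

# asyncio.run creates the event loop, runs main() to completion, and
# closes the loop -- the standard entry point for async library calls.
result = asyncio.run(main())
print(result["metrics"])
```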
## 📋 Configuration
### Configuration File Example
```yaml
# config.yaml
agent:
  # LLM configuration
  llm:
    provider: "groq"              # groq, openai, anthropic, google, ollama
    model: "llama3.1-8b-instant"  # Fast and efficient for parsing tasks
    api_key: null                 # Set to null to use environment variable
    temperature: 0.1              # Temperature for LLM responses
    max_tokens: 4000              # Maximum tokens for responses

  # Agent behavior
  max_retries: 3
  chunk_size: 2000
  timeout: 300

parsing:
  # Metrics to extract from result files
  metrics:
    - "RPS"
    - "latency"
    - "throughput"
    - "accuracy"
    - "precision"
    - "recall"
    - "f1_score"

  # Parsing options
  case_sensitive: false
  fuzzy_match: true
  min_confidence: 0.7

output:
  format: "json"
  pretty_print: true
  include_metadata: true

logging:
  level: "INFO"
  format: "{time} | {level} | {message}"
  file: null
```
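The `case_sensitive`, `fuzzy_match`, and `min_confidence` options suggest approximate matching of labels found in result files against the requested metric names. One way such matching could work — illustrative only, using the standard library's `difflib`; the package's actual matcher may differ:

```python
from difflib import SequenceMatcher

def match_metric(label, targets, case_sensitive=False,
                 fuzzy_match=True, min_confidence=0.7):
    """Return the best-matching target metric name, or None.

    Illustrative sketch of the config options above, not the
    package's real algorithm.
    """
    probe = label if case_sensitive else label.lower()
    best, best_score = None, 0.0
    for target in targets:
        cand = target if case_sensitive else target.lower()
        if probe == cand:          # exact match always wins
            return target
        if fuzzy_match:            # otherwise score by similarity ratio
            score = SequenceMatcher(None, probe, cand).ratio()
            if score > best_score:
                best, best_score = target, score
    # only accept a fuzzy hit above the confidence threshold
    return best if best_score >= min_confidence else None

print(match_metric("Latency (ms)", ["RPS", "latency"]))
```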
### Environment Variables
You can also configure the agent using environment variables:
```bash
# API Keys
export GOOGLE_API_KEY="your-google-api-key-here"
export OPENAI_API_KEY="your-openai-api-key-here"
export ANTHROPIC_API_KEY="your-anthropic-api-key-here"
export GROQ_API_KEY="your-groq-api-key-here"
# Configuration
export PARSER_AGENT__LLM__PROVIDER="google"
export PARSER_AGENT__LLM__MODEL="gemini-2.0-flash"
export PARSER_PARSING__METRICS='["RPS", "latency", "throughput"]'
export PARSER_OUTPUT__FORMAT="json"
```
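The `PARSER_`-prefixed names with a `__` delimiter appear to map onto the nested config keys (a pydantic-settings-style convention). A sketch of how such variables could be folded into a nested dictionary — illustrative of the naming scheme only; the package's own settings loader may behave differently (e.g. JSON-decoding list values):

```python
def env_to_nested(environ, prefix="PARSER_", delimiter="__"):
    """Fold PARSER_A__B__C=x style variables into {'a': {'b': {'c': 'x'}}}."""
    config = {}
    for key, value in environ.items():
        if not key.startswith(prefix):
            continue  # ignore unrelated environment variables
        parts = key[len(prefix):].lower().split(delimiter)
        node = config
        for part in parts[:-1]:           # walk/create intermediate dicts
            node = node.setdefault(part, {})
        node[parts[-1]] = value           # set the leaf value
    return config

env = {
    "PARSER_AGENT__LLM__PROVIDER": "google",
    "PARSER_OUTPUT__FORMAT": "json",
    "HOME": "/home/user",
}
print(env_to_nested(env))
```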
## 🛠️ CLI Reference
### Command Options
```bash
result-parser [OPTIONS]

Options:
  -d, --dir TEXT       Directory containing result files to parse (use --dir OR --file)
  -f, --file TEXT      Single result file to parse (use --dir OR --file)
  -m, --metrics TEXT   Comma-separated list of metrics to extract
                       (required, e.g. 'RPS,latency,throughput')
  -o, --output PATH    Output JSON file path (default: results.json)
  -v, --verbose        Enable verbose logging
  --log-level TEXT     Logging level [default: INFO]
  --pretty-print       Pretty print JSON output [default: True]
  --no-pretty-print    Disable pretty printing
  --help               Show this message and exit
```
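`--metrics` takes a single comma-separated string; the split the CLI presumably performs before handing the names to the agent can be sketched as:

```python
def parse_metrics(raw: str) -> list:
    """Turn 'RPS, latency,throughput' into ['RPS', 'latency', 'throughput'].

    Illustrative helper; the CLI's actual parsing code is not shown here.
    """
    # Split on commas, trim surrounding whitespace, drop empty entries.
    return [m.strip() for m in raw.split(",") if m.strip()]

print(parse_metrics("RPS, latency,throughput"))
```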
### Usage Examples
```bash
# Parse all files in a directory
result-parser --dir ./benchmark_results --metrics "RPS,latency" --output results.json

# Parse a single file
result-parser --file ./specific_result.txt --metrics "accuracy,precision"

# Verbose output for debugging
result-parser --dir ./results --metrics "RPS" --verbose

# Custom output file
result-parser --file ./results.txt --metrics "throughput,latency" --output my_results.json

# Compact JSON output
result-parser --dir ./results --metrics "accuracy" --no-pretty-print
```