# PraisonAI Bench
🚀 **A simple, powerful LLM benchmarking tool built with PraisonAI Agents**
Benchmark any LiteLLM-compatible model with automatic HTML extraction, model-specific output organization, and flexible test suite management.
## ✨ Key Features
- 🎯 **Any LLM Model** - OpenAI, Anthropic, Google, XAI, local models via LiteLLM
- 🔄 **Single Agent Design** - Your prompt becomes the instruction (no complex configs)
- 💾 **Auto HTML Extraction** - Automatically saves HTML code from responses
- 📁 **Smart Organization** - Model-specific output folders (`output/gpt-4o/`, `output/xai/grok-code-fast-1/`)
- 🎛️ **Flexible Testing** - Run single tests, full suites, or filter specific tests
- ⚡ **Modern Tooling** - Built with `pyproject.toml` and `uv` package manager
- 📊 **Comprehensive Results** - JSON metrics with timing, success rates, and metadata
## 🚀 Quick Start
### 1. Install with uv (Recommended)
```bash
# Clone the repository
git clone https://github.com/MervinPraison/praisonaibench
cd praisonaibench
# Install with uv
uv sync
# Or install in development mode
uv pip install -e .
```
### 2. Alternative: Install with pip
```bash
pip install -e .
```
### 3. Set Your API Keys
```bash
# OpenAI
export OPENAI_API_KEY=your_openai_key
# XAI (Grok)
export XAI_API_KEY=your_xai_key
# Anthropic
export ANTHROPIC_API_KEY=your_anthropic_key
# Google
export GOOGLE_API_KEY=your_google_key
```
### 4. Run Your First Benchmark
```python
from praisonaibench import Bench

# Create benchmark suite
bench = Bench()

# Run a simple test
result = bench.run_single_test("What is 2+2?")
print(result['response'])

# Run with a specific model
result = bench.run_single_test(
    "Create a rotating cube HTML file",
    model="xai/grok-code-fast-1"
)

# Get summary
summary = bench.get_summary()
print(summary)
```
## 📁 Project Structure
```
praisonaibench/
├── pyproject.toml           # Modern Python packaging
├── src/praisonaibench/      # Source code
│   ├── __init__.py          # Main imports
│   ├── bench.py             # Core benchmarking engine
│   ├── agent.py             # LLM agent wrapper
│   ├── cli.py               # Command line interface
│   └── version.py           # Version info
├── examples/                # Example configurations
│   ├── threejs_simulation_suite.yaml
│   └── config_example.yaml
└── output/                  # Generated results
    ├── gpt-4o/              # Model-specific HTML files
    ├── xai/grok-code-fast-1/
    └── benchmark_results_*.json
```
## 💻 CLI Usage
### Basic Commands
```bash
# Single test with default model
praisonaibench --test "Explain quantum computing"
# Single test with specific model
praisonaibench --test "Write a poem" --model gpt-4o
# Use any LiteLLM-compatible model
praisonaibench --test "Create HTML" --model xai/grok-code-fast-1
praisonaibench --test "Write code" --model gemini/gemini-1.5-flash-8b
praisonaibench --test "Analyze data" --model claude-3-sonnet-20240229
```
### Test Suites
```bash
# Run entire test suite
praisonaibench --suite examples/threejs_simulation_suite.yaml
# Run specific test from suite
praisonaibench --suite examples/threejs_simulation_suite.yaml --test-name "rotating_cube_simulation"
# Run suite with specific model (overrides individual test models)
praisonaibench --suite tests.yaml --model xai/grok-code-fast-1
```
### Cross-Model Comparison
```bash
# Compare across multiple models
praisonaibench --cross-model "Write a poem" --models gpt-4o,gpt-3.5-turbo,xai/grok-code-fast-1
```
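The same comparison can be scripted in Python using the documented `run_single_test` API. A minimal sketch (the loop and the printed summary are illustrative, not a library feature):

```python
from praisonaibench import Bench

bench = Bench()
models = ["gpt-4o", "gpt-3.5-turbo", "xai/grok-code-fast-1"]

# Run the same prompt against each model and collect the results
results = {}
for model in models:
    results[model] = bench.run_single_test("Write a poem", model=model)

# Compare status and timing across models
for model, result in results.items():
    print(f"{model}: {result['status']} in {result['execution_time']:.2f}s")
```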
### Extract HTML from Results
```bash
# Extract HTML from existing benchmark results
praisonaibench --extract output/benchmark_results_20250829_160426.json
# → Processes the JSON file and saves any HTML content to .html files
# Works with any benchmark results JSON file
praisonaibench --extract my_results.json
```
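Under the hood, extraction amounts to scanning each response for HTML and writing it to a model-specific folder. A simplified sketch of the idea (not the tool's actual implementation; this version only handles markdown-fenced blocks):

```python
import json
import re
from pathlib import Path

def extract_html(results_path: str, out_dir: str = "output") -> None:
    """Save ```html code blocks from a results JSON file (simplified sketch)."""
    results = json.loads(Path(results_path).read_text())
    for entry in results:
        # Look for a markdown-fenced HTML block in the response
        match = re.search(r"```html\s*(.*?)```", entry["response"], re.DOTALL)
        if match:
            # Mirror praisonaibench's model-specific folder layout
            target = Path(out_dir) / entry["model"] / f"{entry['test_name']}.html"
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(match.group(1).strip())

extract_html("output/benchmark_results_20250829_160426.json")
```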
### HTML Generation Examples
```bash
# Generate Three.js simulation (auto-saves HTML)
praisonaibench --test "Create a rotating cube HTML with Three.js" --model gpt-4o
# → Saves to: output/gpt-4o/test_cube.html
# Run Three.js test suite
praisonaibench --suite examples/threejs_simulation_suite.yaml --model xai/grok-code-fast-1
# → Saves to: output/xai/grok-code-fast-1/rotating_cube_simulation.html
```
## 📋 Test Suite Format
### Basic Test Suite (`tests.yaml`)
```yaml
tests:
- name: "math_test"
prompt: "What is 15 * 23?"
- name: "creative_test"
prompt: "Write a short story about a robot"
- name: "model_specific_test"
prompt: "Explain quantum physics"
model: "gpt-4o"
```
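Suites can also be driven from Python. A sketch that loads the YAML and runs each test through the documented `run_single_test` call (the CLI's `--suite` handling may differ in details):

```python
import yaml
from praisonaibench import Bench

bench = Bench()
with open("tests.yaml") as f:
    suite = yaml.safe_load(f)

for test in suite["tests"]:
    # Per-test "model" is optional; omit it to use the default model
    if "model" in test:
        result = bench.run_single_test(test["prompt"], model=test["model"])
    else:
        result = bench.run_single_test(test["prompt"])
    print(f"{test['name']}: {result['status']}")
```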
### Advanced Test Suite with Full Config Support
```yaml
# Global LLM configuration (applies to all tests)
config:
  max_tokens: 4000
  temperature: 0.7
  top_p: 0.9
  frequency_penalty: 0.0
  presence_penalty: 0.0
  # Any LiteLLM-compatible parameter is supported!

tests:
  - name: "creative_writing"
    prompt: "Write a detailed sci-fi story"
    model: "gpt-4o"

  - name: "code_generation"
    prompt: "Create a Python web scraper"
    model: "xai/grok-code-fast-1"
```
### Three.js HTML Generation Suite
```yaml
# examples/threejs_simulation_suite.yaml
tests:
  - name: "rotating_cube_simulation"
    prompt: |
      Create a complete HTML file with Three.js that displays a rotating 3D cube.
      The cube should have different colored faces, rotate continuously, and include proper lighting.
      The HTML file should be self-contained with Three.js loaded from CDN.
      Include camera controls for user interaction.
      Save the output as 'rotating_cube.html'.

  - name: "particle_system"
    prompt: |
      Create an HTML file with Three.js showing an animated particle system.
      Include 1000+ particles with random colors, movement, and physics.
      Add mouse interaction to influence particle behavior.

  - name: "terrain_simulation"
    prompt: |
      Create a Three.js HTML file with a procedurally generated terrain landscape.
      Include realistic textures, lighting, and a first-person camera.
      Add fog effects and animated elements.

  - name: "solar_system"
    prompt: |
      Create a Three.js solar system simulation in HTML.
      Include the sun, planets with realistic orbits, textures, and lighting.
      Add controls to speed up/slow down time.
```
## 🔧 Configuration
### Basic Configuration (`config.yaml`)
```yaml
# Default model (can be overridden per test)
default_model: "gpt-4o"
# Output settings
output_format: "json"
save_results: true
output_dir: "output"
# Performance settings
max_retries: 3
timeout: 60
```
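When scripting, the same settings can be read and applied by hand. A hedged sketch (whether `Bench()` loads `config.yaml` on its own isn't shown here, so this passes the default model explicitly):

```python
import yaml
from praisonaibench import Bench

with open("config.yaml") as f:
    config = yaml.safe_load(f)

bench = Bench()
# Apply the configured default model via the documented API
result = bench.run_single_test("Explain recursion briefly",
                               model=config["default_model"])
print(result["response"])
```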
### Supported Models
PraisonAI Bench supports **any LiteLLM-compatible model**:
```yaml
# OpenAI Models
- gpt-4o
- gpt-4o-mini
- gpt-3.5-turbo
# Anthropic Models
- claude-3-opus-20240229
- claude-3-sonnet-20240229
- claude-3-haiku-20240307
# Google Models
- gemini/gemini-1.5-pro
- gemini/gemini-1.5-flash
- gemini/gemini-1.5-flash-8b
# XAI Models
- xai/grok-beta
- xai/grok-code-fast-1
# Local Models (via LM Studio, Ollama, etc.)
- ollama/llama2
- openai/gpt-3.5-turbo # with OPENAI_API_BASE set
```
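For local OpenAI-compatible servers (LM Studio, Ollama's OpenAI endpoint, etc.), point `OPENAI_API_BASE` at the server before running. A sketch; the URL and model name are placeholders for your local setup:

```python
import os
from praisonaibench import Bench

# Point LiteLLM's OpenAI-compatible client at a local server (placeholder URL)
os.environ["OPENAI_API_BASE"] = "http://localhost:1234/v1"

bench = Bench()
result = bench.run_single_test("Say hello", model="openai/gpt-3.5-turbo")
print(result["response"])
```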
## 📊 Results & Output
### Automatic HTML Extraction
When LLM responses contain HTML code blocks, they're automatically extracted and saved:
```
output/
├── gpt-4o/
│   ├── rotating_cube_simulation.html
│   └── particle_system.html
├── xai/
│   └── grok-code-fast-1/
│       ├── terrain_simulation.html
│       └── solar_system.html
└── benchmark_results_20250829_160426.json
```
### JSON Results Format
```json
[
  {
    "test_name": "rotating_cube_simulation",
    "prompt": "Create a complete HTML file with Three.js...",
    "response": "<!DOCTYPE html>\n<html>\n...",
    "model": "xai/grok-code-fast-1",
    "agent_name": "BenchAgent",
    "execution_time": 8.24,
    "status": "success",
    "timestamp": "2025-08-29 16:04:26"
  }
]
```
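Because results are plain JSON, post-hoc analysis is straightforward. A small sketch that recomputes the summary numbers from a saved results file:

```python
import json
from pathlib import Path

results = json.loads(Path("output/benchmark_results_20250829_160426.json").read_text())

# Recompute the headline summary statistics
total = len(results)
successes = sum(1 for r in results if r["status"] == "success")
avg_time = sum(r["execution_time"] for r in results) / total

print(f"Total tests: {total}")
print(f"Success rate: {100 * successes / total:.1f}%")
print(f"Average time: {avg_time:.2f}s")
```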
### Summary Statistics
```bash
📊 Summary:
   Total tests: 4
   Success rate: 100.0%
   Average time: 12.34s
Results saved to: output/benchmark_results_20250829_160426.json
```
## 🎯 Advanced Features
### 🔄 **Universal Model Support**
- Works with **any LiteLLM-compatible model**
- No hardcoded model restrictions
- Automatic API key detection
### 💾 **Smart HTML Handling**
- Auto-detects HTML in multiple formats:
  - Markdown-wrapped HTML (```html...```)
  - Truncated HTML blocks (incomplete responses)
  - Raw HTML content (direct HTML responses)
- Extracts and saves as `.html` files automatically
- Organizes by model in separate folders
- Extract HTML from existing benchmark results with `--extract`
- Perfect for Three.js, React, or any web development benchmarks
### 🎛️ **Flexible Test Management**
- Run entire suites or filter specific tests
- Override models per test or globally
- Cross-model comparisons with detailed metrics
### ⚡ **Modern Development**
- Built with `pyproject.toml` (no legacy `setup.py`)
- Optimized for `uv` package manager
- Fast dependency resolution and installation
### 🏗️ **Simple Architecture**
- **Single Agent Design** - Your prompt becomes the instruction
- **No Complex Configs** - Just write your test prompts
- **Minimal Dependencies** - Only what you need
## 🚀 Use Cases
### Web Development Benchmarking
```bash
# Test HTML/CSS/JS generation across models
praisonaibench --suite web_dev_suite.yaml --model gpt-4o
```
### Code Generation Comparison
```bash
# Compare coding abilities
praisonaibench --cross-model "Write a Python web scraper" --models gpt-4o,claude-3-sonnet-20240229,xai/grok-code-fast-1
```
### Creative Content Testing
```bash
# Test creative writing
praisonaibench --test "Write a sci-fi short story" --model gemini/gemini-1.5-pro
```
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Install dependencies: `uv sync`
4. Make your changes
5. Run tests: `uv run pytest`
6. Submit a pull request
## 📄 License
MIT License - see LICENSE file for details.
---
**Perfect for developers who need powerful, flexible LLM benchmarking with zero complexity!** 🚀