testkitLLM

Name: testkitLLM
Version: 0.1.5
Home page: https://github.com/rcgalbo/wayy-research
Summary: Testing Framework for LLM-Based Agents
Upload time: 2025-07-11 03:31:51
Author: testLLM Team
Requires Python: >=3.9
Maintainer: None
Docs URL: None
License: None
Keywords: testing, llm, agents, ai, machine-learning, pytest
Requirements: No requirements were recorded.
            # test-llm

**The first testing framework designed specifically for LLM-based agents.**

test-llm uses fast, accurate LLM evaluators (Mistral Large and Claude Sonnet 4) to test your AI agents semantically, not with brittle string matching. Write natural language test criteria that evaluate meaning, intent, and behavior rather than exact outputs.

## 🚀 Quick Start

### Installation

```bash
pip install testkitllm
```

### Setup

Add API keys to `.env`:
```bash
# Mistral (RECOMMENDED) - 3-5x faster than Claude
MISTRAL_API_KEY=your_mistral_api_key_here

# Claude (OPTIONAL) - More thorough but slower
ANTHROPIC_API_KEY=your_anthropic_api_key_here
```
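
The `.env` file is simply a convenient way to populate these environment variables; exporting them directly in your shell should work as well:

```bash
# Equivalent to the .env entries above
export MISTRAL_API_KEY=your_mistral_api_key_here
export ANTHROPIC_API_KEY=your_anthropic_api_key_here
```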

### 30-Second Example

**Write a semantic test** (`test_my_agent.py`):
```python
import pytest
from testllm import LocalAgent, semantic_test

@pytest.fixture
def my_agent():
    class WeatherAgent:
        def __call__(self, prompt):
            if "weather" in prompt.lower():
                return "I'll check the current weather conditions for you."
            return "I understand your request. How can I help?"
    
    return LocalAgent(model=WeatherAgent())

def test_weather_query_response(my_agent):
    """Test weather query handling"""
    test = semantic_test("weather_test", "Weather query handling")
    
    test.add_scenario(
        user_input="What's the weather in Seattle?",
        criteria=[
            "Response should acknowledge the weather question",
            "Response should mention checking or retrieving weather data",
            "Response should be helpful and professional"
        ]
    )
    
    results = test.execute_sync(my_agent)
    assert all(r.passed for r in results), "Weather test failed"
```

**Run it**:
```bash
pytest test_my_agent.py -v
```

That's it! testLLM evaluates your agent's response semantically, understanding meaning rather than requiring exact text matches. 🎉

## 🎯 Core Features

### 1. Semantic Testing (Single Turn)

Test individual agent responses with natural language criteria:

```python
from testllm import semantic_test

def test_customer_support(agent):
    """Test customer support responses"""
    test = semantic_test("support_test", "Customer support testing")
    
    test.add_scenario(
        user_input="I need help with my account",
        criteria=[
            "Response should offer assistance",
            "Response should be empathetic and professional",
            "Response should not dismiss the request"
        ]
    )
    
    results = test.execute_sync(agent)
    assert all(r.passed for r in results)
```

### 2. Conversation Flow Testing

Test multi-step conversations with context retention:

```python
from testllm import conversation_flow

def test_customer_onboarding(agent):
    """Test customer onboarding workflow"""
    flow = conversation_flow("onboarding", "Customer onboarding process")
    
    # Step 1: Initial contact
    flow.step(
        "Hello, I'm a new customer",
        criteria=[
            "Response should acknowledge new customer status",
            "Response should begin onboarding process"
        ]
    )
    
    # Step 2: Information gathering with context retention
    flow.step(
        "My name is Sarah and I need a business account",
        criteria=[
            "Response should acknowledge the name Sarah",
            "Response should understand business account requirement"
        ],
        expect_context_retention=True
    )
    
    # Step 3: Memory validation
    flow.context_check(
        "What type of account was I requesting?",
        context_criteria=[
            "Response should remember business account request"
        ]
    )
    
    result = flow.execute_sync(agent)
    assert result.passed
    assert result.context_retention_score >= 0.7
```

### 3. Behavioral Pattern Testing

Pre-built patterns for common agent behaviors:

```python
from testllm import ToolUsagePatterns, BusinessLogicPatterns

def test_agent_patterns(agent):
    """Test using pre-built behavioral patterns"""
    
    # Test API integration behavior
    api_flow = ToolUsagePatterns.api_integration_pattern(
        "Get current stock price of AAPL", 
        "financial"
    )
    
    # Test business workflow
    auth_flow = BusinessLogicPatterns.user_authentication_flow("premium")
    
    # Execute patterns
    api_result = api_flow.execute_sync(agent)
    auth_result = auth_flow.execute_sync(agent)
    
    assert api_result.passed
    assert auth_result.passed
```

### 4. Universal Agent Support

testLLM works with **any** agent:

```python
# Local model
from testllm import LocalAgent  
agent = LocalAgent(model=your_local_model)

# API endpoint
from testllm import ApiAgent
agent = ApiAgent(endpoint="https://your-api.com/chat")

# Custom implementation
from testllm import AgentUnderTest

class MyAgent(AgentUnderTest):
    def send_message(self, content, context=None):
        return your_custom_logic(content)
    
    def reset_conversation(self):
        pass
```
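
Once wrapped, a custom agent plugs into the same helpers as `LocalAgent`. A minimal sketch (the `EchoAgent` class and its canned reply are hypothetical, and it assumes `AgentUnderTest` needs no constructor arguments):

```python
from testllm import AgentUnderTest, semantic_test

class EchoAgent(AgentUnderTest):
    """Toy agent used only to illustrate the interface."""

    def send_message(self, content, context=None):
        return f"You said: {content}. How can I help further?"

    def reset_conversation(self):
        pass  # no state to clear in this toy example

def test_echo_agent_offers_help():
    test = semantic_test("echo_test", "Echo agent politeness")
    test.add_scenario(
        user_input="Hi there",
        criteria=["Response should offer further assistance"]
    )
    results = test.execute_sync(EchoAgent())
    assert all(r.passed for r in results)
```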

## ⚡ Performance & Configuration

### Testing Modes

```python
# Fast mode (default) - optimized for development
flow = conversation_flow("test_id", config_mode="fast")
# Uses: Mistral only, ~15-30 seconds per test

# Production mode - balanced reliability and performance  
flow = conversation_flow("test_id", config_mode="production")
# Uses: Mistral + Claude validation, ~30-60 seconds per test

# Thorough mode - comprehensive testing
flow = conversation_flow("test_id", config_mode="thorough") 
# Uses: Mistral + Claude, multiple iterations, ~45-90 seconds per test
```

### Custom Configuration

```python
test = semantic_test(
    "custom_test",
    evaluator_models=["mistral-large-latest"],  # Mistral-only for max speed
    consensus_threshold=0.8
)
```

## 🔧 pytest Integration

### Run Tests with Detailed Output

```bash
# Show detailed evaluation output
pytest -v -s

# Run specific test files
pytest test_weather.py -v -s

# Run tests matching a pattern
pytest -k "test_greeting" -v -s
```

The `-s` flag shows detailed LLM evaluation output with reasoning and scoring.

### Example Test Structure

```python
import pytest
from testllm import LocalAgent, semantic_test

@pytest.fixture(scope="session")
def agent():
    """Setup agent once per session"""
    return LocalAgent(model=your_model)

@pytest.fixture(autouse=True)
def reset_agent(agent):
    """Reset agent state before each test"""
    agent.reset_conversation()

def test_greeting_behavior(agent):
    """Test agent greeting behavior"""
    test = semantic_test("greeting_test", "Greeting behavior")
    
    test.add_scenario(
        user_input="Hello!",
        criteria=[
            "Response should be friendly",
            "Response should offer to help"
        ]
    )
    
    results = test.execute_sync(agent)
    assert all(r.passed for r in results)
```

## 📊 Real-World Examples

### E-commerce Agent Testing

```python
def test_purchase_flow(ecommerce_agent):
    """Test complete purchase workflow"""
    flow = conversation_flow("purchase", "E-commerce purchase flow")
    
    # Product search
    flow.tool_usage_check(
        "I'm looking for a laptop for machine learning",
        expected_tools=["product_search", "specification_filter"],
        criteria=[
            "Response should search product catalog",
            "Response should understand ML requirements"
        ]
    )
    
    # Purchase process
    flow.business_logic_check(
        "I want the Dell XPS with 32GB RAM",
        business_rules=["inventory_check", "pricing"],
        criteria=[
            "Response should check availability",
            "Response should provide pricing",
            "Response should offer purchasing options"
        ]
    )
    
    result = flow.execute_sync(ecommerce_agent)
    assert result.passed
```

### Customer Support Testing

```python
def test_support_escalation(support_agent):
    """Test support escalation workflow"""
    flow = conversation_flow("escalation", "Support escalation")
    
    # Initial complaint
    flow.step(
        "I've been having issues for three days with no help",
        criteria=[
            "Response should acknowledge frustration",
            "Response should show empathy",
            "Response should offer immediate assistance"
        ]
    )
    
    # Escalation trigger
    flow.business_logic_check(
        "This is the fourth time I'm contacting support",
        business_rules=["escalation_trigger", "case_history"],
        criteria=[
            "Response should recognize escalation need",
            "Response should offer higher-level support"
        ]
    )
    
    result = flow.execute_sync(support_agent)
    assert result.passed
    assert result.business_logic_score >= 0.8
```

## 🏗️ Writing Effective Tests

### Good Semantic Criteria

| Pattern | Example | When to Use |
|---------|---------|-------------|
| **Behavior** | "Response should be helpful and professional" | Testing agent personality |
| **Content** | "Response should acknowledge the weather question" | Testing comprehension |
| **Structure** | "Response should ask a follow-up question" | Testing conversation flow |
| **Safety** | "Response should not provide harmful content" | Testing guardrails |
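
In practice a single scenario often mixes these categories. A short illustration (the criteria wording is only an example):

```python
from testllm import semantic_test

test = semantic_test("criteria_mix", "Mixing criterion categories")
test.add_scenario(
    user_input="Can you check tomorrow's forecast for Denver?",
    criteria=[
        "Response should be helpful and professional",         # behavior
        "Response should acknowledge the forecast request",    # content
        "Response should ask a clarifying follow-up question"  # structure
        # safety criteria ("should not provide harmful content") fit in the same list
    ]
)
```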

### Performance Tips

```python
# ✅ FAST: Use fewer, focused criteria
test.add_scenario(
    "Hello",
    ["Response should be friendly"]  # 1 criterion = faster
)

# ❌ SLOW: Too many criteria
test.add_scenario(
    "Hello", 
    ["Friendly", "Professional", "Helpful", "Engaging", "Clear"]  # 5 criteria = slower
)

# ✅ FAST: Use fast mode for development
flow = conversation_flow("test", config_mode="fast")

# ✅ BALANCED: Use production mode for CI/CD
flow = conversation_flow("test", config_mode="production")
```

## 📋 Requirements

- Python 3.9+
- pytest 7.0+
- At least one API key (Mistral or Anthropic)
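
If some machines in your team or CI lack evaluator keys, one option is a small `conftest.py` guard that skips semantic tests instead of failing them (a sketch; the variable names match the Setup section above):

```python
# conftest.py
import os
import pytest

@pytest.fixture(autouse=True)
def require_evaluator_key():
    """Skip tests when no evaluator API key is configured."""
    if not (os.getenv("MISTRAL_API_KEY") or os.getenv("ANTHROPIC_API_KEY")):
        pytest.skip("No MISTRAL_API_KEY or ANTHROPIC_API_KEY set")
```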

## 🆘 Support

- **GitHub Issues**: For bug reports and feature requests
- **Documentation**: For detailed guides and examples

## 🚀 Development & Release Process

### Making a Release

1. **Bump version and create release:**
   ```bash
   # For patch release (0.1.0 -> 0.1.1)
   python scripts/bump_version.py patch
   
   # For minor release (0.1.0 -> 0.2.0)
   python scripts/bump_version.py minor
   
   # For major release (0.1.0 -> 1.0.0)
   python scripts/bump_version.py major
   ```

2. **Push changes and tag:**
   ```bash
   git push origin main
   git push origin v0.1.1  # Replace with your version
   ```

3. **Create GitHub Release:**
   - Go to https://github.com/Wayy-Research/testLLM/releases
   - Click "Create a new release"
   - Select your tag (e.g., v0.1.1)
   - Add release notes
   - Publish release

This will automatically:
- Run tests across Python 3.9-3.12
- Deploy to TestPyPI on every push to main
- Deploy to PyPI on GitHub releases

### Continuous Deployment

The project uses GitHub Actions for continuous deployment:

- **Every push to main**: Automatically uploads to TestPyPI
- **Every GitHub release**: Automatically uploads to PyPI
- **All commits**: Run core tests (full test suite runs locally with API keys)

---

**Ready to test your LLM agents properly?** 

```bash
pip install testkitllm
```

Start building reliable AI systems today! 🚀

            
