dynamic-agent-switcher

Name: dynamic-agent-switcher
Version: 0.1.0
Home page: https://github.com/sumitpaul/dynamic-agent-switcher
Summary: A flexible package for switching AI models during agent execution
Upload time: 2025-08-18 07:44:15
Author: Sumit Paul
Requires Python: >=3.8
License: MIT
Keywords: ai, agent, model, switching, rate-limit, pydantic-ai, langchain

# Dynamic Agent Switcher

A Python package that allows you to dynamically switch between different AI models during agent execution. Perfect for handling rate limits, distributing API calls, and implementing custom switching logic.

## Features

- **Flexible Switching Conditions**: Switch models based on rate limits, tool call counts, response content, time limits, error counts, or custom functions
- **Multiple AI Providers**: Support for OpenAI, Gemini, Groq, and other providers
- **Easy Integration**: Drop-in replacement for existing `pydantic_ai.Agent` instances
- **Configurable Strategies**: Round-robin, rate-limit-based, random, weighted, or fallback selection
- **Real-time Context Tracking**: Monitor execution context and make informed switching decisions
- **Extensible**: Create custom switching conditions for your specific needs

## Installation

```bash
pip install dynamic-agent-switcher
```

## Quick Start

### Basic Usage

```python
from dynamic_agent_switcher import (
    DynamicAgentWrapper, 
    ModelConfig, 
    SwitchStrategy,
    ConditionType
)

# Define your models
model_configs = [
    ModelConfig(
        name="openai_gpt4",
        provider="openai",
        model_name="gpt-4",
        api_key="your-openai-key",
        max_requests_per_minute=50
    ),
    ModelConfig(
        name="gemini_pro",
        provider="gemini",
        model_name="gemini-1.5-pro", 
        api_key="your-gemini-key",
        max_requests_per_minute=60
    )
]

# Create dynamic agent
agent = DynamicAgentWrapper(
    model_configs=model_configs,
    system_prompt="You are a helpful AI assistant.",
    strategy=SwitchStrategy.RATE_LIMIT_BASED
)

# Use like a regular agent
result = await agent.run("Generate a story about a robot.")
```
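
`run` is a coroutine, so it must be awaited from an async context. A minimal sketch of driving the agent from a synchronous script, assuming the `agent` object created in the block above:

```python
import asyncio

async def main() -> None:
    # `agent` is the DynamicAgentWrapper created above
    result = await agent.run("Generate a story about a robot.")
    print(result)

asyncio.run(main())
```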

### Custom Switching Conditions

```python
# Switch after 3 tool calls
tool_call_condition = {
    "type": ConditionType.TOOL_CALL_COUNT.value,
    "name": "tool_call_limit",
    "parameters": {"max_calls": 3}
}

# Switch when the count_words tool reports > 500 words (preferred)
tool_word_count_condition = {
    "type": ConditionType.TOOL_RESPONSE.value,
    "name": "word_limit_via_tool",
    "parameters": {
        "tool_name": "count_words",             # optional: apply only to this tool
        "logic": "ANY",                         # ANY | ALL | NONE
        "response_conditions": [
            {"type": "greater_than", "path": "word_count", "value": 500}
        ]
    }
}

# Note: If you want to measure raw model output length instead of tool output,
# you can still use AI_RESPONSE_CONTENT. However, for generic, reusable logic,
# prefer TOOL_RESPONSE so it works with your explicit tool outputs.

# Switch after 2 minutes
time_condition = {
    "type": ConditionType.TIME_BASED.value,
    "name": "time_limit",
    "parameters": {"max_duration_seconds": 120}
}

agent = DynamicAgentWrapper(
    model_configs=model_configs,
    conditions=[tool_call_condition, tool_word_count_condition, time_condition]
)
```

### Tool Response Based Switching

```python
# Switch when a specific tool returns certain values
tool_response_condition = {
    "type": ConditionType.TOOL_RESPONSE.value,
    "name": "validation_failed",
    "parameters": {
        "tool_name": "validate_html",
        "response_conditions": [
            {"type": "contains", "value": "invalid"},
            {"type": "equals", "value": False}
        ]
    }
}
```
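
For the condition to take effect, attach it to a wrapper that also exposes the tool it inspects. A sketch, assuming `validate_html` is one of your existing tool functions and `model_configs` is defined as in the Quick Start:

```python
from pydantic_ai import Tool

agent = DynamicAgentWrapper(
    model_configs=model_configs,
    tools=[Tool(validate_html)],
    conditions=[tool_response_condition]
)
```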

### Custom Function Conditions

```python
def custom_switch_logic(context):
    """Custom logic to determine when to switch models."""
    error_count = context.get("error_count", 0)
    current_model = context.get("current_model")
    
    # Switch if we've had 2 errors with the same model
    if error_count >= 2:
        return True
    
    # Switch if current model is OpenAI and it's peak hours
    if current_model == "openai_gpt4" and is_peak_hours():
        return True
    
    return False

custom_condition = {
    "type": ConditionType.CUSTOM_FUNCTION.value,
    "name": "custom_logic",
    "parameters": {
        "function": custom_switch_logic,
        "reason": "Custom business logic"
    }
}
```
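
`is_peak_hours()` above is not part of the package; it stands in for whatever business logic you already have. A minimal placeholder might look like:

```python
from datetime import datetime, timezone

def is_peak_hours() -> bool:
    """Hypothetical helper: treat 09:00-17:00 UTC as peak hours."""
    return 9 <= datetime.now(timezone.utc).hour < 17
```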

### Combination Conditions

```python
# Switch if ANY of these conditions are met
combination_condition = {
    "type": ConditionType.COMBINATION.value,
    "name": "complex_logic",
    "operator": "OR",  # AND, OR, XOR
    "parameters": {
        "conditions": [
            {
                "type": ConditionType.RATE_LIMIT.value,
                "name": "rate_limit"
            },
            {
                "type": ConditionType.TOOL_CALL_COUNT.value,
                "name": "too_many_tools",
                "parameters": {"max_calls": 10}
            },
            {
                "type": ConditionType.TIME_BASED.value,
                "name": "timeout",
                "parameters": {"max_duration_seconds": 600}
            }
        ]
    }
}
```

## Integration with Existing Code

### Replace Existing Agents

```python
from pydantic_ai import Agent

from dynamic_agent_switcher import replace_agent_with_dynamic

# Your existing agent (openai_model, count_words and validate_html come from your codebase)
original_agent = Agent(
    openai_model,
    system_prompt="...",
    tools=[count_words, validate_html]
)

# Replace with dynamic version
dynamic_agent = replace_agent_with_dynamic(
    original_agent=original_agent,
    model_configs=model_configs,
    conditions=conditions
)

# Use exactly the same way
result = await dynamic_agent.run(prompt)
```

### Update Your Chapter Service

```python
# In your chapter_service.py
from typing import Any, Dict

from pydantic_ai import Tool

from dynamic_agent_switcher import DynamicAgentWrapper, ModelConfig

# Replace your existing agent definitions
full_chapter_generator = DynamicAgentWrapper(
    model_configs=[
        ModelConfig("openai_gpt4", "openai", "gpt-4", OPENAI_API_KEY),
        ModelConfig("gemini_pro", "gemini", "gemini-1.5-pro", GEMINI_API_KEY),
        ModelConfig("groq_llama3", "groq", "llama3-70b-8192", GROQ_API_KEY)
    ],
    system_prompt="You are an AI assistant for an advanced micro-learning platform...",
    output_type=FullChapter,
    tools=[Tool(count_words), Tool(validate_html)],
    conditions=[
        {
            "type": "rate_limit",
            "name": "rate_limit_detection"
        },
        {
            "type": "tool_call_count", 
            "name": "tool_call_limit",
            "parameters": {"max_calls": 5}
        },
        {
            "type": "ai_response_content",
            "name": "word_count_limit", 
            "parameters": {
                "content_conditions": [
                    {"type": "word_count_greater_than", "value": 800}
                ]
            }
        }
    ]
)

# Use exactly the same way as before
async def generate_full_chapter_node(state: AgentState) -> Dict[str, Any]:
    # ... your existing code ...
    result = await full_chapter_generator.run(prompt)
    # ... rest of your code ...
```

## Available Condition Types

| Condition Type | Description | Parameters |
|---------------|-------------|------------|
| `RATE_LIMIT` | Switch when rate limit errors occur | None |
| `TOOL_CALL_COUNT` | Switch after N tool calls | `max_calls: int` |
| `TOOL_RESPONSE` | Switch based on tool response | `tool_name: str`, `response_conditions: List` |
| `AI_RESPONSE_STATUS` | Switch based on AI response status | `status_codes: List`, `error_patterns: List` |
| `AI_RESPONSE_CONTENT` | Switch based on response content | `content_conditions: List` |
| `TIME_BASED` | Switch after time limit | `max_duration_seconds: int` |
| `ERROR_COUNT` | Switch after N errors | `max_errors: int` |
| `CUSTOM_FUNCTION` | Switch based on custom function | `function: Callable`, `reason: str` |
| `COMBINATION` | Combine multiple conditions | `conditions: List`, `operator: str` |
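
The two condition types not demonstrated above follow the same dictionary shape. A sketch using the parameter names from the table (the specific codes and patterns are illustrative, not package defaults):

```python
# Switch after 3 errors with the current model
error_count_condition = {
    "type": ConditionType.ERROR_COUNT.value,
    "name": "error_limit",
    "parameters": {"max_errors": 3}
}

# Switch on throttling or server-error style responses
status_condition = {
    "type": ConditionType.AI_RESPONSE_STATUS.value,
    "name": "bad_status",
    "parameters": {
        "status_codes": [429, 500, 503],
        "error_patterns": ["quota exceeded", "overloaded"]
    }
}
```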

## Content Conditions

For `AI_RESPONSE_CONTENT` and `TOOL_RESPONSE` conditions, you can use the following check types (see the combined sketch after this list):

- `contains`: Check if response contains text
- `equals`: Check if response equals value
- `regex`: Check if response matches regex pattern
- `greater_than`: Check if numeric value is greater
- `less_than`: Check if numeric value is less
- `length_greater_than`: Check if string length is greater
- `length_less_than`: Check if string length is less
- `word_count_greater_than`: Check if word count is greater
- `word_count_less_than`: Check if word count is less
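
As referenced above, a sketch combining two of these checks in a single `AI_RESPONSE_CONTENT` condition (the values are illustrative):

```python
content_condition = {
    "type": ConditionType.AI_RESPONSE_CONTENT.value,
    "name": "content_checks",
    "parameters": {
        "content_conditions": [
            {"type": "regex", "value": r"</?html"},             # response contains HTML markup
            {"type": "word_count_greater_than", "value": 800}   # or it has grown too long
        ]
    }
}
```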

## Monitoring and Debugging

```python
# Get current status
status = agent.get_status()
print(f"Current model: {status['current_model']}")
print(f"Available models: {status['available_models']}")
print(f"Active conditions: {status['conditions']}")
print(f"Execution context: {status['execution_context']}")

# Update context manually
agent.update_context(
    tool_call_count=5,
    last_tool_response={"word_count": 750}
)

# Reset context
agent.reset_context()

# Add/remove conditions dynamically
agent.add_condition(new_condition)
agent.remove_condition("condition_name")
```

## Advanced Usage

### Custom Model Selection Strategy

```python
import random

def custom_model_selector(available_models, context):
    """Custom logic for model selection."""
    if context.get("task_type") == "creative":
        return "gemini_pro"  # Better for creative tasks
    elif context.get("task_type") == "analytical":
        return "openai_gpt4"  # Better for analysis
    else:
        return random.choice(available_models)

# Use with custom strategy
agent = DynamicAgentWrapper(
    model_configs=model_configs,
    strategy=custom_model_selector
)
```

### Context-Aware Switching

```python
# Update context based on your application logic
agent.update_context(
    task_type="creative",
    user_preference="fast",
    current_topic="artificial_intelligence"
)

# Your custom condition can use this context
def context_aware_condition(context):
    if context.get("task_type") == "creative" and context.get("current_model") == "openai_gpt4":
        return True  # Switch to Gemini for creative tasks
    return False
```
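
To activate the function above, register it as a `CUSTOM_FUNCTION` condition, just like in the earlier example:

```python
agent.add_condition({
    "type": ConditionType.CUSTOM_FUNCTION.value,
    "name": "creative_task_switch",
    "parameters": {
        "function": context_aware_condition,
        "reason": "Prefer a different model for creative tasks"
    }
})
```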

## Examples

See `usage_examples.py` for comprehensive examples including:
- Basic usage with rate limit handling
- Tool call count based switching
- Response content based switching
- Time-based switching
- Custom function conditions
- Combination conditions
- Tool response based switching
- Monitoring and debugging
- Integration with existing code
- Advanced context-aware switching

## License

MIT License - see LICENSE file for details.

## Author

**Sumit Paul** - sumit.18.paul@gmail.com

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

            
