# Agent Interrogator
<p align="center">
<img src="https://raw.githubusercontent.com/qwordsmith/agent-interrogator/refs/heads/main/assets/logo.webp" alt="Agent Interrogator Logo" width="400" />
</p>
<p align="center">
<strong>Systematically discover and map AI agent attack surface for security research</strong>
</p>
<p align="center">
<a href="https://pypi.org/project/agent-interrogator/">
<img src="https://badge.fury.io/py/agent-interrogator.svg" alt="PyPI version">
</a>
<a href="https://www.python.org/downloads/">
<img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="Python 3.9+">
</a>
<a href="https://opensource.org/licenses/Apache-2.0">
<img src="https://img.shields.io/badge/License-Apache%202.0-yellow.svg" alt="License: Apache 2.0">
</a>
</p>
---
## What is Agent Interrogator?
Agent Interrogator is a Python framework designed for **security researchers** to systematically discover and analyze AI agent attack surface through automated interrogation. It uses iterative discovery cycles to map an agent's available tools (functions).
### Why Use Agent Interrogator?
- **🔍 Attack Surface Discovery**: Automatically discovers agent capabilities and supporting tools without requiring documentation
- **🛡️ Security Research**: Purpose-built for vulnerability assessment and prompt injection testing
- **📊 Structured Output**: Generates structured profiles perfect for integration with other security tools
- **🔄 Iterative Analysis**: Uses smart prompt adaptation to uncover hidden or complex capabilities
- **🚀 Flexible Integrations**: Works with any agent via customizable callback functions
### Perfect For:
- Security researchers testing AI agents for vulnerabilities
- Red teams conducting agent penetration testing
- Security teams auditing agent functionality
---
## Quick Start
### Installation
```bash
pip install agent-interrogator
```
### Basic Usage
Here's a minimal example that interrogates an agent:
```python
import asyncio
from agent_interrogator import AgentInterrogator, InterrogationConfig, LLMConfig, ModelProvider
# Configure the interrogator
config = InterrogationConfig(
llm=LLMConfig(
provider=ModelProvider.OPENAI,
model_name="gpt-4",
api_key="your-openai-api-key"
),
max_iterations=5
)
# Define how to interact with your target agent
async def my_agent_callback(prompt: str) -> str:
"""
This function defines how to send prompts to your target agent.
Replace this with your actual agent interaction logic.
"""
# Example: HTTP API call to your agent
# response = await call_your_agent_api(prompt)
# return response.text
# For demo purposes, return a mock response
return "I can help with web searches, file operations, and calculations."
# Run the interrogation
async def main():
interrogator = AgentInterrogator(config, my_agent_callback)
profile = await interrogator.interrogate()
# View discovered capabilities
print(f"Discovered {len(profile.capabilities)} capabilities:")
for capability in profile.capabilities:
print(f" - {capability.name}: {capability.description}")
for f in capability.functions:
print(f" Function Name: {f.name}")
print(f" Function Parameters: {f.parameters}")
print(f" Function Return Type: {f.return_type}")
if __name__ == "__main__":
asyncio.run(main())
```
### Expected Output
```
Discovered 3 capabilities:
- web_search: Search the internet for information
Function Name: search_web
Function Parameters: [ { "name": "query", "type": "string", "description": "The search query", "required": true }, { "name": "max_results", "type": "integer", "description": "Maximum number of results", "required": false, "default": 5 } ]
Function Return Types: list[SearchResult]
...
```
---
## Installation
### Standard Installation
```bash
pip install agent-interrogator
```
### Development Installation
For contributors or advanced users who want to modify the code:
```bash
git clone https://github.com/qwordsmith/agent-interrogator.git
cd agent-interrogator
pip install -e .[dev]
```
### Requirements
- **Python**: 3.9 or higher
- **OpenAI API Key**: For using GPT models (optional, can use HuggingFace instead)
- **Dependencies**: Automatically installed with pip
---
## Configuration
Agent Interrogator supports using either OpenAI or local models for analyzing agent responses:
### OpenAI Configuration
```python
from agent_interrogator import InterrogationConfig, LLMConfig, ModelProvider, OutputMode
config = InterrogationConfig(
llm=LLMConfig(
provider=ModelProvider.OPENAI,
model_name="gpt-4",
api_key="your-openai-api-key"
),
max_iterations=5, # Maximum discovery cycles
output_mode=OutputMode.STANDARD # QUIET, STANDARD, or VERBOSE
)
```
### Local Model (HuggingFace) Configuration
```python
from agent_interrogator import HuggingFaceConfig
config = InterrogationConfig(
llm=LLMConfig(
provider=ModelProvider.HUGGINGFACE,
model_name="mistralai/Mistral-7B-v0.1", # Any HF model
huggingface=HuggingFaceConfig(
device="auto", # auto, cpu, cuda, mps
quantization="fp16", # fp16, int8, or None
allow_download=True
)
),
max_iterations=5,
output_mode=OutputMode.VERBOSE
)
```
### Output Modes
- **`QUIET`**: No terminal output (ideal for automated scripts)
- **`STANDARD`**: Shows progress and results (default)
- **`VERBOSE`**: Detailed logging including prompts and responses (useful for debugging)
---
## Implementing Callbacks
The callback function is how Agent Interrogator communicates with your target agent. It must be an async function that takes a prompt string and returns the agent's response.
### Callback Interface
```python
from typing import Awaitable, Callable
# Your callback must match this signature
AgentCallback = Callable[[str], Awaitable[str]]
```
### HTTP API Example
Here's an example for an agent exposed via an HTTP API:
```python
import aiohttp
from typing import Optional
class HTTPAgentCallback:
def __init__(self, endpoint: str, api_key: Optional[str] = None):
self.endpoint = endpoint
self.headers = {}
if api_key:
self.headers["Authorization"] = f"Bearer {api_key}"
self.session: Optional[aiohttp.ClientSession] = None
async def __call__(self, prompt: str) -> str:
if not self.session:
self.session = aiohttp.ClientSession()
async with self.session.post(
self.endpoint,
json={"message": prompt, "stream": False},
headers=self.headers
) as response:
if response.status != 200:
raise Exception(f"Agent API error: {response.status}")
result = await response.json()
return result["response"]
async def cleanup(self):
"""Optional cleanup method"""
if self.session:
await self.session.close()
self.session = None
# Usage
callback = HTTPAgentCallback(
endpoint="https://your-agent-api.com/chat",
api_key="your-agent-api-key"
)
interrogator = AgentInterrogator(config, callback)
profile = await interrogator.interrogate()
```
### More Examples
For additional callback implementations (WebSocket, Playwright browser automation, process-based agents, etc.), see the [`examples/callbacks.py`](examples/callbacks.py) file.
---
## Understanding Results
Agent Interrogator produces a structured `AgentProfile` containing all discovered capabilities and functions. This data is specifically designed for security research and tool integration.
### Profile Structure
```python
# Access the profile data
profile = await interrogator.interrogate()
# Iterate through capabilities
for capability in profile.capabilities:
print(f"Capability: {capability.name}")
print(f"Description: {capability.description}")
for function in capability.functions:
print(f" Function: {function.name}")
print(f" Description: {function.description}")
print(f" Return Type: {function.return_type}")
for param in function.parameters:
print(f" Parameter: {param.name}")
print(f" Type: {param.type}")
print(f" Required: {param.required}")
print(f" Default: {param.default}")
```
### Security Research Applications
The structured data enables:
- **Attack Surface Mapping**: Complete inventory of agent capabilities
- **Fuzzing Target Generation**: Automated payload creation for each function
- **Prompt Injection Testing**: Parameter-aware injection attempts
- **Capability Monitoring**: Track changes between agent versions
- **Agent Auditing**: Verify agents operate within expected bounds
---
## Development
### Running Tests
```bash
# Install development dependencies
pip install -e .[dev]
# Run the test suite
pytest tests/
# Run with coverage
pytest tests/ --cov=agent_interrogator
```
### Code Quality
```bash
# Format code
black src/ tests/
isort src/ tests/
# Type checking
mypy src/agent_interrogator/
# Linting
flake8 src/ tests/
```
### Project Structure
```
agent-interrogator/
├── src/agent_interrogator/ # Main package
│ ├── __init__.py # Public API
│ ├── interrogator.py # Core interrogation logic
│ ├── config.py # Configuration models
│ ├── llm.py # LLM provider interfaces
│ ├── models.py # Data models (AgentProfile, etc.)
│ ├── output.py # Terminal output management
│ └── prompt_templates.py # LLM prompts
├── examples/ # Usage examples
├── tests/ # Test suite
```
---
## Contributing
Contributions are welcome!
### How to Contribute
1. **Fork** the repository
2. **Create** a feature branch (`git checkout -b feature/amazing-feature`)
3. **Add tests** for your changes
4. **Ensure** all tests pass (`pytest tests/`)
5. **Format** your code (`black src/ tests/`)
6. **Submit** a pull request
### Areas for Contribution
- Callback implementations for different agent types
- Recursive interrogation of agents to agent communication
- Agents made available to target agent via A2A
- Agents made available to target agent via MCP
- Performance optimizations for large-scale agent scanning
- Guardrail bypass capabilities
- Integration examples with security tools
- Additional LLM provider support
- Mechanisms to improve agent profile output quality
- Documentation improvements
---
## License
This project is licensed under the **Apache License, Version 2.0** - see the [LICENSE](LICENSE) file for details.
## Support
- **Documentation**: [README.md](README.md) and inline code documentation
- **Related Research**: [Research-Paper-Resources](https://github.com/qwordsmith/Research-Paper-Resources/)
- **Issues**: [GitHub Issues](https://github.com/qwordsmith/agent-interrogator/issues)
- **Examples**: See [`examples/`](examples/) directory
- **Contributing**: See [CONTRIBUTING.md](CONTRIBUTING.md)
Raw data
{
"_id": null,
"home_page": null,
"name": "agent-interrogator",
"maintainer": "Michael Samson",
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": "ai, agent, security, testing, interrogation, llm, prompt-injection, vulnerability-assessment",
"author": "Michael Samson",
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/e5/91/4a3f7953e96d407e82280f9b11bad5c16e6405e6833a8babb2a37db21f69/agent_interrogator-0.1.1.tar.gz",
"platform": null,
"description": "# Agent Interrogator\n\n<p align=\"center\">\n <img src=\"https://raw.githubusercontent.com/qwordsmith/agent-interrogator/refs/heads/main/assets/logo.webp\" alt=\"Agent Interrogator Logo\" width=\"400\" />\n</p>\n\n<p align=\"center\">\n <strong>Systematically discover and map AI agent attack surface for security research</strong>\n</p>\n\n<p align=\"center\">\n <a href=\"https://pypi.org/project/agent-interrogator/\">\n <img src=\"https://badge.fury.io/py/agent-interrogator.svg\" alt=\"PyPI version\">\n </a>\n <a href=\"https://www.python.org/downloads/\">\n <img src=\"https://img.shields.io/badge/python-3.9+-blue.svg\" alt=\"Python 3.9+\">\n </a>\n <a href=\"https://opensource.org/licenses/Apache-2.0\">\n <img src=\"https://img.shields.io/badge/License-Apache%202.0-yellow.svg\" alt=\"License: Apache 2.0\">\n </a>\n</p>\n\n---\n\n## What is Agent Interrogator?\n\nAgent Interrogator is a Python framework designed for **security researchers** to systematically discover and analyze AI agent attack surface through automated interrogation. It uses iterative discovery cycles to map an agent's available tools (functions).\n\n### Why Use Agent Interrogator?\n\n- **\ud83d\udd0d Attack Surface Discovery**: Automatically discovers agent capabilities and supporting tools without requiring documentation\n- **\ud83d\udee1\ufe0f Security Research**: Purpose-built for vulnerability assessment and prompt injection testing\n- **\ud83d\udcca Structured Output**: Generates structured profiles perfect for integration with other security tools\n- **\ud83d\udd04 Iterative Analysis**: Uses smart prompt adaptation to uncover hidden or complex capabilities\n- **\ud83d\ude80 Flexible Integrations**: Works with any agent via customizable callback functions\n\n### Perfect For:\n- Security researchers testing AI agents for vulnerabilities\n- Red teams conducting agent penetration testing\n- Security teams auditing agent functionality\n\n---\n\n## Quick Start\n\n### Installation\n\n```bash\npip install agent-interrogator\n```\n\n### Basic Usage\n\nHere's a minimal example that interrogates an agent:\n\n```python\nimport asyncio\nfrom agent_interrogator import AgentInterrogator, InterrogationConfig, LLMConfig, ModelProvider\n\n# Configure the interrogator\nconfig = InterrogationConfig(\n llm=LLMConfig(\n provider=ModelProvider.OPENAI,\n model_name=\"gpt-4\",\n api_key=\"your-openai-api-key\"\n ),\n max_iterations=5\n)\n\n# Define how to interact with your target agent\nasync def my_agent_callback(prompt: str) -> str:\n \"\"\"\n This function defines how to send prompts to your target agent.\n Replace this with your actual agent interaction logic.\n \"\"\"\n # Example: HTTP API call to your agent\n # response = await call_your_agent_api(prompt)\n # return response.text\n \n # For demo purposes, return a mock response\n return \"I can help with web searches, file operations, and calculations.\"\n\n# Run the interrogation\nasync def main():\n interrogator = AgentInterrogator(config, my_agent_callback)\n profile = await interrogator.interrogate()\n \n # View discovered capabilities\n print(f\"Discovered {len(profile.capabilities)} capabilities:\")\n for capability in profile.capabilities:\n print(f\" - {capability.name}: {capability.description}\")\n for f in capability.functions:\n print(f\" Function Name: {f.name}\")\n print(f\" Function Parameters: {f.parameters}\")\n print(f\" Function Return Type: {f.return_type}\")\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n```\n\n### Expected Output\n\n```\nDiscovered 3 capabilities:\n - web_search: Search the internet for information\n Function Name: search_web\n Function Parameters: [ { \"name\": \"query\", \"type\": \"string\", \"description\": \"The search query\", \"required\": true }, { \"name\": \"max_results\", \"type\": \"integer\", \"description\": \"Maximum number of results\", \"required\": false, \"default\": 5 } ]\n Function Return Types: list[SearchResult]\n...\n```\n\n---\n\n## Installation\n\n### Standard Installation\n\n```bash\npip install agent-interrogator\n```\n\n### Development Installation\n\nFor contributors or advanced users who want to modify the code:\n\n```bash\ngit clone https://github.com/qwordsmith/agent-interrogator.git\ncd agent-interrogator\npip install -e .[dev]\n```\n\n### Requirements\n\n- **Python**: 3.9 or higher\n- **OpenAI API Key**: For using GPT models (optional, can use HuggingFace instead)\n- **Dependencies**: Automatically installed with pip\n\n---\n\n## Configuration\n\nAgent Interrogator supports using either OpenAI or local models for analyzing agent responses:\n\n### OpenAI Configuration\n\n```python\nfrom agent_interrogator import InterrogationConfig, LLMConfig, ModelProvider, OutputMode\n\nconfig = InterrogationConfig(\n llm=LLMConfig(\n provider=ModelProvider.OPENAI,\n model_name=\"gpt-4\",\n api_key=\"your-openai-api-key\"\n ),\n max_iterations=5, # Maximum discovery cycles\n output_mode=OutputMode.STANDARD # QUIET, STANDARD, or VERBOSE\n)\n```\n\n### Local Model (HuggingFace) Configuration\n\n```python\nfrom agent_interrogator import HuggingFaceConfig\n\nconfig = InterrogationConfig(\n llm=LLMConfig(\n provider=ModelProvider.HUGGINGFACE,\n model_name=\"mistralai/Mistral-7B-v0.1\", # Any HF model\n huggingface=HuggingFaceConfig(\n device=\"auto\", # auto, cpu, cuda, mps\n quantization=\"fp16\", # fp16, int8, or None\n allow_download=True\n )\n ),\n max_iterations=5,\n output_mode=OutputMode.VERBOSE\n)\n```\n\n### Output Modes\n\n- **`QUIET`**: No terminal output (ideal for automated scripts)\n- **`STANDARD`**: Shows progress and results (default)\n- **`VERBOSE`**: Detailed logging including prompts and responses (useful for debugging)\n\n---\n\n## Implementing Callbacks\n\nThe callback function is how Agent Interrogator communicates with your target agent. It must be an async function that takes a prompt string and returns the agent's response.\n\n### Callback Interface\n\n```python\nfrom typing import Awaitable, Callable\n\n# Your callback must match this signature\nAgentCallback = Callable[[str], Awaitable[str]]\n```\n\n### HTTP API Example\n\nHere's an example for an agent exposed via an HTTP API:\n\n```python\nimport aiohttp\nfrom typing import Optional\n\nclass HTTPAgentCallback:\n def __init__(self, endpoint: str, api_key: Optional[str] = None):\n self.endpoint = endpoint\n self.headers = {}\n if api_key:\n self.headers[\"Authorization\"] = f\"Bearer {api_key}\"\n self.session: Optional[aiohttp.ClientSession] = None\n \n async def __call__(self, prompt: str) -> str:\n if not self.session:\n self.session = aiohttp.ClientSession()\n \n async with self.session.post(\n self.endpoint,\n json={\"message\": prompt, \"stream\": False},\n headers=self.headers\n ) as response:\n if response.status != 200:\n raise Exception(f\"Agent API error: {response.status}\")\n \n result = await response.json()\n return result[\"response\"]\n \n async def cleanup(self):\n \"\"\"Optional cleanup method\"\"\"\n if self.session:\n await self.session.close()\n self.session = None\n\n# Usage\ncallback = HTTPAgentCallback(\n endpoint=\"https://your-agent-api.com/chat\",\n api_key=\"your-agent-api-key\"\n)\n\ninterrogator = AgentInterrogator(config, callback)\nprofile = await interrogator.interrogate()\n```\n\n### More Examples\n\nFor additional callback implementations (WebSocket, Playwright browser automation, process-based agents, etc.), see the [`examples/callbacks.py`](examples/callbacks.py) file.\n\n---\n\n## Understanding Results\n\nAgent Interrogator produces a structured `AgentProfile` containing all discovered capabilities and functions. This data is specifically designed for security research and tool integration.\n\n### Profile Structure\n\n```python\n# Access the profile data\nprofile = await interrogator.interrogate()\n\n# Iterate through capabilities\nfor capability in profile.capabilities:\n print(f\"Capability: {capability.name}\")\n print(f\"Description: {capability.description}\")\n \n for function in capability.functions:\n print(f\" Function: {function.name}\")\n print(f\" Description: {function.description}\")\n print(f\" Return Type: {function.return_type}\")\n \n for param in function.parameters:\n print(f\" Parameter: {param.name}\")\n print(f\" Type: {param.type}\")\n print(f\" Required: {param.required}\")\n print(f\" Default: {param.default}\")\n```\n\n### Security Research Applications\n\nThe structured data enables:\n\n- **Attack Surface Mapping**: Complete inventory of agent capabilities\n- **Fuzzing Target Generation**: Automated payload creation for each function\n- **Prompt Injection Testing**: Parameter-aware injection attempts\n- **Capability Monitoring**: Track changes between agent versions\n- **Agent Auditing**: Verify agents operate within expected bounds\n\n---\n\n## Development\n\n### Running Tests\n\n```bash\n# Install development dependencies\npip install -e .[dev]\n\n# Run the test suite\npytest tests/\n\n# Run with coverage\npytest tests/ --cov=agent_interrogator\n```\n\n### Code Quality\n\n```bash\n# Format code\nblack src/ tests/\nisort src/ tests/\n\n# Type checking\nmypy src/agent_interrogator/\n\n# Linting\nflake8 src/ tests/\n```\n\n### Project Structure\n\n```\nagent-interrogator/\n\u251c\u2500\u2500 src/agent_interrogator/ # Main package\n\u2502 \u251c\u2500\u2500 __init__.py # Public API\n\u2502 \u251c\u2500\u2500 interrogator.py # Core interrogation logic\n\u2502 \u251c\u2500\u2500 config.py # Configuration models\n\u2502 \u251c\u2500\u2500 llm.py # LLM provider interfaces\n\u2502 \u251c\u2500\u2500 models.py # Data models (AgentProfile, etc.)\n\u2502 \u251c\u2500\u2500 output.py # Terminal output management\n\u2502 \u2514\u2500\u2500 prompt_templates.py # LLM prompts\n\u251c\u2500\u2500 examples/ # Usage examples\n\u251c\u2500\u2500 tests/ # Test suite\n```\n\n---\n\n## Contributing\n\nContributions are welcome!\n\n### How to Contribute\n\n1. **Fork** the repository\n2. **Create** a feature branch (`git checkout -b feature/amazing-feature`)\n3. **Add tests** for your changes\n4. **Ensure** all tests pass (`pytest tests/`)\n5. **Format** your code (`black src/ tests/`)\n6. **Submit** a pull request\n\n### Areas for Contribution\n\n- Callback implementations for different agent types\n- Recursive interrogation of agents to agent communication\n - Agents made available to target agent via A2A\n - Agents made available to target agent via MCP\n- Performance optimizations for large-scale agent scanning\n- Guardrail bypass capabilities\n- Integration examples with security tools\n- Additional LLM provider support\n- Mechanisms to improve agent profile output quality\n- Documentation improvements\n\n---\n\n## License\n\nThis project is licensed under the **Apache License, Version 2.0** - see the [LICENSE](LICENSE) file for details.\n\n## Support\n\n- **Documentation**: [README.md](README.md) and inline code documentation\n- **Related Research**: [Research-Paper-Resources](https://github.com/qwordsmith/Research-Paper-Resources/)\n- **Issues**: [GitHub Issues](https://github.com/qwordsmith/agent-interrogator/issues)\n- **Examples**: See [`examples/`](examples/) directory\n- **Contributing**: See [CONTRIBUTING.md](CONTRIBUTING.md)\n",
"bugtrack_url": null,
"license": null,
"summary": "An AI agent interrogation framework for identifying attack surface.",
"version": "0.1.1",
"project_urls": {
"Documentation": "https://github.com/qwordsmith/agent-interrogator#readme",
"Homepage": "https://github.com/qwordsmith/agent-interrogator",
"Issues": "https://github.com/qwordsmith/agent-interrogator/issues"
},
"split_keywords": [
"ai",
" agent",
" security",
" testing",
" interrogation",
" llm",
" prompt-injection",
" vulnerability-assessment"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "9c70b777679d942206bc2aff9fbe516e1e950695d42413f9f5f4d70fdfe8db2f",
"md5": "666885aab733ec02937d96ffc9b3f1c2",
"sha256": "01997757bcb8385bed609011a8079e29803a35d70ff6fc09290efea105ce97f9"
},
"downloads": -1,
"filename": "agent_interrogator-0.1.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "666885aab733ec02937d96ffc9b3f1c2",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 24212,
"upload_time": "2025-07-28T02:59:10",
"upload_time_iso_8601": "2025-07-28T02:59:10.880033Z",
"url": "https://files.pythonhosted.org/packages/9c/70/b777679d942206bc2aff9fbe516e1e950695d42413f9f5f4d70fdfe8db2f/agent_interrogator-0.1.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "e5914a3f7953e96d407e82280f9b11bad5c16e6405e6833a8babb2a37db21f69",
"md5": "e93768a75c1d673510a56aca1351dfc5",
"sha256": "65a9004fe1289ce8e35fe4cc9f07cdf23f0982003a778a718a20dc7adae7ef92"
},
"downloads": -1,
"filename": "agent_interrogator-0.1.1.tar.gz",
"has_sig": false,
"md5_digest": "e93768a75c1d673510a56aca1351dfc5",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 30345,
"upload_time": "2025-07-28T02:59:12",
"upload_time_iso_8601": "2025-07-28T02:59:12.156971Z",
"url": "https://files.pythonhosted.org/packages/e5/91/4a3f7953e96d407e82280f9b11bad5c16e6405e6833a8babb2a37db21f69/agent_interrogator-0.1.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-28 02:59:12",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "qwordsmith",
"github_project": "agent-interrogator#readme",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [
{
"name": "pydantic",
"specs": [
[
">=",
"2.0.0"
]
]
},
{
"name": "openai",
"specs": [
[
">=",
"1.0.0"
]
]
},
{
"name": "transformers",
"specs": [
[
">=",
"4.30.0"
]
]
},
{
"name": "torch",
"specs": [
[
">=",
"2.0.0"
]
]
},
{
"name": "aiohttp",
"specs": [
[
">=",
"3.8.0"
]
]
},
{
"name": "huggingface_hub",
"specs": [
[
">=",
"0.15.0"
]
]
},
{
"name": "rich",
"specs": [
[
">=",
"13.0.0"
]
]
}
],
"lcname": "agent-interrogator"
}