| Name | pocket-agent |
| Version | 0.1.0 |
| Summary | A lightweight, extensible framework for building LLM agents with Model Context Protocol (MCP) support |
| upload_time | 2025-09-12 20:26:58 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.12 |
| license | None |
| keywords | agent, ai, llm, mcp, model-context-protocol |
| requirements | No requirements were recorded. |
<div align="center">
# Pocket-Agent
<img src="./assets/pocket-agent.png" alt="Pocket Agent" width="300" style="border-radius: 10px; box-shadow: 0 4px 8px rgba(0,0,0,0.1);">
<p><em>A lightweight, extensible framework for building LLM agents with Model Context Protocol (MCP) support</em></p>

</div>
---
## Why Pocket Agent?
Most agent frameworks are bloated because they try to support every possible agent implementation at once and make each one look "simple". This works only until it doesn't, and then you are stuck understanding an enormous codebase just to implement what should be a very simple feature.
Pocket Agent takes the opposite approach: it handles only the basic functions of an LLM agent and speaks the MCP protocol. You give up no flexibility when building your agent, while the lower-level implementation details are taken care of.
## Design Principles
### 🚀 **Lightweight & Simple**
- Minimal dependencies - just `fastmcp` and `litellm`
- Clean abstractions that separate agent logic from MCP client details
- < 500 lines of code
### 🎯 **Developer-Friendly**
- Abstract base class design for easy extension
- Clear separation of concerns between agents and clients
- Built-in logging and event system
### 🌐 **Multi-Model Support**
- Works with any endpoint supported by LiteLLM without requiring code changes
- Easy model switching and configuration
### 💡 **Extensible**
- Use any custom logging implementation
- Easily integrate custom frontends using the built-in event system
- Easily create fully custom agent implementations
## [Cookbook](./cookbook)
#### Refer to the [Cookbook](./cookbook) for example implementations, and to try out PocketAgent without any implementation overhead.
## Installation
Install with uv (Recommended):
```bash
uv add pocket-agent
```
Install with pip:
```bash
pip install pocket-agent
```
## Creating Your First Pocket-Agent (Quick Start)
#### To build a Pocket-Agent, all you need to implement is the agent's `run` method:
```python
from pocket_agent import PocketAgent


class SimpleAgent(PocketAgent):
    async def run(self):
        """Simple conversation loop"""
        while True:
            # Accept user input
            user_input = input("Your input: ")
            if user_input.lower() == 'quit':
                break

            # Add the user message to the conversation
            self.add_user_message(user_input)

            # Generate a response and execute any tool calls
            step_result = await self.step()
            while step_result.llm_message.tool_calls is not None:
                step_result = await self.step()

        return {"status": "completed"}
```
#### To run the agent, you only need to pass your [JSON MCP config](https://gofastmcp.com/integrations/mcp-json-configuration) and your agent configuration:
```python
import os

from pocket_agent import AgentConfig

mcp_config = {
    "mcpServers": {
        "weather": {
            "transport": "stdio",
            "command": "python",
            "args": ["server.py"],
            "cwd": os.path.dirname(os.path.abspath(__file__))
        }
    }
}

# Configure agent
config = AgentConfig(
    llm_model="gpt-5-nano",
    system_prompt="You are a helpful assistant who answers user questions and uses provided tools when applicable"
)

# Create and run agent
agent = SimpleAgent(
    agent_config=config,
    mcp_config=mcp_config
)

await agent.run()
```
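Note that `await agent.run()` only works inside an async context (e.g. a notebook or an existing event loop); in a plain script you would wrap it with `asyncio.run`. A minimal sketch of that pattern, using a hypothetical stub in place of `SimpleAgent` so it runs on its own:

```python
import asyncio

# Stub stand-in for SimpleAgent (hypothetical), so the pattern runs on its own.
class StubAgent:
    async def run(self):
        return {"status": "completed"}

async def main():
    agent = StubAgent()
    return await agent.run()

result = asyncio.run(main())
print(result)  # → {'status': 'completed'}
```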
## Core Concepts
### 🏗️ **PocketAgent Base Class**
The `PocketAgent` is an abstract base class that provides the foundation for building custom agents. You inherit from this class and implement the `run()` method to define your agent's behavior.
```python
from pocket_agent import PocketAgent, AgentConfig

class MyAgent(PocketAgent):
    async def run(self):
        # Your agent logic here
        return {"status": "completed"}
```
**PocketAgent Input Parameters:**
```python
agent = MyAgent(
    agent_config=agent_config,  # Required: instance of the AgentConfig class
    mcp_config=mcp_config,      # Required: JSON MCP server configuration to pass tools to the agent
    router=router,              # Optional: a LiteLLM Router to manage LLM rate limits
    logger=logger,              # Optional: a logger instance to capture logs
    hooks=hooks,                # Optional: instance of AgentHooks to define custom behavior at common junction points
)
```
### ⚙️ **AgentConfig**
Configuration object that defines your agent's setup and behavior:
```python
config = AgentConfig(
    llm_model="gpt-4",                # Required: LLM model to use
    system_prompt="You are helpful",  # Optional: system prompt for the agent
    agent_id="my-agent-123",          # Optional: custom agent ID
    context_id="conversation-456",    # Optional: custom context ID
    allow_images=True,                # Optional: enable image input support (default: False)
    messages=[],                      # Optional: initial conversation history (default: [])
    completion_kwargs={               # Optional: additional LLM parameters (default: {"tool_choice": "auto"})
        "tool_choice": "auto",
        "temperature": 0.7
    }
)
```
### 🔄 **The Step Method**
The `step()` method is the core execution unit that:
1. Gets an LLM response with available tools
2. Executes any tool calls in parallel
3. Updates conversation history
The output of calling the `step()` method is a `StepResult`:
```python
@dataclass
class StepResult:
    llm_message: LitellmMessage  # The message generated by the LLM, including str content, tool calls, images, etc.
    tool_execution_results: Optional[list[ToolResult]] = None  # Results of any executed tools
```
```python
# Single step execution
step_result = await agent.step()
# Handle tool calls (continue until no more tool calls)
while step_result.llm_message.tool_calls is not None:
    step_result = await agent.step()
```
**Step result fields:**
```python
step_result.llm_message             # LitellmMessage: the LLM response
step_result.tool_execution_results  # list[ToolResult] | None: results from tool calls (if any)
```
### 💬 **Message Management**
Pocket Agent automatically adds LLM-generated messages and tool result messages inside the `step()` method.
User input can be added with `add_user_message()`, which should be called before `step()`:
```python
class Agent(PocketAgent):
    async def run(self):
        # Add user messages (with optional images)
        self.add_user_message("Hello!", image_base64s=["base64_image_data"])
        await self.step()

        # Clear all messages except the system prompt
        self.reset_messages()
```
**Message Format:** Standard OpenAI message format with role, content, and optional tool metadata.
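For illustration, a short conversation in this format might look as follows (the tool call ID and tool name here are hypothetical, not part of Pocket Agent's API):

```python
# Standard OpenAI-style chat messages: a list of dicts with "role" and "content".
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Paris?"},
    # Assistant message requesting a tool call (tool metadata fields):
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",  # hypothetical ID for illustration
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
        }],
    },
    # Tool result message, linked back to the request via tool_call_id:
    {"role": "tool", "tool_call_id": "call_1", "content": "18°C, clear"},
]

roles = [m["role"] for m in messages]
print(roles)  # → ['system', 'user', 'assistant', 'tool']
```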
### 🛠️ **MCP Integration via PocketAgentClient**
The `PocketAgentClient` handles all MCP server communication:
- **Tool Discovery**: Automatically fetches available tools from MCP servers
- **Tool Execution**: Transforms OpenAI tool calls to MCP format and handles execution
- **Parallel Execution**: Executes multiple tool calls simultaneously
- **Error Handling**: Provides hooks for custom error handling
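The parallel-execution idea can be sketched with plain `asyncio.gather` (a generic illustration with stand-in tool functions, not the actual `PocketAgentClient` internals):

```python
import asyncio

# Stand-in tool implementations (hypothetical; real tools come from MCP servers).
async def get_weather(city: str) -> str:
    await asyncio.sleep(0.01)  # simulate I/O latency
    return f"weather({city})"

async def search_web(query: str) -> str:
    await asyncio.sleep(0.01)
    return f"search({query})"

async def execute_tool_calls(calls):
    # Run all tool calls concurrently; gather preserves input order in its results.
    return await asyncio.gather(*(fn(arg) for fn, arg in calls))

results = asyncio.run(execute_tool_calls([
    (get_weather, "Paris"),
    (search_web, "MCP protocol"),
]))
print(results)  # → ['weather(Paris)', 'search(MCP protocol)']
```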
```python
# MCP configuration format
mcp_config = {
    "mcpServers": {
        "weather": {
            "transport": "stdio",
            "command": "python",
            "args": ["weather_server.py"]
        },
        "web": {
            "transport": "sse",
            "url": "http://localhost:3001/sse"
        }
    }
}
```
### 🪝 **Hook System for Extensibility**
Customize agent behavior at key execution points:
```python
class CustomHooks(AgentHooks):
    def pre_step(self, context):
        # Executed before the LLM response is generated in the step() method
        print("About to execute step")

    def post_step(self, context):
        # Executed after all tool results (if any) are retrieved;
        # runs even if tool calling results in an error
        print("Step completed")

    def pre_tool_call(self, context, tool_call):
        # Executed right before a tool is run
        print(f"Calling tool: {tool_call.name}")
        # Return a modified tool_call or None

    def post_tool_call(self, context, tool_call, result):
        # Executed right after a tool call result is retrieved from the PocketAgentClient
        print(f"Tool {tool_call.name} completed")
        return result  # Return a (possibly modified) result

    def on_llm_response(self, context, response):
        # Executed right after a response message has been generated by the LLM
        print("Got LLM response")

    def on_event(self, event: AgentEvent):
        # By default, runs whenever any new message is added
        print(f"Event: {event.event_type}")


# Use custom hooks
agent = MyAgent(
    agent_config=config,
    mcp_config=mcp_config,
    hooks=CustomHooks()
)
```
### 📡 **Event System**
Built-in event system for monitoring and integration:
```python
@dataclass
class AgentEvent:
    event_type: str  # e.g., "new_message", "tool_call_start"
    data: dict       # Event-specific data
```
Events are automatically emitted for:
- New messages added to conversation
- Tool calls and completions
- Agent lifecycle events
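The underlying pattern can be sketched as a minimal emitter (a self-contained illustration; the names `EventEmitter`, `subscribe`, and `emit` are assumptions, not the library's API):

```python
from dataclasses import dataclass, field

@dataclass
class AgentEvent:
    event_type: str
    data: dict = field(default_factory=dict)

class EventEmitter:
    """Collects handlers and dispatches each AgentEvent to all of them."""
    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def emit(self, event: AgentEvent):
        for handler in self.handlers:
            handler(event)

seen = []
emitter = EventEmitter()
emitter.subscribe(lambda e: seen.append(e.event_type))  # e.g. a frontend listener
emitter.emit(AgentEvent("new_message", {"role": "user"}))
emitter.emit(AgentEvent("tool_call_start", {"tool": "get_weather"}))
print(seen)  # → ['new_message', 'tool_call_start']
```

Custom frontends can subscribe a handler in this way instead of polling the conversation state.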
### 🔧 **Multi-Model Support**
Works seamlessly with any LiteLLM-supported model:
```python
# OpenAI
config = AgentConfig(llm_model="gpt-4")
# Anthropic
config = AgentConfig(llm_model="anthropic/claude-3-sonnet-20240229")
# Local models
config = AgentConfig(llm_model="ollama/llama2")
# Azure OpenAI
config = AgentConfig(llm_model="azure/gpt-4")
```
You can also provide a custom LiteLLM Router for advanced model routing and fallback logic.
## Feature Roadmap
### Core Features
| Feature | Status | Priority | Description |
|---------|--------|----------|-------------|
| **Agent Abstraction** | ✅ Implemented | - | Basic agent abstraction with PocketAgent base class |
| **MCP Protocol Support** | ✅ Implemented | - | Full integration with Model Context Protocol via fastmcp |
| **Multi-Model Support** | ✅ Implemented | - | Support for any LiteLLM compatible model/endpoint |
| **Tool Execution** | ✅ Implemented | - | Automatic parallel tool calling and results handling |
| **Hook System** | ✅ Implemented | - | Allow configurable hooks to inject functionality during agent execution |
| **Logging Integration** | ✅ Implemented | - | Built-in logging with custom logger support |
| **Streaming Responses** | 📋 Planned | Medium | Real-time response streaming support |
| **Define Defaults for Standard MCP Client Handlers** | 📋 Planned | Medium | Standard MCP client methods (e.g. sampling, progress) may benefit from default implementations when custom behavior is not needed |
| **Multi-Agent Integration** | 📋 Planned | High | Allow a PocketAgent to accept other PocketAgents as Sub Agents and automatically set up Sub Agents as tools for the Agent to use |
| **Resources Integration** | 📋 Planned | Medium | Automatically set up mcp read_resource functionality as a tool |
### Modality support
| Modality | Status | Priority | Description |
|---------|--------|----------|-------------|
| **Text** | ✅ Implemented | - | Standard text input support |
| **Images** | ✅ Implemented | - | Multi-modal input support for VLMs with option to enable/disable |
| **Audio** | 📋 Planned | Low | Multi-modal input support for LLMs which allow audio inputs |
## Raw data
{
"_id": null,
"home_page": null,
"name": "pocket-agent",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.12",
"maintainer_email": "Chris Egersdoerfer <cegersdoerfer@gmail.com>",
"keywords": "agent, ai, llm, mcp, model-context-protocol",
"author": null,
"author_email": "Chris Egersdoerfer <cegersdoerfer@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/ea/f1/3a2998362228c238c3a5d1ababda504f7109a2f1fe4412d3fd0601bf0d08/pocket_agent-0.1.0.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": null,
"summary": "A lightweight, extensible framework for building LLM agents with Model Context Protocol (MCP) support",
"version": "0.1.0",
"project_urls": {
"Bug Tracker": "https://github.com/DIR-LAB/pocket-agent/issues",
"Changelog": "https://github.com/DIR-LAB/pocket-agent/releases",
"Documentation": "https://github.com/DIR-LAB/pocket-agent#readme",
"Homepage": "https://github.com/DIR-LAB/pocket-agent",
"Repository": "https://github.com/DIR-LAB/pocket-agent"
},
"split_keywords": [
"agent",
" ai",
" llm",
" mcp",
" model-context-protocol"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "cec8955090695b6a00ff983cb12c07a4a406f76188fae2cdd7c437311cb9e414",
"md5": "5f3112a4d2f71305be318aa17ebb8807",
"sha256": "c1aa01079bcbf1ccfe61c49b5bebca9fa7eed48f9a77a7384190987a28b3cd88"
},
"downloads": -1,
"filename": "pocket_agent-0.1.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "5f3112a4d2f71305be318aa17ebb8807",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.12",
"size": 14595,
"upload_time": "2025-09-12T20:26:56",
"upload_time_iso_8601": "2025-09-12T20:26:56.709109Z",
"url": "https://files.pythonhosted.org/packages/ce/c8/955090695b6a00ff983cb12c07a4a406f76188fae2cdd7c437311cb9e414/pocket_agent-0.1.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "eaf13a2998362228c238c3a5d1ababda504f7109a2f1fe4412d3fd0601bf0d08",
"md5": "f3e3f8452c799732b8f680c0fceb4c9b",
"sha256": "bb143b97a1b867adb6d87691cc6c68367b3e5287302b295f54a01b811f13a5f0"
},
"downloads": -1,
"filename": "pocket_agent-0.1.0.tar.gz",
"has_sig": false,
"md5_digest": "f3e3f8452c799732b8f680c0fceb4c9b",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.12",
"size": 1873800,
"upload_time": "2025-09-12T20:26:58",
"upload_time_iso_8601": "2025-09-12T20:26:58.841702Z",
"url": "https://files.pythonhosted.org/packages/ea/f1/3a2998362228c238c3a5d1ababda504f7109a2f1fe4412d3fd0601bf0d08/pocket_agent-0.1.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-09-12 20:26:58",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "DIR-LAB",
"github_project": "pocket-agent",
"github_not_found": true,
"lcname": "pocket-agent"
}