<div align="center">
<img src="docs/docs/assets/images/spade_llm_logo.png" alt="SPADE-LLM Logo" width="200"/>
</div>
<div align="center">
[**Documentation**](https://sosanzma.github.io/spade_llm) | [**Quick Start**](https://sosanzma.github.io/spade_llm/getting-started/quickstart/) | [**Examples**](https://sosanzma.github.io/spade_llm/reference/examples/) | [**API Reference**](https://sosanzma.github.io/spade_llm/reference/)
</div>
# SPADE-LLM
Extension for [SPADE](https://spadeagents.eu) that integrates Large Language Models into multi-agent systems. Build intelligent, collaborative agents that can communicate, reason, and take actions in complex distributed environments.
## Table of Contents
- [Key Features](#key-features)
- [Built-in XMPP Server](#built-in-xmpp-server)
- [Quick Start](#quick-start)
- [Installation](#installation)
- [Architecture](#architecture)
- [Documentation](#documentation)
- [Examples](#examples)
  - [Multi-Provider Support](#multi-provider-support)
  - [Tools and Function Calling](#tools-and-function-calling)
  - [Content Safety with Guardrails](#content-safety-with-guardrails)
  - [Message Routing](#message-routing)
  - [Interactive Chat](#interactive-chat)
  - [Memory Extensions](#memory-extensions)
  - [Context Management](#context-management)
  - [Human-in-the-Loop](#human-in-the-loop)
- [Requirements](#requirements)
- [Contributing](#contributing)
- [License](#license)
## Key Features
- **Built-in XMPP Server** - No external server setup needed! Start agents with one command
- **Multi-Provider Support** - OpenAI, Ollama, LM Studio, vLLM integration
- **Tool System** - Function calling with async execution
- **Context Management** - Multi-conversation support with automatic cleanup
- **Memory Extensions** - Agent-based and agent-thread memory for persistent state
- **Message Routing** - Conditional routing based on LLM responses
- **Guardrails System** - Content filtering and safety controls for input/output
- **MCP Integration** - Model Context Protocol server support
- **Human-in-the-Loop** - Web interface for human expert consultation
## Built-in XMPP Server
SPADE 4+ includes a built-in XMPP server, eliminating the need for external server setup. This is a major advantage over other multi-agent frameworks like AutoGen or Swarm that require complex infrastructure configuration.
### Start the Server
```bash
# Start SPADE's built-in XMPP server
spade run
```
The server automatically handles:
- Agent registration and authentication
- Message routing between agents
- Connection management
- Domain resolution
Agents automatically connect to the built-in server when using standard SPADE agent configuration.
## Quick Start
Get started with SPADE-LLM in just 2 steps:
### Step 1: Start the Built-in XMPP Server
```bash
# Terminal 1: Start SPADE's built-in server
spade run
```
### Step 2: Create and Run Your LLM Agent
```python
# your_agent.py
import spade
from spade_llm import LLMAgent, LLMProvider
async def main():
    provider = LLMProvider.create_openai(
        api_key="your-api-key",
        model="gpt-4o-mini"
    )

    agent = LLMAgent(
        jid="assistant@localhost",  # Connects to built-in server
        password="password",
        provider=provider,
        system_prompt="You are a helpful assistant"
    )

    await agent.start()

if __name__ == "__main__":
    spade.run(main())
```
```bash
# Terminal 2: Run your agent
python your_agent.py
```
That's it! No external XMPP server configuration needed.
## Installation
```bash
pip install spade_llm
```
## Examples
### Multi-Provider Support
```python
# OpenAI
provider = LLMProvider.create_openai(api_key="key", model="gpt-4o-mini")

# Ollama (local)
provider = LLMProvider.create_ollama(model="llama3.1:8b")

# LM Studio (local)
provider = LLMProvider.create_lm_studio(model="local-model")
```
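
The factory methods above compose naturally if you want to switch backends at runtime. A minimal sketch using only the three factories shown; `make_provider` and the `LLM_BACKEND` environment variable are illustrative names, not part of the library:

```python
import os

from spade_llm import LLMProvider

def make_provider() -> LLMProvider:
    """Select a provider from an environment variable (illustrative helper)."""
    backend = os.getenv("LLM_BACKEND", "ollama")
    if backend == "openai":
        # Assumes OPENAI_API_KEY is set in the environment
        return LLMProvider.create_openai(
            api_key=os.environ["OPENAI_API_KEY"],
            model="gpt-4o-mini"
        )
    if backend == "lm_studio":
        return LLMProvider.create_lm_studio(model="local-model")
    return LLMProvider.create_ollama(model="llama3.1:8b")
```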
### Tools and Function Calling
```python
from spade_llm import LLMTool
async def get_weather(city: str) -> str:
    return f"Weather in {city}: 22°C, sunny"

weather_tool = LLMTool(
    name="get_weather",
    description="Get weather for a city",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
    },
    func=get_weather
)

agent = LLMAgent(
    jid="assistant@localhost",  # Uses built-in server
    password="password",
    provider=provider,
    tools=[weather_tool]
)
```
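
Tools with several parameters follow the same JSON Schema convention, listing each property and marking which ones are required. A sketch with an illustrative `convert_currency` function (the fixed rate is a stand-in for a real exchange-rate lookup, and passing two tools assumes the `tools` list accepts multiple entries):

```python
async def convert_currency(amount: float, from_currency: str, to_currency: str) -> str:
    rate = 1.1  # placeholder; a real tool would query an exchange-rate API
    return f"{amount} {from_currency} = {amount * rate:.2f} {to_currency}"

currency_tool = LLMTool(
    name="convert_currency",
    description="Convert an amount between two currencies",
    parameters={
        "type": "object",
        "properties": {
            "amount": {"type": "number"},
            "from_currency": {"type": "string"},
            "to_currency": {"type": "string"}
        },
        "required": ["amount", "from_currency", "to_currency"]
    },
    func=convert_currency
)

# Register multiple tools on the same agent
agent = LLMAgent(
    jid="assistant@localhost",
    password="password",
    provider=provider,
    tools=[weather_tool, currency_tool]
)
```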
### Content Safety with Guardrails
```python
from spade_llm.guardrails import KeywordGuardrail, GuardrailAction
# Block harmful content
safety_filter = KeywordGuardrail(
    name="safety_filter",
    blocked_keywords=["hack", "exploit", "malware"],
    action=GuardrailAction.BLOCK,
    blocked_message="I cannot help with potentially harmful activities."
)

agent = LLMAgent(
    jid="assistant@localhost",  # Uses built-in server
    password="password",
    provider=provider,
    input_guardrails=[safety_filter]  # Filter incoming messages
)
```
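
Since `input_guardrails` takes a list, filters can presumably be stacked. A sketch adding a second, illustrative `KeywordGuardrail`; the keywords and ordering are examples only:

```python
pii_filter = KeywordGuardrail(
    name="pii_filter",
    blocked_keywords=["social security number", "credit card number"],
    action=GuardrailAction.BLOCK,
    blocked_message="Please do not share sensitive personal data."
)

agent = LLMAgent(
    jid="assistant@localhost",  # Uses built-in server
    password="password",
    provider=provider,
    input_guardrails=[safety_filter, pii_filter]  # Applied in list order (assumption)
)
```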
### Message Routing
```python
def router(msg, response, context):
    if "technical" in response.lower():
        return "tech-support@example.com"
    return str(msg.sender)

agent = LLMAgent(
    jid="router@localhost",  # Uses built-in server
    password="password",
    provider=provider,
    routing_function=router
)
```
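
In the example above, returning `str(msg.sender)` sends the reply back to the original sender, while returning another JID redirects it. A slightly richer sketch with the same `(msg, response, context)` signature; the destination JIDs are illustrative:

```python
def keyword_router(msg, response, context):
    text = response.lower()
    if "invoice" in text or "refund" in text:
        return "billing@localhost"
    if "error" in text or "traceback" in text:
        return "tech-support@localhost"
    return str(msg.sender)  # default: reply to the sender

agent = LLMAgent(
    jid="router@localhost",
    password="password",
    provider=provider,
    routing_function=keyword_router
)
```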
### Interactive Chat
```python
from spade_llm import ChatAgent
# Create chat interface
chat_agent = ChatAgent(
    jid="human@localhost",  # Uses built-in server
    password="password",
    target_agent_jid="assistant@localhost"
)

await chat_agent.start()
await chat_agent.run_interactive()  # Start interactive chat
```
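
The snippet above uses `await` at the top level, so in a standalone script it belongs inside an async entry point, mirroring the Quick Start pattern. A runnable sketch (the `stop()` call assumes the standard SPADE agent lifecycle):

```python
import spade
from spade_llm import ChatAgent

async def main():
    chat_agent = ChatAgent(
        jid="human@localhost",
        password="password",
        target_agent_jid="assistant@localhost"
    )
    await chat_agent.start()
    await chat_agent.run_interactive()  # Blocks until the chat session ends
    await chat_agent.stop()

if __name__ == "__main__":
    spade.run(main())
```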
### Memory Extensions
```python
# Agent-based memory: Single shared memory per agent
agent = LLMAgent(
    jid="assistant@localhost",  # Uses built-in server
    password="password",
    provider=provider,
    agent_base_memory=(True, "./memory.db")  # Enabled with custom path
)

# Agent-thread memory: Separate memory per conversation
agent = LLMAgent(
    jid="assistant@localhost",  # Uses built-in server
    password="password",
    provider=provider,
    agent_thread_memory=(True, "./thread_memory.db")  # Enabled with custom path
)

# Default memory path (if path not specified)
agent = LLMAgent(
    jid="assistant@localhost",  # Uses built-in server
    password="password",
    provider=provider,
    agent_base_memory=(True, None)  # Uses default path
)
```
### Context Management
```python
from spade_llm.context import SmartWindowSizeContext, FixedWindowSizeContext
# Smart context: Dynamic window sizing based on content
smart_context = SmartWindowSizeContext(
    max_tokens=4000,
    include_system_prompt=True,
    preserve_last_k_messages=5
)

# Fixed context: Traditional sliding window
fixed_context = FixedWindowSizeContext(
    max_messages=20,
    include_system_prompt=True
)

agent = LLMAgent(
    jid="assistant@localhost",  # Uses built-in server
    password="password",
    provider=provider,
    context_manager=smart_context
)
```
### Human-in-the-Loop
```python
from spade_llm import HumanInTheLoopTool
# Create tool for human consultation
human_tool = HumanInTheLoopTool(
    human_expert_jid="expert@localhost",  # Uses built-in server
    timeout=300.0  # 5 minutes
)

agent = LLMAgent(
    jid="assistant@localhost",  # Uses built-in server
    password="password",
    provider=provider,
    tools=[human_tool]  # Pass tools in constructor
)
# Start web interface for human expert
# python -m spade_llm.human_interface.web_server
# Open http://localhost:8080 and connect as expert
```
## Architecture
```mermaid
graph LR
    A[LLMAgent] --> C[ContextManager]
    A --> D[LLMProvider]
    A --> E[LLMTool]
    A --> G[Guardrails]
    A --> M[Memory]
    D --> F[OpenAI/Ollama/etc]
    G --> H[Input/Output Filtering]
    E --> I[Human-in-the-Loop]
    E --> J[MCP]
    E --> P[CustomTool/LangchainTool]
    J --> K[STDIO]
    J --> L[HTTP Streaming]
    M --> N[Agent-based]
    M --> O[Agent-thread]
```
## Documentation
- **[Installation](https://sosanzma.github.io/spade_llm/getting-started/installation/)** - Setup and requirements
- **[Quick Start](https://sosanzma.github.io/spade_llm/getting-started/quickstart/)** - Basic usage examples
- **[Providers](https://sosanzma.github.io/spade_llm/guides/providers/)** - LLM provider configuration
- **[Tools](https://sosanzma.github.io/spade_llm/guides/tools-system/)** - Function calling system
- **[Guardrails](https://sosanzma.github.io/spade_llm/guides/guardrails/)** - Content filtering and safety
- **[API Reference](https://sosanzma.github.io/spade_llm/reference/)** - Complete API documentation
## Examples Directory
The `/examples` directory contains complete working examples:
- `multi_provider_chat_example.py` - Chat with different LLM providers
- `ollama_with_tools_example.py` - Local models with tool calling
- `langchain_tools_example.py` - LangChain tool integration
- `guardrails_example.py` - Content filtering and safety controls
- `human_in_the_loop_example.py` - Human expert consultation via web interface
- `valencia_multiagent_trip_planner.py` - Multi-agent workflow
## Requirements
- Python 3.10+
- SPADE 4.1.2+ (the built-in XMPP server requires SPADE 4 or later)
## Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Submit a pull request
See [Contributing Guide](https://sosanzma.github.io/spade_llm/contributing/) for details.
## License
MIT License