# 🚀 LangSwarm
**LangSwarm** is a comprehensive multi-agent framework that combines intelligent workflows, persistent memory, and zero-latency MCP (Model Context Protocol) tools. Build sophisticated AI systems with YAML workflows, Python agents, and integrated tool orchestration.
## 🆕 Latest Updates
### 🚀 **Revolutionary Structured JSON Responses** (v0.0.50+)
- **Breakthrough Design**: Agents can now provide BOTH user responses AND tool calls simultaneously
- **No More Forced Choice**: Previously agents chose between communication OR tool usage - now they do both
- **Dual Response Modes**: Integrated (polished final answer) or Streaming (immediate feedback + tool results)
- **Natural Interactions**: Users see what agents are doing while tools execute
```json
{
  "response": "I'll check that configuration file for you to analyze its contents",
  "mcp": {
    "tool": "filesystem",
    "method": "read_file",
    "params": {"path": "/tmp/config.json"}
  }
}
```
### 🔥 **Local MCP Mode** - Zero Latency Tools
- **1000x Faster**: Direct function calls vs HTTP (0ms vs 50-100ms)
- **Zero Setup**: No containers, no external servers
- **Full Compatibility**: Works with existing MCP workflows
### 💾 **Enhanced Memory System**
- **BigQuery Integration**: Analytics-ready conversation storage
- **Multiple Backends**: SQLite, ChromaDB, Redis, Qdrant, Elasticsearch
- **Auto-Embeddings**: Semantic search built-in
### 🛠️ **Fixed Dependencies**
- **Complete Installation**: `pip install langswarm` now installs all dependencies
- **30+ Libraries**: LangChain, OpenAI, FastAPI, Discord, and more
- **Ready to Use**: No manual dependency management needed
## ✨ Key Features
### 🧠 **Multi-Agent Intelligence**
- **Workflow Orchestration**: Define complex agent interactions in YAML
- **Parallel Execution**: Fan-out/fan-in patterns with async support
- **Intelligent Tool Selection**: Agents automatically choose the right tools
- **Memory Integration**: Persistent conversation and context storage
### 🔄 **Dual Response Modes**
- **Streaming Mode**: Show immediate response, then tool results (conversational)
- **Integrated Mode**: Combine user explanation with tool results (polished)
- **Transparent AI**: Users see what agents are doing while tools execute
- **Configurable**: Set `response_mode: "streaming"` or `"integrated"` per agent
### 🔧 **Local MCP Tools (Zero Latency)**
- **Filesystem Access**: Read files, list directories with `local://filesystem`
- **GitHub Integration**: Issues, PRs, workflows via `stdio://github_mcp`
- **Custom Tools**: Build your own MCP tools with BaseMCPToolServer
- **Mixed Deployment**: Combine local, HTTP, and stdio MCP tools
### 💾 **Persistent Memory**
- **Multiple Backends**: SQLite, ChromaDB, Redis, Qdrant, Elasticsearch, BigQuery
- **Conversation History**: Long-term agent memory across sessions
- **Vector Search**: Semantic retrieval with embedding models
- **Analytics Ready**: BigQuery integration for large-scale analysis
### 🌐 **UI Integrations**
- **Chat Interfaces**: Discord, Telegram, Slack bots
- **Web APIs**: FastAPI endpoints with async support
- **Cloud Ready**: AWS SES, Twilio, Mailgun integrations
---
## ⚡️ Quick Start
### Installation
```bash
pip install langswarm
```
### Minimal Example
```python
from langswarm.core.config import LangSwarmConfigLoader, WorkflowExecutor
# Load configuration
loader = LangSwarmConfigLoader()
workflows, agents, tools, *_ = loader.load()
# Execute workflow
executor = WorkflowExecutor(workflows, agents)
result = executor.run_workflow("simple_chat", "Hello, world!")
print(result)
```
> ☑️ No complex setup. Just install, define YAML, and run.  
> 💡 **New**: Configure `response_mode: "streaming"` for immediate feedback or `"integrated"` for polished responses!
---
## 🔧 Local MCP Tools
LangSwarm includes a revolutionary **local MCP mode** that provides zero-latency tool execution without containers or external servers.
* True multi-agent logic: parallel execution, loops, retries
* Named step routing: pass data between agents with precision
* Async fan-out, sync chaining, and subflow support
### 🔌 Bring Your Stack
* Use OpenAI, Claude, Hugging Face, or LangChain agents
* Embed tools or functions directly as steps
* Drop in LangChain or LlamaIndex components
### Building Custom MCP Tools
```python
from langswarm.mcp.server_base import BaseMCPToolServer
from pydantic import BaseModel

class MyInput(BaseModel):
    message: str

class MyOutput(BaseModel):
    response: str

def my_handler(message: str):
    return {"response": f"Processed: {message}"}

# Create local MCP server
server = BaseMCPToolServer(
    name="my_tool",
    description="My custom tool",
    local_mode=True  # Enable zero-latency mode
)

server.add_task(
    name="process_message",
    description="Process a message",
    input_model=MyInput,
    output_model=MyOutput,
    handler=my_handler
)

# Tool is ready for use with local://my_tool
```
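The same server can also be exposed over HTTP if you later need a remote deployment. A minimal sketch, assuming the `build_app()` helper shown in the custom analytics example later in this README; the module name and uvicorn invocation are illustrative:

```python
# Build a FastAPI app that serves the registered tasks over HTTP.
# build_app() mirrors its usage in the analytics example below;
# "my_tool_module" is a hypothetical module name for this sketch.
app = server.build_app()

# Run with: uvicorn my_tool_module:app --host 0.0.0.0 --port 8080
```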
### MCP Performance Comparison
| Mode | Latency | Setup | Use Case |
|------|---------|-------|----------|
| **Local Mode** | **0ms** | Zero setup | Development, simple tools |
| HTTP Mode | 50-100ms | Docker/server | Production, complex tools |
| Stdio Mode | 20-50ms | External process | GitHub, complex APIs |
---
## 💾 Memory & Persistence
### Supported Memory Backends
```yaml
# agents.yaml
agents:
  - id: memory_agent
    type: openai
    model: gpt-4o
    memory_adapter:
      type: bigquery  # or sqlite, chromadb, redis, qdrant
      config:
        project_id: "my-project"
        dataset_id: "langswarm_memory"
        table_id: "agent_conversations"
```
#### BigQuery (Analytics Ready)
```python
# Automatic conversation analytics
from langswarm.memory.adapters.langswarm import BigQueryAdapter
adapter = BigQueryAdapter(
    project_id="my-project",
    dataset_id="ai_conversations",
    table_id="agent_memory"
)
# Stores conversations with automatic timestamp, metadata, embeddings
```
#### ChromaDB (Vector Search)
```python
from langswarm.memory.adapters.langswarm import ChromaDBAdapter
adapter = ChromaDBAdapter(
    persist_directory="./memory",
    collection_name="agent_memory"
)
# Automatic semantic search and retrieval
```
### Memory Configuration
```yaml
# retrievers.yaml
retrievers:
  semantic_search:
    type: langswarm
    config:
      adapter_type: chromadb
      top_k: 5
      similarity_threshold: 0.7
```
---
## 🤖 Agent Types & Configuration
### OpenAI Agents
```yaml
agents:
  - id: gpt_agent
    type: openai
    model: gpt-4o
    temperature: 0.7
    system_prompt: "You are a helpful assistant"
    memory_adapter:
      type: sqlite
      config:
        db_path: "./memory.db"
```
### Structured JSON Response Agents
```yaml
agents:
  # Streaming Mode: Immediate response, then tool results
  - id: streaming_assistant
    type: langchain-openai
    model: gpt-4o-mini-2024-07-18
    response_mode: "streaming"  # Key setting for immediate feedback
    system_prompt: |
      Always respond with immediate feedback before using tools:
      {
        "response": "I'll help you with that right now. Let me check...",
        "mcp": {"tool": "filesystem", "method": "read_file", "params": {...}}
      }
    tools: [filesystem]

  # Integrated Mode: Polished final response (default)
  - id: integrated_assistant
    type: langchain-openai
    model: gpt-4o-mini-2024-07-18
    response_mode: "integrated"  # Combines explanation with tool results
    system_prompt: |
      Provide both explanations and tool calls:
      {
        "response": "I'll analyze that configuration file for you",
        "mcp": {"tool": "filesystem", "method": "read_file", "params": {...}}
      }
    tools: [filesystem]
```
### LangChain Integration
```yaml
agents:
  - id: langchain_agent
    type: langchain-openai
    model: gpt-4o-mini
    memory_adapter:
      type: chromadb
```
### Custom Agents
```python
from langswarm.core.base.bot import Bot

class CustomAgent(Bot):
    def chat(self, message: str) -> str:
        # Your custom logic
        return "Custom response"

# Register in config
loader.register_agent_class("custom", CustomAgent)
```
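A quick usage sketch, assuming `Bot` subclasses can be instantiated without extra constructor arguments (the base class signature is not shown in this README):

```python
# Hypothetical direct invocation; in practice the agent is usually
# driven through workflows after register_agent_class().
agent = CustomAgent()
print(agent.chat("Hello"))  # -> "Custom response"
```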
---
## 🔄 Response Mode Examples
### Streaming Mode User Experience
**User:** "Check my config file"
**Agent Response (Immediate):**
```
"I'll check that configuration file for you to analyze its contents"
```
**Tool Results (After execution):**
```
[Tool executed successfully]
Found your config.json file. It contains:
- Database connection settings
- API endpoint configurations
- Authentication tokens
```
### Integrated Mode User Experience
**User:** "Check my config file"
**Agent Response (Final):**
```
"I analyzed your configuration file and found it contains database connection
settings for PostgreSQL on localhost:5432, API endpoints for your production
environment, and properly formatted authentication tokens. The configuration
appears valid and ready for deployment."
```
---
## 🔄 Workflow Patterns
### Sequential Processing
```yaml
workflows:
  main_workflow:
    - id: analyze_document
      steps:
        - id: extract_text
          agent: extractor
          input: ${context.user_input}
          output: {to: summarize}

        - id: summarize
          agent: summarizer
          input: ${context.step_outputs.extract_text}
          output: {to: user}
```
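Running this workflow uses the same executor API as the Quick Start example; the input string below is a placeholder:

```python
# Execute the sequential document-analysis workflow defined above.
result = executor.run_workflow("main_workflow", "Summarize this report: ...")
print(result)
```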
### Parallel Fan-out
```yaml
workflows:
  main_workflow:
    - id: parallel_analysis
      steps:
        - id: sentiment_analysis
          agent: sentiment_agent
          fan_key: "analysis"
          input: ${context.user_input}

        - id: topic_extraction
          agent: topic_agent
          fan_key: "analysis"
          input: ${context.user_input}

        - id: combine_results
          agent: combiner
          fan_key: "analysis"
          is_fan_in: true
          args: {steps: ["sentiment_analysis", "topic_extraction"]}
```
### Tool Integration (no_mcp pattern)
```yaml
workflows:
  main_workflow:
    - id: agent_tool_use
      steps:
        - id: agent_decision
          agent: universal_agent
          input: ${context.user_input}
          output:
            to: user
```
---
## 🌐 UI & Integration Examples
### Discord Bot
```python
from langswarm.ui.discord_gateway import DiscordGateway

gateway = DiscordGateway(
    token="your_token",
    workflow_executor=executor
)
gateway.run()
```
### FastAPI Web Interface
```python
from langswarm.ui.api import create_api_app
app = create_api_app(executor)
# uvicorn main:app --host 0.0.0.0 --port 8000
```
### Telegram Bot
```python
from langswarm.ui.telegram_gateway import TelegramGateway

gateway = TelegramGateway(
    token="your_bot_token",
    workflow_executor=executor
)
gateway.start_polling()
```
---
## 📊 Monitoring & Analytics
### Workflow Intelligence
```yaml
# workflows.yaml
workflows:
  main_workflow:
    - id: monitored_workflow
      settings:
        intelligence:
          track_performance: true
          log_level: "info"
          analytics_backend: "bigquery"
```
### Memory Analytics
```sql
-- Query conversation patterns in BigQuery
SELECT
  agent_id,
  COUNT(*) as conversations,
  AVG(LENGTH(content)) as avg_message_length,
  DATE(created_at) as date
FROM `project.dataset.agent_conversations`
GROUP BY agent_id, date
ORDER BY date DESC
```
---
## 🚀 Deployment
### Local Development
```bash
# Clone and install
git clone https://github.com/your-org/langswarm.git
cd langswarm
pip install -e .
# Run examples
python examples/simple_chat.py
```
### Docker
```dockerfile
FROM python:3.11-slim
COPY . /app
WORKDIR /app
RUN pip install -e .
CMD ["python", "main.py"]
```
### Cloud Run
```yaml
# cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/langswarm', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/langswarm']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'langswarm', '--image', 'gcr.io/$PROJECT_ID/langswarm']
```
---
## 🔧 Advanced Configuration
### Environment Variables
```bash
# API Keys
export OPENAI_API_KEY="your_key"
export ANTHROPIC_API_KEY="your_key"
# Memory Backends
export BIGQUERY_PROJECT_ID="your_project"
export REDIS_URL="redis://localhost:6379"
export QDRANT_URL="http://localhost:6333"
# MCP Tools
export GITHUB_TOKEN="your_github_token"
```
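These variables are read from the process environment at startup. A generic sanity check before launching (a plain-Python sketch, not a LangSwarm API):

```python
import os

# Fail fast if a required key is missing; extend the tuple as needed.
for var in ("OPENAI_API_KEY",):
    if not os.environ.get(var):
        raise RuntimeError(f"{var} is not set")
```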
### Configuration Structure
```
your_project/
├── workflows.yaml     # Workflow definitions
├── agents.yaml        # Agent configurations
├── tools.yaml         # Tool registrations
├── retrievers.yaml    # Memory configurations
├── secrets.yaml       # API keys (gitignored)
└── main.py            # Your application
```
---
## 🧪 Testing
```bash
# Run all tests
pytest tests/
# Test specific components
pytest tests/core/test_workflow_executor.py
pytest tests/mcp/test_local_mode.py
pytest tests/memory/test_adapters.py
# Test with coverage
pytest --cov=langswarm tests/
```
---
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Commit changes: `git commit -m 'Add amazing feature'`
4. Push to branch: `git push origin feature/amazing-feature`
5. Open a Pull Request
### Development Setup
```bash
git clone https://github.com/your-org/langswarm.git
cd langswarm
pip install -e ".[dev]"
pre-commit install
```
---
## 📈 Performance
### Local MCP Benchmarks
- **Local Mode**: 0ms latency, 1000+ ops/sec
- **HTTP Mode**: 50-100ms latency, 50-100 ops/sec
- **Stdio Mode**: 20-50ms latency, 100-200 ops/sec
### Memory Performance
- **SQLite**: <1ms query time, perfect for development
- **ChromaDB**: <10ms semantic search, great for RAG
- **BigQuery**: Batch analytics, unlimited scale
- **Redis**: <1ms cache access, production ready
---
## 📄 License
MIT License - see [LICENSE](LICENSE) file for details.
---
## 🙋‍♂️ Support
- 📖 **Documentation**: Coming soon
- 🐛 **Issues**: [GitHub Issues](https://github.com/your-org/langswarm/issues)
- 💬 **Discussions**: [GitHub Discussions](https://github.com/your-org/langswarm/discussions)
- 📧 **Email**: support@langswarm.dev
---
**Built with ❤️ for the AI community**
*LangSwarm: Where agents collaborate, tools integrate, and intelligence scales.*
---
## 🚀 Registering and Using MCP Tools (Filesystem, GitHub, etc.)
LangSwarm supports both local and remote MCP tools. **The recommended pattern is agent-driven invocation:**
- The agent outputs a tool id and arguments in JSON.
- The workflow engine routes the call to the correct MCP tool (local or remote) using the tool's id and configuration.
- **Do not use direct mcp_call steps for MCP tools in your workflow YAML.**
### 1. **Register MCP Tools in `tools.yaml`**
- **type** must start with `mcp` (e.g., `mcpfilesystem`, `mcpgithubtool`).
- **local_mode: true** for local MCP tools.
- **mcp_url** for remote MCP tools (e.g., `stdio://github_mcp`).
- **id** is the logical name the agent will use.
**Example:**
```yaml
tools:
  - id: filesystem
    type: mcpfilesystem
    description: "Local filesystem MCP tool"
    local_mode: true

  - id: github_mcp
    type: mcpgithubtool
    description: "Official GitHub MCP server"
    mcp_url: "stdio://github_mcp"
```
| Field | Required? | Example Value | Notes |
|------------|-----------|----------------------|---------------------------------------|
| id | Yes | filesystem | Used by agent and workflow |
| type | Yes | mcpfilesystem | Must start with `mcp` |
| description| Optional | ... | Human-readable |
| local_mode | Optional | true | For local MCP tools |
| mcp_url | Optional | stdio://github_mcp | For remote MCP tools |
**Best Practices:**
- Use clear, descriptive `id` and `type` values.
- Only use `metadata` for direct Python function tools (not MCP tools).
- For remote MCP tools, specify `mcp_url` (and optionally `image`/`env` for deployment).
- Agents should be prompted to refer to tools by their `id`.
- **Do not use `local://` in new configs; use `local_mode: true` instead.**
---
### 2. **Configure Your Agent (agents.yaml)**
Prompt the agent to use the tool by its `id`:
```yaml
agents:
  - id: universal_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You can use these tools:
      - filesystem: List/read files (needs: path)
      - github_mcp: GitHub operations (needs: operation, repo, title, body, etc.)

      Always return JSON:
      {"tool": "filesystem", "args": {"path": "/tmp"}}
      {"tool": "github_mcp", "args": {"operation": "create_issue", "repo": "octocat/Hello-World", "title": "Bug", "body": "There is a bug."}}
```
---
### 3. **Write Your Workflow (workflows.yaml)**
Let the agent output trigger the tool call (no direct mcp_call step!):
```yaml
workflows:
  main_workflow:
    - id: agent_tool_use
      steps:
        - id: agent_decision
          agent: universal_agent
          input: ${context.user_input}
          output:
            to: user
```
- The agent's output (e.g., `{ "tool": "filesystem", "args": { "path": "/tmp" } }`) is parsed by the workflow engine, which looks up the tool by `id` and routes the call.
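For intuition, the routing step can be pictured like this (an illustrative sketch of the dispatch logic, not LangSwarm's actual implementation):

```python
import json

def route_tool_call(agent_output: str, tools: dict):
    """Parse the agent's JSON and dispatch to the tool registered under that id."""
    call = json.loads(agent_output)   # e.g. {"tool": "filesystem", "args": {...}}
    handler = tools[call["tool"]]     # looked up from tools.yaml registrations
    return handler(**call["args"])    # local call or remote dispatch
```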
---
### 4. **Legacy/Low-Level Pattern (Not Recommended for MCP Tools)**
If you see examples like this:
```yaml
function: langswarm.core.utils.workflows.functions.mcp_call
args:
  mcp_url: "local://filesystem"
  task: "list_directory"
  params: {"path": "/tmp"}
```
**This is a low-level/legacy pattern and should not be used for MCP tools.**
---
### 5. **How It Works**
1. **Agent** outputs a tool id and arguments in JSON.
2. **Workflow engine** looks up the tool by `id` in `tools.yaml` and routes the call (local or remote, as configured).
3. **Parameter values** are provided by the agent at runtime, not hardcoded in `tools.yaml`.
4. **No need to use `local://` or direct mcp_call steps.**
---
### 6. **Summary Table: MCP Tool Registration**
| Field | Required? | Example Value | Notes |
|------------|-----------|----------------------|---------------------------------------|
| id | Yes | filesystem | Used by agent and workflow |
| type | Yes | mcpfilesystem | Must start with `mcp` |
| description| Optional | ... | Human-readable |
| local_mode | Optional | true | For local MCP tools |
| mcp_url | Optional | stdio://github_mcp | For remote MCP tools |
---
### 7. **Best Practices**
- Register all MCP tools with `type` starting with `mcp`.
- Use `local_mode: true` for local tools, `mcp_url` for remote tools.
- Prompt agents to refer to tools by their `id`.
- Do not use `local://` in new configs.
- Do not use direct mcp_call steps for MCP tools in workflows.
---
## 🧠 Enhanced MCP Patterns: Intent-Based vs Direct
LangSwarm supports two powerful patterns for MCP tool invocation - intent-based and direct - plus a hybrid mode that combines them, solving the duplication problem where agents needed deep implementation knowledge of every tool.
### 🎯 **The Problem We Solved**
**Before (Problematic):**
```json
{"mcp": {"tool": "filesystem", "method": "read_file", "params": {"path": "/tmp/file.txt"}}}
```
❌ Agents needed exact method names and parameter structures  
❌ Duplication between agent knowledge and tool implementation  
❌ No abstraction - agents couldn't focus on intent
**After (Enhanced):**
```json
{"mcp": {"tool": "github_mcp", "intent": "create issue about bug", "context": "auth failing"}}
```
✅ Agents express natural language intent  
✅ Tools handle implementation details  
✅ True separation of concerns
---
### 🔄 **Pattern 1: Intent-Based (Recommended for Complex Tools)**
Agents provide high-level intent, tool workflows handle orchestration.
#### **Tools Configuration (tools.yaml):**
```yaml
tools:
  # Intent-based tool with orchestration workflow
  - id: github_mcp
    type: mcpgithubtool
    description: "GitHub repository management - supports issue creation, PR management, file operations"
    mcp_url: "stdio://github_mcp"
    pattern: "intent"
    main_workflow: "main_workflow"

  # Analytics tool supporting complex operations
  - id: analytics_tool
    type: mcpanalytics
    description: "Data analysis and reporting - supports trend analysis, metric calculation, report generation"
    mcp_url: "http://analytics-service:8080"
    pattern: "intent"
    main_workflow: "analytics_workflow"
```
#### **Agent Configuration (agents.yaml):**
```yaml
agents:
  - id: intent_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You are an intelligent assistant with access to intent-based tools.

      Available tools:
      - github_mcp: GitHub repository management (describe what you want to do)
      - analytics_tool: Data analysis and reporting (describe your analysis needs)

      For complex operations, use intent-based pattern:
      {
        "mcp": {
          "tool": "github_mcp",
          "intent": "create an issue about authentication bug",
          "context": "Users can't log in after the latest security update - critical priority"
        }
      }

      The tool workflow will handle method selection, parameter building, and execution.
```
#### **Workflow Configuration (workflows.yaml):**
```yaml
workflows:
  main_workflow:
    - id: intent_based_workflow
      steps:
        - id: agent_intent
          agent: intent_agent
          input: ${context.user_input}
          output:
            to: user
```
#### **Tool Workflow (langswarm/mcp/tools/github_mcp/workflows.yaml):**
```yaml
workflows:
  main_workflow:
    - id: use_github_mcp_tool
      description: Intent-based GitHub tool orchestration
      inputs:
        - user_input

      steps:
        # 1) Interpret intent and choose appropriate GitHub method
        - id: choose_tool
          agent: github_action_decider
          input:
            user_query: ${context.user_input}
            available_tools:
              - name: create_issue
                description: Create a new issue in a repository
              - name: list_repositories
                description: List repositories for a user or organization
              - name: get_file_contents
                description: Read the contents of a file in a repository
              # ... more tools
          output:
            to: fetch_schema

        # 2) Get the schema for the selected method
        - id: fetch_schema
          function: langswarm.core.utils.workflows.functions.mcp_fetch_schema
          args:
            mcp_url: "stdio://github_mcp"
            mode: stdio
          output:
            to: build_input

        # 3) Build specific parameters from intent + schema
        - id: build_input
          agent: github_input_builder
          input:
            user_query: ${context.user_input}
            schema: ${context.step_outputs.fetch_schema}
          output:
            to: call_tool

        # 4) Execute the MCP call
        - id: call_tool
          function: langswarm.core.utils.workflows.functions.mcp_call
          args:
            mcp_url: "stdio://github_mcp"
            mode: stdio
            payload: ${context.step_outputs.build_input}
          output:
            to: summarize

        # 5) Format results for the user
        - id: summarize
          agent: summarizer
          input: ${context.step_outputs.call_tool}
          output:
            to: user
```
---
### ⚡ **Pattern 2: Direct (Fallback for Simple Tools)**
Agents provide specific method and parameters for straightforward operations.
#### **Tools Configuration (tools.yaml):**
```yaml
tools:
  # Direct tool for simple operations
  - id: filesystem
    type: mcpfilesystem
    description: "Direct file operations"
    local_mode: true
    pattern: "direct"
    methods:
      - read_file: "Read file contents"
      - list_directory: "List directory contents"
      - write_file: "Write content to file"

  # Calculator tool for simple math
  - id: calculator
    type: mcpcalculator
    description: "Mathematical operations"
    local_mode: true
    pattern: "direct"
    methods:
      - calculate: "Evaluate mathematical expression"
      - solve_equation: "Solve algebraic equation"
```
#### **Agent Configuration (agents.yaml):**
```yaml
agents:
  - id: direct_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You are an assistant with access to direct tools that require specific method calls.

      Available tools:
      - filesystem: File operations
        Methods: read_file(path), list_directory(path), write_file(path, content)
      - calculator: Mathematical operations
        Methods: calculate(expression), solve_equation(equation)

      For simple operations, use direct pattern:
      {
        "mcp": {
          "tool": "filesystem",
          "method": "read_file",
          "params": {"path": "/tmp/config.json"}
        }
      }

      {
        "mcp": {
          "tool": "calculator",
          "method": "calculate",
          "params": {"expression": "2 + 2 * 3"}
        }
      }
```
#### **Workflow Configuration (workflows.yaml):**
```yaml
workflows:
  direct_workflow:
    - id: direct_tool_workflow
      steps:
        - id: agent_direct_call
          agent: direct_agent
          input: ${context.user_input}
          output:
            to: user
```
---
### 🔄 **Pattern 3: Hybrid (Both Patterns Supported)**
Advanced tools that support both intent-based and direct patterns.
#### **Tools Configuration (tools.yaml):**
```yaml
tools:
  # Hybrid tool supporting both patterns
  - id: advanced_tool
    type: mcpadvanced
    description: "Advanced data processing tool"
    mcp_url: "http://advanced-service:8080"
    pattern: "hybrid"
    main_workflow: "advanced_workflow"
    methods:
      - get_metrics: "Get current system metrics"
      - export_data: "Export data in specified format"
      - simple_query: "Execute simple database query"
```
#### **Agent Configuration (agents.yaml):**
```yaml
agents:
  - id: hybrid_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You have access to a hybrid tool that supports both patterns.

      Available tools:
      - advanced_tool: Data processing (both intent-based and direct)

      Use intent-based for complex operations:
      {
        "mcp": {
          "tool": "advanced_tool",
          "intent": "analyze quarterly sales trends and generate report",
          "context": "Focus on Q3-Q4 comparison with regional breakdown"
        }
      }

      Use direct for simple operations:
      {
        "mcp": {
          "tool": "advanced_tool",
          "method": "get_metrics",
          "params": {"metric_type": "cpu_usage"}
        }
      }

      Choose the appropriate pattern based on operation complexity.
```
---
### 📋 **Complete YAML Example: Mixed Patterns**
#### **Full Project Structure:**
```
my_project/
├── workflows.yaml    # Main workflow definitions
├── agents.yaml       # Agent configurations
├── tools.yaml        # Tool registrations
└── main.py           # Application entry point
```
#### **workflows.yaml:**
```yaml
workflows:
  # Main workflow supporting both patterns
  main_workflow:
    - id: mixed_patterns_workflow
      steps:
        - id: intelligent_agent
          agent: mixed_pattern_agent
          input: ${context.user_input}
          output:
            to: user

  # Example workflow demonstrating sequential tool use
  sequential_workflow:
    - id: file_then_github
      steps:
        # Step 1: Read local file (direct pattern)
        - id: read_config
          agent: file_agent
          input: "Read the configuration file /tmp/app.conf"
          output:
            to: create_issue

        # Step 2: Create GitHub issue based on file content (intent pattern)
        - id: create_issue
          agent: github_agent
          input: |
            Create a GitHub issue about configuration problems.
            Configuration content: ${context.step_outputs.read_config}
          output:
            to: user
```
#### **agents.yaml:**
```yaml
agents:
  # Agent that can use both patterns intelligently
  - id: mixed_pattern_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You are an intelligent assistant with access to both intent-based and direct tools.

      **Intent-Based Tools** (describe what you want to do):
      - github_mcp: GitHub repository management
      - analytics_tool: Data analysis and reporting

      **Direct Tools** (specify method and parameters):
      - filesystem: File operations
        Methods: read_file(path), list_directory(path)
      - calculator: Mathematical operations
        Methods: calculate(expression)

      **Usage Examples:**

      Intent-based:
      {
        "mcp": {
          "tool": "github_mcp",
          "intent": "create issue about performance problem",
          "context": "API response times increased by 50% after deployment"
        }
      }

      Direct:
      {
        "mcp": {
          "tool": "filesystem",
          "method": "read_file",
          "params": {"path": "/tmp/config.json"}
        }
      }

      Choose the appropriate pattern based on complexity:
      - Use intent-based for complex operations requiring orchestration
      - Use direct for simple, well-defined method calls

  # Specialized agent for file operations
  - id: file_agent
    type: openai
    model: gpt-4o-mini
    system_prompt: |
      You specialize in file operations using direct tool calls.

      Available tool:
      - filesystem: read_file(path), list_directory(path)

      Always return:
      {
        "mcp": {
          "tool": "filesystem",
          "method": "read_file",
          "params": {"path": "/path/to/file"}
        }
      }

  # Specialized agent for GitHub operations
  - id: github_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You specialize in GitHub operations using intent-based patterns.

      Available tool:
      - github_mcp: GitHub repository management

      Always return:
      {
        "mcp": {
          "tool": "github_mcp",
          "intent": "describe what you want to do",
          "context": "provide relevant context and details"
        }
      }
```
#### **tools.yaml:**
```yaml
tools:
  # Intent-based tools with orchestration workflows
  - id: github_mcp
    type: mcpgithubtool
    description: "GitHub repository management - supports issue creation, PR management, file operations"
    mcp_url: "stdio://github_mcp"
    pattern: "intent"
    main_workflow: "main_workflow"

  - id: analytics_tool
    type: mcpanalytics
    description: "Data analysis and reporting - supports trend analysis, metric calculation, report generation"
    mcp_url: "http://analytics-service:8080"
    pattern: "intent"
    main_workflow: "analytics_workflow"

  # Direct tools for simple operations
  - id: filesystem
    type: mcpfilesystem
    description: "Direct file operations"
    local_mode: true
    pattern: "direct"
    methods:
      - read_file: "Read file contents"
      - list_directory: "List directory contents"
      - write_file: "Write content to file"

  - id: calculator
    type: mcpcalculator
    description: "Mathematical operations"
    local_mode: true
    pattern: "direct"
    methods:
      - calculate: "Evaluate mathematical expression"
      - solve_equation: "Solve algebraic equation"

  # Hybrid tool supporting both patterns
  - id: advanced_tool
    type: mcpadvanced
    description: "Advanced data processing - supports both intent-based and direct patterns"
    mcp_url: "http://advanced-service:8080"
    pattern: "hybrid"
    main_workflow: "advanced_workflow"
    methods:
      - get_metrics: "Get current system metrics"
      - export_data: "Export data in specified format"
```
#### **main.py:**
```python
#!/usr/bin/env python3
"""
Enhanced MCP Patterns Example Application
"""
from langswarm.core.config import LangSwarmConfigLoader, WorkflowExecutor

def main():
    # Load configuration
    loader = LangSwarmConfigLoader()
    workflows, agents, tools, brokers = loader.load()

    # Create workflow executor
    executor = WorkflowExecutor(workflows, agents)

    print("🚀 Enhanced MCP Patterns Demo")
    print("=" * 50)

    # Example 1: Intent-based GitHub operation
    print("\n1. Intent-Based Pattern (GitHub)")
    result1 = executor.run_workflow(
        "main_workflow",
        "Create a GitHub issue about the authentication bug that's preventing user logins"
    )
    print(f"Result: {result1}")

    # Example 2: Direct filesystem operation
    print("\n2. Direct Pattern (Filesystem)")
    result2 = executor.run_workflow(
        "main_workflow",
        "Read the contents of /tmp/config.json"
    )
    print(f"Result: {result2}")

    # Example 3: Sequential workflow using both patterns
    print("\n3. Sequential Mixed Patterns")
    result3 = executor.run_workflow(
        "sequential_workflow",
        "Process configuration file and create GitHub issue"
    )
    print(f"Result: {result3}")

    print("\n✅ Demo completed!")

if __name__ == "__main__":
    main()
```
---
### 🎯 **Benefits of Enhanced Patterns**
| Aspect | Intent-Based | Direct | Hybrid |
|--------|-------------|--------|--------|
| **Complexity** | High orchestration | Simple operations | Variable |
| **Agent Knowledge** | High-level descriptions | Method signatures | Both |
| **Flexibility** | Maximum | Limited | Maximum |
| **Performance** | Slower (orchestration) | Faster (direct) | Variable |
| **Use Cases** | GitHub, Analytics | Filesystem, Calculator | Advanced APIs |
### 🔄 **Migration Guide**
**From Legacy Direct Calls:**
```yaml
# OLD (Don't use)
- id: legacy_call
  function: langswarm.core.utils.workflows.functions.mcp_call
  args:
    mcp_url: "local://filesystem"
    task: "read_file"
    params: {"path": "/tmp/file"}

# NEW (Intent-based)
- id: intent_call
  agent: file_agent
  input: "Read the important configuration file"
  # Agent outputs: {"mcp": {"tool": "filesystem", "intent": "read config", "context": "..."}}

# NEW (Direct)
- id: direct_call
  agent: file_agent
  input: "Read /tmp/file using direct method"
  # Agent outputs: {"mcp": {"tool": "filesystem", "method": "read_file", "params": {"path": "/tmp/file"}}}
```
### 📋 **Best Practices**
1. **Choose the Right Pattern:**
   - **Intent-based**: Complex tools requiring orchestration (GitHub, Analytics)
   - **Direct**: Simple tools with clear method APIs (Filesystem, Calculator)
   - **Hybrid**: Advanced tools that benefit from both approaches

2. **Agent Design:**
   - Give agents high-level tool descriptions for intent-based tools
   - Provide method signatures for direct tools
   - Train agents to choose appropriate patterns

3. **Tool Configuration:**
   - Set `pattern: "intent"` for complex tools with workflows
   - Set `pattern: "direct"` for simple tools with clear methods
   - Set `pattern: "hybrid"` for advanced tools supporting both

4. **Workflow Structure:**
   - Let agents drive tool selection through their output
   - Avoid direct `mcp_call` functions in workflows for MCP tools
   - Use sequential steps for multi-tool operations
---
## ⚡ **Local Mode with Enhanced Patterns: Zero-Latency Intelligence**
The combination of `local_mode: true` with enhanced patterns provides **zero-latency tool execution** while maintaining intelligent agent abstraction.
### 🎯 **Performance Revolution**
| Pattern | Local Mode | Remote Mode | Performance Gain |
|---------|------------|-------------|------------------|
| **Intent-Based** | **0ms** | 50-100ms | **1000x faster** |
| **Direct** | **0ms** | 20-50ms | **500x faster** |
| **Hybrid** | **0ms** | 50-100ms | **1000x faster** |
### 🔧 **How It Works**
The enhanced middleware automatically detects `local_mode: true` and uses optimal `local://` URLs:
```python
# Middleware automatically handles local mode
if getattr(handler, 'local_mode', False):
    mcp_url = f"local://{tool_id}"  # Zero-latency direct call
elif hasattr(handler, 'mcp_url'):
    mcp_url = handler.mcp_url  # Remote call
```
### 📋 **Local Mode Configuration Examples**
#### **Intent-Based Local Tool:**
```yaml
# tools.yaml
tools:
  - id: local_analytics
    type: mcpanalytics
    description: "Local data analysis with zero-latency orchestration"
    local_mode: true  # Enable zero-latency execution
    pattern: "intent"
    main_workflow: "analytics_workflow"
```
```yaml
# agents.yaml
agents:
  - id: analytics_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You have access to a local analytics tool (zero-latency).

      Available tool:
      - local_analytics: Data analysis (describe what analysis you want)

      Use intent-based pattern:
      {
        "mcp": {
          "tool": "local_analytics",
          "intent": "analyze sales trends for Q4",
          "context": "Focus on regional performance and seasonal patterns"
        }
      }

      The tool provides instant response with full orchestration.
```
#### **Direct Local Tool:**
```yaml
# tools.yaml
tools:
  - id: filesystem
    type: mcpfilesystem
    description: "Local filesystem operations"
    local_mode: true  # Enable zero-latency execution
    pattern: "direct"
    methods:
      - read_file: "Read file contents"
      - list_directory: "List directory contents"
      - write_file: "Write content to file"
```
```yaml
# agents.yaml
agents:
  - id: file_agent
    type: openai
    model: gpt-4o-mini
    system_prompt: |
      You specialize in local filesystem operations (zero-latency).

      Available tool:
      - filesystem: Local file operations
        Methods: read_file(path), list_directory(path), write_file(path, content)

      Use direct pattern:
      {
        "mcp": {
          "tool": "filesystem",
          "method": "read_file",
          "params": {"path": "/tmp/config.json"}
        }
      }

      Local mode provides instant response times.
```
#### **Hybrid Local Tool:**
```yaml
# tools.yaml
tools:
  - id: local_calculator
    type: mcpcalculator
    description: "Advanced calculator supporting both patterns"
    local_mode: true  # Enable zero-latency execution
    pattern: "hybrid"
    main_workflow: "calculator_workflow"
    methods:
      - calculate: "Simple mathematical expression"
      - convert_units: "Unit conversion"
```
```yaml
# agents.yaml
agents:
  - id: calculator_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You have access to a local calculator (zero-latency).

      Available tool:
      - local_calculator: Mathematical operations

      Use intent-based for complex operations:
      {
        "mcp": {
          "tool": "local_calculator",
          "intent": "solve physics problem with unit conversion",
          "context": "Convert between metric and imperial units"
        }
      }

      Use direct for simple operations:
      {
        "mcp": {
          "tool": "local_calculator",
          "method": "calculate",
          "params": {"expression": "2 + 2 * 3"}
        }
      }
```
### 🔄 **Mixed Local/Remote Workflow:**
```yaml
# workflows.yaml
workflows:
  mixed_performance_workflow:
    - id: high_performance_analysis
      steps:
        # Step 1: Read data file (local, 0ms)
        - id: read_data
          agent: file_agent
          input: "Read the data file /tmp/sales_data.csv"
          output:
            to: analyze

        # Step 2: Analyze data (local intent-based, 0ms)
        - id: analyze
          agent: analytics_agent
          input: |
            Analyze the sales data for trends and patterns.
            Data: ${context.step_outputs.read_data}
          output:
            to: create_issue

        # Step 3: Create GitHub issue (remote, 50ms)
        - id: create_issue
          agent: github_agent
          input: |
            Create a GitHub issue with the analysis results.
            Analysis: ${context.step_outputs.analyze}
          output:
            to: user
```
### 🏗️ **Building Custom Local Tools**
```python
# my_tools/analytics.py
from langswarm.mcp.server_base import BaseMCPToolServer
from pydantic import BaseModel

class AnalysisInput(BaseModel):
    data: str
    analysis_type: str

class AnalysisOutput(BaseModel):
    result: str
    metrics: dict

def analyze_data(data: str, analysis_type: str):
    # Your analysis logic here
    return {
        "result": f"Analysis of type {analysis_type} completed",
        "metrics": {"trend": "upward", "confidence": 0.85}
    }

# Create local MCP server
analytics_server = BaseMCPToolServer(
    name="local_analytics",
    description="Local data analytics tool",
    local_mode=True  # Enable zero-latency mode
)

analytics_server.add_task(
    name="analyze",
    description="Analyze data trends",
    input_model=AnalysisInput,
    output_model=AnalysisOutput,
    handler=analyze_data
)

# Auto-register when imported
app = analytics_server.build_app()
```
### 🚀 **Complete Local Mode Application:**
```python
#!/usr/bin/env python3
"""
Zero-Latency Enhanced Patterns Example
"""
from langswarm.core.config import LangSwarmConfigLoader, WorkflowExecutor

# Import local tools to register them
import langswarm.mcp.tools.filesystem.main  # Registers local filesystem
import my_tools.analytics                   # Registers custom analytics

def main():
    # Load configuration
    loader = LangSwarmConfigLoader()
    workflows, agents, tools, brokers = loader.load()

    # Create executor
    executor = WorkflowExecutor(workflows, agents)

    print("🚀 Zero-Latency Enhanced Patterns Demo")
    print("=" * 50)

    # Example 1: Local direct pattern (0ms)
    print("\n1. Local Direct Pattern (Filesystem)")
    result1 = executor.run_workflow(
        "main_workflow",
        "List the contents of the /tmp directory"
    )
    print(f"Result: {result1}")

    # Example 2: Local intent pattern (0ms)
    print("\n2. Local Intent Pattern (Analytics)")
    result2 = executor.run_workflow(
        "main_workflow",
        "Analyze quarterly sales performance and identify key trends"
    )
    print(f"Result: {result2}")

    # Example 3: Mixed local/remote workflow
    print("\n3. Mixed Performance Workflow")
    result3 = executor.run_workflow(
        "mixed_performance_workflow",
        "Process sales data and create GitHub issue with results"
    )
    print(f"Result: {result3}")

    print("\n✅ Local operations completed with zero latency!")

if __name__ == "__main__":
    main()
```
### 🎯 **Local vs Remote Strategy:**
```yaml
# Development Environment (prioritize speed)
tools:
  - id: filesystem
    local_mode: true  # 0ms for fast iteration
    pattern: "direct"

  - id: analytics
    local_mode: true  # 0ms for rapid testing
    pattern: "intent"

# Production Environment (balance performance and isolation)
tools:
  - id: filesystem
    local_mode: true  # Keep local for performance
    pattern: "direct"

  - id: github
    mcp_url: "stdio://github_mcp"  # External for security
    pattern: "intent"

  - id: database
    mcp_url: "http://db-service:8080"  # External for isolation
    pattern: "hybrid"
```
### 🔄 **Migration from Legacy Local Calls:**
```yaml
# OLD (Legacy direct calls)
- id: legacy_call
  function: langswarm.core.utils.workflows.functions.mcp_call
  args:
    mcp_url: "local://filesystem"
    task: "read_file"
    params: {"path": "/tmp/file"}

# NEW (Enhanced local direct pattern)
- id: enhanced_call
  agent: file_agent
  input: "Read the file /tmp/file"
  # Agent outputs: {"mcp": {"tool": "filesystem", "method": "read_file", "params": {"path": "/tmp/file"}}}
  # Middleware automatically uses local://filesystem for 0ms latency

# NEW (Enhanced local intent pattern)
- id: intent_call
  agent: analytics_agent
  input: "Analyze the performance data in the file"
  # Agent outputs: {"mcp": {"tool": "local_analytics", "intent": "analyze performance", "context": "..."}}
  # Tool workflow handles orchestration with 0ms latency
```
### ✨ **Benefits Summary:**
**🚀 Performance Benefits:**
- Zero latency (0ms vs 50-100ms for HTTP)
- 1000x faster execution for complex operations
- Shared memory space with LangSwarm process
- No container or server setup required

**🧠 Intelligence Benefits:**
- Intent-based: Natural language tool interaction
- Direct: Explicit method calls for simple operations
- Hybrid: Best of both worlds
- No agent implementation knowledge required

**🔧 Combined Benefits:**
- Zero-latency intent-based tool orchestration
- Instant direct method calls
- Scalable from development to production
- Maximum performance with maximum abstraction
**The combination of local mode + enhanced patterns delivers both the highest performance AND the most intelligent tool abstraction possible!** 🎯
---
Raw data
{
"_id": null,
"home_page": "https://github.com/aekdahl/langswarm",
"name": "langswarm",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.8",
"maintainer_email": null,
"keywords": "LLM, multi-agent, langchain, hugginface, openai, MCP, agent, orchestration",
"author": "Alexander Ekdahl",
"author_email": "alexander.ekdahl@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/cc/33/3fa44753f3b60a6e3b5db0309a7ae3a7248a239deb925d8d42588273cecf/langswarm-0.0.51.tar.gz",
"platform": null,
"description": "# \ud83d\ude80 LangSwarm\n\n**LangSwarm** is a comprehensive multi-agent framework that combines intelligent workflows, persistent memory, and zero-latency MCP (Model Context Protocol) tools. Build sophisticated AI systems with YAML workflows, Python agents, and integrated tool orchestration.\n\n## \ud83c\udd95 Latest Updates\n\n### \ud83d\ude80 **Revolutionary Structured JSON Responses** (v0.0.50+)\n- **Breakthrough Design**: Agents can now provide BOTH user responses AND tool calls simultaneously\n- **No More Forced Choice**: Previously agents chose between communication OR tool usage - now they do both\n- **Dual Response Modes**: Integrated (polished final answer) or Streaming (immediate feedback + tool results)\n- **Natural Interactions**: Users see what agents are doing while tools execute\n\n```json\n{\n \"response\": \"I'll check that configuration file for you to analyze its contents\",\n \"mcp\": {\n \"tool\": \"filesystem\",\n \"method\": \"read_file\", \n \"params\": {\"path\": \"/tmp/config.json\"}\n }\n}\n```\n\n### \ud83d\udd25 **Local MCP Mode** - Zero Latency Tools\n- **1000x Faster**: Direct function calls vs HTTP (0ms vs 50-100ms)\n- **Zero Setup**: No containers, no external servers\n- **Full Compatibility**: Works with existing MCP workflows\n\n### \ud83d\udcbe **Enhanced Memory System**\n- **BigQuery Integration**: Analytics-ready conversation storage\n- **Multiple Backends**: SQLite, ChromaDB, Redis, Qdrant, Elasticsearch\n- **Auto-Embeddings**: Semantic search built-in\n\n### \ud83d\udee0\ufe0f **Fixed Dependencies**\n- **Complete Installation**: `pip install langswarm` now installs all dependencies\n- **30+ Libraries**: LangChain, OpenAI, FastAPI, Discord, and more\n- **Ready to Use**: No manual dependency management needed\n\n## \u2728 Key Features\n\n### \ud83e\udde0 **Multi-Agent Intelligence**\n- **Workflow Orchestration**: Define complex agent interactions in YAML\n- **Parallel Execution**: Fan-out/fan-in patterns with async support\n- **Intelligent Tool Selection**: Agents automatically choose the right tools\n- **Memory Integration**: Persistent conversation and context storage\n\n### \ud83d\udd04 **Dual Response Modes**\n- **Streaming Mode**: Show immediate response, then tool results (conversational)\n- **Integrated Mode**: Combine user explanation with tool results (polished)\n- **Transparent AI**: Users see what agents are doing while tools execute\n- **Configurable**: Set `response_mode: \"streaming\"` or `\"integrated\"` per agent\n\n### \ud83d\udd27 **Local MCP Tools (Zero Latency)**\n- **Filesystem Access**: Read files, list directories with `local://filesystem`\n- **GitHub Integration**: Issues, PRs, workflows via `stdio://github_mcp`\n- **Custom Tools**: Build your own MCP tools with BaseMCPToolServer\n- **Mixed Deployment**: Combine local, HTTP, and stdio MCP tools\n\n### \ud83d\udcbe **Persistent Memory**\n- **Multiple Backends**: SQLite, ChromaDB, Redis, Qdrant, Elasticsearch, BigQuery\n- **Conversation History**: Long-term agent memory across sessions\n- **Vector Search**: Semantic retrieval with embedding models\n- **Analytics Ready**: BigQuery integration for large-scale analysis\n\n### \ud83c\udf10 **UI Integrations**\n- **Chat Interfaces**: Discord, Telegram, Slack bots\n- **Web APIs**: FastAPI endpoints with async support\n- **Cloud Ready**: AWS SES, Twilio, Mailgun integrations\n\n---\n\n## \u26a1\ufe0f Quick Start\n\n### Installation\n```bash\npip install langswarm\n```\n\n### Minimal Example\n```python\nfrom 
langswarm.core.config import LangSwarmConfigLoader, WorkflowExecutor\n\n# Load configuration\nloader = LangSwarmConfigLoader()\nworkflows, agents, tools, *_ = loader.load()\n\n# Execute workflow\nexecutor = WorkflowExecutor(workflows, agents)\nresult = executor.run_workflow(\"simple_chat\", \"Hello, world!\")\nprint(result)\n```\n\n> \u2611\ufe0f No complex setup. Just install, define YAML, and run. \n> \ud83d\udca1 **New**: Configure `response_mode: \"streaming\"` for immediate feedback or `\"integrated\"` for polished responses!\n\n---\n\n## \ud83d\udd27 Local MCP Tools\n\nLangSwarm includes a revolutionary **local MCP mode** that provides zero-latency tool execution without containers or external servers.\n\n* True multi-agent logic: parallel execution, loops, retries\n* Named step routing: pass data between agents with precision\n* Async fan-out, sync chaining, and subflow support\n\n### \ud83d\udd0c Bring Your Stack\n\n* Use OpenAI, Claude, Hugging Face, or LangChain agents\n* Embed tools or functions directly as steps\n* Drop in LangChain or LlamaIndex components\n\n### Building Custom MCP Tools\n```python\nfrom langswarm.mcp.server_base import BaseMCPToolServer\nfrom pydantic import BaseModel\n\nclass MyInput(BaseModel):\n message: str\n\nclass MyOutput(BaseModel):\n response: str\n\ndef my_handler(message: str):\n return {\"response\": f\"Processed: {message}\"}\n\n# Create local MCP server\nserver = BaseMCPToolServer(\n name=\"my_tool\",\n description=\"My custom tool\",\n local_mode=True # Enable zero-latency mode\n)\n\nserver.add_task(\n name=\"process_message\",\n description=\"Process a message\",\n input_model=MyInput,\n output_model=MyOutput,\n handler=my_handler\n)\n\n# Tool is ready for use with local://my_tool\n```\n\n### MCP Performance Comparison\n\n| Mode | Latency | Setup | Use Case |\n|------|---------|-------|----------|\n| **Local Mode** | **0ms** | Zero setup | Development, simple tools |\n| HTTP Mode | 50-100ms | Docker/server | Production, complex tools |\n| Stdio Mode | 20-50ms | External process | GitHub, complex APIs |\n\n---\n\n## \ud83d\udcbe Memory & Persistence\n\n### Supported Memory Backends\n\n```yaml\n# agents.yaml\nagents:\n - id: memory_agent\n type: openai\n model: gpt-4o\n memory_adapter:\n type: bigquery # or sqlite, chromadb, redis, qdrant\n config:\n project_id: \"my-project\"\n dataset_id: \"langswarm_memory\"\n table_id: \"agent_conversations\"\n```\n\n#### BigQuery (Analytics Ready)\n```python\n# Automatic conversation analytics\nfrom langswarm.memory.adapters.langswarm import BigQueryAdapter\n\nadapter = BigQueryAdapter(\n project_id=\"my-project\",\n dataset_id=\"ai_conversations\",\n table_id=\"agent_memory\"\n)\n\n# Stores conversations with automatic timestamp, metadata, embeddings\n```\n\n#### ChromaDB (Vector Search)\n```python\nfrom langswarm.memory.adapters.langswarm import ChromaDBAdapter\n\nadapter = ChromaDBAdapter(\n persist_directory=\"./memory\",\n collection_name=\"agent_memory\"\n)\n# Automatic semantic search and retrieval\n```\n\n### Memory Configuration\n```yaml\n# retrievers.yaml\nretrievers:\n semantic_search:\n type: langswarm\n config:\n adapter_type: chromadb\n top_k: 5\n similarity_threshold: 0.7\n```\n\n---\n\n## \ud83e\udd16 Agent Types & Configuration\n\n### OpenAI Agents\n```yaml\nagents:\n - id: gpt_agent\n type: openai\n model: gpt-4o\n temperature: 0.7\n system_prompt: \"You are a helpful assistant\"\n memory_adapter:\n type: sqlite\n config:\n db_path: \"./memory.db\"\n```\n\n### Structured JSON Response 
Agents\n```yaml\nagents:\n # Streaming Mode: Immediate response, then tool results\n - id: streaming_assistant\n type: langchain-openai\n model: gpt-4o-mini-2024-07-18\n response_mode: \"streaming\" # Key setting for immediate feedback\n system_prompt: |\n Always respond with immediate feedback before using tools:\n {\n \"response\": \"I'll help you with that right now. Let me check...\",\n \"mcp\": {\"tool\": \"filesystem\", \"method\": \"read_file\", \"params\": {...}}\n }\n tools: [filesystem]\n\n # Integrated Mode: Polished final response (default)\n - id: integrated_assistant \n type: langchain-openai\n model: gpt-4o-mini-2024-07-18\n response_mode: \"integrated\" # Combines explanation with tool results\n system_prompt: |\n Provide both explanations and tool calls:\n {\n \"response\": \"I'll analyze that configuration file for you\",\n \"mcp\": {\"tool\": \"filesystem\", \"method\": \"read_file\", \"params\": {...}}\n }\n tools: [filesystem]\n```\n\n### LangChain Integration\n```yaml\nagents:\n - id: langchain_agent\n type: langchain-openai\n model: gpt-4o-mini\n memory_adapter:\n type: chromadb\n```\n\n### Custom Agents\n```python\nfrom langswarm.core.base.bot import Bot\n\nclass CustomAgent(Bot):\n def chat(self, message: str) -> str:\n # Your custom logic\n return \"Custom response\"\n\n# Register in config\nloader.register_agent_class(\"custom\", CustomAgent)\n```\n\n---\n\n## \ud83d\udd04 Response Mode Examples\n\n### Streaming Mode User Experience\n**User:** \"Check my config file\"\n\n**Agent Response (Immediate):**\n```\n\"I'll check that configuration file for you to analyze its contents\"\n```\n\n**Tool Results (After execution):**\n```\n[Tool executed successfully]\n\nFound your config.json file. It contains:\n- Database connection settings\n- API endpoint configurations \n- Authentication tokens\n```\n\n### Integrated Mode User Experience \n**User:** \"Check my config file\"\n\n**Agent Response (Final):**\n```\n\"I analyzed your configuration file and found it contains database connection \nsettings for PostgreSQL on localhost:5432, API endpoints for your production \nenvironment, and properly formatted authentication tokens. 
The configuration \nappears valid and ready for deployment.\"\n```\n\n---\n\n## \ud83d\udd04 Workflow Patterns\n\n### Sequential Processing\n```yaml\nworkflows:\n main_workflow:\n - id: analyze_document\n steps:\n - id: extract_text\n agent: extractor\n input: ${context.user_input}\n output: {to: summarize}\n \n - id: summarize\n agent: summarizer\n input: ${context.step_outputs.extract_text}\n output: {to: user}\n```\n\n### Parallel Fan-out\n```yaml\nworkflows:\n main_workflow:\n - id: parallel_analysis\n steps:\n - id: sentiment_analysis\n agent: sentiment_agent\n fan_key: \"analysis\"\n input: ${context.user_input}\n \n - id: topic_extraction\n agent: topic_agent\n fan_key: \"analysis\"\n input: ${context.user_input}\n \n - id: combine_results\n agent: combiner\n fan_key: \"analysis\"\n is_fan_in: true\n args: {steps: [\"sentiment_analysis\", \"topic_extraction\"]}\n```\n\n### Tool Integration (no_mcp pattern)\n```yaml\nworkflows:\n main_workflow:\n - id: agent_tool_use\n steps:\n - id: agent_decision\n agent: universal_agent\n input: ${context.user_input}\n output:\n to: user\n```\n\n---\n\n## \ud83c\udf10 UI & Integration Examples\n\n### Discord Bot\n```python\nfrom langswarm.ui.discord_gateway import DiscordGateway\n\ngateway = DiscordGateway(\n token=\"your_token\",\n workflow_executor=executor\n)\ngateway.run()\n```\n\n### FastAPI Web Interface\n```python\nfrom langswarm.ui.api import create_api_app\n\napp = create_api_app(executor)\n# uvicorn main:app --host 0.0.0.0 --port 8000\n```\n\n### Telegram Bot\n```python\nfrom langswarm.ui.telegram_gateway import TelegramGateway\n\ngateway = TelegramGateway(\n token=\"your_bot_token\",\n workflow_executor=executor\n)\ngateway.start_polling()\n```\n\n---\n\n## \ud83d\udcca Monitoring & Analytics\n\n### Workflow Intelligence\n```yaml\n# workflows.yaml\nworkflows:\n main_workflow:\n - id: monitored_workflow\n settings:\n intelligence:\n track_performance: true\n log_level: \"info\"\n analytics_backend: \"bigquery\"\n```\n\n### Memory Analytics\n```sql\n-- Query conversation patterns in BigQuery\nSELECT \n agent_id,\n COUNT(*) as conversations,\n AVG(LENGTH(content)) as avg_message_length,\n DATE(created_at) as date\nFROM `project.dataset.agent_conversations`\nGROUP BY agent_id, date\nORDER BY date DESC\n```\n\n---\n\n## \ud83d\ude80 Deployment\n\n### Local Development\n```bash\n# Clone and install\ngit clone https://github.com/your-org/langswarm.git\ncd langswarm\npip install -e .\n\n# Run examples\npython examples/simple_chat.py\n```\n\n### Docker\n```dockerfile\nFROM python:3.11-slim\nCOPY . 
/app\nWORKDIR /app\nRUN pip install -e .\nCMD [\"python\", \"main.py\"]\n```\n\n### Cloud Run\n```yaml\n# cloudbuild.yaml\nsteps:\n - name: 'gcr.io/cloud-builders/docker'\n args: ['build', '-t', 'gcr.io/$PROJECT_ID/langswarm', '.']\n - name: 'gcr.io/cloud-builders/docker'\n args: ['push', 'gcr.io/$PROJECT_ID/langswarm']\n - name: 'gcr.io/cloud-builders/gcloud'\n args: ['run', 'deploy', 'langswarm', '--image', 'gcr.io/$PROJECT_ID/langswarm']\n```\n\n---\n\n## \ud83d\udd27 Advanced Configuration\n\n### Environment Variables\n```bash\n# API Keys\nexport OPENAI_API_KEY=\"your_key\"\nexport ANTHROPIC_API_KEY=\"your_key\"\n\n# Memory Backends\nexport BIGQUERY_PROJECT_ID=\"your_project\"\nexport REDIS_URL=\"redis://localhost:6379\"\nexport QDRANT_URL=\"http://localhost:6333\"\n\n# MCP Tools\nexport GITHUB_TOKEN=\"your_github_token\"\n```\n\n### Configuration Structure\n```\nyour_project/\n\u251c\u2500\u2500 workflows.yaml # Workflow definitions\n\u251c\u2500\u2500 agents.yaml # Agent configurations\n\u251c\u2500\u2500 tools.yaml # Tool registrations\n\u251c\u2500\u2500 retrievers.yaml # Memory configurations\n\u251c\u2500\u2500 secrets.yaml # API keys (gitignored)\n\u2514\u2500\u2500 main.py # Your application\n```\n\n---\n\n## \ud83e\uddea Testing\n\n```bash\n# Run all tests\npytest tests/\n\n# Test specific components\npytest tests/core/test_workflow_executor.py\npytest tests/mcp/test_local_mode.py\npytest tests/memory/test_adapters.py\n\n# Test with coverage\npytest --cov=langswarm tests/\n```\n\n---\n\n## \ud83e\udd1d Contributing\n\n1. Fork the repository\n2. Create a feature branch: `git checkout -b feature/amazing-feature`\n3. Commit changes: `git commit -m 'Add amazing feature'`\n4. Push to branch: `git push origin feature/amazing-feature`\n5. Open a Pull Request\n\n### Development Setup\n```bash\ngit clone https://github.com/your-org/langswarm.git\ncd langswarm\npip install -e \".[dev]\"\npre-commit install\n```\n\n---\n\n## \ud83d\udcc8 Performance\n\n### Local MCP Benchmarks\n- **Local Mode**: 0ms latency, 1000+ ops/sec\n- **HTTP Mode**: 50-100ms latency, 50-100 ops/sec\n- **Stdio Mode**: 20-50ms latency, 100-200 ops/sec\n\n### Memory Performance\n- **SQLite**: <1ms query time, perfect for development\n- **ChromaDB**: <10ms semantic search, great for RAG\n- **BigQuery**: Batch analytics, unlimited scale\n- **Redis**: <1ms cache access, production ready\n\n---\n\n## \ud83d\udcc4 License\n\nMIT License - see [LICENSE](LICENSE) file for details.\n\n---\n\n## \ud83d\ude4b\u200d\u2642\ufe0f Support\n\n- \ud83d\udcd6 **Documentation**: Coming soon\n- \ud83d\udc1b **Issues**: [GitHub Issues](https://github.com/your-org/langswarm/issues)\n- \ud83d\udcac **Discussions**: [GitHub Discussions](https://github.com/your-org/langswarm/discussions)\n- \ud83d\udce7 **Email**: support@langswarm.dev\n\n---\n\n**Built with \u2764\ufe0f for the AI community**\n\n*LangSwarm: Where agents collaborate, tools integrate, and intelligence scales.*\n\n---\n\n## \ud83d\ude80 Registering and Using MCP Tools (Filesystem, GitHub, etc.)\n\nLangSwarm supports both local and remote MCP tools. **The recommended pattern is agent-driven invocation:**\n- The agent outputs a tool id and arguments in JSON.\n- The workflow engine routes the call to the correct MCP tool (local or remote) using the tool's id and configuration.\n- **Do not use direct mcp_call steps for MCP tools in your workflow YAML.**\n\n### 1. 
**Register MCP Tools in `tools.yaml`**\n\n- **type** must start with `mcp` (e.g., `mcpfilesystem`, `mcpgithubtool`).\n- **local_mode: true** for local MCP tools.\n- **mcp_url** for remote MCP tools (e.g., `stdio://github_mcp`).\n- **id** is the logical name the agent will use.\n\n**Example:**\n```yaml\ntools:\n - id: filesystem\n type: mcpfilesystem\n description: \"Local filesystem MCP tool\"\n local_mode: true\n\n - id: github_mcp\n type: mcpgithubtool\n description: \"Official GitHub MCP server\"\n mcp_url: \"stdio://github_mcp\"\n```\n\n| Field | Required? | Example Value | Notes |\n|------------|-----------|----------------------|---------------------------------------|\n| id | Yes | filesystem | Used by agent and workflow |\n| type | Yes | mcpfilesystem | Must start with `mcp` |\n| description| Optional | ... | Human-readable |\n| local_mode | Optional | true | For local MCP tools |\n| mcp_url | Optional | stdio://github_mcp | For remote MCP tools |\n\n**Best Practices:**\n- Use clear, descriptive `id` and `type` values.\n- Only use `metadata` for direct Python function tools (not MCP tools).\n- For remote MCP tools, specify `mcp_url` (and optionally `image`/`env` for deployment).\n- Agents should be prompted to refer to tools by their `id`.\n- **Do not use `local://` in new configs; use `local_mode: true` instead.**\n\n---\n\n### 2. **Configure Your Agent (agents.yaml)**\n\nPrompt the agent to use the tool by its `id`:\n```yaml\nagents:\n - id: universal_agent\n type: openai\n model: gpt-4o\n system_prompt: |\n You can use these tools:\n - filesystem: List/read files (needs: path)\n - github_mcp: GitHub operations (needs: operation, repo, title, body, etc.)\n\n Always return JSON:\n {\"tool\": \"filesystem\", \"args\": {\"path\": \"/tmp\"}}\n {\"tool\": \"github_mcp\", \"args\": {\"operation\": \"create_issue\", \"repo\": \"octocat/Hello-World\", \"title\": \"Bug\", \"body\": \"There is a bug.\"}}\n```\n\n---\n\n### 3. **Write Your Workflow (workflows.yaml)**\n\nLet the agent output trigger the tool call (no direct mcp_call step!):\n```yaml\nworkflows:\n main_workflow:\n - id: agent_tool_use\n steps:\n - id: agent_decision\n agent: universal_agent\n input: ${context.user_input}\n output:\n to: user\n```\n- The agent's output (e.g., `{ \"tool\": \"filesystem\", \"args\": { \"path\": \"/tmp\" } }`) is parsed by the workflow engine, which looks up the tool by `id` and routes the call.\n\n---\n\n### 4. **Legacy/Low-Level Pattern (Not Recommended for MCP Tools)**\n\nIf you see examples like this:\n```yaml\nfunction: langswarm.core.utils.workflows.functions.mcp_call\nargs:\n mcp_url: \"local://filesystem\"\n task: \"list_directory\"\n params: {\"path\": \"/tmp\"}\n```\n**This is a low-level/legacy pattern and should not be used for MCP tools.**\n\n---\n\n### 5. **How It Works**\n\n1. **Agent** outputs a tool id and arguments in JSON.\n2. **Workflow engine** looks up the tool by `id` in `tools.yaml` and routes the call (local or remote, as configured).\n3. **Parameter values** are provided by the agent at runtime, not hardcoded in `tools.yaml`.\n4. **No need to use `local://` or direct mcp_call steps.**\n\n---\n\n### 6. **Summary Table: MCP Tool Registration**\n\n| Field | Required? | Example Value | Notes |\n|------------|-----------|----------------------|---------------------------------------|\n| id | Yes | filesystem | Used by agent and workflow |\n| type | Yes | mcpfilesystem | Must start with `mcp` |\n| description| Optional | ... 
---

### 2. **Configure Your Agent (`agents.yaml`)**

Prompt the agent to use the tool by its `id`:
```yaml
agents:
  - id: universal_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You can use these tools:
      - filesystem: List/read files (needs: path)
      - github_mcp: GitHub operations (needs: operation, repo, title, body, etc.)

      Always return JSON:
      {"tool": "filesystem", "args": {"path": "/tmp"}}
      {"tool": "github_mcp", "args": {"operation": "create_issue", "repo": "octocat/Hello-World", "title": "Bug", "body": "There is a bug."}}
```

---

### 3. **Write Your Workflow (`workflows.yaml`)**

Let the agent's output trigger the tool call (no direct `mcp_call` step!):
```yaml
workflows:
  main_workflow:
    - id: agent_tool_use
      steps:
        - id: agent_decision
          agent: universal_agent
          input: ${context.user_input}
          output:
            to: user
```
- The agent's output (e.g., `{"tool": "filesystem", "args": {"path": "/tmp"}}`) is parsed by the workflow engine, which looks up the tool by `id` and routes the call.

---

### 4. **Legacy/Low-Level Pattern (Not Recommended for MCP Tools)**

If you see examples like this:
```yaml
function: langswarm.core.utils.workflows.functions.mcp_call
args:
  mcp_url: "local://filesystem"
  task: "list_directory"
  params: {"path": "/tmp"}
```
**This is a low-level/legacy pattern and should not be used for MCP tools.**

---

### 5. **How It Works**

1. The **agent** outputs a tool id and arguments in JSON.
2. The **workflow engine** looks up the tool by `id` in `tools.yaml` and routes the call (local or remote, as configured).
3. **Parameter values** are provided by the agent at runtime, not hardcoded in `tools.yaml`.
4. There is **no need for `local://` URLs or direct `mcp_call` steps.**
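Conceptually, step 2 above reduces to a registry lookup followed by a routing decision. The sketch below is a simplified, hypothetical model of that decision; `call_local_tool` and `call_remote_tool` are stand-ins, and the real engine adds schema handling, retries, and error reporting.

```python
def call_local_tool(tool_id: str, args: dict) -> dict:
    # Stand-in for a zero-latency in-process call.
    return {"ok": True, "via": f"local://{tool_id}", "args": args}

def call_remote_tool(mcp_url: str, args: dict) -> dict:
    # Stand-in for an HTTP/stdio round trip to a remote MCP server.
    return {"ok": True, "via": mcp_url, "args": args}

def route_tool_call(tool_id: str, args: dict, registry: dict) -> dict:
    """Hypothetical sketch of step 2: look up the tool by id, then route."""
    entry = registry[tool_id]
    if entry.get("local_mode"):
        return call_local_tool(tool_id, args)
    return call_remote_tool(entry["mcp_url"], args)

registry = {
    "filesystem": {"type": "mcpfilesystem", "local_mode": True},
    "github_mcp": {"type": "mcpgithubtool", "mcp_url": "stdio://github_mcp"},
}
print(route_tool_call("filesystem", {"path": "/tmp"}, registry))
```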
---

### 6. **Summary Table: MCP Tool Registration**

| Field       | Required? | Example Value        | Notes                      |
|-------------|-----------|----------------------|----------------------------|
| id          | Yes       | filesystem           | Used by agent and workflow |
| type        | Yes       | mcpfilesystem        | Must start with `mcp`      |
| description | Optional  | ...                  | Human-readable             |
| local_mode  | Optional  | true                 | For local MCP tools        |
| mcp_url     | Optional  | stdio://github_mcp   | For remote MCP tools       |

---

### 7. **Best Practices**
- Register all MCP tools with `type` starting with `mcp`.
- Use `local_mode: true` for local tools, `mcp_url` for remote tools.
- Prompt agents to refer to tools by their `id`.
- Do not use `local://` in new configs.
- Do not use direct `mcp_call` steps for MCP tools in workflows.

---

## 🧠 Enhanced MCP Patterns: Intent-Based vs Direct

LangSwarm supports two powerful patterns for MCP tool invocation, solving the duplication problem where agents needed deep implementation knowledge.

### 🎯 **The Problem We Solved**

**Before (problematic):**
```json
{"mcp": {"tool": "filesystem", "method": "read_file", "params": {"path": "/tmp/file.txt"}}}
```
❌ Agents needed exact method names and parameter structures
❌ Duplication between agent knowledge and tool implementation
❌ No abstraction - agents couldn't focus on intent

**After (enhanced):**
```json
{"mcp": {"tool": "github_mcp", "intent": "create issue about bug", "context": "auth failing"}}
```
✅ Agents express natural-language intent
✅ Tools handle implementation details
✅ True separation of concerns

---

### 🔄 **Pattern 1: Intent-Based (Recommended for Complex Tools)**

Agents provide high-level intent; tool workflows handle orchestration.

#### **Tools Configuration (tools.yaml):**
```yaml
tools:
  # Intent-based tool with orchestration workflow
  - id: github_mcp
    type: mcpgithubtool
    description: "GitHub repository management - supports issue creation, PR management, file operations"
    mcp_url: "stdio://github_mcp"
    pattern: "intent"
    main_workflow: "main_workflow"

  # Analytics tool supporting complex operations
  - id: analytics_tool
    type: mcpanalytics
    description: "Data analysis and reporting - supports trend analysis, metric calculation, report generation"
    mcp_url: "http://analytics-service:8080"
    pattern: "intent"
    main_workflow: "analytics_workflow"
```

#### **Agent Configuration (agents.yaml):**
```yaml
agents:
  - id: intent_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You are an intelligent assistant with access to intent-based tools.

      Available tools:
      - github_mcp: GitHub repository management (describe what you want to do)
      - analytics_tool: Data analysis and reporting (describe your analysis needs)

      For complex operations, use the intent-based pattern:
      {
        "mcp": {
          "tool": "github_mcp",
          "intent": "create an issue about authentication bug",
          "context": "Users can't log in after the latest security update - critical priority"
        }
      }

      The tool workflow will handle method selection, parameter building, and execution.
```

#### **Workflow Configuration (workflows.yaml):**
```yaml
workflows:
  main_workflow:
    - id: intent_based_workflow
      steps:
        - id: agent_intent
          agent: intent_agent
          input: ${context.user_input}
          output:
            to: user
```

#### **Tool Workflow (langswarm/mcp/tools/github_mcp/workflows.yaml):**
```yaml
workflows:
  main_workflow:
    - id: use_github_mcp_tool
      description: Intent-based GitHub tool orchestration
      inputs:
        - user_input

      steps:
        # 1) Interpret intent and choose the appropriate GitHub method
        - id: choose_tool
          agent: github_action_decider
          input:
            user_query: ${context.user_input}
            available_tools:
              - name: create_issue
                description: Create a new issue in a repository
              - name: list_repositories
                description: List repositories for a user or organization
              - name: get_file_contents
                description: Read the contents of a file in a repository
              # ... more tools
          output:
            to: fetch_schema

        # 2) Get the schema for the selected method
        - id: fetch_schema
          function: langswarm.core.utils.workflows.functions.mcp_fetch_schema
          args:
            mcp_url: "stdio://github_mcp"
            mode: stdio
          output:
            to: build_input

        # 3) Build specific parameters from intent + schema
        - id: build_input
          agent: github_input_builder
          input:
            user_query: ${context.user_input}
            schema: ${context.step_outputs.fetch_schema}
          output:
            to: call_tool

        # 4) Execute the MCP call
        - id: call_tool
          function: langswarm.core.utils.workflows.functions.mcp_call
          args:
            mcp_url: "stdio://github_mcp"
            mode: stdio
            payload: ${context.step_outputs.build_input}
          output:
            to: summarize

        # 5) Format results for the user
        - id: summarize
          agent: summarizer
          input: ${context.step_outputs.call_tool}
          output:
            to: user
```
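The hard part of the intent pattern is turning free-form intent into one concrete method call; that is exactly what the `choose_tool` and `build_input` steps delegate to agents. As a toy illustration of the shape of that decision only (the real selection is agent-driven against the fetched schema, not keyword matching):

```python
def resolve_intent(intent: str) -> str:
    """Toy intent -> method mapping. The real workflow delegates this
    decision to the github_action_decider agent plus the fetched schema;
    this sketch only makes the problem concrete."""
    intent = intent.lower()
    if "issue" in intent:
        return "create_issue"
    if "file" in intent or "read" in intent:
        return "get_file_contents"
    if "repo" in intent or "list" in intent:
        return "list_repositories"
    raise ValueError(f"no method matches intent: {intent!r}")

assert resolve_intent("create an issue about authentication bug") == "create_issue"
```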
query\"\n```\n\n#### **Agent Configuration (agents.yaml):**\n```yaml\nagents:\n - id: hybrid_agent\n type: openai\n model: gpt-4o\n system_prompt: |\n You have access to a hybrid tool that supports both patterns.\n \n Available tools:\n - advanced_tool: Data processing (both intent-based and direct)\n \n Use intent-based for complex operations:\n {\n \"mcp\": {\n \"tool\": \"advanced_tool\",\n \"intent\": \"analyze quarterly sales trends and generate report\",\n \"context\": \"Focus on Q3-Q4 comparison with regional breakdown\"\n }\n }\n \n Use direct for simple operations:\n {\n \"mcp\": {\n \"tool\": \"advanced_tool\",\n \"method\": \"get_metrics\", \n \"params\": {\"metric_type\": \"cpu_usage\"}\n }\n }\n \n Choose the appropriate pattern based on operation complexity.\n```\n\n---\n\n### \ud83d\udccb **Complete YAML Example: Mixed Patterns**\n\n#### **Full Project Structure:**\n```\nmy_project/\n\u251c\u2500\u2500 workflows.yaml # Main workflow definitions\n\u251c\u2500\u2500 agents.yaml # Agent configurations \n\u251c\u2500\u2500 tools.yaml # Tool registrations\n\u2514\u2500\u2500 main.py # Application entry point\n```\n\n#### **workflows.yaml:**\n```yaml\nworkflows:\n # Main workflow supporting both patterns\n main_workflow:\n - id: mixed_patterns_workflow\n steps:\n - id: intelligent_agent\n agent: mixed_pattern_agent\n input: ${context.user_input}\n output:\n to: user\n\n # Example workflow demonstrating sequential tool use\n sequential_workflow:\n - id: file_then_github\n steps:\n # Step 1: Read local file (direct pattern)\n - id: read_config\n agent: file_agent\n input: \"Read the configuration file /tmp/app.conf\"\n output:\n to: create_issue\n \n # Step 2: Create GitHub issue based on file content (intent pattern) \n - id: create_issue\n agent: github_agent\n input: |\n Create a GitHub issue about configuration problems.\n Configuration content: ${context.step_outputs.read_config}\n output:\n to: user\n```\n\n#### **agents.yaml:**\n```yaml\nagents:\n # Agent that can use both patterns intelligently\n - id: mixed_pattern_agent\n type: openai\n model: gpt-4o\n system_prompt: |\n You are an intelligent assistant with access to both intent-based and direct tools.\n \n **Intent-Based Tools** (describe what you want to do):\n - github_mcp: GitHub repository management\n - analytics_tool: Data analysis and reporting\n \n **Direct Tools** (specify method and parameters):\n - filesystem: File operations\n Methods: read_file(path), list_directory(path)\n - calculator: Mathematical operations\n Methods: calculate(expression)\n \n **Usage Examples:**\n \n Intent-based:\n {\n \"mcp\": {\n \"tool\": \"github_mcp\",\n \"intent\": \"create issue about performance problem\",\n \"context\": \"API response times increased by 50% after deployment\"\n }\n }\n \n Direct:\n {\n \"mcp\": {\n \"tool\": \"filesystem\",\n \"method\": \"read_file\", \n \"params\": {\"path\": \"/tmp/config.json\"}\n }\n }\n \n Choose the appropriate pattern based on complexity:\n - Use intent-based for complex operations requiring orchestration\n - Use direct for simple, well-defined method calls\n\n # Specialized agent for file operations\n - id: file_agent\n type: openai\n model: gpt-4o-mini\n system_prompt: |\n You specialize in file operations using direct tool calls.\n \n Available tool:\n - filesystem: read_file(path), list_directory(path)\n \n Always return:\n {\n \"mcp\": {\n \"tool\": \"filesystem\",\n \"method\": \"read_file\",\n \"params\": {\"path\": \"/path/to/file\"}\n }\n }\n\n # Specialized agent for GitHub 
---

### 🔄 **Pattern 3: Hybrid (Both Patterns Supported)**

Advanced tools that support both intent-based and direct patterns.

#### **Tools Configuration (tools.yaml):**
```yaml
tools:
  # Hybrid tool supporting both patterns
  - id: advanced_tool
    type: mcpadvanced
    description: "Advanced data processing tool"
    mcp_url: "http://advanced-service:8080"
    pattern: "hybrid"
    main_workflow: "advanced_workflow"
    methods:
      - get_metrics: "Get current system metrics"
      - export_data: "Export data in specified format"
      - simple_query: "Execute simple database query"
```

#### **Agent Configuration (agents.yaml):**
```yaml
agents:
  - id: hybrid_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You have access to a hybrid tool that supports both patterns.

      Available tools:
      - advanced_tool: Data processing (both intent-based and direct)

      Use intent-based for complex operations:
      {
        "mcp": {
          "tool": "advanced_tool",
          "intent": "analyze quarterly sales trends and generate report",
          "context": "Focus on Q3-Q4 comparison with regional breakdown"
        }
      }

      Use direct for simple operations:
      {
        "mcp": {
          "tool": "advanced_tool",
          "method": "get_metrics",
          "params": {"metric_type": "cpu_usage"}
        }
      }

      Choose the appropriate pattern based on operation complexity.
```
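On the receiving side, a hybrid tool only has to branch on which keys the payload carries: `intent` routes to the orchestration workflow, `method` to a direct call. A minimal, hypothetical handler:

```python
def handle_hybrid_payload(mcp: dict) -> dict:
    """Branch on payload shape: intent-based vs direct (illustrative only)."""
    if "intent" in mcp:
        # Intent pattern: hand off to the tool's orchestration workflow.
        return {"route": "workflow", "intent": mcp["intent"],
                "context": mcp.get("context", "")}
    if "method" in mcp:
        # Direct pattern: call the named method with the given params.
        return {"route": "method", "method": mcp["method"],
                "params": mcp.get("params", {})}
    raise ValueError("payload must carry either 'intent' or 'method'")

print(handle_hybrid_payload({"tool": "advanced_tool", "method": "get_metrics",
                             "params": {"metric_type": "cpu_usage"}}))
```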
---

### 📋 **Complete YAML Example: Mixed Patterns**

#### **Full Project Structure:**
```
my_project/
├── workflows.yaml   # Main workflow definitions
├── agents.yaml      # Agent configurations
├── tools.yaml       # Tool registrations
└── main.py          # Application entry point
```

#### **workflows.yaml:**
```yaml
workflows:
  # Main workflow supporting both patterns
  main_workflow:
    - id: mixed_patterns_workflow
      steps:
        - id: intelligent_agent
          agent: mixed_pattern_agent
          input: ${context.user_input}
          output:
            to: user

  # Example workflow demonstrating sequential tool use
  sequential_workflow:
    - id: file_then_github
      steps:
        # Step 1: Read local file (direct pattern)
        - id: read_config
          agent: file_agent
          input: "Read the configuration file /tmp/app.conf"
          output:
            to: create_issue

        # Step 2: Create GitHub issue based on file content (intent pattern)
        - id: create_issue
          agent: github_agent
          input: |
            Create a GitHub issue about configuration problems.
            Configuration content: ${context.step_outputs.read_config}
          output:
            to: user
```

#### **agents.yaml:**
```yaml
agents:
  # Agent that can use both patterns intelligently
  - id: mixed_pattern_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You are an intelligent assistant with access to both intent-based and direct tools.

      **Intent-Based Tools** (describe what you want to do):
      - github_mcp: GitHub repository management
      - analytics_tool: Data analysis and reporting

      **Direct Tools** (specify method and parameters):
      - filesystem: File operations
        Methods: read_file(path), list_directory(path)
      - calculator: Mathematical operations
        Methods: calculate(expression)

      **Usage Examples:**

      Intent-based:
      {
        "mcp": {
          "tool": "github_mcp",
          "intent": "create issue about performance problem",
          "context": "API response times increased by 50% after deployment"
        }
      }

      Direct:
      {
        "mcp": {
          "tool": "filesystem",
          "method": "read_file",
          "params": {"path": "/tmp/config.json"}
        }
      }

      Choose the appropriate pattern based on complexity:
      - Use intent-based for complex operations requiring orchestration
      - Use direct for simple, well-defined method calls

  # Specialized agent for file operations
  - id: file_agent
    type: openai
    model: gpt-4o-mini
    system_prompt: |
      You specialize in file operations using direct tool calls.

      Available tool:
      - filesystem: read_file(path), list_directory(path)

      Always return:
      {
        "mcp": {
          "tool": "filesystem",
          "method": "read_file",
          "params": {"path": "/path/to/file"}
        }
      }

  # Specialized agent for GitHub operations
  - id: github_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You specialize in GitHub operations using intent-based patterns.

      Available tool:
      - github_mcp: GitHub repository management

      Always return:
      {
        "mcp": {
          "tool": "github_mcp",
          "intent": "describe what you want to do",
          "context": "provide relevant context and details"
        }
      }
```

#### **tools.yaml:**
```yaml
tools:
  # Intent-based tools with orchestration workflows
  - id: github_mcp
    type: mcpgithubtool
    description: "GitHub repository management - supports issue creation, PR management, file operations"
    mcp_url: "stdio://github_mcp"
    pattern: "intent"
    main_workflow: "main_workflow"

  - id: analytics_tool
    type: mcpanalytics
    description: "Data analysis and reporting - supports trend analysis, metric calculation, report generation"
    mcp_url: "http://analytics-service:8080"
    pattern: "intent"
    main_workflow: "analytics_workflow"

  # Direct tools for simple operations
  - id: filesystem
    type: mcpfilesystem
    description: "Direct file operations"
    local_mode: true
    pattern: "direct"
    methods:
      - read_file: "Read file contents"
      - list_directory: "List directory contents"
      - write_file: "Write content to file"

  - id: calculator
    type: mcpcalculator
    description: "Mathematical operations"
    local_mode: true
    pattern: "direct"
    methods:
      - calculate: "Evaluate mathematical expression"
      - solve_equation: "Solve algebraic equation"

  # Hybrid tool supporting both patterns
  - id: advanced_tool
    type: mcpadvanced
    description: "Advanced data processing - supports both intent-based and direct patterns"
    mcp_url: "http://advanced-service:8080"
    pattern: "hybrid"
    main_workflow: "advanced_workflow"
    methods:
      - get_metrics: "Get current system metrics"
      - export_data: "Export data in specified format"
```

#### **main.py:**
```python
#!/usr/bin/env python3
"""
Enhanced MCP Patterns Example Application
"""

from langswarm.core.config import LangSwarmConfigLoader, WorkflowExecutor

def main():
    # Load configuration
    loader = LangSwarmConfigLoader()
    workflows, agents, tools, brokers = loader.load()

    # Create workflow executor
    executor = WorkflowExecutor(workflows, agents)

    print("🚀 Enhanced MCP Patterns Demo")
    print("=" * 50)

    # Example 1: Intent-based GitHub operation
    print("\n1. Intent-Based Pattern (GitHub)")
    result1 = executor.run_workflow(
        "main_workflow",
        "Create a GitHub issue about the authentication bug that's preventing user logins"
    )
    print(f"Result: {result1}")

    # Example 2: Direct filesystem operation
    print("\n2. Direct Pattern (Filesystem)")
    result2 = executor.run_workflow(
        "main_workflow",
        "Read the contents of /tmp/config.json"
    )
    print(f"Result: {result2}")

    # Example 3: Sequential workflow using both patterns
    print("\n3. Sequential Mixed Patterns")
    result3 = executor.run_workflow(
        "sequential_workflow",
        "Process configuration file and create GitHub issue"
    )
    print(f"Result: {result3}")

    print("\n✅ Demo completed!")

if __name__ == "__main__":
    main()
```
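To keep a setup like this honest as the YAML evolves, the same workflows can be driven from a pytest smoke test, mirroring the project's own `pytest tests/` convention. This sketch assumes the configuration files above sit next to the test and that the required API keys are configured:

```python
# test_patterns.py -- minimal smoke test for the demo workflows (pytest).
import pytest

from langswarm.core.config import LangSwarmConfigLoader, WorkflowExecutor

@pytest.fixture(scope="module")
def executor():
    # Loads workflows.yaml / agents.yaml / tools.yaml from the project root.
    workflows, agents, tools, *_ = LangSwarmConfigLoader().load()
    return WorkflowExecutor(workflows, agents)

def test_direct_pattern_smoke(executor):
    result = executor.run_workflow(
        "main_workflow", "Read the contents of /tmp/config.json"
    )
    # Smoke check only: the workflow ran end-to-end and produced output.
    assert result
```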
---

### 🎯 **Benefits of Enhanced Patterns**

| Aspect              | Intent-Based             | Direct                 | Hybrid        |
|---------------------|--------------------------|------------------------|---------------|
| **Complexity**      | High orchestration       | Simple operations      | Variable      |
| **Agent Knowledge** | High-level descriptions  | Method signatures      | Both          |
| **Flexibility**     | Maximum                  | Limited                | Maximum       |
| **Performance**     | Slower (orchestration)   | Faster (direct)        | Variable      |
| **Use Cases**       | GitHub, Analytics        | Filesystem, Calculator | Advanced APIs |

### 🔄 **Migration Guide**

**From Legacy Direct Calls:**
```yaml
# OLD (Don't use)
- id: legacy_call
  function: langswarm.core.utils.workflows.functions.mcp_call
  args:
    mcp_url: "local://filesystem"
    task: "read_file"
    params: {"path": "/tmp/file"}

# NEW (Intent-based)
- id: intent_call
  agent: file_agent
  input: "Read the important configuration file"
  # Agent outputs: {"mcp": {"tool": "filesystem", "intent": "read config", "context": "..."}}

# NEW (Direct)
- id: direct_call
  agent: file_agent
  input: "Read /tmp/file using direct method"
  # Agent outputs: {"mcp": {"tool": "filesystem", "method": "read_file", "params": {"path": "/tmp/file"}}}
```

### 🚀 **Best Practices**

1. **Choose the Right Pattern:**
   - **Intent-based**: Complex tools requiring orchestration (GitHub, Analytics)
   - **Direct**: Simple tools with clear method APIs (Filesystem, Calculator)
   - **Hybrid**: Advanced tools that benefit from both approaches

2. **Agent Design:**
   - Give agents high-level tool descriptions for intent-based tools
   - Provide method signatures for direct tools
   - Train agents to choose appropriate patterns

3. **Tool Configuration:**
   - Set `pattern: "intent"` for complex tools with workflows
   - Set `pattern: "direct"` for simple tools with clear methods
   - Set `pattern: "hybrid"` for advanced tools supporting both

4. **Workflow Structure:**
   - Let agents drive tool selection through their output
   - Avoid direct `mcp_call` functions in workflows for MCP tools
   - Use sequential steps for multi-tool operations

---

## ⚡ **Local Mode with Enhanced Patterns: Zero-Latency Intelligence**

The combination of `local_mode: true` with enhanced patterns provides **zero-latency tool execution** while maintaining intelligent agent abstraction.

### 🎯 **Performance Revolution**

| Pattern          | Local Mode | Remote Mode | Performance Gain |
|------------------|------------|-------------|------------------|
| **Intent-Based** | **0ms**    | 50-100ms    | **1000x faster** |
| **Direct**       | **0ms**    | 20-50ms     | **500x faster**  |
| **Hybrid**       | **0ms**    | 50-100ms    | **1000x faster** |

### 🔧 **How It Works**

The enhanced middleware automatically detects `local_mode: true` and uses optimal `local://` URLs:

```python
# Middleware automatically handles local mode
if getattr(handler, 'local_mode', False):
    mcp_url = f"local://{tool_id}"  # Zero-latency direct call
elif hasattr(handler, 'mcp_url'):
    mcp_url = handler.mcp_url  # Remote call
```
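The reason `local://` can cost roughly 0ms is that the tool server already lives inside the LangSwarm process, so "calling" it is just a Python function call. The registry below is a simplified mental model of that resolution, not the real implementation:

```python
# Hypothetical in-process registry -- a mental model of local:// resolution.
_LOCAL_SERVERS = {}

def register_local(name, handler):
    """Conceptually what importing a BaseMCPToolServer with local_mode=True
    does: the handler becomes reachable without any network hop."""
    _LOCAL_SERVERS[name] = handler

def call_local(url, payload):
    """Resolve local://<name> to a plain Python function call (~0ms)."""
    name = url[len("local://"):]
    return _LOCAL_SERVERS[name](**payload)

register_local("filesystem", lambda path: {"listing": f"contents of {path}"})
print(call_local("local://filesystem", {"path": "/tmp"}))
```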
### 📋 **Local Mode Configuration Examples**

#### **Intent-Based Local Tool:**
```yaml
# tools.yaml
tools:
  - id: local_analytics
    type: mcpanalytics
    description: "Local data analysis with zero-latency orchestration"
    local_mode: true  # Enable zero-latency execution
    pattern: "intent"
    main_workflow: "analytics_workflow"
```

```yaml
# agents.yaml
agents:
  - id: analytics_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You have access to a local analytics tool (zero-latency).

      Available tool:
      - local_analytics: Data analysis (describe what analysis you want)

      Use the intent-based pattern:
      {
        "mcp": {
          "tool": "local_analytics",
          "intent": "analyze sales trends for Q4",
          "context": "Focus on regional performance and seasonal patterns"
        }
      }

      The tool provides instant response with full orchestration.
```

#### **Direct Local Tool:**
```yaml
# tools.yaml
tools:
  - id: filesystem
    type: mcpfilesystem
    description: "Local filesystem operations"
    local_mode: true  # Enable zero-latency execution
    pattern: "direct"
    methods:
      - read_file: "Read file contents"
      - list_directory: "List directory contents"
      - write_file: "Write content to file"
```

```yaml
# agents.yaml
agents:
  - id: file_agent
    type: openai
    model: gpt-4o-mini
    system_prompt: |
      You specialize in local filesystem operations (zero-latency).

      Available tool:
      - filesystem: Local file operations
        Methods: read_file(path), list_directory(path), write_file(path, content)

      Use the direct pattern:
      {
        "mcp": {
          "tool": "filesystem",
          "method": "read_file",
          "params": {"path": "/tmp/config.json"}
        }
      }

      Local mode provides instant response times.
```

#### **Hybrid Local Tool:**
```yaml
# tools.yaml
tools:
  - id: local_calculator
    type: mcpcalculator
    description: "Advanced calculator supporting both patterns"
    local_mode: true  # Enable zero-latency execution
    pattern: "hybrid"
    main_workflow: "calculator_workflow"
    methods:
      - calculate: "Simple mathematical expression"
      - convert_units: "Unit conversion"
```

```yaml
# agents.yaml
agents:
  - id: calculator_agent
    type: openai
    model: gpt-4o
    system_prompt: |
      You have access to a local calculator (zero-latency).

      Available tool:
      - local_calculator: Mathematical operations

      Use intent-based for complex operations:
      {
        "mcp": {
          "tool": "local_calculator",
          "intent": "solve physics problem with unit conversion",
          "context": "Convert between metric and imperial units"
        }
      }

      Use direct for simple operations:
      {
        "mcp": {
          "tool": "local_calculator",
          "method": "calculate",
          "params": {"expression": "2 + 2 * 3"}
        }
      }
```

### 🔄 **Mixed Local/Remote Workflow:**
```yaml
# workflows.yaml
workflows:
  mixed_performance_workflow:
    - id: high_performance_analysis
      steps:
        # Step 1: Read data file (local, 0ms)
        - id: read_data
          agent: file_agent
          input: "Read the data file /tmp/sales_data.csv"
          output:
            to: analyze

        # Step 2: Analyze data (local intent-based, 0ms)
        - id: analyze
          agent: analytics_agent
          input: |
            Analyze the sales data for trends and patterns.
            Data: ${context.step_outputs.read_data}
          output:
            to: create_issue

        # Step 3: Create GitHub issue (remote, 50ms)
        - id: create_issue
          agent: github_agent
          input: |
            Create a GitHub issue with the analysis results.
            Analysis: ${context.step_outputs.analyze}
          output:
            to: user
```

### 🏗️ **Building Custom Local Tools**

```python
# my_tools/analytics.py
from langswarm.mcp.server_base import BaseMCPToolServer
from pydantic import BaseModel

class AnalysisInput(BaseModel):
    data: str
    analysis_type: str

class AnalysisOutput(BaseModel):
    result: str
    metrics: dict

def analyze_data(data: str, analysis_type: str):
    # Your analysis logic here
    return {
        "result": f"Analysis of type {analysis_type} completed",
        "metrics": {"trend": "upward", "confidence": 0.85}
    }

# Create local MCP server
analytics_server = BaseMCPToolServer(
    name="local_analytics",
    description="Local data analytics tool",
    local_mode=True  # Enable zero-latency mode
)

analytics_server.add_task(
    name="analyze",
    description="Analyze data trends",
    input_model=AnalysisInput,
    output_model=AnalysisOutput,
    handler=analyze_data
)

# Auto-register when imported
app = analytics_server.build_app()
```
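Because the handler is plain Python, it can be unit-tested without starting a server or a workflow; the Pydantic models defined above validate the inputs and outputs:

```python
# test_analytics.py -- exercise the handler directly: no server, no network.
from my_tools.analytics import AnalysisInput, AnalysisOutput, analyze_data

inp = AnalysisInput(data="q4_sales.csv", analysis_type="trend")
out = AnalysisOutput(**analyze_data(inp.data, inp.analysis_type))

assert out.result == "Analysis of type trend completed"
assert out.metrics["confidence"] == 0.85
```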
### 🚀 **Complete Local Mode Application:**

```python
#!/usr/bin/env python3
"""
Zero-Latency Enhanced Patterns Example
"""

from langswarm.core.config import LangSwarmConfigLoader, WorkflowExecutor

# Import local tools to register them
import langswarm.mcp.tools.filesystem.main  # Registers local filesystem
import my_tools.analytics  # Registers custom analytics

def main():
    # Load configuration
    loader = LangSwarmConfigLoader()
    workflows, agents, tools, brokers = loader.load()

    # Create executor
    executor = WorkflowExecutor(workflows, agents)

    print("🚀 Zero-Latency Enhanced Patterns Demo")
    print("=" * 50)

    # Example 1: Local direct pattern (0ms)
    print("\n1. Local Direct Pattern (Filesystem)")
    result1 = executor.run_workflow(
        "main_workflow",
        "List the contents of the /tmp directory"
    )
    print(f"Result: {result1}")

    # Example 2: Local intent pattern (0ms)
    print("\n2. Local Intent Pattern (Analytics)")
    result2 = executor.run_workflow(
        "main_workflow",
        "Analyze quarterly sales performance and identify key trends"
    )
    print(f"Result: {result2}")

    # Example 3: Mixed local/remote workflow
    print("\n3. Mixed Performance Workflow")
    result3 = executor.run_workflow(
        "mixed_performance_workflow",
        "Process sales data and create GitHub issue with results"
    )
    print(f"Result: {result3}")

    print("\n✅ Local operations completed with zero latency!")

if __name__ == "__main__":
    main()
```

### 🎯 **Local vs Remote Strategy:**

```yaml
# Development Environment (prioritize speed)
tools:
  - id: filesystem
    local_mode: true  # 0ms for fast iteration
    pattern: "direct"

  - id: analytics
    local_mode: true  # 0ms for rapid testing
    pattern: "intent"

# Production Environment (balance performance and isolation)
tools:
  - id: filesystem
    local_mode: true  # Keep local for performance
    pattern: "direct"

  - id: github
    mcp_url: "stdio://github_mcp"  # External for security
    pattern: "intent"

  - id: database
    mcp_url: "http://db-service:8080"  # External for isolation
    pattern: "hybrid"
```

### 🔄 **Migration from Legacy Local Calls:**

```yaml
# OLD (Legacy direct calls)
- id: legacy_call
  function: langswarm.core.utils.workflows.functions.mcp_call
  args:
    mcp_url: "local://filesystem"
    task: "read_file"
    params: {"path": "/tmp/file"}

# NEW (Enhanced local direct pattern)
- id: enhanced_call
  agent: file_agent
  input: "Read the file /tmp/file"
  # Agent outputs: {"mcp": {"tool": "filesystem", "method": "read_file", "params": {"path": "/tmp/file"}}}
  # Middleware automatically uses local://filesystem for 0ms latency

# NEW (Enhanced local intent pattern)
- id: intent_call
  agent: analytics_agent
  input: "Analyze the performance data in the file"
  # Agent outputs: {"mcp": {"tool": "local_analytics", "intent": "analyze performance", "context": "..."}}
  # Tool workflow handles orchestration with 0ms latency
```

### ✨ **Benefits Summary:**

**🚀 Performance Benefits:**
- Zero latency (0ms vs 50-100ms for HTTP)
- 1000x faster execution for complex operations
- Shared memory space with the LangSwarm process
- No container or server setup required

**🧠 Intelligence Benefits:**
- Intent-based: natural-language tool interaction
- Direct: explicit method calls for simple operations
- Hybrid: best of both worlds
- No agent implementation knowledge required

**🔧 Combined Benefits:**
- Zero-latency intent-based tool orchestration
- Instant direct method calls
- Scalable from development to production
- Maximum performance with maximum abstraction

**The combination of local mode + enhanced patterns delivers both the highest performance AND the most intelligent tool abstraction possible!** 🎯
"bugtrack_url": null,
"license": "MIT",
"summary": "A multi-agent ecosystem for large language models (LLMs) and autonomous systems.",
"version": "0.0.51",
"project_urls": {
"Homepage": "https://github.com/aekdahl/langswarm",
"Repository": "https://github.com/aekdahl/langswarm"
},
"split_keywords": [
"llm",
" multi-agent",
" langchain",
" hugginface",
" openai",
" mcp",
" agent",
" orchestration"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "a8efeee23a66e13c0f0b045d62d4f6c712aad16f8bc291d8d0b1bf557cec4711",
"md5": "ed95b7b03ae4d10c2d77a8876e04da7e",
"sha256": "4c4760434ca0a4810d840e50256a91d165353adef421684f46533409b7961a5d"
},
"downloads": -1,
"filename": "langswarm-0.0.51-py3-none-any.whl",
"has_sig": false,
"md5_digest": "ed95b7b03ae4d10c2d77a8876e04da7e",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.8",
"size": 208534,
"upload_time": "2025-07-04T13:17:45",
"upload_time_iso_8601": "2025-07-04T13:17:45.599587Z",
"url": "https://files.pythonhosted.org/packages/a8/ef/eee23a66e13c0f0b045d62d4f6c712aad16f8bc291d8d0b1bf557cec4711/langswarm-0.0.51-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "cc333fa44753f3b60a6e3b5db0309a7ae3a7248a239deb925d8d42588273cecf",
"md5": "08a499f22253f54896f8160c708f8b24",
"sha256": "729e61d58248060671a001249944b0a64495c1635f68beaa9713a238bcabb51f"
},
"downloads": -1,
"filename": "langswarm-0.0.51.tar.gz",
"has_sig": false,
"md5_digest": "08a499f22253f54896f8160c708f8b24",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.8",
"size": 151803,
"upload_time": "2025-07-04T13:17:46",
"upload_time_iso_8601": "2025-07-04T13:17:46.810525Z",
"url": "https://files.pythonhosted.org/packages/cc/33/3fa44753f3b60a6e3b5db0309a7ae3a7248a239deb925d8d42588273cecf/langswarm-0.0.51.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-04 13:17:46",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "aekdahl",
"github_project": "langswarm",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [],
"lcname": "langswarm"
}