# LangChain-Drasi
LangChain-Drasi enables building reactive, event-driven AI agents by bridging external data changes with LangGraph workflows. [Drasi](https://drasi.io/) continuous queries stream real-time updates that trigger agent state transitions, modify memory, or dynamically control workflow execution—transforming static agents into long-lived, responsive systems.
## Overview
`langchain-drasi` provides a seamless way to connect LangChain/LangGraph agents to [Drasi](https://drasi.io/) continuous queries, enabling AI agents to:
- **Discover** available Drasi queries
- **Read** current query results
- **Subscribe** to real-time query updates
- **React** to changes via notification handlers
## Installation
```bash
pip install langchain-drasi
```
### Development Installation
This project uses [uv](https://docs.astral.sh/uv/) for fast dependency management.
```bash
# Clone the repository
git clone https://github.com/drasi-project/langchain-drasi.git
cd langchain-drasi
# Install with development dependencies
uv sync
# Or install without dev dependencies
make install
```
## Quick Start
```python
from langchain_drasi import create_drasi_tool, MCPConnectionConfig, ConsoleHandler
# Configure HTTP connection to remote Drasi MCP server
config = MCPConnectionConfig(
    server_url="http://localhost:8083",  # Default Drasi MCP server URL
    headers={"Authorization": "Bearer your-token"},  # Optional authentication
    timeout=30.0
)
# Create notification handler
handler = ConsoleHandler()
# Create the tool
tool = create_drasi_tool(
    mcp_config=config,
    notification_handlers=[handler]
)
# Use with LangChain agents (requires langchain <1.0)
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import AzureChatOpenAI
llm = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",
    temperature=0
)
prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(llm, [tool], prompt)
agent_executor = AgentExecutor(agent=agent, tools=[tool])
# Agent can now discover and read Drasi queries
result = await agent_executor.ainvoke({
    "input": "What queries are available?"
})
```
## Features
### 🔍 Query Discovery
Agents can discover available Drasi queries automatically:
```python
queries = await tool.discover_queries()
# Returns: [QueryInfo, QueryInfo, ...]
```
### 📖 Query Reading
Read current results from any Drasi query:
```python
result = await tool.read_query("active-orders")
# Returns: QueryResult with current data
```
### 🔔 Real-time Subscriptions
Subscribe to query updates and handle changes:
```python
await tool.subscribe("hot-freezers")
# Notifications routed to registered handlers
```
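Under the hood, "routed to registered handlers" is a simple fan-out: each notification invokes the matching callback on every registered handler. A library-free sketch of that routing pattern (the `Recorder` class and `dispatch` function below are illustrative, not part of `langchain-drasi`):

```python
class Recorder:
    """Toy handler that records every callback it receives."""

    def __init__(self) -> None:
        self.events: list[tuple] = []

    def on_result_added(self, query_name: str, data: dict) -> None:
        self.events.append(("added", query_name, data))

    def on_result_updated(self, query_name: str, data: dict) -> None:
        self.events.append(("updated", query_name, data))

    def on_result_deleted(self, query_name: str, data: dict) -> None:
        self.events.append(("deleted", query_name, data))


def dispatch(handlers, event_type: str, query_name: str, data: dict) -> None:
    # Fan out one notification to the matching callback on each handler
    for h in handlers:
        getattr(h, f"on_result_{event_type}")(query_name, data)


rec = Recorder()
dispatch([rec], "added", "hot-freezers", {"id": 7, "temp": 93})
```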
### 🎯 Built-in Handlers
Six ready-to-use notification handlers:
- **ConsoleHandler**: Prints notifications to stdout with formatting
- **LoggingHandler**: Logs notifications using Python logging
- **MemoryHandler**: Stores notifications in memory for analysis
- **BufferHandler**: Stores notifications in a FIFO queue for sequential consumption
- **LangChainMemoryHandler**: Automatically injects notifications into LangChain conversation memory
- **LangGraphMemoryHandler**: Automatically injects notifications into LangGraph checkpoints
### 🛠️ Custom Handlers
Implement your own notification handlers:
```python
from langchain_drasi import BaseDrasiNotificationHandler
class MyHandler(BaseDrasiNotificationHandler):
    def on_result_added(self, query_name: str, added_data: dict) -> None:
        # Custom logic for new results
        self.save_to_database(query_name, added_data)

    def on_result_updated(self, query_name: str, updated_data: dict) -> None:
        # Custom logic for updates
        self.update_cache(query_name, updated_data)

    def on_result_deleted(self, query_name: str, deleted_data: dict) -> None:
        # Custom logic for deletions
        self.remove_from_cache(query_name, deleted_data)
```
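As a concrete, runnable variant of the skeleton above, here is a handler that keeps a per-query cache in memory instead of calling out to a database. It is a stdlib-only sketch: the class, and the assumption that each result carries an `"id"` key, are illustrative rather than part of the library's API (a real handler would subclass `BaseDrasiNotificationHandler`).

```python
from collections import defaultdict


class CacheHandler:
    """Hypothetical sketch of the three handler callbacks: maintains a
    per-query cache in memory, keyed by an assumed "id" field."""

    def __init__(self) -> None:
        # query_name -> {result_id: result_data}
        self.cache: dict[str, dict] = defaultdict(dict)

    def on_result_added(self, query_name: str, added_data: dict) -> None:
        # New result: store it under its id
        self.cache[query_name][added_data["id"]] = added_data

    def on_result_updated(self, query_name: str, updated_data: dict) -> None:
        # Updated result: overwrite the cached entry
        self.cache[query_name][updated_data["id"]] = updated_data

    def on_result_deleted(self, query_name: str, deleted_data: dict) -> None:
        # Deleted result: drop it from the cache if present
        self.cache[query_name].pop(deleted_data["id"], None)


handler = CacheHandler()
handler.on_result_added("hot-freezers", {"id": 1, "temp": 95})
handler.on_result_updated("hot-freezers", {"id": 1, "temp": 97})
```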
## Examples
See the [examples/](examples/) directory for complete working examples:
### Chat Examples ([examples/chat/](examples/chat/))
Interactive ReAct agents demonstrating automatic notification memory:
- **langchain_react.py**: LangChain ReAct agent with `LangChainMemoryHandler`
- **langgraph_react.py**: LangGraph ReAct agent with `LangGraphMemoryHandler`
- **Use case**: Freezer temperature monitoring with real-time alerts
### Terminator Game ([examples/terminator/](examples/terminator/))
Complex LangGraph agent demonstrating custom workflows and notification handling:
- **Custom LangGraph state machine** that integrates Drasi tool
- **BufferHandler** for processing real-time player positions
- **Use case**: AI agent hunts players using Drasi continuous queries
## API Reference
### Main Functions
#### `create_drasi_tool()`
Factory function to create a DrasiTool instance.
**Parameters:**
- `mcp_config` (MCPConnectionConfig): MCP connection configuration
- `notification_handlers` (list[DrasiNotificationHandler], optional): Notification handlers
**Returns:** DrasiTool instance
### Configuration
#### `MCPConnectionConfig`
Pydantic model for HTTP-based MCP server connection configuration.
**Fields:**
- `server_url` (str): HTTP/HTTPS URL of the Drasi MCP server
- `headers` (dict[str, str], optional): HTTP headers for authentication
- `timeout` (float): Request timeout in seconds (default: 30.0)
- `reconnect_policy` (ReconnectPolicy): Reconnection settings
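The fields of `ReconnectPolicy` are not spelled out here, but a common reconnection scheme is exponential backoff with a cap and jitter. A stdlib-only sketch of that idea, under the assumption that such a policy applies (all names and defaults below are hypothetical, not the library's API):

```python
import random


def backoff_delays(base: float = 0.5, factor: float = 2.0,
                   max_delay: float = 30.0, max_retries: int = 5) -> list[float]:
    """Hypothetical exponential-backoff schedule: base * factor**attempt,
    capped at max_delay, plus up to 10% jitter to spread retries out."""
    delays = []
    for attempt in range(max_retries):
        delay = min(base * (factor ** attempt), max_delay)
        delays.append(delay + random.uniform(0, delay * 0.1))
    return delays
```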
### Handlers
#### `LoggingHandler`
Logs notifications using Python's logging framework.
```python
import logging

from langchain_drasi import LoggingHandler

handler = LoggingHandler(
    logger_name="drasi.notifications",
    log_level=logging.INFO
)
```
#### `ConsoleHandler`
Prints notifications to console with formatted output.
```python
from langchain_drasi import ConsoleHandler
handler = ConsoleHandler()
# Use with create_drasi_tool
tool = create_drasi_tool(
    mcp_config=config,
    notification_handlers=[handler]
)
```
#### `MemoryHandler`
Stores notifications in memory.
```python
from langchain_drasi import MemoryHandler
handler = MemoryHandler(max_size=100)
# Retrieve notifications
all_notifs = handler.get_all()
freezer_notifs = handler.get_by_query("freezerx")
added_events = handler.get_by_type("added")
```
#### `BufferHandler`
Stores notifications in a FIFO queue for sequential consumption. This is useful for buffering incoming change notifications while your workflow is busy with another step.
```python
from langchain_drasi import BufferHandler
handler = BufferHandler(max_size=50)
# Use with create_drasi_tool
tool = create_drasi_tool(
    mcp_config=config,
    notification_handlers=[handler]
)
# Consume notifications one at a time
while not handler.is_empty():
    notification = handler.consume()
    process_notification(notification)
# Or peek without consuming
next_notif = handler.peek()
# Check buffer status
current_size = handler.size()
```
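If you want the same bounded-FIFO semantics without the library, `collections.deque` gets you most of the way. The method names below simply mirror the handler's documented API; the drop-oldest-when-full eviction policy is one plausible choice, not a documented guarantee:

```python
from collections import deque


class FifoBuffer:
    """Stdlib sketch of a bounded FIFO buffer. When max_size is
    exceeded, the oldest item is silently evicted (an assumption)."""

    def __init__(self, max_size: int) -> None:
        self._q: deque = deque(maxlen=max_size)

    def add(self, item) -> None:
        self._q.append(item)  # drops the oldest item when full

    def consume(self):
        # Remove and return the oldest item, or None if empty
        return self._q.popleft() if self._q else None

    def peek(self):
        # Look at the oldest item without removing it
        return self._q[0] if self._q else None

    def size(self) -> int:
        return len(self._q)

    def is_empty(self) -> bool:
        return not self._q
```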
#### `LangChainMemoryHandler`
Automatically injects notifications into LangChain conversation memory as system messages.
```python
from langchain.memory import ConversationBufferMemory
from langchain_drasi import LangChainMemoryHandler
memory = ConversationBufferMemory(
    memory_key="chat_history",
    input_key="input",
    output_key="output",
)
handler = LangChainMemoryHandler(memory)
# Notifications are automatically added to conversation memory
tool = create_drasi_tool(
    mcp_config=config,
    notification_handlers=[handler]
)
```
See [examples/chat/langchain_react.py](examples/chat/langchain_react.py) for a complete example.
#### `LangGraphMemoryHandler`
Automatically injects notifications into LangGraph checkpoints as system messages.
```python
from langgraph.checkpoint.memory import MemorySaver
from langchain_drasi import LangGraphMemoryHandler
memory = MemorySaver()
thread_id = "my-conversation"
handler = LangGraphMemoryHandler(memory, thread_id)
# Create agent with wrapped checkpointer
from langgraph.prebuilt import create_react_agent
agent = create_react_agent(
    model=llm,
    tools=[drasi_tool],
    checkpointer=handler.checkpointer,  # Use wrapped checkpointer
)
```
See [examples/chat/langgraph_react.py](examples/chat/langgraph_react.py) for a complete example.
## Development
### Running Tests
```bash
# Run all tests (including integration)
make test
# Run tests excluding integration tests
make test-fast
# Run unit tests only
make test-unit
# Run integration tests only
make test-integration
```
### Code Quality
```bash
# Format code
make format
# Run linting checks (ruff + mypy + pyright)
make lint
# Run type checking only
make typecheck
```
### Available Make Targets
Run `make help` to see all available commands.
## Requirements
- Python 3.11+
- LangChain Core >=0.1.0
- LangGraph >=0.1.0
- MCP SDK >=1.0.0
- Pydantic >=2.0.0
**Note**: Examples using LangChain's legacy APIs (agents, memory, hub) require LangChain <1.0. For LangChain 1.0+, use LangGraph-based workflows.
## Use Case Examples
### 1. Realtime Knowledge Agents
**Example: AI Trading or News Monitoring Agent**
Build agents that maintain evolving understanding of companies or topics. When new market data, filings, or news arrives, Drasi continuous queries detect changes and push updates into LangGraph memory via notification handlers.
The workflow can:
- Trigger summarization nodes
- Re-evaluate trading strategy nodes
- Send alerts when thresholds are crossed
**Key benefits:** Asynchronous events can change agent reasoning mid-execution
### 2. Collaborative AI Co-Pilots
**Example: Project Management Assistant (Jira + Slack Integration)**
Create agents that manage "plan and execute" loops. LangChain-Drasi streams updates when:
- New tickets are created
- Teammates comment
- Deployment pipelines fail
The agent dynamically adjusts workflows to:
- Reassign tasks
- Summarize recent changes
- Notify relevant stakeholders
**Key benefits:** Integration with human workflows and reactive decision making
### 3. IoT or Environment-Aware Agents
**Example: Smart Home / Facility Monitoring Agent**
Implement agents following "Observe → Diagnose → Act" patterns. LangChain-Drasi streams sensor data (temperature, occupancy, motion) from Drasi queries, enabling agents to receive events like:
- "Temperature > 90°F in server room"
- "Door left open after 10PM"
These trigger subgraph actions such as notifying staff or adjusting systems.
**Key benefits:** Event-driven control loops with context persistence
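The threshold checks in this scenario can be sketched as a small, self-contained rule function that turns a raw sensor reading into the alert strings shown above (the function name, parameters, and thresholds are illustrative, not part of the library):

```python
from datetime import time


def check_reading(room: str, temp_f: float, door_open: bool,
                  clock: time) -> list[str]:
    """Hypothetical rule check mirroring the example events above."""
    alerts = []
    if temp_f > 90.0:
        alerts.append(f"Temperature > 90°F in {room}")
    if door_open and clock >= time(22, 0):  # 10PM or later
        alerts.append("Door left open after 10PM")
    return alerts
```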
### 4. Customer Support or CRM AI
**Example: Proactive Customer Agent**
Build agents that track ongoing support tickets and dynamically respond to external updates (customer replies, sentiment scores, transaction data) streamed via Drasi.
The agent can:
- Update its mental model of customers
- Suggest next actions
- Flag escalation workflows
**Key benefits:** Dynamic memory updates and priority re-ranking based on streaming data
### 5. Game or Simulation AI
**Example: Dynamic NPC or Dungeon Master Agent**
Create agents where LangGraph models narrative or game logic, while LangChain-Drasi feeds real-time game state:
- Player positions
- Inventory changes
- Player chat inputs
The AI responds with adaptive storylines or strategic NPC behaviors.
**Key benefits:** Continuous interaction loops and event-driven storytelling
### 6. DevOps or Observability Assistant
**Example: LLM-Augmented Ops Monitor**
Build agents that follow response patterns (detect → diagnose → remediate). Drasi queries monitor logs, metrics, or alerts, and LangChain-Drasi routes these updates into the agent's context, triggering:
- Log analysis
- Hypothesis generation
- Action nodes (restart service, notify engineer)
**Key benefits:** Event-triggered reasoning pipelines integrating with infrastructure telemetry
### 7. Realtime Collaborative Editing / Chat Agents
**Example: Async Group Assistant**
Develop agents for multi-user scenarios where users edit or discuss in real-time. LangChain-Drasi receives streaming edits, comments, or conversation events, enabling agents to:
- Maintain global context
- Offer live suggestions
- Adjust strategies collaboratively
**Key benefits:** Multi-user event synchronization and async context adaptation
## License
Apache License 2.0 - see [LICENSE](LICENSE) file for details.
## Support
- **Issues**: [GitHub Issues](https://github.com/drasi-project/langchain-drasi/issues)
- **Drasi Documentation**: [drasi.io](https://drasi.io/)
- **LangChain Documentation**: [python.langchain.com](https://python.langchain.com/)
- **LangGraph Documentation**: [langchain-ai.github.io/langgraph](https://langchain-ai.github.io/langgraph/)
## Acknowledgments
- Built with [LangChain](https://www.langchain.com/)
- Powered by [Drasi](https://drasi.io/)
- Uses [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)