langchain-aisdk-adapter

Name: langchain-aisdk-adapter
Version: 0.0.1a1
Summary: A Python package that converts LangChain/LangGraph event streams to AI SDK UI Stream Protocol format
Upload time: 2025-07-16 06:35:46
Author email: Egoalpha <yao_yuan@live.com>
Requires Python: <4.0,>=3.9.0
License: Apache-2.0
Keywords: langchain, ai-sdk, stream, protocol, adapter, langgraph, ai, llm, fastapi
Requirements: No requirements were recorded.

# LangChain AI SDK Adapter

[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Development Status](https://img.shields.io/badge/status-alpha-orange.svg)](https://pypi.org/project/langchain-aisdk-adapter/)

> **⚠️ Alpha Release Notice**: This project is currently in alpha stage. While we strive for stability and reliability, please be aware that APIs may change and some features might still be under development. We appreciate your patience and welcome any feedback to help us improve!

A thoughtfully designed Python adapter that bridges LangChain/LangGraph applications with the AI SDK UI Stream Protocol. This library aims to make it easier for developers to integrate LangChain's powerful capabilities with modern streaming interfaces.

## ✨ Features

We've tried to make this adapter as comprehensive and user-friendly as possible:

- **πŸ”„ Comprehensive Protocol Support**: Supports 15+ AI SDK protocols including text streaming, tool interactions, step management, and data handling
- **βš™οΈ Intelligent Configuration**: Flexible `AdapterConfig` system allows you to control exactly which protocols are generated
- **⏸️ Dynamic Protocol Control**: Context managers and pause/resume functionality for temporarily enabling or disabling specific protocols during execution
- **πŸ”’ Thread-Safe Multi-User Support**: `ThreadSafeAdapterConfig` provides isolated protocol state for concurrent requests in web applications
- **πŸ› οΈ Rich Tool Support**: Seamless integration with LangChain tools, agents, and function calling
- **πŸ” Extended Step Detection**: Enhanced recognition of LangChain agents, chains, and LangGraph components for better step tracking
- **πŸ“Š Usage Tracking**: Built-in statistics and monitoring for stream processing
- **πŸ”’ Type Safety**: Complete Python type hints and Pydantic validation
- **🏭 Factory Methods**: Convenient factory methods for manual protocol generation when needed
- **🌐 Web Framework Ready**: Optimized for FastAPI, Flask, and other web frameworks
- **πŸ”Œ Extensible Design**: Easy to extend and customize for specific use cases

## πŸš€ Quick Start

We hope this gets you up and running quickly:

### Installation

```bash
# Basic installation
pip install langchain-aisdk-adapter

# With examples (includes LangChain, LangGraph, OpenAI)
pip install langchain-aisdk-adapter[examples]

# With web framework support (includes FastAPI, Uvicorn)
pip install langchain-aisdk-adapter[web]

# For development (includes testing and linting tools)
pip install langchain-aisdk-adapter[dev]
```

### Basic Usage

Here's a simple example to get you started:

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_aisdk_adapter import LangChainAdapter

# Create your LangChain model
llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)

# Create a stream
stream = llm.astream([HumanMessage(content="Hello, world!")])

# Convert to AI SDK format - it's that simple!
ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)

# Use in your application
async for chunk in ai_sdk_stream:
    print(chunk, end="", flush=True)
```

### Configuration Options

We've included several preset configurations to make common use cases easier:

```python
from langchain_aisdk_adapter import LangChainAdapter, AdapterConfig, ThreadSafeAdapterConfig

# Minimal output - just essential text and data
config = AdapterConfig.minimal()

# Focus on tool interactions
config = AdapterConfig.tools_only()

# Everything enabled (default)
config = AdapterConfig.comprehensive()

# Custom configuration
config = AdapterConfig(
    enable_text=True,
    enable_data=True,
    enable_tool_calls=True,
    enable_steps=False,  # Disable step tracking
    enable_reasoning=False  # Disable reasoning output
)

stream = LangChainAdapter.to_data_stream_response(your_stream, config=config)

# Protocol pause/resume functionality
with config.pause_protocols(['0', '2']):  # Temporarily disable text and data protocols
    # During this block, text and data protocols won't be generated
    restricted_stream = LangChainAdapter.to_data_stream_response(some_stream, config=config)
    async for chunk in restricted_stream:
        # Only tool calls, results, and steps will be emitted
        print(chunk)
# Protocols are automatically restored after the context

# Thread-safe configuration for multi-user applications
safe_config = ThreadSafeAdapterConfig()

# Each request gets isolated protocol state
with safe_config.protocols(['0', '9', 'a']):  # Only text, tool calls, and results
    stream = LangChainAdapter.to_data_stream_response(your_stream, config=safe_config)
    # This configuration won't affect other concurrent requests
```

## πŸ“‹ Protocol Support Status

We've organized the supported protocols into three categories to help you understand what's available and when they're triggered:

### 🟒 Automatically Supported Protocols

These protocols are generated automatically from LangChain/LangGraph events with specific trigger conditions:

#### **`0:` (Text Protocol)**
**Trigger Condition**: Generated when LLM produces streaming text content
**Format**: `0:"streaming text content"`
**When it occurs**: 
- During `llm.astream()` calls
- When LangGraph nodes produce text output
- Any streaming text from language models
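
For illustration, here is a minimal client-side sketch of decoding text frames, assuming each chunk is a single `0:`-prefixed line whose payload is a JSON-encoded string (as in the format above):

```python
import json
from typing import Optional

def extract_text(chunk: str) -> Optional[str]:
    """Return the decoded payload of a `0:` frame, or None for other frames."""
    if chunk.startswith('0:'):
        # The payload is a JSON string literal, e.g. 0:"hello"
        return json.loads(chunk[2:])
    return None

assert extract_text('0:"streaming text content"') == "streaming text content"
```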

#### **`2:` (Data Protocol)**
**Trigger Condition**: Generated for structured data and metadata
**Format**: `2:[{"key":"value"}]`
**When it occurs**:
- LangGraph node metadata and intermediate results
- Tool execution metadata
- Custom data from LangChain callbacks

#### **`9:` (Tool Call Protocol)**
**Trigger Condition**: Generated when tools are invoked
**Format**: `9:{"toolCallId":"call_123","toolName":"search","args":{"query":"test"}}`
**When it occurs**:
- LangChain agent tool invocations
- LangGraph tool node executions
- Function calling in chat models

#### **`a:` (Tool Result Protocol)**
**Trigger Condition**: Generated when tool execution completes
**Format**: `a:{"toolCallId":"call_123","result":"tool output"}`
**When it occurs**:
- After successful tool execution
- Following any `9:` protocol
- Both successful and error results
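
Since every `a:` frame carries the same `toolCallId` as its originating `9:` frame, a consumer can pair the two with a small lookup table. A minimal sketch, assuming one JSON object per frame as in the formats above:

```python
import json

pending_calls = {}  # toolCallId -> call info from the `9:` frame

def track_tool_frames(chunk: str) -> None:
    """Pair `a:` result frames with their originating `9:` call frames."""
    if chunk.startswith('9:'):
        call = json.loads(chunk[2:])
        pending_calls[call["toolCallId"]] = call
    elif chunk.startswith('a:'):
        result = json.loads(chunk[2:])
        call = pending_calls.pop(result["toolCallId"], None)
        if call is not None:
            print(f'{call["toolName"]}({call["args"]}) -> {result["result"]}')

track_tool_frames('9:{"toolCallId":"call_123","toolName":"search","args":{"query":"test"}}')
track_tool_frames('a:{"toolCallId":"call_123","result":"tool output"}')
```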

#### **`b:` (Tool Call Stream Start Protocol)**
**Trigger Condition**: Generated at the beginning of streaming tool calls
**Format**: `b:{"toolCallId":"call_123","toolName":"search"}`
**When it occurs**:
- Before tool parameter streaming begins
- Only for tools that support streaming parameters

#### **`d:` (Finish Message Protocol)** ⚠️ **LangGraph Only**
**Trigger Condition**: **Only generated in LangGraph workflows** when a message is completed
**Format**: `d:{"finishReason":"stop","usage":{"promptTokens":10,"completionTokens":20}}`
**When it occurs**:
- **LangGraph workflow message completion**
- **NOT generated in basic LangChain streams**
- End of LangGraph node execution
- Contains usage statistics when available
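
For example, a consumer can read the finish reason and token usage from a `d:` frame like this (a sketch, assuming the format shown above):

```python
import json

chunk = 'd:{"finishReason":"stop","usage":{"promptTokens":10,"completionTokens":20}}'
if chunk.startswith('d:'):
    finish = json.loads(chunk[2:])
    usage = finish.get("usage", {})
    total = usage.get("promptTokens", 0) + usage.get("completionTokens", 0)
    print(f'finished: {finish["finishReason"]}, total tokens: {total}')
```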

#### **`e:` (Finish Step Protocol)** πŸ”„ **Enhanced Support**
**Trigger Condition**: Generated when major workflow components complete execution
**Format**: `e:{"stepId":"step_123","finishReason":"completed"}`
**When it occurs**:
- **LangGraph workflow step completion** (primary use case)
- **LangChain agent execution** (AgentExecutor, ReActAgent, ChatAgent, etc.)
- **Chain-based workflows** (LLMChain, SequentialChain, RouterChain, etc.)
- **Components with specific tags** (agent, chain, executor, workflow, multi_agent)
- End of multi-step processes and reasoning steps

#### **`f:` (Start Step Protocol)** πŸ”„ **Enhanced Support**
**Trigger Condition**: Generated when major workflow components begin execution
**Format**: `f:{"stepId":"step_123","stepType":"agent_action"}`
**When it occurs**:
- **LangGraph workflow step initiation** (primary use case)
- **LangChain agent execution** (AgentExecutor, ReActAgent, ChatAgent, etc.)
- **Chain-based workflows** (LLMChain, SequentialChain, RouterChain, etc.)
- **LangGraph components** (LangGraph, CompiledGraph, StateGraph, etc.)
- **Components with specific tags** (agent, chain, executor, workflow, multi_agent, langgraph, graph)
- Beginning of multi-step processes and reasoning steps

> πŸ’‘ **Important Notes**: 
> - `d:` is **LangGraph-only**, and `e:`/`f:` are primarily LangGraph-driven (with the agent/chain support noted above); none of the three appear in plain `llm.astream()` streams
> - All automatically supported protocols can be individually enabled or disabled through `AdapterConfig`
> - The exact format may vary based on the underlying LangChain/LangGraph event structure
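
Putting the automatic protocols together, a consumer can route frames by their prefix. A minimal dispatcher sketch (the handler bodies are illustrative, not part of this package):

```python
import json

def handle_frame(chunk: str) -> None:
    """Dispatch one AI SDK frame by its protocol type prefix."""
    prefix, _, payload = chunk.partition(':')
    if prefix == '0':
        print("text:", json.loads(payload))
    elif prefix == '2':
        print("data:", json.loads(payload))
    elif prefix in ('9', 'a', 'b'):
        print("tool frame:", prefix, json.loads(payload))
    elif prefix in ('d', 'e', 'f'):
        print("lifecycle frame:", prefix, json.loads(payload))

async def consume(ai_sdk_stream):
    async for chunk in ai_sdk_stream:
        handle_frame(chunk)
```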

### 🟑 Manual Support Only Protocols

These protocols require manual generation using our factory methods:

#### **`g:` (Reasoning Protocol)** ⚠️ **Manual Support Only**
**Purpose**: Transmits AI reasoning process and thought chains
**Format**: `g:{"reasoning":"Let me think about this step by step...","confidence":0.85}`
**Manual Creation**:
```python
from langchain_aisdk_adapter import AISDKFactory

# Create reasoning protocol
reasoning_part = AISDKFactory.create_reasoning_part(
    reasoning="Let me analyze the user's request...",
    confidence=0.9
)
print(f"g:{reasoning_part.model_dump_json()}")
```
**Use Cases**: Chain-of-thought reasoning, decision explanations, confidence scoring

#### **`c:` (Tool Call Delta Protocol)** ⚠️ **Manual Support Only**
**Purpose**: Streams incremental updates during tool call execution
**Format**: `c:{"toolCallId":"call_123","delta":{"function":{"arguments":"{\"query\":\"hello\"}"}},"index":0}`
**Manual Creation**:
```python
from langchain_aisdk_adapter import AISDKFactory

# Create tool call delta
delta_part = AISDKFactory.create_tool_call_delta_part(
    tool_call_id="call_123",
    delta={"function": {"arguments": '{"query":"hello"}'}},
    index=0
)
print(f"c:{delta_part.model_dump_json()}")
```
**Use Cases**: Real-time tool execution feedback, streaming function calls

#### **`8:` (Message Annotations Protocol)** ⚠️ **Manual Support Only**
**Purpose**: Adds metadata and annotations to messages
**Format**: `8:{"annotations":[{"type":"citation","text":"Source: Wikipedia"}],"metadata":{"confidence":0.95}}`
**Manual Creation**:
```python
from langchain_aisdk_adapter import AISDKFactory

# Create message annotation
annotation_part = AISDKFactory.create_message_annotation_part(
    annotations=[{"type": "citation", "text": "Source: Wikipedia"}],
    metadata={"confidence": 0.95}
)
print(f"8:{annotation_part.model_dump_json()}")
```
**Use Cases**: Source citations, confidence scores, content metadata

#### **`h:` (Source Protocol)** βœ… **Manual Support**
**Manual Creation**: Use `create_source_part(url, title=None)` or `AISDKFactory.source(url, title=None)`
**Format**: `h:{"url":"https://example.com","title":"Document Title"}`
**Use Cases**: Document references, citation tracking, source attribution

#### **`i:` (Redacted Reasoning Protocol)** βœ… **Manual Support**
**Manual Creation**: Use `create_redacted_reasoning_part(data)` or `AISDKFactory.redacted_reasoning(data)`
**Format**: `i:{"data":"[REDACTED] reasoning content"}`
**Use Cases**: Privacy-compliant reasoning output, content filtering

#### **`j:` (Reasoning Signature Protocol)** βœ… **Manual Support**
**Manual Creation**: Use `create_reasoning_signature_part(signature)` or `AISDKFactory.reasoning_signature(signature)`
**Format**: `j:{"signature":"signature_abc123"}`
**Use Cases**: Reasoning verification, model signatures, authenticity tracking

#### **`k:` (File Protocol)** βœ… **Manual Support**
**Manual Creation**: Use `create_file_part(data, mime_type)` or `AISDKFactory.file(data, mime_type)`
**Format**: `k:{"data":"base64_encoded_data","mimeType":"image/png"}`
**Use Cases**: File attachments, binary data transmission, document sharing
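
For quick reference, a combined sketch of the four helpers above. The package-root import path and the `.model_dump_json()` call are assumptions carried over from the factory examples earlier:

```python
# Assumed import path; these helpers are listed above without a module path
from langchain_aisdk_adapter import (
    create_source_part,
    create_redacted_reasoning_part,
    create_reasoning_signature_part,
    create_file_part,
)

source = create_source_part("https://example.com", title="Document Title")
redacted = create_redacted_reasoning_part("[REDACTED] reasoning content")
signature = create_reasoning_signature_part("signature_abc123")
file_part = create_file_part("base64_encoded_data", mime_type="image/png")

for prefix, part in [('h', source), ('i', redacted), ('j', signature), ('k', file_part)]:
    print(f"{prefix}:{part.model_dump_json()}")
```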


### πŸ”΄ Currently Unsupported Protocols

We're working on these, but they're not yet available:

- **`1:` (Function Call)**: Different from LangChain's tool system architecture
- **`4:` (Tool Call Stream)**: Requires streaming parameter support not available in current LangChain versions
- **`5:` (Tool Call Stream Part)**: Same limitation as above
- **`6:` (Tool Call Stream Delta)**: Same limitation as above
- **`7:` (Tool Call Stream Finish)**: Same limitation as above

## πŸ› οΈ Manual Protocol Generation

For protocols that need manual implementation, we've provided convenient factory methods:

```python
from langchain_aisdk_adapter.factory import AISDKFactory

# Create factory instance
factory = AISDKFactory()

# Generate reasoning protocol
reasoning_part = factory.reasoning(
    content="Let me think about this step by step..."
)

# Generate source protocol
source_part = factory.source(
    url="https://example.com/document",
    title="Important Document"
)

# Generate redacted reasoning protocol
redacted_part = factory.redacted_reasoning(
    data="[REDACTED] sensitive reasoning content"
)

# Generate reasoning signature protocol
signature_part = factory.reasoning_signature(
    signature="model_signature_abc123"
)

# Generate file protocol
file_part = factory.file(
    data="iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8/5+hHgAHggJ/PchI7wAAAABJRU5ErkJggg==",
    mime_type="image/png"
)

# Generate message annotation
annotation_part = factory.annotation(
    message_id="msg_123",
    annotation_type="confidence",
    value={"score": 0.95}
)

# Generate tool call delta (for streaming parameters)
tool_delta_part = factory.tool_call_delta(
    tool_call_id="call_123",
    name="search",
    args_delta='{"query": "artificial intel'  # partial JSON
)

# Use factory instance for quick protocol creation
from langchain_aisdk_adapter import factory

# Simplified factory methods
text_part = factory.text("Hello from LangChain!")
data_part = factory.data({"temperature": 0.7, "max_tokens": 100})
error_part = factory.error("Connection timeout")
reasoning_part = factory.reasoning("Based on the context, I should...")
source_part = factory.source(
    url="https://docs.langchain.com",
    title="LangChain Documentation"
)

# Use in streaming responses
async def stream_with_factory():
    yield text_part
    yield reasoning_part
    yield data_part
```

**Why Manual Implementation?**

We've had to make some protocols manual due to technical limitations:
- **Reasoning content**: Different LLMs use varying reasoning formats that can't be automatically standardized
- **Tool call deltas**: LangChain's tool system doesn't provide streaming parameter generation
- **Message annotations**: LangChain lacks a standardized event system for message metadata
- **Source tracking**: Document source information requires explicit application-level implementation
- **Content filtering**: Redacted reasoning needs custom privacy and security policies
- **File handling**: Binary file processing and encoding varies significantly across different implementations

## 🌐 Web Integration Examples

We've included comprehensive examples for web frameworks:

### FastAPI Integration

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langchain_aisdk_adapter import LangChainAdapter

app = FastAPI()
llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)

@app.post("/chat")
async def chat(message: str):
    stream = llm.astream([HumanMessage(content=message)])
    
    return StreamingResponse(
        LangChainAdapter.to_data_stream_response(stream),
        media_type="text/plain"
    )
```

### Multi-turn Conversations

Handle multi-turn conversations with message history:

```python
from langchain_core.messages import HumanMessage, AIMessage
from langchain_openai import ChatOpenAI
from langchain_aisdk_adapter import LangChainAdapter
import json

llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)

async def multi_turn_chat():
    conversation_history = []
    
    # First turn
    user_input = "What is machine learning?"
    conversation_history.append(HumanMessage(content=user_input))
    
    response_content = ""
    stream = llm.astream(conversation_history)
    ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)
    
    async for chunk in ai_sdk_stream:
        if chunk.startswith('0:'):
            # The payload of a `0:` frame is a JSON-encoded string
            text_content = json.loads(chunk[2:])
            response_content += text_content
        yield chunk
    
    conversation_history.append(AIMessage(content=response_content))
    
    # Second turn
    user_input = "Can you give me an example?"
    conversation_history.append(HumanMessage(content=user_input))
    
    stream = llm.astream(conversation_history)
    ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)
    
    async for chunk in ai_sdk_stream:
        yield chunk
```

For complete examples including agent integration, tool usage, and error handling, please check the `web/` directory.

## πŸ§ͺ Usage Examples

Here are comprehensive examples showing different ways to use the adapter:

### Basic LangChain Streaming

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_aisdk_adapter import LangChainAdapter

# Simple streaming example
llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)
stream = llm.astream([HumanMessage(content="Tell me a joke")])

# Convert to AI SDK format
ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)

# Process the stream
async for chunk in ai_sdk_stream:
    print(chunk, end="", flush=True)
    # Output: 0:"Why did the chicken cross the road?"
    #         0:" To get to the other side!"
```

### LangChain with Tools

```python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain_aisdk_adapter import LangChainAdapter

# Define a tool
@tool
def get_weather(city: str) -> str:
    """Get weather information for a city."""
    return f"The weather in {city} is sunny and 25Β°C"

# Create agent
llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_openai_functions_agent(llm, [get_weather], prompt)
agent_executor = AgentExecutor(agent=agent, tools=[get_weather])

# Stream with tools
stream = agent_executor.astream({"input": "What's the weather in Paris?"})
ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)

async for chunk in ai_sdk_stream:
    print(chunk)
    # Output includes:
    # 9:{"toolCallId":"call_123","toolName":"get_weather","args":{"city":"Paris"}}
    # a:{"toolCallId":"call_123","result":"The weather in Paris is sunny and 25Β°C"}
    # 0:"The weather in Paris is sunny and 25Β°C"
```

### LangGraph Workflow (with Step Protocols)

```python
from langgraph.graph import StateGraph
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage
from langchain_aisdk_adapter import LangChainAdapter
from typing import TypedDict, List, Union

class State(TypedDict):
    messages: List[Union[HumanMessage, AIMessage]]

def chat_node(state: State):
    llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)
    response = llm.invoke(state["messages"])
    return {"messages": state["messages"] + [response]}

# Create workflow
workflow = StateGraph(State)
workflow.add_node("chat", chat_node)
workflow.set_entry_point("chat")
workflow.set_finish_point("chat")

app = workflow.compile()

# Stream LangGraph workflow
stream = app.astream({"messages": [HumanMessage(content="Hello!")]})
ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)

async for chunk in ai_sdk_stream:
    print(chunk)
    # Output includes LangGraph-specific protocols:
    # f:{"stepId":"step_123","stepType":"node_execution"}
    # 0:"Hello! How can I help you today?"
    # e:{"stepId":"step_123","finishReason":"completed"}
    # d:{"finishReason":"stop","usage":{"promptTokens":10,"completionTokens":15}}
```

### Custom Configuration Examples

```python
from langchain_aisdk_adapter import LangChainAdapter, AdapterConfig

# Only text output
config = AdapterConfig(
    enable_text=True,
    enable_data=False,
    enable_tool_calls=False,
    enable_steps=False
)

# Only tool interactions
config = AdapterConfig.tools_only()

# Everything except steps (for basic LangChain)
config = AdapterConfig(
    enable_text=True,
    enable_data=True,
    enable_tool_calls=True,
    enable_tool_results=True,
    enable_steps=False  # Disable LangGraph-specific protocols
)

stream = LangChainAdapter.to_data_stream_response(your_stream, config=config)
```

### Thread-Safe Configuration for Multi-User Applications

```python
from langchain_aisdk_adapter import ThreadSafeAdapterConfig
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

# Create thread-safe configuration for FastAPI
safe_config = ThreadSafeAdapterConfig()

app = FastAPI()

@app.post("/chat")
async def chat(message: str):
    """Each request gets isolated protocol state"""
    stream = llm.astream([HumanMessage(content=message)])
    
    # Thread-safe: each request has isolated configuration
    return StreamingResponse(
        LangChainAdapter.to_data_stream_response(stream, config=safe_config),
        media_type="text/plain"
    )

@app.post("/chat-minimal")
async def chat_minimal(message: str):
    """Temporarily disable certain protocols for this request only"""
    stream = llm.astream([HumanMessage(content=message)])
    
    # Use context manager to temporarily modify protocols
    with safe_config.pause_protocols(['2', '9', 'a']):  # Disable data and tools
        return StreamingResponse(
            LangChainAdapter.to_data_stream_response(stream, config=safe_config),
            media_type="text/plain"
        )
    # Protocols automatically restored after context

@app.post("/chat-selective")
async def chat_selective(message: str):
    """Enable only specific protocols for this request"""
    stream = llm.astream([HumanMessage(content=message)])
    
    # Enable only text and data protocols
    with safe_config.protocols(['0', '2']):
        return StreamingResponse(
            LangChainAdapter.to_data_stream_response(stream, config=safe_config),
            media_type="text/plain"
        )
```

### Protocol Context Management

```python
from langchain_aisdk_adapter import AdapterConfig, ThreadSafeAdapterConfig

# Regular config with context management
config = AdapterConfig()

# Temporarily disable specific protocols
with config.pause_protocols(['0', '2']):  # Pause text and data
    # Only tool calls and results will be generated
    stream = LangChainAdapter.to_data_stream_response(some_stream, config=config)
    async for chunk in stream:
        print(chunk)  # No text or data protocols
# Protocols automatically restored

# Enable only specific protocols
with config.protocols(['0', '9', 'a']):  # Only text, tool calls, and results
    stream = LangChainAdapter.to_data_stream_response(some_stream, config=config)
    async for chunk in stream:
        print(chunk)  # Only specified protocols

# Thread-safe version for concurrent applications
safe_config = ThreadSafeAdapterConfig()

# Each context is isolated per request/thread
with safe_config.protocols(['0']):  # Text only
    # This won't affect other concurrent requests
    stream = LangChainAdapter.to_data_stream_response(stream1, config=safe_config)

# Nested contexts are supported
with safe_config.pause_protocols(['2']):
    with safe_config.protocols(['0', '9']):
        # Only text and tool calls, data is paused
        stream = LangChainAdapter.to_data_stream_response(stream2, config=safe_config)
```

### Error Handling

```python
from langchain_aisdk_adapter import LangChainAdapter
import asyncio

async def safe_streaming():
    try:
        stream = llm.astream([HumanMessage(content="Hello")])
        ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)
        
        async for chunk in ai_sdk_stream:
            print(chunk, end="", flush=True)
            
    except Exception as e:
        print(f"Error during streaming: {e}")
        # Handle errors appropriately

asyncio.run(safe_streaming())
```

### Integration with Callbacks

```python
from langchain_core.callbacks import AsyncCallbackHandler
from langchain_aisdk_adapter import LangChainAdapter

class CustomCallback(AsyncCallbackHandler):
    async def on_llm_start(self, serialized, prompts, **kwargs):
        print("LLM started")
    
    async def on_llm_end(self, response, **kwargs):
        print("LLM finished")

# Use with callbacks
llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True, callbacks=[CustomCallback()])
stream = llm.astream([HumanMessage(content="Hello")])
ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)

# The adapter will capture callback events as data protocols
async for chunk in ai_sdk_stream:
    print(chunk)
    # May include: 2:[{"event":"llm_start","timestamp":"..."}]
```

## πŸ§ͺ Testing

We take testing seriously and maintain high coverage:

```bash
# Install development dependencies
pip install langchain-aisdk-adapter[dev]

# Run tests with coverage
pytest tests/ -v --cov=src --cov-report=term-missing

# Current coverage: 98%
```

## πŸ“š API Reference

### LangChainAdapter

The main adapter class:

```python
class LangChainAdapter:
    @staticmethod
    async def to_data_stream_response(
        stream: AsyncIterator,
        config: Optional[AdapterConfig] = None
    ) -> AsyncIterator[str]:
        """Convert LangChain stream to AI SDK format"""
```

### AdapterConfig

Configuration class for controlling protocol generation:

```python
class AdapterConfig:
    enable_text: bool = True
    enable_data: bool = True
    enable_tool_calls: bool = True
    enable_tool_results: bool = True
    enable_steps: bool = True
    enable_reasoning: bool = False  # Manual only
    enable_annotations: bool = False  # Manual only
    enable_files: bool = False  # Manual only
    
    @classmethod
    def minimal(cls) -> "AdapterConfig": ...
    
    @classmethod
    def tools_only(cls) -> "AdapterConfig": ...
    
    @classmethod
    def comprehensive(cls) -> "AdapterConfig": ...
    
    @contextmanager
    def pause_protocols(self, protocol_types: List[str]):
        """Temporarily disable specific protocol types"""
    
    @contextmanager
    def protocols(self, protocol_types: List[str]):
        """Enable only specific protocol types"""

### AISDKFactory

Factory class for manual protocol creation:

```python
class AISDKFactory:
    @staticmethod
    def create_reasoning_part(
        reasoning: str,
        confidence: Optional[float] = None
    ) -> ReasoningPartContent:
        """Create reasoning protocol part"""
    
    @staticmethod
    def create_tool_call_delta_part(
        tool_call_id: str,
        delta: Dict[str, Any],
        index: int = 0
    ) -> ToolCallDeltaPartContent:
        """Create tool call delta protocol part"""
    
    @staticmethod
    def create_message_annotation_part(
        annotations: List[Dict[str, Any]],
        metadata: Optional[Dict[str, Any]] = None
    ) -> MessageAnnotationPartContent:
        """Create message annotation protocol part"""
    
    @staticmethod
    def create_source_part(
        url: str,
        title: Optional[str] = None
    ) -> SourcePartContent:
        """Create source protocol part"""
    
    @staticmethod
    def create_file_part(
        data: str,
        mime_type: str
    ) -> FilePartContent:
        """Create file protocol part"""
```

### Factory Functions (Backward Compatibility)

Convenience functions for creating protocol parts:

```python
# Basic protocol creation
create_text_part(text: str) -> AISDKPartEmitter
create_data_part(data: Any) -> AISDKPartEmitter
create_error_part(error: str) -> AISDKPartEmitter

# Tool-related protocols
create_tool_call_part(tool_call_id: str, tool_name: str, args: Dict) -> AISDKPartEmitter
create_tool_result_part(tool_call_id: str, result: str) -> AISDKPartEmitter
create_tool_call_streaming_start_part(tool_call_id: str, tool_name: str) -> AISDKPartEmitter

# Step protocols
create_start_step_part(message_id: str) -> AISDKPartEmitter
create_finish_step_part(finish_reason: str, **kwargs) -> AISDKPartEmitter
create_finish_message_part(finish_reason: str, **kwargs) -> AISDKPartEmitter

# Advanced protocols
create_redacted_reasoning_part(data: str) -> AISDKPartEmitter
create_reasoning_signature_part(signature: str) -> AISDKPartEmitter

# Generic factory
create_ai_sdk_part(protocol_type: str, content: Any) -> AISDKPartEmitter
```

### Factory Instance

Convenience factory instance with simplified methods:

```python
from langchain_aisdk_adapter import factory

# Simplified factory methods
text_part = factory.text("Hello world")
data_part = factory.data(["key", "value"])
error_part = factory.error("Something went wrong")
reasoning_part = factory.reasoning("Let me think...")
source_part = factory.source(url="https://example.com", title="Example")
```

### Configuration Instances

Pre-configured instances for common use cases:

```python
from langchain_aisdk_adapter import default_config, safe_config

# Default configuration instance
default_config: AdapterConfig

# Thread-safe configuration instance
safe_config: ThreadSafeAdapterConfig
```

### ThreadSafeAdapterConfig

Thread-safe configuration wrapper for multi-user applications:

```python
class ThreadSafeAdapterConfig:
    def __init__(self, base_config: Optional[AdapterConfig] = None):
        """Initialize with optional base configuration"""
    
    def is_protocol_enabled(self, protocol_type: str) -> bool:
        """Check if protocol is enabled (thread-safe)"""
    
    @contextmanager
    def pause_protocols(self, protocol_types: List[str]):
        """Thread-safe context manager to temporarily disable protocols"""
    
    @contextmanager
    def protocols(self, protocol_types: List[str]):
        """Thread-safe context manager to enable only specific protocols"""
```

**Key Features:**
- **Thread Isolation**: Each request/thread gets isolated protocol state
- **Context Management**: Supports nested context managers
- **FastAPI Ready**: Perfect for multi-user web applications
- **Base Config Support**: Can wrap existing AdapterConfig instances
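
To illustrate the isolation guarantee, here is a sketch of two concurrent tasks sharing one `ThreadSafeAdapterConfig` but using different protocol contexts; the stream sources are placeholders:

```python
import asyncio
from langchain_aisdk_adapter import LangChainAdapter, ThreadSafeAdapterConfig

safe_config = ThreadSafeAdapterConfig()

async def text_only_task(stream):
    # This context restricts protocols for this task only
    with safe_config.protocols(['0']):
        async for chunk in LangChainAdapter.to_data_stream_response(stream, config=safe_config):
            print("A:", chunk)

async def tools_task(stream):
    # Runs concurrently with its own isolated protocol state
    with safe_config.protocols(['9', 'a']):
        async for chunk in LangChainAdapter.to_data_stream_response(stream, config=safe_config):
            print("B:", chunk)

# asyncio.gather(text_only_task(stream1), tools_task(stream2)) would run both
# tasks without either protocol context leaking into the other.
```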

## 🀝 Contributing

We welcome contributions! This project is still in alpha, so there's plenty of room for improvement:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes with tests
4. Ensure tests pass (`pytest tests/`)
5. Submit a pull request

Please feel free to:
- Report bugs and issues
- Suggest new features
- Improve documentation
- Add more examples
- Enhance test coverage

## πŸ“„ License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.

## πŸ™ Acknowledgments

We're grateful to:
- The LangChain team for their excellent framework
- The AI SDK community for the streaming protocol specification
- All contributors and users who help make this project better

## πŸ“ž Support

If you encounter any issues or have questions:

- πŸ“‹ [Open an issue](https://github.com/lointain/langchain_aisdk_adapter/issues)
- πŸ“– [Check the documentation](https://github.com/lointain/langchain_aisdk_adapter#readme)
- πŸ’¬ [Start a discussion](https://github.com/lointain/langchain_aisdk_adapter/discussions)

We appreciate your patience as we continue to improve this alpha release!

---

*Made with ❀️ for the LangChain and AI SDK communities*

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "langchain-aisdk-adapter",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4.0,>=3.9.0",
    "maintainer_email": "Egoalpha <yao_yuan@live.com>",
    "keywords": "langchain, ai-sdk, stream, protocol, adapter, langgraph, ai, llm, fastapi",
    "author": null,
    "author_email": "Egoalpha <yao_yuan@live.com>",
    "download_url": "https://files.pythonhosted.org/packages/0b/d2/aaba28827d636fe31072c56baae8b1767a413cb9d5e2bdd42d32904e228b/langchain_aisdk_adapter-0.0.1a1.tar.gz",
    "platform": null,
    "description": "# LangChain AI SDK Adapter\r\n\r\n[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)\r\n[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\r\n[![Development Status](https://img.shields.io/badge/status-alpha-orange.svg)](https://pypi.org/project/langchain-aisdk-adapter/)\r\n\r\n> **\u26a0\ufe0f Alpha Release Notice**: This project is currently in alpha stage. While we strive for stability and reliability, please be aware that APIs may change and some features might still be under development. We appreciate your patience and welcome any feedback to help us improve!\r\n\r\nA thoughtfully designed Python adapter that bridges LangChain/LangGraph applications with the AI SDK UI Stream Protocol. This library aims to make it easier for developers to integrate LangChain's powerful capabilities with modern streaming interfaces.\r\n\r\n## \u2728 Features\r\n\r\nWe've tried to make this adapter as comprehensive and user-friendly as possible:\r\n\r\n- **\ud83d\udd04 Comprehensive Protocol Support**: Supports 15+ AI SDK protocols including text streaming, tool interactions, step management, and data handling\r\n- **\u2699\ufe0f Intelligent Configuration**: Flexible `AdapterConfig` system allows you to control exactly which protocols are generated\r\n- **\u23f8\ufe0f Protocol Control**: Advanced pause/resume functionality for temporarily disabling specific protocols during execution\r\n- **\ud83d\udd12 Thread-Safe Multi-User Support**: `ThreadSafeAdapterConfig` provides isolated protocol state for concurrent requests in web applications\r\n- **\ud83c\udf9b\ufe0f Dynamic Protocol Control**: Context managers for temporarily enabling/disabling specific protocols during execution\r\n- **\ud83d\udee0\ufe0f Rich Tool Support**: Seamless integration with LangChain tools, agents, and function calling\r\n- **\ud83d\udd0d Extended Step Detection**: Enhanced recognition of LangChain agents, chains, and LangGraph components for better step tracking\r\n- **\ud83d\udcca Usage Tracking**: Built-in statistics and monitoring for stream processing\r\n- **\ud83d\udd12 Type Safety**: Complete Python type hints and Pydantic validation\r\n- **\ud83c\udfed Factory Methods**: Convenient factory methods for manual protocol generation when needed\r\n- **\ud83c\udf10 Web Framework Ready**: Optimized for FastAPI, Flask, and other web frameworks\r\n- **\ud83d\udd0c Extensible Design**: Easy to extend and customize for specific use cases\r\n\r\n## \ud83d\ude80 Quick Start\r\n\r\nWe hope this gets you up and running quickly:\r\n\r\n### Installation\r\n\r\n```bash\r\n# Basic installation\r\npip install langchain-aisdk-adapter\r\n\r\n# With examples (includes LangChain, LangGraph, OpenAI)\r\npip install langchain-aisdk-adapter[examples]\r\n\r\n# With web framework support (includes FastAPI, Uvicorn)\r\npip install langchain-aisdk-adapter[web]\r\n\r\n# For development (includes testing and linting tools)\r\npip install langchain-aisdk-adapter[dev]\r\n```\r\n\r\n### Basic Usage\r\n\r\nHere's a simple example to get you started:\r\n\r\n```python\r\nfrom langchain_openai import ChatOpenAI\r\nfrom langchain_core.messages import HumanMessage\r\nfrom langchain_aisdk_adapter import LangChainAdapter\r\n\r\n# Create your LangChain model\r\nllm = ChatOpenAI(model=\"gpt-3.5-turbo\", streaming=True)\r\n\r\n# Create a stream\r\nstream = llm.astream([HumanMessage(content=\"Hello, world!\")])\r\n\r\n# 
Convert to AI SDK format - it's that simple!\r\nai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)\r\n\r\n# Use in your application\r\nasync for chunk in ai_sdk_stream:\r\n    print(chunk, end=\"\", flush=True)\r\n```\r\n\r\n### Configuration Options\r\n\r\nWe've included several preset configurations to make common use cases easier:\r\n\r\n```python\r\nfrom langchain_aisdk_adapter import LangChainAdapter, AdapterConfig, ThreadSafeAdapterConfig\r\n\r\n# Minimal output - just essential text and data\r\nconfig = AdapterConfig.minimal()\r\n\r\n# Focus on tool interactions\r\nconfig = AdapterConfig.tools_only()\r\n\r\n# Everything enabled (default)\r\nconfig = AdapterConfig.comprehensive()\r\n\r\n# Custom configuration\r\nconfig = AdapterConfig(\r\n    enable_text=True,\r\n    enable_data=True,\r\n    enable_tool_calls=True,\r\n    enable_steps=False,  # Disable step tracking\r\n    enable_reasoning=False  # Disable reasoning output\r\n)\r\n\r\nstream = LangChainAdapter.to_data_stream_response(your_stream, config=config)\r\n\r\n# Protocol pause/resume functionality\r\nwith config.pause_protocols(['0', '2']):  # Temporarily disable text and data protocols\r\n    # During this block, text and data protocols won't be generated\r\n    restricted_stream = LangChainAdapter.to_data_stream_response(some_stream, config=config)\r\n    async for chunk in restricted_stream:\r\n        # Only tool calls, results, and steps will be emitted\r\n        print(chunk)\r\n# Protocols are automatically restored after the context\r\n\r\n# Thread-safe configuration for multi-user applications\r\nsafe_config = ThreadSafeAdapterConfig()\r\n\r\n# Each request gets isolated protocol state\r\nwith safe_config.protocols(['0', '9', 'a']):  # Only text, tool calls, and results\r\n    stream = LangChainAdapter.to_data_stream_response(your_stream, config=safe_config)\r\n    # This configuration won't affect other concurrent requests\r\n```\r\n\r\n## \ud83d\udccb Protocol Support Status\r\n\r\nWe've organized the supported protocols into three categories to help you understand what's available and when they're triggered:\r\n\r\n### \ud83d\udfe2 Automatically Supported Protocols\r\n\r\nThese protocols are generated automatically from LangChain/LangGraph events with specific trigger conditions:\r\n\r\n#### **`0:` (Text Protocol)**\r\n**Trigger Condition**: Generated when LLM produces streaming text content\r\n**Format**: `0:\"streaming text content\"`\r\n**When it occurs**: \r\n- During `llm.astream()` calls\r\n- When LangGraph nodes produce text output\r\n- Any streaming text from language models\r\n\r\n#### **`2:` (Data Protocol)**\r\n**Trigger Condition**: Generated for structured data and metadata\r\n**Format**: `2:[{\"key\":\"value\"}]`\r\n**When it occurs**:\r\n- LangGraph node metadata and intermediate results\r\n- Tool execution metadata\r\n- Custom data from LangChain callbacks\r\n\r\n#### **`9:` (Tool Call Protocol)**\r\n**Trigger Condition**: Generated when tools are invoked\r\n**Format**: `9:{\"toolCallId\":\"call_123\",\"toolName\":\"search\",\"args\":{\"query\":\"test\"}}`\r\n**When it occurs**:\r\n- LangChain agent tool invocations\r\n- LangGraph tool node executions\r\n- Function calling in chat models\r\n\r\n#### **`a:` (Tool Result Protocol)**\r\n**Trigger Condition**: Generated when tool execution completes\r\n**Format**: `a:{\"toolCallId\":\"call_123\",\"result\":\"tool output\"}`\r\n**When it occurs**:\r\n- After successful tool execution\r\n- Following any `9:` protocol\r\n- Both successful 
and error results\r\n\r\n#### **`b:` (Tool Call Stream Start Protocol)**\r\n**Trigger Condition**: Generated at the beginning of streaming tool calls\r\n**Format**: `b:{\"toolCallId\":\"call_123\",\"toolName\":\"search\"}`\r\n**When it occurs**:\r\n- Before tool parameter streaming begins\r\n- Only for tools that support streaming parameters\r\n\r\n#### **`d:` (Finish Message Protocol)** \u26a0\ufe0f **LangGraph Only**\r\n**Trigger Condition**: **Only generated in LangGraph workflows** when a message is completed\r\n**Format**: `d:{\"finishReason\":\"stop\",\"usage\":{\"promptTokens\":10,\"completionTokens\":20}}`\r\n**When it occurs**:\r\n- **LangGraph workflow message completion**\r\n- **NOT generated in basic LangChain streams**\r\n- End of LangGraph node execution\r\n- Contains usage statistics when available\r\n\r\n#### **`e:` (Finish Step Protocol)** \ud83d\udd04 **Enhanced Support**\r\n**Trigger Condition**: Generated when major workflow components complete execution\r\n**Format**: `e:{\"stepId\":\"step_123\",\"finishReason\":\"completed\"}`\r\n**When it occurs**:\r\n- **LangGraph workflow step completion** (primary use case)\r\n- **LangChain agent execution** (AgentExecutor, ReActAgent, ChatAgent, etc.)\r\n- **Chain-based workflows** (LLMChain, SequentialChain, RouterChain, etc.)\r\n- **Components with specific tags** (agent, chain, executor, workflow, multi_agent)\r\n- End of multi-step processes and reasoning steps\r\n\r\n#### **`f:` (Start Step Protocol)** \ud83d\udd04 **Enhanced Support**\r\n**Trigger Condition**: Generated when major workflow components begin execution\r\n**Format**: `f:{\"stepId\":\"step_123\",\"stepType\":\"agent_action\"}`\r\n**When it occurs**:\r\n- **LangGraph workflow step initiation** (primary use case)\r\n- **LangChain agent execution** (AgentExecutor, ReActAgent, ChatAgent, etc.)\r\n- **Chain-based workflows** (LLMChain, SequentialChain, RouterChain, etc.)\r\n- **LangGraph components** (LangGraph, CompiledGraph, StateGraph, etc.)\r\n- **Components with specific tags** (agent, chain, executor, workflow, multi_agent, langgraph, graph)\r\n- Beginning of multi-step processes and reasoning steps\r\n\r\n> \ud83d\udca1 **Important Notes**: \r\n> - Protocols `d:`, `e:`, and `f:` are **LangGraph-specific** and will not appear in basic LangChain streams\r\n> - All automatically supported protocols can be individually enabled or disabled through `AdapterConfig`\r\n> - The exact format may vary based on the underlying LangChain/LangGraph event structure\r\n\r\n### \ud83d\udfe1 Manual Support Only Protocols\r\n\r\nThese protocols require manual generation using our factory methods:\r\n\r\n#### **`g:` (Reasoning Protocol)** \u26a0\ufe0f **Manual Support Only**\r\n**Purpose**: Transmits AI reasoning process and thought chains\r\n**Format**: `g:{\"reasoning\":\"Let me think about this step by step...\",\"confidence\":0.85}`\r\n**Manual Creation**:\r\n```python\r\nfrom langchain_aisdk_adapter import AISDKFactory\r\n\r\n# Create reasoning protocol\r\nreasoning_part = AISDKFactory.create_reasoning_part(\r\n    reasoning=\"Let me analyze the user's request...\",\r\n    confidence=0.9\r\n)\r\nprint(f\"g:{reasoning_part.model_dump_json()}\")\r\n```\r\n**Use Cases**: Chain-of-thought reasoning, decision explanations, confidence scoring\r\n\r\n#### **`c:` (Tool Call Delta Protocol)** \u26a0\ufe0f **Manual Support Only**\r\n**Purpose**: Streams incremental updates during tool call execution\r\n**Format**: 
`c:{\"toolCallId\":\"call_123\",\"delta\":{\"function\":{\"arguments\":\"{\\\"query\\\":\\\"hello\\\"}\"}},\"index\":0}`\r\n**Manual Creation**:\r\n```python\r\nfrom langchain_aisdk_adapter import AISDKFactory\r\n\r\n# Create tool call delta\r\ndelta_part = AISDKFactory.create_tool_call_delta_part(\r\n    tool_call_id=\"call_123\",\r\n    delta={\"function\": {\"arguments\": '{\"query\":\"hello\"}'}},\r\n    index=0\r\n)\r\nprint(f\"c:{delta_part.model_dump_json()}\")\r\n```\r\n**Use Cases**: Real-time tool execution feedback, streaming function calls\r\n\r\n#### **`8:` (Message Annotations Protocol)** \u26a0\ufe0f **Manual Support Only**\r\n**Purpose**: Adds metadata and annotations to messages\r\n**Format**: `8:{\"annotations\":[{\"type\":\"citation\",\"text\":\"Source: Wikipedia\"}],\"metadata\":{\"confidence\":0.95}}`\r\n**Manual Creation**:\r\n```python\r\nfrom langchain_aisdk_adapter import AISDKFactory\r\n\r\n# Create message annotation\r\nannotation_part = AISDKFactory.create_message_annotation_part(\r\n    annotations=[{\"type\": \"citation\", \"text\": \"Source: Wikipedia\"}],\r\n    metadata={\"confidence\": 0.95}\r\n)\r\nprint(f\"8:{annotation_part.model_dump_json()}\")\r\n```\r\n**Use Cases**: Source citations, confidence scores, content metadata\r\n\r\n#### **`h:` (Source Protocol)** \u2705 **Manual Support**\r\n**Manual Creation**: Use `create_source_part(url, title=None)` or `AISDKFactory.source(url, title=None)`\r\n**Format**: `h:{\"url\":\"https://example.com\",\"title\":\"Document Title\"}`\r\n**Use Cases**: Document references, citation tracking, source attribution\r\n\r\n#### **`i:` (Redacted Reasoning Protocol)** \u2705 **Manual Support**\r\n**Manual Creation**: Use `create_redacted_reasoning_part(data)` or `AISDKFactory.redacted_reasoning(data)`\r\n**Format**: `i:{\"data\":\"[REDACTED] reasoning content\"}`\r\n**Use Cases**: Privacy-compliant reasoning output, content filtering\r\n\r\n#### **`j:` (Reasoning Signature Protocol)** \u2705 **Manual Support**\r\n**Manual Creation**: Use `create_reasoning_signature_part(signature)` or `AISDKFactory.reasoning_signature(signature)`\r\n**Format**: `j:{\"signature\":\"signature_abc123\"}`\r\n**Use Cases**: Reasoning verification, model signatures, authenticity tracking\r\n\r\n#### **`k:` (File Protocol)** \u2705 **Manual Support**\r\n**Manual Creation**: Use `create_file_part(data, mime_type)` or `AISDKFactory.file(data, mime_type)`\r\n**Format**: `k:{\"data\":\"base64_encoded_data\",\"mimeType\":\"image/png\"}`\r\n**Use Cases**: File attachments, binary data transmission, document sharing\r\n\r\n#### **`h:` (Source Protocol)** \u2705 **Manual Support**\r\n**Manual Creation**: Use `create_source_part(url, title=None)` or `AISDKFactory.source(url, title=None)`\r\n**Format**: `h:{\"url\":\"https://example.com\",\"title\":\"Document Title\"}`\r\n**Use Cases**: Document references, citation tracking, source attribution\r\n\r\n#### **`i:` (Redacted Reasoning Protocol)** \u2705 **Manual Support**\r\n**Manual Creation**: Use `create_redacted_reasoning_part(data)` or `AISDKFactory.redacted_reasoning(data)`\r\n**Format**: `i:{\"data\":\"[REDACTED] reasoning content\"}`\r\n**Use Cases**: Privacy-compliant reasoning output, content filtering\r\n\r\n#### **`j:` (Reasoning Signature Protocol)** \u2705 **Manual Support**\r\n**Manual Creation**: Use `create_reasoning_signature_part(signature)` or `AISDKFactory.reasoning_signature(signature)`\r\n**Format**: `j:{\"signature\":\"signature_abc123\"}`\r\n**Use Cases**: Reasoning verification, 
model signatures, authenticity tracking\r\n\r\n#### **`k:` (File Protocol)** \u2705 **Manual Support**\r\n**Manual Creation**: Use `create_file_part(data, mime_type)` or `AISDKFactory.file(data, mime_type)`\r\n**Format**: `k:{\"data\":\"base64_encoded_data\",\"mimeType\":\"image/png\"}`\r\n**Use Cases**: File attachments, binary data transmission, document sharing\r\n\r\n### \ud83d\udd34 Currently Unsupported Protocols\r\n\r\nWe're working on these, but they're not yet available:\r\n\r\n- **`1:` (Function Call)**: Different from LangChain's tool system architecture\r\n- **`4:` (Tool Call Stream)**: Requires streaming parameter support not available in current LangChain versions\r\n- **`5:` (Tool Call Stream Part)**: Same limitation as above\r\n- **`6:` (Tool Call Stream Delta)**: Same limitation as above\r\n- **`7:` (Tool Call Stream Finish)**: Same limitation as above\r\n\r\n## \ud83d\udee0\ufe0f Manual Protocol Generation\r\n\r\nFor protocols that need manual implementation, we've provided convenient factory methods:\r\n\r\n```python\r\nfrom langchain_aisdk_adapter.factory import AISDKFactory\r\n\r\n# Create factory instance\r\nfactory = AISDKFactory()\r\n\r\n# Generate reasoning protocol\r\nreasoning_part = factory.reasoning(\r\n    content=\"Let me think about this step by step...\"\r\n)\r\n\r\n# Generate source protocol\r\nsource_part = factory.source(\r\n    url=\"https://example.com/document\",\r\n    title=\"Important Document\"\r\n)\r\n\r\n# Generate redacted reasoning protocol\r\nredacted_part = factory.redacted_reasoning(\r\n    data=\"[REDACTED] sensitive reasoning content\"\r\n)\r\n\r\n# Generate reasoning signature protocol\r\nsignature_part = factory.reasoning_signature(\r\n    signature=\"model_signature_abc123\"\r\n)\r\n\r\n# Generate file protocol\r\nfile_part = factory.file(\r\n    data=\"iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8/5+hHgAHggJ/PchI7wAAAABJRU5ErkJggg==\",\r\n    mime_type=\"image/png\"\r\n)\r\n\r\n# Generate message annotation\r\nannotation_part = factory.annotation(\r\n    message_id=\"msg_123\",\r\n    annotation_type=\"confidence\",\r\n    value={\"score\": 0.95}\r\n)\r\n\r\n# Generate tool call delta (for streaming parameters)\r\ntool_delta_part = factory.tool_call_delta(\r\n    tool_call_id=\"call_123\",\r\n    name=\"search\",\r\n    args_delta='{\"query\": \"artificial intel'  # partial JSON\r\n)\r\n\r\n# Use factory instance for quick protocol creation\r\nfrom langchain_aisdk_adapter import factory\r\n\r\n# Simplified factory methods\r\ntext_part = factory.text(\"Hello from LangChain!\")\r\ndata_part = factory.data({\"temperature\": 0.7, \"max_tokens\": 100})\r\nerror_part = factory.error(\"Connection timeout\")\r\nreasoning_part = factory.reasoning(\"Based on the context, I should...\")\r\nsource_part = factory.source(\r\n    url=\"https://docs.langchain.com\",\r\n    title=\"LangChain Documentation\"\r\n)\r\n\r\n# Use in streaming responses\r\nasync def stream_with_factory():\r\n    yield text_part\r\n    yield reasoning_part\r\n    yield data_part\r\n```\r\n\r\n**Why Manual Implementation?**\r\n\r\nWe've had to make some protocols manual due to technical limitations:\r\n- **Reasoning content**: Different LLMs use varying reasoning formats that can't be automatically standardized\r\n- **Tool call deltas**: LangChain's tool system doesn't provide streaming parameter generation\r\n- **Message annotations**: LangChain lacks a standardized event system for message metadata\r\n- **Source tracking**: Document source information 
requires explicit application-level implementation\r\n- **Content filtering**: Redacted reasoning needs custom privacy and security policies\r\n- **File handling**: Binary file processing and encoding varies significantly across different implementations\r\n\r\n## \ud83c\udf10 Web Integration Examples\r\n\r\nWe've included comprehensive examples for web frameworks:\r\n\r\n### FastAPI Integration\r\n\r\n```python\r\nfrom fastapi import FastAPI\r\nfrom fastapi.responses import StreamingResponse\r\nfrom langchain_aisdk_adapter import LangChainAdapter\r\n\r\napp = FastAPI()\r\n\r\n@app.post(\"/chat\")\r\nasync def chat(message: str):\r\n    # Your LangChain setup here\r\n    stream = llm.astream([HumanMessage(content=message)])\r\n    \r\n    return StreamingResponse(\r\n        LangChainAdapter.to_data_stream_response(stream),\r\n        media_type=\"text/plain\"\r\n    )\r\n```\r\n\r\n### Multi-turn Conversations\r\n\r\nHandle multi-turn conversations with message history:\r\n\r\n```python\r\nfrom langchain_core.messages import HumanMessage, AIMessage\r\nfrom langchain_aisdk_adapter import LangChainAdapter\r\n\r\nasync def multi_turn_chat():\r\n    conversation_history = []\r\n    \r\n    # First turn\r\n    user_input = \"What is machine learning?\"\r\n    conversation_history.append(HumanMessage(content=user_input))\r\n    \r\n    response_content = \"\"\r\n    stream = llm.astream(conversation_history)\r\n    ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)\r\n    \r\n    async for chunk in ai_sdk_stream:\r\n        if chunk.startswith('0:'):\r\n            # Extract text content from protocol\r\n            text_content = chunk[2:].strip('\"')\r\n            response_content += text_content\r\n        yield chunk\r\n    \r\n    conversation_history.append(AIMessage(content=response_content))\r\n    \r\n    # Second turn\r\n    user_input = \"Can you give me an example?\"\r\n    conversation_history.append(HumanMessage(content=user_input))\r\n    \r\n    stream = llm.astream(conversation_history)\r\n    ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)\r\n    \r\n    async for chunk in ai_sdk_stream:\r\n        yield chunk\r\n```\r\n\r\nFor complete examples including agent integration, tool usage, and error handling, please check the `web/` directory.\r\n\r\n## \ud83e\uddea Usage Examples\r\n\r\nHere are comprehensive examples showing different ways to use the adapter:\r\n\r\n### Basic LangChain Streaming\r\n\r\n```python\r\nfrom langchain_openai import ChatOpenAI\r\nfrom langchain_core.messages import HumanMessage\r\nfrom langchain_aisdk_adapter import LangChainAdapter\r\n\r\n# Simple streaming example\r\nllm = ChatOpenAI(model=\"gpt-3.5-turbo\", streaming=True)\r\nstream = llm.astream([HumanMessage(content=\"Tell me a joke\")])\r\n\r\n# Convert to AI SDK format\r\nai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)\r\n\r\n# Process the stream\r\nasync for chunk in ai_sdk_stream:\r\n    print(chunk, end=\"\", flush=True)\r\n    # Output: 0:\"Why did the chicken cross the road?\"\r\n    #         0:\" To get to the other side!\"\r\n```\r\n\r\n### LangChain with Tools\r\n\r\n```python\r\nfrom langchain_openai import ChatOpenAI\r\nfrom langchain_core.tools import tool\r\nfrom langchain.agents import create_openai_functions_agent, AgentExecutor\r\nfrom langchain_core.prompts import ChatPromptTemplate\r\nfrom langchain_aisdk_adapter import LangChainAdapter\r\n\r\n# Define a tool\r\n@tool\r\ndef get_weather(city: str) -> str:\r\n    \"\"\"Get 
weather information for a city.\"\"\"\r\n    return f\"The weather in {city} is sunny and 25\u00b0C\"\r\n\r\n# Create agent\r\nllm = ChatOpenAI(model=\"gpt-3.5-turbo\", streaming=True)\r\nprompt = ChatPromptTemplate.from_messages([\r\n    (\"system\", \"You are a helpful assistant.\"),\r\n    (\"human\", \"{input}\"),\r\n    (\"placeholder\", \"{agent_scratchpad}\")\r\n])\r\n\r\nagent = create_openai_functions_agent(llm, [get_weather], prompt)\r\nagent_executor = AgentExecutor(agent=agent, tools=[get_weather])\r\n\r\n# Stream with tools\r\nstream = agent_executor.astream({\"input\": \"What's the weather in Paris?\"})\r\nai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)\r\n\r\nasync for chunk in ai_sdk_stream:\r\n    print(chunk)\r\n    # Output includes:\r\n    # 9:{\"toolCallId\":\"call_123\",\"toolName\":\"get_weather\",\"args\":{\"city\":\"Paris\"}}\r\n    # a:{\"toolCallId\":\"call_123\",\"result\":\"The weather in Paris is sunny and 25\u00b0C\"}\r\n    # 0:\"The weather in Paris is sunny and 25\u00b0C\"\r\n```\r\n\r\n### LangGraph Workflow (with Step Protocols)\r\n\r\n```python\r\nfrom langgraph import StateGraph\r\nfrom langchain_openai import ChatOpenAI\r\nfrom langchain_core.messages import HumanMessage, AIMessage\r\nfrom langchain_aisdk_adapter import LangChainAdapter\r\nfrom typing import TypedDict, List\r\n\r\nclass State(TypedDict):\r\n    messages: List[HumanMessage | AIMessage]\r\n\r\ndef chat_node(state: State):\r\n    llm = ChatOpenAI(model=\"gpt-3.5-turbo\", streaming=True)\r\n    response = llm.invoke(state[\"messages\"])\r\n    return {\"messages\": state[\"messages\"] + [response]}\r\n\r\n# Create workflow\r\nworkflow = StateGraph(State)\r\nworkflow.add_node(\"chat\", chat_node)\r\nworkflow.set_entry_point(\"chat\")\r\nworkflow.set_finish_point(\"chat\")\r\n\r\napp = workflow.compile()\r\n\r\n# Stream LangGraph workflow\r\nstream = app.astream({\"messages\": [HumanMessage(content=\"Hello!\")]})\r\nai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)\r\n\r\nasync for chunk in ai_sdk_stream:\r\n    print(chunk)\r\n    # Output includes LangGraph-specific protocols:\r\n    # f:{\"stepId\":\"step_123\",\"stepType\":\"node_execution\"}\r\n    # 0:\"Hello! 
### Custom Configuration Examples

```python
from langchain_aisdk_adapter import LangChainAdapter, AdapterConfig

# Only text output
config = AdapterConfig(
    enable_text=True,
    enable_data=False,
    enable_tool_calls=False,
    enable_tool_results=False,
    enable_steps=False
)

# Only tool interactions
config = AdapterConfig.tools_only()

# Everything except steps (for basic LangChain)
config = AdapterConfig(
    enable_text=True,
    enable_data=True,
    enable_tool_calls=True,
    enable_tool_results=True,
    enable_steps=False  # Disable LangGraph-specific protocols
)

stream = LangChainAdapter.to_data_stream_response(your_stream, config=config)
```
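Instead of setting flags by hand, the preset constructors documented in the API reference below can be used. A short sketch; exactly which flags `minimal()` and `comprehensive()` set is an assumption here, so check your installed version:

```python
from langchain_aisdk_adapter import LangChainAdapter, AdapterConfig

# Preset constructors from the API reference below; the comments describe
# assumed behaviour, not a verified specification.
minimal = AdapterConfig.minimal()            # presumably text-focused output
tools = AdapterConfig.tools_only()           # tool calls and results only
everything = AdapterConfig.comprehensive()   # all automatic protocols

stream = LangChainAdapter.to_data_stream_response(your_stream, config=minimal)
```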
### Thread-Safe Configuration for Multi-User Applications

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain_core.messages import HumanMessage
from langchain_aisdk_adapter import LangChainAdapter, ThreadSafeAdapterConfig

# Create thread-safe configuration for FastAPI
safe_config = ThreadSafeAdapterConfig()

app = FastAPI()

@app.post("/chat")
async def chat(message: str):
    """Each request gets isolated protocol state"""
    stream = llm.astream([HumanMessage(content=message)])

    # Thread-safe: each request has isolated configuration
    return StreamingResponse(
        LangChainAdapter.to_data_stream_response(stream, config=safe_config),
        media_type="text/plain"
    )

@app.post("/chat-minimal")
async def chat_minimal(message: str):
    """Temporarily disable certain protocols for this request only"""
    stream = llm.astream([HumanMessage(content=message)])

    # Use context manager to temporarily modify protocols
    with safe_config.pause_protocols(['2', '9', 'a']):  # Disable data and tools
        return StreamingResponse(
            LangChainAdapter.to_data_stream_response(stream, config=safe_config),
            media_type="text/plain"
        )
    # Protocols automatically restored after context

@app.post("/chat-selective")
async def chat_selective(message: str):
    """Enable only specific protocols for this request"""
    stream = llm.astream([HumanMessage(content=message)])

    # Enable only text and data protocols
    with safe_config.protocols(['0', '2']):
        return StreamingResponse(
            LangChainAdapter.to_data_stream_response(stream, config=safe_config),
            media_type="text/plain"
        )
```

### Protocol Context Management

```python
from langchain_aisdk_adapter import LangChainAdapter, AdapterConfig, ThreadSafeAdapterConfig

# Regular config with context management
config = AdapterConfig()

# Temporarily disable specific protocols
with config.pause_protocols(['0', '2']):  # Pause text and data
    # Only tool calls and results will be generated
    stream = LangChainAdapter.to_data_stream_response(some_stream, config=config)
    async for chunk in stream:
        print(chunk)  # No text or data protocols
# Protocols automatically restored

# Enable only specific protocols
with config.protocols(['0', '9', 'a']):  # Only text, tool calls, and results
    stream = LangChainAdapter.to_data_stream_response(some_stream, config=config)
    async for chunk in stream:
        print(chunk)  # Only specified protocols

# Thread-safe version for concurrent applications
safe_config = ThreadSafeAdapterConfig()

# Each context is isolated per request/thread
with safe_config.protocols(['0']):  # Text only
    # This won't affect other concurrent requests
    stream = LangChainAdapter.to_data_stream_response(stream1, config=safe_config)

# Nested contexts are supported
with safe_config.pause_protocols(['2']):
    with safe_config.protocols(['0', '9']):
        # Only text and tool calls; data is paused
        stream = LangChainAdapter.to_data_stream_response(stream2, config=safe_config)
```

### Error Handling

```python
import asyncio

from langchain_core.messages import HumanMessage
from langchain_aisdk_adapter import LangChainAdapter

async def safe_streaming():
    try:
        stream = llm.astream([HumanMessage(content="Hello")])
        ai_sdk_stream = LangChainAdapter.to_data_stream_response(stream)

        async for chunk in ai_sdk_stream:
            print(chunk, end="", flush=True)

    except Exception as e:
        print(f"Error during streaming: {e}")
        # Handle errors appropriately

asyncio.run(safe_streaming())
```
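The pattern above logs errors server-side, but the client never learns that the stream failed. One option is to emit an error part before closing; the `3:` prefix below follows the AI SDK stream protocol's error-part convention, which is an assumption here since this README does not show it:

```python
import json

async def stream_with_error_part(ai_sdk_stream):
    """Yield adapter chunks; on failure, emit an error protocol line.

    The `3:` error prefix is assumed from the AI SDK stream protocol
    convention -- adjust if your protocol version differs.
    """
    try:
        async for chunk in ai_sdk_stream:
            yield chunk
    except Exception as exc:
        yield f"3:{json.dumps(str(exc))}\n"
```

Alternatively, the `create_error_part` factory function listed in the API reference below serves the same purpose.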
\"AdapterConfig\": ...\r\n    \r\n    @classmethod\r\n    def comprehensive(cls) -> \"AdapterConfig\": ...\r\n    \r\n    @contextmanager\r\n    def pause_protocols(self, protocol_types: List[str]):\r\n        \"\"\"Temporarily disable specific protocol types\"\"\"\r\n    \r\n    @contextmanager\r\n    def protocols(self, protocol_types: List[str]):\r\n        \"\"\"Enable only specific protocol types\"\"\"\r\n\r\n### AISDKFactory\r\n\r\nFactory class for manual protocol creation:\r\n\r\n```python\r\nclass AISDKFactory:\r\n    @staticmethod\r\n    def create_reasoning_part(\r\n        reasoning: str,\r\n        confidence: Optional[float] = None\r\n    ) -> ReasoningPartContent:\r\n        \"\"\"Create reasoning protocol part\"\"\"\r\n    \r\n    @staticmethod\r\n    def create_tool_call_delta_part(\r\n        tool_call_id: str,\r\n        delta: Dict[str, Any],\r\n        index: int = 0\r\n    ) -> ToolCallDeltaPartContent:\r\n        \"\"\"Create tool call delta protocol part\"\"\"\r\n    \r\n    @staticmethod\r\n    def create_message_annotation_part(\r\n        annotations: List[Dict[str, Any]],\r\n        metadata: Optional[Dict[str, Any]] = None\r\n    ) -> MessageAnnotationPartContent:\r\n        \"\"\"Create message annotation protocol part\"\"\"\r\n    \r\n    @staticmethod\r\n    def create_source_part(\r\n        source_id: str,\r\n        source_type: str,\r\n        content: Optional[str] = None,\r\n        metadata: Optional[Dict[str, Any]] = None\r\n    ) -> SourcePartContent:\r\n        \"\"\"Create source protocol part\"\"\"\r\n    \r\n    @staticmethod\r\n    def create_file_part(\r\n        file_id: str,\r\n        file_name: str,\r\n        file_type: str,\r\n        content: Optional[str] = None,\r\n        metadata: Optional[Dict[str, Any]] = None\r\n    ) -> FilePartContent:\r\n        \"\"\"Create file protocol part\"\"\"\r\n\r\n### Factory Functions (Backward Compatibility)\r\n\r\nConvenience functions for creating protocol parts:\r\n\r\n```python\r\n# Basic protocol creation\r\ncreate_text_part(text: str) -> AISDKPartEmitter\r\ncreate_data_part(data: Any) -> AISDKPartEmitter\r\ncreate_error_part(error: str) -> AISDKPartEmitter\r\n\r\n# Tool-related protocols\r\ncreate_tool_call_part(tool_call_id: str, tool_name: str, args: Dict) -> AISDKPartEmitter\r\ncreate_tool_result_part(tool_call_id: str, result: str) -> AISDKPartEmitter\r\ncreate_tool_call_streaming_start_part(tool_call_id: str, tool_name: str) -> AISDKPartEmitter\r\n\r\n# Step protocols\r\ncreate_start_step_part(message_id: str) -> AISDKPartEmitter\r\ncreate_finish_step_part(finish_reason: str, **kwargs) -> AISDKPartEmitter\r\ncreate_finish_message_part(finish_reason: str, **kwargs) -> AISDKPartEmitter\r\n\r\n# Advanced protocols\r\ncreate_redacted_reasoning_part(reasoning: str) -> AISDKPartEmitter\r\ncreate_reasoning_signature_part(signature: str) -> AISDKPartEmitter\r\n\r\n# Generic factory\r\ncreate_ai_sdk_part(protocol_type: str, content: Any) -> AISDKPartEmitter\r\n```\r\n\r\n### Factory Instance\r\n\r\nConvenience factory instance with simplified methods:\r\n\r\n```python\r\nfrom langchain_aisdk_adapter import factory\r\n\r\n# Simplified factory methods\r\ntext_part = factory.text(\"Hello world\")\r\ndata_part = factory.data([\"key\", \"value\"])\r\nerror_part = factory.error(\"Something went wrong\")\r\nreasoning_part = factory.reasoning(\"Let me think...\")\r\nsource_part = factory.source(url=\"https://example.com\", title=\"Example\")\r\n```\r\n\r\n### Configuration Instances\r\n\r\nPre-configured 
### ThreadSafeAdapterConfig

Thread-safe configuration wrapper for multi-user applications:

```python
class ThreadSafeAdapterConfig:
    def __init__(self, base_config: Optional[AdapterConfig] = None):
        """Initialize with optional base configuration"""

    def is_protocol_enabled(self, protocol_type: str) -> bool:
        """Check if protocol is enabled (thread-safe)"""

    @contextmanager
    def pause_protocols(self, protocol_types: List[str]):
        """Thread-safe context manager to temporarily disable protocols"""

    @contextmanager
    def protocols(self, protocol_types: List[str]):
        """Thread-safe context manager to enable only specific protocols"""
```

**Key Features:**
- **Thread Isolation**: Each request/thread gets isolated protocol state
- **Context Management**: Supports nested context managers
- **FastAPI Ready**: Perfect for multi-user web applications
- **Base Config Support**: Can wrap existing AdapterConfig instances

## 🤝 Contributing

We welcome contributions! This project is still in alpha, so there's plenty of room for improvement:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes with tests
4. Ensure tests pass (`pytest tests/`)
5. Submit a pull request

Please feel free to:
- Report bugs and issues
- Suggest new features
- Improve documentation
- Add more examples
- Enhance test coverage

## 📄 License

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

We're grateful to:
- The LangChain team for their excellent framework
- The AI SDK community for the streaming protocol specification
- All contributors and users who help make this project better

## 📞 Support

If you encounter any issues or have questions:

- 📋 [Open an issue](https://github.com/lointain/langchain_aisdk_adapter/issues)
- 📖 [Check the documentation](https://github.com/lointain/langchain_aisdk_adapter#readme)
- 💬 [Start a discussion](https://github.com/lointain/langchain_aisdk_adapter/discussions)

We appreciate your patience as we continue to improve this alpha release!

---

*Made with ❤️ for the LangChain and AI SDK communities*
    "bugtrack_url": null,
    "license": "Apache-2.0",
    "summary": "A Python package that converts LangChain/LangGraph event streams to AI SDK UI Stream Protocol format",
    "version": "0.0.1a1",
    "project_urls": {
        "Bug Tracker": "https://github.com/lointain/langchain_aisdk_adapter/issues",
        "Documentation": "https://github.com/lointain/langchain_aisdk_adapter#readme",
        "Homepage": "https://github.com/lointain/langchain_aisdk_adapter",
        "Repository": "https://github.com/lointain/langchain_aisdk_adapter"
    },
    "split_keywords": [
        "langchain",
        " ai-sdk",
        " stream",
        " protocol",
        " adapter",
        " langgraph",
        " ai",
        " llm",
        " fastapi"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "2460d9e22504f183154d3e73c751b3b837c3ae6177a83410ffe8b497dcce4baa",
                "md5": "dc1c76b68a7cee866680294bdbf341fc",
                "sha256": "a905dc37ab28bf7092d63dfca1e4333e162fa69fe79379118cef32e5859b4389"
            },
            "downloads": -1,
            "filename": "langchain_aisdk_adapter-0.0.1a1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "dc1c76b68a7cee866680294bdbf341fc",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<4.0,>=3.9.0",
            "size": 26552,
            "upload_time": "2025-07-16T06:35:44",
            "upload_time_iso_8601": "2025-07-16T06:35:44.708795Z",
            "url": "https://files.pythonhosted.org/packages/24/60/d9e22504f183154d3e73c751b3b837c3ae6177a83410ffe8b497dcce4baa/langchain_aisdk_adapter-0.0.1a1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "0bd2aaba28827d636fe31072c56baae8b1767a413cb9d5e2bdd42d32904e228b",
                "md5": "f562d73f2b2d4e5f72d92e5a6c375917",
                "sha256": "f6a01bd8d12f3c22036cc098aa3fd84c6f3090da28ef12fd9f112d5089b839e2"
            },
            "downloads": -1,
            "filename": "langchain_aisdk_adapter-0.0.1a1.tar.gz",
            "has_sig": false,
            "md5_digest": "f562d73f2b2d4e5f72d92e5a6c375917",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<4.0,>=3.9.0",
            "size": 56964,
            "upload_time": "2025-07-16T06:35:46",
            "upload_time_iso_8601": "2025-07-16T06:35:46.296380Z",
            "url": "https://files.pythonhosted.org/packages/0b/d2/aaba28827d636fe31072c56baae8b1767a413cb9d5e2bdd42d32904e228b/langchain_aisdk_adapter-0.0.1a1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-16 06:35:46",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "lointain",
    "github_project": "langchain_aisdk_adapter",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "langchain-aisdk-adapter"
}
        