| Field | Value |
| --- | --- |
| Name | langchain-mcp-adapters |
| Version | 0.1.9 |
| Summary | Make Anthropic Model Context Protocol (MCP) tools compatible with LangChain and LangGraph agents. |
| Upload time | 2025-07-09 15:56:14 |
| Requires Python | >=3.10 |
# LangChain MCP Adapters
This library provides a lightweight wrapper that makes [Anthropic Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) tools compatible with [LangChain](https://github.com/langchain-ai/langchain) and [LangGraph](https://github.com/langchain-ai/langgraph).

## Features
- 🛠️ Convert MCP tools into [LangChain tools](https://python.langchain.com/docs/concepts/tools/) that can be used with [LangGraph](https://github.com/langchain-ai/langgraph) agents
- 📦 A client implementation that allows you to connect to multiple MCP servers and load tools from them
## Installation
```bash
pip install langchain-mcp-adapters
```
## Quickstart
Here is a simple example of using MCP tools with a LangGraph agent.
```bash
pip install langchain-mcp-adapters langgraph "langchain[openai]"
export OPENAI_API_KEY=<your_api_key>
```
### Server
First, let's create an MCP server that can add and multiply numbers.
```python
# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
### Client
```python
# Create server parameters for stdio connection
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent

server_params = StdioServerParameters(
    command="python",
    # Make sure to update to the full absolute path to your math_server.py file
    args=["/path/to/math_server.py"],
)

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        # Initialize the connection
        await session.initialize()

        # Get tools
        tools = await load_mcp_tools(session)

        # Create and run the agent
        agent = create_react_agent("openai:gpt-4.1", tools)
        agent_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
```
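The snippet above uses top-level `await`, which works in a notebook or the `asyncio` REPL. In a standalone script, wrap the same logic in an async function and run it with `asyncio.run` — a sketch, where `main` is a hypothetical wrapper:

```python
import asyncio

async def main():
    # Same flow as above: spawn the stdio server, open a session, run the agent
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await load_mcp_tools(session)
            agent = create_react_agent("openai:gpt-4.1", tools)
            return await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})

agent_response = asyncio.run(main())
```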
## Multiple MCP Servers
The library also allows you to connect to multiple MCP servers and load tools from them:
### Server
```python
# math_server.py
...

# weather_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Weather")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather for location."""
    return "It's always sunny in New York"

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```
```bash
python weather_server.py
```
### Client
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            # Make sure to update to the full absolute path to your math_server.py file
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            # Make sure you start your weather server on port 8000
            "url": "http://localhost:8000/mcp/",
            "transport": "streamable_http",
        }
    }
)
tools = await client.get_tools()
agent = create_react_agent("openai:gpt-4.1", tools)
math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
weather_response = await agent.ainvoke({"messages": "what is the weather in nyc?"})
```
> [!NOTE]
> The example above will start a new MCP `ClientSession` for each tool invocation. If you would like to explicitly start a session for a given server, you can do:
>
> ```python
> from langchain_mcp_adapters.tools import load_mcp_tools
>
> client = MultiServerMCPClient({...})
> async with client.session("math") as session:
>     tools = await load_mcp_tools(session)
> ```
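>
> For instance, continuing the sketch above (assuming the same `client` as in the earlier examples), you could keep one session open for the lifetime of an agent:
>
> ```python
> from langgraph.prebuilt import create_react_agent
>
> async with client.session("math") as session:
>     # Tools loaded here share this one session for every invocation
>     tools = await load_mcp_tools(session)
>     agent = create_react_agent("openai:gpt-4.1", tools)
>     math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
> ```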
## Streamable HTTP
MCP now supports [streamable HTTP](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http) transport.
To start an [example](examples/servers/streamable-http-stateless/) streamable HTTP server, run the following:
```bash
cd examples/servers/streamable-http-stateless/
uv run mcp-simple-streamablehttp-stateless --port 3000
```
Alternatively, you can use FastMCP directly (as in the examples above).
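For example, a minimal FastMCP math server exposed over streamable HTTP might look like this — a sketch, where `math_http_server.py` is a hypothetical file name and the `port` setting assumes a recent `mcp` release where `FastMCP` forwards settings such as `port` to its HTTP server (by default FastMCP serves on port 8000 at the `/mcp` path):

```python
# math_http_server.py (hypothetical file name)
from mcp.server.fastmcp import FastMCP

# port=3000 matches the example client URLs below
mcp = FastMCP("Math", port=3000)

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```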
To use it with the Python MCP SDK's `streamablehttp_client`:
```python
# Use server from examples/servers/streamable-http-stateless/

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

from langgraph.prebuilt import create_react_agent
from langchain_mcp_adapters.tools import load_mcp_tools

async with streamablehttp_client("http://localhost:3000/mcp/") as (read, write, _):
    async with ClientSession(read, write) as session:
        # Initialize the connection
        await session.initialize()

        # Get tools
        tools = await load_mcp_tools(session)
        agent = create_react_agent("openai:gpt-4.1", tools)
        math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
```
Use it with `MultiServerMCPClient`:
```python
# Use server from examples/servers/streamable-http-stateless/
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

client = MultiServerMCPClient(
    {
        "math": {
            "transport": "streamable_http",
            "url": "http://localhost:3000/mcp/"
        },
    }
)
tools = await client.get_tools()
agent = create_react_agent("openai:gpt-4.1", tools)
math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
```
## Passing runtime headers
When connecting to MCP servers, you can include custom headers (e.g., for authentication or tracing) using the `headers` field in the connection configuration. This is supported for the following transports:
* `sse`
* `streamable_http`
### Example: passing headers with `MultiServerMCPClient`
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

client = MultiServerMCPClient(
    {
        "weather": {
            "transport": "streamable_http",
            "url": "http://localhost:8000/mcp",
            "headers": {
                "Authorization": "Bearer YOUR_TOKEN",
                "X-Custom-Header": "custom-value"
            },
        }
    }
)
tools = await client.get_tools()
agent = create_react_agent("openai:gpt-4.1", tools)
response = await agent.ainvoke({"messages": "what is the weather in nyc?"})
```
> Only `sse` and `streamable_http` transports support runtime headers. These headers are passed with every HTTP request to the MCP server.
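The `sse` transport takes the same `headers` field. A sketch, assuming an SSE MCP server running at `http://localhost:8000/sse`:

```python
client = MultiServerMCPClient(
    {
        "weather": {
            "transport": "sse",
            # SSE servers conventionally mount at /sse (assumed here)
            "url": "http://localhost:8000/sse",
            "headers": {"Authorization": "Bearer YOUR_TOKEN"},
        }
    }
)
```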
## Using with LangGraph StateGraph
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-4.1")

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            # Make sure to update to the full absolute path to your math_server.py file
            "args": ["./examples/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            # Make sure you start your weather server on port 8000
            "url": "http://localhost:8000/mcp/",
            "transport": "streamable_http",
        }
    }
)
tools = await client.get_tools()

def call_model(state: MessagesState):
    response = model.bind_tools(tools).invoke(state["messages"])
    return {"messages": response}

builder = StateGraph(MessagesState)
builder.add_node(call_model)
builder.add_node(ToolNode(tools))
builder.add_edge(START, "call_model")
builder.add_conditional_edges(
    "call_model",
    tools_condition,
)
builder.add_edge("tools", "call_model")
graph = builder.compile()
math_response = await graph.ainvoke({"messages": "what's (3 + 5) x 12?"})
weather_response = await graph.ainvoke({"messages": "what is the weather in nyc?"})
```
## Using with LangGraph API Server
> [!TIP]
> Check out [this guide](https://langchain-ai.github.io/langgraph/tutorials/langgraph-platform/local-server/) on getting started with LangGraph API server.
If you want to run a LangGraph agent that uses MCP tools in a LangGraph API server, you can use the following setup:
```python
# graph.py
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

async def make_graph():
    client = MultiServerMCPClient(
        {
            "math": {
                "command": "python",
                # Make sure to update to the full absolute path to your math_server.py file
                "args": ["/path/to/math_server.py"],
                "transport": "stdio",
            },
            "weather": {
                # Make sure you start your weather server on port 8000
                "url": "http://localhost:8000/mcp/",
                "transport": "streamable_http",
            }
        }
    )
    tools = await client.get_tools()
    agent = create_react_agent("openai:gpt-4.1", tools)
    return agent
```
In your [`langgraph.json`](https://langchain-ai.github.io/langgraph/cloud/reference/cli/#configuration-file), make sure to specify `make_graph` as your graph entrypoint:
```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./graph.py:make_graph"
  }
}
```
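You can then serve and test the graph locally with the LangGraph CLI — a sketch, assuming you have `langgraph-cli` installed (see the guide linked above):

```bash
pip install "langgraph-cli[inmem]"
langgraph dev
```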
## Add LangChain tools to a FastMCP server
Use `to_fastmcp` to convert LangChain tools to FastMCP, and then add them to the `FastMCP` server via the initializer:
> [!NOTE]
> The `tools` argument is only available in FastMCP as of `mcp >= 1.9.1`.
```python
from langchain_core.tools import tool
from langchain_mcp_adapters.tools import to_fastmcp
from mcp.server.fastmcp import FastMCP


@tool
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b


fastmcp_tool = to_fastmcp(add)

mcp = FastMCP("Math", tools=[fastmcp_tool])
mcp.run(transport="stdio")
```