# MCP to LangChain Tools Conversion Library / Python
A simple, lightweight library to use
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
server tools from LangChain.
<img width="500px" alt="langchain-mcp-tools-diagram" src="https://raw.githubusercontent.com/hideya/langchain-mcp-tools-py/refs/heads/main/docs/images/langchain-mcp-tools-diagram.png" />
Its simplicity and extra features for local MCP servers can make it useful as a basis for your own customizations.
However, it only supports text results of tool calls and does not support MCP features other than tools.
LangChain's official [**LangChain MCP Adapters** library](https://pypi.org/project/langchain-mcp-adapters/)
provides comprehensive LangChain integration.
Consider using it unless you have specific needs that this library addresses.
## Prerequisites
- Python 3.11+
## Installation
```bash
pip install langchain-mcp-tools
```
## Quick Start
The `convert_mcp_to_langchain_tools()` utility function accepts MCP server configurations
that follow the same structure as
[Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
but containing only the contents of the `mcpServers` property,
expressed as a `dict`, e.g.:
```python
import os

mcp_servers = {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
},
"github": {
"type": "http",
"url": "https://api.githubcopilot.com/mcp/",
"headers": {
"Authorization": f"Bearer {os.environ.get('GITHUB_PERSONAL_ACCESS_TOKEN', '')}"
}
},
}
tools, cleanup = await convert_mcp_to_langchain_tools(
mcp_servers
)
```
This utility function initializes all specified MCP servers in parallel,
gathers their available MCP tools,
and wraps them into LangChain tools
([`tools: list[BaseTool]`](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html#langchain_core.tools.base.BaseTool)).
It also returns an async callback function (`cleanup: McpServerCleanupFn`)
that should be invoked to close all MCP server sessions when finished.
The returned tools can be used with LangChain, e.g.:
```python
# from langchain.chat_models import init_chat_model
llm = init_chat_model("google_genai:gemini-2.5-flash")
# from langgraph.prebuilt import create_react_agent
agent = create_react_agent(
llm,
tools
)
```
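As a rough sketch of tying this together (not taken from this repo; the prompt is a placeholder and the message format follows LangGraph's prebuilt ReAct agent), you might run the agent and then release the MCP sessions like this:

```python
# A minimal sketch: run one query, then clean up.
result = await agent.ainvoke(
    {"messages": [("user", "Summarize ./README.md in one sentence.")]}  # placeholder prompt
)
print(result["messages"][-1].content)

# Always close the MCP server sessions when finished
await cleanup()
```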
A minimal but complete working usage example can be found
[in the langchain-mcp-tools-py-usage repo](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/main/src/example.py).
For hands-on experimentation with MCP server integration,
try [the MCP Client CLI tool built with this library](https://pypi.org/project/mcp-chat/).
A TypeScript equivalent of this utility is available
[on npm](https://www.npmjs.com/package/@h1deya/langchain-mcp-tools).
## Introduction
This package is intended to simplify the use of
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
server tools with LangChain / Python.
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/) is the de facto industry standard
that dramatically expands the scope of LLMs by enabling the integration of external tools and resources,
including databases, cloud storage, GitHub, Docker, Slack, and more.
There are quite a few useful MCP servers already available.
See [MCP Server Listing on the Official Site](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#model-context-protocol-servers).
This utility's goal is to make these numerous MCP servers easily accessible from LangChain.
It contains a utility function `convert_mcp_to_langchain_tools()`.
This async function handles parallel initialization of specified multiple MCP servers
and converts their available tools into a list of LangChain-compatible tools.
For detailed information on how to use this library, please refer to the following document:
["Supercharging LangChain: Integrating 2000+ MCP with ReAct"](https://medium.com/@h1deya/supercharging-langchain-integrating-450-mcp-with-react-d4e467cbf41a).
## MCP Protocol Support
This library supports **MCP Protocol version 2025-03-26** and maintains backwards compatibility with version 2024-11-05.
It follows the [official MCP specification](https://modelcontextprotocol.io/specification/2025-03-26/) for transport selection and backwards compatibility.
### Limitations
- **Tool Return Types**: Currently, only text results of tool calls are supported.
The library uses LangChain's `response_format: 'content'` (the default), which only supports text strings.
While MCP tools can return multiple content types (text, images, etc.), this library currently filters and uses only text content.
- **MCP Features**: Only MCP [Tools](https://modelcontextprotocol.io/docs/concepts/tools) are supported. Other MCP features like Resources, Prompts, and Sampling are not implemented.
### Note
- **Passing PATH Env Variable**: The library automatically adds the `PATH` environment variable to stdio server configurations if not explicitly provided, to ensure servers can find required executables (see the sketch below for supplying other environment variables explicitly).
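Other environment variables are not added automatically; if a stdio server needs them, they can be supplied via the standard `env` key of the Claude Desktop-style configuration. A minimal sketch, assuming the usual `env` key is honored (the server name, package, and variable below are placeholders):

```python
import os

mcp_servers = {
    "some-local-server": {                          # illustrative name
        "command": "npx",
        "args": ["-y", "some-mcp-server-package"],  # placeholder package
        # PATH is added automatically if omitted here;
        # other variables must be passed explicitly.
        "env": {
            "API_KEY": os.environ.get("API_KEY", ""),  # hypothetical variable
        },
    },
}
```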
## API docs
Can be found [here](https://hideya.github.io/langchain-mcp-tools-py/)
## Building from Source
See [README_DEV.md](https://github.com/hideya/langchain-mcp-tools-py/blob/main/README_DEV.md) for details.
## Features
### stderr Redirection for Local MCP Servers
The `"errlog"` key specifies a file-like object
to which a local (stdio) MCP server's stderr is redirected.
```python
log_path = f"mcp-server-{server_name}.log"
log_file = open(log_path, "w")
mcp_servers[server_name]["errlog"] = log_file
```
A usage example can be found [here](https://github.com/hideya/langchain-mcp-tools-py-usage/blob/3bd35d9fb49f4b631fe3d0cc8491d43cbf69693b/src/example.py#L88-L108).
The key name `errlog` is derived from
[`stdio_client()`'s argument `errlog`](https://github.com/modelcontextprotocol/python-sdk/blob/babb477dffa33f46cdc886bc885eb1d521151430/src/mcp/client/stdio/__init__.py#L96).
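Since the snippet above opens the log file itself, it is the caller's responsibility to close it; a minimal sketch that ties the file's lifetime to the server sessions:

```python
log_file = open(f"mcp-server-{server_name}.log", "w")
mcp_servers[server_name]["errlog"] = log_file
try:
    tools, cleanup = await convert_mcp_to_langchain_tools(mcp_servers)
    # ... use the tools ...
finally:
    if "cleanup" in locals():
        await cleanup()
    log_file.close()  # close only after the stdio server has been shut down
```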
### Working Directory Configuration for Local MCP Servers
The working directory that is used when spawning a local (stdio) MCP server
can be specified with the `"cwd"` key as follows:
```python
"local-server-name": {
"command": "...",
"args": [...],
"cwd": "/working/directory" # the working dir to be use by the server
},
```
The key name `cwd` is derived from
Python SDK's [`StdioServerParameters`](https://github.com/modelcontextprotocol/python-sdk/blob/babb477dffa33f46cdc886bc885eb1d521151430/src/mcp/client/stdio/__init__.py#L76-L77).
### Transport Selection Priority
The library selects transports using the following priority order:
1. **Explicit transport/type field** (must match URL protocol if URL provided)
2. **URL protocol auto-detection** (http/https → StreamableHTTP → SSE, ws/wss → WebSocket)
3. **Command presence** → Stdio transport
4. **Error** if none of the above match
This ensures predictable behavior while allowing flexibility for different deployment scenarios.
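To make these rules concrete, here is a sketch that combines configuration forms shown elsewhere in this README (hostnames and paths are placeholders), with each entry annotated by the rule that selects its transport:

```py
mcp_servers = {
    # Rule 1: explicit transport/type field wins
    "explicit-http": {
        "url": "https://example.com/mcp/",      # placeholder URL
        "transport": "streamable_http",
    },
    # Rule 2: http/https URL without transport -> try Streamable HTTP, fall back to SSE on 4xx
    "auto-detected": {
        "url": "https://example.com/mcp/",
    },
    # Rule 2: ws/wss URL -> WebSocket
    "websocket-server": {
        "url": "wss://example.com/ws",
    },
    # Rule 3: command present -> stdio transport
    "local-stdio": {
        "command": "uvx",
        "args": ["mcp-server-fetch"],
    },
}
```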
### Remote MCP Server Support
`mcp_servers` configurations for Streamable HTTP, SSE (Server-Sent Events), and WebSocket servers are as follows:
```py
# Auto-detection: tries Streamable HTTP first, falls back to SSE on 4xx errors
"auto-detect-server": {
"url": f"http://{server_host}:{server_port}/..."
},
# Explicit Streamable HTTP
"streamable-http-server": {
"url": f"http://{server_host}:{server_port}/...",
"transport": "streamable_http"
# "type": "http" # VSCode-style config also works instead of the above
},
# Explicit SSE
"sse-server-name": {
"url": f"http://{sse_server_host}:{sse_server_port}/...",
"transport": "sse" # or `"type": "sse"`
},
# WebSocket
"ws-server-name": {
"url": f"ws://${ws_server_host}:${ws_server_port}/..."
# optionally `"transport": "ws"` or `"type": "ws"`
},
```
The `"headers"` key can be used to pass HTTP headers to Streamable HTTP and SSE connection.
```py
"github": {
"type": "http",
"url": "https://api.githubcopilot.com/mcp/",
"headers": {
"Authorization": f"Bearer {os.environ.get('GITHUB_PERSONAL_ACCESS_TOKEN')}"
}
},
```
NOTE: When accessing the GitHub MCP server, [GitHub PAT (Personal Access Token)](https://github.com/settings/personal-access-tokens)
alone is not enough; your GitHub account must have an active Copilot subscription or be assigned a Copilot license through your organization.
**Auto-detection behavior (default):**
- For HTTP/HTTPS URLs without explicit `transport`, the library follows [MCP specification recommendations](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#backwards-compatibility)
- First attempts Streamable HTTP transport
- If Streamable HTTP fails with a 4xx error, automatically falls back to SSE transport
- Non-4xx errors (network issues, etc.) are re-thrown without fallback
**Explicit transport selection:**
- Set `"transport": "streamable_http"` (or VSCode-style config `"type": "http"`) to force Streamable HTTP (no fallback)
- Set `"transport": "sse"` to force SSE transport
- WebSocket URLs (`ws://` or `wss://`) always use WebSocket transport
Streamable HTTP is the modern MCP transport that replaces the older HTTP+SSE transport. According to the [official MCP documentation](https://modelcontextprotocol.io/docs/concepts/transports): "SSE as a standalone transport is deprecated as of protocol version 2025-03-26. It has been replaced by Streamable HTTP, which incorporates SSE as an optional streaming mechanism."
### Authentication Support for Streamable HTTP Connections
The library supports OAuth 2.1 authentication for Streamable HTTP connections:
```py
from mcp.client.auth import OAuthClientProvider
...
# Create OAuth authentication provider
oauth_auth = OAuthClientProvider(
server_url="https://...",
client_metadata=...,
storage=...,
redirect_handler=...,
callback_handler=...,
)
# Test configuration with OAuth auth
mcp_servers = {
"secure-streamable-server": {
"url": "https://.../mcp/",
# To avoid auto protocol fallback, specify the transport explicitly when using authentication
"transport": "streamable_http",  # or `"type": "http",`
"auth": oauth_auth,
"timeout": 30.0
},
}
```
Test implementations are provided:
- **Streamable HTTP Authentication Tests**:
- MCP client uses this library: [streamable_http_oauth_test_client.py](https://github.com/hideya/langchain-mcp-tools-py/tree/main/testfiles/streamable_http_oauth_test_client.py)
- Test MCP Server: [streamable_http_oauth_test_server.py](https://github.com/hideya/langchain-mcp-tools-py/tree/main/testfiles/streamable_http_oauth_test_server.py)
### Authentication Support for SSE Connections (Legacy)
The library also supports authentication for SSE connections to MCP servers.
Note that SSE transport is deprecated; Streamable HTTP is the recommended approach.
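For simple bearer-token setups, the same `"headers"` mechanism shown above can be used together with an explicit SSE transport; a minimal sketch (the URL and token variable are placeholders):

```py
"legacy-sse-server": {                     # illustrative name
    "url": "https://example.com/sse",      # placeholder URL
    "transport": "sse",                    # force the (deprecated) SSE transport
    "headers": {
        "Authorization": f"Bearer {os.environ.get('SSE_SERVER_TOKEN', '')}"  # hypothetical env var
    },
},
```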
## Change Log
Can be found [here](https://github.com/hideya/langchain-mcp-tools-py/blob/main/CHANGELOG.md)
## Appendix
### Troubleshooting
1. **Enable debug logging**: Set the log level to DEBUG to see detailed connection and execution logs:
```python
import logging

tools, cleanup = await convert_mcp_to_langchain_tools(
mcp_servers,
logging.DEBUG
)
```
2. **Check server errlog**: For stdio MCP servers, use `errlog` redirection to capture server error output
3. **Test explicit transports**: Try forcing specific transport types to isolate auto-detection issues
4. **Verify server independently**: Refer to [Debugging Section in MCP documentation](https://modelcontextprotocol.io/docs/tools/debugging)
### Troubleshooting Authentication Issues
When authentication errors occur, they often generate massive logs that make it difficult to identify that authentication is the root cause.
To address this problem, this library performs authentication pre-validation for HTTP/HTTPS MCP servers before attempting the actual MCP connection.
This ensures that clear error messages like `Authentication failed (401 Unauthorized)` or `Authentication failed (403 Forbidden)` appear at the end of the logs, rather than being buried in the middle of extensive error output.
**Important:** This pre-validation is specific to this library and not part of the official MCP specification.
In rare cases, it may interfere with certain MCP server behaviors.
#### When and How to Disable Pre-validation
Set `"__pre_validate_authentication": False` in your server config if:
- Using OAuth flows that require complex authentication handshakes
- The MCP server doesn't accept simple HTTP POST requests for validation
- You're experiencing false negatives in the auth validation
**Example:**
```python
"oauth-server": {
"url": "https://api.example.com/mcp/",
"auth": oauth_provider, # Complex OAuth provider
"__pre_validate_authentication": False # Skip the pre-validation
}
```
### Debugging Authentication
1. **Check your tokens/credentials** - Most auth failures are due to expired or incorrect tokens
2. **Verify token permissions** - Some MCP servers require specific scopes (e.g., GitHub Copilot license)
3. **Test with curl** - Try a simple HTTP request to verify your auth setup:
```bash
curl -H "Authorization: Bearer your-token" https://api.example.com/mcp/
```
### Resource Management
The returned `cleanup` function properly handles resource cleanup:
- Closes all MCP server connections concurrently
- Logs any cleanup failures
- Continues cleanup of remaining servers even if some fail
- Should always be called when done using the tools
```python
try:
tools, cleanup = await convert_mcp_to_langchain_tools(mcp_servers)
# Use tools with your LLM
finally:
# cleanup may be unassigned if an exception occurs during initialization
if "cleanup" in locals():
await cleanup()
```
### For Developers
See [TECHNICAL.md](TECHNICAL.md) for technical details about implementation challenges and solutions.