# Simple MCP Client to Explore MCP Servers
**Quickly test and explore MCP servers from the command line!**
A simple, text-based CLI client for [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) servers built with LangChain and Python.
Suitable for testing MCP servers, exploring their capabilities, and prototyping integrations.
Internally it uses the [LangChain ReAct Agent](https://langchain-ai.github.io/langgraph/reference/agents/) and
the utility function `convert_mcp_to_langchain_tools()` from [`langchain_mcp_tools`](https://pypi.org/project/langchain-mcp-tools/).
A TypeScript equivalent of this tool is available [here](https://www.npmjs.com/package/@h1deya/mcp-try-cli).
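For a sense of what happens under the hood, the following is a minimal sketch of that flow. It is simplified, is not this tool's actual source, and assumes the two-value return (LangChain tools plus a cleanup callback) documented by `langchain_mcp_tools`:

```python
# Minimal sketch of mcp-chat's internal flow (simplified; not the actual
# source).  Requires `langchain`, `langchain-openai`, `langgraph`, and
# `langchain-mcp-tools`, plus OPENAI_API_KEY in the environment.
import asyncio

from langchain.chat_models import init_chat_model
from langchain_mcp_tools import convert_mcp_to_langchain_tools
from langgraph.prebuilt import create_react_agent

async def main() -> None:
    # Same shape as the "mcp_servers" section of llm_mcp_config.json5
    mcp_servers = {
        "us-weather": {
            "command": "npx",
            "args": ["-y", "@h1deya/mcp-server-weather"],
        },
    }

    # Spawn the servers and wrap their MCP tools as LangChain tools;
    # `cleanup` shuts the server processes down afterwards.
    tools, cleanup = await convert_mcp_to_langchain_tools(mcp_servers)
    try:
        llm = init_chat_model(model="gpt-4o-mini", model_provider="openai")
        agent = create_react_agent(llm, tools)
        result = await agent.ainvoke(
            {"messages": [("user", "Any weather alerts in California?")]}
        )
        print(result["messages"][-1].content)
    finally:
        await cleanup()

asyncio.run(main())
```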
## Prerequisites
- Python 3.11+
- [optional] [`uv` (`uvx`)](https://docs.astral.sh/uv/getting-started/installation/)
installed to run Python package-based MCP servers
- [optional] [npm 7+ (`npx`)](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
to run Node.js package-based MCP servers
- LLM API keys from
[OpenAI](https://platform.openai.com/api-keys),
[Anthropic](https://console.anthropic.com/settings/keys),
[Google AI Studio (for GenAI/Gemini)](https://aistudio.google.com/apikey),
and/or
[xAI](https://console.x.ai/),
as needed
## Quick Start
- Install the `mcp-chat` tool (this can take up to a few minutes):
```bash
pip install mcp-chat
```
- Configure the LLM and MCP server settings in the configuration file, `llm_mcp_config.json5`:
```bash
code llm_mcp_config.json5
```
The following is a simple configuration for quick testing:
```json5
{
  "llm": {
    "model_provider": "openai",
    "model": "gpt-4o-mini",
    // "model_provider": "anthropic",
    // "model": "claude-3-5-haiku-latest",
    // "model_provider": "google_genai",
    // "model": "gemini-2.0-flash",
    // "model_provider": "xai",
    // "model": "grok-3-mini",
  },

  "mcp_servers": {
    "us-weather": {  // US weather only
      "command": "npx",
      "args": ["-y", "@h1deya/mcp-server-weather"]
    },
  },

  "example_queries": [
    "Tell me how LLMs work in a few sentences",
    "Are there any weather alerts in California?",
  ],
}
```
- Set up API keys
```bash
echo "ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-proj-...
GOOGLE_API_KEY=AI...
XAI_API_KEY=xai-..." > .env
code .env
```
- Run the tool
```bash
mcp-chat
```
By default, it reads the configuration file `llm_mcp_config.json5` from the current directory,
then applies the environment variables specified in the `.env` file
as well as those already defined in the environment.
## Building from Source
See [README_DEV.md](https://github.com/hideya/mcp-client-langchain-py/blob/main/README_DEV.md) for details.
## Features
- **Easy setup**: Works out of the box with popular MCP servers
- **Flexible configuration**: JSON5 config with environment variable support
- **Multiple LLM providers**: OpenAI, Anthropic, Google (GenAI), and xAI
- **Command & URL servers**: Support for both local and remote MCP servers
- **Local MCP Server logging**: Save stdio MCP server logs with customizable log directory
- **Interactive testing**: Configurable example queries make repeated testing convenient
## Limitations
- **Tool Return Types**: Currently, only text results of tool calls are supported.
It uses LangChain's `response_format: 'content'` (the default) internally, which only supports text strings.
While MCP tools can return multiple content types (text, images, etc.), this tool currently keeps only the text content.
- **MCP Features**: Only MCP [Tools](https://modelcontextprotocol.io/docs/concepts/tools) are supported. Other MCP features like Resources, Prompts, and Sampling are not implemented.
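As an illustration of the first limitation, here is a hypothetical sketch (not this tool's actual code) of reducing a mixed-content MCP tool result to its text parts only:

```python
# Hypothetical illustration: keep only the "text" items of an MCP
# CallToolResult-like payload, dropping images and other content types.
def extract_text_content(mcp_result: dict) -> str:
    """Join the text items of the result's content list."""
    return "\n".join(
        item["text"]
        for item in mcp_result.get("content", [])
        if item.get("type") == "text"
    )

result = {
    "content": [
        {"type": "text", "text": "Temperature: 72F, clear skies"},
        {"type": "image", "data": "<base64>", "mimeType": "image/png"},
    ]
}
print(extract_text_content(result))  # -> Temperature: 72F, clear skies
```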
## Usage
### Basic Usage
```bash
mcp-chat
```
By default, it reads the configuration file `llm_mcp_config.json5` from the current directory.
It then applies the environment variables specified in the `.env` file,
as well as those already defined in the environment,
and writes local MCP server logs to the current directory.
### With Options
```bash
# Specify the config file to use
mcp-chat --config my-config.json5
# Store local (stdio) MCP server logs in specific directory
mcp-chat --log-dir ./logs
# Enable verbose logging
mcp-chat --verbose
# Show help
mcp-chat --help
```
## Supported LLM Providers
- **OpenAI**: `gpt-4o`, `gpt-4o-mini`, etc.
- **Anthropic**: `claude-sonnet-4-0`, `claude-3-5-haiku-latest`, etc.
- **Google (GenAI)**: `gemini-2.0-flash`, `gemini-1.5-pro`, etc.
- **xAI**: `grok-3-mini`, etc.
## Configuration
Create an `llm_mcp_config.json5` file:
- [The configuration file format](https://github.com/hideya/mcp-client-langchain-py/blob/main/llm_mcp_config.json5)
for MCP servers follows the same structure as
[Claude for Desktop](https://modelcontextprotocol.io/quickstart/user),
with one difference: the key name `mcpServers` has been changed
to `mcp_servers` to follow the snake_case convention
commonly used in JSON configuration files.
- The file format is [JSON5](https://json5.org/),
where comments and trailing commas are allowed.
- The format is further extended to replace `${...}` notations
  with the values of the corresponding environment variables (see the sketch after this list).
- Keep all credentials and private info in the `.env` file
  and refer to them with the `${...}` notation as needed.
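To make the `${...}` extension concrete, here is a toy illustration of the substitution idea (the actual implementation may differ):

```python
# Toy illustration of the `${...}` extension: replace each ${VAR} in the
# raw config text with the value of environment variable VAR before the
# JSON5 is parsed.
import os
import re

def expand_env_vars(raw_config: str) -> str:
    """Substitute ${VAR} with os.environ['VAR'] (empty string if unset)."""
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), ""),
        raw_config,
    )

os.environ["BRAVE_API_KEY"] = "BSA-example"
print(expand_env_vars('"env": { "BRAVE_API_KEY": "${BRAVE_API_KEY}" }'))
# -> "env": { "BRAVE_API_KEY": "BSA-example" }
```

The full example below puts these pieces together: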
```json5
{
  "llm": {
    "model_provider": "openai",
    "model": "gpt-4.1-nano",
    // "model": "o4-mini",
  },

  // "llm": {
  //   "model_provider": "anthropic",
  //   "model": "claude-3-5-haiku-latest",
  //   // "model": "claude-sonnet-4-0",
  // },

  // "llm": {
  //   "model_provider": "google_genai",
  //   "model": "gemini-2.5-flash",
  //   // "model": "gemini-2.5-pro",
  // },

  "example_queries": [
    "Tell me how LLMs work in a few sentences",
    "Are there any weather alerts in California?",
    "Read the news headlines on bbc.com",
  ],

  "mcp_servers": {
    // Local MCP server that uses `npx`
    "weather": {
      "command": "npx",
      "args": ["-y", "@h1deya/mcp-server-weather"]
    },

    // Another local server that uses `uvx`
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    },

    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "${BRAVE_API_KEY}" }
    },

    // Remote MCP server via URL
    // Auto-detection: tries Streamable HTTP first, falls back to SSE
    "remote-mcp-server": {
      "url": "https://api.example.com/..."
    },

    // Server with authentication
    "github": {
      "type": "http",  // recommended: specify the protocol explicitly when authentication is used
      "url": "https://api.githubcopilot.com/mcp/",
      "headers": {
        "Authorization": "Bearer ${GITHUB_PERSONAL_ACCESS_TOKEN}"
      }
    }
  }
}
```
### Environment Variables
Create a `.env` file for API keys:
```bash
OPENAI_API_KEY=sk-proj-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AI...
XAI_API_KEY=xai-...
# Other services as needed
GITHUB_PERSONAL_ACCESS_TOKEN=github_pat_...
BRAVE_API_KEY=BSA...
```
## Popular MCP Servers to Try
There are quite a few useful MCP servers already available:
- [MCP Server Listing on the Official Site](https://github.com/modelcontextprotocol/servers?tab=readme-ov-file#model-context-protocol-servers)
## Troubleshooting
- Make sure your configuration and `.env` files are correct, especially the spelling of the API key names
- Check the local MCP server logs
- Use the `--verbose` flag to view detailed logs
- Refer to [Debugging Section in MCP documentation](https://modelcontextprotocol.io/docs/tools/debugging)
## License
MIT License - see [LICENSE](LICENSE) file for details.
## Contributing
Issues and pull requests welcome! This tool aims to make MCP server testing as simple as possible.