| Field | Value |
|-----------------|--------------------------------------|
| Name | mcp-cli |
| Version | 0.2.2 |
| Summary | A CLI for the Model Context Protocol |
| Upload time | 2025-07-08 14:22:43 |
| Requires Python | >=3.11 |
| License | MIT |
| Keywords | llm, openai, claude, mcp, cli |
| Requirements | None recorded |
# MCP CLI - Model Context Protocol Command Line Interface
A powerful, feature-rich command-line interface for interacting with Model Context Protocol servers. This client enables seamless communication with LLMs through integration with the [CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor) and [CHUK-LLM](https://github.com/chrishayuk/chuk-llm), providing tool usage, conversation management, and multiple operational modes.
## 🔄 Architecture Overview
The MCP CLI is built on a modular architecture with clean separation of concerns:
- **[CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor)**: Async-native tool execution and MCP server communication
- **[CHUK-LLM](https://github.com/chrishayuk/chuk-llm)**: Unified LLM provider configuration and client management
- **MCP CLI**: Rich user interface and command orchestration (this project)
## 🌟 Features
### Multiple Operational Modes
- **Chat Mode**: Conversational interface with streaming responses and automated tool usage
- **Interactive Mode**: Command-driven shell interface for direct server operations
- **Command Mode**: Unix-friendly mode for scriptable automation and pipelines
- **Direct Commands**: Run individual commands without entering interactive mode
### Advanced Chat Interface
- **Streaming Responses**: Real-time response generation with live UI updates
- **Concurrent Tool Execution**: Execute multiple tools simultaneously while preserving conversation order
- **Smart Interruption**: Interrupt streaming responses or tool execution with Ctrl+C
- **Performance Metrics**: Response timing, words/second, and execution statistics
- **Rich Formatting**: Markdown rendering, syntax highlighting, and progress indicators
### Comprehensive Provider Support
- **OpenAI**: GPT models (`gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, etc.)
- **Anthropic**: Claude models (`claude-3-opus`, `claude-3-sonnet`, `claude-3-haiku`)
- **Ollama**: Local models (`llama3.2`, `qwen2.5-coder`, `deepseek-coder`, etc.)
- **Custom Providers**: Extensible architecture for additional providers
- **Dynamic Switching**: Change providers and models mid-conversation
### Robust Tool System
- **Automatic Discovery**: Server-provided tools are automatically detected and catalogued
- **Provider Adaptation**: Tool names are automatically sanitized for provider compatibility (see the sketch after this list)
- **Concurrent Execution**: Multiple tools can run simultaneously with proper coordination
- **Rich Progress Display**: Real-time progress indicators and execution timing
- **Tool History**: Complete audit trail of all tool executions
- **Streaming Tool Calls**: Support for tools that return streaming data
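To illustrate what provider adaptation means in practice: most providers restrict function-call names to letters, digits, underscores, and hyphens, so names a server exposes (dotted names, for example) must be mapped onto that character set. A minimal shell sketch of the idea, using a hypothetical tool name — this illustrates the concept only, not the CLI's actual implementation:

```bash
# Hypothetical server tool name containing a character providers may reject
raw_name="weather.get-forecast"

# Replace every character outside [A-Za-z0-9_-] with an underscore
safe_name=$(printf '%s' "$raw_name" | tr -c 'A-Za-z0-9_-' '_')
echo "$safe_name"   # -> weather_get-forecast
```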
### Advanced Configuration Management
- **Environment Integration**: API keys and settings via environment variables
- **File-based Config**: YAML and JSON configuration files
- **User Preferences**: Persistent settings for active providers and models
- **Validation & Diagnostics**: Built-in provider health checks and configuration validation
### Enhanced User Experience
- **Cross-Platform Support**: Windows, macOS, and Linux with platform-specific optimizations
- **Rich Console Output**: Colorful, formatted output with automatic fallbacks
- **Command Completion**: Context-aware tab completion for all interfaces
- **Comprehensive Help**: Detailed help system with examples and usage patterns
- **Graceful Error Handling**: User-friendly error messages with troubleshooting hints
## 📋 Prerequisites
- **Python 3.11 or higher**
- **API Keys** (as needed):
- OpenAI: `OPENAI_API_KEY` environment variable
- Anthropic: `ANTHROPIC_API_KEY` environment variable
- Custom providers: Provider-specific configuration
- **Local Services** (as needed):
- Ollama: Local installation for Ollama models (see the example after this list)
- **MCP Servers**: Server configuration file (default: `server_config.json`)
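For Ollama specifically, a model must be pulled locally before the CLI can use it. For example (the model name is just an illustration):

```bash
# Download a local model and confirm it is available
ollama pull llama3.2
ollama list
```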
## 🚀 Installation
### Using UVX
To install `uv` (which provides the `uvx` runner), follow the official instructions:
https://docs.astral.sh/uv/getting-started/installation/
Once installed, verify that it works:
```bash
uvx mcp-cli --help
```
or launch interactive mode:
```bash
uvx mcp-cli interactive
```
### Install from Source
1. **Clone the repository**:
```bash
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
```
2. **Install the package**:
```bash
pip install -e "."
```
3. **Verify installation**:
```bash
mcp-cli --help
```
### Using UV (Recommended)
UV provides faster dependency resolution and better environment management:
```bash
# Install UV if not already installed
pip install uv
# Install dependencies
uv sync --reinstall
# Run with UV
uv run mcp-cli --help
```
## 🧰 Global Configuration
### Command-line Arguments
Global options available for all modes and commands (a combined example follows the list):
- `--server`: Specify server(s) to connect to (comma-separated)
- `--config-file`: Path to server configuration file (default: `server_config.json`)
- `--provider`: LLM provider (`openai`, `anthropic`, `ollama`, etc.)
- `--model`: Specific model to use (provider-dependent)
- `--disable-filesystem`: Disable filesystem access (default: enabled)
- `--api-base`: Override API endpoint URL
- `--api-key`: Override API key
- `--verbose`: Enable detailed logging
- `--quiet`: Suppress non-essential output
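For instance, several of these options can be combined in a single invocation (the server and model names here are illustrative):

```bash
# Connect to two servers from an explicit config file,
# pick a provider/model, and enable detailed logging
mcp-cli chat --server sqlite,filesystem \
  --config-file ./server_config.json \
  --provider openai --model gpt-4o-mini \
  --verbose
```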
### Environment Variables
```bash
export LLM_PROVIDER=openai           # Default provider
export LLM_MODEL=gpt-4o-mini         # Default model
export OPENAI_API_KEY=sk-...         # OpenAI API key
export ANTHROPIC_API_KEY=sk-ant-...  # Anthropic API key
export MCP_TOOL_TIMEOUT=120          # Tool execution timeout (seconds)
```
## 🌐 Available Modes
### 1. Chat Mode (Default)
Provides a natural language interface with streaming responses and automatic tool usage:
```bash
# Default mode (no subcommand needed)
mcp-cli --server sqlite
# Explicit chat mode
mcp-cli chat --server sqlite
# With specific provider and model
mcp-cli chat --server sqlite --provider anthropic --model claude-3-sonnet
# With custom configuration
mcp-cli chat --server sqlite --provider openai --api-key sk-... --model gpt-4o
```
### 2. Interactive Mode
Command-driven shell interface for direct server operations:
```bash
mcp-cli interactive --server sqlite
# With provider selection
mcp-cli interactive --server sqlite --provider ollama --model llama3.2
```
### 3. Command Mode
Unix-friendly interface for automation and scripting:
```bash
# Process text with LLM
mcp-cli cmd --server sqlite --prompt "Analyze this data" --input data.txt
# Execute tools directly
mcp-cli cmd --server sqlite --tool list_tables --output tables.json
# Pipeline-friendly processing
echo "SELECT * FROM users LIMIT 5" | mcp-cli cmd --server sqlite --tool read_query --input -
```
### 4. Direct Commands
Execute individual commands without entering interactive mode:
```bash
# List available tools
mcp-cli tools --server sqlite
# Show provider configuration
mcp-cli provider list
# Ping servers
mcp-cli ping --server sqlite
# List resources
mcp-cli resources --server sqlite
```
## 🤖 Using Chat Mode
Chat mode provides the most advanced interface with streaming responses and intelligent tool usage.
### Starting Chat Mode
```bash
# Simple startup
mcp-cli --server sqlite
# Multiple servers
mcp-cli --server sqlite,filesystem
# Specific provider configuration
mcp-cli --server sqlite --provider anthropic --model claude-3-opus
```
### Chat Commands (Slash Commands)
#### Provider & Model Management
```bash
/provider                            # Show current configuration
/provider list                       # List all providers
/provider config                     # Show detailed configuration
/provider diagnostic                 # Test provider connectivity
/provider set openai api_key sk-...  # Configure provider settings
/provider anthropic                  # Switch to Anthropic
/provider openai gpt-4o              # Switch provider and model

/model                               # Show current model
/model gpt-4o                        # Switch to specific model
/models                              # List available models
```
#### Tool Management
```bash
/tools         # List available tools
/tools --all   # Show detailed tool information
/tools --raw   # Show raw JSON definitions
/tools call    # Interactive tool execution

/toolhistory   # Show tool execution history
/th -n 5       # Last 5 tool calls
/th 3          # Details for call #3
/th --json     # Full history as JSON
```
#### Conversation Management
```bash
/conversation            # Show conversation history
/ch -n 10                # Last 10 messages
/ch 5                    # Details for message #5
/ch --json               # Full history as JSON

/save conversation.json  # Save conversation to file
/compact                 # Summarize conversation
/clear                   # Clear conversation history
/cls                     # Clear screen only
```
#### Session Control
```bash
/verbose      # Toggle verbose/compact display
/interrupt    # Stop running operations
/servers      # List connected servers
/help         # Show all commands
/help tools   # Help for specific command
/exit         # Exit chat mode
```
### Chat Features
#### Streaming Responses
- Real-time text generation with live updates
- Performance metrics (words/second, response time)
- Graceful interruption with Ctrl+C
- Progressive markdown rendering
#### Tool Execution
- Automatic tool discovery and usage
- Concurrent execution with progress indicators
- Verbose and compact display modes
- Complete execution history and timing
#### Provider Integration
- Seamless switching between providers
- Model-specific optimizations
- API key and endpoint management
- Health monitoring and diagnostics
## 🖥️ Using Interactive Mode
Interactive mode provides a command shell for direct server interaction.
### Starting Interactive Mode
```bash
mcp-cli interactive --server sqlite
```
### Interactive Commands
```bash
help                 # Show available commands
exit                 # Exit interactive mode
clear                # Clear terminal

# Provider management
provider             # Show current provider
provider list        # List providers
provider anthropic   # Switch provider

# Tool operations
tools                # List tools
tools --all          # Detailed tool info
tools call           # Interactive tool execution

# Server operations
servers              # List servers
ping                 # Ping all servers
resources            # List resources
prompts              # List prompts
```
## 📄 Using Command Mode
Command mode provides Unix-friendly automation capabilities.
### Command Mode Options
```bash
--input FILE          # Input file (- for stdin)
--output FILE         # Output file (- for stdout)
--prompt TEXT         # Prompt template
--tool TOOL           # Execute specific tool
--tool-args JSON      # Tool arguments as JSON
--system-prompt TEXT  # Custom system prompt
--raw                 # Raw output without formatting
--single-turn         # Disable multi-turn conversation
--max-turns N         # Maximum conversation turns
```
### Examples
```bash
# Text processing
echo "Analyze this data" | mcp-cli cmd --server sqlite --input - --output analysis.txt
# Tool execution
mcp-cli cmd --server sqlite --tool list_tables --raw
# Complex queries
mcp-cli cmd --server sqlite --tool read_query --tool-args '{"query": "SELECT COUNT(*) FROM users"}'
# Batch processing with GNU Parallel
ls *.txt | parallel mcp-cli cmd --server sqlite --input {} --output {}.summary --prompt "Summarize: {{input}}"
```
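The turn-control flags from the option list above combine with the same patterns. A couple of variations (the prompts and filenames are illustrative):

```bash
# Single-turn: answer once, without a multi-turn tool loop
mcp-cli cmd --server sqlite --prompt "Summarize the schema" --single-turn

# Allow at most 3 conversation turns
mcp-cli cmd --server sqlite --input data.txt --prompt "Clean this data" --max-turns 3
```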
## 🔧 Provider Configuration
### Automatic Configuration
The CLI automatically manages provider configurations using the CHUK-LLM library:
```bash
# Configure a provider
mcp-cli provider set openai api_key sk-your-key-here
mcp-cli provider set anthropic api_base https://api.anthropic.com
# Test configuration
mcp-cli provider diagnostic openai
# List available models
mcp-cli provider list
```
### Manual Configuration
Providers are configured in `~/.chuk_llm/providers.yaml`:
```yaml
openai:
  api_base: https://api.openai.com/v1
  default_model: gpt-4o-mini

anthropic:
  api_base: https://api.anthropic.com
  default_model: claude-3-sonnet

ollama:
  api_base: http://localhost:11434
  default_model: llama3.2
```
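Following the same schema, a self-hosted OpenAI-compatible endpoint could be added as a custom provider. The provider name and URL below are placeholders, and exact key support may vary by CHUK-LLM version:

```yaml
my-local-gateway:
  api_base: http://localhost:8000/v1
  default_model: my-model
```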
API keys are stored securely in `~/.chuk_llm/.env`:
```bash
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
```
## 📂 Server Configuration
Create a `server_config.json` file with your MCP server configurations:
```json
{
  "mcpServers": {
    "sqlite": {
      "command": "python",
      "args": ["-m", "mcp_server.sqlite_server"],
      "env": {
        "DATABASE_PATH": "database.db"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"],
      "env": {}
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your-brave-api-key"
      }
    }
  }
}
```
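With this file saved (or passed explicitly via `--config-file`), any of the configured servers can be referenced by name:

```bash
# Start chat mode against the sqlite server defined above
mcp-cli chat --server sqlite --config-file server_config.json
```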
## 📈 Advanced Usage Examples
### Multi-Provider Workflow
```bash
# Start with OpenAI
mcp-cli chat --server sqlite --provider openai --model gpt-4o
# In chat, switch to Anthropic for reasoning tasks
> /provider anthropic claude-3-opus
# Switch to Ollama for local processing
> /provider ollama llama3.2
# Compare responses across providers
> /provider openai
> What's the capital of France?
> /provider anthropic
> What's the capital of France?
```
### Complex Tool Workflows
```bash
# Database analysis workflow
> List all tables in the database
[Tool: list_tables] → products, customers, orders
> Show me the schema for the products table
[Tool: describe_table] → id, name, price, category, stock
> Find the top 10 most expensive products
[Tool: read_query] → SELECT name, price FROM products ORDER BY price DESC LIMIT 10
> Export this data to a CSV file
[Tool: write_file] → Saved to expensive_products.csv
```
### Automation and Scripting
```bash
# Batch data processing
for file in data/*.csv; do
mcp-cli cmd --server sqlite \
--tool analyze_data \
--tool-args "{\"file_path\": \"$file\"}" \
--output "results/$(basename "$file" .csv)_analysis.json"
done
# Pipeline processing
cat input.txt | \
mcp-cli cmd --server sqlite --prompt "Extract key entities" --input - | \
mcp-cli cmd --server sqlite --prompt "Categorize these entities" --input - > output.txt
```
### Performance Monitoring
```bash
# Enable verbose mode for detailed timing
> /verbose
# Monitor tool execution times
> /toolhistory
Tool Call History (15 calls)
 # | Tool        | Arguments              | Time
 1 | list_tables | {}                     | 0.12s
 2 | read_query  | {"query": "SELECT..."} | 0.45s
...
# Check provider performance
> /provider diagnostic
Provider Diagnostics
Provider  | Status   | Response Time | Features
openai    | ✅ Ready | 234ms         | 📡🔧👁️
anthropic | ✅ Ready | 187ms         | 📡🔧
ollama    | ✅ Ready | 56ms          | 📡🔧
```
## 🔍 Troubleshooting
### Common Issues
1. **"Missing argument 'KWARGS'" error**:
```bash
# Use equals sign format
mcp-cli chat --server=sqlite --provider=openai
# Or add double dash
mcp-cli chat -- --server sqlite --provider openai
```
2. **Provider not found**:
```bash
mcp-cli provider diagnostic
mcp-cli provider set <provider> api_key <your-key>
```
3. **Tool execution timeout**:
```bash
export MCP_TOOL_TIMEOUT=300 # 5 minutes
```
4. **Connection issues**:
```bash
mcp-cli ping --server <server-name>
mcp-cli servers
```
### Debug Mode
Enable verbose logging for troubleshooting:
```bash
mcp-cli --verbose chat --server sqlite
mcp-cli --log-level DEBUG interactive --server sqlite
```
## 🔒 Security Considerations
- **API Keys**: Stored securely in environment variables or protected files
- **File Access**: Filesystem access can be disabled with `--disable-filesystem` (example after this list)
- **Tool Validation**: All tool calls are validated before execution
- **Timeout Protection**: Configurable timeouts prevent hanging operations
- **Server Isolation**: Each server runs in its own process
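For example, to run a locked-down session with no filesystem access:

```bash
mcp-cli chat --server sqlite --disable-filesystem
```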
## 🚀 Performance Features
- **Concurrent Tool Execution**: Multiple tools can run simultaneously
- **Streaming Responses**: Real-time response generation
- **Connection Pooling**: Efficient reuse of client connections
- **Caching**: Tool metadata and provider configurations are cached
- **Async Architecture**: Non-blocking operations throughout
## 📦 Dependencies
Core dependencies are organized into feature groups:
- **cli**: Rich terminal UI, command completion, provider integrations
- **dev**: Development tools, testing utilities, linting
- **chuk-tool-processor**: Core tool execution and MCP communication
- **chuk-llm**: Unified LLM provider management
Install with specific features:
```bash
pip install "mcp-cli[cli]" # Basic CLI features
pip install "mcp-cli[cli,dev]" # CLI with development tools
```
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
### Development Setup
```bash
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
pip install -e ".[cli,dev]"
pre-commit install
```
### Running Tests
```bash
pytest
pytest --cov=mcp_cli --cov-report=html
```
## 📜 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- **[CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor)** - Async-native tool execution
- **[CHUK-LLM](https://github.com/chrishayuk/chuk-llm)** - Unified LLM provider management
- **[Rich](https://github.com/Textualize/rich)** - Beautiful terminal formatting
- **[Typer](https://typer.tiangolo.com/)** - CLI framework
- **[Prompt Toolkit](https://github.com/prompt-toolkit/python-prompt-toolkit)** - Interactive input
## 🔗 Related Projects
- **[Model Context Protocol](https://modelcontextprotocol.io/)** - Core protocol specification
- **[MCP Servers](https://github.com/modelcontextprotocol/servers)** - Official MCP server implementations
- **[CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor)** - Tool execution engine
- **[CHUK-LLM](https://github.com/chrishayuk/chuk-llm)** - LLM provider abstraction