mcp-cli

- Version: 0.8.6
- Summary: A CLI for the Model Context Protocol
- Uploaded: 2025-10-19 23:55:16
- Requires Python: >=3.11
- License: MIT
- Keywords: llm, openai, claude, mcp, cli

# MCP CLI - Model Context Protocol Command Line Interface

A powerful, feature-rich command-line interface for interacting with Model Context Protocol servers. This client enables seamless communication with LLMs through integration with the [CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor) and [CHUK-LLM](https://github.com/chrishayuk/chuk-llm), providing tool usage, conversation management, and multiple operational modes.

**Default Configuration**: MCP CLI defaults to using Ollama with the `gpt-oss` reasoning model for local, privacy-focused operation without requiring API keys.

## 🔄 Architecture Overview

The MCP CLI is built on a modular architecture with clean separation of concerns:

- **[CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor)**: Async-native tool execution and MCP server communication
- **[CHUK-LLM](https://github.com/chrishayuk/chuk-llm)**: Unified LLM provider configuration and client management with 200+ auto-generated functions
- **[CHUK-Term](https://github.com/chrishayuk/chuk-term)**: Enhanced terminal UI with themes, cross-platform terminal management, and rich formatting
- **MCP CLI**: Command orchestration and integration layer (this project)

## 🌟 Features

### Multiple Operational Modes
- **Chat Mode**: Conversational interface with streaming responses and automated tool usage (default: Ollama/gpt-oss)
- **Interactive Mode**: Command-driven shell interface for direct server operations
- **Command Mode**: Unix-friendly mode for scriptable automation and pipelines
- **Direct Commands**: Run individual commands without entering interactive mode

### Advanced Chat Interface
- **Streaming Responses**: Real-time response generation with live UI updates
- **Reasoning Visibility**: See the AI's thinking process with reasoning models (gpt-oss, GPT-5, Claude 4)
- **Concurrent Tool Execution**: Execute multiple tools simultaneously while preserving conversation order
- **Smart Interruption**: Interrupt streaming responses or tool execution with Ctrl+C
- **Performance Metrics**: Response timing, words/second, and execution statistics
- **Rich Formatting**: Markdown rendering, syntax highlighting, and progress indicators

### Comprehensive Provider Support

MCP CLI supports all providers and models from CHUK-LLM, including cutting-edge reasoning models:

| Provider | Key Models | Special Features |
|----------|------------|------------------|
| **Ollama** (Default) | 🧠 gpt-oss, llama3.3, llama3.2, qwen3, qwen2.5-coder, deepseek-coder, granite3.3, mistral, gemma3, phi3, codellama | Local reasoning models, privacy-focused, no API key required |
| **OpenAI** | 🚀 GPT-5 family (gpt-5, gpt-5-mini, gpt-5-nano), GPT-4o family, O3 series (o3, o3-mini) | Advanced reasoning, function calling, vision |
| **Anthropic** | 🧠 Claude 4 family (claude-4-1-opus, claude-4-sonnet), Claude 3.5 Sonnet | Enhanced reasoning, long context |
| **Azure OpenAI** 🏢 | Enterprise GPT-5, GPT-4 models | Private endpoints, compliance, audit logs |
| **Google Gemini** | Gemini 2.0 Flash, Gemini 1.5 Pro | Multimodal, fast inference |
| **Groq** ⚡ | Llama 3.1 models, Mixtral | Ultra-fast inference (500+ tokens/sec) |
| **Perplexity** 🌐 | Sonar models | Real-time web search with citations |
| **IBM watsonx** 🏢 | Granite, Llama models | Enterprise compliance |
| **Mistral AI** 🇪🇺 | Mistral Large, Medium | European, efficient models |

### Robust Tool System
- **Automatic Discovery**: Server-provided tools are automatically detected and catalogued
- **Provider Adaptation**: Tool names are automatically sanitized for provider compatibility
- **Concurrent Execution**: Multiple tools can run simultaneously with proper coordination
- **Rich Progress Display**: Real-time progress indicators and execution timing
- **Tool History**: Complete audit trail of all tool executions
- **Streaming Tool Calls**: Support for tools that return streaming data

### Advanced Configuration Management
- **Environment Integration**: API keys and settings via environment variables
- **File-based Config**: YAML and JSON configuration files
- **User Preferences**: Persistent settings for active providers and models
- **Validation & Diagnostics**: Built-in provider health checks and configuration validation

### Enhanced User Experience
- **Cross-Platform Support**: Windows, macOS, and Linux with platform-specific optimizations via chuk-term
- **Rich Console Output**: Powered by chuk-term with 8 built-in themes (default, dark, light, minimal, terminal, monokai, dracula, solarized)
- **Advanced Terminal Management**: Cross-platform terminal operations including clearing, resizing, color detection, and cursor control
- **Interactive UI Components**: User input handling through chuk-term's prompt system (ask, confirm, select_from_list, select_multiple)
- **Command Completion**: Context-aware tab completion for all interfaces
- **Comprehensive Help**: Detailed help system with examples and usage patterns
- **Graceful Error Handling**: User-friendly error messages with troubleshooting hints

## 📋 Prerequisites

- **Python 3.11 or higher**
- **For Local Operation (Default)**:
  - Ollama: Install from [ollama.ai](https://ollama.ai)
  - Pull the default reasoning model: `ollama pull gpt-oss`
- **For Cloud Providers** (Optional):
  - OpenAI: `OPENAI_API_KEY` environment variable (for GPT-5, GPT-4, O3 models)
  - Anthropic: `ANTHROPIC_API_KEY` environment variable (for Claude 4, Claude 3.5)
  - Azure: `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` (for enterprise GPT-5)
  - Google: `GEMINI_API_KEY` (for Gemini models)
  - Groq: `GROQ_API_KEY` (for fast Llama models)
  - Custom providers: Provider-specific configuration
- **MCP Servers**: Server configuration file (default: `server_config.json`)

## 🚀 Installation

### Quick Start with Ollama (Default)

1. **Install Ollama** (if not already installed):
```bash
# macOS/Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Or visit https://ollama.ai for other installation methods
```

2. **Pull the default reasoning model**:
```bash
ollama pull gpt-oss  # Open-source reasoning model with thinking visibility
```

3. **Install and run MCP CLI**:
```bash
# Using uvx (recommended)
uvx mcp-cli --help

# Or install from source
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
pip install -e "."
mcp-cli --help
```

### Using Different Models

```bash
# === LOCAL MODELS (No API Key Required) ===

# Use default reasoning model (gpt-oss)
mcp-cli --server sqlite

# Use other Ollama models
mcp-cli --model llama3.3              # Latest Llama
mcp-cli --model qwen2.5-coder         # Coding-focused
mcp-cli --model deepseek-coder        # Another coding model
mcp-cli --model granite3.3            # IBM Granite

# === CLOUD PROVIDERS (API Keys Required) ===

# GPT-5 Family (requires OpenAI API key)
mcp-cli --provider openai --model gpt-5          # Full GPT-5 with reasoning
mcp-cli --provider openai --model gpt-5-mini     # Efficient GPT-5 variant
mcp-cli --provider openai --model gpt-5-nano     # Ultra-lightweight GPT-5

# GPT-4 Family
mcp-cli --provider openai --model gpt-4o         # GPT-4 Optimized
mcp-cli --provider openai --model gpt-4o-mini    # Smaller GPT-4

# O3 Reasoning Models
mcp-cli --provider openai --model o3             # O3 reasoning
mcp-cli --provider openai --model o3-mini        # Efficient O3

# Claude 4 Family (requires Anthropic API key)
mcp-cli --provider anthropic --model claude-4-1-opus    # Most advanced Claude
mcp-cli --provider anthropic --model claude-4-sonnet    # Balanced Claude 4
mcp-cli --provider anthropic --model claude-3-5-sonnet  # Claude 3.5

# Enterprise Azure (requires Azure configuration)
mcp-cli --provider azure_openai --model gpt-5    # Enterprise GPT-5

# Other Providers
mcp-cli --provider gemini --model gemini-2.0-flash      # Google Gemini
mcp-cli --provider groq --model llama-3.1-70b          # Fast Llama via Groq
```

## 🧰 Global Configuration

### Default Configuration

MCP CLI defaults to:
- **Provider**: `ollama` (local, no API key required)
- **Model**: `gpt-oss` (open-source reasoning model with thinking visibility)

### Command-line Arguments

Global options available for all modes and commands; a combined example follows the list:

- `--server`: Specify server(s) to connect to (comma-separated)
- `--config-file`: Path to server configuration file (default: `server_config.json`)
- `--provider`: LLM provider (default: `ollama`)
- `--model`: Specific model to use (default: `gpt-oss` for Ollama)
- `--disable-filesystem`: Disable filesystem access (default: enabled)
- `--api-base`: Override API endpoint URL
- `--api-key`: Override API key (not needed for Ollama)
- `--verbose`: Enable detailed logging
- `--quiet`: Suppress non-essential output
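
Several of these options compose in a single invocation. A minimal sketch using only the flags documented above (the server names and config path are placeholders):

```bash
# Two servers, explicit config file, explicit provider/model, verbose logs
mcp-cli --server sqlite,filesystem \
        --config-file ./server_config.json \
        --provider ollama --model gpt-oss \
        --verbose
```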

### Environment Variables

```bash
# Override defaults
export LLM_PROVIDER=ollama              # Default provider (already the default)
export LLM_MODEL=gpt-oss                # Default model (already the default)

# For cloud providers (optional)
export OPENAI_API_KEY=sk-...           # For GPT-5, GPT-4, O3 models
export ANTHROPIC_API_KEY=sk-ant-...    # For Claude 4, Claude 3.5
export AZURE_OPENAI_API_KEY=sk-...     # For enterprise GPT-5
export AZURE_OPENAI_ENDPOINT=https://...
export GEMINI_API_KEY=...              # For Gemini models
export GROQ_API_KEY=...                # For Groq fast inference

# Tool configuration
export MCP_TOOL_TIMEOUT=120            # Tool execution timeout (seconds)
```

## 🌐 Available Modes

### 1. Chat Mode (Default)

Provides a natural language interface with streaming responses and automatic tool usage:

```bash
# Default mode with Ollama/gpt-oss reasoning model (no API key needed)
mcp-cli --server sqlite

# See the AI's thinking process with reasoning models
mcp-cli --server sqlite --model gpt-oss     # Open-source reasoning
mcp-cli --server sqlite --provider openai --model gpt-5  # GPT-5 reasoning
mcp-cli --server sqlite --provider anthropic --model claude-4-1-opus  # Claude 4 reasoning

# Use different local models
mcp-cli --server sqlite --model llama3.3
mcp-cli --server sqlite --model qwen2.5-coder

# Switch to cloud providers (requires API keys)
mcp-cli chat --server sqlite --provider openai --model gpt-5
mcp-cli chat --server sqlite --provider anthropic --model claude-4-sonnet
```

### 2. Interactive Mode

Command-driven shell interface for direct server operations:

```bash
mcp-cli interactive --server sqlite

# With specific models
mcp-cli interactive --server sqlite --model gpt-oss       # Local reasoning
mcp-cli interactive --server sqlite --provider openai --model gpt-5  # Cloud GPT-5
```

### 3. Command Mode

Unix-friendly interface for automation and scripting:

```bash
# Process text with reasoning models
mcp-cli cmd --server sqlite --model gpt-oss --prompt "Think through this step by step" --input data.txt

# Use GPT-5 for complex reasoning
mcp-cli cmd --server sqlite --provider openai --model gpt-5 --prompt "Analyze this data" --input data.txt

# Execute tools directly
mcp-cli cmd --server sqlite --tool list_tables --output tables.json

# Pipeline-friendly processing
echo "SELECT * FROM users LIMIT 5" | mcp-cli cmd --server sqlite --tool read_query --input -
```

### 4. Direct Commands

Execute individual commands without entering interactive mode:

```bash
# List available tools
mcp-cli tools --server sqlite

# Show provider configuration
mcp-cli provider list

# Show available models for current provider
mcp-cli models

# Show models for specific provider
mcp-cli models openai    # Shows GPT-5, GPT-4, O3 models
mcp-cli models anthropic # Shows Claude 4, Claude 3.5 models
mcp-cli models ollama    # Shows gpt-oss, llama3.3, etc.

# Ping servers
mcp-cli ping --server sqlite

# List resources
mcp-cli resources --server sqlite

# UI Theme Management
mcp-cli theme                     # Show current theme and list available
mcp-cli theme dark                # Switch to dark theme
mcp-cli theme --select            # Interactive theme selector
mcp-cli theme --list              # List all available themes
```

## 🤖 Using Chat Mode

Chat mode provides the most advanced interface with streaming responses and intelligent tool usage.

### Starting Chat Mode

```bash
# Simple startup with default reasoning model (gpt-oss)
mcp-cli --server sqlite

# Multiple servers
mcp-cli --server sqlite,filesystem

# With advanced reasoning models
mcp-cli --server sqlite --provider openai --model gpt-5
mcp-cli --server sqlite --provider anthropic --model claude-4-1-opus
```

### Chat Commands (Slash Commands)

#### Provider & Model Management
```bash
/provider                           # Show current configuration (default: ollama)
/provider list                      # List all providers
/provider config                    # Show detailed configuration
/provider diagnostic               # Test provider connectivity
/provider set ollama api_base http://localhost:11434  # Configure Ollama endpoint
/provider openai                   # Switch to OpenAI (requires API key)
/provider anthropic                # Switch to Anthropic (requires API key)
/provider openai gpt-5             # Switch to OpenAI GPT-5

# Custom Provider Management
/provider custom                   # List custom providers
/provider add localai http://localhost:8080/v1 gpt-4  # Add custom provider
/provider remove localai           # Remove custom provider

/model                             # Show current model (default: gpt-oss)
/model llama3.3                    # Switch to different Ollama model
/model gpt-5                       # Switch to GPT-5 (if using OpenAI)
/model claude-4-1-opus             # Switch to Claude 4 (if using Anthropic)
/models                            # List available models for current provider
```

#### Tool Management
```bash
/tools                             # List available tools
/tools --all                       # Show detailed tool information
/tools --raw                       # Show raw JSON definitions
/tools call                        # Interactive tool execution

/toolhistory                       # Show tool execution history
/th -n 5                          # Last 5 tool calls
/th 3                             # Details for call #3
/th --json                        # Full history as JSON
```

#### Server Management (Runtime Configuration)
```bash
/server                            # List all configured servers
/server list                       # List servers (alias)
/server list all                   # Include disabled servers

# Add servers at runtime (persists in ~/.mcp-cli/preferences.json)
/server add <name> stdio <command> [args...]
/server add sqlite stdio uvx mcp-server-sqlite --db-path test.db
/server add playwright stdio npx @playwright/mcp@latest
/server add time stdio uvx mcp-server-time
/server add fs stdio npx @modelcontextprotocol/server-filesystem /path/to/dir

# HTTP/SSE server examples with authentication
/server add github --transport http --header "Authorization: Bearer ghp_token" -- https://api.github.com/mcp
/server add myapi --transport http --env API_KEY=secret -- https://api.example.com/mcp
/server add events --transport sse -- https://events.example.com/sse

# Manage server state
/server enable <name>              # Enable a disabled server
/server disable <name>             # Disable without removing
/server remove <name>              # Remove user-added server
/server ping <name>                # Test server connectivity

# Server details
/server <name>                     # Show server configuration details
```

**Note**: Servers added via `/server add` are stored in `~/.mcp-cli/preferences.json` and persist across sessions. Project servers remain in `server_config.json`.
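
To check where a particular server is defined, you can inspect both files directly. A minimal sketch, assuming `jq` is installed and the paths above (the internal layout of `preferences.json` is not documented here, so plain `grep` is used for it):

```bash
# Project-level servers (shared via server_config.json)
jq '.mcpServers | keys' server_config.json

# User-level servers added with /server add
grep -n '"playwright"' ~/.mcp-cli/preferences.json
```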

#### Conversation Management
```bash
/conversation                      # Show conversation history
/ch -n 10                         # Last 10 messages
/ch 5                             # Details for message #5
/ch --json                        # Full history as JSON

/save conversation.json            # Save conversation to file
/compact                          # Summarize conversation
/clear                            # Clear conversation history
/cls                              # Clear screen only
```

#### UI Customization
```bash
/theme                            # Interactive theme selector with preview
/theme dark                       # Switch to dark theme
/theme monokai                    # Switch to monokai theme

# Available themes: default, dark, light, minimal, terminal, monokai, dracula, solarized
# Themes are persisted across sessions
```

#### Session Control
```bash
/verbose                          # Toggle verbose/compact display (Default: Enabled)
/confirm                          # Toggle tool call confirmation (Default: Enabled)
/interrupt                        # Stop running operations
/server                           # Manage MCP servers (see Server Management above)
/help                            # Show all commands
/help tools                       # Help for specific command
/exit                            # Exit chat mode
```

### Chat Features

#### Streaming Responses with Reasoning Visibility
- **🧠 Reasoning Models**: See the AI's thinking process with gpt-oss, GPT-5, Claude 4
- **Real-time Generation**: Watch text appear token by token
- **Performance Metrics**: Words/second, response time
- **Graceful Interruption**: Ctrl+C to stop streaming
- **Progressive Rendering**: Markdown formatted as it streams

#### Tool Execution
- Automatic tool discovery and usage
- Concurrent execution with progress indicators
- Verbose and compact display modes
- Complete execution history and timing

#### Provider Integration
- Seamless switching between providers
- Model-specific optimizations
- API key and endpoint management
- Health monitoring and diagnostics

## 🖥️ Using Interactive Mode

Interactive mode provides a command shell for direct server interaction.

### Starting Interactive Mode

```bash
mcp-cli interactive --server sqlite
```

### Interactive Commands

```bash
help                              # Show available commands
exit                              # Exit interactive mode
clear                             # Clear terminal

# Provider management
provider                          # Show current provider
provider list                     # List providers
provider anthropic                # Switch provider
provider openai gpt-5             # Switch to GPT-5

# Model management
model                             # Show current model
model gpt-oss                     # Switch to reasoning model
model claude-4-1-opus             # Switch to Claude 4
models                            # List available models

# Tool operations
tools                             # List tools
tools --all                       # Detailed tool info
tools call                        # Interactive tool execution

# Server operations
servers                           # List servers
ping                              # Ping all servers
resources                         # List resources
prompts                           # List prompts
```

## 📄 Using Command Mode

Command mode provides Unix-friendly automation capabilities.

### Command Mode Options

```bash
--input FILE                      # Input file (- for stdin)
--output FILE                     # Output file (- for stdout)
--prompt TEXT                     # Prompt template
--tool TOOL                       # Execute specific tool
--tool-args JSON                  # Tool arguments as JSON
--system-prompt TEXT              # Custom system prompt
--raw                             # Raw output without formatting
--single-turn                     # Disable multi-turn conversation
--max-turns N                     # Maximum conversation turns
```

### Examples

```bash
# Text processing with reasoning models
echo "Analyze this data" | mcp-cli cmd --server sqlite --model gpt-oss --input - --output analysis.txt

# Use GPT-5 for complex analysis
mcp-cli cmd --server sqlite --provider openai --model gpt-5 --prompt "Provide strategic analysis" --input report.txt

# Tool execution
mcp-cli cmd --server sqlite --tool list_tables --raw

# Complex queries
mcp-cli cmd --server sqlite --tool read_query --tool-args '{"query": "SELECT COUNT(*) FROM users"}'

# Batch processing with GNU Parallel
ls *.txt | parallel mcp-cli cmd --server sqlite --input {} --output {}.summary --prompt "Summarize: {{input}}"
```
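
The turn-control and system-prompt options compose the same way. A sketch combining flags from the options list above (file names are placeholders):

```bash
# One-shot review with a custom system prompt and no follow-up turns
mcp-cli cmd --server sqlite \
  --system-prompt "You are a terse SQL reviewer" \
  --prompt "Review this query for performance issues" \
  --input query.sql --output review.txt \
  --single-turn

# Bounded multi-turn run for tool-heavy tasks
mcp-cli cmd --server sqlite --prompt "Summarize every table" --max-turns 5
```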

## 🔧 Provider Configuration

### Ollama Configuration (Default)

Ollama serves models locally at `http://localhost:11434` by default. Pull the models you want to use:

```bash
# Pull reasoning and other models for Ollama
ollama pull gpt-oss          # Default reasoning model
ollama pull llama3.3         # Latest Llama
ollama pull llama3.2         # Llama 3.2
ollama pull qwen3            # Qwen 3
ollama pull qwen2.5-coder    # Coding-focused
ollama pull deepseek-coder   # DeepSeek coder
ollama pull granite3.3       # IBM Granite
ollama pull mistral          # Mistral
ollama pull gemma3           # Google Gemma
ollama pull phi3             # Microsoft Phi
ollama pull codellama        # Code Llama

# List available Ollama models
ollama list

# Configure remote Ollama server
mcp-cli provider set ollama api_base http://remote-server:11434
```

### Cloud Provider Configuration

To use cloud providers with advanced models, configure API keys:

```bash
# Configure OpenAI (for GPT-5, GPT-4, O3 models)
mcp-cli provider set openai api_key sk-your-key-here

# Configure Anthropic (for Claude 4, Claude 3.5)
mcp-cli provider set anthropic api_key sk-ant-your-key-here

# Configure Azure OpenAI (for enterprise GPT-5)
mcp-cli provider set azure_openai api_key sk-your-key-here
mcp-cli provider set azure_openai api_base https://your-resource.openai.azure.com

# Configure other providers
mcp-cli provider set gemini api_key your-gemini-key
mcp-cli provider set groq api_key your-groq-key

# Test configuration
mcp-cli provider diagnostic openai
mcp-cli provider diagnostic anthropic
```

### Custom OpenAI-Compatible Providers

MCP CLI supports adding custom OpenAI-compatible providers (LocalAI, custom proxies, etc.):

```bash
# Add a custom provider (persisted across sessions)
mcp-cli provider add localai http://localhost:8080/v1 gpt-4 gpt-3.5-turbo
mcp-cli provider add myproxy https://proxy.example.com/v1 custom-model-1 custom-model-2

# Set API key via environment variable (never stored in config)
export LOCALAI_API_KEY=your-api-key
export MYPROXY_API_KEY=your-api-key

# List custom providers
mcp-cli provider custom

# Use custom provider
mcp-cli --provider localai --server sqlite
mcp-cli --provider myproxy --model custom-model-1 --server sqlite

# Remove custom provider
mcp-cli provider remove localai

# Runtime provider (session-only, not persisted)
mcp-cli --provider temp-ai --api-base https://api.temp.com/v1 --api-key test-key --server sqlite
```

**Security Note**: API keys are NEVER stored in configuration files. Use environment variables following the pattern `{PROVIDER_NAME}_API_KEY` or pass via `--api-key` for session-only use.

### Manual Configuration

The `chuk_llm` library reads its configuration from `~/.chuk_llm/config.yaml`:

```yaml
ollama:
  api_base: http://localhost:11434
  default_model: gpt-oss

openai:
  api_base: https://api.openai.com/v1
  default_model: gpt-5

anthropic:
  api_base: https://api.anthropic.com
  default_model: claude-4-1-opus

azure_openai:
  api_base: https://your-resource.openai.azure.com
  default_model: gpt-5

gemini:
  api_base: https://generativelanguage.googleapis.com
  default_model: gemini-2.0-flash

groq:
  api_base: https://api.groq.com
  default_model: llama-3.1-70b
```

API keys (if using cloud providers) in `~/.chuk_llm/.env`:

```bash
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
AZURE_OPENAI_API_KEY=sk-your-azure-key-here
GEMINI_API_KEY=your-gemini-key
GROQ_API_KEY=your-groq-key
```

## 📂 Server Configuration

MCP CLI supports two types of server configurations:

1. **Project Servers** (`server_config.json`): Shared project-level configurations
2. **User Servers** (`~/.mcp-cli/preferences.json`): Personal runtime-added servers that persist across sessions

### Project Configuration

Create a `server_config.json` file with your MCP server configurations:

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "python",
      "args": ["-m", "mcp_server.sqlite_server"],
      "env": {
        "DATABASE_PATH": "database.db"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"],
      "env": {}
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your-brave-api-key"
      }
    }
  }
}
```

### Runtime Server Management

Add servers dynamically at runtime without editing configuration files:

```bash
# Add STDIO servers (most common)
mcp-cli
> /server add sqlite stdio uvx mcp-server-sqlite --db-path mydata.db
> /server add playwright stdio npx @playwright/mcp@latest
> /server add time stdio uvx mcp-server-time

# Add HTTP servers with authentication
> /server add github --transport http --header "Authorization: Bearer ghp_token" -- https://api.github.com/mcp
> /server add myapi --transport http --env API_KEY=secret -- https://api.example.com/mcp

# Add SSE (Server-Sent Events) servers
> /server add events --transport sse -- https://events.example.com/sse

# Manage servers
> /server list                     # Show all servers
> /server disable sqlite           # Temporarily disable
> /server enable sqlite            # Re-enable
> /server remove myapi             # Remove user-added server
```

**Key Points:**
- User-added servers persist in `~/.mcp-cli/preferences.json`
- Survive application restarts
- Can be enabled/disabled without removal
- Support STDIO, HTTP, and SSE transports
- Environment variables and headers for authentication

## 📈 Advanced Usage Examples

### Reasoning Model Comparison

```bash
# Compare reasoning across different models
> /provider ollama
> /model gpt-oss
> Think through this problem step by step: If a train leaves New York at 3 PM...
[See the complete thinking process with gpt-oss]

> /provider openai
> /model gpt-5
> Think through this problem step by step: If a train leaves New York at 3 PM...
[See GPT-5's reasoning approach]

> /provider anthropic
> /model claude-4-1-opus
> Think through this problem step by step: If a train leaves New York at 3 PM...
[See Claude 4's analytical process]
```

### Local-First Workflow with Reasoning

```bash
# Start with default Ollama/gpt-oss (no API key needed)
mcp-cli chat --server sqlite

# Use reasoning model for complex problems
> Think through this database optimization problem step by step
[gpt-oss shows its complete thinking process before answering]

# Try different local models for different tasks
> /model llama3.3              # General purpose
> /model qwen2.5-coder         # For coding tasks
> /model deepseek-coder        # Alternative coding model
> /model granite3.3            # IBM's model
> /model gpt-oss               # Back to reasoning model

# Switch to cloud when needed (requires API keys)
> /provider openai
> /model gpt-5
> Complex enterprise architecture design...

> /provider anthropic
> /model claude-4-1-opus
> Detailed strategic analysis...

> /provider ollama
> /model gpt-oss
> Continue with local processing...
```

### Multi-Provider Workflow

```bash
# Start with local reasoning (default, no API key)
mcp-cli chat --server sqlite

# Compare responses across providers
> /provider ollama
> What's the best way to optimize this SQL query?

> /provider openai gpt-5        # Requires API key
> What's the best way to optimize this SQL query?

> /provider anthropic claude-4-sonnet  # Requires API key
> What's the best way to optimize this SQL query?

# Use each provider's strengths
> /provider ollama gpt-oss      # Local reasoning, privacy
> /provider openai gpt-5        # Advanced reasoning
> /provider anthropic claude-4-1-opus  # Deep analysis
> /provider groq llama-3.1-70b  # Ultra-fast responses
```

### Complex Tool Workflows with Reasoning

```bash
# Use reasoning model for complex database tasks
> /model gpt-oss
> I need to analyze our database performance. Think through what we should check first.
[gpt-oss shows thinking: "First, I should check the table structure, then indexes, then query patterns..."]
[Tool: list_tables] → products, customers, orders

> Now analyze the indexes and suggest optimizations
[gpt-oss thinks through index analysis]
[Tool: describe_table] → Shows current indexes
[Tool: read_query] → Analyzes query patterns

> Create an optimization plan based on your analysis
[Complete reasoning process followed by specific recommendations]
```

### Automation and Scripting

```bash
# Batch processing with different models
for file in data/*.csv; do
  # Use reasoning model for analysis
  mcp-cli cmd --server sqlite \
    --model gpt-oss \
    --prompt "Analyze this data and think through patterns" \
    --input "$file" \
    --output "analysis/$(basename "$file" .csv)_reasoning.txt"
  
  # Use coding model for generating scripts
  mcp-cli cmd --server sqlite \
    --model qwen2.5-coder \
    --prompt "Generate Python code to process this data" \
    --input "$file" \
    --output "scripts/$(basename "$file" .csv)_script.py"
done

# Pipeline with reasoning
cat complex_problem.txt | \
  mcp-cli cmd --model gpt-oss --prompt "Think through this step by step" --input - | \
  mcp-cli cmd --model llama3.3 --prompt "Summarize the key points" --input - > solution.txt
```

### Performance Monitoring

```bash
# Check provider and model performance
> /provider diagnostic
Provider Diagnostics
Provider      | Status      | Response Time | Features      | Models
ollama        | ✅ Ready    | 56ms         | 📡🔧         | gpt-oss, llama3.3, qwen3, ...
openai        | ✅ Ready    | 234ms        | 📡🔧👁️      | gpt-5, gpt-4o, o3, ...
anthropic     | ✅ Ready    | 187ms        | 📡🔧         | claude-4-1-opus, claude-4-sonnet, ...
azure_openai  | ✅ Ready    | 198ms        | 📡🔧👁️      | gpt-5, gpt-4o, ...
gemini        | ✅ Ready    | 156ms        | 📡🔧👁️      | gemini-2.0-flash, ...
groq          | ✅ Ready    | 45ms         | 📡🔧         | llama-3.1-70b, ...

# Check available models
> /models
Models for ollama (Current Provider)
Model                | Status
gpt-oss             | Current & Default (Reasoning)
llama3.3            | Available
llama3.2            | Available
qwen2.5-coder       | Available
deepseek-coder      | Available
granite3.3          | Available
... and 6 more

# Monitor tool execution with reasoning
> /verbose
> /model gpt-oss
> Analyze the database and optimize the slowest queries
[Shows complete thinking process]
[Tool execution with timing]
```

## 🔍 Troubleshooting

### Common Issues

1. **Ollama not running** (default provider):
   ```bash
   # Start Ollama service
   ollama serve
   
   # Or check if it's running
   curl http://localhost:11434/api/tags
   ```

2. **Model not found**:
   ```bash
   # For Ollama (default), pull the model first
   ollama pull gpt-oss      # Reasoning model
   ollama pull llama3.3     # Latest Llama
   ollama pull qwen2.5-coder # Coding model
   
   # List available models
   ollama list
   
   # For cloud providers, check supported models
   mcp-cli models openai     # Shows GPT-5, GPT-4, O3 models
   mcp-cli models anthropic  # Shows Claude 4, Claude 3.5 models
   ```

3. **Provider not found or API key missing**:
   ```bash
   # Check available providers
   mcp-cli provider list
   
   # For cloud providers, set API keys
   mcp-cli provider set openai api_key sk-your-key
   mcp-cli provider set anthropic api_key sk-ant-your-key
   
   # Test connection
   mcp-cli provider diagnostic openai
   ```

4. **Connection issues with Ollama**:
   ```bash
   # Check Ollama is running
   ollama list
   
   # Test connection
   mcp-cli provider diagnostic ollama
   
   # Configure custom endpoint if needed
   mcp-cli provider set ollama api_base http://localhost:11434
   ```

### Debug Mode

Enable verbose logging for troubleshooting:

```bash
mcp-cli --verbose chat --server sqlite
mcp-cli --log-level DEBUG interactive --server sqlite
```

## 🔒 Security Considerations

- **Local by Default**: Ollama with gpt-oss runs locally, keeping your data private
- **API Keys**: Only needed for cloud providers (OpenAI, Anthropic, etc.); keys live in environment variables or `~/.chuk_llm/.env`, never in configuration files
- **File Access**: Filesystem access can be disabled with `--disable-filesystem` (see the sketch after this list)
- **Tool Validation**: All tool calls are validated before execution
- **Timeout Protection**: Configurable timeouts prevent hanging operations
- **Server Isolation**: Each server runs in its own process
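
A locked-down invocation combining these controls might look like the following sketch (the timeout value is illustrative):

```bash
# No filesystem access, a short tool timeout, and quiet output
export MCP_TOOL_TIMEOUT=30   # seconds; the default shown earlier is 120
mcp-cli --server sqlite --disable-filesystem --quiet
```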

## 🚀 Performance Features

- **Local Processing**: Default Ollama provider minimizes latency
- **Reasoning Visibility**: See the AI's thinking process with gpt-oss, GPT-5, Claude 4
- **Concurrent Tool Execution**: Multiple tools can run simultaneously
- **Streaming Responses**: Real-time response generation
- **Connection Pooling**: Efficient reuse of client connections
- **Caching**: Tool metadata and provider configurations are cached
- **Async Architecture**: Non-blocking operations throughout

## 📦 Dependencies

Core dependencies are organized into feature groups:

- **cli**: Terminal UI and command framework (Rich, Typer, chuk-term)
- **dev**: Development tools, testing utilities, linting
- **chuk-tool-processor**: Core tool execution and MCP communication
- **chuk-llm**: Unified LLM provider management with 200+ auto-generated functions
- **chuk-term**: Enhanced terminal UI with themes, prompts, and cross-platform support

Install with specific features:
```bash
pip install "mcp-cli[cli]"        # Basic CLI features
pip install "mcp-cli[cli,dev]"    # CLI with development tools
```

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.

### Development Setup

```bash
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
pip install -e ".[cli,dev]"
pre-commit install
```

### Demo Scripts

Explore the capabilities of MCP CLI:

```bash
# Custom Provider Management Demos

# Interactive walkthrough demo (educational)
uv run examples/custom_provider_demo.py

# Working demo with actual inference (requires OPENAI_API_KEY)
uv run examples/custom_provider_working_demo.py

# Simple shell script demo (requires OPENAI_API_KEY)
bash examples/custom_provider_simple_demo.sh

# Terminal management features (chuk-term)
uv run examples/ui_terminal_demo.py

# Output system with themes (chuk-term)
uv run examples/ui_output_demo.py

# Streaming UI capabilities (chuk-term)
uv run examples/ui_streaming_demo.py
```

### Running Tests

```bash
pytest
pytest --cov=mcp_cli --cov-report=html
```

## 📜 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- **[CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor)** - Async-native tool execution
- **[CHUK-LLM](https://github.com/chrishayuk/chuk-llm)** - Unified LLM provider management with GPT-5, Claude 4, and reasoning model support
- **[CHUK-Term](https://github.com/chrishayuk/chuk-term)** - Enhanced terminal UI with themes and cross-platform support
- **[Rich](https://github.com/Textualize/rich)** - Beautiful terminal formatting
- **[Typer](https://typer.tiangolo.com/)** - CLI framework
- **[Prompt Toolkit](https://github.com/prompt-toolkit/python-prompt-toolkit)** - Interactive input

## 🔗 Related Projects

- **[Model Context Protocol](https://modelcontextprotocol.io/)** - Core protocol specification
- **[MCP Servers](https://github.com/modelcontextprotocol/servers)** - Official MCP server implementations
- **[CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor)** - Tool execution engine
- **[CHUK-LLM](https://github.com/chrishayuk/chuk-llm)** - LLM provider abstraction with GPT-5, Claude 4, O3 series support
- **[CHUK-Term](https://github.com/chrishayuk/chuk-term)** - Terminal UI library with themes and cross-platform support

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "mcp-cli",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.11",
    "maintainer_email": null,
    "keywords": "llm, openai, claude, mcp, cli",
    "author": null,
    "author_email": "Chris Hay <chrishayuk@somejunkmailbox.com>",
    "download_url": "https://files.pythonhosted.org/packages/88/4a/abf81e9d6a3f7ffedf924451f42217cb0639d5a8200453ecb186fdfa3e5a/mcp_cli-0.8.6.tar.gz",
    "platform": null,
    "description": "# MCP CLI - Model Context Protocol Command Line Interface\n\nA powerful, feature-rich command-line interface for interacting with Model Context Protocol servers. This client enables seamless communication with LLMs through integration with the [CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor) and [CHUK-LLM](https://github.com/chrishayuk/chuk-llm), providing tool usage, conversation management, and multiple operational modes.\n\n**Default Configuration**: MCP CLI defaults to using Ollama with the `gpt-oss` reasoning model for local, privacy-focused operation without requiring API keys.\n\n## \ud83d\udd04 Architecture Overview\n\nThe MCP CLI is built on a modular architecture with clean separation of concerns:\n\n- **[CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor)**: Async-native tool execution and MCP server communication\n- **[CHUK-LLM](https://github.com/chrishayuk/chuk-llm)**: Unified LLM provider configuration and client management with 200+ auto-generated functions\n- **[CHUK-Term](https://github.com/chrishayuk/chuk-term)**: Enhanced terminal UI with themes, cross-platform terminal management, and rich formatting\n- **MCP CLI**: Command orchestration and integration layer (this project)\n\n## \ud83c\udf1f Features\n\n### Multiple Operational Modes\n- **Chat Mode**: Conversational interface with streaming responses and automated tool usage (default: Ollama/gpt-oss)\n- **Interactive Mode**: Command-driven shell interface for direct server operations\n- **Command Mode**: Unix-friendly mode for scriptable automation and pipelines\n- **Direct Commands**: Run individual commands without entering interactive mode\n\n### Advanced Chat Interface\n- **Streaming Responses**: Real-time response generation with live UI updates\n- **Reasoning Visibility**: See AI's thinking process with reasoning models (gpt-oss, GPT-5, Claude 4)\n- **Concurrent Tool Execution**: Execute multiple tools simultaneously while preserving conversation order\n- **Smart Interruption**: Interrupt streaming responses or tool execution with Ctrl+C\n- **Performance Metrics**: Response timing, words/second, and execution statistics\n- **Rich Formatting**: Markdown rendering, syntax highlighting, and progress indicators\n\n### Comprehensive Provider Support\n\nMCP CLI supports all providers and models from CHUK-LLM, including cutting-edge reasoning models:\n\n| Provider | Key Models | Special Features |\n|----------|------------|------------------|\n| **Ollama** (Default) | \ud83e\udde0 gpt-oss, llama3.3, llama3.2, qwen3, qwen2.5-coder, deepseek-coder, granite3.3, mistral, gemma3, phi3, codellama | Local reasoning models, privacy-focused, no API key required |\n| **OpenAI** | \ud83d\ude80 GPT-5 family (gpt-5, gpt-5-mini, gpt-5-nano), GPT-4o family, O3 series (o3, o3-mini) | Advanced reasoning, function calling, vision |\n| **Anthropic** | \ud83e\udde0 Claude 4 family (claude-4-1-opus, claude-4-sonnet), Claude 3.5 Sonnet | Enhanced reasoning, long context |\n| **Azure OpenAI** \ud83c\udfe2 | Enterprise GPT-5, GPT-4 models | Private endpoints, compliance, audit logs |\n| **Google Gemini** | Gemini 2.0 Flash, Gemini 1.5 Pro | Multimodal, fast inference |\n| **Groq** \u26a1 | Llama 3.1 models, Mixtral | Ultra-fast inference (500+ tokens/sec) |\n| **Perplexity** \ud83c\udf10 | Sonar models | Real-time web search with citations |\n| **IBM watsonx** \ud83c\udfe2 | Granite, Llama models | Enterprise compliance |\n| **Mistral AI** \ud83c\uddea\ud83c\uddfa | Mistral 
Large, Medium | European, efficient models |\n\n### Robust Tool System\n- **Automatic Discovery**: Server-provided tools are automatically detected and catalogued\n- **Provider Adaptation**: Tool names are automatically sanitized for provider compatibility\n- **Concurrent Execution**: Multiple tools can run simultaneously with proper coordination\n- **Rich Progress Display**: Real-time progress indicators and execution timing\n- **Tool History**: Complete audit trail of all tool executions\n- **Streaming Tool Calls**: Support for tools that return streaming data\n\n### Advanced Configuration Management\n- **Environment Integration**: API keys and settings via environment variables\n- **File-based Config**: YAML and JSON configuration files\n- **User Preferences**: Persistent settings for active providers and models\n- **Validation & Diagnostics**: Built-in provider health checks and configuration validation\n\n### Enhanced User Experience\n- **Cross-Platform Support**: Windows, macOS, and Linux with platform-specific optimizations via chuk-term\n- **Rich Console Output**: Powered by chuk-term with 8 built-in themes (default, dark, light, minimal, terminal, monokai, dracula, solarized)\n- **Advanced Terminal Management**: Cross-platform terminal operations including clearing, resizing, color detection, and cursor control\n- **Interactive UI Components**: User input handling through chuk-term's prompt system (ask, confirm, select_from_list, select_multiple)\n- **Command Completion**: Context-aware tab completion for all interfaces\n- **Comprehensive Help**: Detailed help system with examples and usage patterns\n- **Graceful Error Handling**: User-friendly error messages with troubleshooting hints\n\n## \ud83d\udccb Prerequisites\n\n- **Python 3.11 or higher**\n- **For Local Operation (Default)**:\n  - Ollama: Install from [ollama.ai](https://ollama.ai)\n  - Pull the default reasoning model: `ollama pull gpt-oss`\n- **For Cloud Providers** (Optional):\n  - OpenAI: `OPENAI_API_KEY` environment variable (for GPT-5, GPT-4, O3 models)\n  - Anthropic: `ANTHROPIC_API_KEY` environment variable (for Claude 4, Claude 3.5)\n  - Azure: `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_ENDPOINT` (for enterprise GPT-5)\n  - Google: `GEMINI_API_KEY` (for Gemini models)\n  - Groq: `GROQ_API_KEY` (for fast Llama models)\n  - Custom providers: Provider-specific configuration\n- **MCP Servers**: Server configuration file (default: `server_config.json`)\n\n## \ud83d\ude80 Installation\n\n### Quick Start with Ollama (Default)\n\n1. **Install Ollama** (if not already installed):\n```bash\n# macOS/Linux\ncurl -fsSL https://ollama.ai/install.sh | sh\n\n# Or visit https://ollama.ai for other installation methods\n```\n\n2. **Pull the default reasoning model**:\n```bash\nollama pull gpt-oss  # Open-source reasoning model with thinking visibility\n```\n\n3. 
**Install and run MCP CLI**:\n```bash\n# Using uvx (recommended)\nuvx mcp-cli --help\n\n# Or install from source\ngit clone https://github.com/chrishayuk/mcp-cli\ncd mcp-cli\npip install -e \".\"\nmcp-cli --help\n```\n\n### Using Different Models\n\n```bash\n# === LOCAL MODELS (No API Key Required) ===\n\n# Use default reasoning model (gpt-oss)\nmcp-cli --server sqlite\n\n# Use other Ollama models\nmcp-cli --model llama3.3              # Latest Llama\nmcp-cli --model qwen2.5-coder         # Coding-focused\nmcp-cli --model deepseek-coder        # Another coding model\nmcp-cli --model granite3.3            # IBM Granite\n\n# === CLOUD PROVIDERS (API Keys Required) ===\n\n# GPT-5 Family (requires OpenAI API key)\nmcp-cli --provider openai --model gpt-5          # Full GPT-5 with reasoning\nmcp-cli --provider openai --model gpt-5-mini     # Efficient GPT-5 variant\nmcp-cli --provider openai --model gpt-5-nano     # Ultra-lightweight GPT-5\n\n# GPT-4 Family\nmcp-cli --provider openai --model gpt-4o         # GPT-4 Optimized\nmcp-cli --provider openai --model gpt-4o-mini    # Smaller GPT-4\n\n# O3 Reasoning Models\nmcp-cli --provider openai --model o3             # O3 reasoning\nmcp-cli --provider openai --model o3-mini        # Efficient O3\n\n# Claude 4 Family (requires Anthropic API key)\nmcp-cli --provider anthropic --model claude-4-1-opus    # Most advanced Claude\nmcp-cli --provider anthropic --model claude-4-sonnet    # Balanced Claude 4\nmcp-cli --provider anthropic --model claude-3-5-sonnet  # Claude 3.5\n\n# Enterprise Azure (requires Azure configuration)\nmcp-cli --provider azure_openai --model gpt-5    # Enterprise GPT-5\n\n# Other Providers\nmcp-cli --provider gemini --model gemini-2.0-flash      # Google Gemini\nmcp-cli --provider groq --model llama-3.1-70b          # Fast Llama via Groq\n```\n\n## \ud83e\uddf0 Global Configuration\n\n### Default Configuration\n\nMCP CLI defaults to:\n- **Provider**: `ollama` (local, no API key required)\n- **Model**: `gpt-oss` (open-source reasoning model with thinking visibility)\n\n### Command-line Arguments\n\nGlobal options available for all modes and commands:\n\n- `--server`: Specify server(s) to connect to (comma-separated)\n- `--config-file`: Path to server configuration file (default: `server_config.json`)\n- `--provider`: LLM provider (default: `ollama`)\n- `--model`: Specific model to use (default: `gpt-oss` for Ollama)\n- `--disable-filesystem`: Disable filesystem access (default: enabled)\n- `--api-base`: Override API endpoint URL\n- `--api-key`: Override API key (not needed for Ollama)\n- `--verbose`: Enable detailed logging\n- `--quiet`: Suppress non-essential output\n\n### Environment Variables\n\n```bash\n# Override defaults\nexport LLM_PROVIDER=ollama              # Default provider (already the default)\nexport LLM_MODEL=gpt-oss                # Default model (already the default)\n\n# For cloud providers (optional)\nexport OPENAI_API_KEY=sk-...           # For GPT-5, GPT-4, O3 models\nexport ANTHROPIC_API_KEY=sk-ant-...    # For Claude 4, Claude 3.5\nexport AZURE_OPENAI_API_KEY=sk-...     # For enterprise GPT-5\nexport AZURE_OPENAI_ENDPOINT=https://...\nexport GEMINI_API_KEY=...              # For Gemini models\nexport GROQ_API_KEY=...                # For Groq fast inference\n\n# Tool configuration\nexport MCP_TOOL_TIMEOUT=120            # Tool execution timeout (seconds)\n```\n\n## \ud83c\udf10 Available Modes\n\n### 1. 
Chat Mode (Default)\n\nProvides a natural language interface with streaming responses and automatic tool usage:\n\n```bash\n# Default mode with Ollama/gpt-oss reasoning model (no API key needed)\nmcp-cli --server sqlite\n\n# See the AI's thinking process with reasoning models\nmcp-cli --server sqlite --model gpt-oss     # Open-source reasoning\nmcp-cli --server sqlite --provider openai --model gpt-5  # GPT-5 reasoning\nmcp-cli --server sqlite --provider anthropic --model claude-4-1-opus  # Claude 4 reasoning\n\n# Use different local models\nmcp-cli --server sqlite --model llama3.3\nmcp-cli --server sqlite --model qwen2.5-coder\n\n# Switch to cloud providers (requires API keys)\nmcp-cli chat --server sqlite --provider openai --model gpt-5\nmcp-cli chat --server sqlite --provider anthropic --model claude-4-sonnet\n```\n\n### 2. Interactive Mode\n\nCommand-driven shell interface for direct server operations:\n\n```bash\nmcp-cli interactive --server sqlite\n\n# With specific models\nmcp-cli interactive --server sqlite --model gpt-oss       # Local reasoning\nmcp-cli interactive --server sqlite --provider openai --model gpt-5  # Cloud GPT-5\n```\n\n### 3. Command Mode\n\nUnix-friendly interface for automation and scripting:\n\n```bash\n# Process text with reasoning models\nmcp-cli cmd --server sqlite --model gpt-oss --prompt \"Think through this step by step\" --input data.txt\n\n# Use GPT-5 for complex reasoning\nmcp-cli cmd --server sqlite --provider openai --model gpt-5 --prompt \"Analyze this data\" --input data.txt\n\n# Execute tools directly\nmcp-cli cmd --server sqlite --tool list_tables --output tables.json\n\n# Pipeline-friendly processing\necho \"SELECT * FROM users LIMIT 5\" | mcp-cli cmd --server sqlite --tool read_query --input -\n```\n\n### 4. 
Direct Commands\n\nExecute individual commands without entering interactive mode:\n\n```bash\n# List available tools\nmcp-cli tools --server sqlite\n\n# Show provider configuration\nmcp-cli provider list\n\n# Show available models for current provider\nmcp-cli models\n\n# Show models for specific provider\nmcp-cli models openai    # Shows GPT-5, GPT-4, O3 models\nmcp-cli models anthropic # Shows Claude 4, Claude 3.5 models\nmcp-cli models ollama    # Shows gpt-oss, llama3.3, etc.\n\n# Ping servers\nmcp-cli ping --server sqlite\n\n# List resources\nmcp-cli resources --server sqlite\n\n# UI Theme Management\nmcp-cli theme                     # Show current theme and list available\nmcp-cli theme dark                # Switch to dark theme\nmcp-cli theme --select            # Interactive theme selector\nmcp-cli theme --list              # List all available themes\n```\n\n## \ud83e\udd16 Using Chat Mode\n\nChat mode provides the most advanced interface with streaming responses and intelligent tool usage.\n\n### Starting Chat Mode\n\n```bash\n# Simple startup with default reasoning model (gpt-oss)\nmcp-cli --server sqlite\n\n# Multiple servers\nmcp-cli --server sqlite,filesystem\n\n# With advanced reasoning models\nmcp-cli --server sqlite --provider openai --model gpt-5\nmcp-cli --server sqlite --provider anthropic --model claude-4-1-opus\n```\n\n### Chat Commands (Slash Commands)\n\n#### Provider & Model Management\n```bash\n/provider                           # Show current configuration (default: ollama)\n/provider list                      # List all providers\n/provider config                    # Show detailed configuration\n/provider diagnostic               # Test provider connectivity\n/provider set ollama api_base http://localhost:11434  # Configure Ollama endpoint\n/provider openai                   # Switch to OpenAI (requires API key)\n/provider anthropic                # Switch to Anthropic (requires API key)\n/provider openai gpt-5             # Switch to OpenAI GPT-5\n\n# Custom Provider Management\n/provider custom                   # List custom providers\n/provider add localai http://localhost:8080/v1 gpt-4  # Add custom provider\n/provider remove localai           # Remove custom provider\n\n/model                             # Show current model (default: gpt-oss)\n/model llama3.3                    # Switch to different Ollama model\n/model gpt-5                       # Switch to GPT-5 (if using OpenAI)\n/model claude-4-1-opus             # Switch to Claude 4 (if using Anthropic)\n/models                            # List available models for current provider\n```\n\n#### Tool Management\n```bash\n/tools                             # List available tools\n/tools --all                       # Show detailed tool information\n/tools --raw                       # Show raw JSON definitions\n/tools call                        # Interactive tool execution\n\n/toolhistory                       # Show tool execution history\n/th -n 5                          # Last 5 tool calls\n/th 3                             # Details for call #3\n/th --json                        # Full history as JSON\n```\n\n#### Server Management (Runtime Configuration)\n```bash\n/server                            # List all configured servers\n/server list                       # List servers (alias)\n/server list all                   # Include disabled servers\n\n# Add servers at runtime (persists in ~/.mcp-cli/preferences.json)\n/server add <name> stdio <command> [args...]\n/server add sqlite stdio uvx 
mcp-server-sqlite --db-path test.db\n/server add playwright stdio npx @playwright/mcp@latest\n/server add time stdio uvx mcp-server-time\n/server add fs stdio npx @modelcontextprotocol/server-filesystem /path/to/dir\n\n# HTTP/SSE server examples with authentication\n/server add github --transport http --header \"Authorization: Bearer ghp_token\" -- https://api.github.com/mcp\n/server add myapi --transport http --env API_KEY=secret -- https://api.example.com/mcp\n/server add events --transport sse -- https://events.example.com/sse\n\n# Manage server state\n/server enable <name>              # Enable a disabled server\n/server disable <name>             # Disable without removing\n/server remove <name>              # Remove user-added server\n/server ping <name>                # Test server connectivity\n\n# Server details\n/server <name>                     # Show server configuration details\n```\n\n**Note**: Servers added via `/server add` are stored in `~/.mcp-cli/preferences.json` and persist across sessions. Project servers remain in `server_config.json`.\n\n#### Conversation Management\n```bash\n/conversation                      # Show conversation history\n/ch -n 10                         # Last 10 messages\n/ch 5                             # Details for message #5\n/ch --json                        # Full history as JSON\n\n/save conversation.json            # Save conversation to file\n/compact                          # Summarize conversation\n/clear                            # Clear conversation history\n/cls                              # Clear screen only\n```\n\n#### UI Customization\n```bash\n/theme                            # Interactive theme selector with preview\n/theme dark                       # Switch to dark theme\n/theme monokai                    # Switch to monokai theme\n\n# Available themes: default, dark, light, minimal, terminal, monokai, dracula, solarized\n# Themes are persisted across sessions\n```\n\n#### Session Control\n```bash\n/verbose                          # Toggle verbose/compact display (Default: Enabled)\n/confirm                          # Toggle tool call confirmation (Default: Enabled)\n/interrupt                        # Stop running operations\n/server                           # Manage MCP servers (see Server Management above)\n/help                            # Show all commands\n/help tools                       # Help for specific command\n/exit                            # Exit chat mode\n```\n\n### Chat Features\n\n#### Streaming Responses with Reasoning Visibility\n- **\ud83e\udde0 Reasoning Models**: See the AI's thinking process with gpt-oss, GPT-5, Claude 4\n- **Real-time Generation**: Watch text appear token by token\n- **Performance Metrics**: Words/second, response time\n- **Graceful Interruption**: Ctrl+C to stop streaming\n- **Progressive Rendering**: Markdown formatted as it streams\n\n#### Tool Execution\n- Automatic tool discovery and usage\n- Concurrent execution with progress indicators\n- Verbose and compact display modes\n- Complete execution history and timing\n\n#### Provider Integration\n- Seamless switching between providers\n- Model-specific optimizations\n- API key and endpoint management\n- Health monitoring and diagnostics\n\n## \ud83d\udda5\ufe0f Using Interactive Mode\n\nInteractive mode provides a command shell for direct server interaction.\n\n### Starting Interactive Mode\n\n```bash\nmcp-cli interactive --server sqlite\n```\n\n### Interactive Commands\n\n```bash\nhelp                              # 
Show available commands\nexit                              # Exit interactive mode\nclear                             # Clear terminal\n\n# Provider management\nprovider                          # Show current provider\nprovider list                     # List providers\nprovider anthropic                # Switch provider\nprovider openai gpt-5             # Switch to GPT-5\n\n# Model management\nmodel                             # Show current model\nmodel gpt-oss                     # Switch to reasoning model\nmodel claude-4-1-opus             # Switch to Claude 4\nmodels                            # List available models\n\n# Tool operations\ntools                             # List tools\ntools --all                       # Detailed tool info\ntools call                        # Interactive tool execution\n\n# Server operations\nservers                           # List servers\nping                              # Ping all servers\nresources                         # List resources\nprompts                           # List prompts\n```\n\n## \ud83d\udcc4 Using Command Mode\n\nCommand mode provides Unix-friendly automation capabilities.\n\n### Command Mode Options\n\n```bash\n--input FILE                      # Input file (- for stdin)\n--output FILE                     # Output file (- for stdout)\n--prompt TEXT                     # Prompt template\n--tool TOOL                       # Execute specific tool\n--tool-args JSON                  # Tool arguments as JSON\n--system-prompt TEXT              # Custom system prompt\n--raw                             # Raw output without formatting\n--single-turn                     # Disable multi-turn conversation\n--max-turns N                     # Maximum conversation turns\n```\n\n### Examples\n\n```bash\n# Text processing with reasoning models\necho \"Analyze this data\" | mcp-cli cmd --server sqlite --model gpt-oss --input - --output analysis.txt\n\n# Use GPT-5 for complex analysis\nmcp-cli cmd --server sqlite --provider openai --model gpt-5 --prompt \"Provide strategic analysis\" --input report.txt\n\n# Tool execution\nmcp-cli cmd --server sqlite --tool list_tables --raw\n\n# Complex queries\nmcp-cli cmd --server sqlite --tool read_query --tool-args '{\"query\": \"SELECT COUNT(*) FROM users\"}'\n\n# Batch processing with GNU Parallel\nls *.txt | parallel mcp-cli cmd --server sqlite --input {} --output {}.summary --prompt \"Summarize: {{input}}\"\n```\n\n## \ud83d\udd27 Provider Configuration\n\n### Ollama Configuration (Default)\n\nOllama runs locally by default on `http://localhost:11434`. 
## 🔧 Provider Configuration

### Ollama Configuration (Default)

Ollama runs locally by default on `http://localhost:11434`. Pull the reasoning model and any other local models you want to use:

```bash
# Pull reasoning and other models for Ollama
ollama pull gpt-oss          # Default reasoning model
ollama pull llama3.3         # Latest Llama
ollama pull llama3.2         # Llama 3.2
ollama pull qwen3            # Qwen 3
ollama pull qwen2.5-coder    # Coding-focused
ollama pull deepseek-coder   # DeepSeek coder
ollama pull granite3.3       # IBM Granite
ollama pull mistral          # Mistral
ollama pull gemma3           # Google Gemma
ollama pull phi3             # Microsoft Phi
ollama pull codellama        # Code Llama

# List available Ollama models
ollama list

# Configure a remote Ollama server
mcp-cli provider set ollama api_base http://remote-server:11434
```
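Before a local-first session, it can help to confirm that Ollama is reachable and the default model is present. A minimal pre-flight sketch, assuming a standard local install on port 11434:

```bash
#!/usr/bin/env bash
# Fail fast if Ollama isn't reachable, then make sure the default model is pulled
if ! curl -sf http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama is not running - start it with: ollama serve" >&2
  exit 1
fi
ollama list | grep -q '^gpt-oss' || ollama pull gpt-oss
```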
### Cloud Provider Configuration

To use cloud providers with advanced models, configure API keys:

```bash
# Configure OpenAI (for GPT-5, GPT-4, O3 models)
mcp-cli provider set openai api_key sk-your-key-here

# Configure Anthropic (for Claude 4, Claude 3.5)
mcp-cli provider set anthropic api_key sk-ant-your-key-here

# Configure Azure OpenAI (for enterprise GPT-5)
mcp-cli provider set azure_openai api_key sk-your-key-here
mcp-cli provider set azure_openai api_base https://your-resource.openai.azure.com

# Configure other providers
mcp-cli provider set gemini api_key your-gemini-key
mcp-cli provider set groq api_key your-groq-key

# Test configuration
mcp-cli provider diagnostic openai
mcp-cli provider diagnostic anthropic
```

### Custom OpenAI-Compatible Providers

MCP CLI supports adding custom OpenAI-compatible providers (LocalAI, custom proxies, etc.):

```bash
# Add a custom provider (persisted across sessions)
mcp-cli provider add localai http://localhost:8080/v1 gpt-4 gpt-3.5-turbo
mcp-cli provider add myproxy https://proxy.example.com/v1 custom-model-1 custom-model-2

# Set the API key via environment variable (never stored in config)
export LOCALAI_API_KEY=your-api-key
export MYPROXY_API_KEY=your-api-key

# List custom providers
mcp-cli provider custom

# Use a custom provider
mcp-cli --provider localai --server sqlite
mcp-cli --provider myproxy --model custom-model-1 --server sqlite

# Remove a custom provider
mcp-cli provider remove localai

# Runtime provider (session-only, not persisted)
mcp-cli --provider temp-ai --api-base https://api.temp.com/v1 --api-key test-key --server sqlite
```

**Security Note**: API keys are never stored in configuration files. Use environment variables following the pattern `{PROVIDER_NAME}_API_KEY`, or pass `--api-key` for session-only use.

### Manual Configuration

Provider defaults live in the `chuk_llm` configuration file at `~/.chuk_llm/config.yaml`:

```yaml
ollama:
  api_base: http://localhost:11434
  default_model: gpt-oss

openai:
  api_base: https://api.openai.com/v1
  default_model: gpt-5

anthropic:
  api_base: https://api.anthropic.com
  default_model: claude-4-1-opus

azure_openai:
  api_base: https://your-resource.openai.azure.com
  default_model: gpt-5

gemini:
  api_base: https://generativelanguage.googleapis.com
  default_model: gemini-2.0-flash

groq:
  api_base: https://api.groq.com
  default_model: llama-3.1-70b
```

API keys for cloud providers go in `~/.chuk_llm/.env`:

```bash
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
AZURE_OPENAI_API_KEY=sk-your-azure-key-here
GEMINI_API_KEY=your-gemini-key
GROQ_API_KEY=your-groq-key
```
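Since `~/.chuk_llm/.env` holds live credentials, it is worth restricting it to the owner - ordinary POSIX file hygiene, not an mcp-cli feature:

```bash
chmod 600 ~/.chuk_llm/.env    # owner read/write only
ls -l ~/.chuk_llm/.env        # verify: -rw-------
```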
## 📂 Server Configuration

MCP CLI supports two types of server configurations:

1. **Project Servers** (`server_config.json`): Shared project-level configurations
2. **User Servers** (`~/.mcp-cli/preferences.json`): Personal runtime-added servers that persist across sessions

### Project Configuration

Create a `server_config.json` file with your MCP server configurations:

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "python",
      "args": ["-m", "mcp_server.sqlite_server"],
      "env": {
        "DATABASE_PATH": "database.db"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"],
      "env": {}
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your-brave-api-key"
      }
    }
  }
}
```
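A malformed `server_config.json` is a common source of startup failures, so it is worth validating before launching. A quick check with stock `jq` (nothing mcp-cli specific):

```bash
# Syntax check - jq exits non-zero on invalid JSON
jq empty server_config.json && echo "server_config.json is valid JSON"

# List the configured server names
jq -r '.mcpServers | keys[]' server_config.json
```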
### Runtime Server Management

Add servers dynamically at runtime without editing configuration files:

```bash
# Add STDIO servers (most common)
mcp-cli
> /server add sqlite stdio uvx mcp-server-sqlite --db-path mydata.db
> /server add playwright stdio npx @playwright/mcp@latest
> /server add time stdio uvx mcp-server-time

# Add HTTP servers with authentication
> /server add github --transport http --header "Authorization: Bearer ghp_token" -- https://api.github.com/mcp
> /server add myapi --transport http --env API_KEY=secret -- https://api.example.com/mcp

# Add SSE (Server-Sent Events) servers
> /server add events --transport sse -- https://events.example.com/sse

# Manage servers
> /server list                     # Show all servers
> /server disable sqlite           # Temporarily disable
> /server enable sqlite            # Re-enable
> /server remove myapi             # Remove user-added server
```

**Key Points:**
- User-added servers persist in `~/.mcp-cli/preferences.json`
- Survive application restarts
- Can be enabled/disabled without removal
- Support STDIO, HTTP, and SSE transports
- Environment variables and headers for authentication

## 📈 Advanced Usage Examples

### Reasoning Model Comparison

```bash
# Compare reasoning across different models
> /provider ollama
> /model gpt-oss
> Think through this problem step by step: If a train leaves New York at 3 PM...
[See the complete thinking process with gpt-oss]

> /provider openai
> /model gpt-5
> Think through this problem step by step: If a train leaves New York at 3 PM...
[See GPT-5's reasoning approach]

> /provider anthropic
> /model claude-4-1-opus
> Think through this problem step by step: If a train leaves New York at 3 PM...
[See Claude 4's analytical process]
```

### Local-First Workflow with Reasoning

```bash
# Start with default Ollama/gpt-oss (no API key needed)
mcp-cli chat --server sqlite

# Use the reasoning model for complex problems
> Think through this database optimization problem step by step
[gpt-oss shows its complete thinking process before answering]

# Try different local models for different tasks
> /model llama3.3              # General purpose
> /model qwen2.5-coder         # For coding tasks
> /model deepseek-coder        # Alternative coding model
> /model granite3.3            # IBM's model
> /model gpt-oss               # Back to the reasoning model

# Switch to cloud when needed (requires API keys)
> /provider openai
> /model gpt-5
> Complex enterprise architecture design...

> /provider anthropic
> /model claude-4-1-opus
> Detailed strategic analysis...

> /provider ollama
> /model gpt-oss
> Continue with local processing...
```

### Multi-Provider Workflow

```bash
# Start with local reasoning (default, no API key)
mcp-cli chat --server sqlite

# Compare responses across providers
> /provider ollama
> What's the best way to optimize this SQL query?

> /provider openai gpt-5        # Requires API key
> What's the best way to optimize this SQL query?

> /provider anthropic claude-4-sonnet  # Requires API key
> What's the best way to optimize this SQL query?

# Use each provider's strengths
> /provider ollama gpt-oss      # Local reasoning, privacy
> /provider openai gpt-5        # Advanced reasoning
> /provider anthropic claude-4-1-opus  # Deep analysis
> /provider groq llama-3.1-70b  # Ultra-fast responses
```

### Complex Tool Workflows with Reasoning

```bash
# Use the reasoning model for complex database tasks
> /model gpt-oss
> I need to analyze our database performance. Think through what we should check first.
[gpt-oss shows thinking: "First, I should check the table structure, then indexes, then query patterns..."]
[Tool: list_tables] → products, customers, orders

> Now analyze the indexes and suggest optimizations
[gpt-oss thinks through index analysis]
[Tool: describe_table] → Shows current indexes
[Tool: read_query] → Analyzes query patterns

> Create an optimization plan based on your analysis
[Complete reasoning process followed by specific recommendations]
```

### Automation and Scripting

```bash
# Batch processing with different models
for file in data/*.csv; do
  # Use the reasoning model for analysis
  mcp-cli cmd --server sqlite \
    --model gpt-oss \
    --prompt "Analyze this data and think through patterns" \
    --input "$file" \
    --output "analysis/$(basename "$file" .csv)_reasoning.txt"

  # Use a coding model for generating scripts
  mcp-cli cmd --server sqlite \
    --model qwen2.5-coder \
    --prompt "Generate Python code to process this data" \
    --input "$file" \
    --output "scripts/$(basename "$file" .csv)_script.py"
done

# Pipeline with reasoning
cat complex_problem.txt | \
  mcp-cli cmd --model gpt-oss --prompt "Think through this step by step" --input - | \
  mcp-cli cmd --model llama3.3 --prompt "Summarize the key points" --input - > solution.txt
```
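In unattended batch runs, a stuck server can stall the whole loop. Below is a hardened variant of the loop above that skips any file whose analysis exceeds five minutes, using coreutils `timeout` (same assumed file layout as the original):

```bash
#!/usr/bin/env bash
set -euo pipefail

for file in data/*.csv; do
  # Skip any single analysis that runs longer than 5 minutes instead of hanging
  if ! timeout 300 mcp-cli cmd --server sqlite \
      --model gpt-oss \
      --prompt "Analyze this data and think through patterns" \
      --input "$file" \
      --output "analysis/$(basename "$file" .csv)_reasoning.txt"; then
    echo "skipped: $file (timed out or failed)" >&2
  fi
done
```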
### Performance Monitoring

```bash
# Check provider and model performance
> /provider diagnostic
Provider Diagnostics
Provider      | Status      | Response Time | Features      | Models
ollama        | ✅ Ready    | 56ms          | 📡🔧          | gpt-oss, llama3.3, qwen3, ...
openai        | ✅ Ready    | 234ms         | 📡🔧👁️        | gpt-5, gpt-4o, o3, ...
anthropic     | ✅ Ready    | 187ms         | 📡🔧          | claude-4-1-opus, claude-4-sonnet, ...
azure_openai  | ✅ Ready    | 198ms         | 📡🔧👁️        | gpt-5, gpt-4o, ...
gemini        | ✅ Ready    | 156ms         | 📡🔧👁️        | gemini-2.0-flash, ...
groq          | ✅ Ready    | 45ms          | 📡🔧          | llama-3.1-70b, ...

# Check available models
> /models
Models for ollama (Current Provider)
Model                | Status
gpt-oss              | Current & Default (Reasoning)
llama3.3             | Available
llama3.2             | Available
qwen2.5-coder        | Available
deepseek-coder       | Available
granite3.3           | Available
... and 6 more

# Monitor tool execution with reasoning
> /verbose
> /model gpt-oss
> Analyze the database and optimize the slowest queries
[Shows complete thinking process]
[Tool execution with timing]
```

## 🔍 Troubleshooting

### Common Issues

1. **Ollama not running** (default provider):
   ```bash
   # Start the Ollama service
   ollama serve

   # Or check whether it's running
   curl http://localhost:11434/api/tags
   ```

2. **Model not found**:
   ```bash
   # For Ollama (default), pull the model first
   ollama pull gpt-oss       # Reasoning model
   ollama pull llama3.3      # Latest Llama
   ollama pull qwen2.5-coder # Coding model

   # List available models
   ollama list

   # For cloud providers, check supported models
   mcp-cli models openai     # Shows GPT-5, GPT-4, O3 models
   mcp-cli models anthropic  # Shows Claude 4, Claude 3.5 models
   ```

3. **Provider not found or API key missing**:
   ```bash
   # Check available providers
   mcp-cli provider list

   # For cloud providers, set API keys
   mcp-cli provider set openai api_key sk-your-key
   mcp-cli provider set anthropic api_key sk-ant-your-key

   # Test the connection
   mcp-cli provider diagnostic openai
   ```

4. **Connection issues with Ollama**:
   ```bash
   # Check that Ollama is running
   ollama list

   # Test the connection
   mcp-cli provider diagnostic ollama

   # Configure a custom endpoint if needed
   mcp-cli provider set ollama api_base http://localhost:11434
   ```

### Debug Mode

Enable verbose logging for troubleshooting:

```bash
mcp-cli --verbose chat --server sqlite
mcp-cli --log-level DEBUG interactive --server sqlite
```
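When sharing a failing session, plain stream redirection is usually enough to capture the debug output to a file (ordinary shell redirection; no dedicated log-file flag is documented here, and this assumes logs go to stderr):

```bash
# Keep the chat interactive on stdout while writing debug logs to a file
mcp-cli --log-level DEBUG chat --server sqlite 2> mcp-debug.log
```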
## 🔒 Security Considerations

- **Local by Default**: Ollama with gpt-oss runs locally, keeping your data private
- **API Keys**: Only needed for cloud providers (OpenAI, Anthropic, etc.) and stored securely
- **File Access**: Filesystem access can be disabled with `--disable-filesystem`
- **Tool Validation**: All tool calls are validated before execution
- **Timeout Protection**: Configurable timeouts prevent hanging operations
- **Server Isolation**: Each server runs in its own process

## 🚀 Performance Features

- **Local Processing**: The default Ollama provider minimizes latency
- **Reasoning Visibility**: See the AI's thinking process with gpt-oss, GPT-5, and Claude 4
- **Concurrent Tool Execution**: Multiple tools can run simultaneously
- **Streaming Responses**: Real-time response generation
- **Connection Pooling**: Efficient reuse of client connections
- **Caching**: Tool metadata and provider configurations are cached
- **Async Architecture**: Non-blocking operations throughout

## 📦 Dependencies

Dependencies are organized into core libraries and optional feature groups:

- **cli**: Terminal UI and command framework (Rich, Typer, chuk-term)
- **dev**: Development tools, testing utilities, linting
- **chuk-tool-processor**: Core tool execution and MCP communication
- **chuk-llm**: Unified LLM provider management with 200+ auto-generated functions
- **chuk-term**: Enhanced terminal UI with themes, prompts, and cross-platform support

Install with specific features:
```bash
pip install "mcp-cli[cli]"        # Basic CLI features
pip install "mcp-cli[cli,dev]"    # CLI with development tools
```

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.

### Development Setup

```bash
git clone https://github.com/chrishayuk/mcp-cli
cd mcp-cli
pip install -e ".[cli,dev]"
pre-commit install
```

### Demo Scripts

Explore the capabilities of MCP CLI:

```bash
# Custom provider management demos

# Interactive walkthrough demo (educational)
uv run examples/custom_provider_demo.py

# Working demo with actual inference (requires OPENAI_API_KEY)
uv run examples/custom_provider_working_demo.py

# Simple shell script demo (requires OPENAI_API_KEY)
bash examples/custom_provider_simple_demo.sh

# Terminal management features (chuk-term)
uv run examples/ui_terminal_demo.py

# Output system with themes (chuk-term)
uv run examples/ui_output_demo.py

# Streaming UI capabilities (chuk-term)
uv run examples/ui_streaming_demo.py
```

### Running Tests

```bash
pytest
pytest --cov=mcp_cli --cov-report=html
```

## 📜 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- **[CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor)** - Async-native tool execution
- **[CHUK-LLM](https://github.com/chrishayuk/chuk-llm)** - Unified LLM provider management with GPT-5, Claude 4, and reasoning model support
- **[CHUK-Term](https://github.com/chrishayuk/chuk-term)** - Enhanced terminal UI with themes and cross-platform support
- **[Rich](https://github.com/Textualize/rich)** - Beautiful terminal formatting
- **[Typer](https://typer.tiangolo.com/)** - CLI framework
- **[Prompt Toolkit](https://github.com/prompt-toolkit/python-prompt-toolkit)** - Interactive input

## 🔗 Related Projects

- **[Model Context Protocol](https://modelcontextprotocol.io/)** - Core protocol specification
- **[MCP Servers](https://github.com/modelcontextprotocol/servers)** - Official MCP server implementations
- **[CHUK Tool Processor](https://github.com/chrishayuk/chuk-tool-processor)** - Tool execution engine
- **[CHUK-LLM](https://github.com/chrishayuk/chuk-llm)** - LLM provider abstraction with GPT-5, Claude 4, and O3 series support
- **[CHUK-Term](https://github.com/chrishayuk/chuk-term)** - Terminal UI library with themes and cross-platform support
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "A cli for the Model Context Provider",
    "version": "0.8.6",
    "project_urls": null,
    "split_keywords": [
        "llm",
        " openai",
        " claude",
        " mcp",
        " cli"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "3e199ab47a5d874cbcf606661c928de71bf9dda4ae05f37ac1ddbe3c3f0cb064",
                "md5": "0e6552c2e5ea415c909c5824a6fb504d",
                "sha256": "f80bc21d247ce2954b415e2ca358cadd6f12ca05a5c4127406bc9a79de577e43"
            },
            "downloads": -1,
            "filename": "mcp_cli-0.8.6-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "0e6552c2e5ea415c909c5824a6fb504d",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.11",
            "size": 237927,
            "upload_time": "2025-10-19T23:55:15",
            "upload_time_iso_8601": "2025-10-19T23:55:15.535738Z",
            "url": "https://files.pythonhosted.org/packages/3e/19/9ab47a5d874cbcf606661c928de71bf9dda4ae05f37ac1ddbe3c3f0cb064/mcp_cli-0.8.6-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "884aabf81e9d6a3f7ffedf924451f42217cb0639d5a8200453ecb186fdfa3e5a",
                "md5": "704342581999cdcf6fa87a08bd7aec02",
                "sha256": "c512c9136c7072f804e6e531c44947d57315be707dd25d2bf8081ee09cd1c7e2"
            },
            "downloads": -1,
            "filename": "mcp_cli-0.8.6.tar.gz",
            "has_sig": false,
            "md5_digest": "704342581999cdcf6fa87a08bd7aec02",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.11",
            "size": 206576,
            "upload_time": "2025-10-19T23:55:16",
            "upload_time_iso_8601": "2025-10-19T23:55:16.849293Z",
            "url": "https://files.pythonhosted.org/packages/88/4a/abf81e9d6a3f7ffedf924451f42217cb0639d5a8200453ecb186fdfa3e5a/mcp_cli-0.8.6.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-10-19 23:55:16",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "mcp-cli"
}
        
Elapsed time: 0.54652s