# ai-cli-chat: Multi-model AI Chat at the CLI `ai`, featuring round-table discussions
[PyPI](https://badge.fury.io/py/ai-cli-chat)
[Python 3.9+](https://www.python.org/downloads/)
[License: MIT](https://opensource.org/licenses/MIT)
[CI](https://github.com/ai-cli/ai-cli/actions)
A powerful command-line interface for interacting with multiple AI models, featuring round-table discussions where different AI models can collaborate and critique each other's responses.
## 🚀 Quick Start
### Installation
```bash
pipx install ai-cli-chat
```
### Basic Setup
**Configure API Keys** (choose your preferred method):
```bash
ai init
```
This creates `~/.ai-cli/config.toml` and a `~/.ai-cli/.env` template.
To get started quickly, fill in an API key in `~/.ai-cli/.env`, e.g.:
```
OPENAI_API_KEY=xxx
```
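If you prefer not to use the `.env` file, you can export the key in your shell instead. This is a standard-environment-variable sketch (the key value below is a placeholder, and whether the CLI picks up plain environment variables in addition to the `.env` file can be checked with `ai config env --show`):

```shell
# Alternative: export the key in your current shell session
# (replace the placeholder with your real key)
export OPENAI_API_KEY="sk-your-key-here"

# To persist it, add the same export line to ~/.bashrc or ~/.zshrc
echo "$OPENAI_API_KEY"
```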
**Verify Setup**:
```bash
ai version
ai chat "Hello there!"
```
## ✨ Features
- **🤖 Multi-Model Support**: OpenAI GPT-4, Anthropic Claude, Google Gemini, Ollama (local models)
- **💬 Three Interaction Modes**:
  - **Single Chat**: Quick one-off conversations
  - **Interactive Session**: Multi-turn conversations with history
  - **Round-Table Discussions**: Multiple AI models discussing topics together
- **⚡ Real-time Streaming**: See responses as they're generated
- **🎨 Rich Terminal UI**: Beautiful formatting with markdown support
- **⚙️ Flexible Configuration**: Per-model settings, API key management
### Usage Examples
#### Single Chat
```bash
# Quick question
ai chat "What is machine learning?"
# Use specific model
ai chat --model anthropic/claude-3-sonnet "Explain quantum computing"
```
#### Interactive Session
```bash
# Start interactive mode
ai interactive
# Within interactive mode:
# /help - Show available commands
# /model gpt-4 - Switch to different model
# /roundtable - Start round-table discussion
# /exit - Exit session
```
#### Round-Table Discussions
Round-table mode requires at least two enabled models. These can come from different providers, or you can define two `models` entries that share the same provider:
```toml
[roundtable]
enabled_models = [ "openai/gpt-4", "gemini"]
...
[models."openai/gpt-4"]
provider = "openai"
model = "gpt-4"
...
[models.gemini]
provider = "gemini"
model = "gemini-2.5-flash"
...
```
```bash
# Multiple AI models discuss a topic
ai chat -rt "give me 3 domain name suggestions for a B2C SaaS that helps users convert their favorite newsletters into podcasts"
# Parallel responses (all models respond simultaneously)
ai chat --roundtable --parallel "Compare Python vs JavaScript"
```
## 🛠️ Configuration
### Model Management
```bash
# List available models
ai config list
# Add a new model
ai config add-model my-gpt4 \
--provider openai \
--model gpt-4 \
--api-key env:OPENAI_API_KEY
# Set default model
ai config set default_model my-gpt4
```
### Round-Table Setup
```bash
# Add models to round-table discussions
ai config roundtable --add openai/gpt-4
ai config roundtable --add anthropic/claude-3-5-sonnet
# List round-table participants
ai config roundtable --list
```
### Environment Variables
```bash
# Check environment status
ai config env --show
# Create example .env file
ai config env --init
```
## 📄 Template Configurations
The project includes several pre-built configuration templates in `config-examples/` for common use cases:
### Available Templates
- **basic-roundtable.toml** - Simple two-model collaborative discussion
- **multi-model-roundtable.toml** - Complex discussions with multiple models and roles
- **creative-writing.toml** - Optimized for creative writing and storytelling
- **code-review.toml** - Technical code review and programming discussions
- **research-analysis.toml** - Academic research and analytical tasks
- **debate-format.toml** - Structured debates between models
- **problem-solving.toml** - Collaborative problem-solving sessions
### Using Templates
```bash
# Method 1: Copy a template to your config directory
cp config-examples/basic-roundtable.toml ~/.ai-cli/config.toml
# Method 2: Initialize base config then customize
ai init
ai config roundtable --add openai/gpt-4
ai config roundtable --add anthropic/claude-3-5-sonnet
```
### Role-based Configuration Example
```toml
[roundtable]
enabled_models = ["openai/gpt-4", "anthropic/claude-3-5-sonnet", "gemini"]
use_role_based_prompting = true
role_rotation = true
discussion_rounds = 3
# Optional: Restrict which roles specific models can play
# System uses 4 predefined roles: generator, critic, refiner, evaluator
[roundtable.role_assignments]
"openai/gpt-4" = ["generator", "refiner"] # Best for creative generation
"anthropic/claude-3-5-sonnet" = ["critic", "evaluator"] # Best for analysis
[models."openai/gpt-4"]
provider = "openai"
model = "gpt-4"
api_key = "env:OPENAI_API_KEY"
temperature = 0.8 # Higher creativity for generation tasks
[models."anthropic/claude-3-5-sonnet"]
provider = "anthropic"
model = "claude-3-5-sonnet"
api_key = "env:ANTHROPIC_API_KEY"
temperature = 0.3 # Lower temperature for critical analysis
[models.gemini]
provider = "gemini"
model = "gemini-2.0-flash-thinking-exp"
api_key = "env:GEMINI_API_KEY"
# No role restrictions - can play any of the 4 roles
```
## 📋 Supported Models
| Provider | Model | Notes |
|----------|-------|-------|
| OpenAI | gpt-4, gpt-3.5-turbo | Requires `OPENAI_API_KEY` |
| Anthropic | claude-3-5-sonnet, claude-3-haiku | Requires `ANTHROPIC_API_KEY` |
| Google | gemini-pro | Requires `GEMINI_API_KEY` |
| Ollama | llama2, codellama, etc. | Local models, no API key needed |
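A local Ollama model can be added to the config following the same conventions as the examples below. This fragment is an illustrative sketch: the `endpoint` key is the optional custom-endpoint setting described under Advanced Configuration, and `http://localhost:11434` is Ollama's default local server address, which may differ in your setup:

```toml
[models."ollama/llama2"]
provider = "ollama"
model = "llama2"
# Default local Ollama server; no api_key entry is needed for local models
endpoint = "http://localhost:11434"
```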
## 🔧 Advanced Configuration
The CLI stores configuration in `~/.ai-cli/config.toml`. You can customize:
- **Model Settings**: Temperature, max tokens, context window, API endpoints
- **Round-Table Behavior**: Discussion rounds, role-based prompting, parallel responses
- **UI Preferences**: Theme, streaming, formatting, model icons
- **Role Assignments**: Which models can play which of the 4 predefined roles
### Complete Configuration Example
```toml
default_model = "openai/gpt-4"
# Individual model configurations
[models."openai/gpt-4"]
provider = "openai"
model = "gpt-4"
api_key = "env:OPENAI_API_KEY"
temperature = 0.7
max_tokens = 4000
timeout_seconds = 30
[models."anthropic/claude-3-5-sonnet"]
provider = "anthropic"
model = "claude-3-5-sonnet"
api_key = "env:ANTHROPIC_API_KEY"
temperature = 0.8
max_tokens = 8000
# Additional model configurations
[models."gemini/gemini-pro"]
provider = "gemini"
model = "gemini-pro"
api_key = "env:GEMINI_API_KEY"
temperature = 0.7
max_tokens = 4000
# Round-table configuration
[roundtable]
enabled_models = ["openai/gpt-4", "anthropic/claude-3-5-sonnet", "gemini/gemini-pro"]
discussion_rounds = 3
parallel_responses = false
use_role_based_prompting = true
role_rotation = true
timeout_seconds = 60
# UI customization
[ui]
theme = "dark"
streaming = true
format = "markdown"
show_model_icons = true
```
### Configuration Sections Explained
**Model Settings:**
- `temperature`: Creativity level (0.0-2.0)
- `max_tokens`: Response length limit
- `provider`: AI provider (openai, anthropic, gemini, ollama)
- `model`: Specific model name
- `api_key`: API key (can use env: prefix)
- `endpoint`: Custom API endpoint (optional)
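The `env:` prefix means the key is looked up in the environment at runtime rather than stored in the config file. A minimal sketch of how such resolution could work (illustrative only; the project's actual handling lives in `src/ai_cli/utils/env.py` and may differ):

```python
import os
from typing import Optional


def resolve_api_key(value: str) -> Optional[str]:
    """Resolve an api_key config value.

    An 'env:' prefix means 'read the named environment variable';
    anything else is treated as a literal key.
    """
    if value.startswith("env:"):
        return os.environ.get(value[len("env:"):])
    return value


# Stand-in value so the example is self-contained
os.environ["OPENAI_API_KEY"] = "sk-demo"
print(resolve_api_key("env:OPENAI_API_KEY"))  # sk-demo
print(resolve_api_key("literal-key"))         # literal-key
```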
**Round-table Options:**
- `use_role_based_prompting`: Enable specialized roles
- `role_rotation`: Models switch roles between rounds
- `discussion_rounds`: Number of conversation rounds
**UI Customization:**
- `show_model_icons`: Display model indicators
## 🤝 Round-Table Discussions Explained
Round-table mode is the **unique selling point** of AI CLI, featuring advanced **role-based prompting** that goes beyond simple multi-model chat:
### Core Features
1. **Sequential Mode** (default): Models respond one after another, building on previous responses
2. **Parallel Mode** (`--parallel`): All models respond to the original prompt simultaneously
3. **Role-based Prompting**: Automatic assignment of 4 predefined roles (generator, critic, refiner, evaluator)
4. **Multiple Rounds**: Configurable discussion rounds for deeper exploration
5. **Role Rotation**: Models can switch roles between rounds for diverse perspectives
### Role-based Prompting Examples
**Two-Model Roundtable (Sequential Roles):**
```bash
ai chat --roundtable "How can we reduce customer churn in our SaaS product?"
# Round 1: GPT-4 (Generator) creates initial suggestions
# Round 1: Claude (Critic) analyzes and critiques GPT-4's suggestions
# Round 2: Claude (Refiner) improves the suggestions
# Round 2: GPT-4 (Critic) provides final critique
```
**Multi-Model Roundtable (All 4 Roles):**
```bash
ai chat --roundtable "Design a comprehensive social media strategy"
# Round 1: Models A&B (Generators) create different strategy approaches
# Round 1: Models C&D (Critics) analyze and identify issues
# Round 2: Models A&B (Refiners) improve strategies based on critiques
# Round 3: Model A (Evaluator) ranks all strategies and provides final recommendation
```
**Role Rotation in Action:**
```bash
ai chat --roundtable "Compare Python vs JavaScript for web development"
# GPT-4 starts as Generator → becomes Critic in round 2
# Claude starts as Critic → becomes Refiner in round 2
# System automatically rotates roles to get diverse perspectives
```
### Why Role-based Round-tables?
- **Structured Discussions**: 4 predefined roles (generator, critic, refiner, evaluator) create organized conversations
- **Quality Improvement**: Iterative critique and refinement process enhances initial ideas
- **Multiple Perspectives**: Role rotation ensures models approach problems from different angles
- **Automatic Workflow**: System handles role assignment and prompt templating automatically
- **Reduced Bias**: Multiple models and roles minimize single-perspective limitations
This creates **structured collaborative discussions** where models systematically generate, critique, refine, and evaluate ideas - **like having a well-organized brainstorming session with clear roles**.
### How Role-based Prompting Works
**Implementation details:**
- **Role Templates**: Hardcoded prompt templates for the 4 roles (generator, critic, refiner, evaluator)
- **Automatic Assignment**: System automatically assigns roles to models based on round and model count
- **No Custom System Prompts**: Individual models cannot have custom system prompts in configuration
- **Role Behavior**: Each role uses its predefined template from the `RolePromptTemplates` class
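The assignment-and-rotation behavior described above can be sketched as a toy function. This is a hypothetical illustration of the scheme (shift each model's role by one position per round), not the project's actual `RolePromptTemplates` or assignment code:

```python
# The 4 predefined roles, in rotation order (assumed order for this sketch)
ROLES = ["generator", "critic", "refiner", "evaluator"]


def assign_roles(models, round_index, allowed=None):
    """Assign one predefined role per model for a given round.

    Rotation: each model's role shifts by one position per round.
    `allowed` optionally restricts which roles a model may play,
    mirroring the [roundtable.role_assignments] config section.
    """
    assignments = {}
    for i, model in enumerate(models):
        role = ROLES[(i + round_index) % len(ROLES)]
        if allowed and model in allowed and role not in allowed[model]:
            # Fall back to the model's first permitted role
            role = allowed[model][0]
        assignments[model] = role
    return assignments


models = ["openai/gpt-4", "anthropic/claude-3-5-sonnet"]
print(assign_roles(models, 0))  # round 1: generator / critic
print(assign_roles(models, 1))  # round 2: roles shift by one
```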
## 🧪 Development
### Setup
```bash
# Clone repository
git clone https://github.com/ai-cli/ai-cli.git
cd ai-cli
# Install with uv (recommended)
uv sync --extra dev
# Or with pip
pip install -e ".[dev]"
```
### Testing
```bash
# Run tests
uv run pytest
# With coverage
uv run pytest --cov=ai_cli
# Run linting
uv run ruff check src/ai_cli/
uv run ruff format src/ai_cli/
uv run mypy src/ai_cli/
```
### Pre-commit Hooks
```bash
uv run pre-commit install
```
### Project Structure
```
ai-cli/
├── src/ai_cli/              # Main package source
│   ├── __init__.py          # Package initialization
│   ├── cli.py               # CLI entry point and commands
│   ├── config/              # Configuration management
│   │   ├── manager.py       # Config file handling
│   │   └── models.py        # Pydantic data models
│   ├── core/                # Core business logic
│   │   ├── chat.py          # Chat engine and round-table logic
│   │   ├── messages.py      # Message data structures
│   │   └── roles.py         # Role-based prompting system
│   ├── providers/           # AI provider abstractions
│   │   ├── base.py          # Abstract provider interface
│   │   ├── factory.py       # Provider factory pattern
│   │   └── litellm_provider.py  # LiteLLM implementation
│   ├── ui/                  # User interface components
│   │   ├── interactive.py   # Interactive chat session
│   │   └── streaming.py     # Real-time response streaming
│   └── utils/               # Utility functions
│       └── env.py           # Environment variable handling
├── tests/                   # Test suite
├── config-examples/         # Template configurations
├── features-doc/            # Feature documentation
├── pyproject.toml           # Project configuration
└── README.md                # This file
```
## 📝 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
## 🙏 Acknowledgments
- Built with [Typer](https://typer.tiangolo.com/) for the CLI framework
- [Rich](https://rich.readthedocs.io/) for beautiful terminal output
- [LiteLLM](https://litellm.ai/) for universal model access
- Inspired by the need for collaborative AI conversations