| Field | Value |
|-------|-------|
| Name | omnimancer-cli |
| Version | 0.1.0 |
| Summary | A unified command-line interface for multiple AI language models |
| upload_time | 2025-08-30 23:36:46 |
| home_page | None |
| author | None |
| maintainer | None |
| docs_url | None |
| requires_python | >=3.10 |
| license | MIT |
| keywords | ai, cli, chat, openai, claude, language-model |
| requirements | No requirements were recorded. |
# Omnimancer CLI
A unified command-line interface for multiple AI providers - chat with Claude, OpenAI, Gemini, and 10+ other AI models through a single, intuitive tool.
## Quick Start
### Installation
**Using pipx (recommended):**
```bash
pipx install omnimancer-cli
```
**Using pip:**
```bash
pip install omnimancer-cli
```
### Available Commands
After installation, use any of these commands:
- **`omnimancer`** - Full command name
- **`omn`** - Quick alias ⚡
- **`omniman`** - Alternative alias
### First Run
```bash
omn # or omnimancer, or omniman
```
On first run, you'll be guided through setup:
```
🚀 Omnimancer Setup Wizard
Select a provider to configure:
1. Claude (Anthropic)
2. OpenAI
3. Google Gemini
4. Perplexity AI
5. Ollama (Local)
...
Choose [1]: 1
Enter your Claude API key: sk-ant-...
✅ Configuration complete!
>>> Hello! How can you help me today?
🤖 Claude: I'm Claude, an AI assistant created by Anthropic...
```
## Basic Usage
```bash
# Start Omnimancer
omn
# Start chatting
>>> What's the weather like?
# Switch models mid-conversation
>>> /switch openai gpt-4o
>>> Now using GPT-4. How are you different?
# Check available providers and models
>>> /providers
>>> /models
# Save conversations
>>> /save my-chat
# Load previous conversations
>>> /load my-chat
# Get help
>>> /help
```
## Agent Mode & File Operations
Omnimancer includes advanced agent capabilities that allow AI models to perform file operations with your explicit approval:
### 🤖 **Autonomous Agent Features**
- **File Creation**: Create new files with AI-generated content
- **File Modification**: Edit existing files with intelligent changes
- **Code Refactoring**: Restructure and improve existing code
- **Documentation Generation**: Create comprehensive documentation
- **Project Setup**: Initialize new projects with proper structure
### 🔒 **Secure Approval System**
Every file operation requires your explicit approval, presented like this:
```bash
🔍 File Operation Approval Required
📄 Creating: data_analyzer.py
📊 Risk Level: Low | 🟢
📏 Size: 1,247 bytes (45 lines)
[Y] Approve [N] Deny [D] View Details [Q] Quit
```
### 🎨 **Rich Visual Interface**
- **Syntax Highlighting**: Code displayed with proper formatting
- **Diff Views**: See exactly what changes before approval
- **Risk Assessment**: Operations rated Low/Medium/High/Critical
- **Batch Operations**: Handle multiple files efficiently
### ⚡ **Quick Examples**
```bash
# Ask AI to create files
>>> Create a Python script to analyze CSV data
🔍 Shows preview → [Y] to approve → ✅ File created
# Request code modifications
>>> Add error handling to this function
🔍 Shows diff view → [Y] to approve → ✅ File updated
# Batch project setup
>>> Set up a Flask web application
🔍 Shows 8 files → [A] approve all → ✅ Project ready
```
[**📖 Full Documentation**](docs/agent-approval-system.md) | [**🛡️ Security Guide**](docs/security.md)
## Supported Providers
| Provider | API Key / Setup | Best For |
|----------|------------------|----------|
| **Claude** | [Anthropic Console](https://console.anthropic.com/) | Complex reasoning, analysis |
| **Claude Code** | Anthropic API key | IDE integration, coding |
| **OpenAI** | [OpenAI Platform](https://platform.openai.com/) | General purpose, coding |
| **Gemini** | [Google AI Studio](https://aistudio.google.com/) | Large context, research |
| **Perplexity** | [Perplexity](https://www.perplexity.ai/) | Real-time web search |
| **xAI (Grok)** | [xAI Console](https://console.x.ai/) | Creative tasks, real-time info |
| **Mistral** | [Mistral Platform](https://mistral.ai/) | Code generation, efficiency |
| **AWS Bedrock** | [AWS Console](https://console.aws.amazon.com/bedrock/) | AWS integration |
| **Ollama** | No API key (local) | Privacy, offline use |
| **Azure OpenAI** | Azure setup required | Enterprise |
| **Vertex AI** | Google Cloud setup | Enterprise |
| **OpenRouter** | [OpenRouter](https://openrouter.ai/) | Access to 100+ models |
| **Cohere** | [Cohere Platform](https://cohere.com/) | Multilingual, embeddings |
## Commands
### Core Commands
| Command | Description |
|---------|-------------|
| `/help` | Show all commands |
| `/setup` | Run interactive setup wizard |
| `/quit` | Exit Omnimancer |
| `/clear` | Clear screen |
### Model & Provider Management
| Command | Description |
|---------|-------------|
| `/models` | List available models |
| `/providers` | Show configured providers |
| `/switch [provider] [model]` | Change provider/model |
| `/validate [provider]` | Validate provider configurations |
| `/health [provider]` | Check provider health status |
| `/repair [provider]` | Repair provider issues |
| `/diagnose [provider]` | Run diagnostic tests |
### Conversation Management
| Command | Description |
|---------|-------------|
| `/save [name]` | Save current conversation |
| `/load [name]` | Load saved conversation |
| `/list` | List saved conversations |
| `/history` | Manage conversation history |
### Agent & File Operations
| Command | Description |
|---------|-------------|
| `/agent` | Enable/disable agent mode |
| `/agents` | Manage agent configurations |
| `/approvals` | View/manage file operation approvals |
| `/permissions` | Configure security permissions |
### Tool Integration
| Command | Description |
|---------|-------------|
| `/tools` | Show available tools |
| `/mcp` | MCP server management |
### Model Management
| Command | Description |
|---------|-------------|
| `/add-model` | Add custom model |
| `/remove-model` | Remove custom model |
| `/list-custom-models` | List custom models |
### System
| Command | Description |
|---------|-------------|
| `/status` | Show system status |
## Configuration
Omnimancer stores encrypted configuration in `~/.omnimancer/config.json`.
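If you script around Omnimancer, it can be useful to back up that file before reconfiguring providers. A minimal sketch, assuming only the path stated above; the contents are encrypted, so the file is copied rather than parsed:

```python
import shutil
from pathlib import Path

# Path documented above; the file is encrypted, so copy it, never parse it.
CONFIG_PATH = Path.home() / ".omnimancer" / "config.json"

def backup_config() -> Path | None:
    """Copy the Omnimancer config to a .bak file, if it exists."""
    if not CONFIG_PATH.exists():
        print("No config found; run `omn` and use /setup first.")
        return None
    backup = CONFIG_PATH.with_name(CONFIG_PATH.name + ".bak")
    shutil.copy2(CONFIG_PATH, backup)
    print(f"Backed up {CONFIG_PATH} -> {backup}")
    return backup

if __name__ == "__main__":
    backup_config()
```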
### Manual Configuration
```bash
# Add a provider
omnimancer --config
# Or edit configuration interactively
>>> /config
```
### Environment Variables
```bash
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
export GOOGLE_API_KEY="your-key"
# ... then run omnimancer
```
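The same environment-based setup can be scripted. Below is a minimal sketch that reports which of the three variables shown above are set and then launches the CLI; environment variable names for other providers are not documented here and would be assumptions:

```python
import os
import subprocess

# Variables documented above; other providers may use different names.
DOCUMENTED_KEYS = ("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "GOOGLE_API_KEY")

def launch_with_env() -> int:
    present = [k for k in DOCUMENTED_KEYS if os.environ.get(k)]
    missing = [k for k in DOCUMENTED_KEYS if not os.environ.get(k)]
    print("Keys found:", ", ".join(present) or "none")
    if missing:
        print("Not set (optional):", ", ".join(missing))
    # `omn` is the short alias installed with the package; it inherits this environment.
    return subprocess.run(["omn"], check=False).returncode

if __name__ == "__main__":
    raise SystemExit(launch_with_env())
```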
## Local AI with Ollama
For privacy and offline use:
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Start Ollama server
ollama serve
# Download a model
ollama pull llama3.1
# Configure Omnimancer
omn
>>> /switch ollama llama3.1
```
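Before pointing Omnimancer at Ollama, you can confirm the local server is up and see which models it has pulled. A minimal sketch, assuming Ollama's default local API on port 11434:

```python
import json
import urllib.request

# Default local Ollama endpoint; adjust if your server listens elsewhere.
OLLAMA_URL = "http://localhost:11434/api/tags"

def list_local_models() -> list[str]:
    """Return the names of models the local Ollama server has already pulled."""
    try:
        with urllib.request.urlopen(OLLAMA_URL, timeout=5) as resp:
            data = json.load(resp)
    except OSError as exc:
        raise SystemExit(f"Ollama not reachable at {OLLAMA_URL} ({exc}); try `ollama serve`.")
    return [model["name"] for model in data.get("models", [])]

if __name__ == "__main__":
    models = list_local_models()
    print("Local models:", ", ".join(models) or "none yet (try `ollama pull llama3.1`)")
```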
## Tool Integration (MCP)
Enable AI tool calling for file operations, web search, and more:
```bash
# Install UV for MCP servers
curl -LsSf https://astral.sh/uv/install.sh | sh
# Check tool status
>>> /tools
>>> /mcp status
```
Popular MCP tools:
- **Filesystem**: File operations
- **Web Search**: Real-time search
- **Git**: Repository management
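Because the MCP servers here are launched through UV, a quick pre-flight check that the needed executables are on your `PATH` can save a confusing `/mcp status` failure. A minimal sketch, not part of the CLI itself:

```python
import shutil

# Executables referenced in this section; `uvx` ships with UV and runs Python-based MCP servers.
REQUIRED = ("uv", "uvx", "omnimancer")

def check_tools() -> bool:
    all_found = True
    for name in REQUIRED:
        path = shutil.which(name)
        print(f"{name}: {path or 'NOT FOUND'}")
        all_found = all_found and path is not None
    return all_found

if __name__ == "__main__":
    raise SystemExit(0 if check_tools() else 1)
```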
## Examples
### Basic Chat
```
>>> Explain quantum computing in simple terms
🤖 Claude: Quantum computing is like having a super-powered calculator...
>>> /switch openai gpt-4o
>>> How would you explain it differently?
🤖 GPT-4: I'd compare quantum computing to exploring a maze...
```
### Code Generation
```
>>> Write a Python function to calculate fibonacci numbers
🤖 Claude: Here's an efficient implementation using memoization:
```

```python
def fibonacci(n, memo={}):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]
```
### Model Comparison
```
>>> /switch claude claude-3-5-sonnet
>>> What's 15 * 24?
🤖 Claude: 15 × 24 = 360
>>> /switch openai gpt-4o
>>> What's 15 * 24?
🤖 GPT-4: 15 × 24 = 360
```
## Advanced Features
- **Conversation Management**: Save/load chat history
- **Model Switching**: Compare responses between providers
- **Tool Calling**: AI can execute code, search web, manage files
- **Health Monitoring**: Provider status and diagnostics
- **Configuration Templates**: Pre-configured setups for different use cases
## Development
```bash
git clone https://gitlab.com/jite-ai/omnimancer
cd omnimancer
pip install -e ".[dev]"
pytest
```
## Troubleshooting
### Common Issues
**"No providers configured"**
```bash
omn # Run setup wizard
>>> /setup
```
**"Invalid API key"**
- Check key format (Claude: `sk-ant-`, OpenAI: `sk-`, etc.)
- Verify key at provider's website
- Use `/validate` command to test configuration
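As a quick sanity check on the key formats listed above, you can verify the prefix before pasting a key into the setup wizard. A minimal sketch covering only the two prefixes documented here:

```python
# Key prefixes mentioned above; other providers use different formats.
KEY_PREFIXES = {
    "claude": "sk-ant-",
    "openai": "sk-",
}

def looks_valid(provider: str, api_key: str) -> bool:
    """Cheap format check only; use /validate inside Omnimancer for a real test."""
    prefix = KEY_PREFIXES.get(provider.lower())
    if prefix is None:
        return True  # Unknown provider here: defer to /validate rather than guess.
    return api_key.startswith(prefix) and len(api_key) > len(prefix)

if __name__ == "__main__":
    print(looks_valid("claude", "sk-ant-example-key"))  # True
    print(looks_valid("claude", "my-key"))              # False
```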
**"Ollama connection failed"**
```bash
ollama serve # Start Ollama server
ollama pull llama3.1 # Download a model
```
**Check system health:**
```bash
omn
>>> /health # Check all providers
>>> /diagnose # Run diagnostics
>>> /validate # Validate configurations
```
**Debug mode:**
```bash
export OMNIMANCER_DEBUG=1
omn
```
## License
MIT License - see [LICENSE](LICENSE) file.
## Links
- [GitLab Repository](https://gitlab.com/jite-ai/omnimancer)
- [Issues](https://gitlab.com/jite-ai/omnimancer/issues)
- [Documentation](https://gitlab.com/jite-ai/omnimancer/docs)
---
**Omnimancer CLI** - One tool, multiple AI providers, endless possibilities.
## Raw data

```json
{
"_id": null,
"home_page": null,
"name": "omnimancer-cli",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "ai, cli, chat, openai, claude, language-model",
"author": null,
"author_email": "Kellan Strong <kellan@jite.ai>",
"download_url": "https://files.pythonhosted.org/packages/76/9d/99966748a6b2d5c284271efb5f0e43af1132cc7e2ef7da26b1de0b5e929a/omnimancer_cli-0.1.0.tar.gz",
"platform": null,
"description": "# Omnimancer CLI\n\nA unified command-line interface for multiple AI providers - chat with Claude, OpenAI, Gemini, and 10+ other AI models through a single, intuitive tool.\n\n## Quick Start\n\n### Installation\n\n**Using pipx (recommended):**\n```bash\npipx install omnimancer-cli\n```\n\n**Using pip:**\n```bash\npip install omnimancer-cli\n```\n\n### Available Commands\n\nAfter installation, use any of these commands:\n- **`omnimancer`** - Full command name\n- **`omn`** - Quick alias \u26a1\n- **`omniman`** - Alternative alias\n\n### First Run\n\n```bash\nomn # or omnimancer, or omniman\n```\n\nOn first run, you'll be guided through setup:\n\n```\n\ud83d\ude80 Omnimancer Setup Wizard\n\nSelect a provider to configure:\n1. Claude (Anthropic)\n2. OpenAI \n3. Google Gemini\n4. Perplexity AI\n5. Ollama (Local)\n...\n\nChoose [1]: 1\n\nEnter your Claude API key: sk-ant-...\n\u2705 Configuration complete!\n\n>>> Hello! How can you help me today?\n\ud83e\udd16 Claude: I'm Claude, an AI assistant created by Anthropic...\n```\n\n## Basic Usage\n\n```bash\n# Start Omnimancer\nomn\n\n# Start chatting\n>>> What's the weather like?\n\n# Switch models mid-conversation \n>>> /switch openai gpt-4o\n>>> Now using GPT-4. How are you different?\n\n# Check available providers and models\n>>> /providers\n>>> /models\n\n# Save conversations\n>>> /save my-chat\n\n# Load previous conversations\n>>> /load my-chat\n\n# Get help\n>>> /help\n```\n\n## Agent Mode & File Operations\n\nOmnimancer includes advanced agent capabilities that allow AI models to perform file operations with your explicit approval:\n\n### \ud83e\udd16 **Autonomous Agent Features**\n- **File Creation**: Create new files with AI-generated content\n- **File Modification**: Edit existing files with intelligent changes\n- **Code Refactoring**: Restructure and improve existing code\n- **Documentation Generation**: Create comprehensive documentation\n- **Project Setup**: Initialize new projects with proper structure\n\n### \ud83d\udd12 **Secure Approval System**\nEvery file operation requires your explicit approval with:\n\n```bash\n\ud83d\udd0d File Operation Approval Required\n\ud83d\udcc4 Creating: data_analyzer.py\n\ud83d\udcca Risk Level: Low | \ud83d\udfe2 \n\ud83d\udccf Size: 1,247 bytes (45 lines)\n\n[Y] Approve [N] Deny [D] View Details [Q] Quit\n```\n\n### \ud83c\udfa8 **Rich Visual Interface**\n- **Syntax Highlighting**: Code displayed with proper formatting\n- **Diff Views**: See exactly what changes before approval\n- **Risk Assessment**: Operations rated Low/Medium/High/Critical\n- **Batch Operations**: Handle multiple files efficiently\n\n### \u26a1 **Quick Examples**\n\n```bash\n# Ask AI to create files\n>>> Create a Python script to analyze CSV data\n\ud83d\udd0d Shows preview \u2192 [Y] to approve \u2192 \u2705 File created\n\n# Request code modifications \n>>> Add error handling to this function\n\ud83d\udd0d Shows diff view \u2192 [Y] to approve \u2192 \u2705 File updated\n\n# Batch project setup\n>>> Set up a Flask web application\n\ud83d\udd0d Shows 8 files \u2192 [A] approve all \u2192 \u2705 Project ready\n```\n\n[**\ud83d\udcd6 Full Documentation**](docs/agent-approval-system.md) | [**\ud83d\udee1\ufe0f Security Guide**](docs/security.md)\n\n## Supported Providers\n\n| Provider | API Key Required | Best For |\n|----------|------------------|----------|\n| **Claude** | [Anthropic Console](https://console.anthropic.com/) | Complex reasoning, analysis |\n| **Claude Code** | Anthropic API key | IDE integration, 
coding |\n| **OpenAI** | [OpenAI Platform](https://platform.openai.com/) | General purpose, coding |\n| **Gemini** | [Google AI Studio](https://aistudio.google.com/) | Large context, research |\n| **Perplexity** | [Perplexity](https://www.perplexity.ai/) | Real-time web search |\n| **xAI (Grok)** | [xAI Console](https://console.x.ai/) | Creative tasks, real-time info |\n| **Mistral** | [Mistral Platform](https://mistral.ai/) | Code generation, efficiency |\n| **AWS Bedrock** | [AWS Console](https://console.aws.amazon.com/bedrock/) | AWS integration |\n| **Ollama** | No API key (local) | Privacy, offline use |\n| **Azure OpenAI** | Azure setup required | Enterprise |\n| **Vertex AI** | Google Cloud setup | Enterprise |\n| **OpenRouter** | [OpenRouter](https://openrouter.ai/) | 100+ models access |\n| **Cohere** | [Cohere Platform](https://cohere.com/) | Multilingual, embeddings |\n\n## Commands\n\n### Core Commands\n| Command | Description |\n|---------|-------------|\n| `/help` | Show all commands |\n| `/setup` | Run interactive setup wizard |\n| `/quit` | Exit Omnimancer |\n| `/clear` | Clear screen |\n\n### Model & Provider Management\n| Command | Description |\n|---------|-------------|\n| `/models` | List available models |\n| `/providers` | Show configured providers |\n| `/switch [provider] [model]` | Change provider/model |\n| `/validate [provider]` | Validate provider configurations |\n| `/health [provider]` | Check provider health status |\n| `/repair [provider]` | Repair provider issues |\n| `/diagnose [provider]` | Run diagnostic tests |\n\n### Conversation Management\n| Command | Description |\n|---------|-------------|\n| `/save [name]` | Save current conversation |\n| `/load [name]` | Load saved conversation |\n| `/list` | List saved conversations |\n| `/history` | Conversation history management |\n\n### Agent & File Operations\n| Command | Description |\n|---------|-------------|\n| `/agent` | Enable/disable agent mode |\n| `/agents` | Manage agent configurations |\n| `/approvals` | View/manage file operation approvals |\n| `/permissions` | Configure security permissions |\n\n### Tool Integration\n| Command | Description |\n|---------|-------------|\n| `/tools` | Show available tools |\n| `/mcp` | MCP server management |\n\n### Model Management\n| Command | Description |\n|---------|-------------|\n| `/add-model` | Add custom model |\n| `/remove-model` | Remove custom model |\n| `/list-custom-models` | List custom models |\n\n### System\n| Command | Description |\n|---------|-------------|\n| `/status` | Show system status |\n\n## Configuration\n\nOmnimancer stores encrypted configuration in `~/.omnimancer/config.json`.\n\n### Manual Configuration\n\n```bash\n# Add a provider\nomnimancer --config\n\n# Or edit configuration interactively\n>>> /config\n```\n\n### Environment Variables\n\n```bash\nexport ANTHROPIC_API_KEY=\"your-key\"\nexport OPENAI_API_KEY=\"your-key\"\nexport GOOGLE_API_KEY=\"your-key\"\n# ... 
then run omnimancer\n```\n\n## Local AI with Ollama\n\nFor privacy and offline use:\n\n```bash\n# Install Ollama\ncurl -fsSL https://ollama.ai/install.sh | sh\n\n# Start Ollama server\nollama serve\n\n# Download a model\nollama pull llama3.1\n\n# Configure Omnimancer\nomn\n>>> /switch ollama llama3.1\n```\n\n## Tool Integration (MCP)\n\nEnable AI tool calling for file operations, web search, and more:\n\n```bash\n# Install UV for MCP servers\ncurl -LsSf https://astral.sh/uv/install.sh | sh\n\n# Check tool status\n>>> /tools\n>>> /mcp status\n```\n\nPopular MCP tools:\n- **Filesystem**: File operations\n- **Web Search**: Real-time search \n- **Git**: Repository management\n\n## Examples\n\n### Basic Chat\n```\n>>> Explain quantum computing in simple terms\n\ud83e\udd16 Claude: Quantum computing is like having a super-powered calculator...\n\n>>> /switch openai gpt-4o \n>>> How would you explain it differently?\n\ud83e\udd16 GPT-4: I'd compare quantum computing to exploring a maze...\n```\n\n### Code Generation\n```\n>>> Write a Python function to calculate fibonacci numbers\n\ud83e\udd16 Claude: Here's an efficient implementation using memoization:\n\n```python\ndef fibonacci(n, memo={}):\n if n in memo:\n return memo[n]\n if n <= 1:\n return n\n memo[n] = fibonacci(n-1, memo) + fibonacci(n-2, memo)\n return memo[n]\n```\n\n### Model Comparison\n```\n>>> /switch claude claude-3-5-sonnet\n>>> What's 15 * 24?\n\ud83e\udd16 Claude: 15 \u00d7 24 = 360\n\n>>> /switch openai gpt-4o\n>>> What's 15 * 24? \n\ud83e\udd16 GPT-4: 15 \u00d7 24 = 360\n```\n\n## Advanced Features\n\n- **Conversation Management**: Save/load chat history\n- **Model Switching**: Compare responses between providers\n- **Tool Calling**: AI can execute code, search web, manage files\n- **Health Monitoring**: Provider status and diagnostics\n- **Configuration Templates**: Pre-configured setups for different use cases\n\n## Development\n\n```bash\ngit clone https://gitlab.com/jite-ai/omnimancer\ncd omnimancer\npip install -e \".[dev]\"\npytest\n```\n\n## Troubleshooting\n\n### Common Issues\n\n**\"No providers configured\"**\n```bash\nomn # Run setup wizard\n>>> /setup\n```\n\n**\"Invalid API key\"**\n- Check key format (Claude: `sk-ant-`, OpenAI: `sk-`, etc.)\n- Verify key at provider's website\n- Use `/validate` command to test configuration\n\n**\"Ollama connection failed\"**\n```bash\nollama serve # Start Ollama server\nollama pull llama3.1 # Download a model\n```\n\n**Check system health:**\n```bash\nomn\n>>> /health # Check all providers\n>>> /diagnose # Run diagnostics\n>>> /validate # Validate configurations\n```\n\n**Debug mode:**\n```bash\nexport OMNIMANCER_DEBUG=1\nomn\n```\n\n## License\n\nMIT License - see [LICENSE](LICENSE) file.\n\n## Links\n\n- [GitHub Repository]https://gitlab.com/jite-ai/omnimancer)\n- [Issues](https://gitlab.com/jite-ai/omnimancer/issues)\n- [Documentation](https://gitlab.com/jite-ai/omnimancer/docs)\n\n---\n\n**Omnimancer CLI** - One tool, multiple AI providers, endless possibilities.\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "A unified command-line interface for multiple AI language models",
"version": "0.1.0",
"project_urls": {
"Homepage": "https://gitlab.com/jite-ai/omnimancer",
"Issues": "https://gitlab.com/jite-ai/omnimancer/issues",
"Repository": "https://gitlab.com/jite-ai/omnimancer"
},
"split_keywords": [
"ai",
" cli",
" chat",
" openai",
" claude",
" language-model"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "4105f4d308135443ee324d1806df9a416e2531f3bc30947754398f1bdf66f10f",
"md5": "45da029fdd27fcdf2a4991f6713976d9",
"sha256": "d9c0ec9ff35f8ce575b00fd72e1c5b00faaa2a30150ee56328bf83fd44586cd3"
},
"downloads": -1,
"filename": "omnimancer_cli-0.1.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "45da029fdd27fcdf2a4991f6713976d9",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 614100,
"upload_time": "2025-08-30T23:36:44",
"upload_time_iso_8601": "2025-08-30T23:36:44.301362Z",
"url": "https://files.pythonhosted.org/packages/41/05/f4d308135443ee324d1806df9a416e2531f3bc30947754398f1bdf66f10f/omnimancer_cli-0.1.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "769d99966748a6b2d5c284271efb5f0e43af1132cc7e2ef7da26b1de0b5e929a",
"md5": "efc6b03728cd87cbd51b81b30cf51ea9",
"sha256": "23e8b2502078bf9390dd9edc0a8b96f8d9741382ab04723719e73e059a6c172e"
},
"downloads": -1,
"filename": "omnimancer_cli-0.1.0.tar.gz",
"has_sig": false,
"md5_digest": "efc6b03728cd87cbd51b81b30cf51ea9",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 645552,
"upload_time": "2025-08-30T23:36:46",
"upload_time_iso_8601": "2025-08-30T23:36:46.029266Z",
"url": "https://files.pythonhosted.org/packages/76/9d/99966748a6b2d5c284271efb5f0e43af1132cc7e2ef7da26b1de0b5e929a/omnimancer_cli-0.1.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-30 23:36:46",
"github": false,
"gitlab": true,
"bitbucket": false,
"codeberg": false,
"gitlab_user": "jite-ai",
"gitlab_project": "omnimancer",
"lcname": "omnimancer-cli"
}
```