beaglemind-cli 1.0.4

- Summary: Intelligent documentation assistant CLI for Beagleboard projects
- Author: Fayez Zouari <fayez.zouari@insat.ucar.tn>
- License: MIT
- Requires Python: >=3.8
- Keywords: cli, documentation, AI, RAG, beagleboard
- Requirements: requests>=2.32.0, openai>=1.88.0, click>=8.2.0, rich>=14.0.0, python-dotenv>=1.1.0
- Repository: https://github.com/beagleboard-gsoc/BeagleMind-RAG-PoC
- Released: 2025-08-24

---
title: Beaglemind Rag Poc
emoji: 👀
colorFrom: red
colorTo: purple
sdk: gradio
sdk_version: 5.35.0
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# BeagleMind CLI

An intelligent documentation assistant CLI tool for BeagleBoard projects that uses RAG (Retrieval-Augmented Generation) to answer questions about codebases and documentation.

## Features

- **Multi-backend LLM support**: Use cloud (Groq, OpenAI) or local (Ollama) language models
- **Intelligent search**: Advanced semantic search with reranking and filtering
- **Rich CLI interface**: Beautiful command-line interface with syntax highlighting
- **Persistent configuration**: Save your preferences for seamless usage
- **Source attribution**: Get references to original documentation and code

## Installation

### Development Installation

```bash
# Clone the repository
git clone https://github.com/beagleboard-gsoc/BeagleMind-RAG-PoC
cd BeagleMind-RAG-PoC

# Install in development mode
pip install -e .
```

### Using pip

```bash
pip install beaglemind-cli
```
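
Either way, you can confirm the entry point is on your `PATH` (the CLI is built on Click, so `--help` prints a command overview):

```bash
# Verify the package and its console command
pip show beaglemind-cli
beaglemind --help
```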


## Environment Setup

### For Groq (Cloud)

Set your Groq API key:

```bash
export GROQ_API_KEY="your-api-key-here"
```

### For OpenAI (Cloud)

Set your OpenAI API key:

```bash
export OPENAI_API_KEY="your-api-key-here"
```
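
Since `python-dotenv` is among the package requirements, keys can plausibly also be read from a `.env` file in the working directory; this is an assumption based on the dependency list, not documented behavior:

```bash
# Hypothetical: keep keys in a .env file instead of exporting them
cat > .env <<'EOF'
GROQ_API_KEY=your-api-key-here
OPENAI_API_KEY=your-api-key-here
EOF
```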

### For Ollama (Local)

1. Install Ollama: https://ollama.ai
2. Pull a supported model:
   ```bash
   ollama pull qwen3:1.7b
   ```
3. Ensure Ollama is running:
   ```bash
   ollama serve
   ```
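
To confirm the local server is reachable before chatting, you can query Ollama's HTTP API, which listens on port 11434 by default:

```bash
# Lists the locally pulled models if the server is up
curl -s http://localhost:11434/api/tags
```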

## Quick Start


### 1. List Available Models

See what language models are available:

```bash
# List all models
beaglemind list-models

# List models for a specific backend
beaglemind list-models --backend groq
beaglemind list-models --backend ollama
```

### 2. Start Chatting

Ask questions about the documentation:

```bash
# Simple question
beaglemind chat -p "How do I configure the BeagleY-AI board?"

# With specific model and backend
beaglemind chat -p "Show me GPIO examples" --backend groq --model llama-3.3-70b-versatile

# With sources shown
beaglemind chat -p "What are the pin configurations?" --sources
```
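
Because `chat -p` runs non-interactively, it also composes with ordinary shell scripting. A minimal sketch, assuming answers are written to stdout:

```bash
# Ask every question in questions.txt (one per line) and collect the answers
while IFS= read -r q; do
  printf '## %s\n' "$q"
  beaglemind chat -p "$q" --sources
done < questions.txt > answers.md
```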

## CLI Commands

### `beaglemind list-models`

List available language models.

**Options:**
- `--backend, -b`: Show models for a specific backend (groq/openai/ollama)

**Examples:**
```bash
beaglemind list-models
beaglemind list-models --backend groq
```

### `beaglemind chat`

Chat with BeagleMind using natural language.

**Options:**
- `--prompt, -p`: Your question (omit it to start interactive mode)
- `--backend, -b`: LLM backend (groq/openai/ollama)
- `--model, -m`: Specific model to use
- `--temperature, -t`: Response creativity (0.0-1.0)
- `--strategy, -s`: Search strategy (adaptive/multi_query/context_aware/default)
- `--sources`: Show source references

**Examples:**
```bash
# Basic usage
beaglemind chat -p "How to flash an image to BeagleY-AI?"

# Advanced usage
beaglemind chat \
  -p "Show me Python GPIO examples" \
  --backend groq \
  --model llama-3.3-70b-versatile \
  --temperature 0.2 \
  --strategy adaptive \
  --sources

# Code-focused questions
beaglemind chat -p "How to implement I2C communication?" --sources

# Documentation questions  
beaglemind chat -p "What are the system requirements?" --strategy context_aware
```

### Interactive Chat Mode

You can start an interactive multi-turn chat session (REPL) that remembers context and lets you toggle features live.

Start it by simply running the chat command without a prompt:

```bash
beaglemind chat
```

Or force it explicitly:

```bash
beaglemind chat --interactive
```

During the session you can use these inline commands (type them as messages):

| Command | Description |
|---------|-------------|
| `/help` | Show available commands and tips |
| `/sources` | Toggle display of source documents for answers |
| `/tools` | Enable/disable tool usage (file creation, code analysis, etc.) |
| `/config` | Show current backend/model/session settings |
| `/clear` | Clear the screen and keep session state |
| `/exit` or `/quit` | End the interactive session |

Example interactive flow:

```text
$ beaglemind chat
BeagleMind (1) > How do I configure GPIO?
...answer...
BeagleMind (2) > /sources
✓ Source display: enabled
BeagleMind (3) > Give me a Python example
...answer with sources...
BeagleMind (4) > /tools
✓ Tool usage: disabled
BeagleMind (5) > /exit
```

Tips:
1. Use `/sources` when you need provenance; turn it off for faster, cleaner output.
2. Disable tools (`/tools`) if you want read-only behavior.
3. Ask follow-ups naturally; prior Q&A stays in context for better answers.


## Available Models

### Groq (Cloud)
- llama-3.3-70b-versatile
- llama-3.1-8b-instant
- gemma2-9b-it
- meta-llama/llama-4-scout-17b-16e-instruct
- meta-llama/llama-4-maverick-17b-128e-instruct

### OpenAI (Cloud)
- gpt-4o
- gpt-4o-mini
- gpt-4-turbo
- gpt-3.5-turbo
- o1-preview
- o1-mini

### Ollama (Local)
- qwen3:1.7b
- smollm2:360m
- deepseek-r1:1.5b
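
To prepare all three local models in one pass (names taken from the list above):

```bash
# Pre-pull every supported Ollama model
for m in qwen3:1.7b smollm2:360m deepseek-r1:1.5b; do
  ollama pull "$m"
done
```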

## Tips for Best Results

1. **Be specific**: "How do I configure GPIO pins on BeagleY-AI?" works better than "GPIO help"

2. **Use technical terms**: Include board names, component names, and exact error messages

3. **Ask follow-up questions**: Build on previous responses for deeper understanding

4. **Use `--sources`**: See exactly where the information comes from

5. **Try different strategies**: Each search strategy suits different question types

## Troubleshooting

### "BeagleMind is not initialized"
Run `beaglemind init` first.

### "No API Key" for Groq
Set the GROQ_API_KEY environment variable.

### "No API Key" for OpenAI  
Set the OPENAI_API_KEY environment variable.

### "Service Down" for Ollama  
Ensure Ollama is running: `ollama serve`

### "Model not available"
Check `beaglemind list-models` for available options.
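
As a quick sketch, the three environment checks above can be run in one go:

```bash
# Sanity-check API keys and the local Ollama server
[ -n "$GROQ_API_KEY" ]   && echo "GROQ_API_KEY: set"   || echo "GROQ_API_KEY: missing"
[ -n "$OPENAI_API_KEY" ] && echo "OPENAI_API_KEY: set" || echo "OPENAI_API_KEY: missing"
curl -sf http://localhost:11434/api/version >/dev/null \
  && echo "Ollama: reachable" || echo "Ollama: not running"
```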

## Development

### Running from Source

```bash
# Make the script executable
chmod +x beaglemind

# Run directly
./beaglemind --help

# Or with Python
python -m src.cli --help
```

### Adding New Models

Edit the model lists in `src/cli.py`:

```python
GROQ_MODELS = [
    "new-model-name",
    # ... existing models
]

OPENAI_MODELS = [
    "new-openai-model",
    # ... existing models
]

OLLAMA_MODELS = [
    "new-local-model",
    # ... existing models  
]
```
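
After editing, the new names should appear in `beaglemind list-models` for the corresponding backend.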

## License

MIT License - see LICENSE file for details.

## Support

- GitHub Issues: [Create an issue](https://github.com/beagleboard-gsoc/BeagleMind-RAG-PoC/issues)
- Community: [BeagleBoard forums](https://forum.beagleboard.org/)

            
