neuron-v0.4

Name: neuron-v0.4
Version: 0.4.91
Homepage: https://github.com/devpatel/neuron-ai-assistant
Summary: A local AI assistant with OpenLLaMA and Mistral models, advanced identity protection, and hardware optimization
Upload time: 2025-10-20 07:53:28
Author: Dev Patel
Requires Python: >=3.8
License: MIT
Keywords: ai, assistant, chatbot, llm, language-model, openllama, mistral, transformers, local-ai, privacy, cuda, pytorch
Requirements: none recorded
# Neuron AI Assistant

A powerful local AI assistant with advanced identity protection, hardware optimization, and comprehensive conversation management.

**Created by:** Dev Patel  
**Version:** 0.4.91

## Features

- **Identity Protection**: Built-in safeguards against prompt injection and identity tampering
- **Hardware Optimization**: Auto-detects CPU, GPU (CUDA), Apple Silicon (MPS), RAM, and VRAM
- **Multiple Models**: Support for OpenLLaMA 7B v2 (CPU/GPU) and Mistral 7B (GPU-optimized)
- **Conversation Management**: Save, export, and manage chat history
- **Config Security**: Cryptographic signing and automatic backups
- **Error Recovery**: Automatic backup restoration and config migration
- **Resource Management**: Dynamic token limits and OOM handling
- **Diagnostic Tools**: Built-in system health checks
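
The hardware-optimization feature can be pictured with a short probe like the one below. This is an illustrative sketch, not the package's actual detection code (`detect_hardware` is a hypothetical name); it falls back to CPU when PyTorch is absent and would also need RAM/VRAM queries in a real implementation.

```python
import os
import platform

def detect_hardware():
    """Best-effort probe for picking a compute device (sketch only)."""
    info = {
        "cpu_count": os.cpu_count() or 1,
        "arch": platform.machine(),
        # Apple Silicon reports 'arm64' on macOS
        "apple_silicon": platform.system() == "Darwin"
        and platform.machine() == "arm64",
    }
    try:
        import torch  # optional dependency

        if torch.cuda.is_available():
            info["device"] = "cuda"
        elif getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
            info["device"] = "mps"
        else:
            info["device"] = "cpu"
    except ImportError:
        # No PyTorch installed: CPU is the only safe answer.
        info["device"] = "cpu"
    return info
```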

## Requirements

- **Python**: 3.8 or higher
- **RAM**: Minimum 4GB (8GB+ recommended)
- **Disk Space**: 20GB free (for model downloads)
- **GPU** (Optional): NVIDIA with CUDA support for better performance

## Installation

### Option 1: From PyPI
```bash
pip install neuron-v0.4
```

### Option 2: From Source
```bash
# Clone the repository
git clone https://github.com/devpatel/neuron-ai-assistant.git
cd neuron-ai-assistant

# Install dependencies
pip install -r requirements.txt

# Or install with all features
pip install -e .[all]
```

### Option 3: GPU Support
```bash
# For NVIDIA GPU (CUDA 11.8)
pip install torch==2.0.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

# For NVIDIA GPU (CUDA 12.1; cu121 wheels start at torch 2.1)
pip install torch==2.1.0+cu121 --extra-index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```

### Option 4: CPU Only (Smaller)
```bash
pip install torch==2.0.0+cpu --extra-index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt
```

## Quick Start

### First Run
```bash
python neuron_assistant.py
```

On first run, you'll be asked to:
1. Enter your name
2. Select a model (OpenLLaMA 7B v2 or Mistral 7B)
3. Wait for model download (if needed)

### Using the Assistant
```bash
# After installation, the console entry points are available:
neuron
# or
neuron-assistant
```

## 💻 Commands

| Command | Description |
|---------|-------------|
| `/help` | Show available commands |
| `/clear` | Clear conversation history |
| `/save` | Save conversation to text file |
| `/export` | Export conversation to JSON |
| `/stats` | Show system statistics |
| `/tokens <n>` | Set max tokens (16-1024) |
| `/model` | Change AI model |
| `/migrate` | Fix/update old configs |
| `/diagnose` | Run system diagnostics |
| `/reset` | Reset assistant completely |
| `/exit` | Exit gracefully |
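
The command table above can be wired up with a small dispatcher. This is a hypothetical sketch (the function and state keys are not from the package); it only covers `/clear` and `/tokens`, and a real version would validate that the `/tokens` argument is numeric before converting it.

```python
def handle_command(line, state):
    """Dispatch a slash command against a mutable session state (sketch)."""
    parts = line.strip().split()
    cmd, args = parts[0], parts[1:]
    if cmd == "/clear":
        state["history"] = []
        return "History cleared."
    if cmd == "/tokens":
        n = int(args[0])
        # Enforce the documented 16-1024 range.
        if not 16 <= n <= 1024:
            return "Token limit must be between 16 and 1024."
        state["max_tokens"] = n
        return f"Max tokens set to {n}."
    return f"Unknown command: {cmd}"
```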

## 🔧 Configuration

The assistant creates these files automatically:
- `config.json` - User and model settings
- `config.sig` - Cryptographic signature
- `models/` - Downloaded AI models
- `backups/` - Config backups (last 5)
- `.neuron.lock` - Instance lock file
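
The `config.json`/`config.sig` pairing can be sketched as sign-then-verify. The README mentions RSA signatures; this illustration substitutes stdlib HMAC-SHA256 to stay dependency-free, so it is a stand-in for the real scheme, not a reproduction of it.

```python
import hashlib
import hmac
import json

def sign_config(config: dict, key: bytes) -> str:
    """Produce a hex signature over a canonical JSON encoding (sketch)."""
    payload = json.dumps(config, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_config(config: dict, key: bytes, signature: str) -> bool:
    """Constant-time check that config.json matches config.sig (sketch)."""
    return hmac.compare_digest(sign_config(config, key), signature)
```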

## Model Comparison

| Model | Size | RAM (min) | VRAM (min) | Speed | Quality |
|-------|------|-----------|------------|-------|---------|
| OpenLLaMA 7B v2 | 13.5GB | 8GB | none (runs on CPU) | Medium | Excellent |
| Mistral 7B | 14GB | 16GB | 12GB | Medium | Excellent |
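
The changelog mentions "smart recommendations" for model selection; a toy version based on the comparison table above might look like this. The thresholds mirror the table, not the package's actual logic, and `recommend_model` is a hypothetical name.

```python
def recommend_model(ram_gb: float, vram_gb: float):
    """Pick a model from the documented minimums (illustrative sketch)."""
    # Mistral 7B is GPU-optimized: needs both 16GB RAM and 12GB VRAM.
    if ram_gb >= 16 and vram_gb >= 12:
        return "Mistral 7B"
    # OpenLLaMA 7B v2 runs on CPU with 8GB RAM.
    if ram_gb >= 8:
        return "OpenLLaMA 7B v2"
    return None  # below the documented minimums
```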

## Advanced Usage

### Set HuggingFace Token
```bash
export HF_TOKEN="your_token_here"
python neuron_assistant.py
```

### Custom Token Limit
```python
from neuron_assistant import NeuronAssistant

assistant = NeuronAssistant()
assistant.set_max_tokens(256)
```

### Programmatic Use
```python
from neuron_assistant import NeuronAssistant

# Initialize
assistant = NeuronAssistant(hf_token="optional_token")

# Chat
response = assistant.chat("Hello! How are you?")
print(response)

# Save conversation
assistant.save_history("my_chat.txt")
assistant.export_history_json("my_chat.json")
```

## 🐛 Troubleshooting

### Model Download Fails
```bash
# Check disk space
df -h

# Verify internet connection
ping huggingface.co

# Manual download location
ls models/
```

### Config Corrupted
```bash
# Run diagnostics
# In chat: /diagnose

# Migrate config
# In chat: /migrate

# Last resort - reset
# In chat: /reset
```

### Out of Memory
```bash
# Use OpenLLaMA model instead of Mistral
# Reduce token limit: /tokens 64
# Clear history: /clear
```
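
The "dynamic token limits and OOM handling" feature could reduce the token budget automatically rather than asking the user to. The sketch below is hypothetical: it catches a generic `MemoryError` as a stand-in for `torch.cuda.OutOfMemoryError` and halves the budget until generation succeeds or hits a floor.

```python
def generate_with_backoff(generate, max_tokens, floor=16):
    """Retry `generate(tokens)` with a halved budget on OOM (sketch).

    `generate` is any callable that raises MemoryError when the budget
    is too large. Returns (output, tokens_used).
    """
    tokens = max_tokens
    while True:
        try:
            return generate(tokens), tokens
        except MemoryError:
            if tokens <= floor:
                raise  # give up: even the floor budget does not fit
            tokens //= 2
```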

### GPU Not Detected
```bash
# Check CUDA installation
python -c "import torch; print(torch.cuda.is_available())"

# Reinstall PyTorch with CUDA
pip install torch==2.0.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
```

## 🔒 Security Features

- **Creator Lock**: Hardcoded creator name prevents identity theft
- **Config Signing**: RSA signatures verify config integrity
- **Prompt Injection Detection**: Blocks manipulation attempts
- **Output Sanitization**: Removes references to other AI companies
- **Backup System**: Auto-backups before changes
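
Prompt-injection detection is typically a pattern screen over incoming text. The patterns below are invented for illustration (the package's real rules are not published); a production filter would need a much broader rule set plus semantic checks.

```python
import re

# Hypothetical patterns; shown only to illustrate the technique.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are (now|actually)\b", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
]

def looks_like_injection(text: str) -> bool:
    """Return True if any known injection pattern appears in the input."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```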

## License

MIT License. See the LICENSE file for details.

## Contributing

Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request

## Support

- **Issues**: [GitHub Issues](https://github.com/devpatel/neuron-ai-assistant/issues)
- **Email**: dev@example.com
- **Docs**: [Wiki](https://github.com/devpatel/neuron-ai-assistant/wiki)

## Acknowledgments

- Built with [PyTorch](https://pytorch.org/)
- Uses [Transformers](https://huggingface.co/transformers)
- Supports [OpenLLaMA](https://huggingface.co/openlm-research/open_llama_7b_v2)
- Model: [Mistral AI](https://mistral.ai/)

## Changelog

### v0.4.91 (Current)
- OpenLLaMA 7B v2 and Mistral 7B model support
- Removed GPT4All dependency
- Enhanced model selection with smart recommendations
- Improved first-run experience
- Advanced identity protection
- Config migration system
- Comprehensive diagnostics
- Improved error handling
- Backup/restore functionality

---

**Made with heart by Dev Patel**

            
