# Ghost Brain
The intelligent AI brain component for the Ghost project, providing memory management, RAG capabilities, and LLM integration.
## Features
- **Memory Management**: SQLite-based conversation history with embeddings
- **RAG System**: Semantic search with citations using `txtai`
- **LLM Integration**: Support for OpenAI and Google Gemini models
- **HTTP API**: RESTful interface for plugin communication
- **Intelligent Chunking**: Automatic conversation segmentation and summarization
- **Flexible Deployment**: Run locally, on remote desktops, or in the cloud
- **Mobile Support**: Connect mobile Obsidian to desktop brain servers
## Installation
```bash
pip install -e .
```
## Usage
### Command Line Interface (CLI)
Ghost Brain provides a comprehensive CLI for easy configuration and server management:
```bash
# Start server with default settings
ghost-brain server
# Start server on custom port
ghost-brain server --port 9000
# Start server with debug logging
ghost-brain server --host 0.0.0.0 --port 8000 --log-level debug
# Start server with custom database and backend
ghost-brain server --db-path ./data/memory.db --memory-backend txtai
# Show current configuration
ghost-brain config
# Show configuration in JSON format
ghost-brain config --format json
# Show environment variables documentation
ghost-brain env-docs
# Show version information
ghost-brain version
```
### CLI Commands
| Command | Description | Options |
|---------|-------------|---------|
| `server` | Start the brain server | `--host`, `--port`, `--log-level`, `--reload`, `--db-path`, `--memory-backend`, etc. |
| `config` | Show current configuration | `--format {text,json,env}` |
| `env-docs` | Show environment variables documentation | None |
| `version` | Show version information | None |
### CLI Examples
#### Development
```bash
# Start with auto-reload for development
ghost-brain server --reload --log-level debug
# Use custom database path
ghost-brain server --db-path ./dev/memory.db
# Override multiple settings
ghost-brain server --port 9000 --memory-backend txtai --temperature 0.5
```
#### Production
```bash
# Start on all interfaces
ghost-brain server --host 0.0.0.0 --port 8000
# Use production settings
ghost-brain server --log-level info --memory-backend txtai
```
#### Configuration Management
```bash
# Check current configuration
ghost-brain config
# Export configuration as environment variables
ghost-brain config --format env > .env
# Get configuration in JSON for automation
ghost-brain config --format json
```
### As a Service
```bash
ghost-brain
```
### As a Python Package
```python
import asyncio

from ghost_brain import Brain

brain = Brain()
# process_message is a coroutine, so it must run inside an event loop
response = asyncio.run(brain.process_message("Hello, how are you?"))
```
### Deployment Options
#### Local Development
```bash
# Start brain server locally
python -m ghost_brain.server
# Default: http://localhost:8000
```
#### Remote Desktop (for mobile users)
```bash
# Install on desktop
python -m pip install ghost-brain
# Start brain server
python -m ghost_brain.server
# Find desktop IP
ifconfig # macOS/Linux (or `ip addr` on newer Linux)
ipconfig # Windows
# Mobile users connect to: http://YOUR_DESKTOP_IP:8000
```
#### Cloud Deployment
```bash
# Deploy to Railway, Heroku, etc.
# Users connect to: https://your-deployment-url.com
```
#### Team Sharing
```bash
# Deploy to shared server
# All team members use same URL in their Obsidian settings
```
## Development
```bash
# Install the dependencies
pip install -r requirements.txt
# Install in editable mode
pip install -e .
# Run tests
pytest
# Start development server
uvicorn ghost_brain.server:app --reload
```
## API Endpoints
- `POST /chat` - Process a chat message
- `GET /health` - Health check
- `POST /memory/search` - Search memory
- `GET /memory/stats` - Get memory statistics
- `GET /settings` - Get current settings
- `POST /settings` - Update settings
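As a sketch of how a client might exercise these endpoints, the snippet below checks `/health` and posts to `/chat` using Python's `requests` (already a project dependency). The `message` field in the chat payload is an assumption; the actual request schema is not documented here:

```python
import requests

BASE_URL = "http://localhost:8000"  # the server's documented default address

def build_chat_payload(message: str) -> dict:
    # Assumed request shape -- the real /chat schema may differ.
    return {"message": message}

def send_chat(message: str) -> dict:
    """POST a chat message and return the decoded JSON response."""
    resp = requests.post(f"{BASE_URL}/chat", json=build_chat_payload(message), timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Requires a running brain server (e.g. `ghost-brain server`).
    print(requests.get(f"{BASE_URL}/health", timeout=5).json())
    print(send_chat("Hello, how are you?"))
```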
## Configuration
Ghost Brain can be configured entirely through environment variables, making it perfect for cloud deployments, Docker containers, and team sharing.
### Environment Variables (Recommended)
The brain server supports comprehensive configuration via environment variables:
```bash
# API Keys
OPENAI_API_KEY=sk-your-openai-key
GEMINI_API_KEY=your-gemini-key
# Server Configuration
BRAIN_SERVER_PORT=8000
BRAIN_SERVER_HOST=0.0.0.0
# Memory Configuration
BRAIN_DB_PATH=./data/memory.db
BRAIN_MEMORY_BACKEND=txtai
# General Settings
BRAIN_LOG_LEVEL=info
BRAIN_SYSTEM_PROMPT="You are a helpful AI assistant."
```
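For scripts that wrap or supervise the server, the same variables can be read with plain `os.environ`. The defaults below mirror the values shown above; this helper is illustrative, not part of the package API:

```python
import os

def load_brain_config() -> dict:
    """Collect Ghost Brain settings from the environment,
    falling back to the defaults documented above."""
    return {
        "host": os.environ.get("BRAIN_SERVER_HOST", "0.0.0.0"),
        "port": int(os.environ.get("BRAIN_SERVER_PORT", "8000")),
        "db_path": os.environ.get("BRAIN_DB_PATH", "./data/memory.db"),
        "memory_backend": os.environ.get("BRAIN_MEMORY_BACKEND", "txtai"),
        "log_level": os.environ.get("BRAIN_LOG_LEVEL", "info"),
    }

if __name__ == "__main__":
    print(load_brain_config())
```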
**📖 [Complete Environment Variables Documentation](ENVIRONMENT_VARIABLES.md)**
### Quick Configuration Examples
#### Local Development
```bash
# Start with custom port
BRAIN_SERVER_PORT=9000 python -m ghost_brain.server
# Start with debug logging
BRAIN_LOG_LEVEL=debug python -m ghost_brain.server
```
#### Cloud Deployment (Railway)
```bash
# Set in Railway environment variables
OPENAI_API_KEY=sk-your-key
BRAIN_SERVER_HOST=0.0.0.0
BRAIN_SERVER_PORT=8000
BRAIN_DB_PATH=/app/data/memory.db
```
#### Team Sharing
```bash
# Shared server configuration
BRAIN_SERVER_HOST=0.0.0.0
BRAIN_SERVER_PORT=8000
BRAIN_DB_PATH=/shared/memory.db
BRAIN_MEMORY_BACKEND=txtai
```
### Brain Server Settings (Obsidian Plugin)
The Obsidian plugin supports configurable brain server connections:
- **Local Mode**: Default localhost:8000 configuration
- **Custom Mode**: Connect to any brain server URL
- **Port Configuration**: Customize local brain server port
- **Status Monitoring**: Real-time connection health checks
### Settings Interface
```typescript
brainServer: {
useCustomBrainServer: boolean;
brainServerUrl: string;
brainServerPort: number;
}
```
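Because the server also exposes `GET /settings` and `POST /settings` (see API Endpoints above), the plugin's connection settings can be inspected from any HTTP client. A minimal sketch, assuming both endpoints accept and return JSON (the exact settings schema is not documented here):

```python
import requests

def get_settings(base_url: str = "http://localhost:8000") -> dict:
    """Fetch the brain server's current settings as JSON."""
    resp = requests.get(f"{base_url}/settings", timeout=5)
    resp.raise_for_status()
    return resp.json()

def update_settings(changes: dict, base_url: str = "http://localhost:8000") -> dict:
    """POST settings updates; the accepted payload shape is assumed."""
    resp = requests.post(f"{base_url}/settings", json=changes, timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Requires a running brain server.
    print(get_settings())
```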
### Configuration Testing
Test your configuration:
```bash
# Run configuration test
python test_environment_config.py
# Check configuration via API
curl http://localhost:8000/config/environment-variables
```
## Architecture
The brain consists of several core modules:
- **Memory Manager**: Handles conversation storage and retrieval
- **LLM Handler**: Manages API calls to language models
- **RAG Engine**: Provides semantic search capabilities
- **HTTP Server**: Exposes functionality via REST API
- **Settings Manager**: Handles configuration and deployment options
## Use Cases
### Desktop Users
- Default localhost:8000 configuration
- No changes needed
- Brain server runs on same machine
### Mobile Users
- Install brain on desktop
- Use desktop IP address in mobile settings
- No Python installation needed on mobile
### Cloud/Team Users
- Deploy brain to cloud service
- Share one brain server across team members
- Centralized memory and processing
## Testing
```bash
# Test brain server connectivity
python test_brain_server.py
# Test Obsidian integration
python test_obsidian_integration.py
# Test settings integration
python test_settings_integration.py
```
## Contributing
We welcome contributions to Ghost Brain!
### Adding New Chat Archive Importers
If you want to add support for importing chat archives from other AI providers (Claude, Gemini, Bing, etc.), please:
- Open an issue or discussion with a sample export file and format description.
- Submit a pull request with a new parser or endpoint for the provider's format.
- See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines on code style, testing, and submitting PRs.
We encourage community contributions to expand Ghost Brain's compatibility with more chat platforms!