ragmax 1.0.0

Name: ragmax
Version: 1.0.0
Summary: RAGMax - Advanced RAG memory system for AI platforms via MCP
Home page: https://github.com/yourusername/ragmax
Author: Vish Siddharth
License: MIT
Requires Python: >=3.8
Keywords: ai, memory, mcp, claude, chatgpt, rag, vector-search
Uploaded: 2025-10-11 19:47:20
Requirements: none recorded

# Universal AI Memory

A production-ready MCP (Model Context Protocol) server providing universal memory across all AI platforms (Claude, ChatGPT, Gemini, Perplexity, etc.).

## 🎯 Features

- **Universal Memory**: Share context across Claude Desktop, ChatGPT, Gemini, and any MCP-compatible platform
- **Smart Chunking**: Contextual embeddings using Anthropic's approach
- **Hybrid Search**: Vector similarity + keyword search with automatic reranking
- **3-Tier Storage**: Hot (Redis), Warm (Qdrant), Cold (PostgreSQL)
- **Automatic User ID**: No need to specify userId - automatically consistent per machine
- **Production Ready**: Error handling, caching, batch operations
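
The "Automatic User ID" behavior above can be sketched as a stable hash of a machine identifier, so the same machine always maps to the same ID. This is an illustrative assumption, not the server's exact derivation; `deriveUserId` is a hypothetical helper:

```typescript
import { createHash } from "node:crypto";
import { hostname } from "node:os";

// Hypothetical sketch: derive a short, stable user ID from a machine
// identifier. Hashing keeps the ID consistent per machine without
// requiring the user to pick one.
function deriveUserId(machineId: string = hostname()): string {
  return createHash("sha256").update(machineId).digest("hex").slice(0, 16);
}
```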

## 🚀 Quick Start

### Prerequisites

- Node.js 18+
- Docker & Docker Compose
- API key from [Voyage AI](https://www.voyageai.com/) or [Cohere](https://cohere.com/)

### Installation

```bash
# 1. Clone and install
git clone <your-repo>
cd universal-ai-memory
npm install

# 2. Configure environment
cp .env.example .env
# Edit .env and add your API keys

# 3. Start databases
docker-compose up -d

# 4. Initialize database
cat src/storage/schema.sql | docker-compose exec -T postgres psql -U postgres -d ai_memory

# 5. Build
npm run build
```

### Configuration

Edit `.env`:
```env
# Required: Choose one
VOYAGE_API_KEY=your_voyage_key
# OR
COHERE_API_KEY=your_cohere_key

# Optional: Set your user ID (defaults to machine ID)
DEFAULT_USER_ID=your_name
```
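
The "choose one" rule above can be made concrete. The sketch below (not the server's actual startup code) picks an embedding provider from whichever key is set; preferring Voyage when both are present is an assumed precedence:

```typescript
type Provider = "voyage" | "cohere";

// Sketch: select an embedding provider from the environment.
// Preferring Voyage over Cohere when both keys are set is an
// illustrative assumption.
function pickProvider(env: Record<string, string | undefined>): Provider {
  if (env.VOYAGE_API_KEY) return "voyage";
  if (env.COHERE_API_KEY) return "cohere";
  throw new Error("Set VOYAGE_API_KEY or COHERE_API_KEY in .env");
}
```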

## 📱 Platform Setup

### Claude Desktop (Kiro)

Create or edit `.kiro/settings/mcp.json`:

```json
{
  "mcpServers": {
    "universal-memory": {
      "command": "node",
      "args": ["/absolute/path/to/universal-ai-memory/dist/index.js"],
      "env": {
        "VOYAGE_API_KEY": "your_key_here",
        "DEFAULT_USER_ID": "your_name"
      },
      "disabled": false,
      "autoApprove": ["search_memory", "add_to_memory"]
    }
  }
}
```

**Important**: Use an absolute path! Restart Claude Desktop after configuration.

### Claude Desktop (Official)

Edit `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS):

```json
{
  "mcpServers": {
    "universal-memory": {
      "command": "node",
      "args": ["/absolute/path/to/universal-ai-memory/dist/index.js"],
      "env": {
        "VOYAGE_API_KEY": "your_key_here",
        "DEFAULT_USER_ID": "your_name"
      }
    }
  }
}
```

Restart Claude Desktop.

### ChatGPT Desktop (with MCP support)

Edit `~/Library/Application Support/ChatGPT/mcp_config.json`:

```json
{
  "mcpServers": {
    "universal-memory": {
      "command": "node",
      "args": ["/absolute/path/to/universal-ai-memory/dist/index.js"],
      "env": {
        "VOYAGE_API_KEY": "your_key_here",
        "DEFAULT_USER_ID": "your_name"
      }
    }
  }
}
```

Restart ChatGPT.

### Continue.dev (VS Code Extension)

Edit `~/.continue/config.json`:

```json
{
  "mcpServers": [
    {
      "name": "universal-memory",
      "command": "node",
      "args": ["/absolute/path/to/universal-ai-memory/dist/index.js"],
      "env": {
        "VOYAGE_API_KEY": "your_key_here",
        "DEFAULT_USER_ID": "your_name"
      }
    }
  ]
}
```

Reload VS Code.

### Cline (VS Code Extension)

Edit `.vscode/settings.json` in your workspace:

```json
{
  "cline.mcpServers": {
    "universal-memory": {
      "command": "node",
      "args": ["/absolute/path/to/universal-ai-memory/dist/index.js"],
      "env": {
        "VOYAGE_API_KEY": "your_key_here",
        "DEFAULT_USER_ID": "your_name"
      }
    }
  }
}
```

Reload window.

### Generic MCP Client

For any MCP-compatible client, use:

```json
{
  "command": "node",
  "args": ["/absolute/path/to/universal-ai-memory/dist/index.js"],
  "env": {
    "POSTGRES_HOST": "localhost",
    "POSTGRES_PORT": "5432",
    "POSTGRES_DB": "ai_memory",
    "POSTGRES_USER": "postgres",
    "POSTGRES_PASSWORD": "postgres",
    "REDIS_HOST": "localhost",
    "REDIS_PORT": "6379",
    "QDRANT_URL": "http://localhost:6333",
    "VOYAGE_API_KEY": "your_key_here",
    "DEFAULT_USER_ID": "your_name"
  }
}
```

## 🧪 Testing

After setup, test in your AI platform:

```javascript
// Add a memory (userId is automatic)
add_to_memory({
  platform: "claude",
  conversationId: "test_001",
  content: "I love TypeScript and building AI applications",
  role: "user"
})

// Search memory (userId is automatic)
search_memory({
  query: "What do I love?",
  limit: 5
})
```

Or just ask naturally:
- "Remember that I prefer functional programming"
- "What are my coding preferences?"
- "What did I say about TypeScript?"

## 🔧 Available Tools

### `search_memory`

Search through all memories using hybrid semantic + keyword search.

**Parameters:**
- `query` (required): Search query
- `userId` (optional): User identifier (auto-detected)
- `limit` (optional): Max results (default: 10)
- `platform` (optional): Filter by platform
- `minScore` (optional): Minimum relevance (0-1)

### `add_to_memory`

Add new content to memory with automatic chunking and embedding.

**Parameters:**
- `platform` (required): Platform name (claude, chatgpt, gemini, etc.)
- `conversationId` (required): Conversation identifier
- `content` (required): Content to remember
- `role` (required): "user" or "assistant"
- `userId` (optional): User identifier (auto-detected)
- `metadata` (optional): Additional metadata
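
The parameter contract above can be expressed as a type plus a small validator — an illustrative sketch of the required/optional split, not the server's actual schema:

```typescript
interface AddToMemoryParams {
  platform: string;                    // e.g. "claude", "chatgpt"
  conversationId: string;
  content: string;
  role: "user" | "assistant";
  userId?: string;                     // auto-detected when omitted
  metadata?: Record<string, unknown>;  // free-form tags
}

// Return a list of problems; an empty list means the call is well-formed.
function validateAddToMemory(p: Partial<AddToMemoryParams>): string[] {
  const errors: string[] = [];
  if (!p.platform) errors.push("platform is required");
  if (!p.conversationId) errors.push("conversationId is required");
  if (!p.content) errors.push("content is required");
  if (p.role !== "user" && p.role !== "assistant")
    errors.push('role must be "user" or "assistant"');
  return errors;
}
```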

## 📊 Architecture

```
User Input
    ↓
PostgreSQL (full content + metadata)
    ↓
Redis (hot cache - last 50 messages)
    ↓
Smart Chunking (contextual embeddings)
    ↓
Voyage AI / Cohere (embeddings)
    ↓
Qdrant (vector search) + PostgreSQL (backup)
```
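
The "Smart Chunking" stage above can be sketched as splitting the text into overlapping chunks and prepending a short context header before embedding, loosely after Anthropic's contextual-retrieval idea. The sizes, overlap, and header format here are illustrative assumptions, not the server's actual settings:

```typescript
// Sketch of contextual chunking: each chunk carries a context header so the
// embedding model sees the surrounding conversation, not an isolated slice.
// size/overlap values are illustrative.
function chunkWithContext(
  text: string,
  context: string,
  size = 200,
  overlap = 40
): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(`${context}\n---\n${text.slice(start, start + size)}`);
  }
  return chunks;
}
```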

**Search Flow:**
```
Query → Redis Cache → Generate Embedding → Hybrid Search → Rerank → Results
```
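
The hybrid-search step can be sketched as a weighted blend of the vector and keyword scores followed by a sort. The linear blend and the 0.7 weight are illustrative assumptions, not the server's tuned reranker:

```typescript
interface Candidate {
  id: string;
  vectorSim: number;     // cosine similarity from the vector store, 0..1
  keywordScore: number;  // normalized keyword/BM25-style score, 0..1
}

// Blend the two scores and return the top results.
// alpha = 0.7 (favoring semantic similarity) is an assumed weight.
function hybridRerank(cands: Candidate[], limit = 10, alpha = 0.7) {
  return cands
    .map(c => ({ ...c, score: alpha * c.vectorSim + (1 - alpha) * c.keywordScore }))
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}
```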

## πŸ› οΈ Maintenance

```bash
# View logs
docker-compose logs -f

# Restart services
docker-compose restart

# Backup database
docker-compose exec postgres pg_dump -U postgres ai_memory > backup.sql

# Clear cache
docker-compose exec redis redis-cli FLUSHALL

# Check database
docker-compose exec postgres psql -U postgres -d ai_memory
```

## 📈 Performance

- **Add Memory**: 200-500ms (includes embedding)
- **Search (hot)**: <10ms
- **Search (cold)**: 50-200ms
- **Storage**: ~1KB per message

## πŸ” Security

- User isolation via userId
- No cross-user data leakage
- Local database (no external sharing)
- API keys in environment variables

## πŸ—ΊοΈ Roadmap

See [ROADMAP.md](ROADMAP.md) for planned features:
- Web search integration
- Multi-modal support (images, code)
- Knowledge graph
- Team collaboration
- And more...

## 📚 Documentation

- [ARCHITECTURE.md](ARCHITECTURE.md) - Technical architecture details
- [PROJECT_SUMMARY.md](PROJECT_SUMMARY.md) - Project overview
- [ROADMAP.md](ROADMAP.md) - Future features

## πŸ› Troubleshooting

### MCP Server Not Connecting
1. Check that the path in your config is absolute
2. Verify that `dist/index.js` exists (run `npm run build` if it does not)
3. Check that the databases are running: `docker-compose ps`
4. Restart the AI platform completely

### No Search Results
1. Make sure you have added memories first
2. Check that the `userId` is consistent across calls
3. Verify that the databases are running
4. Check the logs: `docker-compose logs`

### Embedding Errors
1. Verify API key is correct
2. Check API credits/quota
3. Try alternative provider (Voyage ↔ Cohere)

### Database Connection Errors
```bash
docker-compose restart
docker-compose logs postgres
```

## 💡 Tips

- Use consistent `conversationId` for threaded conversations
- Add metadata tags for better organization
- The system learns from all platforms - ask in Claude what you told ChatGPT!
- Hot cache makes repeated searches instant

## 📄 License

MIT

## 🤝 Contributing

Contributions welcome! Please open an issue or PR.

---

- **Built with**: TypeScript, PostgreSQL, Redis, Qdrant, Voyage AI, Cohere
- **MCP Version**: 0.5.0
- **Status**: Production Ready ✅

            
