memg-core

Name: memg-core
Version: 0.3.0
Summary: Lightweight memory system for AI agents with vector search and graph storage
Author email: Genovo AI <dev@genovo.ai>
Upload time: 2025-08-13 15:35:28
Requires Python: >=3.11
Keywords: ai, memory, vector-search, graph-database, mcp, agents
Requirements: qdrant-client, kuzu, fastembed, python-dotenv, pydantic
# MEMG Core

**Lightweight memory system for AI agents with dual storage (Qdrant + Kuzu)**

## Features

- **Vector Search**: Fast semantic search with Qdrant
- **Graph Storage**: Optional relationship analysis with Kuzu
- **AI Integration**: Automated entity extraction with Google Gemini
- **MCP Compatible**: Ready-to-use MCP server for AI agents
- **Lightweight**: Minimal dependencies, optimized for performance

## Quick Start

### Option 1: Docker (Recommended)
```bash
# 1. Create configuration
cp env.example .env
# Edit .env and set your GOOGLE_API_KEY

# 2. Run MEMG MCP Server (359MB)
docker run -d \
  -p 8787:8787 \
  --env-file .env \
  ghcr.io/genovo-ai/memg-core-mcp:latest

# 3. Test it's working
curl http://localhost:8787/health
```

### Option 2: Python Package (Core Library)
```bash
pip install memg-core

# Set up environment (for examples/tests)
cp env.example .env
# Edit .env and set your GOOGLE_API_KEY

# Use the core library in your app; the MCP server is provided via Docker image
# Example usage shown below in the Usage section.
```

### Development setup
```bash
# 1) Create virtualenv and install slim runtime deps for library usage
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# 2) For running tests and linters locally, install dev deps
pip install -r requirements-dev.txt

# 3) Run tests
export MEMG_TEMPLATE="software_development"
export QDRANT_STORAGE_PATH="$HOME/.local/share/qdrant"
export KUZU_DB_PATH="$HOME/.local/share/kuzu/memg.db"
mkdir -p "$QDRANT_STORAGE_PATH" "$HOME/.local/share/kuzu"
PYTHONPATH=$(pwd)/src pytest -q
```

## Usage

```python
import asyncio

from memg_core import add_memory, search_memories
from memg_core.models.core import Memory, MemoryType

# Add a note
note = Memory(user_id="u1", content="Python is great for AI", memory_type=MemoryType.NOTE)
add_memory(note)

# Search (search_memories is a coroutine)
results = asyncio.run(search_memories("python ai", user_id="u1"))
```
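Because `search_memories` is awaitable, several queries can be issued concurrently with `asyncio.gather`. The sketch below uses a stand-in coroutine in place of the real `search_memories` (the stub and its return shape are illustrative assumptions, not the library's actual API):

```python
import asyncio

# Stand-in for memg_core.search_memories; in real use, import it instead.
async def search_memories(query: str, user_id: str) -> list[str]:
    return [f"hit for {query!r} ({user_id})"]

async def multi_search(queries: list[str], user_id: str) -> dict[str, list[str]]:
    # Run all searches concurrently and map each query to its results.
    results = await asyncio.gather(
        *(search_memories(q, user_id=user_id) for q in queries)
    )
    return dict(zip(queries, results))

hits = asyncio.run(multi_search(["python ai", "vector search"], user_id="u1"))
```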

### YAML registries (optional)

Core ships with three tiny registries under `integration/config/`:

- `core.minimal.yaml`: basic types `note`, `document`, `task` with anchors and generic relations
- `core.software_dev.yaml`: adds `bug` + `solution` and `bug_solution` relation
- `core.knowledge.yaml`: `concept` + `document` with `mentions`/`derived_from`
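Each registry is a small YAML document. As a rough sketch of what `core.minimal.yaml` might contain (the field names here are illustrative assumptions; consult the shipped files for the actual schema):

```yaml
# Illustrative sketch only; see integration/config/core.minimal.yaml for the real schema.
types:
  - name: note
    anchor: content
  - name: document
    anchor: content
  - name: task
    anchor: content
relations:
  - name: related_to
    source: "*"
    target: "*"
```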

Enable:

```bash
export MEMG_ENABLE_YAML_SCHEMA=true
export MEMG_YAML_SCHEMA=$(pwd)/integration/config/core.minimal.yaml
```

## Evaluation

Use the built-in scripts to generate a synthetic dataset covering all entity and memory types, then run repeatable evaluations on each iteration.

### 1) Generate dataset
```bash
python scripts/generate_synthetic_dataset.py \
  --output ./data/memg_synth.jsonl \
  --num 200 \
  --user eval_user
```

This creates JSONL rows containing a `memory` plus associated `entities` and `relationships`, exercising:
- All `EntityType` values (TECHNOLOGY, DATABASE, COMPONENT, ERROR, SOLUTION, FILE_TYPE, etc.)
- Multiple `MemoryType`s: document, note, conversation, task
- Basic `MENTIONS` relationships
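A row can be pictured roughly as follows; the exact field names emitted by `scripts/generate_synthetic_dataset.py` may differ, so treat this shape as an assumption:

```python
import json

# Illustrative row shape; the generator's actual field names may differ.
row = {
    "memory": {
        "user_id": "eval_user",
        "memory_type": "note",
        "content": "Fixed a Qdrant timeout by raising the client limit.",
    },
    "entities": [{"name": "Qdrant", "type": "DATABASE"}],
    "relationships": [{"type": "MENTIONS", "source": "memory", "target": "Qdrant"}],
}

line = json.dumps(row)   # one JSONL line
parsed = json.loads(line)
ok = {"memory", "entities", "relationships"} <= parsed.keys()
```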

### 2) Offline validation (no external services)
Validates schema and database compatibility quickly without embeddings or storage.
```bash
python scripts/evaluate_memg.py --data ./data/memg_synth.jsonl --mode offline
```
Output summary includes rows, counts, and error/warning totals to track across iterations.

### 3) Live processing (embeddings + storage)
Requires a configured environment (e.g., `GOOGLE_API_KEY`) and reachable storage. This mode runs the unified pipeline and validates the resulting memories.
```bash
python scripts/evaluate_memg.py --data ./data/memg_synth.jsonl --mode live
```

Tip: Commit the dataset and compare results over time in CI to catch regressions.

## Configuration

Configure via `.env` file (copy from `env.example`):

```bash
# Required
GOOGLE_API_KEY=your_google_api_key_here

# Core settings
GEMINI_MODEL=gemini-2.0-flash
MEMORY_SYSTEM_MCP_PORT=8787
MEMG_TEMPLATE=software_development

# Storage
BASE_MEMORY_PATH=$HOME/.local/share/memory_system
QDRANT_COLLECTION=memories
EMBEDDING_DIMENSION_LEN=768
```

## Requirements

- Python 3.11+
- Google API key for Gemini

## Links

- [Repository](https://github.com/genovo-ai/memg-core)
- [Issues](https://github.com/genovo-ai/memg-core/issues)
- [Documentation](https://github.com/genovo-ai/memg-core#readme)

## License

MIT License - see LICENSE file for details.

            
