| Field | Value |
|-------|-------|
| Name | mnemex |
| Version | 0.5.2 |
| Summary | Mnemex: Temporal memory management for AI assistants with human-like dynamics |
| Upload time | 2025-10-20 22:47:34 |
| Author | Mnemex |
| Requires Python | >=3.10 |
| License | MIT |
| Keywords | ai, llm, mcp, memex, memory, mnemonic, temporal-decay |
# Mnemex: Temporal Memory for AI
<!-- mcp-name: io.github.simplemindedbot/mnemex -->
A Model Context Protocol (MCP) server providing **human-like memory dynamics** for AI assistants. Memories naturally fade over time unless reinforced through use, mimicking the [Ebbinghaus forgetting curve](https://en.wikipedia.org/wiki/Forgetting_curve).
[License: MIT](https://opensource.org/licenses/MIT)
[Python 3.10+](https://www.python.org/downloads/)
[Tests](https://github.com/simplemindedbot/mnemex/actions/workflows/tests.yml)
[Security](https://github.com/simplemindedbot/mnemex/actions/workflows/security.yml)
[Coverage](https://codecov.io/gh/simplemindedbot/mnemex)
> [!WARNING]
> **🚧 ACTIVE DEVELOPMENT - EXPECT BUGS 🚧**
>
> This project is under active development and should be considered **experimental**. You will likely encounter bugs, breaking changes, and incomplete features. Use at your own risk. Please report issues on GitHub, but understand that this is research code, not production-ready software.
>
> **Known issues:**
> - API may change without notice between versions
> - Test coverage is incomplete
> **📖 New to this project?** Start with the [ELI5 Guide](ELI5.md) for a simple explanation of what this does and how to use it.
## Overview
This repository contains research, design, and a complete implementation of a short-term memory system that combines:
- **Novel temporal decay algorithm** based on cognitive science
- **Reinforcement learning** through usage patterns
- **Two-layer architecture** (STM + LTM) for working and permanent memory
- **Smart prompting patterns** for natural LLM integration
- **Git-friendly storage** with human-readable JSONL
- **Knowledge graph** with entities and relations
## Why Mnemex?
### 🔒 Privacy & Transparency
**All data stored locally on your machine** - no cloud services, no tracking, no data sharing.
- **Short-term memory**: Human-readable JSONL files (`~/.config/mnemex/jsonl/`)
  - One JSON object per line
  - Easy to inspect, version control, and backup
  - Git-friendly format for tracking changes
- **Long-term memory**: Markdown files optimized for Obsidian
  - YAML frontmatter with metadata
  - Wikilinks for connections
  - Permanent storage you control
You own your data. You can read it, edit it, delete it, or version control it - all without any special tools.
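For illustration, a single short-term memory record might look like this (a hypothetical record; field names mirror the scoring inputs described below, not necessarily the exact on-disk schema):
```json
{"id": "mem-123", "content": "Prefers TypeScript over JavaScript", "tags": ["preferences", "typescript"], "created_at": 1760000000, "last_used": 1760050000, "use_count": 3, "strength": 1.3}
```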
## Core Algorithm
The temporal decay scoring function:
$$
\Large \text{score}(t) = (n_{\text{use}})^\beta \cdot e^{-\lambda \cdot \Delta t} \cdot s
$$
Where:
- $\large n_{\text{use}}$ - Use count (number of accesses)
- $\large \beta$ (beta) - Sub-linear use count weighting (default: 0.6)
- $\large \lambda = \frac{\ln(2)}{t_{1/2}}$ (lambda) - Decay constant, set via the half-life (default: 3 days)
- $\large \Delta t$ - Time since last access (seconds)
- $\large s$ - Strength parameter $\in [0, 2]$ (importance multiplier)
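A minimal sketch of this formula in Python (illustrative names, not the package's internal API):
```python
import math

def decay_score(use_count: int, seconds_since_use: float,
                strength: float = 1.0, beta: float = 0.6,
                half_life_days: float = 3.0) -> float:
    """score(t) = n_use^beta * exp(-lambda * dt) * s"""
    lam = math.log(2) / (half_life_days * 86400)  # ~2.673e-6 /s for 3 days
    return (use_count ** beta) * math.exp(-lam * seconds_since_use) * strength

# One use, three days old, default strength: exactly at the half-life.
print(decay_score(1, 3 * 86400))  # ~0.5
```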
Thresholds:
- $\large \tau_{\text{forget}}$ (default 0.05) — if score < this, forget
- $\large \tau_{\text{promote}}$ (default 0.65) — if score ≥ this, promote (or if $\large n_{\text{use}}\ge5$ in 14 days)
Decay Models:
- Power-Law (default): heavier tail; most human-like retention
- Exponential: lighter tail; forgets sooner
- Two-Component: fast early forgetting + heavier tail
See the detailed parameter reference, model selection guidance, and worked examples in [docs/scoring_algorithm.md](docs/scoring_algorithm.md).
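The three models can be sketched as follows. The power-law form shown is a common parameterization and the half-life-to-timescale mapping is an assumption; the exact equations live in docs/scoring_algorithm.md. Defaults are taken from the configuration template further down:
```python
import math

def exponential(dt: float, lam: float = 2.673e-6) -> float:
    return math.exp(-lam * dt)                    # light tail, forgets sooner

def power_law(dt: float, alpha: float = 1.1, half_life_days: float = 3.0) -> float:
    tau = half_life_days * 86400                  # assumed time-scale mapping
    return (1.0 + dt / tau) ** -alpha             # heavier tail, most human-like

def two_component(dt: float, lam_fast: float = 1.603e-5,
                  lam_slow: float = 1.147e-6, w_fast: float = 0.7) -> float:
    # Fast early forgetting blended with a slow, heavier tail
    return w_fast * math.exp(-lam_fast * dt) + (1.0 - w_fast) * math.exp(-lam_slow * dt)
```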
## Tuning Cheat Sheet
- Balanced (default)
  - Half-life: 3 days (λ ≈ 2.67e-6)
  - β = 0.6, τ_forget = 0.05, τ_promote = 0.65, use_count ≥ 5 in 14 d
  - Strength: 1.0 (bump to 1.3–2.0 for critical)
- High-velocity context (ephemeral notes, rapid switching)
  - Half-life: 12–24 hours (λ ≈ 1.60e-5 to 8.02e-6)
  - β = 0.8–0.9, τ_forget = 0.10–0.15, τ_promote = 0.70–0.75
- Long retention (research/archival)
  - Half-life: 7–14 days (λ ≈ 1.15e-6 to 5.73e-7)
  - β = 0.3–0.5, τ_forget = 0.02–0.05, τ_promote = 0.50–0.60
- Preference/decision heavy assistants
  - Half-life: 3–7 days; β = 0.6–0.8
  - Strength defaults: 1.3–1.5 for preferences; 1.8–2.0 for decisions
- Aggressive space control
  - Raise τ_forget to 0.08–0.12 and/or shorten half-life; schedule weekly GC
- Environment template
  - MNEMEX_DECAY_LAMBDA=2.673e-6, MNEMEX_DECAY_BETA=0.6
  - MNEMEX_FORGET_THRESHOLD=0.05, MNEMEX_PROMOTE_THRESHOLD=0.65
  - MNEMEX_PROMOTE_USE_COUNT=5, MNEMEX_PROMOTE_TIME_WINDOW=14
**Decision thresholds:**
- Forget: $\text{score} < 0.05$ → delete memory
- Promote: $\text{score} \geq 0.65$ OR $n_{\text{use}} \geq 5$ within 14 days → move to LTM
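In code, the decision rule is a few lines (a sketch; the 14-day window check is simplified to a single elapsed-days parameter):
```python
def lifecycle(score: float, use_count: int, days_in_window: float) -> str:
    if score < 0.05:
        return "forget"    # eligible for garbage collection
    if score >= 0.65 or (use_count >= 5 and days_in_window <= 14):
        return "promote"   # candidate for long-term memory
    return "keep"          # remains in short-term memory
```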
## Key Innovations
### 1. Temporal Decay with Reinforcement
Unlike traditional caching (TTL, LRU), memories are scored continuously based on:
- **Recency** - Exponential decay over time
- **Frequency** - Use count with sub-linear weighting
- **Importance** - Adjustable strength parameter
This creates memory dynamics that closely mimic human cognition.
### 2. Smart Prompting System
Patterns for making AI assistants use memory naturally:
**Auto-Save**
```
User: "I prefer TypeScript over JavaScript"
→ Automatically saved with tags: [preferences, typescript, programming]
```
**Auto-Recall**
```
User: "Can you help with another TypeScript project?"
→ Automatically retrieves preferences and conventions
```
**Auto-Reinforce**
```
User: "Yes, still using TypeScript"
→ Memory strength increased, decay slowed
```
No explicit memory commands needed - just natural conversation.
### 3. Natural Spaced Repetition
Inspired by how concepts naturally reinforce across different contexts (the "Maslow effect" - remembering Maslow's hierarchy better when it appears in history, economics, and sociology classes).
**No flashcards. No explicit review sessions. Just natural conversation.**
**How it works:**
1. **Review Priority Calculation** - Memories in the "danger zone" (0.15-0.35 decay score) get highest priority
2. **Cross-Domain Detection** - Detects when memories are used in different contexts (tag Jaccard similarity <30%; sketched after this list)
3. **Automatic Reinforcement** - Memories strengthen naturally when used, especially across domains
4. **Blended Search** - Review candidates appear in 30% of search results (configurable)
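A minimal sketch of the danger-zone and cross-domain checks above (illustrative function names):
```python
def jaccard(a: set[str], b: set[str]) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def is_cross_domain(memory_tags: set[str], context_tags: set[str]) -> bool:
    # Low tag overlap means the memory is being used in a new context
    return jaccard(memory_tags, context_tags) < 0.3

def in_danger_zone(score: float, lo: float = 0.15, hi: float = 0.35) -> bool:
    # Memories about to fade are the highest-priority review candidates
    return lo <= score <= hi

# In the usage pattern below, [security, jwt, preferences] vs [api, auth, backend]
# share no tags, so Jaccard = 0.0 < 0.3 -> cross-domain reinforcement fires.
```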
**Usage pattern:**
```
User: "Can you help with authentication in my API?"
→ System searches, retrieves JWT preference memory
→ System uses memory to answer question
→ System calls observe_memory_usage with context tags [api, auth, backend]
→ Cross-domain usage detected (original tags: [security, jwt, preferences])
→ Memory automatically reinforced, strength boosted
→ Next search naturally surfaces memories needing review
```
**Configuration:**
```bash
MNEMEX_REVIEW_BLEND_RATIO=0.3 # 30% review candidates in search
MNEMEX_REVIEW_DANGER_ZONE_MIN=0.15 # Lower bound of danger zone
MNEMEX_REVIEW_DANGER_ZONE_MAX=0.35 # Upper bound of danger zone
MNEMEX_AUTO_REINFORCE=true # Auto-reinforce on observe
```
See `docs/prompts/` for LLM system prompt templates that enable natural memory usage.
### 4. Two-Layer Architecture
```
┌─────────────────────────────────────┐
│         Short-term memory           │
│  - JSONL storage                    │
│  - Temporal decay                   │
│  - Hours to weeks retention         │
└──────────────┬──────────────────────┘
               │ Automatic promotion
               ↓
┌─────────────────────────────────────┐
│      LTM (Long-Term Memory)         │
│  - Markdown files (Obsidian)        │
│  - Permanent storage                │
│  - Git version control              │
└─────────────────────────────────────┘
```
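Promotion is essentially a rendering step from a JSONL record to a Markdown note. A minimal sketch, assuming an illustrative frontmatter layout (the real format is defined by the vault integration):
```python
from pathlib import Path

def promote_to_vault(memory: dict, vault: Path) -> Path:
    """Render an STM record as a Markdown note with YAML frontmatter."""
    tags = ", ".join(memory.get("tags", []))
    note = (
        "---\n"
        f"id: {memory['id']}\n"
        f"tags: [{tags}]\n"
        f"created_at: {memory['created_at']}\n"
        "---\n\n"
        f"{memory['content']}\n"
    )
    path = vault / f"{memory['id']}.md"   # one note per memory
    path.write_text(note, encoding="utf-8")
    return path
```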
## Project Structure
```
mnemex/
├── README.md                  # This file
├── CLAUDE.md                  # Guide for AI assistants
├── src/mnemex/
│   ├── core/                  # Decay, scoring, clustering
│   ├── storage/               # JSONL and LTM index
│   ├── tools/                 # 12 MCP tools
│   ├── backup/                # Git integration
│   └── vault/                 # Obsidian integration
├── docs/
│   ├── scoring_algorithm.md   # Mathematical details
│   ├── prompts/               # Smart prompting patterns
│   ├── architecture.md        # System design
│   └── api.md                 # Tool reference
├── tests/                     # Test suite
├── examples/                  # Usage examples
└── pyproject.toml             # Project configuration
```
## Quick Start
### Installation
**Recommended: UV Tool Install**
```bash
# Install from GitHub (recommended)
uv tool install git+https://github.com/simplemindedbot/mnemex.git
# Or install from local directory (for development)
uv tool install .
```
This installs `mnemex` and all 7 CLI commands as isolated tools.
**Alternative: Editable Install (for development)**
```bash
# Clone and install in editable mode
git clone https://github.com/simplemindedbot/mnemex.git
cd mnemex
uv pip install -e ".[dev]"
```
### Configuration
Copy `.env.example` to `.env` and configure:
```bash
# Storage
MNEMEX_STORAGE_PATH=~/.config/mnemex/jsonl
# Decay model (power_law | exponential | two_component)
MNEMEX_DECAY_MODEL=power_law
# Power-law parameters (default model)
MNEMEX_PL_ALPHA=1.1
MNEMEX_PL_HALFLIFE_DAYS=3.0
# Exponential (if selected)
# MNEMEX_DECAY_LAMBDA=2.673e-6 # 3-day half-life
# Two-component (if selected)
# MNEMEX_TC_LAMBDA_FAST=1.603e-5 # ~12h
# MNEMEX_TC_LAMBDA_SLOW=1.147e-6 # ~7d
# MNEMEX_TC_WEIGHT_FAST=0.7
# Common parameters
MNEMEX_DECAY_LAMBDA=2.673e-6
MNEMEX_DECAY_BETA=0.6
# Thresholds
MNEMEX_FORGET_THRESHOLD=0.05
MNEMEX_PROMOTE_THRESHOLD=0.65
# Long-term memory (optional)
LTM_VAULT_PATH=~/Documents/Obsidian/Vault
```
### MCP Configuration
Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
"mcpServers": {
"mnemex": {
"command": "mnemex"
}
}
}
```
That's it! No paths, no environment variables needed.
**For development (editable install):**
```json
{
"mcpServers": {
"mnemex": {
"command": "uv",
"args": ["--directory", "/path/to/mnemex", "run", "mnemex"],
"env": {"PYTHONPATH": "/path/to/mnemex/src"}
}
}
}
```
**Configuration:**
- Storage paths are configured in `~/.config/mnemex/.env` or project `.env`
- See `.env.example` for all available settings
### Maintenance
Use the maintenance CLI to inspect and compact JSONL storage:
```bash
# Show storage stats (active counts, file sizes, compaction hints)
mnemex-maintenance stats
# Compact JSONL (rewrite without tombstones/duplicates)
mnemex-maintenance compact
```
### Migrating to UV Tool Install
If you're currently using an editable install (`uv pip install -e .`), you can switch to the simpler UV tool install:
```bash
# 1. Uninstall editable version
uv pip uninstall mnemex
# 2. Install as UV tool
uv tool install git+https://github.com/simplemindedbot/mnemex.git
# 3. Update Claude Desktop config to just:
# {"command": "mnemex"}
# Remove the --directory, run, and PYTHONPATH settings
```
**Your data is safe!** This only changes how the command is installed. Your memories in `~/.config/mnemex/` are untouched.
### Migrating from STM Server
If you previously used this project as "STM Server", use the migration tool:
```bash
# Preview what will be migrated
mnemex-migrate --dry-run
# Migrate data files from ~/.stm/ to ~/.config/mnemex/
mnemex-migrate --data-only
# Also migrate .env file (rename STM_* variables to MNEMEX_*)
mnemex-migrate --migrate-env --env-path ./.env
```
The migration tool will:
- Copy JSONL files from `~/.stm/jsonl/` to `~/.config/mnemex/jsonl/`
- Optionally rename environment variables (STM_* → MNEMEX_*)
- Create backups before making changes
- Provide clear next-step instructions
After migration, update your Claude Desktop config to use `mnemex` instead of `stm`.
## CLI Commands
The server includes 7 command-line tools:
```bash
mnemex # Run MCP server
mnemex-migrate # Migrate from old STM setup
mnemex-index-ltm # Index Obsidian vault
mnemex-backup # Git backup operations
mnemex-vault # Vault markdown operations
mnemex-search # Unified STM+LTM search
mnemex-maintenance # JSONL storage stats and compaction
```
## MCP Tools
12 tools for AI assistants to manage memories:
| Tool | Purpose |
|------|---------|
| `save_memory` | Save new memory with tags, entities |
| `search_memory` | Search with filters and scoring (includes review candidates) |
| `search_unified` | Unified search across STM + LTM |
| `touch_memory` | Reinforce memory (boost strength) |
| `observe_memory_usage` | Record memory usage for natural spaced repetition |
| `gc` | Garbage collect low-scoring memories |
| `promote_memory` | Move to long-term storage |
| `cluster_memories` | Find similar memories |
| `consolidate_memories` | Merge similar memories (algorithmic) |
| `read_graph` | Get entire knowledge graph |
| `open_memories` | Retrieve specific memories |
| `create_relation` | Link memories explicitly |
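To exercise these tools outside Claude Desktop, you can call the server from the official MCP Python SDK over stdio. A short sketch (the `save_memory` argument names here are assumptions; see the API reference for the actual schema):
```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server over stdio, exactly as Claude Desktop would
    params = StdioServerParameters(command="mnemex")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "save_memory",
                arguments={"content": "Prefers TypeScript", "tags": ["preferences"]},
            )
            print(result)

asyncio.run(main())
```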
### Example: Unified Search
Search across STM and LTM with the CLI:
```bash
mnemex-search "typescript preferences" --tags preferences --limit 5 --verbose
```
As an MCP tool, the equivalent `search_unified` request body:
```json
{
  "query": "typescript preferences",
  "tags": ["preferences"],
  "limit": 5,
  "verbose": true
}
```
### Example: Reinforce (Touch) Memory
Boost a memory's recency/use count to slow decay:
```json
{
"memory_id": "mem-123",
"boost_strength": true
}
```
Sample response:
```json
{
"success": true,
"memory_id": "mem-123",
"old_score": 0.41,
"new_score": 0.78,
"use_count": 5,
"strength": 1.1
}
```
### Example: Promote Memory
Suggest and promote high-value memories to the Obsidian vault.
Auto-detect (dry run):
```json
{
"auto_detect": true,
"dry_run": true
}
```
Promote a specific memory:
```json
{
"memory_id": "mem-123",
"dry_run": false,
"target": "obsidian"
}
```
### Example: Consolidate Similar Memories
Find and merge duplicate or highly similar memories to reduce clutter:
Auto-detect candidates (preview):
```json
{
"auto_detect": true,
"mode": "preview",
"cohesion_threshold": 0.75
}
```
Apply consolidation to detected clusters:
```json
{
"auto_detect": true,
"mode": "apply",
"cohesion_threshold": 0.80
}
```
The tool will (see the bookkeeping sketch after this list):
- Merge content intelligently (preserving unique information)
- Combine tags and entities (union)
- Calculate strength based on cluster cohesion
- Preserve earliest `created_at` and latest `last_used` timestamps
- Create tracking relations showing consolidation history
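A minimal sketch of that bookkeeping; the content concatenation and strength formula below are illustrative stand-ins, not the tool's actual merge logic:
```python
def merge_cluster(memories: list[dict], cohesion: float) -> dict:
    """Fold a non-empty cluster into one record."""
    return {
        "content": "\n\n".join(m["content"] for m in memories),  # naive stand-in
        "tags": sorted(set().union(*(set(m["tags"]) for m in memories))),
        "created_at": min(m["created_at"] for m in memories),   # earliest creation
        "last_used": max(m["last_used"] for m in memories),     # latest use
        "use_count": sum(m["use_count"] for m in memories),     # assumed combination
        "strength": min(2.0, 1.0 + cohesion),                   # illustrative scaling
    }
```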
## Mathematical Details
### Decay Curves
For a memory with $n_{\text{use}}=1$, $s=1.0$, and $\lambda = 2.673 \times 10^{-6}$ (3-day half-life):
| Time | Score | Status |
|------|-------|--------|
| 0 hours | 1.000 | Fresh |
| 12 hours | 0.891 | Active |
| 1 day | 0.794 | Active |
| 3 days | 0.500 | Half-life |
| 7 days | 0.198 | Decaying |
| 14 days | 0.039 | Below forget threshold |
| 30 days | 0.001 | **Forgotten** |
### Use Count Impact
With $\beta = 0.6$ (sub-linear weighting):
| Use Count | Boost Factor |
|-----------|--------------|
| 1 | 1.0× |
| 5 | 2.6× |
| 10 | 4.0× |
| 50 | 10.5× |
Frequent access significantly extends retention.
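Both tables can be reproduced in a few lines using the exponential model and the defaults above:
```python
import math

lam = math.log(2) / (3 * 86400)   # 3-day half-life, per second
for days in (0.5, 1, 3, 7, 14, 30):
    print(f"{days:>4} d: {math.exp(-lam * days * 86400):.3f}")

beta = 0.6                        # sub-linear use-count boost
for n in (1, 5, 10, 50):
    print(f"n={n:>2}: {n ** beta:.1f}x")
```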
## Documentation
- **[Scoring Algorithm](docs/scoring_algorithm.md)** - Complete mathematical model with LaTeX formulas
- **[Smart Prompting](docs/prompts/memory_system_prompt.md)** - Patterns for natural LLM integration
- **[Architecture](docs/architecture.md)** - System design and implementation
- **[API Reference](docs/api.md)** - MCP tool documentation
- **[Bear Integration](docs/bear-integration.md)** - Guide to using Bear app as an LTM store
- **[Graph Features](docs/graph_features.md)** - Knowledge graph usage
## Use Cases
### Personal Assistant (Balanced)
- 3-day half-life
- Remember preferences and decisions
- Auto-promote frequently referenced information
### Development Environment (Aggressive)
- 1-day half-life
- Fast context switching
- Aggressive forgetting of old context
### Research / Archival (Conservative)
- 14-day half-life
- Long retention
- Comprehensive knowledge preservation
## License
MIT License - See [LICENSE](LICENSE) for details.
Clean-room implementation. No AGPL dependencies.
## Related Work
- [Model Context Protocol](https://github.com/modelcontextprotocol) - MCP specification
- [Ebbinghaus Forgetting Curve](https://en.wikipedia.org/wiki/Forgetting_curve) - Cognitive science foundation
- Research inspired by: Memoripy, Titan MCP, MemoryBank
### Related Memory MCP Servers
* [mem0ai/mem0-mcp](https://github.com/mem0ai/mem0-mcp) (Python) - An MCP server that provides smart memory for AI to manage and reference past conversations, user preferences, and key details.
* [mnemex](https://github.com/simplemindedbot/mnemex) (Python) - This project: a human-like short-term working memory (JSONL) and long-term memory (Markdown) system built around a temporal decay algorithm, where memories fade unless reinforced through use.
* [modelcontextprotocol/server-memory](https://github.com/modelcontextprotocol/server-memory) (TypeScript) - A knowledge graph-based persistent memory system for AI.
## Citation
If you use this work in research, please cite:
```bibtex
@software{mnemex_2025,
title = {Mnemex: Temporal Memory for AI},
author = {simplemindedbot},
year = {2025},
url = {https://github.com/simplemindedbot/mnemex},
  version = {0.5.2}
}
```
## Contributing
Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed instructions.
### 🚨 **Help Needed: Windows & Linux Testers!**
I develop on macOS and need help testing on Windows and Linux. If you have access to these platforms, please:
- Try the installation instructions
- Run the test suite
- Report what works and what doesn't
See the [**Help Needed section**](CONTRIBUTING.md#-help-needed-windows--linux-testers) in CONTRIBUTING.md for details.
### General Contributions
For all contributors, see [CONTRIBUTING.md](CONTRIBUTING.md) for:
- Platform-specific setup (Windows, Linux, macOS)
- Development workflow
- Testing guidelines
- Code style requirements
- Pull request process
Quick start:
1. Read [CONTRIBUTING.md](CONTRIBUTING.md) for platform-specific setup
2. Understand the [Architecture docs](docs/architecture.md)
3. Review the [Scoring Algorithm](docs/scoring_algorithm.md)
4. Follow existing code patterns
5. Add tests for new features
6. Update documentation
## Status
**Version:** 0.5.2
**Status:** Research implementation - functional but evolving
### Phase 1 (Complete) ✅
- 10 MCP tools
- Temporal decay algorithm
- Knowledge graph
### Phase 2 (Complete) ✅
- JSONL storage
- LTM index
- Git integration
- Smart prompting documentation
- Maintenance CLI
- Memory consolidation (algorithmic merging)
### Future Work
- Spaced repetition optimization
- Adaptive decay parameters
- Performance benchmarks
- LLM-assisted consolidation (optional enhancement)
---
**Built with** [Claude Code](https://claude.com/claude-code) 🤖