# llm-user-memory

[![PyPI](https://img.shields.io/pypi/v/llm-user-memory.svg)](https://pypi.org/project/llm-user-memory/)
[![Changelog](https://img.shields.io/github/v/release/jrodrigosm/llm-user-memory?include_prereleases&label=changelog)](https://github.com/jrodrigosm/llm-user-memory/releases)
[![Tests](https://github.com/jrodrigosm/llm-user-memory/workflows/Test/badge.svg)](https://github.com/jrodrigosm/llm-user-memory/actions?query=workflow%3ATest)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/jrodrigosm/llm-user-memory/blob/main/LICENSE)

A transparent memory system for [LLM](https://llm.datasette.io/) that automatically maintains and uses a user profile to provide personalized AI responses.

## Installation

Install this plugin in the same environment as LLM:
```bash
llm install llm-user-memory
```

## Usage

After installation, set up transparent memory integration:

```bash
llm memory install-shell
```

This adds a shell function that automatically injects your user profile into every LLM interaction. Restart your terminal or run:

```bash
source ~/.bashrc  # or ~/.zshrc for zsh users
```

Now use LLM normally; your conversations will automatically include memory context:

```bash
llm "What should I work on today?"
# Response will be personalized based on your stored profile

llm "I just finished the memory plugin project"
# This information will be remembered for future conversations
```

The memory system works completely transparently. Your user profile is automatically:
- Injected as context in every prompt
- Updated in the background based on your conversations
- Stored locally in your LLM configuration directory

## Features

### Automatic Profile Building

The plugin automatically builds and maintains a user profile based on your conversations:

```bash
# First conversation
llm "I'm a Python developer working on machine learning projects"

# Later conversations automatically know this context
llm "What's the best way to optimize this model?"
# Response considers your Python/ML background
```
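The update step can be pictured with LLM's Python API. A minimal sketch, assuming a hypothetical `revise_profile` helper and an update prompt invented for illustration (the plugin's real prompt is not shown here):

```python
import llm


def revise_profile(model_id: str, old_profile: str, exchange: str) -> str:
    # Ask the same model the user already talked to for a revised profile.
    model = llm.get_model(model_id)
    response = model.prompt(
        "Here is a user profile in Markdown:\n\n"
        f"{old_profile}\n\n"
        "Here is the latest conversation excerpt:\n\n"
        f"{exchange}\n\n"
        "Return the profile updated with any new durable facts, as Markdown."
    )
    return response.text()
```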

### Transparent Operation

No need to remember special commands or flags. Once installed, the memory system works automatically:

```bash
# These all include memory context automatically:
llm "Help me debug this code"
llm -m gpt-4 "Explain quantum computing"
llm -t my-template "Process this data"
```

### Profile Management

View and manage your stored profile:

```bash
# View current profile
llm memory show

# Clear profile and start fresh
llm memory clear

# Temporarily disable memory updates
llm memory pause

# Re-enable memory updates
llm memory resume
```

### Background Updates

Profile updates happen in the background after each conversation, so they never slow down your interactions:

```bash
llm "I switched from JavaScript to Rust development"
# ✓ Response generated immediately
# ✓ Profile updated in background: "Updating memory..."
```
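Conceptually, the background worker only needs to notice new rows in LLM's log database and kick off a profile revision. A rough sketch of that polling loop, assuming LLM's `logs.db` schema (a `responses` table whose ULID `id` values sort chronologically); treat the table and column names as assumptions, not the plugin's actual code:

```python
import sqlite3
import time

POLL_SECONDS = 10  # assumed default; see LLM_MEMORY_UPDATE_INTERVAL below


def watch(db_path: str) -> None:
    # db_path: LLM's logs database; `llm logs path` prints its location.
    conn = sqlite3.connect(db_path)
    last_seen = conn.execute("SELECT max(id) FROM responses").fetchone()[0] or ""
    while True:
        time.sleep(POLL_SECONDS)
        for row in conn.execute(
            "SELECT id, model, prompt, response FROM responses "
            "WHERE id > ? ORDER BY id",
            (last_seen,),
        ):
            last_seen = row[0]
            handle_new_exchange(model=row[1], prompt=row[2], response=row[3])


def handle_new_exchange(model: str, prompt: str, response: str) -> None:
    """Placeholder: feed the exchange into a profile revision step."""
```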

### Privacy and Local Storage

All profile data is stored locally in your LLM configuration directory:
- No external services involved
- Profile updates use the same model you're already using
- Full control over your data

## Memory Profile Structure

Your profile is stored as readable Markdown in `~/.config/llm/memory/profile.md`:

```markdown
# User Profile

## Personal Information
- Role: Python Developer
- Experience: 5+ years in machine learning

## Current Projects
- Working on LLM memory plugin
- Exploring transformer architectures

## Interests
- Natural language processing
- Open source development
- Performance optimization

## Preferences
- Prefers practical examples over theory
- Likes concise, actionable advice
```

## Advanced Usage

### Manual Profile Editing

You can manually edit your profile:

```bash
# Edit profile directly
$EDITOR "$(llm memory path)"

# Or use llm memory show and copy/edit content
llm memory show > temp_profile.md
# Edit temp_profile.md
llm memory load temp_profile.md
```

### Shell Integration Details

The shell integration works by creating a function that wraps the `llm` command:

```bash
llm() {
    command llm -f memory:auto "$@"
}
```

This automatically injects the `memory:auto` fragment on every call. The `command` builtin ensures the wrapper invokes the real `llm` executable rather than recursing into the function itself.

### Uninstalling Shell Integration

To remove the transparent integration:

```bash
llm memory uninstall-shell
```

Then restart your terminal. You can still use memory manually with:

```bash
llm -f memory:auto "your prompt here"
```

## Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:
```bash
cd llm-user-memory
python -m venv venv
source venv/bin/activate
```

Now install the dependencies and test dependencies:
```bash
pip install -e .
pip install -r requirements-dev.txt
```

To run the tests:
```bash
pytest
```

## How It Works

The plugin uses LLM's fragment loader system to inject profile context and monitors the conversation database to trigger background profile updates:

1. **Fragment Injection**: The `memory:auto` fragment loader reads your profile and injects it as context (see the sketch after this list)
2. **Database Monitoring**: A background process watches for new conversations in LLM's SQLite database
3. **Profile Updates**: After each conversation, the same model you used gets a request to update your profile
4. **Transparent Operation**: Shell function integration makes this completely automatic
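
Step 1 maps onto LLM's documented `register_fragment_loaders` plugin hook. A minimal sketch of what a `memory:` loader can look like; the function names and fallback behavior are illustrative, not this plugin's actual source:

```python
import llm


@llm.hookimpl
def register_fragment_loaders(register):
    # After this, `llm -f memory:auto ...` resolves through memory_loader().
    register("memory", memory_loader)


def memory_loader(argument: str) -> llm.Fragment:
    # Assumed location relative to llm.user_dir(); the exact directory may
    # differ from the ~/.config/llm/memory/profile.md path shown earlier.
    profile_path = llm.user_dir() / "memory" / "profile.md"
    text = profile_path.read_text() if profile_path.exists() else ""
    return llm.Fragment(text, f"memory:{argument}")
```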

## Troubleshooting

### Memory not working
Check if shell integration is active:
```bash
type llm
# Should show: llm is a function
```

### Profile not updating
Check if background daemon is running:
```bash
llm memory status
```

### Reset everything
```bash
llm memory clear
llm memory uninstall-shell
llm memory install-shell
```

## Configuration

Memory behavior can be configured via environment variables:

```bash
# Disable background updates
export LLM_MEMORY_UPDATES=false

# Change update frequency (seconds)
export LLM_MEMORY_UPDATE_INTERVAL=10

# Disable memory system entirely
export LLM_MEMORY_DISABLED=true
```
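
For illustration, here is one way a plugin could interpret these variables in Python; the names match the list above, but the parsing rules are an assumption:

```python
import os


def env_flag(name: str, default: bool) -> bool:
    # Treat "false", "0", "no", and the empty string as off; anything else as on.
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() not in ("false", "0", "no", "")


UPDATES_ENABLED = env_flag("LLM_MEMORY_UPDATES", True)
MEMORY_DISABLED = env_flag("LLM_MEMORY_DISABLED", False)
UPDATE_INTERVAL = int(os.environ.get("LLM_MEMORY_UPDATE_INTERVAL", "10"))
```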

            
