memorg

Name: memorg
Version: 0.1.2
Summary: A hierarchical context management system for Large Language Models that acts as an external memory layer, helping LLMs maintain context over extended interactions while managing token usage efficiently.
Upload time: 2025-08-22 12:32:50
Author: Dipankar Sarkar
Requires Python: <4.0,>=3.11
License: MIT
Keywords: llm, context-management, memory, ai, nlp, openai

# Memorg: Hierarchical Context Management System

## Why Memorg?

Large Language Models (LLMs) have revolutionized how we interact with AI, but they face fundamental limitations in managing context over extended conversations or complex workflows. As conversations grow longer or tasks become more intricate, LLMs struggle with:

- **Context Window Limits**: Most LLMs have finite context windows that fill up quickly with lengthy conversations
- **Information Loss**: Important details from earlier in a conversation can be forgotten as new information is added
- **Irrelevant Information**: Without intelligent filtering, LLMs process all context equally, leading to inefficiency
- **Memory Fragmentation**: Related information gets scattered across different parts of a conversation without proper organization

Memorg addresses these challenges by providing a sophisticated hierarchical context management system that acts as an external memory layer for LLMs. It intelligently stores, organizes, retrieves, and optimizes contextual information, allowing LLMs to maintain coherent, long-term interactions while staying within token limits.

Think of Memorg as a "smart memory manager" for LLMs - it decides what information is important to keep, how to organize it for efficient retrieval, and how to present it optimally to the model.

## What is Memorg?

Memorg is a sophisticated context management system designed to enhance the capabilities of Large Language Models (LLMs) by providing efficient context management, retrieval, and optimization. It serves as an external memory layer that helps LLMs maintain context over extended interactions, manage information hierarchically, and optimize token usage for better performance.

Originally designed for chat-based interactions, Memorg has evolved to support a wide range of workflows beyond conversation, including document analysis, research, content creation, and more.

Memorg can be used both as a **Python library** for integration into your applications and as a **command-line interface (CLI)** for standalone use.

## Features

- **Hierarchical Context Storage**: Organizes information in a Session → Conversation → Topic → Exchange hierarchy
- **Intelligent Context Management**: Prioritizes and compresses information based on relevance and importance
- **Efficient Retrieval**: Combines keyword, semantic, and temporal search capabilities
- **Context Window Optimization**: Manages token usage and creates optimized prompts
- **Working Memory Management**: Efficiently allocates and manages token budgets
- **Generic Memory Abstraction**: Use memory management capabilities across different workflows, not just chat
- **Flexible Tagging System**: Organize and search memory items using custom tags
- **Dual Interface**: Available as both a Python library and a standalone CLI

## Architecture Overview

Memorg follows a modular architecture designed for extensibility and efficiency:

```
┌─────────────────┐    ┌──────────────────┐    ┌────────────────────┐
│   Main System   │────│  Context Store   │────│   SQLite Storage   │
└─────────────────┘    └──────────────────┘    └────────────────────┘
                              │                         │
                              ▼                         ▼
                   ┌──────────────────┐    ┌────────────────────┐
                   │ Vector Store     │    │   USearch Index    │
                   └──────────────────┘    └────────────────────┘
                              │
                              ▼
                   ┌──────────────────┐
                   │ OpenAI Client    │
                   └──────────────────┘

┌─────────────────┐    ┌──────────────────┐    ┌────────────────────┐
│ Context Manager │    │ Retrieval System │    │ Window Optimizer   │
└─────────────────┘    └──────────────────┘    └────────────────────┘
                              │
                              ▼
                   ┌────────────────────┐
                   │ Memory Abstraction │
                   └────────────────────┘
```

- **Context Store**: Manages the hierarchical data structure (Session → Conversation → Topic → Exchange)
- **Storage Layer**: Uses SQLite for structured data and USearch for vector embeddings
- **Context Manager**: Handles prioritization, compression, and working memory allocation
- **Retrieval System**: Provides intelligent search capabilities across different dimensions
- **Window Optimizer**: Ensures efficient token usage and prompt construction
- **Memory Abstraction**: Generic interface for using memory capabilities across different workflows

## Installation

Install Memorg using pip:

```bash
pip install memorg
```

Or install from source using Poetry:

```bash
git clone https://github.com/skelf-research/memorg.git
cd memorg
poetry install
```

## Quick Start

### As a Python Library

Memorg can be easily integrated into your Python projects:

```python
import asyncio

from app.main import MemorgSystem
from app.storage.sqlite_storage import SQLiteStorageAdapter
from app.vector_store.usearch_vector_store import USearchVectorStore
from openai import AsyncOpenAI

async def main():
    # Initialize the system
    storage = SQLiteStorageAdapter("memorg.db")
    vector_store = USearchVectorStore("memorg.db")
    openai_client = AsyncOpenAI()
    system = MemorgSystem(storage, vector_store, openai_client)

    # Start using it! create_session is a coroutine, so it must be awaited
    session = await system.create_session("user123", {"max_tokens": 4096})

asyncio.run(main())
```

### As a Command-Line Interface

Memorg also provides a powerful CLI for standalone use:

1. Set up your OpenAI API key:
```bash
export OPENAI_API_KEY="your-api-key-here"
```

2. Run the CLI:
```bash
memorg
```

Or if installed from source:
```bash
poetry run python -m app.cli
```

## Specifications

For detailed specifications, please refer to:
- [Technical Specification](specifications/technical.md) - Core architecture and implementation details
- [Usage Guide](specifications/usage.md) - Detailed usage patterns and examples
- [Analysis](specifications/analysis.md) - System analysis and design decisions

## Use Cases & Benefits

Memorg is particularly valuable for:

### **Long Conversations**
- Maintain context across extended dialogues without losing important details
- Automatically prioritize recent and relevant information
- Prevent context window overflow with intelligent compression

### **Complex Workflows**
- Track multi-step processes with hierarchical organization
- Preserve key decisions and parameters throughout a workflow
- Enable context-aware decision making at each step

### **Research & Analysis**
- Organize findings and insights by topic and relevance
- Quickly retrieve relevant information from large datasets
- Maintain research context across multiple sessions

### **Customer Support**
- Keep conversation history for personalized service
- Escalate complex issues with complete context preservation
- Ensure consistency across support agent interactions

### **Content Creation**
- Manage research and drafts in organized topics
- Track content evolution and key revisions
- Optimize token usage for efficient generation

## Key Benefits

- **Reduced Token Costs**: Intelligent context management minimizes unnecessary token usage
- **Improved Accuracy**: Relevant context is always available when needed
- **Better User Experience**: More coherent and contextually appropriate responses
- **Scalable Memory**: Handle conversations of any length without performance degradation
- **Extensible Design**: Modular architecture allows for custom components and integrations

## Library Usage

Memorg can be used as a library in your Python projects. Here's how to integrate it:

```python
from app.main import MemorgSystem
from app.storage.sqlite_storage import SQLiteStorageAdapter
from app.vector_store.usearch_vector_store import USearchVectorStore
from openai import AsyncOpenAI

async def setup_memorg():
    # Initialize components
    storage = SQLiteStorageAdapter("memorg.db")
    vector_store = USearchVectorStore("memorg.db")
    openai_client = AsyncOpenAI()
    
    # Create system instance
    system = MemorgSystem(storage, vector_store, openai_client)
    
    # Create a session with token budget
    session = await system.create_session("user123", {"max_tokens": 4096})
    
    # Start a conversation
    conversation = await system.start_conversation(session.id)
    
    # Create a topic
    topic = await system.context_store.create_topic(conversation.id, "Project Discussion")
    
    # Add an exchange (interaction)
    exchange = await system.add_exchange(
        topic.id,
        "What are the key features?",
        "The system provides hierarchical storage, intelligent context management, and efficient retrieval."
    )
    
    # Search through context
    results = await system.search_context("key features")
    
    # Monitor memory usage
    memory_usage = await system.get_memory_usage()
    return system, session, conversation, topic

# Generic Memory Usage (for non-chat workflows)
async def document_analysis_workflow():
    # Initialize the same way as above
    storage = SQLiteStorageAdapter("memorg.db")
    vector_store = USearchVectorStore("memorg.db")
    openai_client = AsyncOpenAI()
    system = MemorgSystem(storage, vector_store, openai_client)
    
    # Create a session for document analysis
    session = await system.create_session("analyst_123", {"workflow": "document_analysis"})
    
    # Create custom memory items for documents
    document_item = await system.create_memory_item(
        content="This is a research document about AI advancements.",
        item_type="document",  # Can be any type, not just conversation-related
        parent_id=session.id,
        metadata={"author": "Research Team", "category": "AI"},
        tags=["research", "AI", "document"]
    )
    
    # Search across all memory, not just conversations
    results = await system.search_memory(
        query="AI research",
        item_types=["document"],  # Filter by type
        tags=["research"],        # Filter by tags
        limit=5
    )
    
    return results

# The entry points above are coroutines, so drive them with an event loop:
if __name__ == "__main__":
    import asyncio
    asyncio.run(setup_memorg())
```

## CLI Usage

The CLI provides an interactive way to explore and manage your memory system:

```bash
# Start the CLI
memorg
```

Available commands in the CLI:
- `help`: Show available commands
- `new`: Start a new conversation
- `search`: Search through conversation history
- `memsearch`: Search through all memory (documents, notes, etc.)
- `addnote`: Add a custom note to memory with tags
- `memory`: Show memory usage statistics
- `exit`: Exit the chat

Example CLI session:
```bash
$ memorg
Welcome to Memorg CLI Chat!
Type 'help' for available commands or start chatting.

You: help
Available Commands:
- help: Show this help message
- new: Start a new conversation
- search: Search through conversation history
- memsearch: Search through all memory (documents, notes, etc.)
- addnote: Add a custom note to memory with tags
- memory: Show memory usage statistics
- exit: Exit the chat

You: memory
Memory Usage:
Total Tokens: 1,234
Active Items: 50
Compressed Items: 10
Vector Count: 60
Index Size: 2.5 MB

You: search
Enter search query: key features
Score  Type        Content
0.92   SEMANTIC    The system provides hierarchical storage...
0.85   KEYWORD     Intelligent context management and...

You: addnote
Enter note content: Remember to review the quarterly reports
Enter tags (comma-separated, optional): reports,quarterly,review
Added note with ID: 123e4567-e89b-12d3-a456-426614174000

You: memsearch
Enter memory search query: quarterly reports
Score  Type        Content                          Tags
0.95   note        Remember to review the...        reports,quarterly,review
```

## Components

### Context Store

The Context Store manages the hierarchical storage of context data (see the sketch after this list):
- Sessions: Top-level containers for user interactions
- Conversations: Groups of related exchanges
- Topics: Specific subjects within conversations
- Exchanges: Individual message pairs
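
For illustration only, the hierarchy above could be modeled as nested records like the following; the class and field names here are hypothetical and are not Memorg's actual schema:

```python
# Hypothetical sketch of the Session -> Conversation -> Topic -> Exchange
# hierarchy; Memorg's real models may differ.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Exchange:
    user_message: str
    assistant_message: str

@dataclass
class Topic:
    name: str
    exchanges: List[Exchange] = field(default_factory=list)

@dataclass
class Conversation:
    topics: List[Topic] = field(default_factory=list)

@dataclass
class Session:
    user_id: str
    conversations: List[Conversation] = field(default_factory=list)

# Build one branch of the tree
topic = Topic("Project Discussion", [Exchange("What are the key features?",
                                              "Hierarchical storage and retrieval.")])
session = Session("user123", [Conversation([topic])])
print(session.conversations[0].topics[0].exchanges[0].user_message)
```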

### Memory Abstraction

The Memory Abstraction provides a generic interface for using memory capabilities across different workflows (see the sketch below):
- **Memory Items**: Generic representation of any stored information
- **Memory Store**: Interface for storing and retrieving memory items
- **Memory Manager**: High-level interface for memory operations
- **Flexible Types**: Support for custom item types beyond conversation elements
- **Tagging System**: Organize and filter memory items using tags
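
As a rough sketch of the memory-item idea, the example below defines a generic item with a type and tags, and filters a list of items the way the `item_types` and `tags` parameters of `search_memory` are used earlier in this README; everything else in the snippet is made up for illustration:

```python
# Illustrative-only sketch of a generic memory item and type/tag filtering;
# not Memorg's actual MemoryStore implementation.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class MemoryItem:
    content: str
    item_type: str                      # e.g. "document", "note", "exchange"
    tags: List[str] = field(default_factory=list)
    metadata: Dict[str, str] = field(default_factory=dict)

def filter_items(items: List[MemoryItem],
                 item_types: Optional[List[str]] = None,
                 tags: Optional[List[str]] = None) -> List[MemoryItem]:
    """Keep items matching any requested type and carrying all requested tags."""
    result = []
    for item in items:
        if item_types and item.item_type not in item_types:
            continue
        if tags and not set(tags).issubset(item.tags):
            continue
        result.append(item)
    return result

items = [MemoryItem("AI research document", "document", ["research", "AI"]),
         MemoryItem("Grocery list", "note", ["personal"])]
print([i.content for i in filter_items(items, item_types=["document"], tags=["research"])])
```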

### Context Manager

The Context Manager handles the following (see the prioritization sketch below):
- Prioritization of information based on recency and importance
- Compression of content while preserving key information
- Working memory allocation and management
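
One common way to combine recency and importance is a weighted score with exponential time decay. The snippet below sketches that general idea under assumed weights and half-life; it is not Memorg's actual prioritization formula:

```python
# Hypothetical recency + importance scoring, not Memorg's actual formula.
import math
import time

def priority_score(created_at: float, importance: float,
                   half_life_s: float = 3600.0, w_recency: float = 0.5) -> float:
    """Blend an exponential recency decay with a 0..1 importance rating."""
    age = time.time() - created_at
    recency = math.exp(-math.log(2) * age / half_life_s)  # halves every half_life_s
    return w_recency * recency + (1 - w_recency) * importance

# An hour-old but important item still outranks a fresh, trivial one
old_important = priority_score(time.time() - 3600, importance=0.9)
fresh_trivial = priority_score(time.time(), importance=0.1)
print(f"{old_important:.2f} vs {fresh_trivial:.2f}")
```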

### Retrieval System

The Retrieval System provides the following (see the hybrid-scoring sketch below):
- Query processing with entity recognition
- Multi-factor relevance scoring
- Hybrid search capabilities
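
Hybrid search generally blends per-dimension scores (keyword, semantic, temporal) into a single ranking, similar to the Score column in the CLI session above. The weights and helper below are illustrative assumptions, not Memorg's scorer:

```python
# Generic hybrid-scoring sketch; the weights are made up and this is not
# Memorg's retrieval implementation.
from typing import Dict, Optional

def hybrid_score(keyword: float, semantic: float, temporal: float,
                 weights: Optional[Dict[str, float]] = None) -> float:
    """Combine per-dimension scores (each normalized to 0..1) into one rank."""
    weights = weights or {"keyword": 0.3, "semantic": 0.5, "temporal": 0.2}
    return (weights["keyword"] * keyword
            + weights["semantic"] * semantic
            + weights["temporal"] * temporal)

# A semantically strong hit can outrank an exact-keyword but stale match
print(hybrid_score(keyword=0.2, semantic=0.9, temporal=0.6))  # 0.63
print(hybrid_score(keyword=1.0, semantic=0.3, temporal=0.1))  # 0.47
```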

### Context Window Optimizer

The Context Window Optimizer (see the budget-packing sketch below):
- Summarizes content while preserving important entities
- Optimizes token usage
- Creates context-aware prompt templates
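
A simple form of window optimization is to greedily pack the highest-priority snippets into a fixed token budget. The sketch below uses a crude whitespace tokenizer purely as a stand-in and should not be read as Memorg's optimizer:

```python
# Illustration of packing prioritized snippets into a token budget; the
# whitespace "tokenizer" is a stand-in, not what Memorg actually uses.
from typing import List, Tuple

def approx_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def build_context(snippets: List[Tuple[float, str]], budget: int) -> str:
    """Greedily add snippets in descending priority until the budget is spent."""
    chosen, used = [], 0
    for _, text in sorted(snippets, reverse=True):
        cost = approx_tokens(text)
        if used + cost > budget:
            continue  # skip anything that would overflow the window
        chosen.append(text)
        used += cost
    return "\n".join(chosen)

snippets = [(0.9, "User prefers concise answers."),
            (0.7, "Project deadline is Friday."),
            (0.2, "Weather small talk from earlier in the chat.")]
print(build_context(snippets, budget=10))
```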

## Development

### Running Tests

```bash
poetry run pytest
```

### Code Style

The project uses:
- Black for code formatting
- isort for import sorting
- mypy for type checking

Run the formatters:
```bash
poetry run black .
poetry run isort .
poetry run mypy .
```

## Contributing

1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Push to the branch
5. Create a Pull Request

## Conclusion

Memorg represents a significant step forward in making LLMs more practical for real-world applications that require long-term context management. By providing a robust external memory system, it enables developers to build more sophisticated AI applications that can maintain context over extended interactions while optimizing for performance and cost.

Whether you're building customer support systems, research assistants, content creation tools, or complex workflow automation, Memorg provides the foundation for creating more intelligent and context-aware AI applications.

## Citation

If you use Memorg in your research or project, please cite it as follows:

```bibtex
@software{memorg,
  author = {Dipankar Sarkar},
  title = {Memorg: Hierarchical Context Management System},
  year = {2024},
  url = {https://github.com/skelf-research/memorg},
  note = {A sophisticated context management system for enhancing LLM capabilities}
}
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.

            
