# memori
<p align="center">
<strong>Open-Source Memory Engine for LLMs, AI Agents & Multi-Agent Systems</strong>
</p>
<p align="center">
<i>Make LLMs context-aware with human-like memory, dual-mode retrieval, and automatic context injection.</i>
</p>
<p align="center">
<a href="https://memori.gibsonai.com/docs">Learn more</a>
·
<a href="https://www.gibsonai.com/discord">Join Discord</a>
</p>
<p align="center">
<a href="https://badge.fury.io/py/memorisdk">
<img src="https://badge.fury.io/py/memorisdk.svg" alt="PyPI version">
</a>
<a href="https://pepy.tech/projects/memorisdk">
<img src="https://static.pepy.tech/badge/memorisdk" alt="Downloads">
</a>
<a href="https://opensource.org/licenses/MIT">
<img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT">
</a>
<a href="https://www.python.org/downloads/">
<img src="https://img.shields.io/badge/python-3.8+-blue.svg" alt="Python 3.8+">
</a>
</p>
---
## 🎯 Philosophy
- **A second memory for all your LLM work** - Never repeat context again
- **Dual-mode memory injection** - Conscious short-term memory + Auto intelligent search
- **Flexible database connections** - SQLite, PostgreSQL, MySQL support
- **Pydantic-based intelligence** - Structured memory processing with validation
- **Simple, reliable architecture** - Just works out of the box
## ⚡ Quick Start
Install Memori:
```bash
pip install memorisdk
```
### Example with OpenAI
1. Install OpenAI:
```bash
pip install openai
```
2. Set OpenAI API Key:
```bash
export OPENAI_API_KEY="sk-your-openai-key-here"
```
3. Run this Python script:
```python
from memori import Memori
from openai import OpenAI
# Initialize OpenAI client
openai_client = OpenAI()
# Initialize memory
memori = Memori(conscious_ingest=True)
memori.enable()
print("=== First Conversation - Establishing Context ===")
response1 = openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=[{
"role": "user",
"content": "I'm working on a Python FastAPI project"
}]
)
print("Assistant:", response1.choices[0].message.content)
print("\n" + "="*50)
print("=== Second Conversation - Memory Provides Context ===")
response2 = openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=[{
"role": "user",
"content": "Help me add user authentication"
}]
)
print("Assistant:", response2.choices[0].message.content)
print("\n💡 Notice: Memori automatically knows about your FastAPI Python project!")
```
---
**🚀 Ready to explore more?**
- [📖 Examples](#examples) - Basic usage patterns and code samples
- [🔌 Framework Integrations](#framework-integrations) - LangChain, Agno & CrewAI examples
- [🎮 Interactive Demos](#interactive-demos) - Live applications & tutorials
---
## 🧠 How It Works
### 1. **Universal Recording**
```python
memori.enable() # Records ALL LLM conversations automatically
```
### 2. **Intelligent Processing**
- **Entity Extraction**: Extracts people, technologies, projects
- **Smart Categorization**: Facts, preferences, skills, rules
- **Pydantic Validation**: Structured, type-safe memory storage
### 3. **Dual Memory Modes**
#### **🧠 Conscious Mode** - Short-Term Working Memory
```python
conscious_ingest=True # One-shot short-term memory injection
```
- **At Startup**: Conscious agent analyzes long-term memory patterns
- **Memory Promotion**: Moves essential conversations to short-term storage
- **One-Shot Injection**: Injects working memory once at conversation start
- **Like Human Short-Term Memory**: Names, current projects, preferences readily available
#### **🔍 Auto Mode** - Dynamic Database Search
```python
auto_ingest=True # Continuous intelligent memory retrieval
```
- **Every LLM Call**: Retrieval agent analyzes user query intelligently
- **Full Database Search**: Searches through entire memory database
- **Context-Aware**: Injects relevant memories based on current conversation
- **Performance Optimized**: Caching, async processing, background threads
## 🧠 Memory Modes Explained
### **Conscious Mode** - Short-Term Working Memory
```python
# Mimics human conscious memory - essential info readily available
memori = Memori(
database_connect="sqlite:///my_memory.db",
conscious_ingest=True, # 🧠 Short-term working memory
openai_api_key="sk-..."
)
```
**How Conscious Mode Works:**
1. **At Startup**: Conscious agent analyzes long-term memory patterns
2. **Essential Selection**: Promotes 5-10 most important conversations to short-term
3. **One-Shot Injection**: Injects this working memory once at conversation start
4. **No Repeats**: Won't inject again during the same session
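The promotion step above can be sketched with plain Python. This is an illustrative stand-in only: `promote_essential` and the conversation dicts are hypothetical, not part of Memori's API, and the real conscious agent ranks with an LLM rather than topic frequency.

```python
from collections import Counter

def promote_essential(conversations, limit=5):
    """Promote the most frequently recurring conversations to short-term memory."""
    # Score each conversation by how often its topic recurs across the history.
    topic_counts = Counter(c["topic"] for c in conversations)
    ranked = sorted(conversations, key=lambda c: topic_counts[c["topic"]], reverse=True)
    return ranked[:limit]

history = [
    {"id": 1, "topic": "fastapi"},
    {"id": 2, "topic": "fastapi"},
    {"id": 3, "topic": "cooking"},
]
short_term = promote_essential(history, limit=2)  # the two FastAPI conversations win
```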
### **Auto Mode** - Dynamic Intelligent Search
```python
# Searches entire database dynamically based on user queries
memori = Memori(
database_connect="sqlite:///my_memory.db",
auto_ingest=True, # 🔍 Smart database search
openai_api_key="sk-..."
)
```
**How Auto Mode Works:**
1. **Every LLM Call**: Retrieval agent analyzes user input
2. **Query Planning**: Uses AI to understand what memories are needed
3. **Smart Search**: Searches through entire database (short-term + long-term)
4. **Context Injection**: Injects 3-5 most relevant memories per call
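To make the retrieval flow concrete, here is a deliberately simple sketch that ranks memories by token overlap with the query. Memori's actual retrieval agent uses AI-driven query planning, so treat `retrieve` as a hypothetical illustration of the rank-and-inject idea, not the shipped algorithm.

```python
def retrieve(query, memories, top_k=3):
    """Return up to top_k memories sharing the most tokens with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(m.lower().split())), m) for m in memories]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for score, m in scored if score > 0][:top_k]

memories = [
    "User prefers pytest for Python testing",
    "Project uses FastAPI with PostgreSQL",
    "User lives in Berlin",
]
relevant = retrieve("help with python testing", memories, top_k=2)
# The pytest memory ranks first; the Berlin memory is filtered out entirely.
```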
### **Combined Mode** - Best of Both Worlds
```python
# Get both working memory AND dynamic search
memori = Memori(
conscious_ingest=True, # Working memory once
auto_ingest=True, # Dynamic search every call
openai_api_key="sk-..."
)
```
### **Intelligence Layers:**
1. **Memory Agent** - Processes every conversation with Pydantic structured outputs
2. **Conscious Agent** - Analyzes patterns, promotes long-term → short-term memories
3. **Retrieval Agent** - Intelligently searches and selects relevant context
### **What gets prioritized in Conscious Mode:**
- 👤 **Personal Identity**: Your name, role, location, basic info
- ❤️ **Preferences & Habits**: What you like, work patterns, routines
- 🛠️ **Skills & Tools**: Technologies you use, expertise areas
- 📊 **Current Projects**: Ongoing work, learning goals
- 🤝 **Relationships**: Important people, colleagues, connections
- 🔄 **Repeated References**: Information you mention frequently
## 🗄️ Memory Types
| Type | Purpose | Example | Auto-Promoted |
|------|---------|---------|---------------|
| **Facts** | Objective information | "I use PostgreSQL for databases" | ✅ High frequency |
| **Preferences** | User choices | "I prefer clean, readable code" | ✅ Personal identity |
| **Skills** | Abilities & knowledge | "Experienced with FastAPI" | ✅ Expertise areas |
| **Rules** | Constraints & guidelines | "Always write tests first" | ✅ Work patterns |
| **Context** | Session information | "Working on e-commerce project" | ✅ Current projects |
## 🔧 Configuration
### Simple Setup
```python
from memori import Memori
# Conscious mode - Short-term working memory
memori = Memori(
database_connect="sqlite:///my_memory.db",
template="basic",
conscious_ingest=True, # One-shot context injection
openai_api_key="sk-..."
)
# Auto mode - Dynamic database search
memori = Memori(
database_connect="sqlite:///my_memory.db",
auto_ingest=True, # Continuous memory retrieval
openai_api_key="sk-..."
)
# Combined mode - Best of both worlds
memori = Memori(
conscious_ingest=True, # Working memory +
auto_ingest=True, # Dynamic search
openai_api_key="sk-..."
)
```
### Advanced Configuration
```python
from memori import Memori, ConfigManager
# Load from memori.json or environment
config = ConfigManager()
config.auto_load()
memori = Memori()
memori.enable()
```
Create `memori.json`:
```json
{
"database": {
"connection_string": "postgresql://user:pass@localhost/memori"
},
"agents": {
"openai_api_key": "sk-...",
"conscious_ingest": true,
"auto_ingest": false
},
"memory": {
"namespace": "my_project",
"retention_policy": "30_days"
}
}
```
## 🔌 Universal Integration
Works with **ANY** LLM library:
```python
memori.enable() # Enable universal recording
# OpenAI
from openai import OpenAI
client = OpenAI()
client.chat.completions.create(...)
# LiteLLM
from litellm import completion
completion(model="gpt-4", messages=[...])
# Anthropic
import anthropic
client = anthropic.Anthropic()
client.messages.create(...)
# All automatically recorded and contextualized!
```
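The mechanics behind "enable once, record everything" can be illustrated with a simple method wrapper: replace the client's call with a version that logs arguments and result before passing through. This conveys the interception idea only; `FakeClient` and `record_calls` are stand-ins, not how Memori actually hooks real SDKs.

```python
import functools

recorded = []  # stand-in for Memori's conversation store

def record_calls(fn):
    """Wrap a client method so every call is recorded transparently."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        recorded.append({"kwargs": kwargs, "result": result})
        return result
    return wrapper

class FakeClient:
    """Minimal stand-in for an LLM client."""
    def create(self, **kwargs):
        return "ok"

client = FakeClient()
client.create = record_calls(client.create)  # patch in place, caller code unchanged
client.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "hi"}])
```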
## 🛠️ Memory Management
### **Automatic Background Analysis**
```python
# Automatic analysis every 6 hours (when conscious_ingest=True)
memori.enable() # Starts background conscious agent
# Manual analysis trigger
memori.trigger_conscious_analysis()
# Get essential conversations
essential = memori.get_essential_conversations(limit=5)
```
### **Memory Retrieval Tools**
```python
from memori.tools import create_memory_tool
# Create memory search tool for your LLM
memory_tool = create_memory_tool(memori)
# Use in function calling
tools = [memory_tool]
completion(model="gpt-4", messages=[...], tools=tools)
```
### **Context Control**
```python
# Get relevant context for a query
context = memori.retrieve_context("Python testing", limit=5)
# Returns: 3 essential + 2 specific memories
# Search by category
skills = memori.search_memories_by_category("skill", limit=10)
# Get memory statistics
stats = memori.get_memory_stats()
```
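Once context is retrieved, injection amounts to prepending it to the outgoing messages. Here is a minimal sketch in the OpenAI chat-message shape; `inject_context` is a hypothetical helper, not a Memori function.

```python
def inject_context(memories, user_prompt):
    """Prepend retrieved memories as a system message before the user's prompt."""
    context = "Relevant memories:\n" + "\n".join(f"- {m}" for m in memories)
    return [
        {"role": "system", "content": context},
        {"role": "user", "content": user_prompt},
    ]

messages = inject_context(["Uses FastAPI", "Prefers pytest"], "Add auth")
```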
## 📋 Database Schema
```sql
-- Core tables created automatically
chat_history          -- All conversations
short_term_memory     -- Recent context (expires)
long_term_memory      -- Permanent insights
rules_memory          -- User preferences
memory_entities       -- Extracted entities
memory_relationships  -- Entity connections
```
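For experimentation, the same table names can be stood up in an in-memory SQLite database. The column definitions below are illustrative assumptions only; Memori creates its own (richer) schema automatically.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed columns for illustration -- not Memori's actual schema.
conn.executescript("""
CREATE TABLE chat_history         (id INTEGER PRIMARY KEY, role TEXT, content TEXT);
CREATE TABLE short_term_memory    (id INTEGER PRIMARY KEY, content TEXT, expires_at TEXT);
CREATE TABLE long_term_memory     (id INTEGER PRIMARY KEY, content TEXT, category TEXT);
CREATE TABLE rules_memory         (id INTEGER PRIMARY KEY, rule TEXT);
CREATE TABLE memory_entities      (id INTEGER PRIMARY KEY, name TEXT, kind TEXT);
CREATE TABLE memory_relationships (id INTEGER PRIMARY KEY, src INTEGER, dst INTEGER);
""")
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
```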
## 📁 Project Structure
```
memori/
├── core/          # Main Memori class, database manager
├── agents/        # Memory processing with Pydantic
├── database/      # SQLite/PostgreSQL/MySQL support
├── integrations/  # LiteLLM, OpenAI, Anthropic
├── config/        # Configuration management
├── utils/         # Helpers, validation, logging
└── tools/         # Memory search tools
```
## Examples
- **[Basic Usage](./examples/basic_usage.py)** - Simple memory setup with conscious ingestion
- **[Personal Assistant](./examples/personal_assistant.py)** - AI assistant with intelligent memory
- **[Memory Retrieval](./memory_retrival_example.py)** - Function calling with memory tools
- **[Advanced Config](./examples/advanced_config.py)** - Production configuration
- **[Interactive Demo](./memori_example.py)** - Live conscious ingestion showcase
## Framework Integrations
Memori works seamlessly with popular AI frameworks:
| Framework | Description | Example | Features |
|-----------|-------------|---------|----------|
| 🤖 [Agno](./examples/integrations/agno_example.py) | Memory-enhanced agent framework integration with persistent conversations | Simple chat agent with memory search | Memory tools, conversation persistence, contextual responses |
| 👥 [CrewAI](./examples/integrations/crewai_example.py) | Multi-agent system with shared memory across agent interactions | Collaborative agents with memory | Agent coordination, shared memory, task-based workflows |
| 🌊 [Digital Ocean AI](./examples/integrations/digital_ocean_example.py) | Memory-enhanced customer support using Digital Ocean's AI platform | Customer support assistant with conversation history | Context injection, session continuity, support analytics |
| 🔗 [LangChain](./examples/integrations/langchain_example.py) | Enterprise-grade agent framework with advanced memory integration | AI assistant with LangChain tools and memory | Custom tools, agent executors, memory persistence, error handling |
| [OpenAI Agent](./examples/integrations/openai_agent_example.py) | Memory-enhanced OpenAI Agent with function calling and user preference tracking | Interactive assistant with memory search and user info storage | Function calling tools, memory search, preference tracking, async conversations |
| 🚀 [Swarms](./examples/integrations/swarms_example.py) | Multi-agent system framework with persistent memory capabilities | Memory-enhanced Swarms agents with auto/conscious ingestion | Agent memory persistence, multi-agent coordination, contextual awareness |
## Interactive Demos
Explore Memori's capabilities through these interactive demonstrations:
| Title | Description | Tools Used | Live Demo |
|------------|-------------|------------|-----------|
| 🌟 [Personal Diary Assistant](./demos/personal_diary_assistant/) | A comprehensive diary assistant with mood tracking, pattern analysis, and personalized recommendations. | Streamlit, LiteLLM, OpenAI, SQLite | [Run Demo](https://personal-diary-assistant.streamlit.app/) |
| 🌍 [Travel Planner Agent](./demos/travel_planner/) | Intelligent travel planning with CrewAI agents, real-time web search, and memory-based personalization. Plans complete itineraries with budget analysis. | CrewAI, Streamlit, OpenAI, SQLite | |
| 🧑‍🔬 [Researcher Agent](./demos/researcher_agent/) | Advanced AI research assistant with persistent memory, real-time web search, and comprehensive report generation. Builds upon previous research sessions. | Agno, Streamlit, OpenAI, ExaAI, SQLite | [Run Demo](https://researcher-agent-memori.streamlit.app/) |
## 🤝 Contributing
- See [CONTRIBUTING.md](./CONTRIBUTING.md) for development setup and guidelines.
- Community: [Discord](https://www.gibsonai.com/discord)
## 📄 License
MIT License - see [LICENSE](./LICENSE) for details.
---
*Made for developers who want their AI agents to remember and learn*