lindorm-memobase

- **Name**: lindorm-memobase
- **Version**: 0.1.4
- **Summary**: A lightweight memory extraction and profile management system for LLM applications
- **Author**: LindormMemobase Team
- **License**: MIT
- **Requires Python**: >=3.10
- **Keywords**: llm, memory, embedding, profile, extraction
- **Uploaded**: 2025-08-14 09:32:37
# LindormMemobase

**Intelligent memory management system** - powerful memory extraction and user profile management for LLM applications

LindormMemobase is a lightweight memory management library designed for large language model applications. It automatically extracts structured information from conversations, manages user profiles, and provides efficient vector search. Built on the Alibaba Cloud Lindorm database, it supports high-performance storage and retrieval of large volumes of data.

## Core Features

**Intelligent memory extraction** - Automatically extracts user preferences, habits, and personal information from conversations  
**Structured profiles** - Organizes user information by topic and subtopic to build a complete user profile  
**Vector semantic search** - Efficient embedding-based similarity search and context retrieval  
**High-performance storage** - Supports Lindorm wide tables and the Search engine for large-scale data  
**Multilingual support** - Solid Chinese and English processing with localized prompts  
**Asynchronous processing** - Efficient async processing pipeline with batch support  
**Buffer management** - Smart buffering and batch processing to improve throughput  
**Flexible configuration** - Multiple LLM and embedding models, pluggable storage backends

## Quick Start

### Installation

```bash
# Install from PyPI
pip install lindorm-memobase

# Or install from source in development mode
git clone <repository-url>
cd lindorm-memobase
pip install -e .
```

### Basic Usage

```python
import asyncio
from datetime import datetime

from lindormmemobase import LindormMemobase, Config
from lindormmemobase.models.blob import ChatBlob, BlobType, OpenAICompatibleMessage

async def main():
    # Load configuration
    config = Config.load_config()
    memobase = LindormMemobase(config)
    
    # Create conversation data
    messages = [
        OpenAICompatibleMessage(role="user", content="I love playing guitar on weekends, especially jazz"),
        OpenAICompatibleMessage(role="assistant", content="That's great! Jazz is fascinating, and playing guitar on weekends is a nice way to relax")
    ]
    
    conversation_blob = ChatBlob(
        messages=messages,
        fields={"user_id": "user123", "session_id": "chat_001"},
        created_at=datetime.now()
    )
    
    # Extract memories and build the user profile
    result = await memobase.extract_memories(
        user_id="user123",
        blobs=[conversation_blob]
    )
    
    if result:
        print("Memory extraction succeeded!")
        
        # Inspect the user profile
        profiles = await memobase.get_user_profiles("user123")
        for profile in profiles:
            print(f"Topic: {profile.topic}")
            for subtopic, entry in profile.subtopics.items():
                print(f"  └── {subtopic}: {entry.content}")

asyncio.run(main())
```

### Buffer Management Example

```python
# Continuing from the setup above (config, memobase, and imports)

# Create conversation data to buffer
chat_blob = ChatBlob(
    messages=[OpenAICompatibleMessage(role="user", content="I like drinking coffee")],
    type=BlobType.chat
)

# Add it to the buffer
blob_id = await memobase.add_blob_to_buffer("user123", chat_blob)
print(f"Added to buffer: {blob_id}")

# Check the buffer status
status = await memobase.detect_buffer_full_or_not("user123", BlobType.chat)
print(f"Buffer status: {status}")

# Process the buffered data
if status["is_full"]:
    result = await memobase.process_buffer("user123", BlobType.chat)
    print("Buffer processed")
```

### Context Enhancement Example

```python
# Fetch memory-enhanced conversation context
context = await memobase.get_conversation_context(
    user_id="user123",
    conversation=current_messages,
    max_token_size=2000
)

print(f"Enhanced context: {context}")
```
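
Since `get_conversation_context` returns a plain string, it can be injected into whatever LLM call drives the reply. Below is a minimal sketch (not part of the LindormMemobase API) that prepends the context as a system message for an OpenAI-compatible chat completion; the `openai` client, model name, and prompt layout are illustrative assumptions.

```python
# Sketch only: one way to feed the memory context into an OpenAI-compatible chat call.
from openai import AsyncOpenAI

async def reply_with_memory(context: str, current_messages: list) -> str:
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": f"Known facts about this user:\n{context}"},
            # OpenAICompatibleMessage objects expose .role and .content
            *[{"role": m.role, "content": m.content} for m in current_messages],
        ],
    )
    return response.choices[0].message.content
```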

## Buffer Management

LindormMemobase provides smart buffer management that automatically collects conversation data and processes it in batches, improving the efficiency of memory extraction.

### Core Concepts

- **Buffer**: Temporarily stores conversation data awaiting processing
- **Batch processing**: Processing is triggered automatically once the buffer reaches a certain capacity
- **State management**: Tracks the processing state of each data blob
- **Smart scheduling**: Decides when to process based on token size and data volume (see the sketch after this list)
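
To make the scheduling idea concrete, here is a minimal sketch of a token-budget trigger, using a crude whitespace tokenizer as a stand-in; it is not the library's internal scheduler.

```python
# Illustrative sketch only; not LindormMemobase's internal scheduling code.
MAX_BUFFER_TOKENS = 8192  # mirrors max_chat_blob_buffer_token_size in config.yaml

def rough_token_count(text: str) -> int:
    """Crude stand-in for a real tokenizer (e.g. tiktoken)."""
    return len(text.split())

def should_process(buffered_texts: list[str]) -> bool:
    """Trigger batch processing once the buffered text exceeds the token budget."""
    return sum(rough_token_count(t) for t in buffered_texts) >= MAX_BUFFER_TOKENS
```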

### Buffer API

#### Add Data to the Buffer

```python
# Add chat data to the buffer
blob_id = await memobase.add_blob_to_buffer(
    user_id="user123",
    blob=chat_blob,
    blob_id="optional_custom_id"  # optional; a UUID is generated by default
)
```

#### Check the Buffer Status

```python
# Check whether the buffer is full
status = await memobase.detect_buffer_full_or_not(
    user_id="user123",
    blob_type=BlobType.chat
)

print(f"Buffer full: {status['is_full']}")
print(f"Pending blob IDs: {status['buffer_full_ids']}")
```

#### Process Buffered Data

```python
# Process all unprocessed data
result = await memobase.process_buffer(
    user_id="user123",
    blob_type=BlobType.chat,
    profile_config=None  # optional configuration
)

# Process specific blobs only
result = await memobase.process_buffer(
    user_id="user123",
    blob_type=BlobType.chat,
    blob_ids=["blob_id_1", "blob_id_2"]
)
```

### Automated Workflow

```python
async def chat_with_memory(user_id: str, message: str):
    """Chat handling flow with memory."""
    
    # 1. Create the chat data
    chat_blob = ChatBlob(
        messages=[OpenAICompatibleMessage(role="user", content=message)],
        type=BlobType.chat
    )
    
    # 2. Add it to the buffer
    await memobase.add_blob_to_buffer(user_id, chat_blob)
    
    # 3. Check whether the buffer needs processing
    status = await memobase.detect_buffer_full_or_not(user_id, BlobType.chat)
    
    # 4. Automatically process a full buffer
    if status["is_full"]:
        result = await memobase.process_buffer(
            user_id=user_id,
            blob_type=BlobType.chat,
            blob_ids=status["buffer_full_ids"]
        )
        print(f"Processed {len(status['buffer_full_ids'])} blobs")
    
    # 5. Fetch enhanced context for the reply
    context = await memobase.get_conversation_context(
        user_id=user_id,
        conversation=[OpenAICompatibleMessage(role="user", content=message)]
    )
    
    return f"Memory-informed reply: {context}"
```
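
A short usage sketch for the workflow above, assuming the `memobase` instance and imports from the Basic Usage section are already in scope:

```python
# Drive the workflow above from an async entry point.
async def demo():
    reply = await chat_with_memory("user123", "I just adopted a cat named Mochi")
    print(reply)

asyncio.run(demo())
```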

### Configuring Buffer Parameters

Configure buffer behavior in `config.yaml`:

```yaml
# Buffer configuration
max_chat_blob_buffer_token_size: 8192  # maximum tokens held in the buffer
max_chat_blob_buffer_process_token_size: 16384  # maximum tokens handled in a single processing pass
```

## Configuration

### Environment Variables

1. Copy the environment variable template:
   ```bash
   cp example.env .env
   ```

2. Edit the `.env` file and set the required API keys:
   ```bash
   # LLM configuration
   MEMOBASE_LLM_API_KEY=your-openai-api-key
   MEMOBASE_LLM_BASE_URL=https://api.openai.com/v1
   MEMOBASE_LLM_MODEL=gpt-3.5-turbo
   
   # Embedding model configuration
   MEMOBASE_EMBEDDING_API_KEY=your-embedding-api-key
   MEMOBASE_EMBEDDING_MODEL=text-embedding-3-small
   
   # Lindorm database configuration
   MEMOBASE_LINDORM_TABLE_HOST=your-lindorm-host
   MEMOBASE_LINDORM_TABLE_PORT=33060
   MEMOBASE_LINDORM_TABLE_USERNAME=your-username
   MEMOBASE_LINDORM_TABLE_PASSWORD=your-password
   MEMOBASE_LINDORM_TABLE_DATABASE=memobase
   
   # Lindorm Search configuration
   MEMOBASE_LINDORM_SEARCH_HOST=your-search-host
   MEMOBASE_LINDORM_SEARCH_PORT=30070
   MEMOBASE_LINDORM_SEARCH_USERNAME=your-search-username
   MEMOBASE_LINDORM_SEARCH_PASSWORD=your-search-password
   ```

3. Copy and customize the configuration file:
   ```bash
   cp cookbooks/config.yaml.example cookbooks/config.yaml
   ```

### Configuration Files

- **`.env`**: Sensitive information (API keys, database credentials)
- **`config.yaml`**: Application settings (model parameters, feature flags, processing limits)
- **Precedence**: defaults < `config.yaml` < environment variables (the sketch below illustrates this layering)
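
In practice this means an environment variable always overrides the same key in `config.yaml`, which in turn overrides the built-in default. The following generic sketch illustrates the layering; it is not LindormMemobase's actual loader, and the `MEMOBASE_<KEY>` env-var mapping shown is an assumption.

```python
# Generic illustration of the documented precedence: defaults < config.yaml < env vars.
import os
import yaml  # pip install pyyaml

DEFAULTS = {"max_chat_blob_buffer_token_size": 8192}

def load_setting(key: str, yaml_path: str = "config.yaml"):
    value = DEFAULTS.get(key)                                   # 1. built-in default
    try:
        with open(yaml_path) as f:
            value = (yaml.safe_load(f) or {}).get(key, value)   # 2. config.yaml override
    except FileNotFoundError:
        pass
    env_value = os.getenv(f"MEMOBASE_{key.upper()}")            # 3. environment variable wins
    return env_value if env_value is not None else value
```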

## System Architecture

### Core Components

- **`core/extraction/`**: Memory extraction pipeline
  - `processor/`: Data processors (summarize, extract, merge, organize)
  - `prompts/`: Prompt templates (Chinese and English)
- **`core/buffer/`**: Buffer management (smart caching, batch processing, state tracking)
- **`models/`**: Data models (Blob, Profile, Response types)
- **`core/storage/`**: Storage backends (Lindorm wide table, Search engine)
- **`embedding/`**: Embedding services (OpenAI, Jina, etc.)
- **`llm/`**: Large language model interfaces and completion services
- **`core/search/`**: Search services (user profiles, events, context retrieval)

### Processing Pipeline

```
Raw conversation data → Buffer staging → Smart scheduling → Batch processing → Memory extraction → Structured storage
    ↓
  ChatBlob → Buffer management → LLM analysis → Vectorized storage → Retrieval augmentation
```

### Data Flow

```mermaid
graph LR
    A[Conversation input] --> B[ChatBlob creation]
    B --> C[Buffer staging]
    C --> D[Capacity check]
    D --> E[Batch processing]
    E --> F[Memory extraction]
    F --> G[Vector storage]
    G --> H[Context retrieval]
    H --> I[Enhanced response]
```

## Worked Examples

See the `cookbooks/` directory for complete, practical examples:

### Getting Started

- **[`quick_start.py`](cookbooks/quick_start.py)**: Demonstrates the core API
- **[`simple_chatbot/`](cookbooks/simple_chatbot/)**: A simple chatbot implementation

### Memory-Enhanced Chatbot

- **[`chat_memory/`](cookbooks/chat_memory/)**: A complete memory-enhanced chatbot
  - **Web UI**: Modern real-time streaming chat interface
  - **Smart caching**: Caching system delivering a 90% performance improvement
  - **Memory visualization**: View user profiles and context in real time
  - **Dual modes**: Command-line and web interfaces

### Try the Memory Chatbot

```bash
# Enter the chatbot directory
cd cookbooks/chat_memory/

# Start the web UI (recommended)
./start_web.sh

# Or start the command-line version
python memory_chatbot.py --user_id your_name
```

**Web UI features**:
- Real-time streaming responses
- Context visualization
- Responsive design
- Performance statistics panel

## Development and Build

### Development Environment

```bash
# Install in development mode
pip install -e .

# Run the tests
pytest tests/ -v

# Run the tests and generate a coverage report
pytest tests/ --cov=lindormmemobase --cov-report=html
```

### Production Build

Using the `build` tool (recommended):
```bash
# Install the build tool
pip install build

# Build the wheel and source distributions
python -m build

# Output files are placed in the dist/ directory
ls dist/
# lindorm_memobase-0.1.4-py3-none-any.whl
# lindorm_memobase-0.1.4.tar.gz
```

Using `setuptools` directly:
```bash
# Build the wheel
python setup.py bdist_wheel

# Build the source distribution
python setup.py sdist
```

### Installing from the Built Packages

```bash
# Install from the wheel
pip install dist/lindorm_memobase-0.1.4-py3-none-any.whl

# Or install from the source distribution
pip install dist/lindorm_memobase-0.1.4.tar.gz
```

### Publishing to PyPI

```bash
# Install the publishing tool
pip install twine

# Upload to TestPyPI first to verify
twine upload --repository-url https://test.pypi.org/legacy/ dist/*

# Publish to PyPI
twine upload dist/*
```

## Testing

```bash
# Run all tests
pytest tests/ -v

# Run a specific test file
pytest tests/test_lindorm_storage.py -v

# Generate an HTML coverage report
pytest tests/ --cov=lindormmemobase --cov-report=html
```

## System Requirements

- **Python**: 3.12+
- **API services**: OpenAI API key (for LLM and embedding services)
- **Database**: Lindorm wide table or MySQL
- **Search engine**: Lindorm Search or OpenSearch

            
