chatmemory

Name: chatmemory
Version: 0.2.1
Home page: https://github.com/uezo/chatmemory
Summary: The simple yet powerful long-term memory manager between AI and you💕
Upload time: 2025-02-25 16:40:30
Author / maintainer: uezo (uezo@uezo.net)
Docs URL: None
Requires Python: None
License: Apache v2
Requirements: fastapi==0.100.0, openai==0.27.8, requests==2.31.0, SQLAlchemy==2.0.20, uvicorn==0.23.1, pycryptodome==3.18.0

# ChatMemory

The simple yet powerful long-term memory manager between AI and you💕


## ✨ Features

- **🌟 Extremely simple:** All code is contained in one file, making it easy to track memory management—just PostgreSQL is needed as your datastore.
- **🔎 Intelligent Search & Answer:** Quickly retrieves context via vector search on summaries/knowledge, then uses detailed history if needed—returning both the answer and raw data.
- **💬 Direct Answer:** Leverages an LLM to produce clear, concise answers that go beyond mere data retrieval, delivering ready-to-use responses.


## 🚀 Quick start

Install chatmemory.

```sh
pip install chatmemory
```

Create the server script (e.g. `server.py`) as follows:

```python
from fastapi import FastAPI
from chatmemory import ChatMemory

cm = ChatMemory(
    openai_api_key="YOUR_OPENAI_API_KEY",
    llm_model="gpt-4o",
    # Your PostgreSQL configurations
    db_name="postgres",
    db_user="postgres",
    db_password="postgres",
    db_host="127.0.0.1",
    db_port=5432,
)

app = FastAPI()
app.include_router(cm.get_router())
```

Start the API server.

```sh
uvicorn server:app
```

That's all. Your long-term memory management service is ready to use👍

Open http://127.0.0.1:8000/docs to review the API spec and try the endpoints.
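
Once the server is running, you can exercise it over plain HTTP. The snippet below is a minimal sketch using `requests`; the endpoint paths and payload fields (`/history`, `/search`, `user_id`, `messages`, `query`) are illustrative assumptions, so check the generated `/docs` page for the actual routes and request schemas.

```python
import requests

BASE = "http://127.0.0.1:8000"
USER_ID = "user1234"

# Store a short exchange (hypothetical route and payload; see /docs for the real schema).
requests.post(f"{BASE}/history", json={
    "user_id": USER_ID,
    "messages": [
        {"role": "user", "content": "My favorite tea is Earl Grey."},
        {"role": "assistant", "content": "Noted! Earl Grey it is."},
    ],
})

# Later, ask a question against the stored memory (again, a hypothetical route).
resp = requests.post(f"{BASE}/search", json={
    "user_id": USER_ID,
    "query": "What kind of tea do I like?",
})
print(resp.json())
```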


## 🪄 How it works

ChatMemory organizes conversation data into three primary entities (sketched informally after the list below):

- **📜 History:** The raw conversation logs, storing every message exchanged.
- **📑 Summary:** A concise overview generated from the detailed history using an LLM. This enables fast, lightweight processing by capturing the essence of a conversation.
- **💡 Knowledge:** Additional, explicitly provided information that isn’t tied to the conversation log. This allows you to control and influence the answer independently.
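
Conceptually, you can picture these three entities as the records below. This is only an informal sketch of the shapes described in the list, not the actual tables or columns that ChatMemory creates in PostgreSQL.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative shapes only; the real schema is defined inside chatmemory.
@dataclass
class HistoryMessage:
    user_id: str
    role: str                 # "user" or "assistant"
    content: str              # raw message text
    created_at: datetime = field(default_factory=datetime.now)

@dataclass
class Summary:
    user_id: str
    content: str              # LLM-generated digest of a conversation
    embedding: list[float] = field(default_factory=list)  # vector used for similarity search

@dataclass
class Knowledge:
    user_id: str
    content: str              # explicitly provided fact, independent of the chat log
    embedding: list[float] = field(default_factory=list)
```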

When a search query is received, ChatMemory works in two stages:

1. **⚡ Lightweight Retrieval:** It first performs a vector-based search on the summaries and knowledge. This step quickly gathers relevant context and typically suffices for generating an answer.
2. **🔍 Fallback Detailed Search:** If the initial results aren’t deemed sufficient, ChatMemory then conducts a vector search over the full conversation history. This retrieves detailed logs, enabling the system to refine and improve the answer.

This two-step mechanism strikes a balance between speed and accuracy—leveraging the efficiency of summaries while still ensuring high-precision answers when more context is needed. Additionally, the explicit knowledge you provide helps guide the responses beyond just the conversation history.
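
The overall control flow can be sketched roughly as follows. The function and its parameters are placeholders for illustration; the real retrieval and answer logic lives inside the `chatmemory` package itself.

```python
from typing import Callable

def two_stage_answer(
    query: str,
    search_light: Callable[[str], list[str]],     # vector search over summaries + knowledge
    search_history: Callable[[str], list[str]],   # vector search over the full history
    llm_answer: Callable[[str, list[str]], str],  # LLM call that answers from the given context
    is_sufficient: Callable[[str], bool],         # check whether the first answer is good enough
) -> dict:
    """Illustrative two-stage retrieval flow, not the actual ChatMemory implementation."""
    # Stage 1: lightweight retrieval over summaries and knowledge.
    context = search_light(query)
    answer = llm_answer(query, context)

    # Stage 2: fall back to the detailed history only when stage 1 is not enough.
    if not is_sufficient(answer):
        context = context + search_history(query)
        answer = llm_answer(query, context)

    # Return both the generated answer and the raw retrieved data.
    return {"answer": answer, "retrieved_data": context}
```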

            
