memfuse

Name: memfuse
Version: 0.3.2
Summary: MemFuse Python SDK
Author: Calvin Ku
Requires Python: <4.0, >=3.10
License: Apache-2.0
Keywords: memfuse, sdk, ai, llm, memory, rag
Uploaded: 2025-08-25 03:40:44
            <a id="readme-top"></a>

[![GitHub license](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/Percena/MemFuse/blob/readme/LICENSE)

<!-- PROJECT LOGO -->
<br />
<div align="center">
  <a href="https://memfuse.vercel.app/">
    <img src="https://raw.githubusercontent.com/memfuse/memfuse-python/refs/heads/main/assets/logo.png" alt="MemFuse Logo"
         style="max-width: 90%; height: auto; display: block; margin: 0 auto; padding-left: 16px; padding-right: 16px;">
  </a>
  <br />

  <p align="center">
    <strong>MemFuse Python SDK</strong>
    <br />
    The official Python client for MemFuse, the open-source memory layer for LLMs.
    <br />
    <a href="https://memfuse.vercel.app/"><strong>Explore the Docs »</strong></a>
    <br />
    <br />
    <a href="https://memfuse.vercel.app/">View Demo</a>
    &middot;
    <a href="https://github.com/memfuse/memfuse-python/issues">Report Bug</a>
    &middot;
    <a href="https://github.com/memfuse/memfuse-python/issues">Request Feature</a>
  </p>
</div>

<!-- TABLE OF CONTENTS -->
<details>
  <summary>Table of Contents</summary>
  <ol>
    <li><a href="#about-memfuse">About MemFuse</a></li>
    <li><a href="#recent-updates">Recent Updates</a></li>
    <li><a href="#installation">Installation</a></li>
    <li><a href="#quick-start">Quick Start</a></li>
    <li><a href="#advanced-features">Advanced Features</a></li>
    <li><a href="#examples">Examples</a></li>
    <li><a href="#documentation">Documentation</a></li>
    <li><a href="#community--support">Community & Support</a></li>
    <li><a href="#license">License</a></li>
  </ol>
</details>

## About MemFuse

Large language model applications are stateless by design: when the context window reaches its limit, previous conversations, user preferences, and critical information simply disappear.

**MemFuse** bridges this gap by providing a persistent, queryable memory layer between your LLM and storage backend, enabling AI agents to:

- **Remember** user preferences and context across sessions
- **Recall** facts and events thousands of interactions later
- **Optimize** token usage by avoiding redundant resending of chat history
- **Learn** continuously and improve performance over time

This repository contains the official Python SDK for seamless integration with MemFuse servers. For comprehensive information about the MemFuse server architecture and advanced features, please visit the [MemFuse Server repository](https://github.com/memfuse/memfuse).

## Recent Updates

- **Enhanced Testing:** Comprehensive E2E testing with semantic memory validation
- **Better Error Handling:** Improved error messages and logging for easier debugging  
- **Prompt Templates:** Structured prompt management system for consistent LLM interactions
- **Performance Benchmarks:** MSC dataset accuracy testing with 95% validation threshold

## Installation

> **Note:** This is the standalone Client SDK repository. If you need to install and run the MemFuse server, which is essential to use the SDK, please visit the [MemFuse Server repository](https://github.com/memfuse/memfuse).

You can install the MemFuse Python SDK using one of the following methods:

**Option 1: Install from PyPI (Recommended)**

```bash
pip install memfuse
```

**Option 2: Install from Source**

```bash
git clone https://github.com/memfuse/memfuse-python.git
cd memfuse-python
pip install -e .
```
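
Either way, you can sanity-check the install from Python (3.10+, per the package's `requires_python`):

```python
# Verify the SDK imports and report the installed version (stdlib only).
from importlib.metadata import version

import memfuse  # raises ImportError if the install failed

print(version("memfuse"))  # e.g. 0.3.2
```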

## Quick Start

Here's an end-to-end example demonstrating how to use the MemFuse Python SDK with OpenAI:

```python
from memfuse.llm import OpenAI
from memfuse import MemFuse
import os


# Connect to the MemFuse server; the API key and base URL can come from env vars.
memfuse_client = MemFuse(
    # api_key=os.getenv("MEMFUSE_API_KEY"),
    # base_url=os.getenv("MEMFUSE_BASE_URL"),
)

# Open a memory scope for this user; agent and session are optional.
memory = memfuse_client.init(
    user="alice",
    # agent="agent_default",
    # session=<randomly-generated-uuid>
)

# Initialize your LLM client with the memory scope
llm_client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),  # Your OpenAI API key
    memory=memory
)

# Make a chat completion request
response = llm_client.chat.completions.create(
    model="gpt-4o", # Or any model supported by your LLM provider
    messages=[{"role": "user", "content": "I'm planning a trip to Mars. What is the gravity there?"}]
)

print(f"Response: {response.choices[0].message.content}")
# Example Output: Response: Mars has a gravity of about 3.721 m/s², which is about 38% of Earth's gravity.
```

### Contextual Follow-up

Now, ask a follow-up question. MemFuse will automatically recall relevant context from the previous conversation:

```python
# Ask a follow-up question. MemFuse automatically recalls relevant context.
followup_response = llm_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What are some challenges of living on that planet?"}]
)

print(f"Follow-up: {followup_response.choices[0].message.content}")
# Example Output: Follow-up: Some challenges of living on Mars include its thin atmosphere, extreme temperatures, high radiation levels, and the lack of liquid water on the surface.
```

MemFuse automatically manages the retrieval of relevant information and storage of new memories from conversations within the specified `memory` scope.
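
Because retrieval and storage are keyed to that scope, separate scopes stay isolated from one another. Here is a minimal sketch reusing the `MemFuse.init()` call from the Quick Start (the user names are illustrative):

```python
# Each scope holds its own memories; conversations do not leak across users.
alice_memory = memfuse_client.init(user="alice")
bob_memory = memfuse_client.init(user="bob")

alice_llm = OpenAI(api_key=os.getenv("OPENAI_API_KEY"), memory=alice_memory)
bob_llm = OpenAI(api_key=os.getenv("OPENAI_API_KEY"), memory=bob_memory)
```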

## Advanced Features

### Memory Validation & Testing
The SDK includes comprehensive testing capabilities to validate memory accuracy:

- **E2E Memory Tests:** Automated tests that verify conversational context retention
- **Semantic Similarity Validation:** Uses the RAGAS framework for intelligent response verification (a generic illustration follows this list)
- **Performance Benchmarks:** MSC (Multi-Session Chat) dataset testing with accuracy metrics
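
For a sense of what semantic validation means in practice, here is a generic cosine-similarity sketch using `sentence-transformers` rather than RAGAS itself; the model name and 0.8 threshold are illustrative choices, not the SDK's test configuration:

```python
# Generic semantic-similarity check: embed both texts, compare by cosine.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantically_matches(response: str, expected: str, threshold: float = 0.8) -> bool:
    """True if the response is semantically close to the expected answer."""
    a, b = model.encode([response, expected])
    return util.cos_sim(a, b).item() >= threshold
```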

### Error Handling & Debugging
Enhanced error messages provide clear guidance:

- **Connection Issues:** Helpful instructions for starting the MemFuse server (see the sketch after this list)
- **API Errors:** Detailed error responses with actionable information
- **Logging:** Comprehensive logging for troubleshooting and monitoring
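
As a sketch of defensive client setup, you can fail fast when the server is unreachable. The exact exception class the SDK raises is an assumption here, so a broad `except` is used purely for illustration:

```python
from memfuse import MemFuse

try:
    client = MemFuse()  # uses default connection settings
    memory = client.init(user="alice")
except Exception as exc:  # the SDK's specific error type may differ
    print(f"Could not reach the MemFuse server: {exc}")
    print("Start the server first: https://github.com/memfuse/memfuse")
```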

## Examples

Explore runnable examples in the [examples/](examples/) directory of this repository, featuring:

- **Basic Operations:** Fundamental usage patterns and asynchronous operations
- **Conversation Continuity:** Maintaining context across multiple interactions
- **UI Integrations:** Gradio-based chatbot implementations with streaming support (a minimal sketch follows)
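
As a taste of the UI integrations, here is a minimal Gradio chatbot wired to the memory-aware `llm_client` from the Quick Start (assumes `pip install gradio`; streaming is omitted for brevity):

```python
import gradio as gr

def chat(message, history):
    # MemFuse injects relevant past context via the memory scope,
    # so only the new user message needs to be sent here.
    response = llm_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": message}],
    )
    return response.choices[0].message.content

gr.ChatInterface(chat).launch()
```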

## Documentation

- **Server Documentation:** For detailed information about the MemFuse server architecture and advanced configuration, visit the [MemFuse online documentation](https://memfuse.vercel.app/)
- **SDK Documentation:** Comprehensive API references and guides will be available soon

## Community & Support

Join our growing community:

- **GitHub Discussions:** Participate in roadmap discussions, RFCs, and Q&A in the [MemFuse Server repository](https://github.com/memfuse/memfuse)
- **Issues & Features:** Report bugs or request features in this repository's [Python SDK Issues section](https://github.com/memfuse/memfuse-python/issues)

If MemFuse enhances your projects, please ⭐ star both the [server repository](https://github.com/memfuse/memfuse) and this SDK repository!

## License

The MemFuse Python SDK is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for complete details.


            
