llama-index-memory-mem0

Name: llama-index-memory-mem0
Version: 0.1.0
Summary: llama-index memory mem0 integration
Upload time: 2024-10-31 17:02:30
Author: Your Name
Requires Python: <4.0,>=3.9
License: MIT

# LlamaIndex Memory Integration: Mem0

## Installation

To install the required package, run:

```bash
pip install llama-index-memory-mem0
```

## Setup with Mem0 Platform

1. Set your Mem0 Platform API key as an environment variable. You can replace `<your-mem0-api-key>` with your actual API key:

> Note: You can obtain your Mem0 Platform API key from the [Mem0 Platform](https://app.mem0.ai/login).

```python
import os

os.environ["MEM0_API_KEY"] = "<your-mem0-api-key>"
```

2. Import the necessary modules and create a Mem0Memory instance:

```python
from llama_index.memory.mem0 import Mem0Memory

context = {"user_id": "user_1"}
memory = Mem0Memory.from_client(
    context=context,
    api_key="<your-mem0-api-key>",
    search_msg_limit=4,  # optional, default is 5
)
```

The Mem0 context is used to identify the user, agent, or conversation in Mem0. At least one of its fields must be passed to the `Mem0Memory` constructor. It can contain any of the following:

```python
context = {
    "user_id": "user_1",
    "agent_id": "agent_1",
    "run_id": "run_1",
}
```

`search_msg_limit` is optional and defaults to 5. It sets how many messages from the chat history are used for memory retrieval from Mem0. More messages provide more context for retrieval, but they also increase retrieval time and can surface irrelevant results.
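`Mem0Memory` implements LlamaIndex's memory interface, so it can also be exercised directly. Below is a minimal sketch assuming the standard `BaseMemory`-style `put`/`get` methods; in normal use the chat engine or agent calls these for you:

```python
from llama_index.core.llms import ChatMessage, MessageRole

# Store a user message; Mem0 extracts and persists salient facts from it.
memory.put(ChatMessage(role=MessageRole.USER, content="I live in Berlin."))

# Retrieve the chat history enriched with memories relevant to the query.
messages = memory.get(input="Where do I live?")
print(messages)
```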

## Setup with Mem0 OSS

1. Set up Mem0 OSS by providing configuration details:

> Note: To know more about Mem0 OSS, read [Mem0 OSS Quickstart](https://docs.mem0.ai/open-source/quickstart).

```python
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test_9",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 1536,  # Change this according to your local model's dimensions
        },
    },
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "version": "v1.1",
}
```
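The `vector_store` section above assumes a Qdrant instance listening on `localhost:6333`. If you don't have one running, a quick way to start one locally is with Docker (a sketch using the official image; adjust ports and persistence to your setup):

```bash
docker run -p 6333:6333 qdrant/qdrant
```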

2. Create a Mem0Memory instance:

```python
memory = Mem0Memory.from_config(
    context=context,
    config=config,
    search_msg_limit=4,  # optional, default is 5
)
```

## Basic Usage

Currently, Mem0 memory is supported in `SimpleChatEngine`, `FunctionCallingAgent`, and `ReActAgent`.

Initialize the LLM:

```python
import os
from llama_index.llms.openai import OpenAI

os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
llm = OpenAI(model="gpt-4o")
```

### SimpleChatEngine

```python
from llama_index.core.chat_engine import SimpleChatEngine

agent = SimpleChatEngine.from_defaults(
    llm=llm, memory=memory  # set your memory here
)

# Start the chat
response = agent.chat("Hi, my name is Mayank")
print(response)
```
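Because the memory is backed by Mem0, facts from earlier turns can be recalled later in the conversation. A hypothetical follow-up turn (the exact wording of the response depends on the model):

```python
# Ask the agent to recall a fact stored in memory earlier in the conversation.
response = agent.chat("What is my name?")
print(response)  # Expected to recall "Mayank" from memory.
```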

Initialize the tools:

```python
from llama_index.core.tools import FunctionTool


def call_fn(name: str):
    """Call the person with the given name.

    Args:
        name: str (name of the person)
    """
    print(f"Calling... {name}")


def email_fn(name: str):
    """Email the person with the given name.

    Args:
        name: str (name of the person)
    """
    print(f"Emailing... {name}")


call_tool = FunctionTool.from_defaults(fn=call_fn)
email_tool = FunctionTool.from_defaults(fn=email_fn)
```

### FunctionCallingAgent

```python
from llama_index.core.agent import FunctionCallingAgent

agent = FunctionCallingAgent.from_tools(
    [call_tool, email_tool],
    llm=llm,
    memory=memory,
    verbose=True,
)

# Start the chat
response = agent.chat("Hi, my name is Mayank")
print(response)
```
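With the tools registered, the agent can combine remembered facts with tool calls. A hypothetical follow-up (which tool gets invoked depends on the model):

```python
# The agent can resolve "me" using the name stored in Mem0 and call email_fn.
response = agent.chat("Send me an email.")
print(response)
```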

### ReActAgent

```python
from llama_index.core.agent import ReActAgent

agent = ReActAgent.from_tools(
    [call_tool, email_tool],
    llm=llm,
    memory=memory,
    verbose=True,
)

# Start the chat
response = agent.chat("Hi, my name is Mayank")
print(response)
```

> Note: For more examples, refer to the [Notebooks](https://github.com/run-llama/llama_index/tree/main/docs/docs/examples/memory).

## References

- [Mem0 Platform](https://app.mem0.ai/login)
- [Mem0 OSS](https://docs.mem0.ai/open-source/quickstart)
- [Mem0 GitHub](https://github.com/mem0ai/mem0)

            
