memoravel


Name: memoravel
Version: 1.0.0
Home page: https://github.com/peninha/memoravel
Summary: A library to manage message history, for implementing memory in Language Models.
Upload time: 2024-11-22 21:47:59
Maintainer: None
Docs URL: None
Author: Pena
Requires Python: >=3.7
License: MIT
Keywords: llm, memory, message, history
Requirements: No requirements were recorded.
# memoravel

A Python library to manage message history, for implementing memory in Language Models.

[![Documentation Status](https://readthedocs.org/projects/memoravel/badge/?version=latest)](https://memoravel.readthedocs.io/en/latest/?badge=latest)

[Portuguese] A library to manage message history, for implementing memory in Language Models.

## Features

- **Message History Management**: Store and manage message history to simulate memory in LLMs.
- **Token Counting**: Manage the number of tokens effectively to keep conversation context under a desired limit.
- **Flexible Memory Preservation**: Allows preserving initial or last messages, including system messages, ensuring critical information remains.

## Installation

To install memoravel, you can use pip:

```sh
pip install git+https://github.com/peninha/memoravel.git#egg=memoravel
```
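
Since the package is also published on PyPI (version 1.0.0), installing the released version directly from PyPI should work as well:

```sh
pip install memoravel
```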

## Quick Start

Here is a quick example to help you get started with `memoravel`, including integration with OpenAI's API. We'll use a helper function to make requests and manage memory:

```python
from memoravel import Memoravel
from dotenv import load_dotenv
from openai import OpenAI

# Initialize OpenAI client
load_dotenv()  # Make sure you have a .env file with your API key in it: OPENAI_API_KEY="..."
client = OpenAI()

model = "gpt-4o"

# Initialize memory with a message limit of 5
memory = Memoravel(limit=5, max_tokens=8000, model=model)

def make_request(memory, model):
    try:
        # Make an API request using the current memory
        completion = client.chat.completions.create(
            model=model,
            messages=memory.recall()
        )
        # Get the response from the assistant
        response = completion.choices[0].message.content
        return response
    except Exception as e:
        print(f"Error during API request: {e}")
        return None

# Add a system message and some user interactions
memory.add(role="system", content="You are a helpful assistant.")
memory.add(role="user", content="Write a haiku about recursion in programming.")
memory.add(role="assistant", content="A function returns,\nIt calls itself once again,\nInfinite beauty.")

# Add a new user message
memory.add(role="user", content="Can you explain what recursion is in two sentences?")

# Make the first API request
response = make_request(memory, model)
if response:
    print("Response from model:")
    print(response)
    # Add the response to memory
    memory.add(role="assistant", content=response)

# Add another user message
memory.add(role="user", content="What is the most common application of recursion? Summarize it in two sentences.")

# Make a second API request
response = make_request(memory, model)
if response:
    print("\nResponse from model:")
    print(response)
    # Add the response to memory
    memory.add(role="assistant", content=response)

# Recall the last two messages
last_two_messages = memory.recall(last_n=2)
print(f"\nLast two messages from the conversation:\n{last_two_messages}")

# Now, let's check the whole memory
print(f"\nFull memory after all interactions:\n{memory.recall()}")
# Because we limit the memory length to 5, there are only 5 messages stored, and the system prompt is preserved among them.

# Check the total number of tokens stored in memory
total_tokens = memory.count_tokens()
print(f"\nTokens in memory:\n{total_tokens}")
```



This example demonstrates basic usage, including adding messages and recalling them, as well as automatically trimming the history when necessary.
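
The trimming behavior can also be seen without calling any API. The sketch below is a minimal, hypothetical example that assumes the default behavior described above: once more messages are added than `limit` allows, the oldest non-system messages are dropped while the system message is preserved.

```python
from memoravel import Memoravel

# Minimal offline sketch (no API calls). Assumes older messages are
# trimmed once `limit` is exceeded and the system message is preserved.
memory = Memoravel(limit=3, model="gpt-4o")

memory.add(role="system", content="You are a helpful assistant.")
for i in range(5):
    memory.add(role="user", content=f"Message {i}")

history = memory.recall()
print(len(history))  # expected: 3 (the system message plus the two most recent messages)
print(history[0])    # expected: the preserved system message
```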

## Usage

`memoravel` can be used in a variety of ways to maintain conversational context for language models. Below are some of the key methods available:

### `add(role, content=None, **kwargs)`

Add a message to the history. This method will automatically trim the history if it exceeds the set limits.

- **Parameters**:
  - `role` (str): The role of the message (`user`, `assistant`, `system`).
  - `content` (str, dict, list, optional): The content of the message.
  - `kwargs`: Additional metadata.
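
As a short sketch of how this might look (the `timestamp` field below is hypothetical metadata passed through `kwargs`, not a documented parameter):

```python
from memoravel import Memoravel

memory = Memoravel(limit=10, model="gpt-4o")

# Plain string content
memory.add(role="user", content="Hello!")

# Structured content, plus extra metadata via kwargs (hypothetical field)
memory.add(
    role="assistant",
    content=[{"type": "text", "text": "Hi there!"}],
    timestamp="2024-11-22T21:47:59Z",
)
```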

### `recall(last_n=None, first_n=None, slice_range=None)`

Retrieve messages from the history.

- **Parameters**:
  - `last_n` (int, optional): Retrieve the last `n` messages.
  - `first_n` (int, optional): Retrieve the first `n` messages.
  - `slice_range` (slice, optional): Retrieve messages using a slice.
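
For instance, continuing the `memory` instance from the sketch above:

```python
all_messages = memory.recall()                         # full history
last_two = memory.recall(last_n=2)                     # two most recent messages
first_one = memory.recall(first_n=1)                   # the first message
middle = memory.recall(slice_range=slice(1, 3))        # messages at indices 1 and 2
```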

### `save(file_path)` / `load(file_path)`

Save or load the history from a file.
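
A minimal sketch of persisting and restoring the history, assuming both are instance methods that take a file path (the `history.json` filename is only an example):

```python
memory.save("history.json")    # write the current history to disk

restored = Memoravel(limit=10, model="gpt-4o")
restored.load("history.json")  # load the saved history into a fresh instance
print(restored.recall())
```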

## Examples

You can find more comprehensive examples in the [`examples/`](examples/) directory of the repository. These examples cover various scenarios such as:

- Basic usage for conversational context.
- Advanced token management.
- Preserving system messages and custom metadata.

## Documentation

Full documentation for all methods and classes can be found at the [official documentation site](https://memoravel.readthedocs.io/en/latest/memoravel.html#memoravel.Memoravel). You can also refer to the docstrings in the code for quick explanations.

## Contributing

We welcome contributions! If you'd like to contribute, please follow these steps:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature-branch`).
3. Make your changes and commit them (`git commit -m 'Add new feature'`).
4. Push to the branch (`git push origin feature-branch`).
5. Open a Pull Request.

Please make sure to add or update tests as appropriate, and ensure the code follows PEP8 guidelines.

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Feedback and Support

If you have questions, suggestions, or feedback, feel free to open an issue on GitHub. Contributions, feedback, and improvements are always welcome.




            
