<div align="center">
<img src="https://github.com/openscilab/memor/raw/main/otherfiles/logo.png" alt="Memor Logo" width="424">
<h1>Memor: Reproducible Structured Memory for LLMs</h1>
<br/>
<a href="https://codecov.io/gh/openscilab/memor"><img src="https://codecov.io/gh/openscilab/memor/branch/dev/graph/badge.svg?token=TS5IAEXX7O"></a>
<a href="https://badge.fury.io/py/memor"><img src="https://badge.fury.io/py/memor.svg" alt="PyPI version"></a>
<a href="https://www.python.org/"><img src="https://img.shields.io/badge/built%20with-Python3-green.svg" alt="built with Python3"></a>
<a href="https://github.com/openscilab/memor"><img alt="GitHub repo size" src="https://img.shields.io/github/repo-size/openscilab/memor"></a>
<a href="https://discord.gg/cZxGwZ6utB"><img src="https://img.shields.io/discord/1064533716615049236.svg" alt="Discord Channel"></a>
</div>
----------
## Overview
<p align="justify">
With Memor, users can store their LLM conversation history in an intuitive, structured data format.
It abstracts user prompts and model responses into a "Session", a sequence of message exchanges.
In addition to the content, each message records details such as decoding temperature and token count, so users can create comprehensive, reproducible logs of their interactions.
Thanks to its model-agnostic design, users can begin a conversation with one LLM and switch to another while keeping the context the same.
For example, they might use a retrieval-augmented generation (RAG) pipeline to gather relevant context for a math problem, then switch to a model better suited for reasoning and solve the problem using the retrieved information presented by Memor.
</p>
<p align="justify">
Memor also lets users select, filter, and share specific parts of past conversations across different models. This means users can not only reproduce and review previous chats through structured logs, but also flexibly transfer the content of their conversations between LLMs.
In a nutshell, Memor makes it easy and effective to manage and reuse conversations with large language models.
</p>
<table>
<tr>
<td align="center">PyPI Counter</td>
<td align="center">
<a href="https://pepy.tech/projects/memor">
<img src="https://static.pepy.tech/badge/memor">
</a>
</td>
</tr>
<tr>
<td align="center">Github Stars</td>
<td align="center">
<a href="https://github.com/openscilab/memor">
<img src="https://img.shields.io/github/stars/openscilab/memor.svg?style=social&label=Stars">
</a>
</td>
</tr>
</table>
<table>
<tr>
<td align="center">Branch</td>
<td align="center">main</td>
<td align="center">dev</td>
</tr>
<tr>
<td align="center">CI</td>
<td align="center">
<img src="https://github.com/openscilab/memor/actions/workflows/test.yml/badge.svg?branch=main">
</td>
<td align="center">
<img src="https://github.com/openscilab/memor/actions/workflows/test.yml/badge.svg?branch=dev">
</td>
</tr>
</table>
<table>
<tr>
<td align="center">Code Quality</td>
<td align="center"><a href="https://www.codefactor.io/repository/github/openscilab/memor"><img src="https://www.codefactor.io/repository/github/openscilab/memor/badge" alt="CodeFactor"></a></td>
<td align="center"><a href="https://app.codacy.com/gh/openscilab/memor/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade"><img src="https://app.codacy.com/project/badge/Grade/3758f5116c4347ce957997bb7f679cfa"/></a></td>
</tr>
</table>
## Installation
### PyPI
- Check [Python Packaging User Guide](https://packaging.python.org/installing/)
- Run `pip install memor==0.9`
### Source code
- Download [Version 0.9](https://github.com/openscilab/memor/archive/v0.9.zip) or [Latest Source](https://github.com/openscilab/memor/archive/dev.zip)
- Run `pip install .`
## Usage
Memor provides `Prompt`, `Response`, and `Session` as abstractions for saving your conversation history in a structured way. Create a `Session` object before starting a conversation, wrap each of your prompts in a `Prompt` object and each LLM reply in a `Response` object, and add them to the `Session` to keep the conversation history.
```py
from memor import Session, Prompt, Response
from memor import RenderFormat
from mistralai import Mistral
client = Mistral(api_key="YOUR_MISTRAL_API")
session = Session()
while True:
    user_input = input(">> You: ")
    prompt = Prompt(message=user_input)
    session.add_message(prompt)  # Add user input to session
    response = client.chat.complete(
        model="mistral-large-latest",
        messages=session.render(RenderFormat.OPENAI)  # Render the whole session history
    )
    print("<< MistralAI:", response.choices[0].message.content)
    response = Response(message=response.choices[0].message.content)
    session.add_message(response)  # Add model response to session
```
Your conversation carries the past interactions, so the LLM remembers your session's information:
```
>> You: Imagine you have 3 apples. You eat one of them. How many apples remain?
<< MistralAI: If you start with 3 apples and you eat one of them, you will have 2 apples remaining.
>> You: How about starting from 2 apples?
<< MistralAI: If you start with 2 apples and you eat one of them, you will have 1 apple remaining. Here's the simple math:
2 apples - 1 apple = 1 apple
```
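
Because a `Session` can be saved and reloaded (see the `Session` methods below), the same history can be persisted and later replayed, even with a different provider. A minimal sketch continuing the example above; the file name is arbitrary and the exact `save` signature is an assumption, so check the API reference:

```py
from memor import Session, RenderFormat

# Persist the session at any point ...
session.save("math_chat.json")

# ... and restore it later (or in another process) via the documented
# file_path parameter, then render it for whichever model comes next.
restored = Session(file_path="math_chat.json")
messages = restored.render(RenderFormat.OPENAI)
```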
In the following, we detail the different abstraction levels Memor provides for conversation artifacts.
### Prompt
The `Prompt` class is a core abstraction in Memor, representing a user prompt. A prompt can be associated with one or more responses from an LLM, typically with the most confident one listed first. It encapsulates not just the prompt text but also metadata, a template for rendering it for an API endpoint, and serialization capabilities that enable saving and reusing prompts.
```py
from memor import Prompt, Response, PresetPromptTemplate
prompt = Prompt(
    message="Hello, how are you?",
    responses=[
        Response(message="I'm fine."),
        Response(message="I'm not fine."),
    ],
    template=PresetPromptTemplate.BASIC.PROMPT_RESPONSE_STANDARD
)
prompt.render()
# Prompt: Hello, how are you?
# Response: I'm fine.
```
#### Parameters
| **Name** | **Type** | **Description** |
| ------------ | ---------------------------------------- | ---------------------------------------------------------- |
| `message` | `str` | The core prompt message content |
| `responses` | `List[Response]` | List of associated responses |
| `role` | `Role` | Role of the message sender (`USER`, `SYSTEM`, etc.) |
| `tokens` | `int` | Token count |
| `template` | `PromptTemplate \| PresetPromptTemplate` | Template used to format the prompt |
| `file_path` | `str` | Path to load a prompt from a JSON file |
| `init_check` | `bool` | Whether to verify template rendering during initialization |
#### Methods
| **Method** | **Description** |
| ---------------------------------------------- | ---------------------------------------------------------------------- |
| `add_response` | Add a new response (append or insert) |
| `remove_response`                              | Remove the response at the specified index                             |
| `select_response` | Mark a specific response as selected to be included in memory |
| `update_template` | Update the rendering template |
| `update_responses` | Replace all responses |
| `update_message` | Update the prompt text |
| `update_role` | Change the prompt role |
| `update_tokens` | Set a custom token count |
| `to_json` / `from_json` | Serialize or deserialize the prompt data |
| `to_dict` | Convert the object to a Python dictionary |
| `save` / `load` | Save or load prompt from file |
| `render` | Render the prompt in a specified format |
| `check_render` | Validate if the current prompt setup can render |
| `estimate_tokens` | Estimate the token usage for the prompt |
| `get_size` | Return prompt size in bytes (JSON-encoded) |
| `copy` | Clone the prompt |
| `regenerate_id` | Reset the unique identifier of the prompt |
| `contains_xml` | Check if the prompt contains any XML tags |
| `set_size_warning` / `reset_size_warning` | Set or reset size warning |
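
As a quick illustration of the workflow these methods enable, here is a minimal sketch; the method names come from the table above, while the file name and exact argument forms are assumptions:

```py
from memor import Prompt, Response

prompt = Prompt(message="Summarize the meeting notes.")
prompt.add_response(Response(message="Here is a brief summary."))
prompt.save("prompt.json")                   # persist the prompt and its responses

restored = Prompt(file_path="prompt.json")   # reload via the documented file_path parameter
print(restored.render())                     # render with the prompt's current template
```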
### Response
The `Response` class represents an answer or a completion generated by a model given a prompt. It encapsulates metadata such as score, temperature, model, tokens, inference time, and more. It also provides utilities for JSON serialization, rendering in multiple formats, and import/export functionality.
```py
from memor import Response, LLMModel
response = Response(
    message="Sure! Here's a summary.",
    score=0.94,
    temperature=0.7,
    model=LLMModel.GPT_4,
    inference_time=0.3
)
response.render()
# Sure! Here's a summary.
```
#### Parameters
| **Name** | **Type** | **Description** |
| ---------------- | ------------------- | -----------------------------------------------------|
| `message` | `str` | The content of the response |
| `score` | `float` | Evaluation score representing the response quality |
| `role` | `Role` | Role of the message sender (`USER`, `SYSTEM`, etc.) |
| `temperature` | `float` | Sampling temperature |
| `top_k`          | `int`               | `k` in the top-k sampling method                      |
| `top_p` | `float` | `p` in top-p (nucleus) sampling |
| `tokens` | `int` | Number of tokens in the response |
| `inference_time` | `float` | Time spent generating the response (seconds) |
| `model` | `LLMModel` \| `str` | Model used |
| `gpu` | `str` | GPU model used |
| `date`           | `datetime.datetime` | Creation timestamp                                    |
| `file_path` | `str` | Path to load a saved response |
#### Methods
| **Method** | **Description** |
| ----------------------------------------------- | ------------------------------------------------------------------------ |
| `update_score` | Update the response score |
| `update_temperature` | Set the generation temperature |
| `update_top_k` | Set the top-k value |
| `update_top_p` | Set the top-p value |
| `update_model` | Set the model name or enum |
| `update_gpu` | Set the GPU model identifier |
| `update_inference_time` | Set the inference time in seconds |
| `update_message` | Update the response message |
| `update_role` | Update the sender role |
| `update_tokens` | Set the number of tokens |
| `to_json` / `from_json` | Serialize or deserialize to/from JSON |
| `to_dict` | Convert the object to a Python dictionary |
| `save` / `load` | Save or load the response to/from a file |
| `render` | Render the response in a specific format |
| `check_render` | Validate if the current response setup can render |
| `estimate_tokens` | Estimate the token usage for the response |
| `get_size` | Return response size in bytes (JSON-encoded) |
| `copy` | Clone the response |
| `regenerate_id` | Reset the unique identifier of the response |
| `contains_xml` | Check if the response contains any XML tags |
| `set_size_warning` / `reset_size_warning` | Set or reset size warning |
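
A minimal sketch tying a few of these methods together; the values are illustrative and the exact method signatures are assumptions based on the table above:

```py
from memor import Response, LLMModel

response = Response(
    message="Paris is the capital of France.",
    model=LLMModel.GPT_4,
    temperature=0.2,
)
response.update_score(0.9)       # record an evaluation score after review
print(response.to_dict())        # plain-dict view of the message and its metadata
response.save("response.json")   # persist; reload later via file_path or load()
```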
### Prompt Templates
The `PromptTemplate` class provides a structured interface for managing, storing, and customizing text prompt templates used in prompt engineering tasks. This class supports template versioning, metadata tracking, file-based persistence, and integration with preset template formats. It is a core component of the Memor library, designed to facilitate reproducible and organized prompt workflows for LLMs.
```py
from memor import Prompt, PromptTemplate
template = PromptTemplate(content="{instruction}, {prompt[message]}", custom_map={"instruction": "Hi"})
prompt = Prompt(message="How are you?", template=template)
prompt.render()
# 'Hi, How are you?'
```
#### Parameters
| **Name** | **Type** | **Description** |
| ------------ | ---------------- | ------------------------------------------------------ |
| `title` | `str` | The template name |
| `content` | `str` | The template content string with placeholders |
| `custom_map` | `Dict[str, str]` | A dictionary of custom variables used in the template |
| `file_path` | `str` | Path to a JSON file to load the template from |
#### Methods
| **Method** | **Description** |
| ---------------------------------------------------- | ------------------------------------------------------ |
| `update_title` | Update the template title |
| `update_content` | Update the template content |
| `update_map` | Update the custom variable map |
| `get_size` | Return the size (in bytes) of the JSON representation |
| `save` / `load` | Save or load the template to/from a file |
| `to_json` / `from_json` | Serialize or deserialize to/from JSON |
| `to_dict` | Convert the template to a plain Python dictionary |
| `copy` | Return a shallow copy of the template instance |
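
Since templates can be saved and reloaded, a team can share a common rendering format as a JSON file. A minimal sketch; the file name and the `save` argument form are assumptions:

```py
from memor import Prompt, PromptTemplate

template = PromptTemplate(
    title="greeting",
    content="{instruction} {prompt[message]}",
    custom_map={"instruction": "Answer briefly:"},
)
template.save("greeting_template.json")                        # persist the template
restored = PromptTemplate(file_path="greeting_template.json")  # reload it elsewhere
print(Prompt(message="What is Memor?", template=restored).render())
# Answer briefly: What is Memor?
```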
#### Preset Templates
Memor provides a variety of predefined `PromptTemplate`s that control how prompts and responses are rendered. Each template can be prefixed with an optional instruction string and comes in variations for different formatting styles. The available instruction variants are:
+ `INSTRUCTION1`: "*I'm providing you with a history of a previous conversation. Please consider this context when responding to my new question.*"
+ `INSTRUCTION2`: "*Here is the context from a prior conversation. Please learn from this information and use it to provide a thoughtful and context-aware response to my next questions.*"
+ `INSTRUCTION3`: "*I am sharing a record of a previous discussion. Use this information to provide a consistent and relevant answer to my next query.*"
| **Template Title** | **Description** |
|--------------------------------------------------|------------------------------------------------------------------------|
| `PROMPT` | Only includes the prompt message |
| `RESPONSE` | Only includes the response message |
| `RESPONSE0` to `RESPONSE3`                       | Each includes the i-th response from a list of multiple responses       |
| `PROMPT_WITH_LABEL`                              | Prompt with a "Prompt: " prefix                                          |
| `RESPONSE_WITH_LABEL`                            | Response with a "Response: " prefix                                      |
| `RESPONSE0_WITH_LABEL` to `RESPONSE3_WITH_LABEL` | Labeled variant of the i-th response                                     |
| `PROMPT_RESPONSE_STANDARD`                       | Both the labeled prompt and response on a single line                    |
| `PROMPT_RESPONSE_FULL`                           | A detailed multi-line representation including role, date, model, etc.   |
You can access them using:
```py
from memor import PresetPromptTemplate
template = PresetPromptTemplate.INSTRUCTION1.PROMPT_RESPONSE_STANDARD
```
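
For example, combining this preset with a `Prompt` (mirroring the `Prompt` example above) yields an instruction-prefixed rendering. A minimal sketch; the output layout shown in the comments is indicative, not verbatim:

```py
from memor import Prompt, Response, PresetPromptTemplate

prompt = Prompt(
    message="What is 2 + 2?",
    responses=[Response(message="4")],
    template=PresetPromptTemplate.INSTRUCTION1.PROMPT_RESPONSE_STANDARD,
)
print(prompt.render())
# I'm providing you with a history of a previous conversation. ...
# Prompt: What is 2 + 2? Response: 4
```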
### Session
The `Session` class represents a conversation session composed of `Prompt` and `Response` messages. It supports creation, modification, saving, loading, searching, rendering, and token estimation — offering a structured way to manage LLM interaction histories. Each session tracks metadata such as title, creation/modification time, render count, and message activation (masking) status.
```py
from memor import Session, Prompt, Response
session = Session(title="Q&A Session", messages=[
    Prompt(message="What is the capital of France?"),
    Response(message="The capital of France is Paris.")
])
session.add_message(Prompt(message="What is the population of Paris?"))
print(session.render())
# What is the capital of France?
# The capital of France is Paris.
# What is the population of Paris?

results = session.search("Paris")
print("Found at indices:", results)
# Found at indices: [1, 2]

tokens = session.estimate_tokens()
print("Estimated tokens:", tokens)
# Estimated tokens: 35
```
#### Parameters
| **Parameter** | **Type** | **Description** |
| ---------------| ------------------------------------ | --------------------------------------------- |
| `title` | `str` | The title of the session |
| `messages` | `List[Prompt or Response]` | The list of initial messages |
| `init_check` | `bool` | Whether to check rendering at initialization |
| `file_path`    | `str`                                | Path to a saved session file                  |
#### Methods
| **Method** | **Description** |
| ----------------------------------------------------------------- | ------------------------------------------------------------- |
| `add_message` | Add a `Prompt` or `Response` to the session |
| `remove_message` | Remove a message by index or ID |
| `remove_message_by_index` | Remove a message by numeric index |
| `remove_message_by_id` | Remove a message by its unique ID |
| `update_title` | Update the title of the session |
| `update_messages` | Replace all messages and optionally update their status list |
| `update_messages_status` | Update the message status without changing the content |
| `clear_messages` | Remove all messages from the session |
| `get_message` | Retrieve a message by index, slice, or ID |
| `get_message_by_index` | Get a message by integer index or slice |
| `get_message_by_id` | Get a message by its unique ID |
| `enable_message` | Mark the message at the given index as active |
| `disable_message` | Mark the message as inactive (masked) |
| `mask_message` | Alias for `disable_message()` |
| `unmask_message` | Alias for `enable_message()` |
| `search` | Search for a string or regex pattern in the messages |
| `save` / `load` | Save or load the session to/from a file |
| `to_json` / `from_json` | Serialize or deserialize the session to/from JSON |
| `to_dict` | Return a Python dict representation of the session |
| `render` | Render the session in the specified format |
| `check_render` | Return `True` if the session renders without error |
| `get_size` | Return session size in bytes (JSON-encoded) |
| `copy` | Return a shallow copy of the session |
| `estimate_tokens` | Estimate the token count of the session content |
| `set_size_warning` / `reset_size_warning` | Set or reset size warning |
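
Masking is handy when you want to drop a message from what the model sees without deleting it from the log. A minimal sketch, assuming `mask_message` and `unmask_message` take a message index:

```py
from memor import Session, Prompt, Response

session = Session(messages=[
    Prompt(message="Draft an email to the team."),
    Response(message="Here is a draft ..."),
    Prompt(message="Now make it more formal."),
])
session.mask_message(1)      # exclude the draft from future renders
print(session.render())      # renders only the active (unmasked) messages
session.unmask_message(1)    # restore it later without losing history
```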
## Examples
You can find more real-world usage of Memor in the [`examples`](https://github.com/openscilab/memor/tree/main/examples) directory.
It includes concise, practical Python scripts that demonstrate key features of the Memor library.
## Issues & bug reports
Just file an issue and describe it. We'll check it ASAP! Alternatively, you can send an email to [memor@openscilab.com](mailto:memor@openscilab.com "memor@openscilab.com").
- Please complete the issue template

You can also join our Discord server:
<a href="https://discord.gg/cZxGwZ6utB">
<img src="https://img.shields.io/discord/1064533716615049236.svg?style=for-the-badge" alt="Discord Channel">
</a>
## Show your support
### Star this repo
Give a ⭐️ if this project helped you!
### Donate to our project
If you like our project (and we hope you do), please consider supporting us. Our project is not, and never will be, run for profit. We need the money just so we can continue doing what we do ;-).
<a href="https://openscilab.com/#donation" target="_blank"><img src="https://github.com/openscilab/memor/raw/main/otherfiles/donation.png" height="90px" width="270px" alt="Memor Donation"></a>
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.9] - 2025-10-22
### Added
- `Prompt` class `contains_xml` method
- `Response` class `contains_xml` method
- `Prompt` class size warning
- `Response` class size warning
- `Session` class size warning
- `Response` class `warnings` property
- `Prompt` class `warnings` property
- `Session` class `warnings` property
### Changed
- `README.md` updated
- `LLMModel` enum updated
- `None` value update bug fixed
- `show_warning` parameter added to `render` method
- `Python 3.14` added to `test.yml`
- Test system modified
- Typing modified
## [0.8] - 2025-07-21
### Added
- Logo
- `Response` class `top_k` property
- `Response` class `top_p` property
- `Response` class `gpu` property
### Changed
- `AI_STUDIO` render format modified
- `LLMModel` enum updated
- Model load bug fixed
- Test system modified
- `README.md` updated
- `_validate_pos_float` now validates `int` values
- `_validate_probability` now validates `int` values
- `None` value validation bug fixed
## [0.7] - 2025-06-25
### Added
- `Message` abstract class
- `Session` class `_validate_extract_json` method
- `Response` class `_validate_extract_json` method
- `Prompt` class `_validate_extract_json` method
- `PromptTemplate` class `_validate_extract_json` method
- `Session` class `search` method
- `Session` class `get_size` method
- `Response` class `get_size` method
- `Prompt` class `get_size` method
- `PromptTemplate` class `get_size` method
- `Session` class `size` attribute
- `Response` class `size` attribute
- `Prompt` class `size` attribute
- `PromptTemplate` class `size` attribute
- `examples` directory
### Changed
- Validation bug fixed in `update_messages` method in `Session` class
- Validation bug fixed in `from_json` method in `PromptTemplate`, `Response`, `Prompt`, and `Session` classes
- `AI_STUDIO` render format modified
- `Session` class messages status bug fixed
- Test system modified
- `README.md` updated
## [0.6] - 2025-05-05
### Added
- `Response` class `id` property
- `Prompt` class `id` property
- `Response` class `regenerate_id` method
- `Prompt` class `regenerate_id` method
- `Session` class `render_counter` method
- `Session` class `remove_message_by_index` and `remove_message_by_id` methods
- `Session` class `get_message_by_index`, `get_message_by_id` and `get_message` methods
- `LLMModel` enum
- `AI_STUDIO` render format
### Changed
- Test system modified
- Modification handling centralized via `_mark_modified` method
- `Session` class `remove_message` method modified
## [0.5] - 2025-04-16
### Added
- `Session` class `check_render` method
- `Session` class `clear_messages` method
- `Prompt` class `check_render` method
- `Session` class `estimate_tokens` method
- `Prompt` class `estimate_tokens` method
- `Response` class `estimate_tokens` method
- `universal_tokens_estimator` function
- `openai_tokens_estimator_gpt_3_5` function
- `openai_tokens_estimator_gpt_4` function
### Changed
- `init_check` parameter added to `Prompt` class
- `init_check` parameter added to `Session` class
- Test system modified
- `Python 3.6` support dropped
- `README.md` updated
## [0.4] - 2025-03-17
### Added
- `Session` class `__contains__` method
- `Session` class `__getitem__` method
- `Session` class `mask_message` method
- `Session` class `unmask_message` method
- `Session` class `masks` attribute
- `Response` class `__len__` method
- `Prompt` class `__len__` method
### Changed
- `inference_time` parameter added to `Response` class
- `README.md` updated
- Test system modified
- Python typing features added to all modules
- `Prompt` class default values updated
- `Response` class default values updated
## [0.3] - 2025-03-08
### Added
- `Session` class `__len__` method
- `Session` class `__iter__` method
- `Session` class `__add__` and `__radd__` methods
### Changed
- `tokens` parameter added to `Prompt` class
- `tokens` parameter added to `Response` class
- `tokens` parameter added to preset templates
- `Prompt` class modified
- `Response` class modified
- `PromptTemplate` class modified
## [0.2] - 2025-03-01
### Added
- `Session` class
### Changed
- `Prompt` class modified
- `Response` class modified
- `PromptTemplate` class modified
- `README.md` updated
- Test system modified
## [0.1] - 2025-02-12
### Added
- `Prompt` class
- `Response` class
- `PromptTemplate` class
- `PresetPromptTemplate` class
[Unreleased]: https://github.com/openscilab/memor/compare/v0.9...dev
[0.9]: https://github.com/openscilab/memor/compare/v0.8...v0.9
[0.8]: https://github.com/openscilab/memor/compare/v0.7...v0.8
[0.7]: https://github.com/openscilab/memor/compare/v0.6...v0.7
[0.6]: https://github.com/openscilab/memor/compare/v0.5...v0.6
[0.5]: https://github.com/openscilab/memor/compare/v0.4...v0.5
[0.4]: https://github.com/openscilab/memor/compare/v0.3...v0.4
[0.3]: https://github.com/openscilab/memor/compare/v0.2...v0.3
[0.2]: https://github.com/openscilab/memor/compare/v0.1...v0.2
[0.1]: https://github.com/openscilab/memor/compare/6594313...v0.1