<div align="center">
<a href="https://memos.openmem.net/">
<img src="docs/assets/banner_new.gif" alt="MemOS Banner">
</a>
<h1 align="center">
<img src="docs/assets/memos_logo.png" alt="MemOS Logo" width="50"/> MemOS 1.0: ææēģ (Stellar) <img src="https://img.shields.io/badge/status-Preview-blue" alt="Preview Badge"/>
</h1>
<p>
<a href="https://www.memtensor.com.cn/">
<img alt="Static Badge" src="https://img.shields.io/badge/Maintained_by-MemTensor-blue">
</a>
<a href="https://pypi.org/project/MemoryOS">
<img src="https://img.shields.io/pypi/v/MemoryOS?label=pypi%20package" alt="PyPI Version">
</a>
<a href="https://pypi.org/project/MemoryOS">
<img src="https://img.shields.io/pypi/pyversions/MemoryOS.svg" alt="Supported Python versions">
</a>
<a href="https://memos.openmem.net/docs/home">
<img src="https://img.shields.io/badge/Documentation-view-blue.svg" alt="Documentation">
</a>
<a href="https://arxiv.org/abs/2507.03724">
<img src="https://img.shields.io/badge/arXiv-2507.03724-b31b1b.svg" alt="ArXiv Paper">
</a>
<a href="https://github.com/MemTensor/MemOS/discussions">
<img src="https://img.shields.io/badge/GitHub-Discussions-181717.svg?logo=github" alt="GitHub Discussions">
</a>
<a href="https://discord.gg/Txbx3gebZR">
<img src="https://img.shields.io/badge/Discord-join%20chat-7289DA.svg?logo=discord" alt="Discord">
</a>
<a href="docs/assets/qr_code.png">
<img src="https://img.shields.io/badge/WeChat-Group-07C160.svg?logo=wechat" alt="WeChat Group">
</a>
<a href="https://opensource.org/license/apache-2-0/">
<img src="https://img.shields.io/badge/License-Apache_2.0-green.svg?logo=apache" alt="License">
</a>
</p>
</div>
---
<a href="https://memos.openmem.net/">
<img src="docs/assets/sota_score.jpg" alt="SOTA SCORE">
</a>
**MemOS** is an operating system for Large Language Models (LLMs) that enhances them with long-term memory capabilities. It allows LLMs to store, retrieve, and manage information, enabling more context-aware, consistent, and personalized interactions.
- **Website**: <a href="https://memos.openmem.net/" target="_blank">https://memos.openmem.net/</a>
- **Documentation**: <a href="https://memos.openmem.net/docs/home" target="_blank">https://memos.openmem.net/docs/home</a>
- **API Reference**: <a href="https://memos.openmem.net/docs/api/info" target="_blank">https://memos.openmem.net/docs/api/info</a>
- **Source Code**: <a href="https://github.com/MemTensor/MemOS" target="_blank">https://github.com/MemTensor/MemOS</a>
## 📈 Performance Benchmark
MemOS demonstrates significant improvements over baseline memory solutions in multiple reasoning tasks.
| Model | Avg. Score | Multi-Hop | Open Domain | Single-Hop | Temporal Reasoning |
|-------------|------------|-----------|-------------|------------|---------------------|
| **OpenAI** | 0.5275 | 0.6028 | 0.3299 | 0.6183 | 0.2825 |
| **MemOS** | **0.7331** | **0.6430** | **0.5521** | **0.7844** | **0.7321** |
| **Improvement** | **+38.98%** | **+6.67%** | **+67.35%** | **+26.86%** | **+159.15%** |
> 💡 **Temporal reasoning accuracy improved by 159% compared to the OpenAI baseline.**
### Details of End-to-End Evaluation on LOCOMO
> [!NOTE]
> Comparison of LLM Judge Scores across five major tasks in the LOCOMO benchmark. Each bar shows the mean evaluation score judged by LLMs for a given method-task pair, with standard deviation as error bars. MemOS-0630 consistently outperforms baseline methods (LangMem, Zep, OpenAI, Mem0) across all task types, especially in multi-hop and temporal reasoning scenarios.
<a href="https://memos.openmem.net/">
<img src="docs/assets/score_all_end2end.jpg" alt="END2END SCORE">
</a>
## ✨ Key Features
- **🧠 Memory-Augmented Generation (MAG)**: Provides a unified API for memory operations, integrating with LLMs to enhance chat and reasoning with contextual memory retrieval.
- **📦 Modular Memory Architecture (MemCube)**: A flexible and modular architecture that allows for easy integration and management of different memory types.
- **💾 Multiple Memory Types**:
  - **Textual Memory**: For storing and retrieving unstructured or structured text knowledge.
  - **Activation Memory**: Caches key-value pairs (`KVCacheMemory`) to accelerate LLM inference and context reuse.
  - **Parametric Memory**: Stores model adaptation parameters (e.g., LoRA weights).
- **🔌 Extensible**: Easily extend and customize memory modules, data sources, and LLM integrations.
## 🚀 Getting Started
Here's a quick example of how to load a **`MemCube`** from a directory, access its memories, and save it to a new location.
```python
from memos.mem_cube.general import GeneralMemCube
# Initialize a MemCube from a local directory
mem_cube = GeneralMemCube.init_from_dir("examples/data/mem_cube_2")
# Access and print all memories
print("--- Textual Memories ---")
for item in mem_cube.text_mem.get_all():
    print(item)

print("\n--- Activation Memories ---")
for item in mem_cube.act_mem.get_all():
    print(item)
# Save the MemCube to a new directory
mem_cube.dump("tmp/mem_cube")
```
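After dumping, the saved cube can be loaded again with the same `init_from_dir` call. Below is a minimal sketch, reusing only the calls shown above, to check that the saved copy round-trips:

```python
from memos.mem_cube.general import GeneralMemCube

# Reload the cube that was just written to tmp/mem_cube
reloaded_cube = GeneralMemCube.init_from_dir("tmp/mem_cube")

# The reloaded cube should expose the same textual memories as the original
print(f"Reloaded {len(reloaded_cube.text_mem.get_all())} textual memories")
```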
What about **`MOS`** (Memory Operating System)? It's a higher-level orchestration layer that manages multiple MemCubes and provides a unified API for memory operations. Here's a quick example of how to use MOS:
```python
from memos.configs.mem_os import MOSConfig
from memos.mem_os.main import MOS
# init MOS
mos_config = MOSConfig.from_json_file("examples/data/config/simple_memos_config.json")
memory = MOS(mos_config)
# create user
user_id = "b41a34d5-5cae-4b46-8c49-d03794d206f5"
memory.create_user(user_id=user_id)
# register cube for user
memory.register_mem_cube("examples/data/mem_cube_2", user_id=user_id)
# add memory for user
memory.add(
    messages=[
        {"role": "user", "content": "I like playing football."},
        {"role": "assistant", "content": "I like playing football too."},
    ],
    user_id=user_id,
)
# Later, when you want to retrieve memory for user
retrieved_memories = memory.search(query="What do you like?", user_id=user_id)
# retrieved_memories contains text memories (e.g. "I like playing football"), plus activation and parametric memories
print(f"text_memories: {retrieved_memories['text_mem']}")
```
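Every `MOS` call above is keyed by `user_id`, so a single `MOS` instance can serve several users, each with their own registered cubes and memories. Here is a minimal sketch using a hypothetical second user ID and only the calls already shown:

```python
# Hypothetical second user served by the same MOS instance
other_user_id = "c52b45e6-6dbf-4c57-9d5a-e14805e317a6"
memory.create_user(user_id=other_user_id)
memory.register_mem_cube("examples/data/mem_cube_2", user_id=other_user_id)

# Searches are scoped by user_id, so results reflect only this user's memories
other_results = memory.search(query="What do you like?", user_id=other_user_id)
print(f"text_memories: {other_results['text_mem']}")
```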
For more detailed examples, please check out the [`examples`](./examples) directory.
## 📦 Installation
> [!WARNING]
> MemOS supports Linux, Windows, and macOS.
>
> On macOS, however, some dependency conflicts can be difficult to resolve.
>
> For example, compatibility with macOS 13 Ventura is currently challenging.
### Install via pip
```bash
pip install MemoryOS
```
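To confirm the installation, try importing the package (the examples in this README import it as `memos`):

```bash
python -c "import memos; print('MemOS import OK')"
```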
### Development Install
To contribute to MemOS, clone the repository and install it in editable mode:
```bash
git clone https://github.com/MemTensor/MemOS.git
cd MemOS
make install
```
### Optional Dependencies
#### Ollama Support
To use MemOS with [Ollama](https://ollama.com/), first install the Ollama CLI:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
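With the CLI installed, make sure the Ollama server is running and pull a model for MemOS to call. The model name below is only an example; use whichever model your MemOS config references:

```bash
ollama serve &          # start the local server if it is not already running
ollama pull qwen2.5:7b  # example model; substitute the model named in your config
```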
#### Transformers Support
To use functionalities based on the `transformers` library, ensure you have [PyTorch](https://pytorch.org/get-started/locally/) installed (CUDA version recommended for GPU acceleration).
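For example, a CUDA build of PyTorch can be installed from the official wheel index; adjust the CUDA tag (here `cu121`) to match your driver, or install the CPU-only build:

```bash
# CPU-only build
pip install torch

# CUDA 12.1 build (see https://pytorch.org/get-started/locally/ for the index URL matching your setup)
pip install torch --index-url https://download.pytorch.org/whl/cu121
```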
## 💬 Community & Support
Join our community to ask questions, share your projects, and connect with other developers.
- **GitHub Issues**: Report bugs or request features in our <a href="https://github.com/MemTensor/MemOS/issues" target="_blank">GitHub Issues</a>.
- **GitHub Pull Requests**: Contribute code improvements via <a href="https://github.com/MemTensor/MemOS/pulls" target="_blank">Pull Requests</a>.
- **GitHub Discussions**: Participate in our <a href="https://github.com/MemTensor/MemOS/discussions" target="_blank">GitHub Discussions</a> to ask questions or share ideas.
- **Discord**: Join our <a href="https://discord.gg/Txbx3gebZR" target="_blank">Discord Server</a>.
- **WeChat**: Scan the QR code to join our WeChat group.
<img src="docs/assets/qr_code.png" alt="QR Code" width="600">
## 📜 Citation
> [!NOTE]
> We publicly released the Short Version on **May 28, 2025**, making it the earliest work to propose the concept of a Memory Operating System for LLMs.
If you use MemOS in your research, we would appreciate citations to our papers.
```bibtex
@article{li2025memos_long,
title={MemOS: A Memory OS for AI System},
author={Li, Zhiyu and Song, Shichao and Xi, Chenyang and Wang, Hanyu and Tang, Chen and Niu, Simin and Chen, Ding and Yang, Jiawei and Li, Chunyu and Yu, Qingchen and Zhao, Jihao and Wang, Yezhaohui and Liu, Peng and Lin, Zehao and Wang, Pengyuan and Huo, Jiahao and Chen, Tianyi and Chen, Kai and Li, Kehang and Tao, Zhen and Ren, Junpeng and Lai, Huayi and Wu, Hao and Tang, Bo and Wang, Zhenren and Fan, Zhaoxin and Zhang, Ningyu and Zhang, Linfeng and Yan, Junchi and Yang, Mingchuan and Xu, Tong and Xu, Wei and Chen, Huajun and Wang, Haofeng and Yang, Hongkang and Zhang, Wentao and Xu, Zhi-Qin John and Chen, Siheng and Xiong, Feiyu},
journal={arXiv preprint arXiv:2507.03724},
year={2025},
url={https://arxiv.org/abs/2507.03724}
}
@article{li2025memos_short,
title={MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models},
author={Li, Zhiyu and Song, Shichao and Wang, Hanyu and Niu, Simin and Chen, Ding and Yang, Jiawei and Xi, Chenyang and Lai, Huayi and Zhao, Jihao and Wang, Yezhaohui and others},
journal={arXiv preprint arXiv:2505.22101},
year={2025},
url={https://arxiv.org/abs/2505.22101}
}
@article{yang2024memory3,
author = {Yang, Hongkang and Zehao, Lin and Wenjin, Wang and Wu, Hao and Zhiyu, Li and Tang, Bo and Wenqiang, Wei and Wang, Jinbo and Zeyun, Tang and Song, Shichao and Xi, Chenyang and Yu, Yu and Kai, Chen and Xiong, Feiyu and Tang, Linpeng and Weinan, E},
title = {Memory$^3$: Language Modeling with Explicit Memory},
journal = {Journal of Machine Learning},
year = {2024},
volume = {3},
number = {3},
pages = {300--346},
issn = {2790-2048},
doi = {https://doi.org/10.4208/jml.240708},
url = {https://global-sci.com/article/91443/memory3-language-modeling-with-explicit-memory}
}
```
## 🙌 Contributing
We welcome contributions from the community! Please read our [contribution guidelines](https://memos.openmem.net/docs/contribution/overview) to get started.
## 📄 License
MemOS is licensed under the [Apache 2.0 License](./LICENSE).
## 📰 News
Stay up to date with the latest MemOS announcements, releases, and community highlights.
- **2025-07-07** – 🎉 *MemOS 1.0 (Stellar) Preview Release*: A SOTA Memory OS for LLMs is now open-sourced.
- **2025-07-04** – 🎉 *MemOS Paper Released*: [MemOS: A Memory OS for AI System](https://arxiv.org/abs/2507.03724) was published on arXiv.
- **2025-05-28** – 🎉 *Short Paper Uploaded*: [MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models](https://arxiv.org/abs/2505.22101) was published on arXiv.
- **2024-07-04** – 🎉 *Memory3 Model Released at WAIC 2024*: The new memory-layered architecture model was unveiled at the 2024 World Artificial Intelligence Conference.
- **2024-07-01** – 🎉 *Memory3 Paper Released*: [Memory3: Language Modeling with Explicit Memory](https://arxiv.org/abs/2407.01178) introduces a new approach to structured memory in LLMs.