# Memoria
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)
[![CircleCI](https://dl.circleci.com/status-badge/img/gh/cosmoquester/memoria/tree/master.svg?style=svg&circle-token=513f0f5e9a706a51509d198359fe0e016a227ce9)](https://dl.circleci.com/status-badge/redirect/gh/cosmoquester/memoria/tree/master)
[![codecov](https://codecov.io/gh/cosmoquester/memoria/branch/master/graph/badge.svg?token=KZdkgkBzZG)](https://codecov.io/gh/cosmoquester/memoria)
<img src="./images/Memoria-Engrams.gif" width="55%">
Making neural networks remember over the long term has been a longstanding issue. Although several external memory techniques have been introduced, most focus on retaining recent information in the short term. Regardless of its importance, information tends to be fatefully forgotten over time. We present Memoria, a memory system for artificial neural networks, drawing inspiration from humans and applying various neuroscientific and psychological theories. The experimental results prove the effectiveness of Memoria in the diverse tasks of sorting, language modeling, and classification, surpassing conventional techniques. Engram analysis reveals that Memoria exhibits the primacy, recency, and temporal contiguity effects which are characteristics of human memory.
Memoria is an independent module that can be applied to neural network models in various ways; the experiment code for the paper is in the `experiment` directory.
My paper [Memoria: Resolving Fateful Forgetting Problem through Human-Inspired Memory Architecture](https://icml.cc/virtual/2024/poster/32668) was accepted to the **International Conference on Machine Learning (ICML) 2024 as a Spotlight paper**.
The full text of the paper is available on [OpenReview](https://openreview.net/forum?id=yTz0u4B8ug) and [arXiv](https://arxiv.org/abs/2310.03052).
## Installation
```sh
$ pip install memoria-pytorch
```
You can install Memoria with the pip command above.
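Note that `memoria-pytorch` requires Python 3.10 or newer.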
## Tutorial
This tutorial will help you understand the concept and mechanism of Memoria.
#### 1. Import Memoria and Set Parameters
```python
import torch
from memoria import Memoria, EngramType
torch.manual_seed(42)
# Memoria Parameters
num_reminded_stm = 4
stm_capacity = 16
ltm_search_depth = 5
initial_lifespan = 3
num_final_ltms = 4
# Data Parameters
batch_size = 2
sequence_length = 8
hidden_dim = 64
```
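Judging from the parameter names and the behavior in the steps below: `num_reminded_stm` bounds how many short-term engrams a `remind()` call retrieves, `stm_capacity` is the size of short-term memory, `ltm_search_depth` bounds the search over long-term engrams, `initial_lifespan` is the lifespan assigned to newly created engrams, and `num_final_ltms` bounds how many long-term engrams are ultimately reminded.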
#### 2. Initialize Memoria and Dummy Data
- Random dummy data and lifespan deltas are used for simplicity.
```python
memoria = Memoria(
    num_reminded_stm=num_reminded_stm,
    stm_capacity=stm_capacity,
    ltm_search_depth=ltm_search_depth,
    initial_lifespan=initial_lifespan,
    num_final_ltms=num_final_ltms,
)
data = torch.rand(batch_size, sequence_length, hidden_dim)
```
#### 3. Add Data as Working Memory
```python
# Add data as working memory
memoria.add_working_memory(data)
```
```python
# Expected values
>>> len(memoria.engrams)
16
>>> memoria.engrams.data.shape
torch.Size([2, 8, 64])
>>> memoria.engrams.lifespan
tensor([[3., 3., 3., 3., 3., 3., 3., 3.],
[3., 3., 3., 3., 3., 3., 3., 3.]])
```
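Note that `len(memoria.engrams)` counts engrams across the whole batch: 2 batch elements × 8 timesteps = 16 engrams, each starting with the initial lifespan of 3.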
#### 4. Remind Memories
- Empty memories are reminded because there are no engrams in STM/LTM yet
```python
reminded_memories, reminded_indices = memoria.remind()
```
```python
# No reminded memories because there are no STM/LTM engrams yet
>>> reminded_memories
tensor([], size=(2, 0, 64))
>>> reminded_indices
tensor([], size=(2, 0), dtype=torch.int64)
```
#### 5. Adjust Lifespan and Memories
- In this step, no engrams earn lifespan because there are no reminded memories
```python
memoria.adjust_lifespan_and_memories(reminded_indices, torch.zeros_like(reminded_indices))
```
```python
# Lifespan decreases for all engrams & working memories have changed into short-term memory
>>> memoria.engrams.lifespan
tensor([[2., 2., 2., 2., 2., 2., 2., 2.],
[2., 2., 2., 2., 2., 2., 2., 2.]])
>>> memoria.engrams.engrams_types
tensor([[2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2]], dtype=torch.uint8)
>>> EngramType.SHORTTERM
<EngramType.SHORTTERM: 2>
```
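This illustrates the engram lifecycle: data enters as working memory, becomes short-term memory after the next `adjust_lifespan_and_memories` call, and, as step 7 shows, is later promoted to long-term memory or deleted once its lifespan runs out. The `EngramType` enum encodes the stage of each engram.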
#### 6. Repeat One More Time
- Now that there are engrams in STM, remind and adjustment will work on the STM engrams
```python
data2 = torch.rand(batch_size, sequence_length, hidden_dim)
memoria.add_working_memory(data2)
```
```python
>>> len(memoria.engrams)
32
>>> memoria.engrams.lifespan
tensor([[2., 2., 2., 2., 2., 2., 2., 2., 3., 3., 3., 3., 3., 3., 3., 3.],
[2., 2., 2., 2., 2., 2., 2., 2., 3., 3., 3., 3., 3., 3., 3., 3.]])
```
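The first 8 positions of each batch element are the step-1 engrams, which have aged to lifespan 2; the 8 newly added working-memory engrams carry the initial lifespan of 3.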
```python
reminded_memories, reminded_indices = memoria.remind()
```
```python
# Remind memories from STM
>>> reminded_memories.shape
torch.Size([2, 6, 64])
>>> reminded_indices.shape
torch.Size([2, 6])
>>> reminded_indices
tensor([[ 0, 6, 4, 3, 2, -1],
[ 0, 7, 6, 5, 4, -1]])
```
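The -1 entries in `reminded_indices` appear to be padding for batch elements whose remind step retrieved fewer engrams; presumably they are ignored when lifespans are adjusted below.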
```python
# Increase lifespan of all the reminded engrams by 5
memoria.adjust_lifespan_and_memories(reminded_indices, torch.full_like(reminded_indices, 5))
```
```python
# Reminded engrams gained 5 lifespan, then every engram aged by 1
# (reminded: 2 + 5 - 1 = 6; unreminded old: 2 - 1 = 1; new: 3 - 1 = 2)
>>> memoria.engrams.lifespan
tensor([[6., 1., 6., 6., 6., 1., 6., 1., 2., 2., 2., 2., 2., 2., 2., 2.],
        [6., 1., 1., 1., 6., 6., 6., 6., 2., 2., 2., 2., 2., 2., 2., 2.]])
```
#### 7. Repeat
- Repeat the cycle 10 more times to see the dynamics of LTM
```python
# This is the default loop for utilizing Memoria
for _ in range(10):
    data = torch.rand(batch_size, sequence_length, hidden_dim)
    memoria.add_working_memory(data)

    reminded_memories, reminded_indices = memoria.remind()

    lifespan_delta = torch.randint_like(reminded_indices, 0, 6).float()

    memoria.adjust_lifespan_and_memories(reminded_indices, lifespan_delta)
```
```python
# After 10 iterations, some engrams have changed into long-term memory and gained large lifespans
# Engram type zero (NULL) means those engrams were deleted
>>> len(memoria.engrams)
72
>>> memoria.engrams.engrams_types
tensor([[3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[0, 0, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]], dtype=torch.uint8)
>>> EngramType.LONGTERM
<EngramType.LONGTERM: 3>
>>> EngramType.NULL
<EngramType.NULL: 0>
>>> memoria.engrams.lifespan
tensor([[ 9., 1., 8., 2., 16., 5., 13., 7., 7., 3., 3., 4., 3., 3.,
4., 2., 2., 1., 1., 1., 1., 1., 1., 1., 2., 6., 1., 1.,
2., 2., 2., 2., 2., 2., 2., 2.],
[-1., -1., 3., 2., 19., 21., 11., 6., 14., 1., 5., 1., 5., 1.,
5., 1., 1., 8., 2., 1., 1., 1., 2., 1., 1., 1., 1., 1.,
2., 2., 2., 2., 2., 2., 2., 2.]])
```
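The tutorial above runs Memoria in isolation. The sketch below shows one way it might be wired into a model's forward pass, using only the three calls demonstrated in this tutorial. The `encoder` module, the concatenation-based fusion, and the constant lifespan reward are all illustrative assumptions, not the architecture or training signal used in the paper's experiments.

```python
import torch
import torch.nn as nn
from memoria import Memoria

# Hypothetical downstream module; a stand-in for a real model.
hidden_dim = 64
encoder = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4, batch_first=True)

memoria = Memoria(
    num_reminded_stm=4,
    stm_capacity=16,
    ltm_search_depth=5,
    initial_lifespan=3,
    num_final_ltms=4,
)

def step(segment: torch.Tensor) -> torch.Tensor:
    """Process one segment of shape (batch_size, sequence_length, hidden_dim)."""
    # 1. Store the current segment as working memory.
    memoria.add_working_memory(segment)

    # 2. Retrieve engrams related to the current working memory.
    reminded_memories, reminded_indices = memoria.remind()

    # 3. Let the model attend over [reminded memories; current segment].
    #    Concatenating along the sequence axis is an illustrative choice.
    fused = torch.cat([reminded_memories, segment], dim=1)
    output = encoder(fused)

    # 4. Reward the reminded engrams. A constant delta is used purely for
    #    illustration; a real model would derive it from its own signals.
    lifespan_delta = torch.full_like(reminded_indices, 3).float()
    memoria.adjust_lifespan_and_memories(reminded_indices, lifespan_delta)

    # Keep only the positions corresponding to the current segment.
    return output[:, reminded_memories.size(1):]

for _ in range(4):
    out = step(torch.rand(2, 8, hidden_dim))
```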
## Citation
```bibtex
@InProceedings{pmlr-v235-park24a,
title = {Memoria: Resolving Fateful Forgetting Problem through Human-Inspired Memory Architecture},
author = {Park, Sangjun and Bak, Jinyeong},
booktitle = {Proceedings of the 41st International Conference on Machine Learning},
pages = {39587--39615},
year = {2024},
editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
volume = {235},
series = {Proceedings of Machine Learning Research},
month = {21--27 Jul},
publisher = {PMLR},
pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24a/park24a.pdf},
url = {https://proceedings.mlr.press/v235/park24a.html},
abstract = {Making neural networks remember over the long term has been a longstanding issue. Although several external memory techniques have been introduced, most focus on retaining recent information in the short term. Regardless of its importance, information tends to be fatefully forgotten over time. We present Memoria, a memory system for artificial neural networks, drawing inspiration from humans and applying various neuroscientific and psychological theories. The experimental results prove the effectiveness of Memoria in the diverse tasks of sorting, language modeling, and classification, surpassing conventional techniques. Engram analysis reveals that Memoria exhibits the primacy, recency, and temporal contiguity effects which are characteristics of human memory.}
}
```