memoria-pytorch

- Name: memoria-pytorch
- Version: 1.0.0
- Home page: https://github.com/cosmoquester/memoria.git
- Summary: Memoria is a Hebbian memory architecture for neural networks.
- Upload time: 2023-10-06 02:32:22
- Author: Park Sangjun
- Requires Python: >=3.7
- Keywords: memoria, hebbian, memory, transformer
# Memoria

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)
[![CircleCI](https://dl.circleci.com/status-badge/img/gh/cosmoquester/memoria/tree/master.svg?style=svg&circle-token=513f0f5e9a706a51509d198359fe0e016a227ce9)](https://dl.circleci.com/status-badge/redirect/gh/cosmoquester/memoria/tree/master)
[![codecov](https://codecov.io/gh/cosmoquester/memoria/branch/master/graph/badge.svg?token=KZdkgkBzZG)](https://codecov.io/gh/cosmoquester/memoria)

<img src="https://github.com/cosmoquester/memoria/assets/30718444/fa36dd13-7aac-4c4d-b749-83d93993d422" width="55%">

Memoria is a general memory network that applies Hebbian theory, a major theory explaining human memory formation, to enhance long-term dependencies in neural networks. Memoria stores and retrieves pieces of information called engrams at three memory levels, working memory, short-term memory, and long-term memory, using connection weights that change according to Hebb's rule.

Memoria is an independent module that can be applied to neural network models in various ways; the experiment code for the paper is in the `experiment` directory.

Please refer to [Memoria: Hebbian Memory Architecture for Human-Like Sequential Processing](https://arxiv.org/abs/2310.03052) for more details about Memoria.
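
As a rough illustration of the underlying idea (a generic toy example, not Memoria's actual implementation), Hebb's rule strengthens the connections between units that are active together:

```python
import torch

# Toy Hebbian update (illustration only, not the Memoria API):
# connections between co-activated units are strengthened.
learning_rate = 0.1
pre = torch.rand(8)           # activations of "pre-synaptic" engrams
post = torch.rand(8)          # activations of "post-synaptic" engrams
weights = torch.zeros(8, 8)   # connection weights between engrams

# delta_w[i, j] is proportional to pre[i] * post[j] ("fire together, wire together")
weights += learning_rate * torch.outer(pre, post)
```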

## Installation

```sh
$ pip install memoria-pytorch
```

You can install Memoria with the pip command above.
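
As a quick sanity check after installation, the two classes used throughout the tutorial below should be importable:

```python
# Sanity check: these imports should succeed after installation.
from memoria import Memoria, EngramType

print(Memoria, EngramType)
```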

## Tutorial

This tutorial helps you understand the concept and mechanism of Memoria.

#### 1. Import Memoria and Set Parameters

```python
import torch
from memoria import Memoria, EngramType

torch.manual_seed(42)

# Memoria Parameters
num_reminded_stm = 4
stm_capacity = 16
ltm_search_depth = 5
initial_lifespan = 3
num_final_ltms = 4

# Data Parameters
batch_size = 2
sequence_length = 8
hidden_dim = 64
```

#### 2. Initialize Memoria and Dummy Data

- Fake random data and lifespan deltas are used for simplicity.

```python
memoria = Memoria(
    num_reminded_stm=num_reminded_stm,
    stm_capacity=stm_capacity,
    ltm_search_depth=ltm_search_depth,
    initial_lifespan=initial_lifespan,
    num_final_ltms=num_final_ltms,
)
data = torch.rand(batch_size, sequence_length, hidden_dim)
```

#### 3. Add Data as Working Memory

```python
# Add data as working memory
memoria.add_working_memory(data)
```

```python
# Expected values: 16 engrams in total (batch_size 2 x sequence_length 8)
>>> len(memoria.engrams)
16
>>> memoria.engrams.data.shape
torch.Size([2, 8, 64])
>>> memoria.engrams.lifespan
tensor([[3., 3., 3., 3., 3., 3., 3., 3.],
        [3., 3., 3., 3., 3., 3., 3., 3.]])
```

#### 4. Remind Memories

- Empty memories are reminded because there are no engrams in STM/LTM yet.

```python
reminded_memories, reminded_indices = memoria.remind()
```

```python
# No reminded memories because there are no STM/LTM engrams yet
>>> reminded_memories
tensor([], size=(2, 0, 64))
>>> reminded_indices
tensor([], size=(2, 0), dtype=torch.int64)
```

#### 5. Adjust Lifespan and Memories

- In this step, no engrams gain lifespan because there are no reminded memories.

```python
memoria.adjust_lifespan_and_memories(reminded_indices, torch.zeros_like(reminded_indices))
```

```python
# Lifespan decreases for all engrams & working memories have changed into short-term memory
>>> memoria.engrams.lifespan
tensor([[2., 2., 2., 2., 2., 2., 2., 2.],
        [2., 2., 2., 2., 2., 2., 2., 2.]])
>>> memoria.engrams.engrams_types
tensor([[2, 2, 2, 2, 2, 2, 2, 2],
        [2, 2, 2, 2, 2, 2, 2, 2]], dtype=torch.uint8)
>>> EngramType.SHORTTERM
<EngramType.SHORTTERM: 2>
```
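
For reference, these are the engram types that appear in this tutorial; the values below are the ones shown in the outputs of this step and of step 7:

```python
# Engram types observed in this tutorial (values match the outputs shown in steps 5 and 7).
from memoria import EngramType

print(EngramType.NULL)       # <EngramType.NULL: 0>      -> deleted engram
print(EngramType.SHORTTERM)  # <EngramType.SHORTTERM: 2> -> short-term memory
print(EngramType.LONGTERM)   # <EngramType.LONGTERM: 3>  -> long-term memory
```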

#### 6. Repeat one more time

- Now that there are some engrams in STM, reminding and adjustment from STM will work.

```python
data2 = torch.rand(batch_size, sequence_length, hidden_dim)
memoria.add_working_memory(data2)
```

```python
>>> len(memoria.engrams)
32
>>> memoria.engrams.lifespan
tensor([[2., 2., 2., 2., 2., 2., 2., 2., 3., 3., 3., 3., 3., 3., 3., 3.],
        [2., 2., 2., 2., 2., 2., 2., 2., 3., 3., 3., 3., 3., 3., 3., 3.]])
```

```python
reminded_memories, reminded_indices = memoria.remind()
```

```python
# Remind memories from STM (an index of -1 appears to mark an empty, padded slot)
>>> reminded_memories.shape
torch.Size([2, 6, 64])
>>> reminded_indices.shape
torch.Size([2, 6])
>>> reminded_indices
tensor([[ 0,  6,  4,  3,  2, -1],
        [ 0,  7,  6,  5,  4, -1]])
```

```python
# Increase lifespan of all the reminded engrams by 5
memoria.adjust_lifespan_and_memories(reminded_indices, torch.full_like(reminded_indices, 5))
```

```python
# Reminded engrams gained 5 lifespan (2 - 1 + 5 = 6), while the other engrams just got older (2 - 1 = 1)
>>> memoria.engrams.lifespan
tensor([[6., 1., 6., 6., 6., 1., 6., 1., 2., 2., 2., 2., 2., 2., 2., 2.],
        [6., 1., 1., 1., 6., 6., 6., 6., 2., 2., 2., 2., 2., 2., 2., 2.]])
```

#### 7. Repeat

- Repeat 10 more times to see the dynamics of LTM.

```python
# This is the default loop for utilizing Memoria
for _ in range(10):
    data = torch.rand(batch_size, sequence_length, hidden_dim)
    memoria.add_working_memory(data)

    reminded_memories, reminded_indices = memoria.remind()

    lifespan_delta = torch.randint_like(reminded_indices, 0, 6).float()

    memoria.adjust_lifespan_and_memories(reminded_indices, lifespan_delta)
```

```python
# After 10 iterations, some engrams have changed into long-term memory and gained large lifespans
# An engram type of zero means that engram has been deleted
>>> len(memoria.engrams)
72
>>> memoria.engrams.engrams_types
tensor([[3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
        [0, 0, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]], dtype=torch.uint8)
>>> EngramType.LONGTERM
<EngramType.LONGTERM: 3>
>>> EngramType.NULL
<EngramType.NULL: 0>
>>> memoria.engrams.lifespan
tensor([[ 9.,  1.,  8.,  2., 16.,  5., 13.,  7.,  7.,  3.,  3.,  4.,  3.,  3.,
          4.,  2.,  2.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  2.,  6.,  1.,  1.,
          2.,  2.,  2.,  2.,  2.,  2.,  2.,  2.],
        [-1., -1.,  3.,  2., 19., 21., 11.,  6., 14.,  1.,  5.,  1.,  5.,  1.,
          5.,  1.,  1.,  8.,  2.,  1.,  1.,  1.,  2.,  1.,  1.,  1.,  1.,  1.,
          2.,  2.,  2.,  2.,  2.,  2.,  2.,  2.]])
```
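
Below is a minimal, hypothetical sketch of how this add/remind/adjust loop could be wired into a model's forward pass. The toy encoder and the fixed lifespan delta are illustrative assumptions, not part of the Memoria API; see the paper and the `experiment` directory for the actual integration.

```python
import torch
import torch.nn as nn
from memoria import Memoria

# Hypothetical toy encoder; the paper integrates Memoria with Transformer language models.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=1
)
memoria = Memoria(
    num_reminded_stm=4,
    stm_capacity=16,
    ltm_search_depth=5,
    initial_lifespan=3,
    num_final_ltms=4,
)

def step(segment: torch.Tensor) -> torch.Tensor:
    # Follow the tutorial's default loop: add, remind, adjust.
    memoria.add_working_memory(segment)
    reminded_memories, reminded_indices = memoria.remind()

    # Use the reminded engrams as extra context for the current segment.
    context = torch.cat([reminded_memories, segment], dim=1)
    hidden = encoder(context)

    # Illustrative fixed lifespan delta; a real model would derive it from how useful
    # each reminded memory was (e.g., from attention weights), as described in the paper.
    lifespan_delta = torch.ones_like(reminded_indices).float()
    memoria.adjust_lifespan_and_memories(reminded_indices, lifespan_delta)

    # Return outputs for the current segment positions only.
    return hidden[:, reminded_memories.size(1):]

outputs = step(torch.rand(2, 8, 64))
```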

# Citation

```bibtex
@misc{park2023memoria,
  title         = {Memoria: Hebbian Memory Architecture for Human-Like Sequential Processing},
  author        = {Sangjun Park and JinYeong Bak},
  year          = {2023},
  eprint        = {2310.03052},
  archiveprefix = {arXiv},
  primaryclass  = {cs.LG}
}
```



            
