infini-transformer-pytorch


Name: infini-transformer-pytorch
Version: 0.1.1
Summary: Infini-Transformer in Pytorch
Author: Phil Wang <lucidrains@gmail.com>
Repository: https://github.com/lucidrains/infini-transformer-pytorch
Requires Python: >=3.8
License: MIT License, Copyright (c) 2024 Phil Wang
Upload time: 2024-05-09 14:30:15
Keywords: artificial intelligence, attention mechanism, deep learning, long context, memory, transformers
Requirements: none recorded
<img src="./infini-attention.png" width="300px"></img>

## Infini-Transformer - Pytorch

Implementation of <a href="https://arxiv.org/abs/2404.07143">Infini-Transformer</a> in Pytorch. They use a linear attention scheme to compress past keys and values into a fixed-size memory, and report state-of-the-art results on multiple long-context benchmarks.
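
The core mechanism is a fixed-size associative memory updated with linear attention: each segment's keys and values are folded into a per-head `d × d` matrix, and queries read from it without re-attending over old tokens. Below is a minimal sketch of that update and retrieval following the paper's equations; the tensor names and shapes are illustrative, and this is not the API of this package.

```python
import torch
import torch.nn.functional as F

def elu_plus_one(x):
    # nonlinearity used for linear attention in the paper
    return F.elu(x) + 1.

# toy shapes: (batch, heads, segment length, head dimension)
b, h, n, d = 1, 8, 512, 128
q, k, v = (torch.randn(b, h, n, d) for _ in range(3))

# fixed-size memory and normalizer, carried from segment to segment
memory = torch.zeros(b, h, d, d)
norm   = torch.zeros(b, h, d)

sq, sk = elu_plus_one(q), elu_plus_one(k)

# read: retrieve past values for the current queries via linear attention
retrieved = torch.einsum('b h n d, b h d e -> b h n e', sq, memory)
retrieved = retrieved / torch.einsum('b h n d, b h d -> b h n', sq, norm).clamp(min = 1e-6).unsqueeze(-1)

# write: fold this segment's keys and values into the memory
memory = memory + torch.einsum('b h n d, b h n e -> b h d e', sk, v)
norm   = norm + sk.sum(dim = -2)
```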

Although unlikely to beat <a href="https://github.com/lucidrains/ring-attention-pytorch">Ring Attention</a>, I think it is worth exploring, as the techniques are orthogonal.

<a href="https://www.youtube.com/watch?v=r_UBBfTPcF0">Yannic Kilcher's explanation</a>

## Install

```bash
$ pip install infini-transformer-pytorch
```

## Usage

```python
import torch
from infini_transformer_pytorch import InfiniTransformer

transformer = InfiniTransformer(
    num_tokens = 256,
    dim = 512,
    depth = 8,
    dim_head = 128,  # high head dimension may be part of the reason they got good results (kv has high capacity)
    heads = 8,
    rotary_emb_linear_attn = True
)

x = torch.randint(0, 256, (1, 1024))

# first segment - no past memories or kv cache yet

logits1, cached_kv1, mem1 = transformer(x, return_new_memories = False)

# subsequent segments receive the running kv cache and past memories

logits2, cached_kv2, mem2 = transformer(x, past_memories = mem1, cached_kv = cached_kv1, return_new_memories = False)

# return_new_memories = True folds the cached key / values into new compressive memories

logits3, cached_kv3, mem3 = transformer(x, past_memories = mem2, cached_kv = cached_kv2, return_new_memories = True)
```
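
In other words, a long sequence is processed one segment at a time, threading `cached_kv` and `past_memories` from call to call. The loop below is a sketch under the assumptions implied by the example above (in particular, that `return_new_memories = True` compresses the accumulated kv cache into the memories); it is not taken verbatim from the package.

```python
# hypothetical segment-by-segment loop, reusing `transformer` from above

segment_length = 1024
long_seq = torch.randint(0, 256, (1, 4 * segment_length))

cached_kv = None
memories = None

for segment in long_seq.split(segment_length, dim = -1):
    logits, cached_kv, memories = transformer(
        segment,
        past_memories = memories,
        cached_kv = cached_kv,
        return_new_memories = True  # assumption: fold the kv cache into memories each segment
    )
```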

Training a transformer with recurrence usually trips up a lot of researchers, so to make it easy, just wrap your model with `InfiniTransformerWrapper`.

```python
import torch

from infini_transformer_pytorch import (
    InfiniTransformer,
    InfiniTransformerWrapper
)

# model and wrapper

model = InfiniTransformer(
    num_tokens = 256,
    dim = 512,
    depth = 8,
    dim_head = 128,
    heads = 8,
    rotary_emb_linear_attn = True
)

wrapper = InfiniTransformerWrapper(
    model,
    segment_length = 512,
    detach_mems_every_num_segments = 2 # greater than 1 so the network can learn how to 'write' to the fast weight memories
).cuda()

# mock input

seq = torch.randint(0, 256, (2, 10000)).cuda() # can be an arbitrarily long sequence

# training

loss = wrapper(
    seq,
    backward = True # will automatically segment and accumulate gradients when it detaches the memories
)

# after much data...

# calculating eval loss

with torch.no_grad():
    wrapper.eval()
    eval_loss = wrapper(seq)

# generating is as easy as

output = wrapper.generate(seq_len = 8192, prompt = seq[:, :1])

output.shape # (2, 8192 - 1)
```
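
Since `backward = True` runs the backward passes internally while it segments the sequence, an outer training loop presumably only needs the optimizer step. A minimal sketch, assuming a standard optimizer and a stand-in for a real dataloader:

```python
# continuing from the wrapper example above

from torch.optim import Adam

optim = Adam(model.parameters(), lr = 3e-4)

for _ in range(100):
    seq = torch.randint(0, 256, (2, 10000)).cuda()  # stand-in for real training data

    loss = wrapper(seq, backward = True)  # gradients are accumulated inside the wrapper

    optim.step()
    optim.zero_grad()
```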

## Testing

Train an autoregressive language model on enwik8

```bash
$ python train.py
```

## Todo

- [x] working example with enwik8

## Citations

```bibtex
@inproceedings{Munkhdalai2024LeaveNC,
    title   = {Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention},
    author  = {Tsendsuren Munkhdalai and Manaal Faruqui and Siddharth Gopal},
    year    = {2024},
    url     = {https://api.semanticscholar.org/CorpusID:269033427}
}
```
