lm-infinite

Name: lm-infinite
Version: 0.0.2
Home page: https://github.com/kyegomez/LM-Infinite
Summary: Paper - Pytorch
Author: Kye Gomez
Requires Python: >=3.6,<4.0
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Upload time: 2023-09-06 16:22:07
            [![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# LM-INFINITE: SIMPLE ON-THE-FLY LENGTH GENERALIZATION FOR LARGE LANGUAGE MODELS

LM-Infinite is a solution proposed by Chi Han, Qifan Wang, Wenhan Xiong, Yu Chen, Heng Ji, and Sinong Wang to address the length generalization failure of Large Language Models (LLMs) on long sequences. LLMs, such as Transformer-based models, have shown impressive performance in various domains but struggle with longer reasoning processes and larger contexts. Current pre-training schemes truncate training sequences to a fixed length, and even with relative positional encoding, LLMs struggle to generate coherent text or perform downstream tasks on contexts longer than those seen in training.

The authors investigate the main out-of-distribution factors contributing to this problem and propose LM-Infinite as an efficient solution. LM-Infinite requires only a Λ-shaped attention mask and a distance limit, without any parameter updates or additional training, and can be applied to any LLM that uses relative positional encoding. LM-Infinite demonstrates consistent fluency and generation quality for sequences as long as 32k tokens on datasets like ArXiv and OpenWebText2, with a decoding speedup of 2.72x. Furthermore, it continues to perform well on inputs much longer than the training length in downstream tasks like passkey retrieval, where vanilla models fail immediately.
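To make the Λ-shaped mask concrete, here is a rough sketch of the idea (not the package's actual implementation): each query token attends causally to the first `n_global` tokens of the sequence (the "vertical" arm of the Λ) plus a sliding local window of the most recent `l_local` tokens (the "diagonal" arm). The parameter names are illustrative.

```python
import torch

def lambda_mask(seq_len: int, n_global: int, l_local: int) -> torch.Tensor:
    # Boolean mask of shape (seq_len, seq_len): entry (i, j) is True
    # when query position i is allowed to attend to key position j.
    i = torch.arange(seq_len).unsqueeze(1)  # query positions (column vector)
    j = torch.arange(seq_len).unsqueeze(0)  # key positions (row vector)
    causal = j <= i                         # no attending to the future
    global_arm = j < n_global               # always-visible starting tokens
    local_arm = (i - j) <= l_local          # recent-window tokens
    return causal & (global_arm | local_arm)

mask = lambda_mask(seq_len=8, n_global=2, l_local=3)
print(mask.int())
```

Tokens in the "middle" (neither among the first `n_global` nor within the local window) are simply never attended to, which is what keeps the per-step cost bounded as the sequence grows.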

[Paper Link](https://arxiv.org/pdf/2308.16137.pdf)

---

# Appreciation
* Lucidrains
* Agorians



# Install
`pip install lm-infinite`

# Usage
```python
import torch
from infinite.main import LMInfinite

d_model = 512     # model (embedding) dimension
seq_len = 100     # sequence length
n_global = 100    # number of always-attended global tokens
l_pretrain = 50   # pre-training length, used as the distance limit

# sample query, key, and value tensors
q = torch.randn(1, seq_len, d_model)
k = torch.randn(1, seq_len, d_model)
v = torch.randn(1, seq_len, d_model)

# LM-Infinite attention module
model = LMInfinite(
    d_model,
    n_global,
    l_pretrain,
)

# forward pass
output = model(q, k, v)
print(output.shape)
```
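The second ingredient named in the paper, the distance limit, can be sketched in a few lines as well. The idea is that relative distances fed to the positional encoding are clamped at the pre-training length, so the model never sees a distance larger than those it was trained on. This is a simplified illustration, not the library's internals.

```python
import torch

def limited_distances(seq_len: int, l_pretrain: int) -> torch.Tensor:
    # Relative distance i - j between each query i and key j,
    # clamped at l_pretrain so no distance exceeds what the
    # model observed during pre-training. Negative entries
    # (j > i, i.e. future keys) are removed by the causal mask.
    i = torch.arange(seq_len).unsqueeze(1)  # query positions
    j = torch.arange(seq_len).unsqueeze(0)  # key positions
    return torch.clamp(i - j, max=l_pretrain)

d = limited_distances(seq_len=6, l_pretrain=3)
print(d)
```

Together with the Λ-shaped mask, this clamping is what lets a model trained at length `l_pretrain` stay in-distribution at inference lengths far beyond it.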
# Architecture

# Todo


# License
MIT

# Citations
```latex
@misc{2308.16137,
  author = {Chi Han and Qifan Wang and Wenhan Xiong and Yu Chen and Heng Ji and Sinong Wang},
  title  = {LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models},
  year   = {2023},
  eprint = {arXiv:2308.16137},
}
```
            
