pytorch-fast-transformers


Name: pytorch-fast-transformers
Version: 0.4.0
Home page: https://github.com/idiap/fast-transformers
Summary: Provide a library with fast transformer implementations.
Upload time: 2021-04-15 13:17:57
Maintainers: Angelos Katharopoulos, Apoorv Vyas
License: MIT
Requirements: none recorded
Fast Transformers
=================

Transformers are very successful models that achieve state-of-the-art
performance on many natural language tasks. However, they are very difficult to
scale to long sequences due to the quadratic scaling of self-attention.
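
Roughly, the cost difference comes from the order in which the attention
products are evaluated. In the kernelized formulation of the "Transformers are
RNNs" paper listed under Research below (with N the sequence length, D the
feature dimension and phi a feature map), softmax attention materializes an
N-by-N matrix, while linear attention regroups the product so that only
D-by-D intermediates appear:

.. math::

    V' = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{D}}\right) V
    \qquad \Rightarrow \qquad \mathcal{O}(N^2 D)

    V'_i = \frac{\phi(q_i)^\top \sum_j \phi(k_j)\, v_j^\top}{\phi(q_i)^\top \sum_j \phi(k_j)}
    \qquad \Rightarrow \qquad \mathcal{O}(N D^2)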

This library was developed for our research on fast attention for transformers.
You can find a list of our papers `in the docs
<https://fast-transformers.github.io>`_ as well as related papers and papers
that we have implemented.

Quick-start
-----------

The following code builds a transformer with softmax attention and one with
linear attention and compares the time required by each to encode a sequence
with 1000 elements.

.. code:: python

    import torch
    from fast_transformers.builders import TransformerEncoderBuilder

    # Create the builder for our transformers
    builder = TransformerEncoderBuilder.from_kwargs(
        n_layers=8,
        n_heads=8,
        query_dimensions=64,
        value_dimensions=64,
        feed_forward_dimensions=1024
    )

    # Build a transformer with softmax attention
    builder.attention_type = "full"
    softmax_model = builder.get()

    # Build a transformer with linear attention
    builder.attention_type = "linear"
    linear_model = builder.get()

    # Construct the dummy input
    X = torch.rand(10, 1000, 8*64)

    # Prepare everything for CUDA
    X = X.cuda()
    softmax_model.cuda()
    softmax_model.eval()
    linear_model.cuda()
    linear_model.eval()

    # Warmup the GPU
    with torch.no_grad():
        softmax_model(X)
        linear_model(X)
    torch.cuda.synchronize()

    # Measure the execution time
    softmax_start = torch.cuda.Event(enable_timing=True)
    softmax_end = torch.cuda.Event(enable_timing=True)
    linear_start = torch.cuda.Event(enable_timing=True)
    linear_end = torch.cuda.Event(enable_timing=True)

    with torch.no_grad():
        softmax_start.record()
        y = softmax_model(X)
        softmax_end.record()
        torch.cuda.synchronize()
        print("Softmax: ", softmax_start.elapsed_time(softmax_end), "ms")
        # Softmax: 144 ms (on a GTX1080Ti)

    with torch.no_grad():
        linear_start.record()
        y = linear_model(X)
        linear_end.record()
        torch.cuda.synchronize()
        print("Linear: ", linear_start.elapsed_time(linear_end), "ms")
        # Linear: 68 ms (on a GTX1080Ti)

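The comparison above assumes a CUDA device. If only a CPU is available, a rough
comparison can still be made with wall-clock timing; the snippet below is a
small sketch (not taken from the library's documentation) that builds smaller
models with the same builder API and times them with `time.perf_counter`:

.. code:: python

    import time

    import torch
    from fast_transformers.builders import TransformerEncoderBuilder

    # Same builder API as in the quick-start, but a smaller model and a shorter
    # sequence so that CPU inference stays quick.
    builder = TransformerEncoderBuilder.from_kwargs(
        n_layers=2,
        n_heads=4,
        query_dimensions=32,
        value_dimensions=32,
        feed_forward_dimensions=512
    )

    builder.attention_type = "full"
    softmax_model = builder.get().eval()
    builder.attention_type = "linear"
    linear_model = builder.get().eval()

    # Batch of 2 sequences of length 512; the model dimension is
    # n_heads * query_dimensions, exactly as in the quick-start above.
    X = torch.rand(2, 512, 4 * 32)

    for name, model in [("Softmax", softmax_model), ("Linear", linear_model)]:
        with torch.no_grad():
            model(X)  # warm-up pass
            start = time.perf_counter()
            model(X)
            print(name, (time.perf_counter() - start) * 1000, "ms")
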
Dependencies & Installation
---------------------------

The fast transformers library has the following dependencies:

* PyTorch
* C++ toolchain
* CUDA toolchain (if you want to compile for GPUs)

For most machines installation should be as simple as:

.. code:: bash

    pip install --user pytorch-fast-transformers

Note: macOS users should ensure they have `llvm` and `libomp` installed.
Using the `homebrew <https://brew.sh>`_ package manager, this can be
accomplished by running `brew install llvm libomp`.
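
If the pre-built package does not work for your setup, installing from source
is another option. The following is a minimal sketch of a from-source install
(the clone-and-install steps are a common pip pattern, not taken from the
official instructions):

.. code:: bash

    # Build and install directly from the repository (needs the C++ toolchain,
    # and the CUDA toolchain if the GPU kernels should be compiled).
    git clone https://github.com/idiap/fast-transformers.git
    cd fast-transformers
    pip install --user .

    # Quick smoke test: the import used in the quick-start should succeed.
    python -c "from fast_transformers.builders import TransformerEncoderBuilder"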

Documentation
-------------

A dedicated `documentation site
<https://fast-transformers.github.io/>`_ is available, but you are also
encouraged to read the `source code <https://github.com/idiap/fast-transformers>`_.

Research
--------

Ours
~~~~

To read about the theory behind some of the attention implementations in this
library, we encourage you to follow our research.

* Transformers are RNNs: Fast Autoregressive Transformers with
  Linear Attention (`2006.16236 <https://arxiv.org/abs/2006.16236>`_)
* Fast Transformers with Clustered Attention
  (`2007.04825 <https://arxiv.org/abs/2007.04825>`_)
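
The first paper's title refers to the observation that, with a causal mask,
linear attention can be evaluated as a recurrence with a fixed-size state,
which is what makes autoregressive inference fast. Sketched in the paper's
notation, with phi the feature map:

.. math::

    S_i = S_{i-1} + \phi(k_i)\, v_i^\top, \qquad
    Z_i = Z_{i-1} + \phi(k_i), \qquad
    V'_i = \frac{\phi(q_i)^\top S_i}{\phi(q_i)^\top Z_i}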

If you found our research helpful or influential, please consider citing it:

.. code::

    @inproceedings{katharopoulos_et_al_2020,
        author = {Katharopoulos, A. and Vyas, A. and Pappas, N. and Fleuret, F.},
        title = {Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention},
        booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
        year = {2020}
    }

    @inproceedings{vyas_et_al_2020,
        author = {Vyas, A. and Katharopoulos, A. and Fleuret, F.},
        title = {Fast Transformers with Clustered Attention},
        booktitle = {Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS)},
        year = {2020}
    }

By others
~~~~~~~~~

* Efficient Attention: Attention with Linear Complexities (`1812.01243
  <https://arxiv.org/abs/1812.01243>`_)
* Linformer: Self-Attention with Linear Complexity (`2006.04768
  <https://arxiv.org/abs/2006.04768>`_)
* Reformer: The Efficient Transformer (`2001.04451
  <https://arxiv.org/abs/2001.04451>`_)

Support, License and Copyright
------------------------------

This software is distributed under the **MIT** license, which pretty much means
that you can use it however you want and for whatever reason you want. All the
information regarding support, copyright and the license can be found in the
`LICENSE <https://github.com/idiap/fast-transformers/blob/master/LICENSE>`_
file in the repository.
            
