Attention-and-Transformers

Name: Attention-and-Transformers
Version: 0.0.15
Home page: https://github.com/veb-101/Attention-and-Transformers
Summary: Building attention mechanisms and Transformer models from scratch. Alias ATF.
Upload time: 2022-12-17 19:13:12
Author: Vaibhav Singh
Requires Python: >=3.7,<3.11.*
License: Apache 2.0
Keywords: tensorflow keras attention transformers
Requirements: tensorflow tensorflow-addons tensorflow-datasets livelossplot opencv-python Pillow scikit-learn pandas matplotlib scikit-image
## Attention mechanisms and Transformers

[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/Attention-and-Transformers)](https://www.python.org/) [![TensorFlow](https://img.shields.io/badge/Tensorflow-2.10%20%7C%202.11-orange?logo=tensorflow)](https://github.com/tensorflow/tensorflow/releases/) [![PyPI version](https://badge.fury.io/py/Attention-and-Transformers.svg)](https://badge.fury.io/py/Attention-and-Transformers) [![TensorFlow](https://img.shields.io/badge/TensorFlow-%23FF6F00.svg?style=for-the-badge&logo=TensorFlow&logoColor=white)](https://www.tensorflow.org/)

* The goal of this repository is to host basic architectures and model-training code for different attention mechanisms and Transformer architectures.
* At the moment, I am more interested in learning and recreating these architectures from scratch than in full-fledged training. For now, I'll be training these models on small datasets.

#### Installation

* Using pip to install from [PyPI](https://pypi.org/project/Attention-and-Transformers/)

```bash
pip install Attention-and-Transformers
```

* Using pip to install the latest version from GitHub

```bash
pip install git+https://github.com/veb-101/Attention-and-Transformers.git
```

* Local clone and install

```bash
git clone https://github.com/veb-101/Attention-and-Transformers.git atf
cd atf
python setup.py install
```

**Example Use**

```bash
python load_test.py
```
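The package builds these mechanisms from scratch in TensorFlow/Keras. As a rough, dependency-free illustration of the core computation they all share, here is a sketch of scaled dot-product attention from "Attention is all you need"; the function names and list-of-lists shapes are my own for illustration, not this package's API:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scalars.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: lists of vectors (lists of floats).
    Returns one output vector per query: softmax(Q K^T / sqrt(d)) V."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query with every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Multi-head attention simply runs several of these in parallel on learned projections of the input and concatenates the results.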

**Attention Mechanisms**

<table>
<thead>
<tr>
<th style="text-align:center">
<strong># No.</strong>
</th>
<th style="text-align:center">
<strong>Mechanism</strong>
</th>
<th style="text-align:center">
<strong>Paper</strong>
</th>
</tr>
</thead>
<tbody>

<tr>
<td style="text-align:center">1</td>
<td style="text-align:center">
<a href="https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/ViT/multihead_self_attention.py">Multi-head Self Attention</a>
</td>
<td style="text-align:center">
<a href="https://arxiv.org/abs/1706.03762">Attention is all you need</a>
</td>
</tr>
<tr>
<td style="text-align:center">2</td>
<td style="text-align:center">
<a href="https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/MobileViT_v1/multihead_self_attention_2D.py">Multi-head Self Attention 2D</a>
</td>
<td style="text-align:center">
<a href="https://arxiv.org/abs/2110.02178">MobileViT V1</a>
</td>
</tr>
<tr>
<td style="text-align:center">3</td>
<td style="text-align:center">
<a href="https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/MobileViT_v2/linear_attention.py">Separable Self Attention</a>
</td>
<td style="text-align:center">
<a href="https://arxiv.org/abs/2206.02680">MobileViT V2</a>
</td>
</tr>
</tbody>
</table>
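Separable self-attention (row 3) replaces the N×N attention matrix with a single softmax over per-token scalar scores, making the cost linear in sequence length. A rough pure-Python sketch of that idea, following the MobileViT-V2 paper (the final output projection is omitted, and all names here are illustrative, not this package's API):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scalars.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def matvec(W, x):
    # W: list of rows (each of length d_in); x: vector of length d_in.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def separable_self_attention(X, w_i, W_k, W_v):
    """X: N tokens, each a d-vector. w_i: d-vector projecting each token
    to one scalar score. W_k, W_v: d x d projection matrices.
    Cost is linear in N: one softmax over N scalars, no N x N matrix."""
    # 1. One scalar score per token; softmax over the whole sequence.
    scores = softmax([sum(w * xi for w, xi in zip(w_i, x)) for x in X])
    # 2. Global context vector: score-weighted sum of key projections.
    K = [matvec(W_k, x) for x in X]
    d = len(K[0])
    context = [sum(s * k[j] for s, k in zip(scores, K)) for j in range(d)]
    # 3. Per token: context gated elementwise by ReLU of its value projection.
    V = [matvec(W_v, x) for x in X]
    return [[c * max(v_j, 0.0) for c, v_j in zip(context, v)] for v in V]
```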

**Transformer Models**

<table>
<thead>
<tr>
<th style="text-align:center">
<strong># No.</strong>
</th>
<th style="text-align:center">
<strong>Models</strong>
</th>
<th style="text-align:center">
<strong>Paper</strong>
</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">1</td>
<td style="text-align:center">
<a href="https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/ViT/vision_transformer.py">Vision Transformer</a>
</td>
<td style="text-align:center">
<a href="https://arxiv.org/abs/2010.11929">An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</a>
</td>
</tr>
<tr>
<td style="text-align:center">2</td>
<td style="text-align:center">
<a href="https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/MobileViT_v1/mobile_vit_v1.py">MobileViT-V1</a>
</td>
<td style="text-align:center">
<a href="https://arxiv.org/abs/2110.02178">MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer</a>
</td>
</tr>
<tr>
<td style="text-align:center">3</td>
<td style="text-align:center"><a href="https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/MobileViT_v2/mobile_vit_v2.py">MobileViT-V2</a></td>
<td style="text-align:center">
<a href="https://arxiv.org/abs/2206.02680">Separable Self-attention for Mobile Vision Transformers</a>
</td>
</tr>
</tbody>
</table>
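The Vision Transformer treats an image as a sequence of flattened patches ("16x16 words"). A minimal sketch of that patch-extraction step, assuming a row-major H×W×C nested-list image (the learned linear projection and position embeddings are omitted, and this is not this package's API):

```python
def image_to_patches(img, patch):
    """img: H x W x C nested lists; patch: patch side length.
    Returns (H // patch) * (W // patch) flattened patch vectors of length
    patch * patch * C, in row-major order as in ViT."""
    H, W = len(img), len(img[0])
    patches = []
    for py in range(0, H, patch):
        for px in range(0, W, patch):
            flat = []
            for y in range(py, py + patch):
                for x in range(px, px + patch):
                    flat.extend(img[y][x])  # append all C channel values
            patches.append(flat)
    return patches
```

For a 224x224 image with 16x16 patches this yields 196 tokens, each of length 16*16*3 = 768.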

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/veb-101/Attention-and-Transformers",
    "name": "Attention-and-Transformers",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.7,<3.11.*",
    "maintainer_email": "",
    "keywords": "tensorflow keras attention transformers",
    "author": "Vaibhav Singh",
    "author_email": "vaibhav.singh.3001@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/0c/5d/1b143ff86a9182751ac0ddabac126afce2f1c0e6ee91e812f46d0c9d2b3f/Attention_and_Transformers-0.0.15.tar.gz",
    "platform": null,
    "description": "## Attention mechanisms and Transformers\n\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/Attention-and-Transformers)](https://www.python.org/) [![TensorFlow](https://img.shields.io/badge/Tensorflow-2.10%20%7C%202.11-orange?logo=tensorflow)](https://github.com/tensorflow/tensorflow/releases/) [![PyPI version](https://badge.fury.io/py/Attention-and-Transformers.svg)](https://badge.fury.io/py/Attention-and-Transformers) [![TensorFlow](https://img.shields.io/badge/TensorFlow-%23FF6F00.svg?style=for-the-badge&logo=TensorFlow&logoColor=white)](https://www.tensorflow.org/)\n\n* This goal of this repository is to host basic architecture and model traning code associated with the different attention mechanisms and transformer architecture.\n* At the moment, I more interested in learning and recreating these new architectures from scratch than full-fledged training. For now, I'll just be training these models on small datasets.\n\n#### Installation\n\n* Using pip to install from [pypi](https://pypi.org/project/Attention-and-Transformers/)\n\n```bash\npip install Attention-and-Transformers\n```\n\n* Using pip to install latest version from github\n\n```bash\npip install git+https://github.com/veb-101/Attention-and-Transformers.git\n```\n\n* Local clone and install\n\n```bash\ngit clone https://github.com/veb-101/Attention-and-Transformers.git atf\ncd atf\npython setup.py install\n```\n\n**Example Use**\n\n```bash\npython load_test.py\n```\n\n**Attention Mechanisms**\n\n<table>\n<thead>\n<tr>\n<th style=\"text-align:center\">\n<strong># No.</strong>\n</th>\n<th style=\"text-align:center\">\n<strong>Mechanism</strong>\n</th>\n<th style=\"text-align:center\">\n<strong>Paper</strong>\n</th>\n</tr>\n</thead>\n<tbody>\n\n<tr>\n<td style=\"text-align:center\">1</td>\n<td style=\"text-align:center\">\n<a href=\"https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/ViT/multihead_self_attention.py\">Multi-head Self Attention</a>\n</td>\n<td style=\"text-align:center\">\n<a href=\"https://arxiv.org/abs/1706.03762\">Attention is all you need</a>\n</td>\n</tr>\n<tr>\n<td style=\"text-align:center\">2</td>\n<td style=\"text-align:center\">\n<a href=\"https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/MobileViT_v1/multihead_self_attention_2D.py\">Multi-head Self Attention 2D</a>\n</td>\n<td style=\"text-align:center\">\n<a href=\"https://arxiv.org/abs/2110.02178\">MobileViT V1</a>\n</td>\n</tr>\n<tr>\n<td style=\"text-align:center\">2</td>\n<td style=\"text-align:center\">\n<a href=\"https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/MobileViT_v2/linear_attention.py\">Separable Self Attention</a>\n</td>\n<td style=\"text-align:center\">\n<a href=\"https://arxiv.org/abs/2206.02680\">MobileViT V2</a>\n</td>\n</tr>\n</tbody>\n</table>\n\n**Transformer Models**\n\n<table>\n<thead>\n<tr>\n<th style=\"text-align:center\">\n<strong># No.</strong>\n</th>\n<th style=\"text-align:center\">\n<strong>Models</strong>\n</th>\n<th style=\"text-align:center\">\n<strong>Paper</strong>\n</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td style=\"text-align:center\">1</td>\n<td style=\"text-align:center\">\n<a href=\"https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/ViT/vision_transformer.py\">Vision Transformer</a>\n</td>\n<td style=\"text-align:center\">\n<a href=\"https://arxiv.org/abs/2010.11929\">An Image is Worth 16x16 Words:</a>\n</td>\n</tr>\n<tr>\n<td style=\"text-align:center\">2</td>\n<td style=\"text-align:center\">\n<a href=\"https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/MobileViT_v1/mobile_vit_v1.py\">MobileViT-V1</a>\n</td>\n<td style=\"text-align:center\">\n<a href=\"https://arxiv.org/abs/2110.02178\">MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer</a>\n</td>\n</tr>\n<tr>\n<td style=\"text-align:center\">3</td>\n<td style=\"text-align:center\"><a href=\"https://github.com/veb-101/Attention-and-Transformers/blob/main/Attention_and_Transformers/MobileViT_v2/mobile_vit_v2.py\">MobileViT-V2</a></td>\n<td style=\"text-align:center\">\n<a href=\"https://arxiv.org/abs/2206.02680\">Separable Self-attention for Mobile Vision Transformers</a>\n</td>\n</tr>\n</tbody>\n</table>\n",
    "bugtrack_url": null,
    "license": "Apache 2.0",
    "summary": "Building attention mechanisms and Transformer models from scratch. Alias ATF.",
    "version": "0.0.15",
    "split_keywords": [
        "tensorflow",
        "keras",
        "attention",
        "transformers"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "md5": "e91cb98da61973197058849f34b4c2c8",
                "sha256": "a32c67a0fcb200627baad4f66e7bcec4edc96771f1faf67d7af1c669ce139ae3"
            },
            "downloads": -1,
            "filename": "Attention_and_Transformers-0.0.15-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "e91cb98da61973197058849f34b4c2c8",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7,<3.11.*",
            "size": 24772,
            "upload_time": "2022-12-17T19:13:10",
            "upload_time_iso_8601": "2022-12-17T19:13:10.744046Z",
            "url": "https://files.pythonhosted.org/packages/86/2c/83acacb0fa37c7e47809d896287e2440ba66682f4f948e423148dcca8482/Attention_and_Transformers-0.0.15-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "md5": "c75f073989d43cbef4aea9d4950d427d",
                "sha256": "18de0593625a77b0dacff19e64ef77b6860e4e9a8d6f06e99f9448c127f0fd07"
            },
            "downloads": -1,
            "filename": "Attention_and_Transformers-0.0.15.tar.gz",
            "has_sig": false,
            "md5_digest": "c75f073989d43cbef4aea9d4950d427d",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7,<3.11.*",
            "size": 17527,
            "upload_time": "2022-12-17T19:13:12",
            "upload_time_iso_8601": "2022-12-17T19:13:12.190814Z",
            "url": "https://files.pythonhosted.org/packages/0c/5d/1b143ff86a9182751ac0ddabac126afce2f1c0e6ee91e812f46d0c9d2b3f/Attention_and_Transformers-0.0.15.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2022-12-17 19:13:12",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "github_user": "veb-101",
    "github_project": "Attention-and-Transformers",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [
        {
            "name": "tensorflow",
            "specs": [
                [
                    ">=",
                    "2.10.0"
                ]
            ]
        },
        {
            "name": "tensorflow-addons",
            "specs": []
        },
        {
            "name": "tensorflow-datasets",
            "specs": []
        },
        {
            "name": "livelossplot",
            "specs": []
        },
        {
            "name": "opencv-python",
            "specs": []
        },
        {
            "name": "Pillow",
            "specs": []
        },
        {
            "name": "scikit-learn",
            "specs": []
        },
        {
            "name": "pandas",
            "specs": []
        },
        {
            "name": "matplotlib",
            "specs": []
        },
        {
            "name": "scikit-image",
            "specs": []
        }
    ],
    "lcname": "attention-and-transformers"
}
        