mega-vit

Name: mega-vit
Version: 0.0.4
Home page: https://github.com/kyegomez/mega-vit
Summary: mega-vit - Pytorch
Author: Kye Gomez
Requires Python: >=3.6,<4.0
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Upload time: 2023-10-03 05:57:31
Requirements: none recorded
[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# MegaVit
An open-source implementation of the model from the paper "Scaling Vision Transformers to 22 Billion Parameters".



[Paper Link](https://arxiv.org/pdf/2302.05442.pdf)

# Appreciation
* Lucidrains
* Agorians



# Install
`pip install mega-vit`

# Usage
- Simple usage:
```python
import torch
from mega_vit.main import MegaVit

v = MegaVit(
    image_size = 256,
    patch_size = 32,
    num_classes = 1000,
    dim = 1024,
    depth = 6,
    heads = 16,
    mlp_dim = 2048,
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(1, 3, 256, 256)

preds = v(img) # (1, 1000)
print(preds)
```

- Hyperparameters as stated in the paper:
```python
import torch
from mega_vit.main import MegaVit

v = MegaVit(
    image_size = 224,
    patch_size = 14,
    num_classes = 1000,
    dim = 6144,
    depth = 48,
    heads = 48,
    mlp_dim = 2048,
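    # note: the paper's ViT-22B config reportedly uses mlp_dim = 24576 (4x the width); 2048 is a smaller stand-in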
    dropout = 0.1,
    emb_dropout = 0.1
)

img = torch.randn(1, 3, 224, 224)

preds = v(img) # (1, 1000)
print(preds)
```
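
In both snippets `preds` holds raw, unnormalized class logits of shape `(1, 1000)`. A small, hedged follow-up showing one way to read them; the `torch.randn` tensor below stands in for the real output:

```python
import torch

# Stand-in for the (1, 1000) logits returned by either call above.
preds = torch.randn(1, 1000)

probs = torch.softmax(preds, dim=-1)          # normalize raw logits into class probabilities
top_prob, top_class = probs.topk(5, dim=-1)   # five most likely class indices and their probabilities
print(top_class, top_prob)
```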

# Model Architecture
- A standard ViT modified with parallel attention and MLP layers, query/key (QK) normalization, and biases omitted from the QKV projections and LayerNorms.
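
A minimal sketch of those changes in isolation, assuming single-head attention for brevity; `QKNormAttention` and `ParallelBlock` are illustrative names, not the package's actual modules:

```python
import torch
from torch import nn

class QKNormAttention(nn.Module):
    """Single-head attention with LayerNorm on queries/keys and no projection biases (illustrative)."""
    def __init__(self, dim: int):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)   # biases omitted
        self.q_norm = nn.LayerNorm(dim)                      # QK normalization keeps attention logits bounded
        self.k_norm = nn.LayerNorm(dim)
        self.to_out = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x):                                    # x: (batch, tokens, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k = self.q_norm(q), self.k_norm(k)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.to_out(attn @ v)

class ParallelBlock(nn.Module):
    """Parallel layer: attention and MLP read the same normalized input and their outputs are summed."""
    def __init__(self, dim: int, mlp_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = QKNormAttention(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_dim, bias=False),
            nn.GELU(),
            nn.Linear(mlp_dim, dim, bias=False),
        )

    def forward(self, x):
        y = self.norm(x)
        return x + self.attn(y) + self.mlp(y)                # parallel residual branches, not sequential

x = torch.randn(1, 64, 256)
print(ParallelBlock(256, 512)(x).shape)                      # torch.Size([1, 64, 256])
```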

----
# Dataset Strategy
The paper trains ViT-22B on a version of the JFT dataset that has been extended to around 4 billion images. JFT is a large-scale dataset scraped from the internet, originally containing over 300 million images labeled with a hierarchical taxonomy of 30,000 categories. 

The authors do not provide full details on how the dataset was extended from the original JFT to 4 billion images. However, the goal seems to be creating a larger and more diverse training set to support scaling up the model size. Pre-training on larger datasets enables learning more robust and generalizable visual representations.

The authors evaluate ViT-22B on a comprehensive set of 39 datasets covering various domains like image classification, dense prediction tasks, video, and fairness benchmarks. Using such a diverse evaluation suite allows them to thoroughly assess the scalability and transferability of ViT-22B across different domains and data distributions.

Below is a table summarizing some of the key datasets used in the paper:

| Dataset | Domain | Images | Classes |
|---------|--------|--------|---------|
| JFT (training set) | Internet images | ~4 billion | 30,000 |
| ImageNet | Natural images | 1.28M | 1,000 |
| ImageNet-C | Corrupted ImageNet images | 1.28M | 1,000 |
| ImageNet-R | Renditions of ImageNet classes (art, sketches, etc.) | 30K | 200 |
| ImageNet-A | Natural adversarial examples | 7.5K | 200 |
| ObjectNet | Natural images with controlled viewpoints/backgrounds | 113K | 113 |
| CIFAR-10 | Tiny natural images | 60K | 10 |
| CIFAR-100 | Tiny natural images | 60K | 100 |
| ADE20K | Scene parsing | 25K | 150 |
| Kinetics-400 | Human action videos | 400K | 400 |
| CelebA | Celebrity faces | 202K | 40 attributes |
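
As an illustration of running one of these evaluation sets through the model, here is a hedged sketch that assumes the `MegaVit` constructor shown in Usage and a `torchvision` install; the deliberately small, randomly initialized configuration is for demonstration only, so the reported accuracy will be near chance:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from mega_vit.main import MegaVit

# Upsample CIFAR-10's 32x32 images to the 224x224 resolution this configuration expects.
transform = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)
loader = DataLoader(test_set, batch_size=64, shuffle=False)

# A small configuration; the paper-scale 22B-parameter model would not fit on a single consumer GPU.
model = MegaVit(
    image_size=224, patch_size=14, num_classes=10,
    dim=1024, depth=6, heads=16, mlp_dim=2048,
    dropout=0.1, emb_dropout=0.1,
)
model.eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        correct += (model(images).argmax(dim=-1) == labels).sum().item()
        total += labels.size(0)
print(f"untrained-model accuracy (chance is ~0.10): {correct / total:.3f}")
```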


# License
MIT

# Citations
```
@misc{2302.05442,
Author = {Mostafa Dehghani and Josip Djolonga and Basil Mustafa and Piotr Padlewski and Jonathan Heek and Justin Gilmer and Andreas Steiner and Mathilde Caron and Robert Geirhos and Ibrahim Alabdulmohsin and Rodolphe Jenatton and Lucas Beyer and Michael Tschannen and Anurag Arnab and Xiao Wang and Carlos Riquelme and Matthias Minderer and Joan Puigcerver and Utku Evci and Manoj Kumar and Sjoerd van Steenkiste and Gamaleldin F. Elsayed and Aravindh Mahendran and Fisher Yu and Avital Oliver and Fantine Huot and Jasmijn Bastings and Mark Patrick Collier and Alexey Gritsenko and Vighnesh Birodkar and Cristina Vasconcelos and Yi Tay and Thomas Mensink and Alexander Kolesnikov and Filip Pavetić and Dustin Tran and Thomas Kipf and Mario Lučić and Xiaohua Zhai and Daniel Keysers and Jeremiah Harmsen and Neil Houlsby},
Title = {Scaling Vision Transformers to 22 Billion Parameters},
Year = {2023},
Eprint = {arXiv:2302.05442},
}
```

# Todo
- [ ] Add flash attention, with LayerNorm before attention and then LayerNorm on the query/key values.
- [ ] Basic training script on CIFAR.
- [ ] As with any large-scale model, it is difficult to understand how ViT-22B arrives at a specific decision, which can undermine trust and accountability; add a mechanism to trace a prediction back through the model.
- [ ] Create logic to train the decoder for 300k steps with a batch size of 64 using Adam (Kingma and Ba, 2015), clipping gradients to a global norm of 0.05 to stabilize training; linearly increase the learning rate from 0 to 0.0002 over the first 2,500 steps, then decay it back to 0 with a cosine schedule (Loshchilov and Hutter, 2017). See the schedule sketch below.
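
A hedged sketch of that warmup-plus-cosine schedule in PyTorch; the tiny `nn.Linear` model and the random batches are placeholders for the real decoder and data loader:

```python
import math
import torch
from torch import nn

model = nn.Linear(32, 10)                      # placeholder for the actual decoder
total_steps, warmup_steps = 300_000, 2_500
peak_lr, clip_norm = 2e-4, 0.05

optimizer = torch.optim.Adam(model.parameters(), lr=peak_lr)

def lr_at(step: int) -> float:
    """Linear warmup from 0 to peak_lr over warmup_steps, then cosine decay back to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

for step in range(total_steps):
    images = torch.randn(64, 32)               # batch size 64, placeholder features
    labels = torch.randint(0, 10, (64,))
    for group in optimizer.param_groups:
        group["lr"] = lr_at(step)               # set the scheduled learning rate for this step
    loss = nn.functional.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), clip_norm)   # clip global grad norm to 0.05
    optimizer.step()
```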
            
