x-Metaformer

- Name: x-Metaformer
- Version: 0.3.1
- Home page: https://github.com/romue404/x-metaformer
- Summary: A PyTorch implementation of "MetaFormer Baselines" with optional extensions.
- Upload time: 2023-04-14 10:38:47
- Author: Robert Müller
- License: MIT
- Keywords: artificial intelligence, pytorch, metaformer, transformer, attention, convolutions
- Requirements: none recorded
# 🥞 x-Metaformer

A PyTorch implementation of ["MetaFormer Baselines"](https://arxiv.org/abs/2210.13452) with optional extensions.  
We support various self-supervised pretraining approaches such as [BarlowTwins](https://arxiv.org/abs/2103.03230),
[MoCoV3](https://arxiv.org/abs/2104.02057), and [VICReg](https://arxiv.org/abs/2105.04906) (see `x_metaformer.pretraining`).


## Setup
Simply run:
```
pip install x-metaformer
```

## Example
```py
import torch
from x_metaformer import CAFormer, ConvFormer


my_metaformer = CAFormer(
    in_channels=3,
    depths=(3, 3, 9, 3),
    dims=(64, 128, 320, 512),
    multi_query_attention=False,  # share keys and values across query heads
    use_seqpool=True,  # use sequence pooling from CCT
    init_kernel_size=3,
    init_stride=2,
    drop_path_rate=0.4,
    norm='ln',  # ln, bn, rms (layernorm, batchnorm, rmsnorm)
    use_grn_mlp=True,  # use global response norm in mlps
    use_dual_patchnorm=False,  # norm on both sides for the patch embedding
    use_pos_emb=True,  # use 2D sinusoidal positional embeddings
    head_dim=32,
    num_heads=4,
    attn_dropout=0.1,
    proj_dropout=0.1,
    patchmasking_prob=0.05,  # replace 5% of the initial tokens with a </mask> token
    scale_value=1.0, # scale attention logits by this value
    trainable_scale=False, # if scale can be trained
    num_mem_vecs=0, # additional memory vectors (in the attention layers)
    sparse_topk=0,  # sparsify - keep only top k values (in the attention layers)
    l2=False,   # l2 norm on tokens (in the attention layers) 
    improve_locality=False,  # remove attention on own token
    use_starreglu=False  # use gated StarReLU
)

x   = torch.randn(64, 3, 64, 64)  # B C H W
out = my_metaformer(x, return_embeddings=False)  # returns average pooled tokens
```

            

## Raw data

```json
{
    "_id": null,
    "home_page": "https://github.com/romue404/x-metaformer",
    "name": "x-Metaformer",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "artificial intelligence,pytorch,metaformer,transformer,attention,convolutions",
    "author": "Robert M\u00fcller",
    "author_email": "robert.mueller1990@googlemail.com",
    "download_url": "https://files.pythonhosted.org/packages/9e/94/ee62ac48ceb19d7bbf501d200f56f7f893d6a36e5fd9617343bf99a77190/x-Metaformer-0.3.1.tar.gz",
    "platform": null,
    "description": "# \ud83e\udd5e x-Metaformer\n\nA PyTorch implementation of [\"MetaFormer Baselines\"](https://arxiv.org/abs/2210.13452) with optional extensions.  \nWe support various self-supervised pretraining approaches such as [BarlowTwins](https://arxiv.org/abs/2103.03230),\n[MoCoV3](https://arxiv.org/abs/2104.02057) or [VICReg](https://arxiv.org/abs/2105.04906) (see ```x_metaformer.pretraining```).\n\n\n## Setup\nSimply run:\n```pip install x-metaformer```\n\n## Example\n```py\nimport torch\nfrom x_metaformer import CAFormer, ConvFormer\n\n\nmy_metaformer = CAFormer(\n    in_channels=3,\n    depths=(3, 3, 9, 3),\n    dims=(64, 128, 320, 512),\n    multi_query_attention=False,  # share keys and values across query heads\n    use_seqpool=True,  # use sequence pooling vom CCT\n    init_kernel_size=3,\n    init_stride=2,\n    drop_path_rate=0.4,\n    norm='ln',  # ln, bn, rms (layernorm, batchnorm, rmsnorm)\n    use_grn_mlp=True,  # use global response norm in mlps\n    use_dual_patchnorm=False,  # norm on both sides for the patch embedding\n    use_pos_emb=True,  # use 2d sinusodial positional embeddings\n    head_dim=32,\n    num_heads=4,\n    attn_dropout=0.1,\n    proj_dropout=0.1,\n    patchmasking_prob=0.05,  # replace 5% of the initial tokens with a </mask> token\n    scale_value=1.0, # scale attention logits by this value\n    trainable_scale=False, # if scale can be trained\n    num_mem_vecs=0, # additional memory vectors (in the attention layers)\n    sparse_topk=0,  # sparsify - keep only top k values (in the attention layers)\n    l2=False,   # l2 norm on tokens (in the attention layers) \n    improve_locality=False,  # remove attention on own token\n    use_starreglu=False  # use gated StarReLU\n)\n\nx   = torch.randn(64, 3, 64, 64)  # B C H W\nout = my_metaformer(x, return_embeddings=False)  # returns average pooled tokens\n```\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "A PyTorch implementation of \"MetaFormer Baselines\" with optional extensions.",
    "version": "0.3.1",
    "split_keywords": [
        "artificial intelligence",
        "pytorch",
        "metaformer",
        "transformer",
        "attention",
        "convolutions"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "8d559f411fddde3c702192e8c843354ac73f1ed22d6b6a36585c4d2e13829083",
                "md5": "8624d87e6763942147939495c0695787",
                "sha256": "44d719f025113328ed382dfcd005d490e1c87f20cc3e6a8960055f1ef9c9284d"
            },
            "downloads": -1,
            "filename": "x_Metaformer-0.3.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "8624d87e6763942147939495c0695787",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 16413,
            "upload_time": "2023-04-14T10:38:45",
            "upload_time_iso_8601": "2023-04-14T10:38:45.292950Z",
            "url": "https://files.pythonhosted.org/packages/8d/55/9f411fddde3c702192e8c843354ac73f1ed22d6b6a36585c4d2e13829083/x_Metaformer-0.3.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "9e94ee62ac48ceb19d7bbf501d200f56f7f893d6a36e5fd9617343bf99a77190",
                "md5": "29535d1d7976cea10f962144d412d1eb",
                "sha256": "bed386ba8dd0ce155866c0504a0ed63406fa5bf05a586ca8370ffe3d6cc2075f"
            },
            "downloads": -1,
            "filename": "x-Metaformer-0.3.1.tar.gz",
            "has_sig": false,
            "md5_digest": "29535d1d7976cea10f962144d412d1eb",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 11634,
            "upload_time": "2023-04-14T10:38:47",
            "upload_time_iso_8601": "2023-04-14T10:38:47.289534Z",
            "url": "https://files.pythonhosted.org/packages/9e/94/ee62ac48ceb19d7bbf501d200f56f7f893d6a36e5fd9617343bf99a77190/x-Metaformer-0.3.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-04-14 10:38:47",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "github_user": "romue404",
    "github_project": "x-metaformer",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "x-metaformer"
}
```
        