limoe


Name: limoe
Version: 0.0.5
Home page: https://github.com/kyegomez/LIMoE
Summary: LiMoE - Pytorch
Upload time: 2024-02-21 22:25:42
Author: Kye Gomez
Requires Python: >=3.6,<4.0
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: torch, zetascale, swarms
[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# LiMoE
Implementation of "the first large-scale multimodal mixture of experts models" from the paper ["Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts"](https://arxiv.org/abs/2206.02770).
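
At its core, the idea is a sparse feed-forward layer: each token, whether it comes from text or from image patches, is routed to one of several expert MLPs by a learned router. The snippet below is a minimal, illustrative top-1 routing sketch in plain PyTorch; it shows the general mechanism the paper builds on, not the internals of this package (the `TinyMoE` class and its parameters are hypothetical).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMoE(nn.Module):
    """Toy sparse mixture-of-experts feed-forward layer with top-1 routing."""

    def __init__(self, dim: int, num_experts: int, ff_mult: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(dim, dim * ff_mult),
                nn.GELU(),
                nn.Linear(dim * ff_mult, dim),
            )
            for _ in range(num_experts)
        )
        # The router scores every token against every expert
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) -- tokens may mix text and image patches
        gates = F.softmax(self.router(x), dim=-1)    # (batch, tokens, num_experts)
        weight, index = gates.max(dim=-1)            # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = index == i                        # tokens routed to expert i
            if mask.any():
                out[mask] = weight[mask].unsqueeze(-1) * expert(x[mask])
        return out


tokens = torch.randn(1, 64, 64)                      # (batch, sequence, dim)
print(TinyMoE(dim=64, num_experts=4)(tokens).shape)  # torch.Size([1, 64, 64])
```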


## Install
`pip install limoe`

## Usage
```python
import torch

from limoe.main import LiMoE

# Text tokens (batch, sequence length)
text = torch.randint(0, 100, (1, 64))

# Image tensor (batch, channels, height, width)
image = torch.randn(1, 3, 224, 224)

# Create an instance of LiMoE with the specified parameters
model = LiMoE(
    dim=64,  # Dimension of the input and output tensors
    depth=3,  # Number of layers in the encoder
    heads=8,  # Number of attention heads
    num_tokens=100,  # Number of tokens in the vocabulary
    seq_length=64,  # Length of the input sequence
    num_experts=4,  # Number of experts in the mixture-of-experts layer
    dim_head=64,  # Dimension of each attention head
    dropout=0.1,  # Dropout rate
    ff_mult=4,  # Multiplier for the dimension of the feed-forward layer
    patch_size=16,  # Patch size
    image_size=224,  # Image size
    channels=3,  # Number of image channels
    dense_encoder_depth=5,
)

# Pass the text and image through the model
out = model(text, image)

# Print the output
print(out)
```
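
Building on the usage example above, a minimal training-step sketch might look like the following. It assumes the forward pass returns a differentiable tensor and uses a placeholder objective purely for illustration; the actual loss you train with depends on your setup.

```python
import torch

# Continues from the usage example above: `model`, `text`, and `image`
# are the objects defined there.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

model.train()
optimizer.zero_grad()

out = model(text, image)  # assumed here to be a differentiable tensor
loss = out.mean()         # placeholder objective for illustration only
loss.backward()
optimizer.step()

print(loss.item())
```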

## License
MIT


## Citation
```bibtex
@misc{mustafa2022multimodal,
    title={Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts}, 
    author={Basil Mustafa and Carlos Riquelme and Joan Puigcerver and Rodolphe Jenatton and Neil Houlsby},
    year={2022},
    eprint={2206.02770},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
            
