mm1-torch

Name: mm1-torch
Version: 0.0.6
Home page: https://github.com/kyegomez/mm1
Summary: MM1 - Pytorch
Upload time: 2024-04-26 16:17:01
Author: Kye Gomez
Requires Python: <4.0,>=3.10
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering

[![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# MM1 
PyTorch Implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training".

`img -> encoder -> connector -> llm -> tokens` 

## install
`pip3 install mm1-torch`

## usage
```python
import torch
from mm1_torch.main import MM1

# Tensors
x = torch.randint(0, 100, (1, 512))  # Random token ids of shape (batch, seq_len) = (1, 512)
img = torch.randn(1, 3, 224, 224)  # Create a random image tensor of shape (1, 3, 224, 224)

# Create a model
model = MM1(
    dim=512,  # Dimension of the input tensor
    depth=12,  # Number of transformer layers
    heads=8,  # Number of attention heads
    dim_head=64,  # Dimension of each attention head
    dropout=0.1,  # Dropout rate
    num_experts=4,  # Number of experts in mixture-of-experts
    num_experts_per_tok=2,  # Number of experts per token in mixture-of-experts
    encoder_dim=512,  # Dimension of the encoder output
    encoder_depth=12,  # Number of encoder transformer layers
    encoder_heads=8,  # Number of encoder attention heads
    use_moe=True,  # Whether to use mixture-of-experts
    return_logits=True  # Whether to return logits or probabilities
)

# Forward
out = model(x, img)  # Forward pass through the model
print(out.shape)  # Print the shape of the output logits
print(out)  # Print the output tensor
```
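
To sketch how this forward pass could be wired into training, the example below runs a standard next-token cross-entropy step. It assumes `return_logits=True` yields logits of shape `(batch, seq_len, vocab)`; the vocabulary size, optimizer, and shifted-target setup are illustrative assumptions, not documented library behavior.

```python
import torch
import torch.nn.functional as F
from mm1_torch.main import MM1

# Assumption: a 100-token vocabulary, matching the randint range above.
vocab_size = 100

model = MM1(
    dim=512,
    depth=12,
    heads=8,
    dim_head=64,
    dropout=0.1,
    num_experts=4,
    num_experts_per_tok=2,
    encoder_dim=512,
    encoder_depth=12,
    encoder_heads=8,
    use_moe=True,
    return_logits=True,
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

tokens = torch.randint(0, vocab_size, (1, 512))  # (batch, seq_len)
img = torch.randn(1, 3, 224, 224)                # (batch, channels, height, width)

logits = model(tokens, img)  # assumed shape: (batch, seq_len, vocab_size)

# Next-token prediction: predict token t+1 from positions up to t.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, logits.size(-1)),
    tokens[:, 1:].reshape(-1),
)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```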

### `CAbstractor`

```python
import torch
from mm1_torch.main import CAbstractor

# Tensors
x = torch.randn(1, 100, 512)  # (batch, num_visual_tokens, dim)

# Create a model
model = CAbstractor(
    dim=512,   # Dimension of the input features
    depth=12,  # Number of layers
    heads=8,   # Number of attention heads
)

# Forward
out = model(x)
print(out.shape)
```
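
To place `CAbstractor` in the `img -> encoder -> connector -> llm` pipeline above, the sketch below uses it as the connector that turns encoder patch features into visual tokens for the LLM. The patch count, the hypothetical text embeddings, and the assumption that `CAbstractor` preserves the feature dimension are all illustrative, not documented behavior.

```python
import torch
from mm1_torch.main import CAbstractor

# Assumed encoder output: 196 patch features (a 14x14 grid) of width 512.
patch_features = torch.randn(1, 196, 512)

# Connector: abstracts patch features into visual tokens.
connector = CAbstractor(dim=512, depth=12, heads=8)
visual_tokens = connector(patch_features)  # assumed to keep the last dim at 512

# Hypothetical text embeddings; a real LLM would produce these from token ids.
text_embeds = torch.randn(1, 32, 512)

# Prepend visual tokens to the text sequence before feeding the LLM.
llm_input = torch.cat([visual_tokens, text_embeds], dim=1)
print(llm_input.shape)
```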


# License
MIT


## Todo

- [x] Implement the deformable attention
- [ ] Create a training script for Hugging Face datasets
- [ ] Create unit tests for every module
            
