video-vit

Name: video-vit
Version: 0.0.4
Home page: https://github.com/kyegomez/VideoVIT
Summary: Paper - Pytorch
Upload time: 2024-02-09 18:14:04
Author: Kye Gomez
Requires Python: >=3.6.1,<4.0.0
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: No requirements were recorded.
            [![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# Video Vit
Open-source implementation of a vision transformer for video understanding, built on MaxViT as its foundation. MaxViT serves as the backbone ViT; the video tensor is packed into a 4-D tensor, which becomes the input to the MaxViT model. I implemented this because the new McViT came out and I wanted more practice. The model is fully ready to train, and I believe it would perform well.
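One common way to feed a 5-D video tensor of shape (batch, channels, frames, height, width) to an image backbone is to fold the frame axis into the batch axis, as in einops' `rearrange(x, "b c f h w -> (b f) c h w")`. The repository does not document its exact packing scheme, so the index arithmetic below is only an illustrative sketch of that folding, not the library's actual implementation:

```python
# Hypothetical sketch: fold a 5-D video shape into a 4-D one by merging
# the frame axis into the batch axis. The actual packing used by
# VideoViT may differ; this only illustrates the idea.

def pack_video_shape(b, c, f, h, w):
    """Return the 4-D shape obtained by merging frames into the batch axis."""
    return (b * f, c, h, w)

def packed_index(batch_idx, frame_idx, num_frames):
    """Map a (batch, frame) pair to its position along the merged axis."""
    return batch_idx * num_frames + frame_idx

if __name__ == "__main__":
    # A 1-clip, 3-channel, 10-frame, 224x224 video becomes 10 images.
    print(pack_video_shape(1, 3, 10, 224, 224))  # (10, 3, 224, 224)
    # Frame 4 of clip 0 lands at merged index 4.
    print(packed_index(0, 4, 10))                # 4
```

After the backbone runs frame-wise, the per-frame features can be unfolded back to (batch, frames, ...) and pooled over time for classification.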

## Installation
`$ pip install video-vit`

## Usage
```python
import torch
from video_vit.main import VideoViT

# Instantiate the VideoViT model with the specified parameters
model = VideoViT(
    num_classes=10,                 # Number of output classes
    dim=64,                         # Dimension of the token embeddings
    depth=(2, 2, 2),                # Depth of each stage in the model
    dim_head=32,                    # Dimension of the attention head
    window_size=7,                  # Size of the attention window
    mbconv_expansion_rate=4,        # Expansion rate of the Mobile Inverted Bottleneck block
    mbconv_shrinkage_rate=0.25,     # Shrinkage rate of the Mobile Inverted Bottleneck block
    dropout=0.1,                    # Dropout rate
    channels=3,                     # Number of input channels
)

# Create a random tensor with shape (batch_size, channels, frames, height, width)
x = torch.randn(1, 3, 10, 224, 224)

# Perform a forward pass through the model
output = model(x)

# Print the shape of the output tensor
print(output.shape)
```


# License
MIT

            
