# mambavision

- **Name**: mambavision
- **Version**: 1.0.9
- **Summary**: MambaVision: A Hybrid Mamba-Transformer Vision Backbone
- **Home page**: https://github.com/NVlabs/MambaVision
- **Author**: Ali Hatamizadeh
- **Upload time**: 2024-07-24 16:26:12
- **Requires Python**: >=3.7
- **License**: NVIDIA Source Code License-NC
- **Keywords**: pytorch, pretrained, models, mamba, vision, transformer, vit
- **Requirements**: torch, causal-conv1d, mamba-ssm, timm, tensorboardX, einops, transformers
# MambaVision: A Hybrid Mamba-Transformer Vision Backbone

Official PyTorch implementation of [**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083).


[![Star on GitHub](https://img.shields.io/github/stars/NVlabs/MambaVision.svg?style=social)](https://github.com/NVlabs/MambaVision/stargazers)

[Ali Hatamizadeh](https://research.nvidia.com/person/ali-hatamizadeh) and
[Jan Kautz](https://jankautz.com/). 

For business inquiries, please visit our website and submit the form: [NVIDIA Research Licensing](https://www.nvidia.com/en-us/research/inquiries/)

--- 

MambaVision demonstrates strong performance, achieving a new SOTA Pareto front in
terms of Top-1 accuracy versus throughput.

<p align="center">
<img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=62% height=62% 
class="center">
</p>

We introduce a novel mixer block that adds a symmetric, SSM-free path to enhance the modeling of global context:


<p align="center">
<img src="https://github.com/NVlabs/MambaVision/assets/26806394/295c0984-071e-4c84-b2c8-9059e2794182" width=32% height=32% 
class="center">
</p>



MambaVision has a hierarchical architecture that employs both self-attention and mixer blocks:

![teaser](./mambavision/assets/arch.png)


## 💥 News 💥

- **[07.24.2024]** MambaVision [Hugging Face](https://huggingface.co/collections/nvidia/mambavision-66943871a6b36c9e78b327d3) models are released!

- **[07.14.2024]** We added support for processing images of any resolution.

- **[07.12.2024]** The [paper](https://arxiv.org/abs/2407.08083) is now available on arXiv!

- **[07.11.2024]** The [MambaVision pip package](https://pypi.org/project/mambavision/) is released!

- **[07.10.2024]** We have released the code and model checkpoints for MambaVision!

## Quick Start


### Hugging Face (Classification + Feature extraction)

Pretrained MambaVision models can be used via the [Hugging Face](https://huggingface.co/collections/nvidia/mambavision-66943871a6b36c9e78b327d3) library with **1 line of code**. First, install the requirements:

```bash
pip install mambavision
```

The model can then be imported with a single line:


```python
>>> from transformers import AutoModelForImageClassification

>>> model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-T-1K", trust_remote_code=True)
```

We demonstrate an end-to-end image classification example in the following.

Given this image from the [COCO dataset](https://cocodataset.org/#home) val set as input:


<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64414b62603214724ebd2636/4duSnqLf4lrNiAHczSmAN.jpeg" width=70% height=70% 
class="center">
</p>


The following snippet can be used:

```python
from transformers import AutoModelForImageClassification
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests

model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-T-1K", trust_remote_code=True)

# eval mode for inference
model.cuda().eval()

# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 224, 224)  # MambaVision supports any input resolution

transform = create_transform(input_size=input_resolution,
                             is_training=False,
                             mean=model.config.mean,
                             std=model.config.std,
                             crop_mode=model.config.crop_mode,
                             crop_pct=model.config.crop_pct)

inputs = transform(image).unsqueeze(0).cuda()
# model inference
outputs = model(inputs)
logits = outputs['logits'] 
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

The predicted label is `brown bear, bruin, Ursus arctos`.


You can also use Hugging Face MambaVision models for feature extraction. The model provides the outputs of each stage (hierarchical multi-scale features in 4 stages) as well as the final average-pooled features, which are flattened. The multi-scale features are suited for dense downstream tasks such as detection and segmentation, while the pooled features can be used for classification.

The following snippet can be used for feature extraction:

```python
from transformers import AutoModel
from PIL import Image
from timm.data.transforms_factory import create_transform
import requests

model = AutoModel.from_pretrained("nvidia/MambaVision-T-1K", trust_remote_code=True)

# eval mode for inference
model.cuda().eval()

# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000020247.jpg'
image = Image.open(requests.get(url, stream=True).raw)
input_resolution = (3, 224, 224)  # MambaVision supports any input resolution

transform = create_transform(input_size=input_resolution,
                             is_training=False,
                             mean=model.config.mean,
                             std=model.config.std,
                             crop_mode=model.config.crop_mode,
                             crop_pct=model.config.crop_pct)
inputs = transform(image).unsqueeze(0).cuda()
# model inference
out_avg_pool, features = model(inputs)
print("Size of the averaged pool features:", out_avg_pool.size())  # torch.Size([1, 640])
print("Number of stages in extracted features:", len(features)) # 4 stages
print("Size of extracted features in stage 1:", features[0].size()) # torch.Size([1, 80, 56, 56])
print("Size of extracted features in stage 4:", features[3].size()) # torch.Size([1, 640, 7, 7])
```

Currently, we offer [MambaVision-T-1K](https://huggingface.co/nvidia/MambaVision-T-1K), [MambaVision-T2-1K](https://huggingface.co/nvidia/MambaVision-T2-1K), [MambaVision-S-1K](https://huggingface.co/nvidia/MambaVision-S-1K), [MambaVision-B-1K](https://huggingface.co/nvidia/MambaVision-B-1K), [MambaVision-L-1K](https://huggingface.co/nvidia/MambaVision-L-1K) and [MambaVision-L2-1K](https://huggingface.co/nvidia/MambaVision-L2-1K) on Hugging Face. All models can also be viewed [here](https://huggingface.co/collections/nvidia/mambavision-66943871a6b36c9e78b327d3).
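As a sanity check on the shapes printed above, the stage resolutions follow the usual hierarchical-backbone pattern: stage *i* is downsampled by a stride of 4·2^(i−1), and for MambaVision-T the channel width doubles each stage from a base of 80. The small helper below is our own sketch (not part of the `mambavision` package) that predicts the expected feature shapes for a given input resolution:

```python
def expected_feature_shapes(height, width, base_dim=80, num_stages=4):
    """Predict per-stage feature-map shapes for a hierarchical backbone.

    Assumes the common pyramid pattern: stage i has stride 4 * 2**(i-1)
    and the channel width doubles each stage. base_dim=80 matches the
    MambaVision-T outputs printed in the snippet above. This is a
    sanity-check sketch, not an API of the package itself.
    """
    shapes = []
    for i in range(num_stages):
        stride = 4 * 2 ** i
        shapes.append((base_dim * 2 ** i, height // stride, width // stride))
    return shapes

# For a 224x224 input the predictions match the printed sizes above:
print(expected_feature_shapes(224, 224))
# [(80, 56, 56), (160, 28, 28), (320, 14, 14), (640, 7, 7)]
```

The same arithmetic explains the `[1, 640]` pooled-feature size: it is the channel width of the deepest stage.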

### Classification (pip package)

Pre-trained MambaVision models can also be imported from the pip package with **1 line of code**. First, install the package:

```bash
pip install mambavision
```

A pretrained MambaVision model with default hyper-parameters can be created as follows:

```python
>>> from mambavision import create_model

# Define mamba_vision_T model

>>> model = create_model('mamba_vision_T', pretrained=True, model_path="/tmp/mambavision_tiny_1k.pth.tar")
```

The available pretrained models are `mamba_vision_T`, `mamba_vision_T2`, `mamba_vision_S`, `mamba_vision_B`, `mamba_vision_L` and `mamba_vision_L2`.

We can also test the model by passing a dummy image of **any resolution**. The output is the logits:

```python
>>> import torch

>>> image = torch.rand(1, 3, 512, 224).cuda() # place image on cuda
>>> model = model.cuda() # place model on cuda
>>> output = model(image) # output logit size is [1, 1000]
```

Using the pretrained models from our pip package, you can simply run validation:

```bash
python validate_pip_model.py --model mamba_vision_T --data_dir=$DATA_PATH --batch-size $BS
```

## FAQ

1. Does MambaVision support processing images with any input resolution?

Yes! You can pass images of arbitrary resolution without any need to change the model.
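As a concrete example, the dummy-image snippet above feeds a non-square 512x224 tensor to the model. Assuming the backbone's total downsampling stride is 32 (inferred from the 224 → 7 reduction in the feature-extraction example, not stated explicitly by the repo), the deepest feature-map size for any input can be computed directly:

```python
def deepest_stage_hw(height, width, total_stride=32):
    """Spatial size of the deepest stage before pooling.

    total_stride=32 is an assumption inferred from the 224 -> 7
    reduction seen in the feature-extraction example above.
    """
    return height // total_stride, width // total_stride

# A 512x224 input yields a 16x7 map at the deepest stage:
print(deepest_stage_hw(512, 224))  # (16, 7)
```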


2. Can I apply MambaVision to downstream tasks like detection and segmentation?

Yes! We are working to release support very soon. In the meantime, employing MambaVision backbones for these tasks is very similar to using other models in the `mmseg` or `mmdet` packages. In addition, the MambaVision [Hugging Face](https://huggingface.co/collections/nvidia/mambavision-66943871a6b36c9e78b327d3) models provide a feature extraction capability that can be used for downstream tasks. Please see the example above.


3. I am interested in re-implementing MambaVision in my own repository. Can I use the pretrained weights?

Yes! The pretrained weights are released under [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). Please submit an issue in this repo and we will add your repository to the README of our codebase and properly acknowledge your efforts.

## Results + Pretrained Models

### ImageNet-1K
**MambaVision ImageNet-1K Pretrained Models**

<table>
  <tr>
    <th>Name</th>
    <th>Acc@1(%)</th>
    <th>Acc@5(%)</th>
    <th>Throughput(Img/Sec)</th>
    <th>Resolution</th>
    <th>#Params(M)</th>
    <th>FLOPs(G)</th>
    <th>Download</th>
  </tr>

<tr>
    <td>MambaVision-T</td>
    <td>82.3</td>
    <td>96.2</td>
    <td>6298</td>
    <td>224x224</td>
    <td>31.8</td>
    <td>4.4</td>
    <td><a href="https://huggingface.co/nvidia/MambaVision-T-1K/resolve/main/mambavision_tiny_1k.pth.tar">model</a></td>
</tr>

<tr>
    <td>MambaVision-T2</td>
    <td>82.7</td>
    <td>96.3</td>
    <td>5990</td>
    <td>224x224</td>
    <td>35.1</td>
    <td>5.1</td>
    <td><a href="https://huggingface.co/nvidia/MambaVision-T2-1K/resolve/main/mambavision_tiny2_1k.pth.tar">model</a></td>
</tr>

<tr>
    <td>MambaVision-S</td>
    <td>83.3</td>
    <td>96.5</td>
    <td>4700</td>
    <td>224x224</td>
    <td>50.1</td>
    <td>7.5</td>
    <td><a href="https://huggingface.co/nvidia/MambaVision-S-1K/resolve/main/mambavision_small_1k.pth.tar">model</a></td>
</tr>

<tr>
    <td>MambaVision-B</td>
    <td>84.2</td>
    <td>96.9</td>
    <td>3670</td>
    <td>224x224</td>
    <td>97.7</td>
    <td>15.0</td>
    <td><a href="https://huggingface.co/nvidia/MambaVision-B-1K/resolve/main/mambavision_base_1k.pth.tar">model</a></td>
</tr>

<tr>
    <td>MambaVision-L</td>
    <td>85.0</td>
    <td>97.1</td>
    <td>2190</td>
    <td>224x224</td>
    <td>227.9</td>
    <td>34.9</td>
    <td><a href="https://huggingface.co/nvidia/MambaVision-L-1K/resolve/main/mambavision_large_1k.pth.tar">model</a></td>
</tr>

<tr>
    <td>MambaVision-L2</td>
    <td>85.3</td>
    <td>97.2</td>
    <td>1021</td>
    <td>224x224</td>
    <td>241.5</td>
    <td>37.5</td>
    <td><a href="https://huggingface.co/nvidia/MambaVision-L2-1K/resolve/main/mambavision_large2_1k.pth.tar">model</a></td>
</tr>

</table>
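The SOTA Pareto-front claim made earlier can be checked directly against this table: each variant trades throughput for Top-1 accuracy, so no model dominates another on both axes. A small script (numbers copied from the table above) verifies this:

```python
# Top-1 accuracy (%) and throughput (img/s), copied from the table above.
models = {
    "MambaVision-T":  (82.3, 6298),
    "MambaVision-T2": (82.7, 5990),
    "MambaVision-S":  (83.3, 4700),
    "MambaVision-B":  (84.2, 3670),
    "MambaVision-L":  (85.0, 2190),
    "MambaVision-L2": (85.3, 1021),
}

def pareto_front(entries):
    """Return the models not dominated in (accuracy, throughput) by any other."""
    front = []
    for name, (acc, tput) in entries.items():
        dominated = any(
            a >= acc and t >= tput and (a, t) != (acc, tput)
            for a, t in entries.values()
        )
        if not dominated:
            front.append(name)
    return front

# All six variants lie on the Pareto front: higher accuracy always
# costs throughput, so none is dominated.
print(pareto_front(models))
```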

## Installation

We provide a [docker file](./Dockerfile). In addition, assuming that a recent [PyTorch](https://pytorch.org/get-started/locally/) package is installed, the dependencies can be installed by running:

```bash
pip install -r requirements.txt
```

## Evaluation

The MambaVision models can be evaluated on the ImageNet-1K validation set using the following:

```bash
python validate.py \
  --model <model-name> \
  --checkpoint <checkpoint-path> \
  --data_dir <imagenet-path> \
  --batch-size <batch-size-per-gpu>
```

Here `--model` is the MambaVision variant (e.g. `mambavision_tiny_1k`), `--checkpoint` is the path to the pretrained model weights, `--data_dir` is the path to the ImageNet-1K validation set and `--batch-size` is the per-GPU batch size. We also provide a sample script [here](./mambavision/validate.sh).

## Citation

If you find MambaVision to be useful for your work, please consider citing our paper: 

```bibtex
@article{hatamizadeh2024mambavision,
  title={MambaVision: A Hybrid Mamba-Transformer Vision Backbone},
  author={Hatamizadeh, Ali and Kautz, Jan},
  journal={arXiv preprint arXiv:2407.08083},
  year={2024}
}
```

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=NVlabs/MambaVision&type=Date)](https://star-history.com/#NVlabs/MambaVision&Date)


## Licenses

Copyright © 2024, NVIDIA Corporation. All rights reserved.

This work is made available under the NVIDIA Source Code License-NC. Click [here](LICENSE) to view a copy of this license.

The pre-trained models are shared under [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.

For license information regarding the timm repository, please refer to its [repository](https://github.com/rwightman/pytorch-image-models).

For license information regarding the ImageNet dataset, please see the [ImageNet official website](https://www.image-net.org/). 

## Acknowledgement
This repository is built on top of the [timm](https://github.com/huggingface/pytorch-image-models) repository. We thank [Ross Wightman](https://rwightman.com/) for creating and maintaining this high-quality library.

            
