flowvision

Name: flowvision
Version: 0.2.2
Home page: https://github.com/Oneflow-Inc/vision
Summary: oneflow vision codebase
Upload time: 2023-11-17 03:33:47
Author: flow vision contributors
License: BSD
Keywords: computer vision
Requirements: none recorded
<h2 align="center">flowvision</h2>
<p align="center">
    <a href="https://pypi.org/project/flowvision/">
        <img alt="PyPI" src="https://img.shields.io/pypi/v/flowvision">
    </a>
    <a href="https://flowvision.readthedocs.io/en/latest/index.html">
        <img alt="docs" src="https://img.shields.io/badge/docs-latest-blue">
    </a>
    <a href="https://github.com/Oneflow-Inc/vision/blob/master/LICENSE">
        <img alt="GitHub" src="https://img.shields.io/github/license/Oneflow-Inc/vision.svg?color=blue">
    </a>
    <a href="https://github.com/Oneflow-Inc/vision/releases">
        <img alt="GitHub release" src="https://img.shields.io/github/release/Oneflow-Inc/vision.svg">
    </a>
    <a href="https://github.com/Oneflow-Inc/vision/issues">
        <img alt="PRs Welcome" src="https://img.shields.io/badge/PRs-welcome-pink.svg">
    </a>
</p>


## Introduction
The flowvision package provides popular datasets, state-of-the-art (SOTA) computer vision models, layers, utilities, learning-rate schedulers, advanced data augmentations, and common image transformations, all built on OneFlow.

## Installation
First install OneFlow; please refer to [install-oneflow](https://github.com/Oneflow-Inc/oneflow#install-oneflow) for details.

Then install the latest stable release of `flowvision`:
```bash
pip install flowvision==0.2.2
```
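Once installed, the packages can be verified from Python. The helper below uses only the standard library; the version string printed is whatever pip actually installed.

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    """Return the installed version of a package, or None if it is missing."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# After a successful install this should print e.g. "flowvision 0.2.2".
for pkg in ("oneflow", "flowvision"):
    print(pkg, installed_version(pkg))
```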

## Overview of flowvision structure
<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Vision Models</b>
      </td>
      <td>
        <b>Components</b>
      </td>
      <td>
        <b>Augmentation and Datasets</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
        <ul>
          <li><b>Classification</b></li>
          <ul>
            <li>AlexNet</li>
            <li>SqueezeNet</li>
            <li>VGG</li>
            <li>GoogleNet</li>
            <li>InceptionV3</li>
            <li>ResNet</li>
            <li>ResNeXt</li>
            <li>ResNeSt</li>
            <li>SENet</li>
            <li>DenseNet</li>
            <li>ShuffleNetV2</li>  
            <li>MobileNetV2</li>
            <li>MobileNetV3</li>
            <li>MNASNet</li>
            <li>Res2Net</li>
            <li>EfficientNet</li>  
            <li>GhostNet</li>
            <li>RegNet</li> 
            <li>ReXNet</li>
            <li>Vision Transformer</li>
            <li>DeiT</li>
            <li>PVT</li>
            <li>Swin Transformer</li>
            <li>CSwin Transformer</li>
            <li>CrossFormer</li>
            <li>PoolFormer</li>
            <li>Mlp Mixer</li>
            <li>ResMLP</li>
            <li>gMLP</li>
            <li>ConvMixer</li>
            <li>ConvNeXt</li>
            <li>LeViT</li>
            <li>RegionViT</li>
            <li>UniFormer</li>
            <li>VAN</li>
            <li>MobileViT</li>
            <li>DeiT-III</li>
            <li>CaiT</li>
            <li>DLA</li>
            <li>GENet</li>
            <li>HRNet</li>
            <li>FAN</li>
        </ul>
        <li><b>Detection</b></li>
        <ul>
            <li>SSD</li>
            <li>SSDLite</li>
            <li>Faster RCNN</li>
            <li>RetinaNet</li>
        </ul>
        <li><b>Segmentation</b></li>
        <ul>
            <li>FCN</li>
            <li>DeepLabV3</li>
        </ul>
        <li><b>Neural Style Transfer</b></li>
        <ul>
            <li>StyleNet</li>
        </ul>
        <li><b>Face Recognition</b></li>
        <ul>
            <li>IResNet</li>
        </ul>        
      </ul>
      </td>
      <td>
      <ul><li><b>Attention Layers</b></li>
          <ul>
            <li>SE</li>
            <li>BAM</li>
            <li>CBAM</li>
            <li>ECA</li>
            <li>Non Local Attention</li>
            <li>Global Context</li>
            <li>Gated Channel Transform</li>
            <li>Coordinate Attention</li>
          </ul>  
        </ul>
      <ul><li><b>Regularization Layers</b></li>
          <ul>
            <li>Drop Block</li>
            <li>Drop Path</li>
            <li>Stochastic Depth</li>
            <li>LayerNorm2D</li>
          </ul>  
        </ul>
      <ul><li><b>Basic Layers</b></li>
          <ul>
            <li>Patch Embedding</li>
            <li>Mlp Block</li>
            <li>FPN</li>
          </ul>  
        </ul>
      <ul><li><b>Activation Layers</b></li>
          <ul>
            <li>Hard Sigmoid</li>
            <li>Hard Swish</li>
          </ul>  
        </ul>
      <ul><li><b>Initialization Function</b></li>
          <ul>
            <li>Truncated Normal</li>
            <li>Lecun Normal</li>
          </ul>  
        </ul>
      <ul><li><b>LR Scheduler</b></li>
        <ul>
            <li>StepLRScheduler</li>
            <li>MultiStepLRScheduler</li>
            <li>CosineLRScheduler</li>
            <li>LinearLRScheduler</li>
            <li>PolyLRScheduler</li>
            <li>TanhLRScheduler</li>
          </ul>  
        </ul>
        <ul><li><b>Loss</b></li>
          <ul>
            <li>LabelSmoothingCrossEntropy</li>
            <li>SoftTargetCrossEntropy</li>
          </ul>  
        </ul>
      </td>
      <td>
        <ul><li><b>Basic Augmentation</b></li>
          <ul>
            <li>CenterCrop</li>
            <li>RandomCrop</li>
            <li>RandomResizedCrop</li>
            <li>FiveCrop</li>
            <li>TenCrop</li>
            <li>RandomVerticalFlip</li>
            <li>RandomHorizontalFlip</li>
            <li>Resize</li>
            <li>RandomGrayscale</li>
            <li>GaussianBlur</li>
          </ul>  
        </ul>
        <ul><li><b>Advanced Augmentation</b></li>
          <ul>
            <li>Mixup</li>
            <li>CutMix</li>
            <li>AugMix</li>
            <li>RandomErasing</li>
            <li>Rand Augmentation</li>
            <li>Auto Augmentation</li>
          </ul>  
        </ul>
        <ul><li><b>Datasets</b></li>
          <ul>
            <li>CIFAR10</li>
            <li>CIFAR100</li>
            <li>COCO</li>
            <li>FashionMNIST</li>
            <li>ImageNet</li>
            <li>VOC</li>
          </ul>  
        </ul>
      </td>  
    </tr>


</td>
    </tr>
  </tbody>
</table>


## Documentation
Please refer to the [docs](https://flowvision.readthedocs.io/en/latest/index.html) for full API documentation and tutorials.


## ChangeLog
Please refer to the [ChangeLog](https://flowvision.readthedocs.io/en/latest/changelog.html) for details and release history.


## Model Zoo
All benchmarks were run under the same settings; please refer to the model page [here](./results/results_imagenet.md) for details.

## Quick Start
### Create a model
flowvision supports two ways to create a model.

- Import the target model from `flowvision.models`, e.g., create `alexnet` directly:

```python
from flowvision.models.alexnet import alexnet
model = alexnet()

# will download the pretrained model
model = alexnet(pretrained=True)

# customize model to fit different number of classes
model = alexnet(num_classes=100)
```

- Or create a model through the `ModelCreator` factory, e.g., create `alexnet` with `ModelCreator`:
```python
from flowvision.models import ModelCreator
alexnet = ModelCreator.create_model("alexnet")

# will download the pretrained model
alexnet = ModelCreator.create_model("alexnet", pretrained=True)

# customize model to fit different number of classes
alexnet = ModelCreator.create_model("alexnet", num_classes=100)
```

### Tabulate all models with pretrained weights
`ModelCreator.model_table()` returns a table of the models available in `flowvision`. To list only models with pretrained weights, pass `pretrained=True` to `ModelCreator.model_table()`.
```python
from flowvision.models import ModelCreator
all_pretrained_models = ModelCreator.model_table(pretrained=True)
print(all_pretrained_models)
```
The output looks like:
```text
╒════════════════════════════════════════════╤══════════════╕
│ Supported Models                           │ Pretrained   │
╞════════════════════════════════════════════╪══════════════╡
│ alexnet                                    │ true         │
├────────────────────────────────────────────┼──────────────┤
│ convmixer_1024_20                          │ true         │
├────────────────────────────────────────────┼──────────────┤
│ convmixer_1536_20                          │ true         │
├────────────────────────────────────────────┼──────────────┤
│ convmixer_768_32_relu                      │ true         │
├────────────────────────────────────────────┼──────────────┤
│ crossformer_base_patch4_group7_224         │ true         │
├────────────────────────────────────────────┼──────────────┤
│ crossformer_large_patch4_group7_224        │ true         │
├────────────────────────────────────────────┼──────────────┤
│ crossformer_small_patch4_group7_224        │ true         │
├────────────────────────────────────────────┼──────────────┤
│ crossformer_tiny_patch4_group7_224         │ true         │
├────────────────────────────────────────────┼──────────────┤
│                    ...                     │ ...          │
├────────────────────────────────────────────┼──────────────┤
│ wide_resnet101_2                           │ true         │
├────────────────────────────────────────────┼──────────────┤
│ wide_resnet50_2                            │ true         │
╘════════════════════════════════════════════╧══════════════╛
```

### Search for supported models by wildcard
Model architectures can be searched with wildcard patterns:
```python
from flowvision.models import ModelCreator
all_efficientnet_models = ModelCreator.model_table("**efficientnet**")
print(all_efficientnet_models)
```
The output looks like:
```text
╒════════════════════╤══════════════╕
│ Supported Models   │ Pretrained   │
╞════════════════════╪══════════════╡
│ efficientnet_b0    │ true         │
├────────────────────┼──────────────┤
│ efficientnet_b1    │ true         │
├────────────────────┼──────────────┤
│ efficientnet_b2    │ true         │
├────────────────────┼──────────────┤
│ efficientnet_b3    │ true         │
├────────────────────┼──────────────┤
│ efficientnet_b4    │ true         │
├────────────────────┼──────────────┤
│ efficientnet_b5    │ true         │
├────────────────────┼──────────────┤
│ efficientnet_b6    │ true         │
├────────────────────┼──────────────┤
│ efficientnet_b7    │ true         │
╘════════════════════╧══════════════╛
```
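The wildcard syntax behaves like shell-style globbing, where `*` matches any substring (so `**efficientnet**` and `*efficientnet*` select the same names). As a sketch of the matching rule only, the standard-library `fnmatch` module reproduces it on a hypothetical name list:

```python
from fnmatch import fnmatch

def filter_models(pattern, names):
    """Return the names matching a shell-style wildcard pattern, sorted."""
    return sorted(n for n in names if fnmatch(n, pattern))

# Hypothetical subset of model names, for illustration only.
names = ["alexnet", "efficientnet_b0", "efficientnet_b7", "wide_resnet50_2"]
print(filter_models("**efficientnet**", names))
# → ['efficientnet_b0', 'efficientnet_b7']
```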

### List all models supported in flowvision
`ModelCreator.model_list` works like `ModelCreator.model_table` but returns a list, giving a more flexible way to inspect the models supported in flowvision.
- List all models with pretrained weights
```python
from flowvision.models import ModelCreator
all_pretrained_models = ModelCreator.model_list(pretrained=True)
print(all_pretrained_models[:5])
```
The output looks like:
```python
['alexnet', 
 'convmixer_1024_20', 
 'convmixer_1536_20', 
 'convmixer_768_32_relu', 
 'crossformer_base_patch4_group7_224']
```

- Wildcard search is also supported:
```python
from flowvision.models import ModelCreator
all_efficientnet_models = ModelCreator.model_list("**efficientnet**")
print(all_efficientnet_models)
```
The output looks like:
```python
['efficientnet_b0', 
 'efficientnet_b1', 
 'efficientnet_b2', 
 'efficientnet_b3', 
 'efficientnet_b4', 
 'efficientnet_b5', 
 'efficientnet_b6', 
 'efficientnet_b7']
```


## Disclaimer on Datasets
This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!
            
