pai-easycv

- Name: pai-easycv
- Version: 0.11.6
- Homepage: https://github.com/alibaba/EasyCV.git
- Summary: An all-in-one toolkit for computer vision
- Author: Alibaba PAI team
- License: Apache License 2.0
- Keywords: self-supervised, classification, vision
- Upload time: 2023-11-15 09:30:28

<div align="center">

[![PyPI](https://img.shields.io/pypi/v/pai-easycv)](https://pypi.org/project/pai-easycv/)
[![Documentation Status](https://readthedocs.org/projects/easy-cv/badge/?version=latest)](https://easy-cv.readthedocs.io/en/latest/)
[![license](https://img.shields.io/github/license/alibaba/EasyCV.svg)](https://github.com/alibaba/EasyCV/blob/master/LICENSE)
[![open issues](https://isitmaintained.com/badge/open/alibaba/EasyCV.svg)](https://github.com/alibaba/EasyCV/issues)
[![GitHub pull-requests](https://img.shields.io/github/issues-pr/alibaba/EasyCV.svg)](https://github.com/alibaba/EasyCV/pulls)
[![GitHub latest commit](https://badgen.net/github/last-commit/alibaba/EasyCV)](https://github.com/alibaba/EasyCV/commits)
<!-- [![GitHub contributors](https://img.shields.io/github/contributors/alibaba/EasyCV.svg)](https://GitHub.com/alibaba/EasyCV/graphs/contributors/) -->
<!-- [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com) -->


</div>


# EasyCV

English | [简体中文](README_zh-CN.md)

## Introduction

EasyCV is an all-in-one computer vision toolbox based on PyTorch. It mainly focuses on self-supervised learning, transformer-based models, and major CV tasks including image classification, metric learning, object detection, and pose estimation.


### Major features

- **SOTA SSL Algorithms**

  EasyCV provides state-of-the-art self-supervised learning algorithms based on contrastive learning, such as SimCLR, MoCo v2, SwAV, and DINO, as well as MAE, which is based on masked image modeling. We also provide standard benchmarking tools for SSL model evaluation.

- **Vision Transformers**

  EasyCV aims to provide an easy way to use off-the-shelf SOTA transformer models trained with either supervised or self-supervised learning, such as ViT, Swin Transformer, and the DETR series. More models will be added in the future. In addition, we support all the pretrained models from [timm](https://github.com/rwightman/pytorch-image-models).

- **Functionality & Extensibility**

  In addition to SSL, EasyCV also supports image classification, object detection, and metric learning; more areas will be supported in the future. Although it covers different areas, EasyCV decomposes the framework into components such as datasets, models, and running hooks, making it easy to add new components and combine them with existing modules.

  EasyCV provides a simple and comprehensive interface for inference (see the sketch after this feature list). Additionally, all models are supported on [PAI-EAS](https://help.aliyun.com/document_detail/113696.html), where they can easily be deployed as online services with automatic scaling and service monitoring.

- **Efficiency**

  EasyCV supports multi-GPU and multi-worker training. It uses [DALI](https://github.com/NVIDIA/DALI) to accelerate data I/O and preprocessing, and [TorchAccelerator](https://github.com/alibaba/EasyCV/tree/master/docs/source/tutorials/torchacc.md) and fp16 to accelerate training. For inference optimization, EasyCV exports models as TorchScript (jit script), which can be further optimized by [PAI-Blade](https://help.aliyun.com/document_detail/205134.html).
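
As a hedged illustration of the inference interface, the sketch below follows the predictor usage shown in the YOLOX-PAI tutorial; the model and image paths are placeholders, and a model exported with EasyCV's export tool is assumed.

```python
import cv2
from easycv.predictors import TorchYoloXPredictor

# Load a YOLOX-PAI model previously exported by EasyCV
# ('models/yolox_export.pt' is an illustrative placeholder).
predictor = TorchYoloXPredictor('models/yolox_export.pt')

img = cv2.imread('test.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the predictor expects RGB input

# predict() takes a list of images and returns one result per image,
# containing detection boxes, scores, and class labels.
outputs = predictor.predict([img])
print(outputs[0])
```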


## What's New
[🔥 2023.05.09]

* 09/05/2023 EasyCV v0.11.0 was released.
  - Support EasyCV as a plug-in for [modelscope](https://github.com/modelscope/modelscope).

[🔥 2023.03.06]

* 06/03/2023 EasyCV v0.10.0 was released.
  - Add segmentation model STDC
  - Add skeleton-based video recognition model STGCN
  - Support ReID and multi-lens MOT

[🔥 2023.01.17]

* 17/01/2023 EasyCV v0.9.0 was released.
  - Support single-lens MOT
  - Support video recognition (X3D, SWIN-video)

[🔥 2022.12.02]

* 02/12/2022 EasyCV v0.8.0 was released.
  - BEVFormer-base NDS increased by 0.8 on nuScenes val; training speed increased by 10% and inference speed by 40%.
  - Support Objects365 pretraining; the newly added DINO++ model achieves 63.4 mAP at a model scale of 200M (the best accuracy at this scale).

[🔥 2022.08.31] We have released YOLOX-PAI, which achieves SOTA results in the 40~50 mAP range with less than 1 ms inference. We also provide a convenient and fast export/predictor API for end-to-end object detection (see the export sketch below). To get a quick start with YOLOX-PAI, click [here](docs/source/tutorials/yolox.md)!

* 31/08/2022 EasyCV v0.6.0 was released.
  - Release YOLOX-PAI, which achieves SOTA results in the 40~50 mAP range with less than 1 ms inference
  - Add the detection algorithm DINO, which achieves 58.5 mAP on COCO
  - Add the Mask2Former algorithm
  - Release imagenet1k, imagenet22k, coco, lvis, and voc2012 data via BaiduDisk to accelerate downloading
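
A minimal export sketch, assuming the `tools/export.py` entry point from the EasyCV repo (the config, checkpoint, and output paths are placeholders):

```shell
# Export a trained model so it can be served via the predictor API
# or further optimized by PAI-Blade (paths are illustrative).
python tools/export.py ${CONFIG_PATH} ${CHECKPOINT_PATH} ${EXPORT_PATH}
```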

Please refer to [change_log.md](docs/source/change_log.md) for more details and history.


## Technical Articles

We have a series of technical articles (in Chinese) on the functionalities of EasyCV.
* [EasyCV open-sourced: an out-of-the-box vision self-supervised + Transformer algorithm library](https://zhuanlan.zhihu.com/p/505219993)
* [An introduction to the MAE self-supervised algorithm and its reproduction with EasyCV](https://zhuanlan.zhihu.com/p/515859470)
* [Reproducing ViTDet with EasyCV: single-level features surpass FPN](https://zhuanlan.zhihu.com/p/528733299)
* [Reproducing DETR and DAB-DETR with EasyCV: the right way to use Object Query](https://zhuanlan.zhihu.com/p/543129581)
* [YOLOX-PAI: accelerated YOLOX, faster and stronger than YOLOv6](https://zhuanlan.zhihu.com/p/560597953)
* [Reproducing a better and faster self-supervised algorithm with EasyCV: FastConvMAE](https://zhuanlan.zhihu.com/p/566988235)
* [EasyCV DataHub: multi-domain vision dataset downloads to support model production](https://zhuanlan.zhihu.com/p/572593950)
* [Easy image segmentation with EasyCV Mask2Former](https://zhuanlan.zhihu.com/p/583831421)


## Installation

Please refer to the installation section of [quick_start.md](docs/source/quick_start.md) for setup instructions.
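
For reference, the released package installs from PyPI as shown below; PyTorch and torchvision must already be installed (see quick_start.md for a version-matched setup).

```shell
# Install the PyPI release (assumes an existing PyTorch environment).
pip install pai-easycv
```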


## Get Started

Please refer to [quick_start.md](docs/source/quick_start.md) to get started. We also provide tutorials for more usages; a training-launch sketch follows the tutorial list below.

* [self-supervised learning](docs/source/tutorials/ssl.md)
* [image classification](docs/source/tutorials/cls.md)
* [metric learning](docs/source/tutorials/metric_learning.md)
* [object detection with yolox-pai](docs/source/tutorials/yolox.md)
* [model compression with yolox](docs/source/tutorials/compression.md)
* [using torchacc](docs/source/tutorials/torchacc.md)
* [file io for local and oss files](docs/source/tutorials/file.md)
* [using mmdetection model in EasyCV](docs/source/tutorials/mmdet_models_usage_guide.md)
* [batch prediction tools](docs/source/tutorials/predict.md)
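
The launch sketch below assumes the mmcv-style entry points described in quick_start.md; the config and work-dir paths are placeholders, and the distributed launcher's argument order should be checked against your EasyCV version.

```shell
# Single-GPU training with any config under configs/ (placeholder paths).
python tools/train.py ${CONFIG_PATH} --work_dir ${WORK_DIR}

# Multi-GPU training via the bundled distributed launcher.
bash tools/dist_train.sh ${NUM_GPUS} ${CONFIG_PATH} --work_dir ${WORK_DIR}
```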



Notebooks:
* [self-supervised learning](docs/source/tutorials/EasyCV图像自监督训练-MAE.ipynb)
* [image classification](docs/source/tutorials/EasyCV图像分类resnet50.ipynb)
* [object detection with yolox-pai](docs/source/tutorials/EasyCV图像检测YoloX.ipynb)
* [metric learning](docs/source/tutorials/EasyCV度量学习resnet50.ipynb)


## Model Zoo

<div align="center">
  <b>Architectures</b>
</div>
<table align="center">
  <tbody>
    <tr align="center">
      <td>
        <b>Self-Supervised Learning</b>
      </td>
      <td>
        <b>Image Classification</b>
      </td>
      <td>
        <b>Object Detection</b>
      </td>
      <td>
        <b>Segmentation</b>
      </td>
      <td>
        <b>Object Detection 3D</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
        <ul>
            <li><a href="configs/selfsup/byol">BYOL (NeurIPS'2020)</a></li>
            <li><a href="configs/selfsup/dino">DINO (ICCV'2021)</a></li>
            <li><a href="configs/selfsup/mixco">MiXCo (NeurIPS'2020)</a></li>
            <li><a href="configs/selfsup/moby">MoBY (ArXiv'2021)</a></li>
            <li><a href="configs/selfsup/mocov2">MoCov2 (ArXiv'2020)</a></li>
            <li><a href="configs/selfsup/simclr">SimCLR (ICML'2020)</a></li>
            <li><a href="configs/selfsup/swav">SwAV (NeurIPS'2020)</a></li>
            <li><a href="configs/selfsup/mae">MAE (CVPR'2022)</a></li>
            <li><a href="configs/selfsup/fast_convmae">FastConvMAE (ArXiv'2022)</a></li>
      </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/classification/imagenet/resnet">ResNet (CVPR'2016)</a></li>
          <li><a href="configs/classification/imagenet/resnext">ResNeXt (CVPR'2017)</a></li>
          <li><a href="configs/classification/imagenet/hrnet">HRNet (CVPR'2019)</a></li>
          <li><a href="configs/classification/imagenet/vit">ViT (ICLR'2021)</a></li>
          <li><a href="configs/classification/imagenet/swint">SwinT (ICCV'2021)</a></li>
          <li><a href="configs/classification/imagenet/efficientformer">EfficientFormer (ArXiv'2022)</a></li>
          <li><a href="configs/classification/imagenet/timm/deit">DeiT (ICML'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/xcit">XCiT (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/tnt">TNT (NeurIPS'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/convit">ConViT (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/cait">CaiT (ICCV'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/levit">LeViT (ICCV'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/convnext">ConvNeXt (CVPR'2022)</a></li>
          <li><a href="configs/classification/imagenet/timm/resmlp">ResMLP (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/coat">CoaT (ICCV'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/convmixer">ConvMixer (ICLR'2022)</a></li>
          <li><a href="configs/classification/imagenet/timm/mlp-mixer">MLP-Mixer (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/nest">NesT (AAAI'2022)</a></li>
          <li><a href="configs/classification/imagenet/timm/pit">PiT (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/twins">Twins (NeurIPS'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/shuffle_transformer">Shuffle Transformer (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/deitiii">DeiT III (ECCV'2022)</a></li>
          <li><a href="configs/classification/imagenet/deit">Hydra Attention (2022)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/detection/fcos">FCOS (ICCV'2019)</a></li>
          <li><a href="configs/detection/yolox">YOLOX (ArXiv'2021)</a></li>
          <li><a href="configs/detection/yolox">YOLOX-PAI (ArXiv'2022)</a></li>
          <li><a href="configs/detection/detr">DETR (ECCV'2020)</a></li>
          <li><a href="configs/detection/dab_detr">DAB-DETR (ICLR'2022)</a></li>
          <li><a href="configs/detection/dab_detr">DN-DETR (CVPR'2022)</a></li>
          <li><a href="configs/detection/dino">DINO (ArXiv'2022)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><b>Instance Segmentation</b></li>
          <ul>
            <li><a href="configs/detection/mask_rcnn">Mask R-CNN (ICCV'2017)</a></li>
            <li><a href="configs/detection/vitdet">ViTDet (ArXiv'2022)</a></li>
            <li><a href="configs/segmentation/mask2former">Mask2Former (CVPR'2022)</a></li>
          </ul>
          <li><b>Semantic Segmentation</b></li>
          <ul>
            <li><a href="configs/segmentation/fcn">FCN (CVPR'2015)</a></li>
            <li><a href="configs/segmentation/upernet">UperNet (ECCV'2018)</a></li>
          </ul>
          <li><b>Panoptic Segmentation</b></li>
          <ul>
            <li><a href="configs/segmentation/mask2former">Mask2Former (CVPR'2022)</a></li>
          </ul>
        </ul>
      </td>
      <td>
        <ul>
            <li><a href="configs/detection3d/bevformer">BEVFormer (ECCV'2022)</a></li>
      </ul>
      </td>
    </tr>
  </tbody>
</table>


Please refer to the following model zoo pages for more details.

- [self-supervised learning model zoo](docs/source/model_zoo_ssl.md)
- [classification model zoo](docs/source/model_zoo_cls.md)
- [detection model zoo](docs/source/model_zoo_det.md)
- [detection3d model zoo](docs/source/model_zoo_det3d.md)
- [segmentation model zoo](docs/source/model_zoo_seg.md)
- [pose model zoo](docs/source/model_zoo_pose.md)

## Data Hub

EasyCV collects dataset information for different scenarios, making it easy for users to fine-tune or evaluate the models in EasyCV's model zoo.

Please refer to [data_hub.md](docs/source/data_hub.md).


## License

This project is licensed under the [Apache License (Version 2.0)](LICENSE). This toolkit also contains various third-party components and some code modified from other repos under other open source licenses. See the [NOTICE](NOTICE) file for more information.


## Contact

This repo is currently maintained by the PAI-CV team. You can contact us via:
* DingDing group number: 41783266
* Email: easycv@list.alibaba-inc.com

### Enterprise Service
If you need EasyCV enterprise service support, or want to purchase cloud product services, you can contact us via the DingDing group.

![dingding_qrcode](https://user-images.githubusercontent.com/4771825/165244727-b5d69628-97a6-4e2a-a23f-0c38a8d29341.jpg)



            
