<div align="center">
<img width="100%" src="https://user-images.githubusercontent.com/27466624/222385101-516e551c-49f5-480d-a135-4b24ee6dc308.png"/>
<div> </div>
<div align="center">
<b><font size="5">OpenMMLab website</font></b>
<sup>
<a href="https://openmmlab.com">
<i><font size="4">HOT</font></i>
</a>
</sup>
<b><font size="5">OpenMMLab platform</font></b>
<sup>
<a href="https://platform.openmmlab.com">
<i><font size="4">TRY IT OUT</font></i>
</a>
</sup>
</div>
<div> </div>
[![PyPI](https://img.shields.io/pypi/v/mmyolo)](https://pypi.org/project/mmyolo)
[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmyolo.readthedocs.io/en/latest/)
[![deploy](https://github.com/open-mmlab/mmyolo/workflows/deploy/badge.svg)](https://github.com/open-mmlab/mmyolo/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmyolo/branch/main/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmyolo)
[![license](https://img.shields.io/github/license/open-mmlab/mmyolo.svg)](https://github.com/open-mmlab/mmyolo/blob/main/LICENSE)
[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmyolo.svg)](https://github.com/open-mmlab/mmyolo/issues)
[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmyolo.svg)](https://github.com/open-mmlab/mmyolo/issues)
[📘Documentation](https://mmyolo.readthedocs.io/en/latest/) |
[🛠️Installation](https://mmyolo.readthedocs.io/en/latest/get_started/installation.html) |
[👀Model Zoo](https://mmyolo.readthedocs.io/en/latest/model_zoo.html) |
[🆕Update News](https://mmyolo.readthedocs.io/en/latest/notes/changelog.html) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmyolo/issues/new/choose)
</div>
<div align="center">
English | [简体中文](README_zh-CN.md)
</div>
<div align="center">
<a href="https://openmmlab.medium.com/" style="text-decoration:none;">
<img src="https://user-images.githubusercontent.com/25839884/219255827-67c1a27f-f8c5-46a9-811d-5e57448c61d1.png" width="3%" alt="" /></a>
<img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
<a href="https://discord.com/channels/1037617289144569886/1046608014234370059" style="text-decoration:none;">
<img src="https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png" width="3%" alt="" /></a>
<img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
<a href="https://twitter.com/OpenMMLab" style="text-decoration:none;">
<img src="https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png" width="3%" alt="" /></a>
<img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
<a href="https://www.youtube.com/openmmlab" style="text-decoration:none;">
<img src="https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png" width="3%" alt="" /></a>
<img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
<a href="https://space.bilibili.com/1293512903" style="text-decoration:none;">
<img src="https://user-images.githubusercontent.com/25839884/219026751-d7d14cce-a7c9-4e82-9942-8375fca65b99.png" width="3%" alt="" /></a>
<img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
<a href="https://www.zhihu.com/people/openmmlab" style="text-decoration:none;">
<img src="https://user-images.githubusercontent.com/25839884/219026120-ba71e48b-6e94-4bd4-b4e9-b7d175b5e362.png" width="3%" alt="" /></a>
</div>
## 📄 Table of Contents
- [🥳 🚀 What's New](#--whats-new-)
  - [✨ Highlight](#-highlight-)
- [📖 Introduction](#-introduction-)
- [🛠️ Installation](#%EF%B8%8F-installation-)
- [👨‍🏫 Tutorial](#-tutorial-)
- [📊 Overview of Benchmark and Model Zoo](#-overview-of-benchmark-and-model-zoo-)
- [❓ FAQ](#-faq-)
- [🙌 Contributing](#-contributing-)
- [🤝 Acknowledgement](#-acknowledgement-)
- [🖊️ Citation](#️-citation-)
- [🎫 License](#-license-)
- [🏗️ Projects in OpenMMLab](#%EF%B8%8F-projects-in-openmmlab-)
## 🥳 🚀 What's New [🔝](#-table-of-contents)
💎 **v0.6.0** was released on 15/8/2023:
- Support YOLOv5 instance segmentation
- Support YOLOX-Pose based on MMPose
- Add a 15-minute instance segmentation tutorial
- YOLOv5 supports using mask annotations to refine bounding boxes
- Add multi-scale training and testing docs
For release history and update details, please refer to [changelog](https://mmyolo.readthedocs.io/en/latest/notes/changelog.html).
### ✨ Highlight [🔝](#-table-of-contents)
We are excited to announce our latest work on real-time object recognition tasks, **RTMDet**, a family of fully convolutional single-stage detectors. RTMDet not only achieves the best parameter-accuracy trade-off on object detection from tiny to extra-large model sizes but also obtains new state-of-the-art performance on instance segmentation and rotated object detection tasks. Details can be found in the [technical report](https://arxiv.org/abs/2212.07784). Pre-trained models are [here](configs/rtmdet).
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/real-time-instance-segmentation-on-mscoco)](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco?p=rtmdet-an-empirical-study-of-designing-real)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/object-detection-in-aerial-images-on-dota-1)](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-dota-1?p=rtmdet-an-empirical-study-of-designing-real)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/object-detection-in-aerial-images-on-hrsc2016)](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-hrsc2016?p=rtmdet-an-empirical-study-of-designing-real)
| Task                     | Dataset | AP                                       | FPS (TensorRT FP16, BS=1, RTX 3090) |
| ------------------------ | ------- | ---------------------------------------- | ----------------------------------- |
| Object Detection         | COCO    | 52.8                                     | 322                                 |
| Instance Segmentation    | COCO    | 44.6                                     | 188                                 |
| Rotated Object Detection | DOTA    | 78.9 (single-scale) / 81.3 (multi-scale) | 121                                 |
<div align=center>
<img src="https://user-images.githubusercontent.com/12907710/208044554-1e8de6b5-48d8-44e4-a7b5-75076c7ebb71.png"/>
</div>
MMYOLO currently implements the object detection and rotated object detection versions of RTMDet, and offers significantly faster training than the MMDetection implementation: training is 2.6 times faster.
## 📖 Introduction [🔝](#-table-of-contents)
MMYOLO is an open source toolbox for YOLO series algorithms based on PyTorch and [MMDetection](https://github.com/open-mmlab/mmdetection). It is a part of the [OpenMMLab](https://openmmlab.com/) project.
The master branch works with **PyTorch 1.6+**.
<img src="https://user-images.githubusercontent.com/45811724/190993591-bd3f1f11-1c30-4b93-b5f4-05c9ff64ff7f.gif"/>
<details open>
<summary>Major features</summary>
- 🕹️ **Unified and convenient benchmark**
MMYOLO unifies the implementation of modules in various YOLO algorithms and provides a unified benchmark. Users can compare and analyze in a fair and convenient way.
- 📚 **Rich and detailed documentation**
MMYOLO provides rich documentation for getting started, model deployment, advanced usages, and algorithm analysis, making it easy for users at different levels to get started and make extensions quickly.
- 🧩 **Modular Design**
MMYOLO decomposes the framework into different components where users can easily customize a model by combining different modules with various training and testing strategies.
<img src="https://user-images.githubusercontent.com/27466624/199999337-0544a4cb-3cbd-4f3e-be26-bcd9e74db7ff.jpg" alt="BaseModule-P5"/>
The figure above was contributed by RangeKing@GitHub, thank you very much!
The figure for the P6 model can be found in [model_design.md](docs/en/recommended_topics/model_design.md).
</details>
## 🛠️ Installation [🔝](#-table-of-contents)
MMYOLO relies on PyTorch, MMCV, MMEngine, and MMDetection. Below are quick steps for installation. Please refer to the [Install Guide](docs/en/get_started/installation.md) for more detailed instructions.
```shell
conda create -n mmyolo python=3.8 pytorch==1.10.1 torchvision==0.11.2 cudatoolkit=11.3 -c pytorch -y
conda activate mmyolo
pip install openmim
mim install "mmengine>=0.6.0"
mim install "mmcv>=2.0.0rc4,<2.1.0"
mim install "mmdet>=3.0.0,<4.0.0"
git clone https://github.com/open-mmlab/mmyolo.git
cd mmyolo
# Install albumentations
pip install -r requirements/albu.txt
# Install MMYOLO
mim install -v -e .
```
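To sanity-check the installation, you can confirm that the core packages import correctly and, optionally, run the bundled image demo on a downloaded config/checkpoint pair. The snippet below is a minimal sketch: the config name is only an illustrative choice and the checkpoint filename is a placeholder reported by `mim download`; see the [Install Guide](docs/en/get_started/installation.md) for the verified steps.
```shell
# Minimal sanity check (illustrative; see the Install Guide for the verified steps).
# 1. Confirm the core packages import and report their versions.
python -c "import mmengine, mmcv, mmdet, mmyolo; print(mmengine.__version__, mmcv.__version__, mmdet.__version__, mmyolo.__version__)"
# 2. Optionally download a config and checkpoint with MIM and run the image demo.
#    The config name is an example; replace <downloaded_checkpoint>.pth with the
#    checkpoint file that `mim download` reports.
mim download mmyolo --config yolov5_s-v61_syncbn_fast_8xb16-300e_coco --dest .
python demo/image_demo.py demo/demo.jpg \
    yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py \
    <downloaded_checkpoint>.pth
```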
## 👨‍🏫 Tutorial [🔝](#-table-of-contents)
MMYOLO is based on MMDetection and adopts the same code structure and design approach. To get the most out of MMYOLO, please read the [MMDetection Overview](https://mmdetection.readthedocs.io/en/latest/get_started.html) first to gain a basic understanding of MMDetection.
The usage of MMYOLO is almost identical to that of MMDetection, and all of its tutorials apply directly; you can also consult the [MMDetection User Guide and Advanced Guide](https://mmdetection.readthedocs.io/en/3.x/).
For the parts that differ from MMDetection, we have prepared dedicated user guides and advanced guides; please read our [documentation](https://mmyolo.readthedocs.io/en/latest/).
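As a quick orientation before diving into the guides below, day-to-day training and evaluation use the same entry points as MMDetection. The sketch below assumes the repository root as the working directory; the config path is only an illustrative example (any file under `configs/` works the same way), and the checkpoint path follows the default `work_dirs` layout.
```shell
# A minimal sketch of the usual workflow (paths are illustrative examples).
# Train a model from a config under configs/; logs and checkpoints go to work_dirs/ by default.
python tools/train.py configs/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py
# Evaluate a saved checkpoint with the same config.
python tools/test.py configs/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py \
    work_dirs/yolov5_s-v61_syncbn_fast_8xb16-300e_coco/epoch_300.pth
```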
<details>
<summary>Get Started</summary>
- [Overview](docs/en/get_started/overview.md)
- [Dependencies](docs/en/get_started/dependencies.md)
- [Installation](docs/en/get_started/installation.md)
- [15 minutes object detection](docs/en/get_started/15_minutes_object_detection.md)
- [15 minutes rotated object detection](docs/en/get_started/15_minutes_rotated_object_detection.md)
- [15 minutes instance segmentation](docs/en/get_started/15_minutes_instance_segmentation.md)
- [Resources summary](docs/en/get_started/article.md)
</details>
<details>
<summary>Recommended Topics</summary>
- [How to contribute code to MMYOLO](docs/en/recommended_topics/contributing.md)
- [Training testing tricks](docs/en/recommended_topics/training_testing_tricks.md)
- [MMYOLO model design](docs/en/recommended_topics/model_design.md)
- [Algorithm principles and implementation](docs/en/recommended_topics/algorithm_descriptions/)
- [Replace the backbone network](docs/en/recommended_topics/replace_backbone.md)
- [MMYOLO model complexity analysis](docs/en/recommended_topics/complexity_analysis.md)
- [Annotation-to-deployment workflow for custom dataset](docs/en/recommended_topics/labeling_to_deployment_tutorials.md)
- [Visualization](docs/en/recommended_topics/visualization.md)
- [Model deployment](docs/en/recommended_topics/deploy/)
- [Troubleshooting steps](docs/en/recommended_topics/troubleshooting_steps.md)
- [MMYOLO application examples](docs/en/recommended_topics/application_examples/)
- [MM series repo essential basics](docs/en/recommended_topics/mm_basics.md)
- [Dataset preparation and description](docs/en/recommended_topics/dataset_preparation.md)
</details>
<details>
<summary>Common Usage</summary>
- [Resume training](docs/en/common_usage/resume_training.md)
- [Enabling and disabling SyncBatchNorm](docs/en/common_usage/syncbn.md)
- [Enabling AMP](docs/en/common_usage/amp_training.md)
- [Multi-scale training and testing](docs/en/common_usage/ms_training_testing.md)
- [TTA Related Notes](docs/en/common_usage/tta.md)
- [Add plugins to the backbone network](docs/en/common_usage/plugins.md)
- [Freeze layers](docs/en/common_usage/freeze_layers.md)
- [Output model predictions](docs/en/common_usage/output_predictions.md)
- [Set random seed](docs/en/common_usage/set_random_seed.md)
- [Module combination](docs/en/common_usage/module_combination.md)
- [Cross-library calls using mim](docs/en/common_usage/mim_usage.md)
- [Apply multiple Necks](docs/en/common_usage/multi_necks.md)
- [Specify specific device training or inference](docs/en/common_usage/specify_device.md)
- [Single and multi-channel application examples](docs/en/common_usage/single_multi_channel_applications.md)
</details>
<details>
<summary>Useful Tools</summary>
- [Browse coco json](docs/en/useful_tools/browse_coco_json.md)
- [Browse dataset](docs/en/useful_tools/browse_dataset.md)
- [Print config](docs/en/useful_tools/print_config.md)
- [Dataset analysis](docs/en/useful_tools/dataset_analysis.md)
- [Optimize anchors](docs/en/useful_tools/optimize_anchors.md)
- [Extract subcoco](docs/en/useful_tools/extract_subcoco.md)
- [Visualization scheduler](docs/en/useful_tools/vis_scheduler.md)
- [Dataset converters](docs/en/useful_tools/dataset_converters.md)
- [Download dataset](docs/en/useful_tools/download_dataset.md)
- [Log analysis](docs/en/useful_tools/log_analysis.md)
- [Model converters](docs/en/useful_tools/model_converters.md)
</details>
<details>
<summary>Basic Tutorials</summary>
- [Learn about configs with YOLOv5](docs/en/tutorials/config.md)
- [Data flow](docs/en/tutorials/data_flow.md)
- [Rotated detection](docs/en/tutorials/rotated_detection.md)
- [Custom Installation](docs/en/tutorials/custom_installation.md)
- [Common Warning Notes](docs/zh_cn/tutorials/warning_notes.md)
- [FAQ](docs/en/tutorials/faq.md)
</details>
<details>
<summary>Advanced Tutorials</summary>
- [MMYOLO cross-library application](docs/en/advanced_guides/cross-library_application.md)
</details>
<details>
<summary>Descriptions</summary>
- [Changelog](docs/en/notes/changelog.md)
- [Compatibility](docs/en/notes/compatibility.md)
- [Conventions](docs/en/notes/conventions.md)
- [Code Style](docs/en/notes/code_style.md)
</details>
## 📊 Overview of Benchmark and Model Zoo [🔝](#-table-of-contents)
<div align=center>
<img src="https://user-images.githubusercontent.com/17425982/222087414-168175cc-dae6-4c5c-a8e3-3109a152dd19.png"/>
</div>
Results and models are available in the [model zoo](docs/en/model_zoo.md).
<details open>
<summary><b>Supported Tasks</b></summary>
- [x] Object detection
- [x] Rotated object detection
</details>
<details open>
<summary><b>Supported Algorithms</b></summary>
- [x] [YOLOv5](configs/yolov5)
- [ ] [YOLOv5u](configs/yolov5/yolov5u) (Inference only)
- [x] [YOLOX](configs/yolox)
- [x] [RTMDet](configs/rtmdet)
- [x] [RTMDet-Rotated](configs/rtmdet)
- [x] [YOLOv6](configs/yolov6)
- [x] [YOLOv7](configs/yolov7)
- [x] [PPYOLOE](configs/ppyoloe)
- [x] [YOLOv8](configs/yolov8)
</details>
<details open>
<summary><b>Supported Datasets</b></summary>
- [x] COCO Dataset
- [x] VOC Dataset
- [x] CrowdHuman Dataset
- [x] DOTA 1.0 Dataset
</details>
<details open>
<div align="center">
<b>Module Components</b>
</div>
<table align="center">
<tbody>
<tr align="center" valign="bottom">
<td>
<b>Backbones</b>
</td>
<td>
<b>Necks</b>
</td>
<td>
<b>Loss</b>
</td>
<td>
<b>Common</b>
</td>
</tr>
<tr valign="top">
<td>
<ul>
<li>YOLOv5CSPDarknet</li>
<li>YOLOv8CSPDarknet</li>
<li>YOLOXCSPDarknet</li>
<li>EfficientRep</li>
<li>CSPNeXt</li>
<li>YOLOv7Backbone</li>
<li>PPYOLOECSPResNet</li>
<li>mmdet backbone</li>
<li>mmcls backbone</li>
<li>timm</li>
</ul>
</td>
<td>
<ul>
<li>YOLOv5PAFPN</li>
<li>YOLOv8PAFPN</li>
<li>YOLOv6RepPAFPN</li>
<li>YOLOXPAFPN</li>
<li>CSPNeXtPAFPN</li>
<li>YOLOv7PAFPN</li>
<li>PPYOLOECSPPAFPN</li>
</ul>
</td>
<td>
<ul>
<li>IoULoss</li>
<li>mmdet loss</li>
</ul>
</td>
<td>
<ul>
</ul>
</td>
</tr>
</tbody>
</table>
</details>
## ❓ FAQ [🔝](#-table-of-contents)
Please refer to the [FAQ](docs/en/tutorials/faq.md) for frequently asked questions.
## 🙌 Contributing [🔝](#-table-of-contents)
We appreciate all contributions to improving MMYOLO. Ongoing projects can be found in our [GitHub Projects](https://github.com/open-mmlab/mmyolo/projects), and we welcome community users to participate in them. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guidelines.
## 🤝 Acknowledgement [🔝](#-table-of-contents)
MMYOLO is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as the users who give valuable feedback.
We hope that the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new detectors.
<div align="center">
<a href="https://github.com/open-mmlab/mmyolo/graphs/contributors"><img src="https://contrib.rocks/image?repo=open-mmlab/mmyolo"/></a>
</div>
## 🖊️ Citation [🔝](#-table-of-contents)
If you find this project useful in your research, please consider citing:
```latex
@misc{mmyolo2022,
title={{MMYOLO: OpenMMLab YOLO} series toolbox and benchmark},
author={MMYOLO Contributors},
howpublished = {\url{https://github.com/open-mmlab/mmyolo}},
year={2022}
}
```
## 🎫 License [🔝](#-table-of-contents)
This project is released under the [GPL 3.0 license](LICENSE).
## 🏗️ Projects in OpenMMLab [🔝](#-table-of-contents)
- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab pre-training toolbox and benchmark.
- [MMagic](https://github.com/open-mmlab/mmagic): Open**MM**Lab **A**dvanced, **G**enerative and **I**ntelligent **C**reation toolbox.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
- [MMEval](https://github.com/open-mmlab/mmeval): OpenMMLab machine learning evaluation library.
- [Playground](https://github.com/open-mmlab/playground): A central hub for gathering and showcasing amazing projects built upon OpenMMLab.