<div align="center">
<h2>
Yolor-Pip: Packaged version of the Yolor repository
</h2>
<h4>
<img width="800" alt="teaser" src="doc/figure.png">
</h4>
<div>
<a href="https://pepy.tech/project/yolor"><img src="https://pepy.tech/badge/yolor" alt="downloads"></a>
<a href="https://badge.fury.io/py/yolor"><img src="https://badge.fury.io/py/yolor.svg" alt="pypi version"></a>
</div>
</div>
## <div align="center">Overview</div>
This repository is a pip-installable packaged version of the [Yolor](https://github.com/WongKinYiu/yolor) model.
## Benchmark
| Model | Test Size | AP<sup>test</sup> | AP<sub>50</sub><sup>test</sup> | AP<sub>75</sub><sup>test</sup> | Batch 1 throughput | Batch 32 inference time |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: |
| **YOLOR-CSP** | 640 | **52.8%** | **71.2%** | **57.6%** | 106 *fps* | 3.2 *ms* |
| **YOLOR-CSP-X** | 640 | **54.8%** | **73.1%** | **59.7%** | 87 *fps* | 5.5 *ms* |
| **YOLOR-P6** | 1280 | **55.7%** | **73.3%** | **61.0%** | 76 *fps* | 8.3 *ms* |
| **YOLOR-W6** | 1280 | **56.9%** | **74.4%** | **62.2%** | 66 *fps* | 10.7 *ms* |
| **YOLOR-E6** | 1280 | **57.6%** | **75.2%** | **63.0%** | 45 *fps* | 17.1 *ms* |
| **YOLOR-D6** | 1280 | **58.2%** | **75.8%** | **63.8%** | 34 *fps* | 21.8 *ms* |
| | | | | | | |
| **YOLOv4-P5** | 896 | **51.8%** | **70.3%** | **56.6%** | 41 *fps* (old) | - |
| **YOLOv4-P6** | 1280 | **54.5%** | **72.6%** | **59.8%** | 30 *fps* (old) | - |
| **YOLOv4-P7** | 1536 | **55.5%** | **73.4%** | **60.8%** | 16 *fps* (old) | - |
### Installation
```
pip install yolor
```
### Yolor Inference
```python
from yolor.helpers import Yolor

# Load a YOLOR-P6 model (paths are relative to the installed package)
model = Yolor(cfg='yolor/cfg/yolor_p6.cfg', weights='yolor/yolor_p6.pt', imgsz=640, device='cuda:0')

model.classes = None   # detect all classes; set a list of class ids to filter
model.conf = 0.25      # confidence threshold: detections below this score are discarded
model.iou_ = 0.45      # IoU threshold used by non-maximum suppression
model.show = False     # do not display results in a window
model.save = True      # save annotated results to disk

model.predict('yolor/data/highway.jpg')
```
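The `conf` and `iou_` attributes are standard detection post-processing thresholds: detections scoring below `conf` are discarded, and during non-maximum suppression a lower-scoring box is suppressed when its overlap with a kept box exceeds `iou_`. As a library-independent sketch of what that overlap measure computes (the `iou` helper below is illustrative, not part of the yolor API):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    # Intersection rectangle corners
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping by half their width share 1/3 of their union,
# so with iou_ = 0.45 the lower-scoring one would survive NMS.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```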
### Citation
```bibtex
@article{wang2021you,
title={You Only Learn One Representation: Unified Network for Multiple Tasks},
author={Wang, Chien-Yao and Yeh, I-Hau and Liao, Hong-Yuan Mark},
journal={arXiv preprint arXiv:2105.04206},
year={2021}
}
```