torchyolo

Name: torchyolo
Version: 1.2.3
Home page: https://github.com/kadirnar/torchyolo
Summary: PyTorch implementation of YOLOv5, YOLOv6, YOLOv7, YOLOv8
Author: kadirnar
License: GPL-3.0
Requires Python: >=3.6
Upload time: 2023-01-30 13:46:16
Keywords: machine-learning, deep-learning, pytorch, vision, yolov6, yolox, object-detection, yolov7, detector, yolov5
            <div align="center">
<h2>
  TorchYolo: YOLO Series Object Detection and Track Algorithm Library
</h2>
    <img width="700" alt="teaser" src="https://github.com/kadirnar/torchyolo/releases/download/v1.1.2/demo.gif">
<div>
    <a href="https://pepy.tech/project/torchyolo"><img src="https://pepy.tech/badge/torchyolo" alt="downloads"></a>
    <a href="https://badge.fury.io/py/torchyolo"><img src="https://badge.fury.io/py/torchyolo.svg" alt="pypi version"></a>
    <a href="https://huggingface.co/spaces/kadirnar/torchyolo"><img src="https://img.shields.io/badge/%20HuggingFace%20-Demo-blue.svg" alt="HuggingFace Spaces"></a>
</div>
</div>


### Introduction

The TorchYolo library provides a unified interface for training and inference with YOLO detection models (YOLOv5, YOLOv6, YOLOv7, YOLOv8) and tracking algorithms (SORT, StrongSORT, ByteTrack, OC-SORT, and Norfair). The library is built on PyTorch and is designed to be easy to use and extend.

### Installation 
```bash
pip install torchyolo
```
### Use From Python
First download the [default_config.yaml](https://github.com/kadirnar/torchyolo/releases/download/v1.0.0/default_config.yaml) file.

```python
from torchyolo import YoloHub

model = YoloHub(
    config_path="default_config.yaml",
    model_type="yolov8",
    model_path="yolov8s.pt",
)
result = model.predict(
    source="test.mp4", 
    tracker_type="NORFAIR",  # set to False to disable tracking
    tracker_config_path="norfair_track.yaml"
)
```
### Use From Command Line
```bash
torchyolo predict --config_path torchyolo/configs/default_config.yaml --model_type yolov5 --model_path yolov5s.pt
torchyolo predict --config_path torchyolo/configs/default_config.yaml --model_type yolov5 --model_path yolov5s.pt --tracker_config_path norfair.yaml
```

### Detect Configuration
```yaml
DETECTOR_CONFIG:
  # The threshold for the IOU score
  IOU_TH: 0.45
  # The threshold for the confidence score
  CONF_TH: 0.25
  # The size of the image
  IMAGE_SIZE: 640
  # The device to run the detector
  DEVICE: cuda:0
  # F16 precision
  HALF: False
  # The path of the yolov6 label file
  YOLOV6_YAML_FILE: torchyolo/configs/yolov6/coco.yaml
  # The path of the yolox config file
  YOLOX_CONFIG_PATH: configs.yolox.yolox_s
  # The path of the Hugging Face model
  HUGGING_FACE_MODEL: False 

DATA_CONFIG:
  # The path of the output video
  OUTPUT_PATH: output.mp4
  # Show the video
  SHOW: False
  # Save the video
  SAVE: True

```
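For reference on what the two thresholds control: `CONF_TH` discards detections below a confidence floor, and `IOU_TH` is the overlap above which non-maximum suppression drops a lower-scoring box. A dependency-free sketch of that logic (illustrative only, not TorchYolo's internal implementation):

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, conf_th=0.25, iou_th=0.45):
    # Drop boxes scoring below CONF_TH, then greedily keep the
    # highest-scoring box and suppress any remaining box whose IoU
    # with an already-kept box exceeds IOU_TH.
    order = sorted((i for i, s in enumerate(scores) if s >= conf_th),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_th for j in keep):
            keep.append(i)
    return keep
```

Lowering `IOU_TH` suppresses more overlapping boxes; raising `CONF_TH` keeps only high-confidence detections.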

### Tracker Config File

ByteTrack: https://github.com/kadirnar/torchyolo/releases/download/v0.0.5/default_config.yaml

OcSort: https://github.com/kadirnar/torchyolo/releases/download/v0.0.5/oc_sort.yaml

StrongSort: https://github.com/kadirnar/torchyolo/releases/download/v0.0.5/strong_sort.yaml

Norfair: https://github.com/kadirnar/torchyolo/releases/download/v0.0.5/norfair_track.yaml

Sort: https://github.com/kadirnar/torchyolo/releases/download/v0.0.5/sort_track.yaml

## Model Architecture
```python
from torchyolo import YoloHub

model = YoloHub(config_path="torchyolo/default_config.yaml")
result = model.view_model(file_format="pdf")
```

# Contributing
Before opening a PR:
  - Install required development packages:
    ```bash
    pip install -r requirements-dev.txt
    ```
  - Reformat the code with black and isort:
    ```bash
    bash script/code_format.sh
    ``` 

### Acknowledgement
Part of the code is borrowed from [SAHI](https://github.com/obss/sahi). Many thanks for their wonderful work.

### Citation
```bibtex
@article{wang2022yolov7,
  title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
  author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2207.02696},
  year={2022}
}
```
```bibtex
@article{li2022yolov6,
  title={YOLOv6: A single-stage object detection framework for industrial applications},
  author={Li, Chuyi and Li, Lulu and Jiang, Hongliang and Weng, Kaiheng and Geng, Yifei and Li, Liang and Ke, Zaidan and Li, Qingyuan and Cheng, Meng and Nie, Weiqiang and others},
  journal={arXiv preprint arXiv:2209.02976},
  year={2022}
}
```
```bibtex
@software{glenn_jocher_2020_4154370,
  author={Glenn Jocher and Alex Stoken and Jirka Borovec and NanoCode012 and ChristopherSTAN and Liu Changyu and Laughing and tkianai and Adam Hogan and lorenzomammana and yxNONG and AlexWang1900 and Laurentiu Diaconu and Marc and wanghaoyang0106 and ml5ah and Doug and Francisco Ingham and Frederik and Guilhen and Hatovix and Jake Poznanski and Jiacong Fang and Lijun Yu δΊŽεŠ›ε†› and changyu98 and Mingyu Wang and Naman Gupta and Osama Akhtar and PetrDvoracek and Prashant Rai},
  title={{ultralytics/yolov5: v3.1 - Bug Fixes and Performance Improvements}},
  month={oct},
  year={2020},
  publisher={Zenodo},
  version={v3.1},
  doi={10.5281/zenodo.4154370},
  url={https://doi.org/10.5281/zenodo.4154370}
}
```
```bibtex
@article{cao2022observation,
  title={Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking},
  author={Cao, Jinkun and Weng, Xinshuo and Khirodkar, Rawal and Pang, Jiangmiao and Kitani, Kris},
  journal={arXiv preprint arXiv:2203.14360},
  year={2022}
}
```
```bibtex
@inproceedings{zhang2022bytetrack,
  title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
  author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Weng, Fucheng and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2022}
}
```
```bibtex
@article{du2022strongsort,
  title={StrongSORT: Make DeepSORT Great Again},
  author={Du, Yunhao and Song, Yang and Yang, Bo and Zhao, Yanyun},
  journal={arXiv preprint arXiv:2202.13514},
  year={2022}
}
```
```bibtex
@inproceedings{Bewley2016_sort,
  author={Bewley, Alex and Ge, Zongyuan and Ott, Lionel and Ramos, Fabio and Upcroft, Ben},
  booktitle={2016 IEEE International Conference on Image Processing (ICIP)},
  title={Simple online and realtime tracking},
  year={2016},
  pages={3464-3468},
  keywords={Benchmark testing;Complexity theory;Detectors;Kalman filters;Target tracking;Visualization;Computer Vision;Data Association;Detection;Multiple Object Tracking},
  doi={10.1109/ICIP.2016.7533003}
}
```
```bibtex
@article{torchreid,
  title={Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch},
  author={Zhou, Kaiyang and Xiang, Tao},
  journal={arXiv preprint arXiv:1910.10093},
  year={2019}
}
```
```bibtex
@inproceedings{zhou2019osnet,
    title={Omni-Scale Feature Learning for Person Re-Identification},
    author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
    booktitle={ICCV},
    year={2019}
}
```
```bibtex
@article{zhou2021osnet,
    title={Learning Generalisable Omni-Scale Representations for Person Re-Identification},
    author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
    journal={TPAMI},
    year={2021}
}
```

            
