yolort


Name: yolort
Version: 0.3.2
Home page: https://github.com/zhiqwang/yolov5-rt-stack
Summary: Yet Another YOLOv5 and its Additional Runtime Stack
Upload time: 2021-02-23 17:06:29
Docs URL: None
Author: Zhiqiang Wang
Requires Python: >=3.6, <4
License: GPL-3.0
Keywords: machine-learning, deep-learning, ml, pytorch, yolo, object-detection, yolov5, torchscript
Requirements: no requirements were recorded
Travis-CI: no Travis builds
Coveralls test coverage: no coveralls data
# 🔦 yolort - YOLOv5 Runtime Stack

[![CI testing](https://github.com/zhiqwang/yolov5-rt-stack/workflows/CI%20testing/badge.svg)](https://github.com/zhiqwang/yolov5-rt-stack/actions?query=workflow%3A%22CI+testing%22)
[![PyPI version](https://badge.fury.io/py/yolort.svg)](https://badge.fury.io/py/yolort)
[![codecov](https://codecov.io/gh/zhiqwang/yolov5-rt-stack/branch/master/graph/badge.svg?token=1GX96EA72Y)](https://codecov.io/gh/zhiqwang/yolov5-rt-stack)
[![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/yolort/shared_invite/zt-mqwc7235-940aAh8IaKYeWclrJx10SA)

**What it is.** Yet another implementation of Ultralytics's [yolov5](https://github.com/ultralytics/yolov5), with its modules refactored so that the model can be deployed on backends such as `libtorch`, `onnxruntime`, `tvm` and so on.

**About the code.** It follows the design principle of [detr](https://github.com/facebookresearch/detr):

> object detection should not be more difficult than classification, and should not require complex libraries for training and inference.

`yolort` is very simple to implement and experiment with. Do you like the implementation of torchvision's faster-rcnn, retinanet or detr? Do you like yolov5? Then you will love `yolort`!

<a href=".github/zidane.jpg"><img src=".github/zidane.jpg" alt="YOLO inference demo" width="500"/></a>

## 🆕 What's New

- Support exporting to `TorchScript` model. *Oct. 8, 2020*.
- Support inferring with `LibTorch` cpp interface. *Oct. 10, 2020*.
- Add `TorchScript` cpp inference example. *Nov. 4, 2020*.
- Refactor YOLO modules and support *dynamic batching* inference. *Nov. 16, 2020*.
- Support exporting to `ONNX`, and inferring with `ONNXRuntime` interface. *Nov. 17, 2020*.
- Add graph visualization tools. *Nov. 21, 2020*.
- Add `TVM` compile and inference notebooks. *Feb. 5, 2021*.

## 🛠️ Usage

There are no extra compiled components in `yolort` and package dependencies are minimal, so the code is very simple to use.

### Installation and Inference Examples

- Installation via Pip

  Simple installation from PyPI

  ```bash
  pip install -U yolort
  ```

  Or from Source

  ```bash
  # clone yolort repository locally
  git clone https://github.com/zhiqwang/yolov5-rt-stack.git
  cd yolov5-rt-stack
  # install in editable mode
  pip install -e .
  ```

- To read an image (or a list of images) from a source and detect the objects in it 🔥

  ```python
  from yolort.models import yolov5s

  # Load model
  model = yolov5s(pretrained=True, score_thresh=0.45)
  model.eval()

  # Perform inference on an image file
  predictions = model.predict('bus.jpg')
  # Perform inference on a list of image files
  predictions = model.predict(['bus.jpg', 'zidane.jpg'])
  ```
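  The returned `predictions` follow torchvision's detection convention: one dict per image with `boxes`, `labels` and `scores` tensors. Below is a minimal sketch of reading them back, assuming that convention holds:

  ```python
  # Iterate over the per-image predictions; assumes each entry is a dict of
  # 'boxes', 'labels' and 'scores' tensors (torchvision-style detection output).
  for image_path, pred in zip(['bus.jpg', 'zidane.jpg'], predictions):
      for box, label, score in zip(pred['boxes'], pred['labels'], pred['scores']):
          x1, y1, x2, y2 = box.tolist()
          print(f"{image_path}: class {label.item()} at "
                f"({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}), score {score.item():.2f}")
  ```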

### Loading via `torch.hub`

The models are also available via torch hub. To load `yolov5s` with pretrained weights, simply do:

```python
import torch

model = torch.hub.load('zhiqwang/yolov5-rt-stack', 'yolov5s', pretrained=True)
```
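The hub entrypoint returns the same detection module as `yolort.models.yolov5s`, so it can be used exactly as in the examples above. A minimal sketch, assuming the `predict` helper is exposed on the hub-loaded model:

```python
model.eval()

# Run inference on local image files, just as with the pip-installed model.
predictions = model.predict(['bus.jpg', 'zidane.jpg'])
```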

### Updating checkpoint from ultralytics/yolov5

The module state of `yolort` has some differences compared to `ultralytics/yolov5`. We can load ultralytics's trained model checkpoints with minor changes, and we have converted ultralytics's releases [v3.1](https://github.com/ultralytics/yolov5/releases/tag/v3.1) and [v4.0](https://github.com/ultralytics/yolov5/releases/tag/v4.0). For example, to convert a `yolov5s` (release 4.0) model, just run the following script:

```python
import torch

from yolort.utils import update_module_state_from_ultralytics

# Update module state from ultralytics
model = update_module_state_from_ultralytics(arch='yolov5s', version='v4.0')
# Save updated module
torch.save(model.state_dict(), 'yolov5s_updated.pt')
```
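Once saved, the converted state dict can be loaded back into a `yolort` model. A minimal sketch, assuming the `yolov5s` architecture in `yolort` matches the converted release:

```python
import torch

from yolort.models import yolov5s

# Rebuild the yolov5s architecture and load the converted weights saved above;
# 'yolov5s_updated.pt' is the state dict written by the previous snippet.
model = yolov5s(score_thresh=0.45)
model.load_state_dict(torch.load('yolov5s_updated.pt'))
model.eval()
```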

### Inference on `LibTorch` backend 🚀

We provide a [notebook](notebooks/inference-pytorch-export-libtorch.ipynb) to demonstrate how the model is transformed into `torchscript`, and a [C++ example](./deployment) of how to run inference with the transformed `torchscript` model. For details, see the [GitHub actions](.github/workflows/nightly.yml).
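On the Python side, the export itself can be done with plain TorchScript; the notebook is the authoritative walkthrough, but a minimal sketch (assuming the model is scriptable with `torch.jit.script`) looks like this:

```python
import torch

from yolort.models import yolov5s

model = yolov5s(pretrained=True, score_thresh=0.45)
model.eval()

# Compile the model to TorchScript and serialize it; the saved file can then be
# loaded from C++ (e.g. with torch::jit::load) in the deployment example.
scripted_model = torch.jit.script(model)
scripted_model.save('yolov5s.torchscript.pt')
```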

## 🎨 Model Graph Visualization

`yolort` can now draw the model graph directly; check out our [visualize-jit-models](notebooks/visualize-jit-models.ipynb) notebook to see how to visualize the model graph.

<a href="notebooks/assets/yolov5.detail.svg"><img src="notebooks/assets/yolov5.detail.svg" alt="YOLO model visualize" width="500"/></a>
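For a quick look without the notebook tooling, the TorchScript IR of the scripted model can also be printed directly; this is generic `torch.jit` inspection rather than the visualization helpers used in the notebook:

```python
import torch

from yolort.models import yolov5s

# Script the model and dump its TorchScript graph; the notebook renders a much
# nicer SVG, but the raw IR is available on any scripted module.
scripted_model = torch.jit.script(yolov5s(pretrained=True).eval())
print(scripted_model.graph)
```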

## 🎓 Acknowledgement

- The implementation of `yolov5` borrows code from [ultralytics](https://github.com/ultralytics/yolov5).
- This repo borrows the architecture design and part of the code from [torchvision](https://github.com/pytorch/vision).

## 🤗 Contributing

We appreciate all contributions. If you are planning to contribute bug fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us. *BTW, leave a 🌟 if you like it; it means a lot to us* :)



            
