| Field | Value |
|:---|:---|
| Name | motrackers |
| Version | 0.0.2 |
| Summary | Multi-object trackers in Python |
| Upload time | 2023-10-10 18:58:44 |
| Requires Python | >3.6 |
| Keywords | tracking, object, multi-object, python |
[cars-yolo-output]: examples/assets/cars.gif "Sample Output with YOLO"
[cows-tf-ssd-output]: examples/assets/cows.gif "Sample Output with SSD"
# Multi-object trackers in Python
An easy-to-use implementation of various multi-object tracking algorithms.
[![DOI](https://zenodo.org/badge/148338463.svg)](https://zenodo.org/badge/latestdoi/148338463)
<!-- [![Upload motrackers to PyPI](https://github.com/adipandas/multi-object-tracker/actions/workflows/python-publish.yml/badge.svg)](https://github.com/adipandas/multi-object-tracker/actions/workflows/python-publish.yml) -->
`YOLOv3 + CentroidTracker` | `TF-MobileNetSSD + CentroidTracker`
:-------------------------:|:-------------------------:
![Cars with YOLO][cars-yolo-output] | ![Cows with tf-SSD][cows-tf-ssd-output]
Video source: [link](https://flic.kr/p/L6qyxj) | Video source: [link](https://flic.kr/p/26WeEWy)
## Available Multi Object Trackers
- CentroidTracker
- IOUTracker
- CentroidKF_Tracker
- SORT
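The `IOUTracker` above associates detections across frames by their intersection-over-union (IoU). As a sketch of the underlying idea only (this helper is illustrative and not part of the `motrackers` API), IoU over boxes in `(left, top, width, height)` format can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (left, top, width, height) format."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah  # bottom-right corner of box_a
    bx2, by2 = bx1 + bw, by1 + bh  # bottom-right corner of box_b
    # Overlap extents are clamped at zero for disjoint boxes.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 0, 10, 10)))   # partially overlapping boxes
print(iou((0, 0, 10, 10), (20, 20, 5, 5)))   # disjoint boxes -> 0.0
```

A tracker of this family keeps a track alive when some detection in the next frame exceeds an IoU threshold with it; the exact association and track-lifecycle logic lives in the library.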
## Available OpenCV-based object detectors
- detector.TF_SSDMobileNetV2
- detector.Caffe_SSDMobileNet
- detector.YOLOv3
## Installation
This package requires OpenCV (version 3.4.3 or later; a pip-installable build is available [here](https://pypi.org/project/opencv-python/)). Install the package from PyPI with:
```
pip install motrackers
```
Alternatively, you can install the package from source via GitHub:
```
git clone https://github.com/adipandas/multi-object-tracker
cd multi-object-tracker
pip install .    # use `pip install -e .` for an editable (development) install
```
**Note: using the neural-network models with a GPU**
To use the OpenCV `dnn`-based object detection modules provided in this repository with a GPU, you may have to compile a CUDA-enabled version of OpenCV from source.
* To build OpenCV from source, refer to the following links:
[[link-1](https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html)],
[[link-2](https://www.pyimagesearch.com/2020/02/03/how-to-use-opencvs-dnn-module-with-nvidia-gpus-cuda-and-cudnn/)]
## How to use: Examples
The interface for each tracker is simple and similar. Please refer to the example template below.
```
from motrackers import CentroidTracker  # or IOUTracker, CentroidKF_Tracker, SORT

input_data = ...   # e.g. a video stream
detector = ...     # any detector returning (bboxes, confidences, class_ids)
tracker = CentroidTracker(...)  # or IOUTracker(...), CentroidKF_Tracker(...), SORT(...)

while True:
    done, image = <read(input_data)>
    if done:
        break

    detection_bboxes, detection_confidences, detection_class_ids = detector.detect(image)
    # NOTE:
    # * `detection_bboxes` is a numpy.ndarray of shape (n, 4); each row is (bb_left, bb_top, bb_width, bb_height)
    # * `detection_confidences` is a numpy.ndarray of shape (n,)
    # * `detection_class_ids` is a numpy.ndarray of shape (n,)

    output_tracks = tracker.update(detection_bboxes, detection_confidences, detection_class_ids)
    # `output_tracks` is a list; each element is a tuple of
    # (<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>)
    for track in output_tracks:
        frame, track_id, bb_left, bb_top, bb_width, bb_height, confidence, x, y, z = track
        assert len(track) == 10
        print(track)
```
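The trackers consume boxes in `(bb_left, bb_top, bb_width, bb_height)` format, while many detectors and datasets report corner format `(xmin, ymin, xmax, ymax)` instead. A minimal conversion helper (illustrative only, not part of the `motrackers` API) might look like:

```python
def xyxy_to_ltwh(box):
    """Convert a (xmin, ymin, xmax, ymax) box to the
    (bb_left, bb_top, bb_width, bb_height) format the trackers expect."""
    xmin, ymin, xmax, ymax = box
    return (xmin, ymin, xmax - xmin, ymax - ymin)

print(xyxy_to_ltwh((10, 20, 50, 80)))  # -> (10, 20, 40, 60)
```

Apply such a conversion to each detector output row before building the `(n, 4)` array passed to `tracker.update(...)`.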
Please refer to the [examples](https://github.com/adipandas/multi-object-tracker/tree/master/examples) folder of this repository for more details. You can clone the repository and run the examples.
## Pretrained object detection models
You will have to download the pretrained weights for the neural-network models.
Shell scripts for downloading them are provided [here](https://github.com/adipandas/multi-object-tracker/tree/master/examples/pretrained_models) under the respective folders.
Please refer to [DOWNLOAD_WEIGHTS.md](https://github.com/adipandas/multi-object-tracker/blob/master/DOWNLOAD_WEIGHTS.md) for more details.
### Notes
* The implementations differ in some details from what appeared in the original `SORT` and `IoU Tracker` papers.
* If you find a bug in an algorithm, pull requests are welcome; alternatively, you can open an issue to point it out.
## References, Credits and Contributions
Please see [REFERENCES.md](https://github.com/adipandas/multi-object-tracker/blob/master/docs/readme/REFERENCES.md) and [CONTRIBUTING.md](https://github.com/adipandas/multi-object-tracker/blob/master/docs/readme/CONTRIBUTING.md).
## Citation
If you use this repository in your work, please consider citing it with:
```
@misc{multiobjtracker_amd2018,
author = {Deshpande, Aditya M.},
title = {Multi-object trackers in Python},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/adipandas/multi-object-tracker}},
}
```