# motpy - simple multi object tracking library
The project provides a simple yet powerful baseline for multiple object tracking, without the hassle of writing the obvious algorithm stack yourself.
![2D tracking preview](assets/mot16_challange.gif)
_video source: <https://motchallenge.net/data/MOT16/> - sequence 11_
## Features
- tracking by detection paradigm
- IOU + (optional) feature similarity matching strategy
- Kalman filter used to model object trackers
- each object is modeled as a center point (n-dimensional) and its size (n-dimensional); e.g. 2D position with width and height would be the most popular use case for bounding boxes tracking
- separately configurable system order for object position and size (currently 0th, 1st and 2nd order systems are allowed)
- quite fast, more than realtime performance even on Raspberry Pi
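The IoU matching mentioned above can be illustrated with a minimal sketch (this illustrates the metric itself, not motpy's internal implementation); boxes use the `[xmin, ymin, xmax, ymax]` format:

```python
def iou(box_a, box_b):
    # intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # union = sum of areas minus intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 10, 10], [5, 0, 15, 10]))  # → 0.3333333333333333
```

Detections whose IoU with an existing track falls below a threshold are left unmatched and can spawn new tracks.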
## Installation
### Latest release
```bash
pip install motpy
```
#### Additional installation steps on Raspberry Pi
You might need to install the following dependencies on the RPi platform:
```bash
sudo apt-get install python-scipy
sudo apt install libatlas-base-dev
```
### Develop
```bash
git clone https://github.com/wmuron/motpy
cd motpy
make install-develop # to install editable version of library
make test # to run all tests
```
## Example usage
### 2D tracking - synthetic example
Run the demo example of tracking N objects in 2D space. In the ideal world it will show a bunch of colorful objects moving on a grey canvas in various directions, sometimes overlapping, sometimes not. Each object is detected from time to time (green box), and once it is tracked by motpy, its track box is drawn in red with an ID above.
```bash
make demo
```
<https://user-images.githubusercontent.com/5874874/134305624-d6358cb1-39f8-4499-8a7b-64745f4795a6.mp4>
### Detect and track objects in the video
- the example uses a COCO-trained model provided by the torchvision library
- to run this example, you'll have to install `requirements_dev.txt` dependencies (`torch`, `torchvision`, etc.)
- to run on CPU, specify `--device=cpu`
```bash
python examples/detect_and_track_in_video.py \
--video_path=./assets/video.mp4 \
--detect_labels=['car','truck'] \
--tracker_min_iou=0.15 \
--device=cuda
```
<https://user-images.githubusercontent.com/5874874/134303165-b6835c8a-9cfe-486c-b79f-499f638c0a71.mp4>
_video source: <https://www.youtube.com/watch?v=PGMu_Z89Ao8/>, a great YT channel created by J Utah_
### MOT16 challenge tracking
1. Download the MOT16 dataset from `https://motchallenge.net/data/MOT16/` and extract it to the `~/Downloads/MOT16` directory,
2. Type the command:
```bash
python examples/mot16_challange.py --dataset_root=~/Downloads/MOT16 --seq_id=11
```
This will run a simplified example where a tracker processes artificially corrupted ground-truth bounding boxes from sequence 11; you can preview the expected results at the beginning of this README.
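The corruption step can be sketched roughly as follows (a hypothetical illustration, not the actual code in `examples/mot16_challange.py`): each ground-truth box is jittered with Gaussian noise, and occasionally dropped entirely to simulate a missed detection.

```python
import random

def corrupt_box(box, noise_std=2.0, drop_prob=0.1):
    """Jitter a ground-truth box [xmin, ymin, xmax, ymax] and
    occasionally drop it, mimicking a noisy real-world detector."""
    if random.random() < drop_prob:
        return None  # simulated missed detection
    return [coord + random.gauss(0.0, noise_std) for coord in box]
```

Feeding such corrupted boxes into the tracker shows how the Kalman filter smooths out the detector noise and bridges short detection gaps.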
### Face tracking on webcam
Run the following command to start tracking your own face.
```bash
python examples/webcam_face_tracking.py
```
## Basic usage
A minimal tracking example can be found below:
```python
import numpy as np
from motpy import Detection, MultiObjectTracker
# create a simple bounding box with format of [xmin, ymin, xmax, ymax]
object_box = np.array([1, 1, 10, 10])
# create a multi object tracker with a specified step time of 100ms
tracker = MultiObjectTracker(dt=0.1)
for step in range(10):
    # let's simulate object movement by 1 unit (e.g. pixel)
    object_box += 1

    # update the state of the multi-object tracker
    # with the list of bounding boxes
    tracker.step(detections=[Detection(box=object_box)])

    # retrieve the active tracks from the tracker (you can customize
    # the hyperparameters of track filtering by passing extra arguments)
    tracks = tracker.active_tracks()

    print('MOT tracker tracks %d objects' % len(tracks))
    print('first track box: %s' % str(tracks[0].box))
```
## Customization
To adapt the underlying motion model used to track each object, you can pass a dictionary `model_spec` to `MultiObjectTracker`; it will be used to initialize each object tracker at its creation time. The exact parameters can be found in the definition of the `motpy.model.Model` class.
See the example below, where I've adapted the motion model to better fit the typical motion of a face seen by a laptop camera, paired with a decent face detector.
```python
model_spec = {
    'order_pos': 1, 'dim_pos': 2,   # position is a center in 2D space; constant-velocity model
    'order_size': 0, 'dim_size': 2, # bounding box size is 2-dimensional; static (0th-order) model
'q_var_pos': 1000., # process noise
'r_var_pos': 0.1 # measurement noise
}
tracker = MultiObjectTracker(dt=0.1, model_spec=model_spec)
```
The simplification used here is that object position and size are treated and modeled independently; hence you can even use 2D bounding boxes in 3D space.
Feel free to tune the parameters of the Q and R matrix builders to better fit your use case.
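As a hypothetical illustration of that flexibility, a spec combining a 3D position with a 2D box size could look like this (same keys as above; the exact noise values are placeholders to be tuned):

```python
# hypothetical spec: 3D center point tracked with a constant-velocity
# model, while the 2D bounding box size is treated as static
model_spec_3d = {
    'order_pos': 1, 'dim_pos': 3,   # position is a center in 3D space
    'order_size': 0, 'dim_size': 2, # box size is 2-dimensional; 0th-order (static)
    'q_var_pos': 10.,               # process noise (placeholder value)
    'r_var_pos': 1.,                # measurement noise (placeholder value)
}
```

Higher `q_var_pos` lets tracks react faster to erratic motion; higher `r_var_pos` trusts the detector less and smooths more.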
## Tested platforms
- Linux (Ubuntu)
- macOS (Catalina)
- Raspberry Pi (4)
## Things to do
- [x] Initial version
- [ ] Documentation
- [ ] Performance optimization
- [x] Multiple object classes support via instance-level class_id counting
- [x] Allow tracking without Kalman filter
- [x] Easy to use and configurable example of video processing with off-the-shelf object detector
## References, papers, ideas and acknowledgements
- https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/
- http://elvera.nue.tu-berlin.de/files/1517Bochinski2017.pdf
- https://arxiv.org/abs/1602.00763