[![PyPI version](https://badge.fury.io/py/motmetrics.svg)](https://badge.fury.io/py/motmetrics) [![Build Status](https://github.com/cheind/py-motmetrics/actions/workflows/python-package.yml/badge.svg)](https://github.com/cheind/py-motmetrics/actions/workflows/python-package.yml)
# py-motmetrics
The **py-motmetrics** library provides a Python implementation of metrics for benchmarking multiple object trackers (MOT).
While benchmarking single object trackers is rather straightforward, measuring the performance of multiple object trackers needs careful design as multiple correspondence constellations can arise (see image below). A variety of methods have been proposed in the past and while there is no general agreement on a single method, the methods of [[1,2,3,4]](#References) have received considerable attention in recent years. **py-motmetrics** implements these [metrics](#Metrics).
<div style="text-align:center;">
![](./motmetrics/etc/mot.png)<br/>
_Pictures courtesy of Bernardin, Keni, and Rainer Stiefelhagen [[1]](#References)_
</div>
In particular **py-motmetrics** supports `CLEAR-MOT`[[1,2]](#References) metrics and `ID`[[4]](#References) metrics. Both attempt to find a minimum cost assignment between ground truth objects and predictions. However, while CLEAR-MOT solves the assignment problem on a local per-frame basis, `ID-MEASURE` solves the bipartite graph matching by finding the minimum cost matching between objects and predictions over all frames. This [blog-post](https://web.archive.org/web/20190413133409/http://vision.cs.duke.edu:80/DukeMTMC/IDmeasures.html) by Ergys illustrates the differences in more detail.
## Features at a glance
- _Variety of metrics_ <br/>
Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are [comparable](#MOTChallengeCompatibility) with the popular [MOTChallenge][motchallenge] benchmarks [(\*1)](#asterixcompare).
- _Distance agnostic_ <br/>
  Supports Euclidean, Intersection over Union and other distance measures.
- _Complete event history_ <br/>
  Tracks all relevant per-frame events such as correspondences, misses, false alarms and switches.
- _Flexible solver backend_ <br/>
Support for switching minimum assignment cost solvers. Supports `scipy`, `ortools`, `munkres` out of the box. Auto-tunes solver selection based on [availability and problem size](#SolverBackends).
- _Easy to extend_ <br/>
  Events and summaries utilize [pandas][pandas] for data structures and analysis. New metrics can reuse values already computed by the metrics they depend on.
<a name="Metrics"></a>
## Metrics
**py-motmetrics** implements the following metrics. The metrics have been aligned with what is reported by [MOTChallenge][motchallenge] benchmarks.
```python
import motmetrics as mm
# List all default metrics
mh = mm.metrics.create()
print(mh.list_metrics_markdown())
```
| Name | Description |
| :------------------- | :--------------------------------------------------------------------------------- |
| num_frames | Total number of frames. |
| num_matches          | Total number of matches.                                                            |
| num_switches | Total number of track switches. |
| num_false_positives | Total number of false positives (false-alarms). |
| num_misses | Total number of misses. |
| num_detections | Total number of detected objects including matches and switches. |
| num_objects | Total number of unique object appearances over all frames. |
| num_predictions | Total number of unique prediction appearances over all frames. |
| num_unique_objects | Total number of unique object ids encountered. |
| mostly_tracked | Number of objects tracked for at least 80 percent of lifespan. |
| partially_tracked | Number of objects tracked between 20 and 80 percent of lifespan. |
| mostly_lost | Number of objects tracked less than 20 percent of lifespan. |
| num_fragmentations | Total number of switches from tracked to not tracked. |
| motp | Multiple object tracker precision. |
| mota | Multiple object tracker accuracy. |
| precision | Number of detected objects over sum of detected and false positives. |
| recall | Number of detections over number of objects. |
| idfp | ID measures: Number of false positive matches after global min-cost matching. |
| idfn                 | ID measures: Number of false negative matches after global min-cost matching.      |
| idtp                 | ID measures: Number of true positive matches after global min-cost matching.       |
| idp | ID measures: global min-cost precision. |
| idr | ID measures: global min-cost recall. |
| idf1 | ID measures: global min-cost F1 score. |
| obj_frequencies | `pd.Series` Total number of occurrences of individual objects over all frames. |
| pred_frequencies | `pd.Series` Total number of occurrences of individual predictions over all frames. |
| track_ratios | `pd.Series` Ratio of assigned to total appearance count per unique object id. |
| id_global_assignment | `dict` ID measures: Global min-cost assignment for ID measures. |
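New metrics can build on the values listed above. Below is a minimal sketch of registering a custom metric; the name `objects_per_frame` is ours, `acc` stands for a populated accumulator as constructed in the usage section below, and the `deps='auto'` behavior (dependencies inferred from argument names) reflects our reading of the `MetricsHost.register` API.

```python
import motmetrics as mm

def objects_per_frame(df, num_objects, num_frames):
    """Average number of ground truth objects per frame."""
    return num_objects / num_frames

mh = mm.metrics.create()
# With deps='auto', dependencies are inferred from the argument names,
# so num_objects and num_frames are computed first and passed in.
mh.register(objects_per_frame, deps='auto', formatter='{:.2f}'.format)

# acc is a populated accumulator as shown in the usage section below.
summary = mh.compute(acc, metrics=['objects_per_frame'], name='acc')
```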
<a name="MOTChallengeCompatibility"></a>
## MOTChallenge compatibility
**py-motmetrics** produces results compatible with popular [MOTChallenge][motchallenge] benchmarks [(\*1)](#asterixcompare). Below are two results taken from the MOTChallenge [Matlab devkit][devkit], corresponding to the results of the CEM tracker on the training set of the 2D MOT 2015 benchmark.
```
TUD-Campus
IDF1 IDP IDR| Rcll Prcn FAR| GT MT PT ML| FP FN IDs FM| MOTA MOTP MOTAL
55.8 73.0 45.1| 58.2 94.1 0.18| 8 1 6 1| 13 150 7 7| 52.6 72.3 54.3
TUD-Stadtmitte
IDF1 IDP IDR| Rcll Prcn FAR| GT MT PT ML| FP FN IDs FM| MOTA MOTP MOTAL
64.5 82.0 53.1| 60.9 94.0 0.25| 10 5 4 1| 45 452 7 6| 56.4 65.4 56.9
```
In comparison to **py-motmetrics**
```
IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP
TUD-Campus 55.8% 73.0% 45.1% 58.2% 94.1% 8 1 6 1 13 150 7 7 52.6% 0.277
TUD-Stadtmitte 64.5% 82.0% 53.1% 60.9% 94.0% 10 5 4 1 45 452 7 6 56.4% 0.346
```
<a name="asterixcompare"></a>(\*1) Besides naming conventions, the only obvious differences are
- Metric `FAR` is missing. This metric is given implicitly and can be recovered by `FalsePos / Frames * 100`.
- Metric `MOTP` appears to be off. To convert, compute `(1 - MOTP) * 100`. [MOTChallenge][motchallenge] benchmarks report `MOTP` as a percentage, while **py-motmetrics** sticks to the original definition of average distance over the number of assigned objects [[1]](#References).
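Both conversions are one-liners; a minimal sketch, assuming `summary` is a result DataFrame from `mh.compute` (see usage below) that contains the `num_false_positives`, `num_frames` and `motp` metrics:

```python
# Recover MOTChallenge-style FAR and MOTP from a py-motmetrics summary.
far = summary['num_false_positives'] / summary['num_frames'] * 100
motp_challenge = (1.0 - summary['motp']) * 100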
You can compare tracker results to ground truth in MOTChallenge format by
```
python -m motmetrics.apps.eval_motchallenge --help
```
For MOT16/17, you can run
```
python -m motmetrics.apps.evaluateTracking --help
```
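For example, a typical invocation passes a directory of ground truth sequences and a directory of result files (the directory names here are placeholders; consult `--help` for the authoritative arguments):

```
python -m motmetrics.apps.eval_motchallenge data/train data/test
```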
## Installation
To install the latest development version of **py-motmetrics** (usually a bit more recent than the PyPI release below)
```
pip install git+https://github.com/cheind/py-motmetrics.git
```
### Install via PyPI
To install **py-motmetrics** use `pip`
```
pip install motmetrics
```
Python 3.5/3.6/3.9 and numpy, pandas and scipy are required. If no binary packages are available for your platform and building source packages fails, you might want to try a distribution like Conda (see below) to install the dependencies.
Alternatively, for development, clone or fork this repository and install it in editable mode.
```
pip install -e <path/to/setup.py>
```
### Install via Conda
In case you are using Conda, a simple way to run **py-motmetrics** is to create a virtual environment with all the necessary dependencies
```
conda env create -f environment.yml
```
Then activate / source the `motmetrics-env` environment, install **py-motmetrics**, and run the tests.
```
activate motmetrics-env
pip install .
pytest
```
In case you already have an environment, you can install the dependencies from within it by
```
conda install --file requirements.txt
pip install .
pytest
```
## Usage
### Populating the accumulator
```python
import motmetrics as mm
import numpy as np
# Create an accumulator that will be updated during each frame
acc = mm.MOTAccumulator(auto_id=True)
# Call update once per frame. For now, assume distances between
# frame objects / hypotheses are given.
acc.update(
[1, 2], # Ground truth objects in this frame
[1, 2, 3], # Detector hypotheses in this frame
[
[0.1, np.nan, 0.3], # Distances from object 1 to hypotheses 1, 2, 3
[0.5, 0.2, 0.3] # Distances from object 2 to hypotheses 1, 2, 3
]
)
```
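If you prefer to manage frame ids yourself, construct the accumulator without `auto_id=True` and pass an explicit `frameid` to each `update` call; a brief sketch (the variable `acc2` is ours):

```python
acc2 = mm.MOTAccumulator()  # auto_id defaults to False
acc2.update([1], [1], [[0.1]], frameid=0)
acc2.update([1], [1], [[0.2]], frameid=1)
```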
The code above updates an event accumulator with data from a single frame. Here we assume that pairwise object / hypothesis distances have already been computed. Note the `np.nan` inside the distance matrix: it signals that object `1` cannot be paired with hypothesis `2`. To inspect the current event history, simply print the events associated with the accumulator.
```python
print(acc.events) # a pandas DataFrame containing all events
"""
Type OId HId D
FrameId Event
0 0 RAW 1 1 0.1
1 RAW 1 2 NaN
2 RAW 1 3 0.3
3 RAW 2 1 0.5
4 RAW 2 2 0.2
5 RAW 2 3 0.3
6 MATCH 1 1 0.1
7 MATCH 2 2 0.2
8 FP NaN 3 NaN
"""
```
The above data frame contains `RAW` and MOT events. To obtain just the MOT events, type
```python
print(acc.mot_events) # a pandas DataFrame containing MOT only events
"""
Type OId HId D
FrameId Event
0 6 MATCH 1 1 0.1
7 MATCH 2 2 0.2
8 FP NaN 3 NaN
"""
```
Meaning object `1` was matched to hypothesis `1` with distance 0.1. Similarly, object `2` was matched to hypothesis `2` with distance 0.2. Hypothesis `3` could not be matched to any remaining object and generated a false positive (FP). Possible assignments are computed by minimizing the total assignment distance (Kuhn-Munkres algorithm).
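As an aside, the per-frame matching can be reproduced with any min-cost assignment solver. A self-contained sketch using `scipy.optimize.linear_sum_assignment` on the first frame's distances (replacing the infeasible `nan` entry with a large finite cost is our illustration device, not necessarily how the library handles it internally):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Distance matrix of the first frame; nan marks an infeasible pairing.
dists = np.array([[0.1, np.nan, 0.3],
                  [0.5, 0.2, 0.3]])

# Replace infeasible entries with a large finite cost so the solver
# never prefers them, then solve the min-cost assignment.
costs = np.where(np.isnan(dists), 1e9, dists)
rows, cols = linear_sum_assignment(costs)

# Keep only feasible pairings; this reproduces the two MATCH events above.
pairs = [(r, c) for r, c in zip(rows, cols) if np.isfinite(dists[r, c])]
print(pairs)  # [(0, 0), (1, 1)]
```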
Continuing from above
```python
frameid = acc.update(
[1, 2],
[1],
[
[0.2],
[0.4]
]
)
print(acc.mot_events.loc[frameid])
"""
Type OId HId D
Event
2 MATCH 1 1 0.2
3 MISS 2 NaN NaN
"""
```
While object `1` was matched, object `2` could not be matched because no hypothesis was left to pair with it.
```python
frameid = acc.update(
[1, 2],
[1, 3],
[
[0.6, 0.2],
[0.1, 0.6]
]
)
print(acc.mot_events.loc[frameid])
"""
Type OId HId D
Event
4 MATCH 1 1 0.6
5 SWITCH 2 3 0.6
"""
```
Object `2` is now tracked by hypothesis `3`, leading to a track switch. Note that although a pairing `(1, 3)` with a cost less than 0.6 is possible, the algorithm prefers to continue track assignments from past frames, which is a property of MOT metrics.
### Computing metrics
Once the accumulator has been populated you can compute and display metrics. Continuing the example from above
```python
mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['num_frames', 'mota', 'motp'], name='acc')
print(summary)
"""
num_frames mota motp
acc 3 0.5 0.34
"""
```
Computing metrics for multiple accumulators or accumulator views is also possible
```python
summary = mh.compute_many(
[acc, acc.events.loc[0:1]],
metrics=['num_frames', 'mota', 'motp'],
names=['full', 'part'])
print(summary)
"""
num_frames mota motp
full 3 0.5 0.340000
part 2 0.5 0.166667
"""
```
Finally, you may want to reformat column names and how column values are displayed.
```python
strsummary = mm.io.render_summary(
summary,
formatters={'mota' : '{:.2%}'.format},
namemap={'mota': 'MOTA', 'motp' : 'MOTP'}
)
print(strsummary)
"""
num_frames MOTA MOTP
full 3 50.00% 0.340000
part 2 50.00% 0.166667
"""
```
For MOTChallenge, **py-motmetrics** provides predefined metric selectors, formatters and metric names, so that the output resembles what is produced by their Matlab `devkit`.
```python
summary = mh.compute_many(
[acc, acc.events.loc[0:1]],
metrics=mm.metrics.motchallenge_metrics,
names=['full', 'part'])
strsummary = mm.io.render_summary(
summary,
formatters=mh.formatters,
namemap=mm.io.motchallenge_metric_names
)
print(strsummary)
"""
IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP
full 83.3% 83.3% 83.3% 83.3% 83.3% 2 1 1 0 1 1 1 1 50.0% 0.340
part 75.0% 75.0% 75.0% 75.0% 75.0% 2 1 1 0 1 1 0 0 50.0% 0.167
"""
```
In order to generate an overall summary that computes the metrics jointly over all accumulators add `generate_overall=True` as follows
```python
summary = mh.compute_many(
[acc, acc.events.loc[0:1]],
metrics=mm.metrics.motchallenge_metrics,
names=['full', 'part'],
generate_overall=True
)
strsummary = mm.io.render_summary(
summary,
formatters=mh.formatters,
namemap=mm.io.motchallenge_metric_names
)
print(strsummary)
"""
IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP
full 83.3% 83.3% 83.3% 83.3% 83.3% 2 1 1 0 1 1 1 1 50.0% 0.340
part 75.0% 75.0% 75.0% 75.0% 75.0% 2 1 1 0 1 1 0 0 50.0% 0.167
OVERALL 80.0% 80.0% 80.0% 80.0% 80.0% 4 2 2 0 2 2 1 1 50.0% 0.275
"""
```
### Computing distances
Up to this point we assumed the pairwise object/hypothesis distances to be known. Usually this is not the case. You are mostly given either rectangles or points (centroids) of related objects. To compute a distance matrix from them you can use the `motmetrics.distances` module as shown below.
#### Euclidean norm squared on points
```python
# Object related points
o = np.array([
[1., 2],
[2., 2],
[3., 2],
])
# Hypothesis related points
h = np.array([
[0., 0],
[1., 1],
])
C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)
"""
[[ 5. 1.]
[ nan 2.]
[ nan 5.]]
"""
```
#### Intersection over union norm for 2D rectangles
```python
a = np.array([
[0, 0, 1, 2], # Format X, Y, Width, Height
[0, 0, 0.8, 1.5],
])
b = np.array([
[0, 0, 1, 2],
[0, 0, 1, 1],
[0.1, 0.2, 2, 2],
])
mm.distances.iou_matrix(a, b, max_iou=0.5)
"""
[[ 0. 0.5 nan]
[ 0.4 0.42857143 nan]]
"""
```
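Putting the pieces together, the resulting distance matrix feeds directly into the accumulator; a short sketch reusing the rectangles above (the object and hypothesis ids are made up for illustration):

```python
acc = mm.MOTAccumulator(auto_id=True)
dists = mm.distances.iou_matrix(a, b, max_iou=0.5)
# Row i / column j of dists corresponds to the i-th object id and
# j-th hypothesis id passed to update.
acc.update(['a1', 'a2'], ['b1', 'b2', 'b3'], dists)
```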
<a name="SolverBackends"></a>
### Solver backends
For large datasets, solving the minimum cost assignment becomes the dominant part of the runtime. **py-motmetrics** therefore supports these solvers out of the box
- `lapsolver` - https://github.com/cheind/py-lapsolver
- `lapjv` - https://github.com/gatagat/lap
- `scipy` - https://github.com/scipy/scipy/tree/master/scipy
- `ortools<9.4` - https://github.com/google/or-tools
- `munkres` - http://software.clapper.org/munkres/
A comparison for differently sized matrices is shown below (taken from [here](https://github.com/cheind/py-lapsolver#benchmarks)).
Please note that the x-axis is scaled logarithmically. Missing bars indicate excessive runtime or errors in the returned result.
![](https://github.com/cheind/py-lapsolver/raw/master/lapsolver/etc/benchmark-dtype-numpy.float32.png)
By default **py-motmetrics** will try to find a LAP solver in the order of the list above. To temporarily replace the default solver, use
```python
from motmetrics import lap

costs = ...
mysolver = lambda x: ... # solver code that returns pairings
with lap.set_default_solver(mysolver):
...
```
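As a concrete sketch of such a custom solver, the function below delegates to `scipy.optimize.linear_sum_assignment`; that solvers return paired row/column index arrays is our assumption about the contract expected by `set_default_solver`:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from motmetrics import lap

def mysolver(costs):
    # Solve the min-cost assignment and return (row_indices, col_indices).
    return linear_sum_assignment(np.asarray(costs))

with lap.set_default_solver(mysolver):
    pass  # metric computations inside this block use mysolver
```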
## Running tests
**py-motmetrics** uses the pytest framework. To run the tests, simply `cd` into the source directory and run `pytest`.
<a name="References"></a>
### References
1. Bernardin, Keni, and Rainer Stiefelhagen. "Evaluating multiple object tracking performance: the CLEAR MOT metrics."
EURASIP Journal on Image and Video Processing 2008.1 (2008): 1-10.
2. Milan, Anton, et al. "MOT16: A benchmark for multi-object tracking." arXiv preprint arXiv:1603.00831 (2016).
3. Li, Yuan, Chang Huang, and Ram Nevatia. "Learning to associate: HybridBoosted multi-target tracker for crowded scene."
   Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
4. Ristani, Ergys, et al. "Performance measures and a data set for multi-target, multi-camera tracking." ECCV 2016 Workshop on Benchmarking Multi-Target Tracking.
## Docker
### Update ground truth and test data
The `/data/train` directory should contain the MOT 2D 2015 ground truth files, and the `/data/test` directory should contain your tracker results.
You can check usage and the expected directory layout at
https://github.com/cheind/py-motmetrics/blob/master/motmetrics/apps/eval_motchallenge.py
### Build Image
```
docker build -t desired-image-name -f Dockerfile .
```
### Run Image
```
docker run desired-image-name
```
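To make the ground truth and result files from above visible inside the container, one plausible invocation mounts a host directory onto `/data` (the mount point is an assumption based on the directory layout above):

```
docker run -v /path/to/data:/data desired-image-name
```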
(credits to [christosavg](https://github.com/christosavg))
## License
```
MIT License
Copyright (c) 2017-2022 Christoph Heindl
Copyright (c) 2018 Toka
Copyright (c) 2019-2022 Jack Valmadre
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
[pandas]: http://pandas.pydata.org/
[motchallenge]: https://motchallenge.net/
[devkit]: https://motchallenge.net/devkit/