# Unified Tracking in PyTorch
This package is an object tracking framework for PyTorch. It supports multi-stage and cascaded tracking algorithms through modular configurations and interchangeable assignment algorithms. This open-source implementation is designed to support research in computer vision and machine learning.
## Table of Contents
- [Installation](#installation)
- [Usage](#usage)
- [Documentation](#documentation)
- [Contribution](#contribution)
- [Citation](#citation)
- [License](#license)
- [Recommendations](#recommendations)
## Installation
Ensure your environment meets the following requirements:
- `python >= 3.10`
- `torch >= 2.0`
Install via PyPI using the following command:
```bash
pip install unitrack
```
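To confirm the installation in the active environment, you can query the installed package version through the standard library (a minimal check; the exact version strings will vary):

```python
from importlib.metadata import version

import torch

# Both lines should succeed after installation.
print("unitrack", version("unitrack"))
print("torch", torch.__version__)
```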
## Usage
The following example demonstrates object tracking across a sequence of detections with `category` and `position` fields. The script tracks objects over 10 frames, updates the internal state buffers at each frame, and prints the assigned tracklet IDs.
```python
import torch

import unitrack

# Detections from 10 video frames having fields `category` and `position`.
dtype = torch.float32
frames = [
    {
        "category": torch.ones(1 + frame * 2, dtype=torch.long),
        "position": torch.arange(1 + frame * 2, dtype=dtype).unsqueeze(1),
    }
    for frame in range(10)
]

# Multi-stage tracker with two value fields that map the detections' data
# to the keys `key_pos` and `key_cat`, where the association stage computes
# the Euclidean distance between positions across frames and subsequently
# performs a Jonker-Volgenant assignment using the resulting cost matrix.
tracker = unitrack.MultiStageTracker(
    fields={
        "key_pos": unitrack.fields.Value(key="position"),
        "key_cat": unitrack.fields.Value(key="category"),
    },
    stages=[
        unitrack.stages.Association(
            cost=unitrack.costs.Distance("key_pos"),
            assignment=unitrack.assignment.Jonker(10),
        )
    ],
)

# Tracking memory that stores the information needed to compute the
# cost matrix in the module buffers. States are observed at each frame;
# in this case no state prediction is performed.
memory = unitrack.TrackletMemory(
    states={
        "key_pos": unitrack.states.Value(dtype=dtype),
        "key_cat": unitrack.states.Value(dtype=torch.long),
    }
)

# Iterate over frames, performing state observation, tracking and state
# propagation at every step.
for frame, detections in enumerate(frames):
    # Create a context object storing (meta)data about the current
    # frame, i.e. feature maps, instance detections and the frame number.
    ctx = unitrack.Context(None, detections, frame=frame)

    # Observe the states in memory. This can be extended to
    # run a prediction step (e.g. a Kalman filter).
    obs = memory.observe()

    # Assign detections in the current frame to observations of
    # the state memory, giving an updated observations object
    # and the remaining unassigned new detections.
    obs, new = tracker(ctx, obs)

    # Update the tracking memory. Buffers are updated to match
    # the data in `obs`, and new IDs are generated for detection
    # data that could not be assigned in `new`. The returned tensor
    # contains ordered tracklet IDs for the detections assigned
    # to the frame context `ctx`.
    ids = memory.update(ctx, obs, new)

    print(f"Assigned tracklet IDs {ids.tolist()} @ frame {frame}")
```
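The association stage above boils down to a pairwise distance matrix followed by a minimum-cost matching. The standalone sketch below illustrates that mechanism with plain `torch` and SciPy, whose `linear_sum_assignment` implements a modified Jonker-Volgenant algorithm; this is not unitrack's internal code, and the tensors are made-up examples:

```python
import torch
from scipy.optimize import linear_sum_assignment

# Positions remembered for three tracklets and detected in the new frame.
track_pos = torch.tensor([[0.0], [2.0], [5.0]])
det_pos = torch.tensor([[0.1], [4.8]])

# Euclidean distance between every (tracklet, detection) pair.
cost = torch.cdist(track_pos, det_pos)  # shape (3, 2)

# Minimum-cost assignment over the cost matrix.
rows, cols = linear_sum_assignment(cost.numpy())
print(list(zip(rows.tolist(), cols.tolist())))  # [(0, 0), (2, 1)]
```

Here tracklets 0 and 2 are matched to the two nearby detections, while tracklet 1 stays unassigned; in the tracker loop, unassigned detections would instead spawn new tracklet IDs.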
## Documentation
Technical documentation is provided inline with the source code.
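Because the documentation lives in docstrings, it can be browsed from an interactive session with Python's built-in `help`:

```python
import unitrack

help(unitrack.MultiStageTracker)
help(unitrack.TrackletMemory)
```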
## Contribution
Contributions are welcome, provided they maintain backwards compatibility.
## Citation
If you use this package in your research, please cite the following paper:
```bib
@article{unifiedperception2023,
title={Unified Perception: Efficient Depth-Aware Video Panoptic Segmentation with Minimal Annotation Costs},
author={Kurt Stolle and Gijs Dubbelman},
journal={arXiv preprint arXiv:2303.01991},
year={2023}
}
```
Access the full paper [here](https://arxiv.org/abs/2303.01991).
## License
This project is licensed under the [MIT License](LICENSE).
## Recommendations
The contents of this repository are designed for research purposes and are not recommended for use in production environments. The code has not been tested for scalability or stability in a commercial context. Please use this tool within its intended scope.