unitrack 4.7.0

A multi-stage object tracking framework

- Author: Kurt Stolle <k.h.w.stolle@gmail.com>
- Requires Python: >=3.10
- Uploaded: 2024-01-26 13:31:55
- Keywords: perception, computer vision, deep learning, object detection, instance segmentation, semantic segmentation
# Unified Tracking in PyTorch

This package is an object tracking framework for PyTorch. It supports multi-stage and cascaded tracking algorithms built from modular stages, cost functions, and assignment algorithms. This open-source implementation is designed to support research in computer vision and machine learning.

## Table of Contents
- [Installation](#installation)
- [Usage](#usage)
- [Documentation](#documentation)
- [Contribution](#contribution)
- [Citation](#citation)
- [License](#license)
- [Recommendations](#recommendations)

## Installation

Ensure your environment meets the following requirements:

- `python >= 3.10`
- `torch >= 2.0`

Install via PyPI using the following command:

```bash
pip install unitrack
```
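
After installation, a quick import check helps confirm that the environment satisfies the requirements (a minimal sketch; no particular version attribute on `unitrack` itself is assumed):

```python
# Sanity check: both packages should import, and torch should report >= 2.0.
import torch
import unitrack  # noqa: F401

print("torch", torch.__version__)
```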

## Usage

The following example demonstrates object tracking across a sequence with detections that have `category` and `position` fields. This script tracks objects, updates internal state buffers for each frame, and prints the assigned IDs.

```python
import torch
import unitrack

# Detections from 10 video frames having fields `category` and `position`.
dtype = torch.float32  # dtype used for positions and their stored state
frames = [
    {
        "category": torch.ones(1 + frame * 2, dtype=torch.long),
        "position": (torch.arange(1 + frame * 2, dtype=dtype)).unsqueeze(1),
    }
    for frame in range(0, 10)
]

# Multi-stage tracker with two value fields that map the detections' data
# to the keys `key_pos` and `key_cat`, where the association stage computes
# the Euclidean distance of the positions between frames and subsequently
# performs a Jonker-Volgenant assignment using the resulting cost matrix.
tracker = unitrack.MultiStageTracker(
    fields={
        "key_pos": unitrack.fields.Value(key="position"),
        "key_cat": unitrack.fields.Value(key="category"),
    },
    stages=[
        unitrack.stages.Association(
            cost=unitrack.costs.Distance("key_pos"),
            assignment=unitrack.assignment.Jonker(10),
        )
    ],
)

# Tracking memory that stores the information required to compute the
# cost matrix in the module buffers. States are observed at each frame;
# in this example, no state prediction is performed.
memory = unitrack.TrackletMemory(
    states={
        "key_pos": unitrack.states.Value(dtype),
        "key_cat": unitrack.states.Value(dtype=torch.long),
    }
)

# Iterate over frames, performing state observation, tracking and state
# propagation at every step.
for frame, detections in enumerate(frames):
    # Create a context object storing (meta)data about the current
    # frame, i.e. feature maps, instance detections and the frame number.
    ctx = unitrack.Context(None, detections, frame=frame)
    
    # Observe the states in memory. This can be extended to run a
    # prediction step (e.g. a Kalman filter).
    obs = memory.observe()
    
    # Assign detections in the current frame to observations of
    # the state memory, giving an updated observations object
    # and the remaining unassigned new detections.
    obs, new = tracker(ctx, obs)
    
    # Update the tracking memory. Buffers are updated to match the data
    # in `obs`, and new IDs are generated for the detections in `new` that
    # could not be assigned. The returned tensor contains the ordered
    # tracklet IDs for the detections assigned to the frame context `ctx`.
    ids = memory.update(ctx, obs, new)

    print(f"Assigned tracklet IDs {ids.tolist()} @ frame {frame}")
```
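
Conceptually, the association stage above builds a pairwise cost matrix between the stored tracklet states and the new detections, then solves a linear assignment problem over it. The sketch below illustrates that idea with plain `torch` and SciPy's `linear_sum_assignment` (a modified Jonker-Volgenant solver); it is a conceptual stand-in rather than unitrack's internal implementation, and the gating threshold `max_cost` is a hypothetical parameter chosen for the example.

```python
import torch
from scipy.optimize import linear_sum_assignment


def associate(track_pos: torch.Tensor, det_pos: torch.Tensor, max_cost: float = 10.0):
    """Toy association: Euclidean cost matrix + linear assignment with gating."""
    # Pairwise Euclidean distances, shape (num_tracks, num_detections).
    cost = torch.cdist(track_pos, det_pos)

    # Solve the linear assignment problem over the cost matrix.
    rows, cols = linear_sum_assignment(cost.numpy())

    # Keep only assignments below the gating threshold; the rest are new detections.
    matches = [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    matched_dets = {c for _, c in matches}
    unmatched = [c for c in range(det_pos.shape[0]) if c not in matched_dets]
    return matches, unmatched


matches, new = associate(torch.tensor([[0.0], [2.0]]), torch.tensor([[0.1], [2.2], [9.0]]))
print(matches, new)  # e.g. [(0, 0), (1, 1)] [2]
```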

## Documentation

Technical documentation is provided inline with the source code.
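
Since the documentation lives alongside the code, the docstrings of the main building blocks can be browsed directly from an interactive session, for example:

```python
import unitrack

# Print the inline documentation of the classes used in the usage example above.
help(unitrack.MultiStageTracker)
help(unitrack.TrackletMemory)
```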

## Contribution

Contributions that maintain backwards compatibility are welcome.

## Citation

If you use this package in your research, please cite the following paper:

```bib
@article{unifiedperception2023,
    title={Unified Perception: Efficient Depth-Aware Video Panoptic Segmentation with Minimal Annotation Costs},
    author={Kurt Stolle and Gijs Dubbelman},
    journal={arXiv preprint arXiv:2303.01991},
    year={2023}
}
```

Access the full paper [here](https://arxiv.org/abs/2303.01991).

## License

This project is licensed under the [MIT License](LICENSE).

## Recommendations

The contents of this repository are designed for research purposes and are not recommended for use in production environments. They have not been tested for scalability or stability in a commercial context. Please use this tool within its intended scope.



            
