# UniPercept

## Installation

This package requires at least Python 3.11 and PyTorch 2.1. Once you have created an environment with these
dependencies, you can install `unipercept` using one of the three methods below.
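For example, one way to create such an environment (a minimal sketch using `conda`; any environment manager works, and you may prefer a CUDA-specific PyTorch build):
```bash
conda create -n unipercept python=3.11
conda activate unipercept
pip install "torch>=2.1"
```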

### Stable release (recommended)
You can install the latest stable release from PyPI via
```bash
pip install unipercept
```

### Master branch
To install the latest version, which is not guaranteed to be stable, install from GitHub using 
```bash
pip install git+https://github.com/kurt-stolle/unipercept.git
```

### Developers
If your use-case requires changes to our codebase, we recommend that you first fork this repository and download your
own fork locally. Assuming you have the GitHub CLI installed, you can clone your fork with
```bash
gh repo clone unipercept
```
Then, you can proceed to install the package in editable mode by running
```bash
pip install --editable unipercept
```
You are invited to share your improvements to the codebase through a pull request on this repository.
Before opening a pull request, please ensure your changes follow our code guidelines by running `pre-commit`
on your files prior to committing them.
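If you have not used `pre-commit` before, the hooks can be set up with the standard `pre-commit` CLI (the hooks themselves are defined by the repository's `.pre-commit-config.yaml`):
```bash
pip install pre-commit
pre-commit install            # run the hooks automatically on every commit
pre-commit run --all-files    # or check the entire tree once, manually
```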

## Training and evaluation

Models can be trained and evaluated from the CLI or through the Python API.

### CLI
To train a model with the CLI:
```bash
unicli train --config <config path>
```
Without a `<config path>`, an interactive prompt is started to assist in finding a configuration file.
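For example, to use the interactive prompt instead of passing a path explicitly:
```bash
# Omitting --config starts the interactive configuration picker
unicli train
```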

## Developer guidelines
All tests can be run via `python -m pytest`.
However, we also provide a `make` target that uses `pytest-xdist` to speed up the process:
```bash
make test
```
You may need to tune the parameters if memory problems arise during testing. 
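With `pytest-xdist`, the main parameter to tune is the worker count. A direct invocation might look like this (a sketch; the exact flags used by `make test` live in the `Makefile`):
```bash
# -n sets the number of parallel workers; lower it if tests run out of memory
python -m pytest -n 2
```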

Similarly, benchmarks are implemented using `pytest-benchmark`. To run them, use:
```bash
make benchmark
```
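Benchmarks can also be invoked directly through `pytest-benchmark`, for example to skip the regular tests (the exact invocation used by the `make` target is defined in the `Makefile`):
```bash
# Run only tests marked as benchmarks
python -m pytest --benchmark-only
```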

Coverage reports are built with `pytest-cov` and can be generated using:
```bash
make coverage
```
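Equivalently, a coverage run can be started directly with `pytest-cov` (the flags below are a sketch; the `make` target may use different reporting options):
```bash
# Measure coverage of the unipercept package and list uncovered lines
python -m pytest --cov=unipercept --cov-report=term-missing
```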

Lastly, we largely follow the same principles and methods as [Transformers](https://huggingface.co/docs/transformers) uses for testing.
For more information on using `pytest` for automated testing, check out [their documentation](https://huggingface.co/transformers/v3.4.0/testing.html).

## Acknowledgements

We would like to express our gratitude to the developers of the following open-source projects, which have significantly contributed to the success of our work:

- [PyTorch](https://github.com/pytorch/pytorch): An open-source machine learning framework that accelerates the path from research prototyping to production deployment.
- [Detectron2](https://github.com/facebookresearch/detectron2): A platform for object detection and segmentation built on PyTorch. We liberally use the packages and code from this project.
- [PyTorch3D](https://github.com/facebookresearch/pytorch3d): A library on which we base our camera projection from 2D to 3D space.
- [Panoptic FCN](https://github.com/DdeGeus/PanopticFCN_cityscapes): An implementation of the Panoptic FCN method for panoptic segmentation tasks.
- [ViP-DeepLab](https://github.com/google-research/deeplab2/blob/main/g3doc/projects/vip_deeplab.md): The baseline implementation for the depth-aware video panoptic segmentation task.
- [Panoptic Depth](https://github.com/NaiyuGao/PanopticDepth): A repository that implements the instance (de)normalization procedure that significantly improves depth estimation for _things_.

The Unified Perception implementation contains extracts of the above repositories that have been edited to suit the specific needs of this project.
Whenever possible, the original libraries are used instead.

## License

This repository is released under the MIT License. For more information, please refer to the LICENSE file.

            
