![UVal](https://gitlab.com/smithsdetection/uval/-/raw/main/icon_uval.png) [![Smiths Detection](https://gitlab.com/smithsdetection/uval/-/raw/main/SD_logo.png)](https://www.smithsdetection.com/ "Redirect to homepage")
---
UVal - Unified eValuation framework for 3D X-ray data
---
> This Python package provides a high-level interface to facilitate the evaluation of object detection and segmentation algorithms that operate on 3D volumetric data.
---
- There is a growing need for high-performance detection algorithms operating on 3D data, and being able to compare them is essential. So far, there has been no trivial solution for a straightforward comparison between different 3D detection algorithms.
- This framework addresses that problem by introducing a simple, standard layout of the popular HDF5 data format as its input.
- Each detection algorithm can export its results and ground-truth data according to the defined layout principles. UVal can then evaluate the performance and produce common comparison metrics.
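To make the layout idea concrete: an HDF5 file is a tree of groups (dict-like) and datasets (array-like), addressed by slash-separated paths. The sketch below mocks such a tree with plain Python dicts purely for illustration; the group, dataset, and attribute names here are hypothetical and are *not* UVal's actual layout (see the config README linked below for the real specification).

```python
# Hypothetical sketch of an HDF5-style detection layout, mocked with dicts.
# The names "volume_001", "det_0", "bbox", etc. are illustrative assumptions,
# not the layout UVal actually defines.
detections = {
    "volume_001": {                            # one group per scanned CT volume
        "det_0": {
            "bbox": [10, 20, 30, 40, 50, 60],  # x1, y1, z1, x2, y2, z2 (voxels)
            "label": "threat",
            "score": 0.87,
        },
    },
}

def lookup(tree, path):
    """Resolve an HDF5-style slash path such as 'volume_001/det_0/score'."""
    node = tree
    for part in path.split("/"):
        node = node[part]
    return node

print(lookup(detections, "volume_001/det_0/score"))  # -> 0.87
```

With a real file, the same path-based access would go through an HDF5 library such as `h5py` instead of a dict.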
| ![](3d_vol.gif "3d CT Volume") | ![](https://gitlab.com/smithsdetection/uval/-/raw/main/dets_anim.gif "Detections") |
| :---: | :---: |
## Installation (non-development)
If you are only using UVal rather than developing it, you can install it as a PyPI package (requires **Python 3.8** or higher); simply run:
```shell
pip install uval
```
If you would like the UVal installation to be independent of a specific Python environment, use `pipx` instead of `pip`.
To run the code, type:
```shell
uval --config-file ${workspaceFolder}/output/example/config.yaml
```
For an example of the outputs see [here](https://gitlab.com/smithsdetection/uval/-/tree/main/output/example) and the report [here](https://gitlab.com/smithsdetection/uval/-/raw/main/output/example/report.pdf).
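At the core of detection metrics like those in the report lies a volumetric overlap measure. As a rough illustration of the idea (a minimal sketch, not UVal's actual implementation), a 3D intersection-over-union for axis-aligned boxes can be computed like this:

```python
def iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes.

    Boxes are (x1, y1, z1, x2, y2, z2) in voxel coordinates.
    Illustrative sketch only; not UVal's actual implementation.
    """
    # Overlap extent along each axis (zero if the boxes are disjoint).
    dx = max(0, min(a[3], b[3]) - max(a[0], b[0]))
    dy = max(0, min(a[4], b[4]) - max(a[1], b[1]))
    dz = max(0, min(a[5], b[5]) - max(a[2], b[2]))
    inter = dx * dy * dz
    vol_a = (a[3] - a[0]) * (a[4] - a[1]) * (a[5] - a[2])
    vol_b = (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    union = vol_a + vol_b - inter
    return inter / union if union else 0.0

print(iou_3d((0, 0, 0, 10, 10, 10), (0, 0, 0, 10, 10, 10)))  # identical boxes -> 1.0
```

Thresholding such an overlap score is the usual way detections are matched to ground truth before precision/recall-style metrics are computed.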
For the details of each entry in the config file please see [here](https://gitlab.com/smithsdetection/uval/-/raw/main/src/uval/config/README.md).
## Development setup
* First, clone UVal's git repository by executing the following command:
```shell
git clone https://gitlab.com/smithsdetection/uval.git
```
* You will need a `python >= 3.8` environment to develop UVal.
We recommend Miniforge (a minimal conda installer) for its ease of use:
```shell
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3-$(uname)-$(uname -m).sh
```
For x86_64 (amd64), use "Miniforge3-Linux-x86_64". For other operating systems, see [here](https://github.com/conda-forge/miniforge).
* Close and reopen your terminal session.
* Set up a `conda` virtual environment with `poetry` using the following commands:
```shell
mamba env create -f environment.yml
mamba activate UVALENV
poetry install
pre-commit install
```
Alternatively, you can create your own conda environment from scratch and then install `poetry` and `pre-commit` yourself.
## Example code
* A step-by-step walkthrough for reading and evaluating data with UVal is available as a Jupyter notebook:
* [jupyter notebook demo](https://gitlab.com/smithsdetection/uval/-/blob/main/demo/sample-data-evaluation.ipynb)
------
* **Hint:** Before running the demo Jupyter notebook walkthrough, the following steps must be performed:
* The `ipykernel` conda package must be installed:
```shell
conda install -c anaconda ipykernel
```
* The `uvalenv` environment must be added as an ipykernel:
```shell
python3 -m ipykernel install --user --name uvalenv --display-name "uvalenv Python38"
```
* The `uvalenv Python38` kernel, which includes all the required Python packages, must be selected in the `jupyter` environment to run the code.
------
## Documentation
Read the docs: https://uval.readthedocs.io/
## Release History
* 0.1.x
* First publicly released, ready-to-use version of UVal
## Meta
Smiths Detection – [@Twitter](https://twitter.com/smithsdetection) – uval@smithsdetection.com
``UVal`` is released under the [GPL V3.0 license](LICENSE).
## Contributing
1. Fork it (<https://gitlab.com/smithsdetection/uval/fork>)
2. Create your feature branch (`git checkout -b feature/fooBar`)
3. Commit your changes (`git commit -am 'Add some fooBar'`)
4. Push to the branch (`git push origin feature/fooBar`)
5. Create a new merge request
## Citing UVal
If you use UVal in your research or wish to refer to its results, please use the following BibTeX entry:
```bibtex
@misc{smithsdetection2022uval,
author = {Fischer, Philipp and Heilmann, Geert and Razavi, Mohammad and Saeedan, Faraz},
title = {UVal},
howpublished = {\url{https://gitlab.com/smithsdetection/uval}},
year = {2022}
}
```