[//]: # (![alt text](https://github.com/caumente/AUDIT/blob/main/src/app/util/images/AUDIT_big.jpeg))
![alt text](https://github.com/caumente/AUDIT/blob/main/src/app/util/images/AUDIT_medium.jpeg)
<a href="https://github.com/caumente/AUDIT" title="Go to GitHub repo"><img src="https://img.shields.io/static/v1?label=caumente&message=AUDIT&color=e78ac3&logo=github" alt="caumente - AUDIT"></a>
<a href="https://github.com/caumente/AUDIT"><img src="https://img.shields.io/github/stars/caumente/AUDIT?style=social" alt="stars - AUDIT"></a>
<a href="https://github.com/caumente/AUDIT"><img src="https://img.shields.io/github/forks/caumente/AUDIT?style=social" alt="forks - AUDIT"></a>
<a href="https://github.com/caumente/audit/releases/"><img src="https://img.shields.io/github/release/caumente/audit?include_prereleases=&sort=semver&color=e78ac3" alt="GitHub release"></a>
<a href="#license"><img src="https://img.shields.io/badge/License-Apache_2.0-e78ac3" alt="License"></a>
<a href="https://github.com/caumente/audit/issues"><img src="https://img.shields.io/github/issues/caumente/audit" alt="issues - AUDIT"></a>
## Summary
AUDIT (Analysis & evalUation Dashboard of artIficial inTelligence) is a tool designed to analyze and
visualize brain MRI data and models, and to detect biases in them. It provides utilities for loading and processing MRI data,
extracting relevant features, and visualizing model performance and biases in predictions. AUDIT offers the
following features:
- **Data management**: Easily work with MRI data from various sources.
- **Feature extraction**: Extract relevant features from MRI images and their segmentations for analysis.
- **Visualization**: Visualize model performance, including false positives and negatives, using interactive plots.
- **Model robustness**: Assess the robustness of the model by evaluating its performance across different datasets and conditions.
- **Bias detection**: Identify potential biases in model predictions and performance.
- **Longitudinal analysis**: Track your model performance over different time points.
Details of our work are provided in [*our paper*](...........), **AUDIT**. We hope
users will use *AUDIT* to gain novel insights into the field of brain tumor segmentation.
## Usage
- **Home Page**: The main landing page of the tool.
- **Univariate Analysis**: Analysis of individual variables to understand their distributions and characteristics.
- **Multivariate Analysis**: Examination of multiple variables simultaneously to explore relationships and patterns.
- **Segmentation Error Matrix**: A table displaying the errors associated with different segmentation tasks.
- **Model Performance Analysis**: Evaluation of the effectiveness and accuracy of a single model.
- **Pairwise Model Performance Comparison**: Comparison of performance metrics between two different models.
- **Multi-model Performance Comparison**: Comparative analysis of performance metrics across multiple models.
- **Longitudinal Measurements**: Analysis of data collected over time to observe trends and changes.
- **Subjects Exploration**: Detailed examination of individual subjects within the dataset.
## Web AUDIT
The latest released version of **AUDIT** is hosted at https://audit.streamlitapp.com, where you can get an online overview of its functionality.
## Getting Started
### 1.1. Installation via pip (not available yet)
```bash
pip install audit
```
### 1.2. Installation via AUDIT repository
### 1.2.1. Using Anaconda
(Recommended) Create an isolated Anaconda environment:
```bash
conda create -n audit_env python=3.10
conda activate audit_env
```
Clone the repository:
```bash
git clone https://github.com/caumente/AUDIT.git
cd AUDIT
```
Install the required packages:
```bash
pip install -r requirements.txt
```
### 1.2.2. Using Poetry
The _poetry_ library must be installed in your environment to follow this installation method.
Clone the repository:
```bash
git clone https://github.com/caumente/AUDIT.git
cd AUDIT
```
Install the dependencies:
```bash
poetry install
```
Activate the virtual environment:
```bash
poetry shell
```
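Alternatively, if you prefer not to activate a shell, Poetry can execute project commands directly inside its managed environment, e.g. for the extractor script described in section 3:
```bash
# Run a project command inside the Poetry-managed environment without activating a shell
poetry run python src/feature_extractor.py
```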
### 2. Configuration
Edit the config files in the `./src/configs/` directory to set up the paths for data loading and other settings:
<details>
<summary><strong>2.1. Feature extractor config</strong></summary>
```yaml
# Paths to all the datasets
data_paths:
  dataset_1: '/home/user/AUDIT/datasets/dataset_1/dataset_1_images'
  dataset_N: '/home/user/AUDIT/datasets/dataset_N/dataset_N_images'

# Mapping of labels to their numeric values
labels:
  BKG: 0
  EDE: 3
  ENH: 1
  NEC: 2

# List of features to extract
features:
  statistical: true
  texture: false
  spatial: false
  tumor: false

# Longitudinal study settings
#longitudinal:
#  dataset_N:
#    pattern: "_"        # Pattern used for splitting the filename
#    longitudinal_id: 1  # Index position for the subject ID after splitting the filename
#    time_point: 2       # Index position for the time point after splitting the filename

# Path where extracted features will be saved
output_path: '/home/user/AUDIT/outputs/features'
```
</details>
<details>
<summary><strong>2.2. Metric extractor config</strong></summary>
```yaml
# Path to the raw dataset
data_path: '/home/user/AUDIT/datasets/dataset_1/dataset_1_images'

# Paths to model predictions
model_predictions_paths:
  model_1: '/home/user/AUDIT/datasets/dataset_1/dataset_1_seg/model_1'
  model_M: '/home/user/AUDIT/datasets/dataset_1/dataset_1_seg/model_M'

# Mapping of labels to their numeric values
labels:
  BKG: 0
  EDE: 3
  ENH: 1
  NEC: 2

# List of metrics to compute
metrics:
  dice: true
  jacc: false
  accu: false
  prec: false
  sens: false
  spec: false
  haus: false

# Library used for computing all the metrics
package: custom
calculate_stats: false

# Path where output metrics will be saved
output_path: '/home/user/AUDIT/outputs/metrics'

# Filename for the extracted information
filename: 'dataset_1'
```
</details>
<details>
<summary><strong>2.3. APP config</strong></summary>
```yaml
# Mapping of labels to their numeric values
labels:
  BKG: 0
  EDE: 3
  ENH: 1
  NEC: 2

# Root paths for datasets, extracted features, and extracted metrics
datasets_path: '/home/user/AUDIT/datasets'
features_path: '/home/user/AUDIT/outputs/features'
metrics_path: '/home/user/AUDIT/outputs/metrics'

# Paths for raw datasets
raw_datasets:
  dataset_1: "${datasets_path}/dataset_1/dataset_1_images"
  dataset_N: "${datasets_path}/dataset_N/dataset_N_images"

# Paths for feature extraction CSV files
features:
  dataset_1: "${features_path}/extracted_information_dataset_1.csv"
  dataset_N: "${features_path}/extracted_information_dataset_N.csv"

# Paths for metric extraction CSV files
metrics:
  dataset_1: "${metrics_path}/extracted_information_dataset_1.csv"
  dataset_N: "${metrics_path}/extracted_information_dataset_N.csv"

# Paths for model predictions
predictions:
  dataset_1:
    model_1: "${datasets_path}/dataset_1/dataset_1_seg/model_1"
    model_M: "${datasets_path}/dataset_1/dataset_1_seg/model_M"
  dataset_N:
    model_1: "${datasets_path}/dataset_N/dataset_N_seg/model_1"
    model_M: "${datasets_path}/dataset_N/dataset_N_seg/model_M"
```
</details>
### 3. Run AUDIT backend
Use the following commands to run the *Feature extractor* and *Metric extractor* scripts:
```bash
python src/feature_extractor.py
```
```bash
python src/metric_extractor.py
```
A _logs_ folder is created after running each script to keep track of the execution. All output files
are stored in the folder defined in the corresponding config file (by default, the _outputs_ folder).
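For orientation, this is the layout the app configuration above expects to find under the default output paths (an illustrative sketch, not output generated by AUDIT itself; exact filenames depend on your config settings):
```bash
# Illustrative layout based on the example configs above
/home/user/AUDIT/outputs
├── features
│   └── extracted_information_dataset_1.csv   # referenced by the APP config's `features` entry
└── metrics
    └── extracted_information_dataset_1.csv   # referenced by the APP config's `metrics` entry
```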
### 4. Run AUDIT app
Use the following Streamlit command to launch the app and start exploring the data:
```bash
streamlit run src/app/APP.py
```
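By default Streamlit serves the app on port 8501 and prints the local URL in the terminal. If that port is busy, or the app runs on a remote machine, standard Streamlit server flags can be added (optional example):
```bash
# Optional: bind to a specific port and make the app reachable from other machines
streamlit run src/app/APP.py --server.port 8502 --server.address 0.0.0.0
```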
### 5. Additional configurations
#### 5.1. ITK-SNAP
AUDIT can open cases with ITK-SNAP directly from the different dashboards while you explore the data. However,
ITK-SNAP must be installed and configured beforehand. Below is the minimal setup needed for each operating system:
<details>
<summary><strong>5.1.1. On Mac OS</strong></summary>
```bash
# Assumed typical setup: ITK-SNAP ships a command-line binary inside the app bundle.
# Either use the app's "Help > Install Command-Line Tools..." menu entry, or link it manually:
sudo ln -s /Applications/ITK-SNAP.app/Contents/bin/itksnap /usr/local/bin/itksnap
```
</details>
<details>
<summary><strong>5.1.2. On Linux OS</strong></summary>
```bash
# Assumed typical setup: install ITK-SNAP and make sure the itksnap binary is on your PATH.
sudo apt-get install itksnap                 # Debian/Ubuntu package
# or, for a manually downloaded bundle, add its bin directory to PATH, e.g.:
export PATH="$PATH:/opt/itksnap/bin"
```
</details>
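Once the `itksnap` command is on your PATH, you can verify the setup manually outside AUDIT; ITK-SNAP's command line takes a main image (`-g`) and a segmentation (`-s`). The file paths below are placeholders:
```bash
# Quick manual check that ITK-SNAP opens an image together with its segmentation
itksnap -g /path/to/subject_t1.nii.gz -s /path/to/subject_seg.nii.gz
```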
## Authors
Please feel free to contact us with any issues, comments, or questions.
#### Carlos Aumente
- Email: <UO297103@uniovi.es>
- GitHub: https://github.com/caumente
#### Mauricio Reyes
#### Michael Muller
#### Jorge Díez
#### Beatriz Remeseiro
## License
Apache License 2.0