# ShapeAXI

Welcome to the official documentation for **ShapeAXI**. Dive into the cutting-edge framework designed for comprehensive shape analysis.  

---

## Table of Contents
- [Introduction](#introduction)
- [Installation](#installation)
- [Usage](#usage)
- [How does it work](#how-does-it-work)
- [Experiments & Results](#experiments--results)
- [Explainability](#explainability)
- [Contribute](#contribute)
- [Application](#application)
- [FAQs](#faqs)
- [License](#license)

---

## Introduction

**ShapeAXI** is a state-of-the-art shape analysis framework that harnesses a multi-view approach. This approach is adept at capturing 3D objects from a variety of viewpoints and analyzing them through 2D Convolutional Neural Networks (CNNs).

---

## Installation

Python 3.8 or 3.9 is required; no other versions are supported.

### Installation of shapeaxi
```bash
pip install shapeaxi
```
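
Since only Python 3.8 and 3.9 are supported, you may prefer to install ShapeAXI in a dedicated virtual environment. A minimal sketch, assuming `python3.9` is available and using the placeholder environment name `shapeaxi-env`:

```bash
# Create and activate an isolated environment with a supported Python version
python3.9 -m venv shapeaxi-env
source shapeaxi-env/bin/activate

# Install ShapeAXI inside the environment
pip install shapeaxi
```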

### Installation of pytorch3d 

This installation is specific to each machine's configuration, so the wheel URL below contains a placeholder, **{YOURVERSION}**.
First, run this line to print the value of **{YOURVERSION}** for your setup:
```bash
python -c "import sys; import torch; pyt_version_str=torch.__version__.split('+')[0].replace('.', ''); version_str=''.join([f'py3{sys.version_info.minor}_cu', torch.version.cuda.replace('.', ''), f'_pyt{pyt_version_str}']); print(version_str)"
```
It will print something like **py39_cu117_pyt201**.  
Next, run the following line, replacing **{YOURVERSION}** with the string printed above:
```bash
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{YOURVERSION}/download.html
```
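
For example, if the previous command printed **py39_cu117_pyt201**, the install command becomes:

```bash
# Wheel index for Python 3.9, CUDA 11.7, PyTorch 2.0.1 (substitute your own version string)
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py39_cu117_pyt201/download.html
```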

Finally, check the installation:
```bash
pip show pytorch3d
```
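
As an extra sanity check (not part of the official instructions), you can confirm that both packages import and report their versions:

```bash
python -c "import torch, pytorch3d; print(torch.__version__, pytorch3d.__version__)"
```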
---

## Usage

This package lets you run four different types of models:
- **Classification**
- **Regression**
- **Segmentation**
- **IcoConv**

### Running ShapeAXI

To use ShapeAXI, run the `shapeaxi` command (the entry point for the `saxi_folds.py` script) with the options below:

```bash
usage: shapeaxi [-h] [--csv CSV] [--csv_first_train CSV_FIRST_TRAIN] [--csv_first_test CSV_FIRST_TEST] [--folds FOLDS] [--valid_split VALID_SPLIT] [--group_by GROUP_BY] --nn {SaxiClassification,SaxiRegression,SaxiSegmentation,SaxiIcoClassification}
                [--csv_train CSV_TRAIN] [--csv_valid CSV_VALID] [--csv_test CSV_TEST] [--model MODEL] [--train_sphere_samples TRAIN_SPHERE_SAMPLES] [--surf_column SURF_COLUMN] [--class_column CLASS_COLUMN] [--scale_factor SCALE_FACTOR]
                [--column_scale_factor COLUMN_SCALE_FACTOR] [--profiler PROFILER] [--compute_scale_factor COMPUTE_SCALE_FACTOR] [--mount_point MOUNT_POINT] [--num_workers NUM_WORKERS] [--base_encoder BASE_ENCODER] [--base_encoder_params BASE_ENCODER_PARAMS]
                [--hidden_dim HIDDEN_DIM] [--radius RADIUS] [--image_size IMAGE_SIZE] [--lr LR] [--epochs EPOCHS] [--batch_size BATCH_SIZE] [--patience PATIENCE] [--log_every_n_steps LOG_EVERY_N_STEPS] [--tb_dir TB_DIR] [--tb_name TB_NAME]
                [--neptune_project NEPTUNE_PROJECT] [--neptune_tags NEPTUNE_TAGS] [--path_ico_right PATH_ICO_RIGHT] [--path_ico_left PATH_ICO_LEFT] [--layer LAYER] [--ico_lvl ICO_LVL] [--mean MEAN] [--std STD] [--crown_segmentation CROWN_SEGMENTATION] [--fdi FDI]
                [--csv_true_column CSV_TRUE_COLUMN] [--csv_tag_column CSV_TAG_COLUMN] [--csv_prediction_column CSV_PREDICTION_COLUMN] [--eval_metric {F1,AUC}] [--target_layer TARGET_LAYER] [--fps FPS] [--out OUT]

Automatically train and evaluate a N fold cross-validation model for Shape Analysis Explainability and Interpretability

optional arguments:
  -h, --help            show this help message and exit

Split:
  --csv CSV             CSV with columns surf,class
  --csv_first_train CSV_FIRST_TRAIN
                        CSV with column surf
  --csv_first_test CSV_FIRST_TEST
                        CSV with column surf
  --folds FOLDS         Number of folds
  --valid_split VALID_SPLIT
                        Split float [0-1]
  --group_by GROUP_BY   GroupBy criteria in the CSV. For example, SubjectID in case the same subjects has multiple timepoints/data points and the subject must belong to the same data split

Train:
  --nn {SaxiClassification,SaxiRegression,SaxiSegmentation,SaxiIcoClassification}
                        Neural network name : SaxiClassification, SaxiRegression, SaxiSegmentation, SaxiIcoClassification
  --csv_train CSV_TRAIN
                        CSV with column surf
  --csv_valid CSV_VALID
                        CSV with column surf
  --csv_test CSV_TEST   CSV with column surf
  --model MODEL         Model to continue training
  --train_sphere_samples TRAIN_SPHERE_SAMPLES
                        Number of samples for the training sphere
  --surf_column SURF_COLUMN
                        Surface column name
  --class_column CLASS_COLUMN
                        Class column name
  --scale_factor SCALE_FACTOR
                        Scale factor for the shapes
  --column_scale_factor COLUMN_SCALE_FACTOR
                        Specify the name if there already is a column with scale factor in the input file
  --profiler PROFILER   Profiler
  --compute_scale_factor COMPUTE_SCALE_FACTOR
                        Compute a global scale factor for all shapes in the population.
  --mount_point MOUNT_POINT
                        Dataset mount directory
  --num_workers NUM_WORKERS
                        Number of workers for loading
  --base_encoder BASE_ENCODER
                        Base encoder for the feature extraction
  --base_encoder_params BASE_ENCODER_PARAMS
                        Base encoder parameters that are passed to build the feature extraction
  --hidden_dim HIDDEN_DIM
                        Hidden dimension for features output. Should match with output of base_encoder. Default value is 512
  --radius RADIUS       Radius of icosphere
  --image_size IMAGE_SIZE
                        Image resolution size
  --lr LR               Learning rate
  --epochs EPOCHS       Max number of epochs
  --batch_size BATCH_SIZE
                        Batch size
  --patience PATIENCE   Patience for early stopping
  --log_every_n_steps LOG_EVERY_N_STEPS
                        Log every n steps
  --tb_dir TB_DIR       Tensorboard output dir
  --tb_name TB_NAME     Tensorboard experiment name
  --neptune_project NEPTUNE_PROJECT
                        Neptune project
  --neptune_tags NEPTUNE_TAGS
                        Neptune tags
  --path_ico_right PATH_ICO_RIGHT
                        Path to ico right (default: ../3DObject/sphere_f327680_v163842.vtk)
  --path_ico_left PATH_ICO_LEFT
                        Path to ico left (default: ../3DObject/sphere_f327680_v163842.vtk)
  --layer LAYER         Layer, choose between 'Att','IcoConv2D','IcoConv1D','IcoLinear' (default: IcoConv2D)
  --ico_lvl ICO_LVL     Ico level, minimum level is 1 (default: 2)
  --mean MEAN           Mean (default: 0)
  --std STD             Standard deviation (default: 0.005)

Prediction group:
  --crown_segmentation CROWN_SEGMENTATION
                        Isolation of each different tooth in a specific vtk file
  --fdi FDI             numbering system. 0: universal numbering; 1: FDI world dental Federation notation

Test group:
  --csv_true_column CSV_TRUE_COLUMN
                        Which column to do the stats on
  --csv_tag_column CSV_TAG_COLUMN
                        Which column has the actual names
  --csv_prediction_column CSV_PREDICTION_COLUMN
                        csv true class
  --eval_metric {F1,AUC}
                        Score you want to choose for picking the best model : F1 or AUC

Explainability group:
  --target_layer TARGET_LAYER
                        Target layer for explainability
  --fps FPS             Frames per second

Output:
  --out OUT             Output
```  

---

## How does it work

### 1. Compute scale factor

The first step is to compute a global scale factor for all shapes in the population.  
To enable it, add ```--compute_scale_factor 1```.  
If your CSV file already has a **scale_factor** column, point to it instead with ```--column_scale_factor name_of_your_column```.  
If you do not want a global scale factor, omit both options and this step is skipped. Both variants are sketched below.  
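
For illustration, the two variants might look like the following (the CSV, column name, and output directory are placeholders; other options are omitted, see the full examples further below):

```bash
# Compute a global scale factor for the whole population
shapeaxi --csv your_data.csv --nn SaxiClassification --compute_scale_factor 1 --out /path/to/output

# Reuse a scale factor already stored in a column of the input CSV
shapeaxi --csv your_data.csv --nn SaxiClassification --column_scale_factor my_scale_column --out /path/to/output
```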


### 2. Split

Whichever model you choose, you must specify your input.  
You can provide either a single ```--csv``` file, or both ```--csv_first_train``` and ```--csv_first_test```. With a single CSV, ShapeAXI splits the dataset into train and test sets (80%/20% by default; use ```--valid_split```, a float between 0 and 1, to change the ratio, for example to 85%/15%).  
If you provide the two CSVs instead, this first split is skipped.  
ShapeAXI then splits the training set into train, validation, and test subsets for each fold, and trains, tests, and evaluates one model per fold.  
Here is an example of the contents of your CSV file (a sample file and invocation are sketched after the table):

| surf                                 | class  |
|--------------------------------------|--------|
| path/to/shape1.vtk                   | class1 |
| path/to/shape2.stl                   | class2 |
| path/to/shape3.vtk                   | class1 |
| ...                                  | ...    |

**surf**: This column holds the file paths to the 3D shape objects. The tool supports the formats `.vtk` and `.stl`.  
**class**: This column indicates the class of the 3D object.
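
As a concrete sketch (paths, class labels, and the output directory are placeholders), you could write such a CSV and let ShapeAXI handle all the splitting:

```bash
# Minimal CSV with the required surf and class columns
cat > your_data.csv <<EOF
surf,class
path/to/shape1.vtk,class1
path/to/shape2.stl,class2
path/to/shape3.vtk,class1
EOF

# Single-CSV input: ShapeAXI performs the initial train/test split itself,
# then builds per-fold train/validation/test subsets for 5 folds
shapeaxi --csv your_data.csv --nn SaxiClassification --folds 5 --surf_column surf --class_column class --out /path/to/output
```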

### 3. Training 

For this step, there is one training, one validation, and one test dataset per fold. You can configure the training: batch size, maximum number of epochs, and the model to use.  
If you prefer, you can supply your own ```--csv_train```, ```--csv_valid``` and ```--csv_test``` files for each fold, as sketched below.
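
A hypothetical per-fold invocation, with placeholder file names, might look like this:

```bash
# Use your own fold-specific CSVs instead of the automatic split
shapeaxi --nn SaxiClassification --csv_train fold0_train.csv --csv_valid fold0_valid.csv --csv_test fold0_test.csv --epochs 40 --batch_size 8 --surf_column surf --class_column class --out /path/to/output
```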

### 4. Test, evaluation, aggregate and explainability

Finally, ShapeAXI tests each fold's model during the evaluation step.  
It then picks the best model across folds based on the F1 or AUC score; choose the metric with ```--eval_metric F1``` or ```--eval_metric AUC```.  
The best model is then tested on the test set held out by the first split.  
ShapeAXI will produce:
- A confusion matrix.
- ROC curves.
- Explainability maps for each shape in the dataset.


### Examples

#### Classification (--nn SaxiClassification)

```bash
shapeaxi --csv your_data.csv --nn SaxiClassification --epochs 40 --folds 5 --mount_point /path/to/your/data/directory --out /path/to/your/output_directory --compute_scale_factor 1 --surf_column surf --class_column class --batch_size 8
```

#### IcoConv (--nn SaxiIcoClassification)

```bash
shapeaxi --csv your_data.csv --nn SaxiIcoClassification --epochs 30 --folds 3 --mount_point /path/to/your/data/directory --out /path/to/your/output_directory --path_ico_left /path/to/vtk/left/hemisphere --path_ico_right /path/to/vtk/right/hemisphere --class_column ASD_administered
```
For this model, you **must** specify the paths to both the right and the left hemisphere VTK data.

#### Regression (--nn SaxiRegression)

```bash
shapeaxi --csv your_data.csv --nn SaxiRegression --epochs 40 --folds 5 --mount_point /path/to/your/data/directory --out /path/to/your/output_directory --compute_scale_factor 1 --surf_column surf --class_column class --batch_size 8
```

#### Segmentation (--nn SaxiSegmentation)

```bash
shapeaxi --csv your_data.csv --nn SaxiSegmentation --epochs 40 --folds 5 --mount_point /path/to/your/data/directory --out /path/to/your/output_directory --eval_metric AUC
```



## Experiments & Results

**ShapeAXI** has been rigorously tested across multiple domains. Below is a summary of our key experiments:

### Condyles Classification

- **Categories**: Healthy vs. Degenerative states
- **Accuracy**: ~79.78%

![Condyles Classification Results Placeholder](doc/images/Deg_classification_aggregate_long_exists_aggregate_prediction_norm_confusion.png)
![Condyles Classification ROC](doc/images/Deg_classification_aggregate_long_exists_aggregate_prediction_roc.png)

### Cleft Patients Severity Classification

- **Classes**: Severity levels 0 to 3
- **Accuracy**: ~81.58%

![Cleft Patients Severity Classification Results Placeholder](doc/images/01.Final_ClassificationALLfold_test_prediction_norm_confusion.png)
![Cleft Patients Severity Classification ROC](doc/images/01.Final_ClassificationALLfold_test_prediction_roc.png)

---

## Explainability

In **ShapeAXI**, we prioritize transparency and understanding. The explainability feature of our framework produces heat-maps that provide insight into the model's classification rationale.

https://github.com/DCBIA-OrthoLab/ShapeAXI/assets/7086191/120b0095-5f2d-4f0d-b650-a0587a33e067

https://github.com/DCBIA-OrthoLab/ShapeAXI/assets/7086191/2c635250-624f-4cce-b150-4d5507b398b4

---

## Contribute

We welcome community contributions to **ShapeAXI**. For those keen on enhancing this tool, please adhere to the steps below:

1. **Fork** the repository.
2. Create your **feature branch** (`git checkout -b feature/YourFeature`).
3. Commit your changes (`git commit -am 'Add some feature'`).
4. Push to the branch (`git push origin feature/YourFeature`).
5. Open a **pull request**.

For a comprehensive understanding of our contribution process, consult our [Contribution Guidelines](path/to/contribution_guidelines.md).

---


## Application 

One application of this tool is to run predictions on your own data with a pretrained model.  
Documentation for the companion package, **dentalmodelseg**, is available here:

[DentalModelSeg](DentalModelSeg.md)

## FAQs

### What is ShapeAXI?

**Answer:** ShapeAXI is an innovative shape analysis framework that employs a multi-view approach, rendering 3D objects from varied perspectives and analyzing them using 2D Convolutional Neural Networks (CNNs).

---

### How do I install and set up ShapeAXI?

**Answer:** Detailed installation and setup instructions can be found in the 'Installation' section of our documentation. Simply follow the steps mentioned, and you should have ShapeAXI up and running in no time.

---

### Can I use ShapeAXI for my own datasets?

**Answer:** Absolutely! ShapeAXI is designed to be versatile. You can use it on a wide variety of shape datasets. Ensure your data is in the required format as outlined in the 'Usage' section.

---

### How does ShapeAXI handle explainability?

**Answer:** ShapeAXI offers a unique approach to explainability, providing heat-maps for each class across every shape. These visualizations provide insights into the underlying object characteristics and the classification rationale.

---

### Are there any known limitations of ShapeAXI?

**Answer:** Like all models and frameworks, ShapeAXI has its constraints. It is optimized for the datasets and tasks it has been trained and tested on. While it offers versatility across a range of datasets, results may vary based on the quality and type of data. We continually work on refining and improving ShapeAXI to overcome any limitations.

---

### How can I contribute to ShapeAXI's development?

**Answer:** We welcome contributions! Please refer to the 'Contribute' section of our documentation for guidelines on how you can contribute.

---

### Who do I contact for technical support or questions about ShapeAXI?

**Answer:** For technical support or any questions, please create a new issue in our GitHub repository.

---

### Will there be future updates to ShapeAXI?

**Answer:** Yes, we plan on continuously improving and expanding ShapeAXI based on user feedback, new research, and technological advancements. Stay tuned to our repository for updates.

---

## License

**ShapeAXI** is under the [APACHE 2.0](LICENSE) license.

---

**ShapeAXI Team**: For further details, inquiries, or suggestions, feel free to [contact us](mailto:juan_prieto@med.unc.edu).

            
