# BLAST-CT
[![DOI](https://zenodo.org/badge/246262662.svg)](https://zenodo.org/badge/latestdoi/246262662)
**B**rain **L**esion **A**nalysis and **S**egmentation **T**ool for **C**omputed **T**omography - Version 2.0.0
This repository provides our deep learning image segmentation tool for traumatic brain injuries in 3D CT scans.
Please consider citing our article when using our software:
> Monteiro M, Newcombe VFJ, Mathieu F, Adatia K, Kamnitsas K, Ferrante E, Das T, Whitehouse D, Rueckert D, Menon DK, Glocker B. **[Multi-class semantic segmentation and quantification of traumatic brain injury lesions on head CT using deep learning – an algorithm development and multi-centre validation study](https://www.thelancet.com/journals/landig/article/PIIS2589-7500(20)30085-6/fulltext)**. _The Lancet Digital Health_ (2020).
Monteiro and Newcombe are equal first authors. Menon and Glocker are equal senior authors.
**NOTE:** This software is not intended for clinical use.
![Examples for automatic lesion segmentation](blast-ct.png)
## Source code
The provided source code enables training and testing of our convolutional neural network designed for multi-class brain lesion segmentation in head CT.
Additionally, it allows for localisation of the segmented lesions, i.e. calculation of the volume of lesion per brain region (the list of regions is in `blast_ct/data/localisation_files/atlas_labels.csv`).
**NOTE:** Localisation is based on linear image registration and therefore does not achieve voxel-wise precision.
## Pre-trained model
In version 2.0.0 of this tool, we also make available a model that has been trained on a set of 680 annotated CT scans obtained from multiple clinical sites.
The output of our lesion segmentation tool is a segmentation map in NIfTI format with integer values ranging from 1 to 4 representing:
1. Intraparenchymal haemorrhage (IPH);
2. Extra-axial haemorrhage (EAH);
3. Perilesional oedema;
4. Intraventricular haemorrhage (IVH).
A CSV file with the total lesion volume for each lesion class is also part of the output. If the user chooses to perform localisation,
this file will also include the volume of lesion per brain region, the volume of each brain region, and the total brain volume.
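The tool computes these volumes internally, but the idea is simple: count the voxels of each label and multiply by the voxel volume. A minimal sketch in plain Python, using a hypothetical flat label list and voxel spacing (a real segmentation would be read from the NIfTI output):

```python
from collections import Counter

# Hypothetical flat list of voxel labels from a segmentation map
# (0 = background, 1 = IPH, 2 = EAH, 3 = oedema, 4 = IVH).
labels = [0, 0, 1, 1, 1, 2, 4, 0, 3, 3]

# Hypothetical voxel spacing in mm (x, y, z); volume of one voxel in ml.
spacing_mm = (0.5, 0.5, 1.0)
voxel_ml = spacing_mm[0] * spacing_mm[1] * spacing_mm[2] / 1000.0

class_names = {1: "IPH", 2: "EAH", 3: "Oedema", 4: "IVH"}
counts = Counter(labels)
volumes_ml = {name: counts.get(k, 0) * voxel_ml for k, name in class_names.items()}
print(volumes_ml)  # per-class lesion volume in ml
```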
**As of the latest version, the tool resamples images internally and returns the output segmentation in the same space as the input image, so there is no need to preprocess the input.**
## Installation
### Linux and MacOS
In a fresh Python 3 virtual environment, install `blast-ct` via
`pip install blast-ct`
### Windows
If you are using miniconda, create a new conda environment and install PyTorch:
```
conda create -n blast-ct python=3
conda activate blast-ct
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
```
Then install `blast-ct` via
`pip install blast-ct`
# Usage with examples
Run the following in your bash console to obtain the example data used to illustrate the tool's usage below:
```
mkdir blast-ct-example
cd blast-ct-example
svn checkout "https://github.com/biomedia-mira/blast-ct/trunk/blast_ct/data/"
```
## Inference on one image
To run inference on one image using our pre-trained model:
`blast-ct --input <path-to-input-image> --output <path-to-output-image> --device <device-id>`
1. `--input`: path to the input image, which must be in NIfTI format (`.nii` or `.nii.gz`);
2. `--output`: path where prediction will be saved (with extension `.nii.gz`);
3. `--device <device-id>`: the device used for computation. Can be `'cpu'` (up to 1 hour per image) or an integer
indexing a CUDA-capable GPU on your machine. Defaults to CPU;
4. Pass `--ensemble True` to use an ensemble of 15 models, which improves segmentation quality but slows down inference
(recommended on GPU).
5. Pass `--do-localisation True` to localise the segmented lesions, i.e. calculate the volume of lesion per brain region.
6. (Only if `--do-localisation True`) `--num-reg-runs`: how many times to run registration between the native scan and the CT template. Running it more than once prevents initialisation errors, as only the best-performing run is kept.
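The `--device` convention above (the string `'cpu'` or a GPU index) can be illustrated with a small helper; `parse_device` is a hypothetical sketch, not part of the blast-ct API:

```python
def parse_device(value="cpu"):
    """Map a --device value to a torch-style device string.

    'cpu' (the default) selects the CPU; an integer (or integer string
    such as '0') selects the CUDA-capable GPU with that index.
    """
    if str(value).lower() == "cpu":
        return "cpu"
    return f"cuda:{int(value)}"

print(parse_device())     # -> cpu
print(parse_device("0"))  # -> cuda:0
```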
##### Working example:
Run the following in the `blast-ct-example` directory (might take up to an hour on CPU):
`blast-ct --input data/scans/scan_0/scan_0_image.nii.gz --output scan_0_prediction.nii.gz`
## Inference on multiple images
To run inference on multiple images using our ensemble of pre-trained models:
```
blast-ct-inference \
--job-dir <path-to-job-dir> \
--test-csv-path <path-to-test-csv> \
--device <device-id>
```
1. `--job-dir`: the path to the directory where the predictions and logs will be saved;
2. `--test-csv-path`: the path to a [csv file](#csv-files-for-inference-and-training) containing the paths of the
images to be processed;
3. `--device <device-id>`: the device used for computation. Can be `'cpu'` (up to 1 hour per image) or an integer
indexing a CUDA-capable GPU on your machine. Defaults to CPU;
4. Pass `--overwrite True` to write over an existing `job-dir`; set it to `False` to continue a previously started run.
5. Pass `--do-localisation True` to localise the segmented lesions, i.e. calculate the volume of lesion per brain region.
6. (Only if `--do-localisation True`) `--num-reg-runs`: how many times to run registration between the native scan and the CT template. Running it more than once prevents initialisation errors, as only the best-performing run is kept.
##### Working example:
Run the following in the `blast-ct-example` directory (GPU example):
`blast-ct-inference --job-dir my-inference-job --test-csv-path data/data.csv --device 0`
**NOTE:** If the run breaks before all images are processed, run again with `--overwrite False` to resume from where the previous run left off.
## Training models on your own data
To train your own model:
```
blast-ct-train \
--job-dir <path-to-job-dir> \
--config-file <path-to-config-file> \
--train-csv-path <path-to-train-csv> \
--valid-csv-path <path-to-valid-csv> \
--num-epochs <num-epochs> \
--device <gpu_id> \
--random-seeds <list-of-random-seeds>
```
1. `--job-dir`: the path to the directory where the predictions and logs will be saved;
2. `--config-file`: the path to a json config file (see `data/config.json` for example);
3. `--train-csv-path`: the path to a [csv file](#csv-files-for-inference-and-training) containing the paths of the
images, targets and sampling masks used to train the model;
4. `--valid-csv-path`: the path to a [csv file](#csv-files-for-inference-and-training) containing the paths of the
images used to keep track of the model's performance during training;
5. `--num-epochs`: the number of epochs for which to train the model (1200 was used with the example config);
6. `--device <device-id>`: the device used for computation (`'cpu'` or an integer indexing a GPU). A GPU is strongly recommended.
7. `--random-seeds`: a list of random seeds used for training;
pass more than one to train multiple models one after the other.
8. Pass `--overwrite True` to write over an existing `job-dir`; set it to `False` to continue a previously started run.
##### Working example:
Run the following in the `blast-ct-example` directory (GPU example, takes time):
```
blast-ct-train \
--job-dir my-training-job \
--config-file data/config.json \
--train-csv-path data/data.csv \
--valid-csv-path data/data.csv \
--num-epochs 10 \
--device 0 \
--random-seeds "1"
```
## Inference with your model
To run inference with your own models and config, use:
```
blast-ct-inference \
--job-dir <path-to-job-dir> \
--config-file <path-to-config-file> \
--test-csv-path <path-to-test-csv> \
--device <gpu_id> \
--saved-model-paths <list-of-paths-to-saved-models>
```
1. `--job-dir`: the path to the directory where the predictions and logs will be saved;
2. `--config-file`: the path to a json config file (see `data/config.json` for example);
3. `--test-csv-path`: the path to a [csv file](#csv-files-for-inference-and-training) containing the paths of the
images to be processed;
4. `--device <device-id>`: the device used for computation. Can be `'cpu'` (up to 1 hour per image) or an integer
indexing a CUDA-capable GPU on your machine. Defaults to CPU;
5. `--saved-model-paths`: a list of paths to pre-trained models;
6. Pass `--overwrite True` to write over an existing `job-dir`; set it to `False` to continue a previously started run.
7. Pass `--do-localisation True` to localise the segmented lesions, i.e. calculate the volume of lesion per brain region.
8. (Only if `--do-localisation True`) `--num-reg-runs`: how many times to run registration between the native scan and the CT template. Running it more than once prevents initialisation errors, as only the best-performing run is kept.
##### Working example:
Run the following in the `blast-ct-example` directory (GPU example):
```
blast-ct-inference \
--job-dir my-custom-inference-job \
--config-file data/config.json \
--test-csv-path data/data.csv \
--device 0 \
--saved-model-paths "data/saved_models/model_1.pt data/saved_models/model_3.pt data/saved_models/model_6.pt" \
--do-localisation True
```
## CSV files for inference and training
The tool takes its input from CSV files containing lists of images with unique ids.
Each row in the CSV represents a scan and must contain:
1. A column named `id`, which must be unique for each row (otherwise outputs will be overwritten);
2. A column named `image` containing the path to a NIfTI file;
3. (training only) A column named `target` containing the path to a NIfTI file with the corresponding training labels;
4. (training only; optional) A column named `sampling_mask` containing the path to a NIfTI file with the corresponding
sampling mask for training.
See `data/data.csv` for a working example with 10 rows/ids (even though in this example they all point to the same image).
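A CSV matching this schema can be generated with the Python standard library; the ids and paths below are illustrative, not files shipped with the example data:

```python
import csv

# Illustrative rows: each scan gets a unique id and an image path.
# 'target' is only needed for training; 'sampling_mask' is optional.
rows = [
    {"id": "scan_0", "image": "data/scans/scan_0/scan_0_image.nii.gz",
     "target": "data/scans/scan_0/scan_0_target.nii.gz"},
    {"id": "scan_1", "image": "data/scans/scan_1/scan_1_image.nii.gz",
     "target": "data/scans/scan_1/scan_1_target.nii.gz"},
]

with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "image", "target"])
    writer.writeheader()
    writer.writerows(rows)
```

The resulting `data.csv` can be passed to `--train-csv-path` or, without the `target` column, to `--test-csv-path`.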