seqseg

Name: seqseg
Version: 1.0.0
Summary: A deep learning-based medical image blood vessel tracking and segmentation tool.
Home page: https://github.com/numisveinsson/SeqSeg
Author: Numi Sveinsson Cepero <numi@berkeley.com>
Upload time: 2025-11-05 18:22:53
Requires Python: >=3.9
License: Apache-2.0
Keywords: segmentation, deep learning, medical imaging, nnunet, medical image analysis, medical image segmentation, nnU-Net, blood vessel segmentation, vascular segmentation, vascular tracking, seqseg
![python-app workflow](https://github.com/numisveinsson/SeqSeg/actions/workflows/python-app.yml/badge.svg)
![test workflow](https://github.com/numisveinsson/SeqSeg/actions/workflows/test.yml/badge.svg)

# SeqSeg: Automatic Tracking and Segmentation of Blood Vessels in CT and MR Images

See the paper [here](https://rdcu.be/dU0wy) for detailed explanations and citation.

Below is an example showing the algorithm tracking and segmenting an abdominal aorta in a 3D MR image scan:

![](seqseg/assets/mr_model_tracing_fast_shorter.gif)

## Tutorial
[Here](https://github.com/numisveinsson/SeqSeg/blob/main/seqseg/tutorial/tutorial.md) is a tutorial on how to run the code, including installation instructions, downloading model weights, and running the segmentation pipeline on a medical image.

## How it works
SeqSeg is a method for automatic tracking and segmentation of blood vessels in medical images. A neural network segments the vasculature locally, while a tracking algorithm steps along the direction of the vessel and down any detected bifurcations.
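The tracking loop described above can be sketched as follows. This is a greatly simplified illustration, not SeqSeg's actual implementation: `segment_patch` is a hypothetical stand-in for the nnU-Net local segmentation step, assumed to return the next step direction (or `None` at a vessel end) together with any detected branch directions.

```python
from collections import deque


def track_vessel(segment_patch, seed_point, seed_direction,
                 max_steps=1000, max_branches=2):
    """Simplified sketch of sequential vessel tracking.

    `segment_patch(point, direction)` is a hypothetical interface: it
    returns (step_direction, branch_directions), with step_direction None
    when the vessel ends. The real method segments a local subvolume with
    nnU-Net and derives these directions from the segmentation.
    """
    queue = deque([(seed_point, seed_direction)])  # branches still to trace
    path = []
    branches_taken = 0
    steps = 0
    while queue and steps < max_steps:
        point, direction = queue.popleft()
        while steps < max_steps:
            path.append(point)
            step_dir, branch_dirs = segment_patch(point, direction)
            for b in branch_dirs:  # queue detected bifurcations
                if branches_taken < max_branches:
                    queue.append((point, b))
                    branches_taken += 1
            if step_dir is None:  # vessel ended
                break
            # take one step along the vessel direction
            point = tuple(p + d for p, d in zip(point, step_dir))
            direction = step_dir
            steps += 1
    return path
```

Queuing branches while bounding their number mirrors the `max_n_branches` and `max_n_steps` limits used in the run command further down.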

Here is the workflow of the algorithm:

![](seqseg/assets/seqseg.png)

where the neural network was trained on local subvolume patches of the image:

![](seqseg/assets/seqseg_training.png)

## Set Up
If you are familiar with Python, you can simply install SeqSeg using pip:
```bash
pip install seqseg
```
Check to see if the installation was successful by running:
```bash
seqseg --help
```

Example setup using conda:
```bash
conda create -n seqseg python=3.11
conda activate seqseg
pip install seqseg
```
Example setup using pip (first create a virtual environment, see [here](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/)):
```bash
python3 -m venv seqseg
source seqseg/bin/activate
pip install seqseg
```

SeqSeg relies on [nnU-Net](https://github.com/MIC-DKFZ/nnUNet) to segment the local medical image volumes. You will need model weights to run the algorithm: either use the available pretrained weights or train a model yourself. After training an nnU-Net model, the weights are saved in a `nnUNet_results` folder. This folder is required to run SeqSeg, and its path is passed as an argument when running the script.
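For reference, a sketch of the standard nnU-Net v2 results tree that the `nnUNet_results` path is expected to point at (the trainer/plans folder name depends on your training configuration, and the dataset name here is the tutorial one):

```
nnUNet_results/
└── Dataset005_SEQAORTANDFEMOMR/
    └── nnUNetTrainer__nnUNetPlans__3d_fullres/
        ├── plans.json
        ├── dataset.json
        └── fold_0/
            └── checkpoint_final.pth
```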

Main package dependencies:

Basic:
- Python 3.11

Machine Learning (Note: must be installed according to nnU-Net instructions):
- nnU-Net, nnunetv2=2.5.1
- PyTorch, torch=2.3.1

Image and Data Processing:
- SITK, simpleitk=2.2.1
- VTK, vtk=9.1.0
- PyYAML, pyyaml=6.0.1
- Matplotlib (optional)

and if using VMTK (not required):
- VMTK

Note: The code is tested with Python 3.11 and nnU-Net 2.5.1. If you are using a different version, please check the compatibility of the packages.

## Running

See [here](https://github.com/numisveinsson/SeqSeg/blob/main/seqseg/tutorial/tutorial.md) for a tutorial on how to run the code.

### Set up data directory
Create a directory structure for your data as follows:

1. Images: Directory containing the medical images to be segmented. Image extension can be `.nii.gz`, `.mha`, `.nrrd`, or any of [these](https://simpleitk.readthedocs.io/en/master/IO.html).
2. Seeds: A `seeds.json` file containing the seed points for initialization.
3. Centerlines (optional): Directory containing centerline files if available.
4. Truths (optional): Directory containing ground truth segmentations if available.
For example:
```
seqseg/tutorial/data/
    ├── images
    ├── seeds.json
    └── centerlines (if applicable)
```

SeqSeg requires a seed point for initialization. This can be provided in one of three ways:
- `seeds.json` file: located in the data directory (see the sample under data)
- centerline: if centerlines are given, we initialize using the first points of the centerline
- cardiac mesh: the aortic valve must be labeled as Region 8 and the left ventricle as Region 7
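The exact schema of `seeds.json` is defined by the sample shipped under `seqseg/tutorial/data/`; the field names below are purely illustrative, showing the kind of content such a file plausibly holds (a label, a seed location in physical coordinates, and an initial radius estimate):

```python
import json

# Hypothetical seeds.json content -- field names are illustrative only;
# consult the sample file under seqseg/tutorial/data/ for the real schema.
seeds = [
    {
        "name": "aorta_seed",          # label for the vessel to trace
        "point": [12.3, -4.5, 101.2],  # seed location in physical coordinates
        "radius": 1.1,                 # initial vessel radius estimate
    }
]

# Serialize exactly as one would write the file to the data directory.
text = json.dumps(seeds, indent=2)
```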

### Activate environment (e.g. conda)
```bash
conda activate seqseg
```
or if using virtual environment:
```bash
source seqseg/bin/activate
```

### Run
```bash
seqseg \
    -data_dir seqseg/tutorial/data/ \
    -nnunet_results_path nnUNet_results/ \
    -test_name 3d_fullres \
    -train_dataset Dataset005_SEQAORTANDFEMOMR \
    -fold 0 \
    -img_ext .mha \
    -config_name aorta_tutorial \
    -max_n_steps 5 \
    -max_n_branches 2 \
    -outdir output/ \
    -unit cm \
    -scale 1 \
    -start 0 \
    -stop -1
```

Note on units: typically the images used for training and testing have the same units (e.g. mm or cm). If the units differ, set the `scale` argument to convert between the two. Two examples where the units differ:
- If the nnUNet model was trained on mm and the testing data is in cm, then set `scale=10`.
- If the nnUNet model was trained on cm and the testing data is in mm, then set `scale=0.1`.
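The two rules above can be derived mechanically from the units. A small sketch (the helper name is illustrative, not part of the SeqSeg CLI):

```python
# Size of one unit in millimetres, for the two units discussed above.
_UNIT_MM = {"mm": 1.0, "cm": 10.0}


def unit_scale(train_unit: str, test_unit: str) -> float:
    """Factor converting testing-data coordinates into the model's unit.

    E.g. model trained on mm, testing data in cm: each cm is 10 mm,
    so the data must be scaled by 10.
    """
    return _UNIT_MM[test_unit] / _UNIT_MM[train_unit]
```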

### Details

`seqseg`: Main script to run.

Arguments:

- `data_dir`: Name of the folder containing the testing data (and `test.json` if applicable).
- `test_name`: Name of the nnUNet configuration to use. Default: `3d_fullres`. Other possible values include `2d`.
- `train_dataset`: Name of the dataset used to train the nnUNet model, e.g. `Dataset010_SEQCOROASOCACT`.
- `config_name`: Name of the config file to use. Default: `global.yml`.
- `fold`: Which fold of the nnUNet model to use. Default: `all`.
- `img_ext`: Image file extension, e.g. `.nii.gz`.
- `outdir`: Output directory where the results will be saved.
- `scale`: Scaling factor applied to the image data, needed when the units of the nnUNet model and the testing data differ. Example: if the model was trained on mm and the testing data is in cm, set `scale=10`. Default: `1`.
- `start`: Index in the list of testing samples at which to start. Default: `0`.
- `stop`: Index at which to stop; the default `-1` processes all samples until the end of the list.
- `max_n_steps`: Maximum number of steps the algorithm will take. Default: `1000`.
- `unit`: Unit of the image data. Default: `cm`.
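The `start`/`stop` pair plausibly selects a slice of the testing samples; a sketch of the assumed semantics (the helper is illustrative, not SeqSeg's code), where `stop=-1` is a sentinel for "to the end" rather than ordinary negative indexing:

```python
def select_samples(samples, start=0, stop=-1):
    """Select the testing samples to process, per the assumed
    semantics of the -start and -stop arguments."""
    if stop == -1:
        # sentinel: process everything from `start` to the end
        return samples[start:]
    return samples[start:stop]
```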

## Config file
`config/xx.yml`: Contains the configuration parameters. Defaults are provided but can be changed depending on the task.

We recommend duplicating the file and renaming it to avoid overwriting the default values.
The new file name must then be passed via the `config_name` argument when running the script.
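A duplicated-and-renamed config might look like the sketch below. The keys shown are purely illustrative; the actual parameter names live in `config/global.yml` in the repository.

```yaml
# config/aorta_tutorial.yml -- illustrative only; see config/global.yml
# for the real parameter names and their defaults.
unit: cm             # physical unit of the images (hypothetical key)
max_step_size: 0.5   # hypothetical tracking parameter
write_samples: false # hypothetical debug-output toggle
```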

## Citation
When using SeqSeg, please cite the following [paper](https://rdcu.be/dU0wy):
    
```
@Article{SveinssonCepero2024,
  author={Sveinsson Cepero, Numi and Shadden, Shawn C.},
  title={SeqSeg: Learning Local Segments for Automatic Vascular Model Construction},
  journal={Annals of Biomedical Engineering},
  year={2024},
  month={Sep},
  day={18},
  issn={1573-9686},
  doi={10.1007/s10439-024-03611-z},
  url={https://doi.org/10.1007/s10439-024-03611-z}
}
```

            
