# instanseg-torch

- **Version:** 0.0.6 (PyPI)
- **Summary:** Package for instanseg-torch PyPI
- **Author email:** Thibaut Goldsborough <thibaut.golds@gmail.com>
- **Homepage:** https://github.com/instanseg/instanseg
- **Issues:** https://github.com/instanseg/instanseg/issues
- **Requires Python:** >=3.9, <3.12
- **Uploaded:** 2024-11-15 17:34:18
- **License:** not specified
- **Requirements:** none recorded
<p align="center">
  <img src="https://github.com/ThibautGoldsborough/instanseg_thibaut/blob/main/assets/instanseg_logo.png?raw=True" alt="Instanseg Logo" width="25%">
</p>



## Overview

InstanSeg is a PyTorch-based cell and nucleus segmentation pipeline for fluorescence and brightfield microscopy images. This README provides instructions for setting up the environment, installing dependencies, and using the provided tools and models.

## Why should I use InstanSeg?

1. InstanSeg is freely available and open source.
2. It's faster than other cell segmentation methods… sometimes much faster.
3. It's capable of accurately segmenting both nuclei and whole cells.
4. InstanSeg can be compiled entirely to TorchScript, including postprocessing! This means it's not only easy to use in Python but also runs with LibTorch alone, which is what allows QuPath to run InstanSeg directly!
5. You can use InstanSeg on multiplexed images (images with more than three channels) from novel biomarker panels, without retraining or manual intervention.
6. We plan to release more InstanSeg models trained on public datasets. If there's a nucleus and/or cell segmentation dataset under a permissive open license (e.g. CC0 or CC-BY) that we missed, let us know, and we may be able to increase our InstanSeg model zoo.
 

## InstanSeg has its own QuPath extension!

InstanSeg is included in the [QuPath pre-release v0.6.0-rc2](https://github.com/qupath/qupath/releases/tag/v0.6.0-rc2), so you can start using it immediately. You can find the QuPath extension source code [in its GitHub repository](https://github.com/qupath/qupath-extension-instanseg).

## How to cite InstanSeg:

If you use InstanSeg for nucleus segmentation of brightfield histology images, please cite:

> Goldsborough, T. et al. (2024) ‘InstanSeg: an embedding-based instance segmentation algorithm optimized for accurate, efficient and portable cell segmentation’. _arXiv_. Available at: https://doi.org/10.48550/arXiv.2408.15954.

If you use InstanSeg for nucleus and/or cell segmentation in fluorescence images, please cite:

> Goldsborough, T. et al. (2024) ‘A novel channel invariant architecture for the segmentation of cells and nuclei in multiplexed images using InstanSeg’. _bioRxiv_, p. 2024.09.04.611150. Available at: https://doi.org/10.1101/2024.09.04.611150.



<p align="center">
  <img src="https://github.com/ThibautGoldsborough/instanseg_thibaut/blob/main/assets/instanseg_main_figure.png?raw=True" alt="Instanseg Main Figure" width="50%">
</p>

## Table of Contents

- [Installing using pip](#installing-using-pip)
  - [Local Installation](#local-installation)
  - [GPU Version (CUDA) for Windows and Linux](#gpu-version-cuda-for-windows-and-linux)
  - [Setup Repository](#setup-repository)
- [Usage](#usage)
  - [Download Datasets](#download-datasets)
  - [Training Models](#training-models)
  - [Testing Models](#testing-models)
  - [Using InstanSeg for inference](#using-instanseg-for-inference)


## Installing using pip

For a minimal installation:
```bash
pip install instanseg-torch
```

If you want all the requirements used for training (quote the extras so the brackets survive your shell):

```bash
pip install "instanseg-torch[full]"
```
You can get started immediately by calling the InstanSeg class:

```python
from instanseg import InstanSeg

# Load the pretrained brightfield nuclei model; verbosity=1 prints progress
instanseg_brightfield = InstanSeg("brightfield_nuclei", image_reader="tiffslide", verbosity=1)

labeled_output = instanseg_brightfield.eval(image="../instanseg/examples/HE_example.tif",
                                            save_output=True,
                                            save_overlay=True)
```
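As a quick sanity check, you can inspect the returned labels. This sketch assumes `labeled_output` is a PyTorch tensor of integer instance ids with 0 reserved for background, which is an inference from the API rather than something documented here:

```python
# Assumption: labeled_output is an integer label tensor with 0 = background
print(labeled_output.shape)
print(f"Detected {int(labeled_output.max().item())} instances")
```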

Alternatively, if you want more control over the intermediate steps:

```python
# Read the image and its pixel size from the file metadata
image_array, pixel_size = instanseg_brightfield.read_image("../instanseg/examples/HE_example.tif")

# Segment an image that is small enough to process in a single pass
labeled_output, image_tensor = instanseg_brightfield.eval_small_image(image_array, pixel_size)

# Render the segmentation over the normalized image
display = instanseg_brightfield.display(image_tensor, labeled_output)

from instanseg.utils.utils import show_images
show_images(image_tensor, display, colorbar=False, titles=["Normalized Image", "Image with segmentation"])
```
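The same pattern carries over to fluorescence and multiplexed inputs (see point 5 under "Why should I use InstanSeg?"). The sketch below assumes a channel-invariant fluorescence model named "fluorescence_nuclei_and_cells" is available for download and uses a placeholder image path; check the model zoo for the names actually published:

```python
from instanseg import InstanSeg

# "fluorescence_nuclei_and_cells" is an assumed model name; verify against the model zoo
instanseg_fluorescence = InstanSeg("fluorescence_nuclei_and_cells", verbosity=1)

# Channel-invariant models accept any number of channels without retraining
labeled_output = instanseg_fluorescence.eval(image="my_multiplexed_image.tif",  # placeholder path
                                             save_output=True,
                                             save_overlay=True)
```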

### Local Installation

To install InstanSeg locally, follow these steps:

1. Install either Anaconda, Mamba, or [micromamba](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html). We use micromamba for speed and simplicity, but you can replace "micromamba" with the distribution you are using.

2. In your terminal or Anaconda prompt on Windows, create a new environment and install dependencies using the provided `env.yml` file:
    ```bash
    micromamba create -n instanseg --file env.yml
    ```

3. Activate your environment:
    ```bash
    micromamba activate instanseg
    ```

### GPU Version (CUDA) for Windows and Linux

If you intend to use GPU acceleration and CUDA, follow these additional steps:

4. Uninstall existing PyTorch and reinstall with CUDA support:
    ```bash
    micromamba remove pytorch torchvision monai
    micromamba install pytorch==2.1.1 torchvision==0.16.1 monai=1.3.0 pytorch-cuda=12.1 -c conda-forge -c pytorch -c nvidia
    pip install cupy-cuda12x
    ```

5. Check if CUDA is available:
    ```bash
    python -c "import torch; print('CUDA is available') if torch.cuda.is_available() else print('CUDA is not available')"
    ```
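For a slightly more detailed report (the installed PyTorch build, the CUDA version it was compiled against, and the detected GPU), the standard `torch.cuda` utilities can be run as a short script:

```python
import torch

# torch.version.cuda is None for CPU-only builds
print(f"PyTorch {torch.__version__}, built for CUDA {torch.version.cuda}")

if torch.cuda.is_available():
    # Name of the first visible GPU
    print(f"CUDA is available: {torch.cuda.get_device_name(0)}")
else:
    print("CUDA is not available")
```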

The repository may work with older versions of CUDA; if so, replace "12.1" (and the "12x" in `cupy-cuda12x`) with the version you require.

### Setup Repository

6. Install the repository in editable mode:
    ```bash
    pip install -e .
    ```

## Usage

### Download Datasets

To download public datasets and example images, follow the instructions in **instanseg/notebooks/load_datasets.ipynb**.

To train InstanSeg on your own dataset, extend **instanseg/notebooks/load_datasets.ipynb** using one of the provided templates.

### Training Models

To train models using InstanSeg, use the **train.py** script in the **instanseg/scripts** folder.

For example, to train InstanSeg on the TNBC_2018 dataset over 250 epochs at a pixel resolution of 0.25 microns/pixel, run the following command:
```bash
cd instanseg/scripts
python train.py -data segmentation_dataset.pth -source "[TNBC_2018]" --num_epochs 250 --experiment_str my_first_instanseg --requested_pixel_size 0.25
```

To train a channel-invariant InstanSeg on the CPDMI_2023 dataset, predicting both nuclei and cells, run the following command:
```bash
cd instanseg/scripts
python train.py -data segmentation_dataset.pth -source "[CPDMI_2023]" --num_epochs 250 --experiment_str my_first_instanseg -target NC --channel_invariant True --requested_pixel_size 0.5
```

Each epoch should take approximately 1 to 3 minutes to complete (with MPS or CUDA support).

For more options and configurations, refer to the parser arguments in the train.py file.

### Testing Models

To test trained models and obtain F1 metrics, use the following commands (the first tunes hyperparameters on the validation set; the second evaluates on the test set with the best parameters):
```bash
python test.py --model_folder my_first_instanseg -test_set Validation --optimize_hyperparameters True
python test.py --model_folder my_first_instanseg -test_set Test --params best_params
```

### Using InstanSeg for inference

```bash
python inference.py --model_folder my_first_instanseg --image_path ../examples
```
Replace "../examples" with the path to your images. If InstanSeg cannot read the image pixel size from the image metadata, the user is required to provide a --pixel_size parameter. InstanSeg provides (limited) support for whole slide images (WSIs). For more options and configurations, refer to the parser arguments in the inference.py file.

            
