# Welcome to
![deepflash2](https://raw.githubusercontent.com/matjesg/deepflash2/master/nbs/media/logo/deepflash2_logo_medium.png)
Official repository of deepflash2 - a deep-learning pipeline for segmentation of ambiguous microscopic images.
[![PyPI](https://img.shields.io/pypi/v/deepflash2?color=blue&label=pypi%20version)](https://pypi.org/project/deepflash2/#description)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/deepflash2)](https://pypistats.org/packages/deepflash2)
[![Conda (channel only)](https://img.shields.io/conda/vn/matjesg/deepflash2?color=seagreen&label=conda%20version)](https://anaconda.org/matjesg/deepflash2)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7653312.svg)](https://doi.org/10.5281/zenodo.7653312)
***
__The best of two worlds:__
Combining state-of-the-art deep learning with a barrier-free environment for life science researchers.
> Read the [paper](https://www.nature.com/articles/s41467-023-36960-9), watch the [tutorials](https://matjesg.github.io/deepflash2/tutorial.html), or read the [docs](https://matjesg.github.io/deepflash2/).
- **No coding skills required** (graphical user interface)
- **Ground truth estimation** from the annotations of multiple experts for model training and validation
- **Quality assurance and out-of-distribution detection** for reliable prediction on new data
- **Best-in-class performance** for semantic and instance segmentation
<img src="https://github.com/matjesg/deepflash2/blob/master/nbs/media/sample_images.png?raw=true" width="800px" style="max-width: 800pxpx">
<img style="float: left;padding: 0px 10px 0px 0px;" src="https://www.kaggle.com/static/images/medals/competitions/goldl@1x.png">
**Kaggle Gold Medal and Innovation Prize Winner:** The *deepflash2* Python API built the foundation for winning the [Innovation Award](https://hubmapconsortium.github.io/ccf/pages/kaggle.html) and a Kaggle Gold Medal in the [HuBMAP - Hacking the Kidney](https://www.kaggle.com/c/hubmap-kidney-segmentation) challenge.
Have a look at our [solution](https://www.kaggle.com/matjes/hubmap-deepflash2-judge-price).
## Quick Start and Demo
> Get started in less than a minute. Watch the <a href="https://matjesg.github.io/deepflash2/tutorial.html" target="_blank">tutorials</a> for help.
#### Demo on Hugging Face Spaces
Go to the [demo space](https://huggingface.co/spaces/matjesg/deepflash2) -- inference only (no training possible).
#### Demo usage with Google Colab
For a quick start, run *deepflash2* in Google Colaboratory (Google account required).
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/matjesg/deepflash2/blob/master/deepflash2_GUI.ipynb)
<video src="https://user-images.githubusercontent.com/13711052/139751414-acf737db-2d8a-4203-8a34-7a38e5326b5e.mov" controls width="100%"></video>
The GUI provides built-in support for our [sample data](https://github.com/matjesg/deepflash2/releases/tag/sample_data):
1. Start the GUI (in <a href="https://colab.research.google.com/github/matjesg/deepflash2/blob/master/deepflash2_GUI.ipynb" target="_blank">Colab</a>, or follow the installation instructions below)
2. Select the task (GT Estimation, Training, or Prediction)
3. Click the `Load Sample Data` button in the sidebar and continue to the next sidebar section.
For further instructions, watch the [tutorials](https://matjesg.github.io/deepflash2/tutorial.html).
We provide an overview of the tasks below:
| | Ground Truth (GT) Estimation | Training | Prediction |
|---|---|---|---|
| Main Task | STAPLE or Majority Voting | Ensemble training and validation | Semantic and instance segmentation |
| Sample Data | 5 masks from 5 experts each | 5 image/mask pairs | 5 images and 2 trained models |
| Expected Output | 5 GT Segmentation Masks | 5 models | 5 predicted segmentation masks (semantic and instance) and uncertainty maps|
| Estimated Time | ~ 1 min | ~ 150 min | ~ 4 min |
Times are estimated for Google Colab (with a free NVIDIA Tesla K80 GPU).
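Under the hood, ground truth estimation builds on SimpleITK (see the system requirements below). Here is a minimal sketch of what STAPLE and majority voting compute, using SimpleITK directly rather than the *deepflash2* API (file names are placeholders, not part of the sample data):
```python
import SimpleITK as sitk

# One binary mask (foreground = 1) per expert; file names are placeholders
masks = [sitk.ReadImage(f"expert_{i}.png", sitk.sitkUInt8) for i in range(1, 6)]

# STAPLE estimates a per-pixel foreground probability from the expert masks;
# thresholding at 0.5 yields the estimated ground truth mask
staple_gt = sitk.STAPLE(masks) > 0.5

# Majority voting assigns each pixel the label most experts agree on
majority_gt = sitk.LabelVoting(masks)
```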
## Paper and Experiments
We provide a complete guide to reproduce our experiments using the *deepflash2 Python API* [here](https://github.com/matjesg/deepflash2/tree/master/paper). The data is currently available on [Google Drive](https://drive.google.com/drive/folders/1r9AqP9qW9JThbMIvT0jhoA5mPxWEeIjs?usp=sharing) and [Zenodo](https://doi.org/10.5281/zenodo.7653312).
Our Nature Communications article is available [here](https://www.nature.com/articles/s41467-023-36960-9). Please cite
```
@article{Griebel2023,
doi = {10.1038/s41467-023-36960-9},
url = {https://doi.org/10.1038/s41467-023-36960-9},
year = {2023},
month = mar,
publisher = {Springer Science and Business Media {LLC}},
volume = {14},
number = {1},
author = {Matthias Griebel and Dennis Segebarth and Nikolai Stein and Nina Schukraft and Philip Tovote and Robert Blum and Christoph M. Flath},
title = {Deep learning-enabled segmentation of ambiguous bioimages with deepflash2},
journal = {Nature Communications}
}
```
## System requirements
> Works in the browser or on your local pc/server
*deepflash2* is designed to run on Windows, Linux, or macOS (x86-64), provided [PyTorch](https://pytorch.org/get-started/locally/) can be installed.
We generally recommend using Google Colab as it only requires a Google Account and a device with a web browser.
To run *deepflash2* locally, we recommend using a system with a GPU (e.g., 2 CPUs, 8 GB RAM, NVIDIA GPU with 8GB VRAM or better).
*deepflash2* requires Python 3.7 or later; the software dependencies are defined in the [settings.ini](https://github.com/matjesg/deepflash2/blob/master/settings.ini) file. Additionally, the ground truth estimation functionalities are based on SimpleITK>=2.0, and the instance segmentation capabilities are complemented by cellpose v0.6.6.dev13+g316927e.
*deepflash2* is tested on Google Colab (Ubuntu 18.04.5 LTS) and locally (Ubuntu 20.04 LTS, Windows 10, macOS 12.0.1).
## Installation Guide
> Typical install time is about 1-5 minutes, depending on your internet connection
The GUI of *deepflash2* runs as a web application inside a Jupyter Notebook, the de facto standard for computational notebooks in the scientific community. The GUI is built on top of the *deepflash2* Python API, which can be used independently (read the [docs](https://matjesg.github.io/deepflash2/)).
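As a rough sketch of how the Python API is used (the class and method names below follow the docs at the time of writing and should be treated as assumptions; check the [docs](https://matjesg.github.io/deepflash2/) for your version):
```python
from deepflash2.learner import EnsembleLearner

# Train an ensemble of models on image/mask pairs (directory names are placeholders)
el = EnsembleLearner(image_dir='images', mask_dir='masks')
el.fit_ensemble()

# Predict segmentation masks and uncertainty maps
# (method and attribute names are assumptions; see the API reference)
el.get_ensemble_results(el.files)
```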
### Google Colab
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/matjesg/deepflash2/blob/master/deepflash2_GUI.ipynb)
Open <a href="https://colab.research.google.com/github/matjesg/deepflash2/blob/master/deepflash2_GUI.ipynb" target="_blank">Colab</a> and execute the `Set up environment` cell or follow the `pip` instructions. Colab provides free access to graphics processing units (GPUs) for fast model training and prediction (Google account required).
### Other systems
We recommend installation into a clean Python 3.7, 3.8, or 3.9 environment (e.g., using [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html)).
#### [mamba](https://github.com/mamba-org/mamba)/[conda](https://docs.conda.io/en/latest/)
Installation with mamba (installation [instructions](https://github.com/mamba-org/mamba)) provides a fast and reliable installation process (you can replace `mamba` with `conda` and add the `--update-all` flag to do the installation with conda).
```bash
mamba install -c fastchan -c conda-forge -c matjesg deepflash2
```
#### [pip](https://pip.pypa.io/en/stable/)
If you want to use your GPU and install with pip, we recommend installing PyTorch first by following the [installation instructions](https://pytorch.org/get-started/locally/).
```bash
pip install -U deepflash2
```
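To verify the installation and GPU visibility, run a quick check in a Python shell (`deepflash2.__version__` is assumed to be set by the package build):
```python
import torch
import deepflash2

print(deepflash2.__version__)     # installed deepflash2 version
print(torch.cuda.is_available())  # True if PyTorch detects a usable GPU
```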
#### Using the GUI
If you want to use the GUI, make sure to download the GUI notebook, e.g., using `curl`
```bash
curl -o deepflash2_GUI.ipynb https://raw.githubusercontent.com/matjesg/deepflash2/master/deepflash2_GUI.ipynb
```
and start a Jupyter server.
```bash
jupyter notebook
```
Then, open `deepflash2_GUI.ipynb` within the notebook environment.
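Conceptually, the notebook's code cell boils down to something like the following hypothetical sketch (we assume a `GUI` entry point in `deepflash2.gui`; the notebook itself is the authoritative source):
```python
# Hypothetical sketch of what the GUI notebook runs; check the notebook for the actual import
from deepflash2.gui import GUI

GUI()  # renders the deepflash2 GUI as widgets inside the notebook
```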
### Docker
Docker images for __deepflash2__ are built on top of [the latest pytorch image](https://hub.docker.com/r/pytorch/pytorch/).
- CPU only
> `docker run -p 8888:8888 matjes/deepflash2 ./run_jupyter.sh`
- For training, we recommend running Docker with GPU support (you need to install [Nvidia-Docker](https://github.com/NVIDIA/nvidia-docker) to enable GPU compatibility with these containers)
> `docker run --gpus all --shm-size=256m -p 8888:8888 matjes/deepflash2 ./run_jupyter.sh`
All Docker containers are configured to start a Jupyter server. To add data, we recommend using [bind mounts](https://docs.docker.com/storage/bind-mounts/) with `/workspace` as target, as shown below. To start the GUI, open `deepflash2_GUI.ipynb` within the notebook environment.
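For example, to make a local data folder available inside the container (the host path is a placeholder):
> `docker run --gpus all --shm-size=256m -p 8888:8888 -v /path/to/data:/workspace/data matjes/deepflash2 ./run_jupyter.sh`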
For more information on how to run Docker, see [Docker orientation and setup](https://docs.docker.com/get-started/).
## Creating segmentation masks with Fiji/ImageJ
If you don't have labelled training data available, you can use this [instruction manual](https://github.com/matjesg/DeepFLaSH/raw/master/ImageJ/create_maps_howto.pdf) for creating segmentation maps.
The ImageJ macro is available [here](https://raw.githubusercontent.com/matjesg/DeepFLaSH/master/ImageJ/Macro_create_maps.ijm).