# exodeepfinder (PyPI metadata)

- **Name:** exodeepfinder
- **Version:** 0.3.3
- **Summary:** ExoDeepFinder is an original deep learning approach to localize macromolecules in cryo electron tomography images. The method is based on image segmentation using a 3D convolutional neural network.
- **Upload time:** 2024-08-12 13:31:39
- **Maintainer:** A. Masson
- **Author:** E. Moebel
- **Requires Python:** >=3.9
- **License:** GPL-3.0
# ExoDeepFinder

ExoDeepFinder is an exocytosis event detection tool.

This work is based on [DeepFinder](https://github.com/deep-finder/cryoet-deepfinder) which has been customized for TIRF microscopy.

## Requirements

The following software are required for GPU support:
- NVIDIA® GPU drivers,
  - >= 525.60.13 for Linux,
  - >= 528.33 for WSL on Windows,
- CUDA® Toolkit 12.3,
- cuDNN SDK 8.9.7,
- (Optional) TensorRT to improve latency and throughput for inference.

## Installation guide

[ExoDeepFinder binaries are available](https://github.com/deep-finder/tirfm-deepfinder/releases/tag/v0.2.3) for Windows, Linux and Mac, so there is no need to install anything (except the Tensorflow requirements described above for GPU support) if you just want to use the Graphical User Interface (GUI). The Linux release is big (over 4 GB) because it contains the libraries required for GPU acceleration, so it is split in two parts (`ExoDeepFinder_Linux-x86_64_part1.tar.gz` and `ExoDeepFinder_Linux-x86_64_part2.tar.gz`). To uncompress them, use the following command: `cat ExoDeepFinder_Linux-x86_64_part*.tar.gz | tar -xvzf -`.

> **_Note:_** ExoDeepFinder depends on Tensorflow, which is GPU-accelerated only on Linux. There is currently no official GPU support for macOS or native Windows, so the CPU will be used on those platforms. ExoDeepFinder still works there, just more slowly; training in particular can be very slow on CPU and is not well supported. On Windows, WSL2 can be used to run Tensorflow code with the GPU; see the [install instructions](https://www.tensorflow.org/install/pip?hl=fr#windows-wsl2) for more information.

### Python installation

Alternatively, to install ExoDeepFinder and use it from the command line, create and activate a virtual environment with Python 3.11 or later (see the [Virtual environments](#virtual-environments) section for more information), install the dependencies (on Linux only, and only if you wish to use the GUI, see below), and run `pip install exodeepfinder[GUI]` (omit `[GUI]` if you only want the command-line interface).

On Linux, the GUI requires [`wxPython` dependencies](https://github.com/wxWidgets/Phoenix/blob/master/README.rst#prerequisites) to be installed (you can just run `pip install exodeepfinder` if you don't want the GUI). 
The simplest way is to use conda (or micromamba, see the [Conda alternatives](#conda-alternatives) section): 
- create a new environment named exodeepfinder with Python 3.10 and Gooey (which installs wxPython): `conda create -n exodeepfinder python=3.10 gooey==1.0.8.1`
- activate it: `conda activate exodeepfinder`
- install exodeepfinder: `pip install exodeepfinder`

You can also install wxPython manually (`sudo apt install libgtk-3-dev`, etc.) or use one [precompiled wxPython version](https://wxpython.org/pages/downloads/index.html) (use `pip install -f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-16.04 wxPython` with your Ubuntu version number, or use `conda install wxpython` to install a compiled wxPython from conda). The rest can be installed with `pip install exodeepfinder`. 

Note that on Windows, the `python` command is often replaced by `py`, and `pip` by `py -m pip`; you might need to adapt the commands in this documentation depending on your system settings.

## Usage

Here are all ExoDeepFinder commands (described later):

```
convert_tiff_to_h5              # convert tiff folders to a single h5 file
segment                         # segment a movie
generate_annotation             # generate an annotation file from a segmentation by clustering it
generate_segmentation           # generate a segmentation from an annotation file
detect_spots                    # detect bright spots in movies
merge_detector_expert           # merge the expert annotations with the detector segmentations for training
structure_training_dataset      # structure dataset files for training
train                           # train a new model
exodeepfinder                   # combine all above commands
```

The main ExoDeepFinder GUI can execute each of those commands (listed in its Actions panel).

### Command-line usage

All commands (except `exodeepfinder`) must be prefixed with `edf_` when using the command-line interface.

For more information about an ExoDeepFinder command, use the `--help` option (run `edf_detect_spots --help` to know more about `edf_detect_spots`).

To open a Graphical User Interface (GUI) for a given command, run it without any argument. For example, `edf_segment` opens a GUI which can execute the `edf_segment` command with the arguments specified with the graphical interface.

`exodeepfinder` runs any of the other commands as a subcommand (for example, `exodeepfinder segment -m movie.h5` is equivalent to `edf_segment -m movie.h5`); when called without any argument, it opens a GUI covering all the other commands.

If you installed ExoDeepFinder as a developer (see the [Development](#development) section), all commands can either be called directly (`edf_segment -m movie.h5`) or with python and the proper path (`python deepfinder/commands/segment.py -m movie.h5` when in the project root directory).

### Exocytosis events detection

The detection of exocytosis events is formally the segmentation of events in 3D (2D + time) TIRF movies followed by the clustering of the resulting segmentation map.

Detecting exocytosis events in ExoDeepFinder involves executing the following commands:
  1. `convert_tiff_to_h5`  to convert tiff folders to a single h5 file,
  1. `segment` to generate segmentation maps from movies, where 2s will be exocytosis events and 1s will be bright spots,
  1. `generate_annotation` to generate an annotation file from a segmentation by clustering it.

#### 1. Convert movies to h5 format

ExoDeepFinder handles exocytosis movies made of tiff files, where each tiff file is a frame of the movie and its name ends with the frame number, as in the following structure:

```
exocytosis_data/
├── movie1/
│   ├── frame_1.tiff
│   ├── frame_2.tiff
│   └── ...
```

The frame extensions can be .tif, .tiff, .TIF or .TIFF.

There is no strict constraint on the file names, but each name must end with the frame number (the last number in the file name is taken as the frame number), and the files must be in tiff format (other formats such as .png may also work, since images are read with the `skimage.io.imread()` function of the scikit-image library). For example, `frame_1.tiff` could also be named `IMAGE32_1.TIF`. Similarly, there is no constraint on the movie names. That said, it is much simpler to work with file names containing no spaces or special characters. Lastly, make sure that each folder contains only the .tiff frames of your movie and no additional images (e.g. a mask of the cell).
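The frame-numbering rule above (the last number in the file name is taken as the frame index) can be sketched in Python. This is an illustrative helper, not part of the ExoDeepFinder API:

```python
import re

def frame_number(filename: str) -> int:
    """Return the last number in a file name, taken as the frame index."""
    numbers = re.findall(r"\d+", filename)
    if not numbers:
        raise ValueError(f"no frame number found in {filename!r}")
    return int(numbers[-1])

# Files with heterogeneous names still sort correctly by their trailing number.
frames = ["IMAGE32_10.TIF", "frame_2.tiff", "frame_1.tiff"]
ordered = sorted(frames, key=frame_number)
```

Note that a lexicographic sort would put `IMAGE32_10.TIF` in the wrong place; sorting by the extracted number avoids that pitfall.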

The movie folders (containing the frames in tiff format) can be converted into a single `.h5` file with the `convert_tiff_to_h5` command.
Most ExoDeepFinder commands take h5 files as input, so the first step is to convert the data to h5 format with the `convert_tiff_to_h5` action in the GUI, or with the following command:
`edf_convert_tiff_to_h5 --tiff path/to/movie/folder/ --output path/to/output/movie.h5`

You can also convert all your movie folders at once using the `--batch` option.
For example:

`edf_convert_tiff_to_h5 --batch path/to/movies/ --output path/to/outputs/ --make_subfolder`

where `path/to/movies/` contains movie folders (which in turn contain tiff files).
The `--make_subfolder` option moves all tiff files into a `tiff/` subfolder, which is useful in batch mode.
The `--batch` option processes multiple movie folders at once, and works the same way in all ExoDeepFinder commands.

The above command will turn the following file structure:

```
exocytosis_data/
├── movie1/
│   ├── frame_1.tiff
│   ├── frame_2.tiff
│   └── ...
├── movie2/
│   ├── frame_1.tiff
│   └── ...
└── ...
```

into this one:

```
exocytosis_data/
├── movie1/
│   ├── tiff/
│   │   ├── frame_1.tiff
│   │   ├── frame_2.tiff
│   │   └── ...
│   └── movie.h5
├── movie2/
│   ├── tiff/
│   │   ├── frame_1.tiff
│   │   └── ...
│   └── movie.h5
└── ...
```

#### 2. Segment movies

To generate segmentations, you can either use ExoDeepFinder or [`napari-exodeepfinder`](https://github.com/deep-finder/napari-exodeepfinder).

To segment a movie, use the `segment` action in the GUI, or the following command:
`edf_segment --movie path/to/movie.h5 --model_weights examples/analyze/in/net_weights_FINAL.h5 --patch_size 160 --visualization`

The `--patch_size` argument corresponds to the size of the input patch for the network. The movie is split into cubes of `--patch_size` voxels before being processed. `--patch_size` must be a multiple of 4. Bigger patch sizes are faster but take more space on your GPU.
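To get an intuition for the patch-size trade-off, here is a rough sketch of the cube-splitting arithmetic. The real tool may use overlapping patches, so this only illustrates the idea; `n_patches` is a hypothetical helper, not part of ExoDeepFinder:

```python
import math

def n_patches(shape, patch_size: int) -> int:
    """Approximate number of cubic patches needed to cover a (t, y, x) volume."""
    if patch_size % 4 != 0:
        raise ValueError("patch_size must be a multiple of 4")
    return math.prod(math.ceil(dim / patch_size) for dim in shape)

# A 1000-frame movie of 400 x 300 pixels, with the default patch size of 160:
count = n_patches((1000, 400, 300), 160)  # 7 * 3 * 2 = 42 patches
```

Fewer, larger patches mean fewer network passes, at the cost of more GPU memory per pass.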

To detect exocytosis events, you can either use the pretrained segmentation model (available in `examples/analyze/in/net_weights_FINAL.h5`), or annotate your exocytosis movies and train your own model (see the training section below).

You can omit the model weights path (`--model_weights`) if you use the release (downloaded from [here](https://github.com/deep-finder/tirfm-deepfinder/releases/)) or if you cloned the repository since the default example weights will be found automatically. Otherwise (for example if you installed with `pip install exodeepfinder`), the default weights can also be downloaded manually [here](https://github.com/deep-finder/tirfm-deepfinder/raw/master/examples/analyze/in/net_weights_FINAL.h5).

This will generate a segmentation named `path/to/movie_segmentation.h5` with the pretrained weights in `examples/analyze/in/net_weights_FINAL.h5` and patches of size 160. It will also generate visualization images.

This should take 10 to 15 minutes for a movie of 1000 frames of 400 x 300 pixels on a modern CPU (Mac M1), and only a few dozen seconds on an A100 GPU.

Use the `--visualization` argument to also generate visualization images and get a quick overview of the segmentation results.

See `edf_segment --help` for more information about the input arguments.

#### 3. Generate annotations

To cluster a segmentation file and create an annotation file from it, use the `generate_annotation` action in the GUI, or the following command:
`edf_generate_annotation --segmentation path/to/movie_segmentation.h5 --cluster_radius 5`

The clustering converts the segmentation map (here `movie_segmentation.h5`) into an event list. The algorithm groups and labels the voxels so that all voxels of the same event share the same label, and each event gets a different label. The cluster radius is the approximate size in voxels of the objects to cluster; 5 voxels works best for movies with a pixel size of 160 nm, for exocytosis events lasting 1 second and of size 300 nm.
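The grouping idea can be illustrated with a minimal 6-connected flood fill over the 3D (2D + time) volume. This is only a sketch of the labeling concept; the actual `edf_generate_annotation` clustering uses the cluster radius and is more sophisticated:

```python
from collections import deque

def label_events(mask):
    """Group nonzero voxels of a 3D (t, y, x) mask into connected components,
    giving all voxels of the same event the same label (1, 2, ...)."""
    T, Y, X = len(mask), len(mask[0]), len(mask[0][0])
    labels = [[[0] * X for _ in range(Y)] for _ in range(T)]
    current = 0
    for t in range(T):
        for y in range(Y):
            for x in range(X):
                if mask[t][y][x] and not labels[t][y][x]:
                    current += 1  # new event found: flood-fill its voxels
                    labels[t][y][x] = current
                    queue = deque([(t, y, x)])
                    while queue:
                        a, b, c = queue.popleft()
                        for da, db, dc in ((1,0,0), (-1,0,0), (0,1,0),
                                           (0,-1,0), (0,0,1), (0,0,-1)):
                            na, nb, nc = a + da, b + db, c + dc
                            if (0 <= na < T and 0 <= nb < Y and 0 <= nc < X
                                    and mask[na][nb][nc] and not labels[na][nb][nc]):
                                labels[na][nb][nc] = current
                                queue.append((na, nb, nc))
    return labels, current

# Two separate bright blobs in a tiny 2-frame movie get labels 1 and 2.
mask = [
    [[1, 1, 0, 0],
     [0, 0, 0, 0]],
    [[1, 0, 0, 1],
     [0, 0, 0, 1]],
]
labels, n_events = label_events(mask)
```

A blob that persists across frames (like the top-left one here) is counted as a single event, because time is just the third axis of the volume.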

ExoDeepFinder detects both bright spots (which could be confused with exocytosis events) and genuine exocytosis events. By default, the command ignores all bright spots (label 1 is replaced with 0) and relabels exocytosis events (label 2) as 1. Indeed, ExoDeepFinder is an exocytosis event detector, so its output is only composed of exocytosis events labelled with ones. Use the `--keep_labels_unchanged` option to skip this step and use the raw label map (segmentation) instead. This can be useful, for example, if you use a custom detector and want to check the corresponding annotations.
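The default relabeling described above amounts to a simple value mapping over the voxels. A minimal sketch, operating on a flattened list of voxel values for illustration:

```python
def remap_labels(segmentation):
    """Default relabeling: bright spots (1) -> background (0),
    exocytosis events (2) -> 1."""
    mapping = {0: 0, 1: 0, 2: 1}
    return [mapping[v] for v in segmentation]

# Flattened voxel values from a segmentation map:
relabeled = remap_labels([0, 1, 2, 2, 1, 0])  # -> [0, 0, 1, 1, 0, 0]
```

With `--keep_labels_unchanged`, this mapping step is skipped and the raw labels (0, 1, 2) are kept.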

#### Using napari-exodeepfinder

The [`napari-exodeepfinder`](https://github.com/deep-finder/napari-exodeepfinder) plugin can be used to compute predictions.
Open the movie you want to segment in napari (it must be in h5 format).
In the menu, choose `Plugins > Napari DeepFinder > Segmentation`  to open the segmentation tools.
Choose the image layer you want to segment.
Select the `examples/analyze/in/net_weights_FINAL.h5` net weights; or the path of the model weights you want to use for the segmentation.
Use 3 for the number of classes (0: background, 1: bright spots, 2: exocytosis events), and 160 for the patch size.
Choose an output image name (with the .h5 extension), then launch the segmentation.

### Training

Training requires considerable computing resources, so the use of a GPU is highly recommended. We therefore strongly suggest using Linux for training, although WSL2 on Windows should also work (see the [Installation guide](#installation-guide) section).

To train a model, your data should be organized in the following way:

```
exocytosis_data/
├── movie1/
│   ├── frame_1.tiff
│   ├── frame_2.tiff
│   └── ...
├── movie2/
│   ├── frame_1.tiff
│   └── ...
└── ...
```

#### 1. Convert movies to h5 format

For each movie, tiff files must be converted to a single `.h5` using the `convert_tiff_to_h5` action from the GUI, or the `edf_convert_tiff_to_h5` command, as explained in the [Exocytosis events detection section](#Exocytosis-events-detection):

`edf_convert_tiff_to_h5 --batch path/to/exocytosis_data/ --make_subfolder`

This will change the `exocytosis_data` structure into the following one:

```
exocytosis_data/
├── movie1/
│   ├── tiff/
│   │   ├── frame_1.tiff
│   │   ├── frame_2.tiff
│   │   └── ...
│   └── movie.h5
├── movie2/
│   ├── tiff/
│   │   ├── frame_1.tiff
│   │   └── ...
│   └── movie.h5
└── ...
```

#### 2. Detect bright spots

ExoDeepFinder can generate false positives by confusing bright spots with genuine exocytosis events. The strategy to reduce this type of false positive is to explicitly present these bright spots as counter-examples during training. Hence, training requires bright spots to be annotated. You can use any suitable method that accurately detects counter-example bright spots in your data, or use our spot detector [Atlas](https://gitlab.inria.fr/serpico/atlas). The Atlas installation instructions are detailed in its repository, but the simplest way of installing it is with conda: `conda install bioimageit::atlas`.

Once Atlas (or the detector of your choice) is installed, you can detect spots in each frame using the `detect_spots` action in the GUI, or the `edf_detect_spots` command:

`edf_detect_spots --detector_path path/to/atlas/ --batch path/to/exocytosis_data/`

where `path/to/atlas/` is the root path of atlas (containing the `build/` directory with the binaries inside if you followed the manual installation instructions).

This will generate `detector_segmentation.h5` files (the segmentations of spots) in the movie folders:

```
exocytosis_data/
├── movie1/
│   ├── tiff/
│   │   ├── frame_1.tiff
│   │   ├── frame_2.tiff
│   │   └── ...
│   ├── detector_segmentation.h5
│   └── movie.h5
├── movie2/
└── ...
```

There are two ways of using an alternative detector:

1) Call a custom detector command from the `edf_detect_spots` command. Make sure your detector generates segmentation maps with 1s where there are bright spots (no matter whether they are exocytosis events or not) and 0s elsewhere. You can specify how to call the detector with the `--detector_command` and/or `--detector_path` arguments. For example, `edf_detect_spots --batch path/to/exocytosis_data/ --detector_path path/to/custom_detector.py --detector_command 'python "{detector}" -i "{input}" -o "{output}"'` will call `custom_detector.py` for each movie in the dataset like so: `python path/to/custom_detector.py -i path/to/exocytosis_data/movieN/tiff/ -o path/to/exocytosis_data/movieN/detector_segmentation.h5`. The detector must handle all `.tiff` frames and generate a segmentation in `.h5` format.

You can make sure that the detector segmentations are correct by opening them in napari with the corresponding movie. Open both `.h5` files in napari, put the `detector_segmentation.h5` layer on top, then right-click on it and select "Convert to labels". You should see the detections in red on top of the movie.

2) Use the software of your choice (e.g. ImageJ) to create annotation files. An annotation file consists of a list of bright-spot coordinates (no matter whether they are exocytosis events or not). It can be a .csv or .xml file, and must follow the format described in the [3. Annotate exocytosis events](#3-annotate-exocytosis-events) section below (bright spots must have a `class_label` equal to 1).

Note that one can convert annotations (.xml or .csv files describing bright spots) to segmentation maps (.h5 files) with the `edf_generate_segmentation` command, and segmentation maps to annotations with the `edf_generate_annotation` command. This can be useful if you use your own detector which generates either annotations or segmentations.
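The `{detector}`, `{input}` and `{output}` placeholders in `--detector_command` are presumably substituted per movie, which can be sketched with plain string formatting (an illustrative reconstruction, not ExoDeepFinder's actual code):

```python
# Template as passed to --detector_command:
template = 'python "{detector}" -i "{input}" -o "{output}"'

# Hypothetical substitution for one movie of the batch:
command = template.format(
    detector="path/to/custom_detector.py",
    input="path/to/exocytosis_data/movie1/tiff/",
    output="path/to/exocytosis_data/movie1/detector_segmentation.h5",
)
```

The quotes around each placeholder keep the command valid when paths contain spaces.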

#### 3. Annotate exocytosis events

The training requires movies to be annotated with the localizations of exocytosis events and bright spots. The recommended way to annotate exocytosis events is to use the [`napari-exodeepfinder` plugin](https://github.com/deep-finder/napari-exodeepfinder) but it is also possible to use other software (e.g. ImageJ) as long as the output annotations respect the format described below.

Annotate the exocytosis events in the movies with the `napari-exodeepfinder` plugin:

- Follow the install instructions, and open napari.
- In the menu, choose `Plugins > Napari DeepFinder > Annotation`  to open the annotation tools.
- Open a movie (for example `exocytosis_data/movie1/movie.h5`).
- Create a new points layer, and name it `movie_1` (any name with the `_1` suffix, since we want to annotate with the class 1). 
- In the annotation panel, select the layer you just created in the "Points layer" select box (you can skip this step and use the "Add points" and "Delete selected point" buttons from the layer controls).
- You can use the Orthoslice view to easily navigate in the volume, by using the `Plugins > Napari DeepFinder > Orthoslice view` menu.
- Scroll through the movie until you find an exocytosis event.
- If you opened the Orthoslice view, you can click on an exocytosis event to put the red cursor at its location, then click the "Add point" button in the annotation panel to annotate the event.
- You can also use the "Add points" and "Delete selected point" buttons from the layer controls.
- When you have annotated all events, save your annotations to xml by choosing the `File > Save selected layer(s)...` menu, or by using ctrl+S (command+S on a Mac), **and choose the *Napari DeepFinder (\*.xml)* format**. Save the file beside the movie, and name it `expert_annotation.xml` (with the above example, this results in `exocytosis_data/movie1/expert_annotation.xml`).

Annotate all training and validation movies with this procedure; you should end up with the following folder structure:

```
exocytosis_data
├── movie1/
│   ├── tiff/
│   │   ├── frame_1.tiff
│   │   ├── frame_2.tiff
│   │   └── ...
│   ├── detector_segmentation.h5
│   ├── expert_annotation.xml
│   └── movie.h5
├── movie2/
└── ...
```

Make sure that the `expert_annotation.xml` files you just created have the following format:

```
<objlist>
  <object tomo_idx="0" class_label="1" x="71" y="152" z="470"/>
  <object tomo_idx="0" class_label="1" x="76" y="184" z="445"/>
  <object tomo_idx="0" class_label="1" x="141" y="150" z="400"/>
  <object tomo_idx="0" class_label="1" x="200" y="237" z="420"/>
  <object tomo_idx="0" class_label="1" x="95" y="229" z="438"/>
  ...
</objlist>
```
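A quick way to sanity-check such a file is to parse it with the Python standard library (an illustrative check, not part of ExoDeepFinder):

```python
import xml.etree.ElementTree as ET

xml_text = """<objlist>
  <object tomo_idx="0" class_label="1" x="71" y="152" z="470"/>
  <object tomo_idx="0" class_label="1" x="76" y="184" z="445"/>
</objlist>"""

# Use ET.parse("expert_annotation.xml").getroot() to read a real file instead.
root = ET.fromstring(xml_text)
events = [
    (int(o.get("x")), int(o.get("y")), int(o.get("z")))
    for o in root.iter("object")
    if o.get("class_label") == "1"
]
```

If `events` comes back empty or the parse fails, the annotation file was probably saved in the wrong format.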

If you used software other than `napari-exodeepfinder` (e.g. ImageJ) to annotate exocytosis events, make sure your output files follow the same structure. They can be `csv` files, but they must use the same column names, as in the following `example.csv`:

```
tomo_idx,class_label,x,y,z
0,1,133,257,518
0,1,169,230,519
0,1,184,237,534
0,1,146,260,546
```

The `class_label` must be 1, and `tomo_idx` must be 0.
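These two constraints are easy to verify with the standard `csv` module before training (an illustrative check, not part of ExoDeepFinder):

```python
import csv
import io

csv_text = """tomo_idx,class_label,x,y,z
0,1,133,257,518
0,1,169,230,519
"""

# Replace io.StringIO(csv_text) with open("example.csv") to check a real file.
rows = list(csv.DictReader(io.StringIO(csv_text)))
ok = all(r["class_label"] == "1" and r["tomo_idx"] == "0" for r in rows)
```

`DictReader` uses the header row for keys, so a file with renamed columns fails loudly with a `KeyError` rather than passing silently.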

#### 4. Convert expert annotations to expert segmentations

Convert your manual annotations (named expert annotations) into expert segmentations so that they can be merged with the detected bright spots and used for the training.

Use the `generate_segmentation` action in the GUI, or the following command to convert the annotations to segmentations:

`edf_generate_segmentation --batch path/to/exocytosis_data/`

You will end up with the following structure:

```
exocytosis_data/
├── movie1/
│   ├── tiff/
│   │   ├── frame_1.tiff
│   │   ├── frame_2.tiff
│   │   └── ...
│   ├── detector_segmentation.h5
│   ├── expert_annotation.xml
│   ├── expert_segmentation.h5
│   └── movie.h5
├── movie2/
└── ...
```

Note that the expert annotation can be a `.csv` as long as it respects the correct labeling.

Again, you can check on napari that everything went right by opening all images and checking that `expert_segmentation.h5` corresponds to `expert_annotation.xml` and the movie.

#### 5. Merge detector and expert data

Then, merge detector detections with expert annotations with the `merge_detector_expert` action in the GUI, or the `edf_merge_detector_expert` command:

`edf_merge_detector_expert --batch path/to/exocytosis_data/`

This will create two new files: `merged_annotation.xml` (the merged annotations) and `merged_segmentation.h5` (the merged segmentation). The exocytosis events are first removed from the detector segmentation (`detector_segmentation.h5`), then the remaining events (from the detector and the expert) are transferred to the merged segmentation (`merged_segmentation.h5`), with class 2 for exocytosis events and class 1 for other events. The maximum number of other events in the annotation is 9800: if there are more than 9800 other events, 9800 are picked randomly and the others are discarded.
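The class assignment and the 9800-event cap described above can be sketched on coordinate lists. This is an illustrative reconstruction of the stated rules only; the actual command works on segmentation maps and also removes detector spots that overlap expert events, which this sketch omits:

```python
import random

MAX_OTHER_EVENTS = 9800  # cap on non-exocytosis events, per the documentation

def merge_events(detector_spots, expert_events, seed=0):
    """Expert exocytosis events get class 2; detector bright spots get
    class 1, randomly subsampled if there are more than MAX_OTHER_EVENTS."""
    others = list(detector_spots)
    if len(others) > MAX_OTHER_EVENTS:
        others = random.Random(seed).sample(others, MAX_OTHER_EVENTS)
    return [(xyz, 2) for xyz in expert_events] + [(xyz, 1) for xyz in others]

merged = merge_events(
    detector_spots=[(10, 20, 3), (40, 50, 7)],
    expert_events=[(12, 22, 3)],
)
```

The cap keeps the counter-example class from overwhelming the training set when the detector fires on thousands of spots.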

The `exocytosis_data/` folder will then follow this structure:

```
exocytosis_data/
├── movie1/
│   ├── tiff/
│   │   ├── frame_1.tiff
│   │   ├── frame_2.tiff
│   │   └── ...
│   ├── detector_segmentation.h5
│   ├── expert_annotation.xml
│   ├── expert_segmentation.h5
│   ├── merged_annotation.xml
│   ├── merged_segmentation.h5
│   └── movie.h5
├── movie2/
└── ...
```

Again, make sure everything looks right in napari.

#### 6. Organize training files

Finally, the training data should be organized in the following way:

```
dataset/
├── train/
│   ├── movie1.h5
│   ├── movie1_objl.xml
│   ├── movie1_target.h5
│   ├── movie2.h5
│   ├── movie2_objl.xml
│   ├── movie2_target.h5
...
└── valid/
    ├── movie3.h5
    ├── movie3_objl.xml
    ├── movie3_target.h5
...
```

This structure can be obtained with the `structure_training_dataset` action in the GUI, or by using the `edf_structure_training_dataset` command:

`edf_structure_training_dataset --input path/to/exocytosis_data/ --output path/to/dataset/`

This will organize the input folder (structured as in the previous step) into the final structure above, putting 70% of the movies in the `train/` folder and 30% in the `valid/` folder.
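The 70/30 split amounts to shuffling the movie list and cutting it in two. A minimal sketch of that logic (illustrative; the actual command also moves and renames the files):

```python
import random

def split_movies(movie_names, train_ratio=0.7, seed=0):
    """Shuffle movies and split them into train/valid sets (70/30 by default)."""
    names = list(movie_names)
    random.Random(seed).shuffle(names)
    n_train = round(len(names) * train_ratio)
    return names[:n_train], names[n_train:]

train, valid = split_movies([f"movie{i}" for i in range(10)])
```

Shuffling before splitting matters: it prevents acquisition order (e.g. all movies from one session) from leaking into the train/valid separation.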

Make sure the output folder is correct, and that you can open its content in napari.

#### 7. Train your custom model

Finally, launch the training with `train` action in the GUI, or the command `edf_train --dataset path/to/dataset/ --output path/to/model/`.

#### Summary

Here are all the steps you should execute to train a new model:

1. Convert tiff frames to h5 file: `edf_convert_tiff_to_h5 --batch path/to/exocytosis_data/ --make_subfolder`
1. Use [`napari-exodeepfinder`](https://github.com/deep-finder/napari-exodeepfinder) to annotate exocytosis events in the movies
1. Detect all spots: `edf_detect_spots --detector_path path/to/atlas/ --batch path/to/exocytosis_data/`
1. Generate expert segmentations: `edf_generate_segmentation --batch path/to/exocytosis_data/`
1. Merge expert and detector segmentation: `edf_merge_detector_expert --batch path/to/exocytosis_data/`
1. Structure the files: `edf_structure_training_dataset --input path/to/exocytosis_data/ --output path/to/dataset/`
1. Train the model: `edf_train --dataset path/to/dataset/ --output path/to/model/`

## Virtual environments & package managers

There are two major ways of creating virtual environments in Python: venv and conda; and two major ways of installing packages: pip and conda.

### Virtual environment: venv & conda

The simplest way of creating a virtual environment in Python is to use [venv](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#create-and-use-virtual-environments). Make sure your Python version is greater than or equal to 3.10, and simply run `python -m venv ExoDeepFinder/` (`py -m venv ExoDeepFinder/` on Windows) to create your environment (replace `ExoDeepFinder` with the name you want for your environment). Then run `source ExoDeepFinder/bin/activate` to activate it (`ExoDeepFinder\Scripts\activate` on Windows).

Alternatively, you can use [Conda](https://conda.io/projects/conda/en/latest/index.html) (or a nice minimalist alternative like [Micromamba](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html), see below) to create a Python 3.10 environment, even if your system Python version is different.

Once conda is installed, run `conda create -n ExoDeepFinder python=3.10` to create the environment with python 3.10, and `conda activate ExoDeepFinder` to activate it.

#### Conda alternatives

The simplest way to install and use Conda is via [Micromamba](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html), which is a minimalist drop-in replacement. Once you have installed it, just use `micromamba` instead of `conda` for all your conda commands (some unusual commands might not be implemented in micromamba, but it is sufficient for most use cases).

For example, run `micromamba create -n ExoDeepFinder python=3.10` to create the environment with python 3.10, and `micromamba activate ExoDeepFinder` to activate it.

### Package managers: pip & conda

The [Numpy documentation](https://numpy.org/install/#pip--conda) explains the main differences between pip and conda:

> The two main tools that install Python packages are `pip` and `conda`. Their functionality partially overlaps (e.g. both can install `numpy`), however, they can also work together. We’ll discuss the major differences between pip and conda here - this is important to understand if you want to manage packages effectively.

> The first difference is that conda is cross-language and it can install Python, while pip is installed for a particular Python on your system and installs other packages to that same Python install only. This also means conda can install non-Python libraries and tools you may need (e.g. compilers, CUDA, HDF5), while pip can’t.

> The second difference is that pip installs from the Python Packaging Index (PyPI), while conda installs from its own channels (typically “defaults” or “conda-forge”). PyPI is the largest collection of packages by far, however, all popular packages are available for conda as well.

> The third difference is that conda is an integrated solution for managing packages, dependencies and environments, while with pip you may need another tool (there are many!) for dealing with environments or complex dependencies.

## Development

To install ExoDeepFinder for development, clone the repository (`git clone git@github.com:deep-finder/tirfm-deepfinder.git`), create and activate a virtual environment (see section above), and install it with `pip install -e ./tirfm-deepfinder/[GUI]`.

To generate the release binaries, install PyInstaller with `pip install pyinstaller` in your virtual environment; and package ExoDeepFinder with `pyinstaller exodeepfinder.spec`. You must run this command on the destination platform (run on Windows for a Windows release, on Mac for a Mac release, and Linux for a Linux release).

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "exodeepfinder",
    "maintainer": "A. Masson",
    "docs_url": null,
    "requires_python": ">=3.9",
    "maintainer_email": "arthur.masson@inria.fr",
    "keywords": null,
    "author": "E. Moebel",
    "author_email": "emmanuel.moebel@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/56/4c/97700c4e761ad35f033aa2f346b68abe34d2cdcb939434c247f3116831be/exodeepfinder-0.3.3.tar.gz",
    "platform": null,
    "description": "# ExoDeepFinder\n\nExoDeepFinder is an exocytosis event detection tool.\n\nThis work is based on [DeepFinder](https://github.com/deep-finder/cryoet-deepfinder) which has been customized for TIRF microscopy.\n\n## Requirements\n\nThe following software are required for GPU support:\n- NVIDIA\u00ae GPU drivers,\n  - >= 525.60.13 for Linux,\n  - >= 528.33 for WSL on Windows,\n- CUDA\u00ae Toolkit 12.3,\n- cuDNN SDK 8.9.7,\n- (Optional) TensorRT to improve latency and throughput for inference.\n\n## Installation guide\n\n[ExoDeepFinder binaries are available](https://github.com/deep-finder/tirfm-deepfinder/releases/tag/v0.2.3) for Windows, Linux and Mac, so there is no need to install anything (except the Tensorflow requirements described above for GPU support) if you just want to use the Graphical User Interface (GUI). The Linux release is big (over 4Gb) because it contains the libraries required for the GPU acceleration. Thus they are split in two parts (`ExoDeepFinder_Linux-x86_64_part1.tar.gz` and `ExoDeepFinder_Linux-x86_64_part2.tar.gz`). To uncompress them, use the following command: `tarcat ExoDeepFinder_Linux-x86_64_part*.tar.gz  | tar -xvzf -`.\n\n> **_Note:_** ExoDeepFinder depends on Tensorflow which is only GPU-accelerated on Linux. There is currently no official GPU support for MacOS and native Windows, so the CPU will be used on those platform, but you can still use it (it will just be slower, yet the training might be very slow and is not well supported). 
On Windows, WSL2 can be used to run tensorflow code with GPU; see the [install instructions](https://www.tensorflow.org/install/pip?hl=fr#windows-wsl2) for more information.\n\n### Python installation\n\nAlternatively, to install ExoDeepFinder and use it with command lines, create and activate a virtual environment with python 3.11 or later (see the [Virtual environments](#virtual-environments) section for more information), install dependencies (on Linux only, and only if you wish to use the GUI, see bellow), and run `pip install exodeepfinder[GUI]` (you can also omit `[GUI]` if you only want to use the command line).\n\nOn Linux, the GUI requires [`wxPython` dependencies](https://github.com/wxWidgets/Phoenix/blob/master/README.rst#prerequisites) to be installed (you can just run `pip install exodeepfinder` if you don't want the GUI). \nThe simplest way is to use conda (or micromamba, see the [Conda alternatives](#conda-alternatives) section): \n- create a new environment named exodeepfinder with Python 3.10 and Gooey (which installs wxPython): `conda create -n exodeepfinder python=3.10 gooey==1.0.8.1`\n- activate it: `conda activate exodeepfinder`\n- install exodeepfinder: `pip install exodeepfinder`\n\nYou can also install wxPython manually (`sudo apt install libgtk-3-dev`, etc.) or use one [precompiled wxPython version](https://wxpython.org/pages/downloads/index.html) (use `pip install -f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-16.04 wxPython` with your Ubuntu version number, or use `conda install wxpython` to install a compiled wxPython from conda). The rest can be installed with `pip install exodeepfinder`. 
\n\nNote that on Windows, the `python` command is often replaced by `py` and `pip` by `py -m pip`; so you might need to adapt the commands in this documentation depending on your system settings.\n\n## Usage\n\nHere are all ExoDeepFinder commands (described later):\n\n```\nconvert_tiff_to_h5              # convert tiff folders to a single h5 file\nsegment                         # segment a movie\ngenerate_annotation             # generate an annotation file from a segmentation by clustering it\ngenerate_segmentation           # generate a segmentation from an annotation file\ndetect_spots                    # detect bright spots in movies\nmerge_detector_expert           # merge the expert annotations with the detector segmentations for training\nstructure_training_dataset      # structure dataset files for training\ntrain                           # train a new model\nexodeepfinder                   # combine all above commands\n```\n\nThe main ExoDeepFinder GUI can execute each of these commands (listed in the Actions panel).\n\n### Command-line usage\n\nAll commands (except `exodeepfinder`) must be prefixed with `edf_` when using the command-line interface.\n\nFor more information about an ExoDeepFinder command, use the `--help` option (run `edf_detect_spots --help` to learn more about `edf_detect_spots`).\n\nTo open a Graphical User Interface (GUI) for a given command, run it without any argument. 
For example, `edf_segment` opens a GUI which can execute the `edf_segment` command with the arguments specified in the graphical interface.\n\n`exodeepfinder` runs any of the other commands as a subcommand (for example `exodeepfinder segment -m movie.h5` is equivalent to `edf_segment -m movie.h5`); when called without any argument, it opens a GUI giving access to all other commands.\n\nIf you installed ExoDeepFinder as a developer (see the [Development](#development) section), all commands can either be called directly (`edf_segment -m movie.h5`) or with Python and the proper path (`python deepfinder/commands/segment.py -m movie.h5` when in the project root directory).\n\n### Exocytosis events detection\n\nThe detection of exocytosis events is formally the segmentation of events in 3D (2D + time) TIRF movies followed by the clustering of the resulting segmentation map.\n\nDetecting exocytosis events in ExoDeepFinder involves executing the following commands:\n  1. `convert_tiff_to_h5` to convert tiff folders to a single h5 file,\n  1. `segment` to generate segmentation maps from movies, where 2s mark exocytosis events and 1s mark bright spots,\n  1. `generate_annotation` to generate an annotation file from a segmentation by clustering it.\n\n#### 1. 
Convert movies to h5 format\n\nExoDeepFinder handles exocytosis movies made from tiff files, where each tiff file is a frame of the movie and their names end with the frame number, as in the following structure:\n\n```\nexocytosis_data/\n\u251c\u2500\u2500 movie1/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 frame_2.tiff\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 ...\n```\n\nThe frame extensions can be .tif, .tiff, .TIF or .TIFF.\n\nThere is no constraint on the file names, but they must contain the frame number (the last number in the file name must be the frame number), and be in the tiff format (it could work with other formats like .png since images are read with the `skimage.io.imread()` function of the scikit-image library). For example `frame_1.tiff` could also be named `IMAGE32_1.TIF`. Similarly, there is no constraint on the movie names. That said, it is much simpler to work with simple file names with no spaces or special characters. Lastly, make sure that folders contain only the .tiff frames of your movie and no additional images (e.g. a mask of the cell, etc.).\n\nThe movie folders (containing the frames in tiff format) can be converted into a single `.h5` file with the `convert_tiff_to_h5` command.\nMost ExoDeepFinder commands take h5 files as input, so the first step is to convert the data to h5 format with the `convert_tiff_to_h5` action in the GUI, or with the following command:\n`edf_convert_tiff_to_h5 --tiff path/to/movie/folder/ --output path/to/output/movie.h5`\n\nYou can also convert all your movie folders at once using the `--batch` option. 
\nFor example:\n\n`edf_convert_tiff_to_h5 --batch path/to/movies/ --output path/to/outputs/ --make_subfolder`\n\nwhere `path/to/movies/` contains movie folders (which in turn contain tiff files).\nThe `--make_subfolder` option puts all tiff files in a `tiff/` subfolder, which is useful in batch mode.\nThe `--batch` option processes multiple movie folders at once and works the same way in all ExoDeepFinder commands.\n\nThe above command will turn the following file structure:\n\n```\nexocytosis_data/\n\u251c\u2500\u2500 movie1/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 frame_2.tiff\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 movie2/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 ...\n\u2514\u2500\u2500 ...\n```\n\ninto this one:\n\n```\nexocytosis_data/\n\u251c\u2500\u2500 movie1/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 tiff/\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_2.tiff\n\u2502\u00a0\u00a0 \u2502   \u2514\u2500\u2500 ...\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 movie.h5\n\u251c\u2500\u2500 movie2/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 tiff/\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u2502   \u2514\u2500\u2500 ...\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 movie.h5\n\u2514\u2500\u2500 ...\n```\n\n#### 2. Segment movies\n\nTo generate segmentations, you can either use ExoDeepFinder or [`napari-exodeepfinder`](https://github.com/deep-finder/napari-exodeepfinder).\n\nTo segment a movie, use the `segment` action in the GUI, or the following command:\n`edf_segment --movie path/to/movie.h5 --model_weights examples/analyze/in/net_weights_FINAL.h5 --patch_size 160 --visualization`\n\nThe `--patch_size` argument corresponds to the size of the input patch for the network. The movie is split into cubes of `--patch_size` voxels before being processed. 
`--patch_size` must be a multiple of 4. Bigger patch sizes are faster but take more memory on your GPU.\n\nTo detect exocytosis events, you can either use the pretrained segmentation model (available in `examples/analyze/in/net_weights_FINAL.h5`), or you can annotate your exocytosis movies and train your own model (see the training section below).\n\nYou can omit the model weights path (`--model_weights`) if you use the release (downloaded from [here](https://github.com/deep-finder/tirfm-deepfinder/releases/)) or if you cloned the repository, since the default example weights will be found automatically. Otherwise (for example if you installed with `pip install exodeepfinder`), the default weights can also be downloaded manually [here](https://github.com/deep-finder/tirfm-deepfinder/raw/master/examples/analyze/in/net_weights_FINAL.h5).\n\nThis will generate a segmentation named `path/to/movie_segmentation.h5` with the pretrained weights in `examples/analyze/in/net_weights_FINAL.h5` and patches of size 160. It will also generate visualization images.\n\nThis should take 10 to 15 minutes for a movie of 1000 frames of size 400 x 300 pixels on a modern CPU (Mac M1) and only a few dozen seconds on an A100 GPU.\n\nUse the `--visualization` argument to also generate visualization images and get a quick overview of the segmentation results.\n\nSee `edf_segment --help` for more information about the input arguments.\n\n#### 3. Generate annotations\n\nTo cluster a segmentation file and create an annotation file from it, use the `generate_annotation` action in the GUI, or the following command:\n`edf_generate_annotation --segmentation path/to/movie_segmentation.h5 --cluster_radius 5`\n\nThe clustering will convert the segmentation map (here `movie_segmentation.h5`) into an event list. The algorithm groups and labels the voxels so that all voxels of the same event share the same label, and each event gets a different label. 
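This grouping step is conceptually close to 3D connected-component labelling; a toy illustration of the idea (not ExoDeepFinder's actual implementation) using `scipy.ndimage`:

```python
import numpy as np
from scipy import ndimage

# Toy 3D (2D + time) segmentation map: two disjoint blobs of event voxels.
seg = np.zeros((8, 8, 8), dtype=np.uint8)
seg[0:2, 0:2, 0:2] = 1  # event A
seg[5:7, 5:7, 5:7] = 1  # event B

# Group connected voxels: all voxels of one event share one label,
# and each event gets a different label (here 1 and 2).
labels, n_events = ndimage.label(seg)
print(n_events)  # -> 2

# An "event list" can then be derived, e.g. one center per label.
centers = ndimage.center_of_mass(seg, labels, range(1, n_events + 1))
print(centers)  # -> [(0.5, 0.5, 0.5), (5.5, 5.5, 5.5)]
```

The real command additionally uses the cluster radius described next to decide which nearby voxels belong to the same event.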
The cluster radius is the approximate size in voxels of the objects to cluster.\nA radius of 5 voxels works best for movies with a pixel size of 160 nm, for exocytosis events lasting 1 second and measuring about 300 nm.\n\nExoDeepFinder detects both bright spots (which could be confused with exocytosis events) and genuine exocytosis events. By default, the command will ignore all bright spots (replace label \"1\" with 0) and will replace exocytosis events (label \"2\") with ones. Indeed, ExoDeepFinder is an exocytosis event detector, so its output is only composed of exocytosis events labelled with ones. Use the `--keep_labels_unchanged` option to skip this step and use the raw label map (segmentation) instead. This can be useful, for example, if you use a custom detector and want to check the corresponding annotations.\n\n#### Using napari-exodeepfinder\n\nThe [`napari-exodeepfinder`](https://github.com/deep-finder/napari-exodeepfinder) plugin can be used to compute predictions.\nOpen the movie you want to segment in napari (it must be in h5 format).\nIn the menu, choose `Plugins > Napari DeepFinder > Segmentation` to open the segmentation tools.\nChoose the image layer you want to segment.\nSelect the `examples/analyze/in/net_weights_FINAL.h5` net weights, or the path of the model weights you want to use for the segmentation.\nUse 3 for the number of classes (0: background, 1: bright spots, 2: exocytosis events), and 160 for the patch size.\nChoose an output image name (with the .h5 extension), then launch the segmentation.\n\n### Training\n\nTraining requires considerable computing resources, so the use of a GPU is highly recommended. 
We therefore strongly suggest using Linux for training, although WSL2 on Windows should also work (see the \"Installation guide\" section).\n\nTo train a model, your data should be organized in the following way:\n\n```\nexocytosis_data/\n\u251c\u2500\u2500 movie1/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 frame_2.tiff\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 movie2/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 ...\n\u2514\u2500\u2500 ...\n```\n\n#### 1. Convert movies to h5 format\n\nFor each movie, tiff files must be converted to a single `.h5` file using the `convert_tiff_to_h5` action from the GUI, or the `edf_convert_tiff_to_h5` command, as explained in the [Exocytosis events detection](#exocytosis-events-detection) section:\n\n`edf_convert_tiff_to_h5 --batch path/to/exocytosis_data/ --make_subfolder`\n\nThis will change the `exocytosis_data` structure into the following one:\n\n```\nexocytosis_data/\n\u251c\u2500\u2500 movie1/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 tiff/\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_2.tiff\n\u2502\u00a0\u00a0 \u2502   \u2514\u2500\u2500 ...\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 movie.h5\n\u251c\u2500\u2500 movie2/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 tiff/\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u2502   \u2514\u2500\u2500 ...\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 movie.h5\n\u2514\u2500\u2500 ...\n```\n\n#### 2. Detect bright spots\n\nExoDeepFinder can generate false positives by confusing bright spots with genuine exocytosis events. The strategy to reduce this type of false positive is to explicitly present these bright spots as counter-examples during training. Hence, the training requires bright spots to be annotated. 
You can use any suitable method that accurately detects counter-example bright spots in your data, or use our spot detector [Atlas](https://gitlab.inria.fr/serpico/atlas). The Atlas installation instructions are detailed in the repository, but the simplest way to install it is with conda: `conda install bioimageit::atlas`.\n\nOnce Atlas (or the detector of your choice) is installed, you can detect spots in each frame using the `detect_spots` action in the GUI, or the `edf_detect_spots` command:\n\n`edf_detect_spots --detector_path path/to/atlas/ --batch path/to/exocytosis_data/`\n\nwhere `path/to/atlas/` is the root path of Atlas (containing the `build/` directory with the binaries inside if you followed the manual installation instructions).\n\nThis will generate `detector_segmentation.h5` files (the segmentations of spots) in the movie folders:\n\n```\nexocytosis_data/\n\u251c\u2500\u2500 movie1/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 tiff/\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_2.tiff\n\u2502\u00a0\u00a0 \u2502   \u2514\u2500\u2500 ...\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 detector_segmentation.h5\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 movie.h5\n\u251c\u2500\u2500 movie2/\n\u2514\u2500\u2500 ...\n```\n\nThere are two ways of using an alternative detector:\n\n1) Call a custom detector command from the `edf_detect_spots` command. Make sure your detector generates segmentation maps with 1s where there are bright spots (no matter whether they are exocytosis events or not) and 0s elsewhere. You can specify the command to call the detector with the `--detector_command` and/or the `--detector_path` arguments. 
For example `edf_detect_spots --batch path/to/exocytosis_data/ --detector_path path/to/custom_detector.py --detector_command 'python \"{detector}\" -i \"{input}\" -o \"{output}\"'` will call `custom_detector.py` for each movie in the dataset like so: `python path/to/custom_detector.py -i path/to/exocytosis_data/movieN/tiff/ -o path/to/exocytosis_data/movieN/detector_segmentation.h5`. The detector will have to handle all `.tiff` frames and generate a segmentation in the `.h5` format.\n\nYou can make sure that the detector segmentations are correct by opening them in napari with the corresponding movie. Open both `.h5` files in napari, put the `detector_segmentation.h5` layer on top, then right-click on it and select \"Convert to labels\". You should see the detections in red on top of the movie.\n\n2) Use the software of your choice (e.g. ImageJ) to create annotation files. An annotation file consists of a list of bright spot coordinates (no matter whether they are exocytosis events or not). It can be a .csv or .xml file, and must follow the same format as described in the [3. Annotate exocytosis events](#3-annotate-exocytosis-events) section below (bright spots must have a `class_label` equal to 1).\n\nNote that one can convert annotations (.xml or .csv files describing bright spots) to segmentation maps (.h5 files) with the `edf_generate_segmentation` command, and segmentation maps to annotations with the `edf_generate_annotation` command. This can be useful if you use your own detector which generates either annotations or segmentations.\n\n#### 3. Annotate exocytosis events\n\nThe training requires movies to be annotated with the localizations of exocytosis events and bright spots. The recommended way to annotate exocytosis events is to use the [`napari-exodeepfinder` plugin](https://github.com/deep-finder/napari-exodeepfinder), but it is also possible to use other software (e.g. 
ImageJ) as long as the output annotations respect the format described below.\n\nAnnotate the exocytosis events in the movies with the `napari-exodeepfinder` plugin:\n\n- Follow the install instructions, and open napari.\n- In the menu, choose `Plugins > Napari DeepFinder > Annotation` to open the annotation tools.\n- Open a movie (for example `exocytosis_data/movie1/movie.h5`).\n- Create a new points layer, and name it `movie_1` (any name with the `_1` suffix, since we want to annotate with the class 1). \n- In the annotation panel, select the layer you just created in the \"Points layer\" select box (you can skip this step and use the \"Add points\" and \"Delete selected point\" buttons from the layer controls).\n- You can use the Orthoslice view to easily navigate in the volume, by using the `Plugins > Napari DeepFinder > Orthoslice view` menu.\n- Scroll in the movie until you find an exocytosis event.\n- If you opened the Orthoslice view, you can click on an exocytosis event to put the red cursor at its location, then click the \"Add point\" button in the annotation panel to annotate the event.\n- You can also use the \"Add points\" and \"Delete selected point\" buttons from the layer controls.\n- Once you have annotated all events, save your annotations to XML by choosing the `File > Save selected layer(s)...` menu, or by using Ctrl+S (Cmd+S on a Mac), **and choose the *Napari DeepFinder (\\*.xml)* format**. 
Save the file beside the movie, and name it `expert_annotation.xml` (this should result in `exocytosis_data/movie1/expert_annotation.xml` in the above example).\n\nAnnotate all training and validation movies with this procedure; you should end up with the following folder structure:\n\n```\nexocytosis_data/\n\u251c\u2500\u2500 movie1/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 tiff/\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_2.tiff\n\u2502\u00a0\u00a0 \u2502   \u2514\u2500\u2500 ...\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 detector_segmentation.h5\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 expert_annotation.xml\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 movie.h5\n\u251c\u2500\u2500 movie2/\n\u2514\u2500\u2500 ...\n```\n\nMake sure that the `expert_annotation.xml` files you just created have the following format:\n\n```\n<objlist>\n  <object tomo_idx=\"0\" class_label=\"1\" x=\"71\" y=\"152\" z=\"470\"/>\n  <object tomo_idx=\"0\" class_label=\"1\" x=\"76\" y=\"184\" z=\"445\"/>\n  <object tomo_idx=\"0\" class_label=\"1\" x=\"141\" y=\"150\" z=\"400\"/>\n  <object tomo_idx=\"0\" class_label=\"1\" x=\"200\" y=\"237\" z=\"420\"/>\n  <object tomo_idx=\"0\" class_label=\"1\" x=\"95\" y=\"229\" z=\"438\"/>\n  ...\n</objlist>\n```\n\nIf you used software other than `napari-exodeepfinder` (e.g. ImageJ) to annotate exocytosis events, make sure your output files follow the same structure. They can be `.csv` files, but they must follow the same naming, as in the following `example.csv`:\n\n```\ntomo_idx,class_label,x,y,z\n0,1,133,257,518\n0,1,169,230,519\n0,1,184,237,534\n0,1,146,260,546\n```\n\nThe `class_label` must be 1, and `tomo_idx` must be 0.\n\n#### 4. 
Convert expert annotations to expert segmentations\n\nConvert your manual annotations (named expert annotations) into expert segmentations so that they can be merged with the detected bright spots and used for training.\n\nUse the `generate_segmentation` action in the GUI, or the following command to convert the annotations to segmentations:\n\n`edf_generate_segmentation --batch path/to/exocytosis_data/`\n\nYou will end up with the following structure:\n\n```\nexocytosis_data/\n\u251c\u2500\u2500 movie1/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 tiff/\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_2.tiff\n\u2502\u00a0\u00a0 \u2502   \u2514\u2500\u2500 ...\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 detector_segmentation.h5\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 expert_annotation.xml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 expert_segmentation.h5\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 movie.h5\n\u251c\u2500\u2500 movie2/\n\u2514\u2500\u2500 ...\n```\n\nNote that the expert annotation can be a `.csv` file as long as it respects the correct labeling.\n\nAgain, you can check in napari that everything went right by opening all images and checking that `expert_segmentation.h5` corresponds to `expert_annotation.xml` and the movie.\n\n#### 5. Merge detector and expert data\n\nThen, merge the detector detections with the expert annotations using the `merge_detector_expert` action in the GUI, or the `edf_merge_detector_expert` command:\n\n`edf_merge_detector_expert --batch path/to/exocytosis_data/`\n\nThis will create two new files: `merged_annotation.xml` (the merged annotations) and `merged_segmentation.h5` (the merged segmentations). 
The exocytosis events are first removed from the detector segmentation (`detector_segmentation.h5`), then the remaining events (from the detector and the expert) are transferred to the merged segmentation (`merged_segmentation.h5`), with class 2 for exocytosis events and class 1 for other events. The maximum number of other events in the annotation is 9800: if there are more than 9800 other events, 9800 of them will be picked randomly and the others will be discarded.\n\nThe `exocytosis_data/` folder will then follow this structure:\n\n```\nexocytosis_data/\n\u251c\u2500\u2500 movie1/\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 tiff/\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_1.tiff\n\u2502\u00a0\u00a0 \u2502   \u251c\u2500\u2500 frame_2.tiff\n\u2502\u00a0\u00a0 \u2502   \u2514\u2500\u2500 ...\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 detector_segmentation.h5\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 expert_annotation.xml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 expert_segmentation.h5\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 merged_annotation.xml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 merged_segmentation.h5\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 movie.h5\n\u251c\u2500\u2500 movie2/\n\u2514\u2500\u2500 ...\n```\n\nAgain, make sure everything looks right in napari.\n\n#### 6. 
Organize training files\n\nFinally, the training data should be organized in the following way:\n\n```\ndataset/\n\u251c\u2500\u2500 train/\n\u2502   \u251c\u2500\u2500 movie1.h5\n\u2502   \u251c\u2500\u2500 movie1_objl.xml\n\u2502   \u251c\u2500\u2500 movie1_target.h5\n\u2502   \u251c\u2500\u2500 movie2.h5\n\u2502   \u251c\u2500\u2500 movie2_objl.xml\n\u2502   \u251c\u2500\u2500 movie2_target.h5\n...\n\u2514\u2500\u2500 valid/\n    \u251c\u2500\u2500 movie3.h5\n    \u251c\u2500\u2500 movie3_objl.xml\n    \u251c\u2500\u2500 movie3_target.h5\n...\n```\n\nThis structure can be obtained with the `structure_training_dataset` action in the GUI, or by using the `edf_structure_training_dataset` command:\n\n`edf_structure_training_dataset --input path/to/exocytosis_data/ --output path/to/dataset/`\n\nThis will organize the input folder (which should be structured as in the previous step) into the above final structure, by putting 70% of the movies in the `train/` folder and 30% of them in the `valid/` folder.\n\nMake sure the output folder is correct, and that you can open its content in napari.\n\n#### 7. Train your custom model\n\nFinally, launch the training with the `train` action in the GUI, or the command `edf_train --dataset path/to/dataset/ --output path/to/model/`.\n\n#### Summary\n\nHere are all the steps you should execute to train a new model:\n\n1. Convert tiff frames to h5 file: `edf_convert_tiff_to_h5 --batch path/to/exocytosis_data/ --make_subfolder`\n1. Use [`napari-exodeepfinder`](https://github.com/deep-finder/napari-exodeepfinder) to annotate exocytosis events in the movies\n1. Detect all spots: `edf_detect_spots --detector_path path/to/atlas/ --batch path/to/exocytosis_data/`\n1. Generate expert segmentations: `edf_generate_segmentation --batch path/to/exocytosis_data/`\n1. Merge expert and detector segmentations: `edf_merge_detector_expert --batch path/to/exocytosis_data/`\n1. 
Structure the files: `edf_structure_training_dataset --input path/to/exocytosis_data/ --output path/to/dataset/`\n1. Train the model: `edf_train --dataset path/to/dataset/ --output path/to/model/`\n\n## Virtual environments & package managers\n\nThere are two major ways of creating virtual environments in Python: venv and conda; and two major ways of installing packages: pip and conda.\n\n### Virtual environment: venv & conda\n\nThe simplest way of creating a virtual environment in Python is to use [venv](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#create-and-use-virtual-environments). Make sure your Python version is greater than or equal to 3.10, and simply run `python -m venv ExoDeepFinder/` (`py -m venv ExoDeepFinder/` on Windows) to create your environment (replace `ExoDeepFinder` with the name you want for your environment). Then run `source ExoDeepFinder/bin/activate` to activate it (`ExoDeepFinder\\Scripts\\activate` on Windows).\n\nAlternatively, you can use [Conda](https://conda.io/projects/conda/en/latest/index.html) (or a nice minimalist alternative like [Micromamba](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html), see below) to create a Python 3.10 environment, even if your Python version is different.\n\nOnce conda is installed, run `conda create -n ExoDeepFinder python=3.10` to create the environment with python 3.10, and `conda activate ExoDeepFinder` to activate it.\n\n#### Conda alternatives\n\nThe simplest way to install and use Conda is via [Micromamba](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html), which is a minimalist drop-in replacement. Once you have installed it, just use `micromamba` instead of `conda` for all your conda commands (some unusual commands might not be implemented in micromamba, but it is really sufficient for most use cases). 
\n\nFor example, run `micromamba create -n ExoDeepFinder python=3.10` to create the environment with python 3.10, and `micromamba activate ExoDeepFinder` to activate it.\n\n### Package managers: pip & conda\n\nThe [Numpy documentation](https://numpy.org/install/#pip--conda) explains the main differences between pip and conda:\n\n> The two main tools that install Python packages are `pip` and `conda`. Their functionality partially overlaps (e.g. both can install `numpy`), however, they can also work together. We\u2019ll discuss the major differences between pip and conda here - this is important to understand if you want to manage packages effectively.\n\n> The first difference is that conda is cross-language and it can install Python, while pip is installed for a particular Python on your system and installs other packages to that same Python install only. This also means conda can install non-Python libraries and tools you may need (e.g. compilers, CUDA, HDF5), while pip can\u2019t.\n\n> The second difference is that pip installs from the Python Packaging Index (PyPI), while conda installs from its own channels (typically \u201cdefaults\u201d or \u201cconda-forge\u201d). PyPI is the largest collection of packages by far, however, all popular packages are available for conda as well.\n\n> The third difference is that conda is an integrated solution for managing packages, dependencies and environments, while with pip you may need another tool (there are many!) for dealing with environments or complex dependencies.\n\n## Development\n\nTo install ExoDeepFinder for development, clone the repository (`git clone git@github.com:deep-finder/tirfm-deepfinder.git`), create and activate a virtual environment (see section above), and install it with `pip install -e ./tirfm-deepfinder/[GUI]`.\n\nTo generate the release binaries, install PyInstaller with `pip install pyinstaller` in your virtual environment; and package ExoDeepFinder with `pyinstaller exodeepfinder.spec`. 
You must run this command on the destination platform (run on Windows for a Windows release, on Mac for a Mac release, and on Linux for a Linux release).\n",
    "bugtrack_url": null,
    "license": "GPL-3.0",
    "summary": "ExoDeepFinder is an original deep learning approach to localize macromolecules in cryo electron tomography images. The method is based on image segmentation using a 3D convolutional neural network.",
    "version": "0.3.3",
    "project_urls": null,
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "de2f14a0f4d9d3bad22419bcb66cd4d39b76b53ee211c16fe9c86cbc9129f2d9",
                "md5": "3ed3a3036291ebe988f330e74ed89a95",
                "sha256": "05de81928806d8b6188fc906174068405b6487e5a0f1a1dadc47e1bfec023601"
            },
            "downloads": -1,
            "filename": "exodeepfinder-0.3.3-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "3ed3a3036291ebe988f330e74ed89a95",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.9",
            "size": 71025,
            "upload_time": "2024-08-12T13:31:37",
            "upload_time_iso_8601": "2024-08-12T13:31:37.529553Z",
            "url": "https://files.pythonhosted.org/packages/de/2f/14a0f4d9d3bad22419bcb66cd4d39b76b53ee211c16fe9c86cbc9129f2d9/exodeepfinder-0.3.3-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "564c97700c4e761ad35f033aa2f346b68abe34d2cdcb939434c247f3116831be",
                "md5": "a5a5cfd3ae08c520b5f91ee894ff05b1",
                "sha256": "06e7e654a31fc4dd4e9f83b53152a8cee821a8c931b81a8494a2a0c58dce2399"
            },
            "downloads": -1,
            "filename": "exodeepfinder-0.3.3.tar.gz",
            "has_sig": false,
            "md5_digest": "a5a5cfd3ae08c520b5f91ee894ff05b1",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.9",
            "size": 10186700,
            "upload_time": "2024-08-12T13:31:39",
            "upload_time_iso_8601": "2024-08-12T13:31:39.944912Z",
            "url": "https://files.pythonhosted.org/packages/56/4c/97700c4e761ad35f033aa2f346b68abe34d2cdcb939434c247f3116831be/exodeepfinder-0.3.3.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-08-12 13:31:39",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "exodeepfinder"
}
        