ViNeSeg

Name: ViNeSeg
Version: 0.1.4
Home page: https://github.com/NiRuff/ViNe-Seg/tree/main
Summary: Image Polygonal Annotation with Python
Upload time: 2023-12-11 09:17:53
Author: Nicolas Ruffini
License: GPLv3
Keywords: image annotation, machine learning
# Visible Neuron Segmentation: ViNe-Seg (beta version)

<img align="right" src="https://user-images.githubusercontent.com/50486014/173547029-a4a1bfac-379f-42ef-aaec-166d814ea421.png" width="30%" height="30%" /> <br>


### Embedding Deep-Learning-assisted Segmentation of Visible Neurons and subsequent Analysis in one Graphical User Interface
**ViNe-Seg** comes in two versions:


| |ViNe-Seg|<nobr> ViNe-Seg :heavy_plus_sign: </nobr>|
|:----|:------:|:------:|
|Autosegmentation Model Manager| :heavy_check_mark: | :heavy_check_mark: |
|ViNe-Seg Autosegmentation step| :heavy_check_mark: | :heavy_check_mark: |
|Manual refinement of segmentation results| :heavy_check_mark: | :heavy_check_mark: |
|Trace Extraction| :heavy_check_mark: | :heavy_check_mark: |
|ΔF/F conversion| :heavy_check_mark: | :heavy_check_mark: |
|Microscope Mode| :heavy_check_mark: | :heavy_check_mark: |
|Free| :heavy_check_mark: | :heavy_check_mark: |
|Open-Source| :heavy_check_mark: | :heavy_check_mark: |
|CASCADE SPIKE Inference| :x: | :heavy_check_mark: |


## Installation of the basic version of ViNe-Seg
We aimed to make ViNe-Seg as user-friendly as possible. Therefore, ViNe-Seg comes with a GUI and is easily installable using pip:

### General recommendation: Create a new conda environment
```
conda create -n vineseg_env python=3.9
conda activate vineseg_env
```
### Then run:
### Windows recommendation:
```
pip install PyQt5
pip install vineseg
```

### Mac recommendation:
```
conda install pyqt
pip install vineseg
```

### Ubuntu recommendation:
```
conda install pyqt
pip install vineseg
pip uninstall opencv-python
pip install opencv-python-headless 
```

ViNe-Seg will be downloaded and installed with all necessary dependencies.

## Installation of the advanced version of ViNe-Seg, including the CASCADE SPIKE inference

If you also want to use the CASCADE SPIKE Inference (see https://github.com/HelmchenLabSoftware/Cascade), you might want to install the advanced version of ViNe-Seg instead. To do so, install Miniconda or Anaconda on your machine and run the following commands in the given order to create a conda environment and install ViNe-Seg there:

```
conda create -n vineseg-adv python=3.7 tensorflow==2.3 keras==2.3.1 h5py numpy scipy matplotlib seaborn ruamel.yaml
conda activate vineseg-adv
pip install vineseg
```


<!---![showcase gif](https://github.com/NiRuff/GithubMedia/blob/main/ViNeSeg_Installation.gif)--->
<p align="center"><img src="https://github.com/NiRuff/GithubMedia/blob/main/ViNeSeg_Installation.gif"  width="85%"></p>

From now on, the advanced version of ViNe-Seg is available in your conda environment called *vineseg-adv*.

## Starting ViNe-Seg
You can start ViNe-Seg with the following command after installation (make sure the environment in which ViNe-Seg is installed is activated):
```
python -m vineseg
```

If you need to activate the environment first, run:

```
conda activate vineseg-adv
python -m vineseg
```

Now, ViNe-Seg will check whether a trained model is already installed and will download the default model if none is found on your machine.

After this step, the GUI opens automatically. There you can download other models, choose between them, load your mean image in PNG or TIFF format, and run the autosegmentation using the ```Autosegmentation``` command in the menu bar at the top of the screen.
We embedded the ViNe-Seg functionality in the labelme GUI (see https://github.com/wkentaro/labelme) by adding a button for running the autosegmentation step of ViNe-Seg, as well as a model manager, trace extraction, baseline correction, and the CASCADE SPIKE Inference. We also added some underlying functionality: the generated JSON labeling files are loaded automatically, results from earlier ViNe-Seg runs can be loaded via the new ```Load Polygon``` button, and the segmentation output can be manipulated either by setting the minimum confidence above which results are shown or by manually editing, adding, or deleting shapes. You can further switch between enumerated neuron labels (Neuron1, ..., NeuronX) and area-based neuron labels (Neuron too small, ..., Neuron too big) by clicking a button, or remove all neurons bigger or smaller than thresholds you previously defined within the GUI.
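Of these analysis steps, the ΔF/F conversion is easy to illustrate. Below is a minimal sketch of the general idea, not ViNe-Seg's exact implementation (its baseline correction may differ in detail); the array shape and the percentile baseline are assumptions for the example:

```python
import numpy as np

def delta_f_over_f(traces, baseline_percentile=10):
    """Convert raw fluorescence traces to dF/F.

    traces: array of shape (n_neurons, n_frames).
    F0 is estimated per neuron as a low percentile of the trace,
    a common simple baseline choice.
    """
    f0 = np.percentile(traces, baseline_percentile, axis=1, keepdims=True)
    return (traces - f0) / f0

# Example: two noisy traces, one with a simulated calcium transient
rng = np.random.default_rng(0)
raw = 100 + rng.normal(0, 2, size=(2, 500))
raw[0, 200:220] += 50
dff = delta_f_over_f(raw)
print(dff.shape)  # (2, 500)
```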

## What ViNe-Seg can be used for:

Here you can see some example videos of ViNe-Seg applied to the neurofinder dataset (https://github.com/codeneuro/neurofinder).

### The Autosegmentation:

<!---![showcase gif](https://github.com/NiRuff/GithubMedia/blob/main/ViNeSeg_Autosegmentation.gif)--->
<p align="center"><img src="https://github.com/NiRuff/GithubMedia/blob/main/ViNeSeg_Autosegmentation.gif" width="85%"></p>

### Refining the Autosegmentation:
<!---![showcase gif](https://github.com/NiRuff/GithubMedia/blob/main/ViNeSeg_Refine.gif)--->
<p align="center"><img src="https://github.com/NiRuff/GithubMedia/blob/main/ViNeSeg_Refine.gif" width="85%"></p>

### The Trace Extraction and CASCADE SPIKE Inference:
<p align="center"><img src="https://github.com/NiRuff/GithubMedia/blob/main/VineSeg2_.gif" width="85%" /></p>


### The Microscope Mode:
<p align="center"><img src="https://github.com/NiRuff/GithubMedia/blob/main/ViNeSeg_Microscope.gif" width="85%" /></p>

## Train a custom ViNe-Seg model

### Data Preparation

In this section, we'll walk you through the steps to prepare your dataset for training. This involves creating projections of your data, annotating images, and setting up the folder structure.

#### Create Mean/Max Projection of Your Data

Before you start annotating, you should create mean or max projections of your data. This will help in better visualization and annotation. You can use image processing libraries like OpenCV or ImageJ to accomplish this.
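For example, a minimal sketch using NumPy and the `tifffile` package (the file names are placeholders; the stack is assumed to have shape `(frames, height, width)`):

```python
import numpy as np
import tifffile  # pip install tifffile

# Load a time-series stack of shape (frames, height, width)
stack = tifffile.imread("recording.tif")

# Collapse the time axis into single 2D projection images
mean_proj = stack.mean(axis=0)
max_proj = stack.max(axis=0)

# Save the projections for annotation
tifffile.imwrite("recording_mean.tif", mean_proj.astype(np.uint16))
tifffile.imwrite("recording_max.tif", max_proj.astype(np.uint16))
```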

#### Annotate with LabelImg

1. Download and install [LabelImg](https://github.com/HumanSignal/labelImg).
2. Open LabelImg and load your mean/max projected images.
3. Annotate the objects in the images and save the annotations in the COCO format.
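Before moving on, it can help to sanity-check the exported annotations. A short sketch (the file path is a placeholder for wherever you saved your COCO file):

```python
import json

with open("data/example/coco/annotations.json") as f:
    coco = json.load(f)

print(len(coco["images"]), "images,", len(coco["annotations"]), "annotations")

# In COCO, polygon masks are flat [x1, y1, x2, y2, ...] lists under "segmentation"
missing = [a["id"] for a in coco["annotations"] if not a.get("segmentation")]
print("annotations without a segmentation polygon:", missing)
```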

#### Create Data Folder

Create a folder for your dataset in the `data/` directory. For example, create a folder named `example`:

```bash
mkdir data/example
```

#### Create Required Folder Structure

Use the `create_yolo_folder_struct` function (located in `utils.py`) to set up the required folder structure:

```python
create_yolo_folder_struct("data/example")
```

Your folder should now have the following structure:

```
data/example  
├── coco  
├── raw  
└── yolo  
```
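If you prefer to set this up by hand, the equivalent is a few directory-creation calls (assuming the helper does nothing beyond creating these three subfolders):

```python
from pathlib import Path

# Create data/example/{coco,raw,yolo} if they do not exist yet
for sub in ("coco", "raw", "yolo"):
    Path("data/example", sub).mkdir(parents=True, exist_ok=True)
```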

#### Prepare Training Data

Use the `coco_seg_to_yolov8` function to prepare your training data:

```python
coco_seg_to_yolov8(
    coco_path="data/example/coco",
    yolo_path="data/example/yolo",
    splits=[0.7, 0.2, 0.1]
)
```

Here, `splits` is a list of three floats that add up to 1. The first number is the share of images used for training, the second for validation, and the third for testing.
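To make the split concrete, here is an illustration of how such a partition could be computed; `coco_seg_to_yolov8` handles this internally, so the snippet is purely explanatory:

```python
import random

files = [f"img_{i:03d}.png" for i in range(100)]  # stand-in file list
random.seed(42)
random.shuffle(files)

# Carve the shuffled list into 70% / 20% / 10%
n_train = int(0.7 * len(files))
n_val = int(0.2 * len(files))
train = files[:n_train]
val = files[n_train:n_train + n_val]
test = files[n_train + n_val:]
print(len(train), len(val), len(test))  # 70 20 10
```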

#### Create `data.yaml`

Finally, create a `data.yaml` file inside the `data/example/yolo` folder with the following structure:

```yaml
path: /path/to/vineseg/data/example
train: yolo/train/images
val: yolo/val/images
test: yolo/test/images
nc: 1
names: ['cell']
```

This YAML file will be used during the training process to locate your dataset and set other configurations.
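Before starting a long training run, it can be worth verifying that the paths in the YAML actually resolve. A small sketch (assuming PyYAML is installed):

```python
from pathlib import Path
import yaml  # pip install pyyaml

with open("data/example/yolo/data.yaml") as f:
    cfg = yaml.safe_load(f)

# Each split entry is relative to the top-level "path" key
root = Path(cfg["path"])
for split in ("train", "val", "test"):
    split_dir = root / cfg[split]
    print(split, split_dir, "exists:", split_dir.is_dir())
```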

### Training

In this section, we'll guide you through the process of training your custom model. This involves setting up your training environment and running the training script.

#### Hardware Requirements

It's recommended to use a PC with a sufficiently powerful GPU for training. Ensure that your GPU has at least 8 GB of VRAM for optimal performance.
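A quick way to check what your setup reports (PyTorch is installed as a dependency of Ultralytics):

```python
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, f"{props.total_memory / 1024**3:.1f} GB VRAM")
```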

#### Open Jupyter Notebook

Open the Jupyter notebook named `train.ipynb` where the training code is located.

#### Choose a Model

You have two options for starting your training:

1. **Pre-trained Model**: Use a pre-trained model from Ultralytics that hasn't been trained on neurons yet. You can find a list of available models [here](https://docs.ultralytics.com/tasks/detect/#models). To load a small model, for example, use:
   
   ```python
   model = YOLO('yolov8s-seg.pt')
   ```

2. **Downloaded Model**: Use one of the downloaded models that you can access through the ViNeSeg GUI. To specify the path to one of these models, use:
   
   ```python
   model = YOLO('/path/to/vineseg/models/model.pt')
   ```
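Both snippets above assume the Ultralytics import at the top of the notebook:

```python
from ultralytics import YOLO  # pip install ultralytics
```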

#### Start Training

To start the training process, use the `train` method:

```python
model.train(
    data='/path/to/vineseg/data/example/data.yaml',
    epochs=100,
    imgsz=640,
    batch=16,
    show_labels=True
)
```

Here, you can adjust the `epochs`, `imgsz`, `batch`, and `show_labels` parameters according to your needs.

#### Parameters for `model.train()`

- **`data: str`**: This is the path to the `data.yaml` file that contains metadata about your dataset. It specifies where your training, validation, and test data are located.
  
  Example: `data='/path/to/vineseg/data/example/data.yaml'`

- **`epochs: int = 100`**: The number of training epochs. An epoch is one complete forward and backward pass of all the training examples. The default value is 100.
  
  Example: `epochs=100`

- **`imgsz: int = 640`**: The size of the images for training. The images will be resized to this dimension (width x height). The default value is 640.
  
  Example: `imgsz=640`

- **`batch: int = 16`**: The batch size for training. This is the number of training examples utilized in one iteration. The default value is 16.
  
  Example: `batch=16`

- **`show_labels: bool = True`**: Whether or not to display the labels during training. This is useful for visualizing the training process. The default value is True.
  
  Example: `show_labels=True`

#### Locate Trained Model and Weights

After the training is complete, you can find the trained model in the following directory:

```
/path/to/vineseg/runs/train
```

The weights for the trained model will be stored in:

```
/path/to/vineseg/runs/train/weights
```

#### Make Weights Accessible in ViNeSeg

To use the trained model in ViNeSeg, you need to copy the weights from the `runs/train/weights` folder to the ViNeSeg model folder.
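For example, a sketch using `shutil` (both paths and the target file name are placeholders; adjust them to your run directory and to wherever your ViNeSeg installation stores its models):

```python
import shutil
from pathlib import Path

# Ultralytics saves the best-performing checkpoint as best.pt
best = Path("/path/to/vineseg/runs/train/weights/best.pt")
model_dir = Path("/path/to/vineseg/models")
shutil.copy(best, model_dir / "my_custom_model.pt")
```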

### Evaluate

After training your custom model, it's crucial to evaluate its performance to ensure it meets your project's requirements. This section will guide you through the evaluation process.

#### Validation Set

To get performance numbers for the validation set, run:

```python
model.val()
```
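`model.val()` also returns a metrics object; for a segmentation model, the box and mask mAP values can be read along these lines (attribute names as in recent Ultralytics versions; treat this as a sketch):

```python
metrics = model.val()
print("box mAP50-95:", metrics.box.map)
print("mask mAP50-95:", metrics.seg.map)
print("mask mAP50:", metrics.seg.map50)
```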

#### Test Set

To evaluate the model on the test set, specify the `split` parameter as `'test'`:

```python
model.val(split='test')
```

### Share Your Custom Model

We are excited to host your custom model to make it accessible to other researchers.
            
