isi-segmentation

Name: isi-segmentation
Version: 0.1.1
Home page: https://github.com/AllenNeuralDynamics/isi_segmentation
Summary: Supervised ISI segmentation using TensorFlow
Author: Di Wang
Requires Python: >=3.7
Keywords: deep learning, computer vision
Upload time: 2024-04-18 23:54:02
## Welcome!
This is a repository for segmenting visual cortex areas from ISI (intrinsic signal imaging) sign maps.
The model is a U-Net trained with TensorFlow on roughly 2,000 ISI experiments.

The sign map is segmented into regions, and up to 14 visual cortex areas can be identified.
The output label map is saved as a '.png' file whose pixel values (1, 2, 3, ...)
correspond to visual cortex areas (e.g., VISp, VISam, VISal, ...).
The class definitions are as follows:
| Class | Acronym | Name |
| :---------- | :----------- | :------------ |
| 1 | VISp | Primary visual area |
| 2 | VISam | Anteromedial visual area |
| 3 | VISal | Anterolateral visual area |
| 4 | VISl | Lateral visual area |
| 5 | VISrl | Rostrolateral visual area |
| 6 | VISpl | Posterolateral visual area |
| 7 | VISpm | Posteromedial visual area |
| 8 | VISli | Laterointermediate area |
| 9 | VISpor | Postrhinal area |
| 10 | VISrll | Rostrolateral lateral visual area |
| 11 | VISlla | Laterolateral anterior visual area |
| 12 | VISmma | Mediomedial anterior visual area |
| 13 | VISmmp | Mediomedial posterior visual area |
| 14 | VISm | Medial visual area |
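
As a quick reference for working with a predicted label map, the following sketch (not part of the package) counts the pixels belonging to each class. It assumes the label map is a single-channel PNG whose pixel values are the class indices in the table above; any other value is treated as background, and the file path is a placeholder.

```python
# Minimal sketch: count pixels per visual-area class in a predicted label map.
# Assumes a single-channel PNG whose pixel values are the class indices above.
import numpy as np
from PIL import Image

CLASS_NAMES = {
    1: "VISp", 2: "VISam", 3: "VISal", 4: "VISl", 5: "VISrl",
    6: "VISpl", 7: "VISpm", 8: "VISli", 9: "VISpor", 10: "VISrll",
    11: "VISlla", 12: "VISmma", 13: "VISmmp", 14: "VISm",
}

label_map = np.array(Image.open("label_map.png"))  # placeholder path
for value, count in zip(*np.unique(label_map, return_counts=True)):
    name = CLASS_NAMES.get(int(value), "background/unknown")
    print(f"class {int(value):2d} ({name}): {count} pixels")
```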


## Installation
To use the isi-segmentation library, either install it directly from PyPI with pip, or clone this repository and install it from source with the requirements listed in setup.py.

#### Method 1: pip install
```
pip install isi-segmentation
```

#### Method 2: install from source
1. First, ensure git is installed:
```
git --version
```
If `git` is not recognized, install [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).

2. Move into the directory where you want to place the repository, clone it from GitHub, and install it in editable mode:

```
cd <SOME_FOLDER>
git clone https://github.com/AllenNeuralDynamics/isi_segmentation.git
cd isi_segmentation
pip install -e .
```
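
Either way, you can confirm the installation with a quick version check (this uses `importlib.metadata`, available on Python 3.8+):

```python
# Confirm the installed distribution is visible to Python.
from importlib.metadata import version

print(version("isi-segmentation"))  # expected to print 0.1.1
```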

The prediction script `run_predict.py` takes four inputs (a sketch for inspecting the HDF5 input follows the list):

- hdf5_path (PathLike): path to the HDF5 file that contains the sign map
- sign_map_path (PathLike): path to the sign map extracted from the .hdf5 file, used for prediction
- label_map_path (PathLike): path at which to save the output label map
- model_path (PathLike): path to the trained model (to download it, see [Download trained model](#download-trained-model))
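
Before running prediction, it can help to confirm that the HDF5 file actually contains a sign map. The sketch below only lists the contents of the file; it does not assume a particular dataset name, since that depends on how the file was produced.

```python
# Minimal sketch: list every group/dataset path inside the input HDF5 file.
import h5py

with h5py.File("./sample_data/661511116_372583_20180207_processed.hdf5", "r") as f:
    f.visit(print)  # prints the name of each group and dataset in the file
```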


## Download trained model
Download the trained model with [gdown](https://github.com/wkentaro/gdown) (install it with `pip install gdown` if needed) into a local `model/` directory:
```
mkdir -p model
gdown 'https://drive.google.com/uc?id=1X5C0avuOcjnbZDcS0hG6yujd2bY1hrK1' -O ./model/isi_segmentation_model.h5
```
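
As an optional sanity check, you can verify that the download completed and that the file loads as a Keras model. This is a sketch assuming TensorFlow is installed and the file is a standard `.h5` Keras model, consistent with the extension above.

```python
# Load the downloaded weights to confirm the file is a valid Keras model.
# compile=False skips the training configuration, which is not needed here.
import tensorflow as tf

model = tf.keras.models.load_model("./model/isi_segmentation_model.h5", compile=False)
model.summary()
```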

## Usage
To predict the label map for the sample sign map with the downloaded model, run:
```
python run_predict.py \
    --hdf5_path ./sample_data/661511116_372583_20180207_processed.hdf5 \
    --sign_map_path ./sample_data/661511116_372583_20180207_sign_map.jpg \
    --label_map_path ./sample_data/661511116_372583_20180207_label_map.png \
    --model_path ./model/isi_segmentation_model.h5
```

Alternatively, you can run:
```
sh run.sh
```

Please make sure you have already downloaded the trained model (see [Download trained model](#download-trained-model)) and updated `model_path` accordingly.
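
If you have several experiments to process, a small driver script can loop over them. The sketch below is not part of the package; it simply shells out to `run_predict.py` for every `*.hdf5` file in a directory, and the output naming convention is an assumption based on the sample data above.

```python
# Minimal batching sketch: run run_predict.py for every *.hdf5 file in a folder.
import subprocess
from pathlib import Path

data_dir = Path("./sample_data")                     # placeholder data directory
model_path = "./model/isi_segmentation_model.h5"

for hdf5_path in sorted(data_dir.glob("*.hdf5")):
    stem = hdf5_path.stem.replace("_processed", "")  # assumed naming convention
    subprocess.run(
        [
            "python", "run_predict.py",
            "--hdf5_path", str(hdf5_path),
            "--sign_map_path", str(data_dir / f"{stem}_sign_map.jpg"),
            "--label_map_path", str(data_dir / f"{stem}_label_map.png"),
            "--model_path", model_path,
        ],
        check=True,
    )
```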


## Model output directory structure
After running prediction, a directory is created with the following structure:
```console
    /path/to/outputs/
      ├── <experiment_name>.png
      └── <experiment_name>_visualize.png
```      
* `<experiment_name>.png`: the predicted label map for the sign map; the filename is whatever you pass as `label_map_path`
* `<experiment_name>_visualize.png`: a side-by-side visualization of the sign map and its resulting label map

An example of isi-segmentation outputs can be found in `./sample_data/`.


## Visualization

To help visually inspect the output label map, a plot is saved as `<experiment_name>_visualize.png` in the same folder as the label map.
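
If you prefer to build your own figure (for example, with a different colormap), a minimal matplotlib sketch along these lines can be used. The input paths below reuse the sample data from the Usage section, and the output filename is a placeholder.

```python
# Minimal plotting sketch: show the sign map and the predicted label map side by side.
import matplotlib.pyplot as plt
from PIL import Image

sign_map = Image.open("./sample_data/661511116_372583_20180207_sign_map.jpg")
label_map = Image.open("./sample_data/661511116_372583_20180207_label_map.png")

fig, (ax_sign, ax_label) = plt.subplots(1, 2, figsize=(10, 5))
ax_sign.imshow(sign_map, cmap="gray")
ax_sign.set_title("sign map")
ax_label.imshow(label_map, cmap="tab20")
ax_label.set_title("predicted label map")
for ax in (ax_sign, ax_label):
    ax.axis("off")
plt.tight_layout()
plt.savefig("label_map_side_by_side.png", dpi=150)  # placeholder output filename
```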






            
