segmentation-skeleton-metrics


Name: segmentation-skeleton-metrics
Version: 4.2.11
Summary: Python package for evaluating neuron segmentations in terms of the number of splits and merges
Upload time: 2024-07-11 06:26:27
Home page: None
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.7
License: MIT
Requirements: no requirements were recorded
# segmentation-skeleton-metrics

[![License](https://img.shields.io/badge/license-MIT-brightgreen)](LICENSE)
![Code Style](https://img.shields.io/badge/code%20style-black-black)

[![semantic-release: angular](https://img.shields.io/badge/semantic--release-angular-e10079?logo=semantic-release)](https://github.com/semantic-release/semantic-release)

Python package for performing a skeleton-based evaluation of a predicted segmentation of neural arbors. This tool detects topological mistakes (i.e., splits and merges) in the predicted segmentation by comparing it to ground truth skeletons. Once this comparison is complete, several statistics (e.g., edge accuracy, split count, merge count) are computed and returned in a dictionary.
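The split/merge logic can be illustrated with a minimal sketch. Note that the function names, the edge-list skeleton representation, and the counting rules below are illustrative assumptions, not this package's actual API:

```python
import numpy as np

def count_splits(skeleton_edges, pred_labels):
    # Flag a split when the two endpoints of a ground-truth skeleton
    # edge fall in different nonzero predicted segments.
    splits = 0
    for a, b in skeleton_edges:
        la, lb = pred_labels[a], pred_labels[b]
        if la != 0 and lb != 0 and la != lb:
            splits += 1
    return splits

def count_merges(skeletons, pred_labels):
    # Flag a merge when a single predicted segment contains nodes
    # from more than one ground-truth skeleton.
    hit_by = {}
    for skeleton_id, nodes in skeletons.items():
        for voxel in nodes:
            label = pred_labels[voxel]
            if label != 0:
                hit_by.setdefault(label, set()).add(skeleton_id)
    return sum(1 for ids in hit_by.values() if len(ids) > 1)

# Toy 1 x 1 x 6 label volume containing predicted segments 1, 2, and 3.
pred = np.array([[[1, 1, 2, 2, 0, 3]]])

# One skeleton running through voxels 0..3 along the last axis:
# the edge between labels 1 and 2 is a split.
edges = [((0, 0, i), (0, 0, i + 1)) for i in range(3)]
print(count_splits(edges, pred))  # 1

# Two skeletons whose nodes land in the same predicted segment 1:
# that segment is a merge.
skels = {"gt_a": [(0, 0, 0)], "gt_b": [(0, 0, 1)]}
print(count_merges(skels, pred))  # 1
```

The real package operates on graphs and label volumes rather than raw edge lists, but the underlying idea is the same: splits are detected along skeleton edges, merges across skeletons.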


## Usage

Here is a simple example of evaluating a predicted segmentation. Note that this package supports a number of different input types; see the documentation for details.

```python
import os

from aind_segmentation_evaluation.evaluate import run_evaluation
from aind_segmentation_evaluation.conversions import volume_to_graph
from tifffile import imread


if __name__ == "__main__":

    # Initializations
    data_dir = "./resources"
    target_graphs_dir = os.path.join(data_dir, "target_graphs")
    path_to_target_labels = os.path.join(data_dir, "target_labels.tif")
    pred_labels = imread(os.path.join(data_dir, "pred_labels.tif"))
    pred_graphs = volume_to_graph(pred_labels)

    # Evaluation
    stats = run_evaluation(
        target_graphs_dir,
        path_to_target_labels,
        pred_graphs,
        pred_labels,
        filetype="tif",
        output="tif",
        output_dir=data_dir,
        permute=[2, 1, 0],
        scale=[1.101, 1.101, 1.101],
    )

    # Write out results
    print("Graph-based evaluation...")
    for key in stats.keys():
        print("   {}: {}".format(key, stats[key]))

```

## Installation
To use the software, in the root directory, run
```bash
pip install -e .
```

To develop the code, run
```bash
pip install -e ".[dev]"
```

To install this package from PyPI, run
```bash
pip install aind-segmentation-evaluation
```

### Pull requests

For internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily use [Angular](https://github.com/angular/angular/blob/main/CONTRIBUTING.md#commit) style for commit messages. Roughly, they should follow the pattern:
```text
<type>(<scope>): <short summary>
```

where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:

- **build**: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)
- **ci**: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)
- **docs**: Documentation only changes
- **feat**: A new feature
- **fix**: A bugfix
- **perf**: A code change that improves performance
- **refactor**: A code change that neither fixes a bug nor adds a feature
- **test**: Adding missing tests or correcting existing tests
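For example, a commit fixing a bug in the evaluation code (hypothetical scope and summary) might read:

```text
fix(evaluate): handle empty prediction volumes
```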

            
