segmentation-skeleton-metrics

Name: segmentation-skeleton-metrics
Version: 4.9.26 (PyPI)
Summary: Python package for evaluating neuron segmentations in terms of the number of splits and merges
Author email: Anna Grim <anna.grim@alleninstitute.org>
Upload time: 2024-11-19 19:24:33
Requires Python: >=3.7
License: MIT

# SkeletonMetrics

[![License](https://img.shields.io/badge/license-MIT-brightgreen)](LICENSE)
![Code Style](https://img.shields.io/badge/code%20style-black-black)

[![semantic-release: angular](https://img.shields.io/badge/semantic--release-angular-e10079?logo=semantic-release)](https://github.com/semantic-release/semantic-release)

Python package for assessing the accuracy of a predicted neuron segmentation by comparing it to a set of ground truth skeletons. This tool detects topological mistakes (i.e. splits and merges) in a predicted segmentation and then computes several skeleton-based metrics that quantify its topological accuracy.

## Details

We begin with a set of ground truth skeletons stored as individual SWC files, where the "xyz" coordinates correspond to voxels in an image. Each ground truth skeleton is loaded and represented as a NetworkX graph with the voxel coordinates stored as a node-level attribute. The evaluation is performed by first labeling the nodes of each graph with the corresponding segment IDs from the predicted segmentation. Topological mistakes are then detected by examining the labels of each edge; see the figure below for an overview of how splits and merges are detected.

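To make this concrete, the snippet below is a minimal sketch of building such a graph and attaching segment IDs. It is not the package's internal API: SWC parsing is omitted, and the label volume is assumed to be indexed in the same voxel order as the skeleton coordinates.

```python
import networkx as nx


def label_skeleton_graph(swc_rows, pred_labels):
    """Build a graph from parsed SWC rows and attach predicted segment IDs.

    swc_rows : iterable of (node_id, x, y, z, parent_id) tuples whose
        coordinates have already been converted to voxel indices.
    pred_labels : 3D integer array holding the predicted segmentation,
        assumed here to be indexed as pred_labels[x, y, z].
    """
    graph = nx.Graph()
    for node_id, x, y, z, parent_id in swc_rows:
        voxel = (int(x), int(y), int(z))
        # Segment ID at the node's voxel; 0 is treated as "no label"
        graph.add_node(node_id, voxel=voxel, label=int(pred_labels[voxel]))
        if parent_id != -1:  # -1 marks the root node in SWC files
            graph.add_edge(node_id, parent_id)
    return graph
```
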
<p>
  <img src="imgs/topological_mistakes.png" width="180" alt="Topological mistakes detected in skeleton">
  <br>
  <b>Figure:</b> Edges in skeletons are either correctly or incorrectly reconstructed based on the presence of merges or splits affecting the nodes attached to an edge. Colors correspond to segment IDs. From top to bottom: correct edge (both nodes have the same ID), split edge (nodes assigned to different segments), omitted edge (one or both nodes do not have an associated ID), merged edge (node assigned to a segment that covers more than one skeleton).
</p>
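
The per-edge decision illustrated in the figure can be written down directly. The function below is a hypothetical sketch: the names and the precomputed `skeletons_per_segment` mapping are assumptions, not the package's API.

```python
def classify_edge(label_a, label_b, skeletons_per_segment):
    """Classify one skeleton edge from the segment IDs of its two nodes.

    label_a, label_b : int
        Segment IDs of the edge's endpoints (0 means no associated ID).
    skeletons_per_segment : dict[int, int]
        Number of distinct ground truth skeletons covered by each segment ID.
    """
    if label_a == 0 or label_b == 0:
        return "omitted"  # one or both nodes have no associated segment
    if label_a != label_b:
        return "split"    # endpoints fall in different segments
    if skeletons_per_segment.get(label_a, 1) > 1:
        return "merged"   # the shared segment covers more than one skeleton
    return "correct"      # both nodes share a segment unique to this skeleton
```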

Metrics computed for each ground truth skeleton:

- Number of Splits: Number of segments that a ground truth skeleton is broken into.
- Number of Merges: Number of segments that incorrectly merge the ground truth skeleton with another skeleton.
- Percentage of Omit Edges: Proportion of edges in the ground truth skeleton that are omitted from the predicted segmentation.
- Percentage of Merged Edges: Proportion of edges in the ground truth skeleton that are merged in the predicted segmentation.
- Edge Accuracy: Proportion of edges in the ground truth skeleton that are correctly reconstructed in the predicted segmentation.
- Expected Run Length (ERL): Expected length of the correctly reconstructed runs of the ground truth skeleton in the predicted segmentation (see the sketch after this list for how these quantities follow from the per-edge classifications).
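
As a rough sketch (assumed names, not the package's API), the percentage metrics and edge accuracy follow from counting the per-edge classifications above, and ERL can be computed from the lengths of the correctly reconstructed runs under the common length-weighted definition:

```python
import numpy as np


def summarize_edges(edge_status):
    """edge_status: one 'correct'/'split'/'omitted'/'merged' label per edge."""
    n = len(edge_status)
    counts = {s: edge_status.count(s) for s in ("correct", "split", "omitted", "merged")}
    return {
        "% omit edges": counts["omitted"] / n,
        "% merged edges": counts["merged"] / n,
        "edge accuracy": counts["correct"] / n,
    }


def expected_run_length(run_lengths):
    """Length-weighted average length of the correctly reconstructed runs."""
    runs = np.asarray(run_lengths, dtype=float)
    return float((runs ** 2).sum() / runs.sum())
```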

## Usage

Here is a simple example of evaluating a predicted segmentation.

```python
from tifffile import imread

import numpy as np

from segmentation_skeleton_metrics.skeleton_metric import SkeletonMetric


def evaluate():
    # Load the predicted segmentation and score it against the ground truth
    # skeletons
    pred_labels = imread(pred_labels_path)
    skeleton_metric = SkeletonMetric(
        target_swcs_pointer,
        pred_labels,
        fragments_pointer=pred_swcs_pointer,
        output_dir=output_dir,
    )
    full_results, avg_results = skeleton_metric.run()

    # Report results
    print("Averaged Results...")
    for stat_name, value in avg_results.items():
        print(f"   {stat_name}: {round(value, 4)}")

    print("\nTotal Results...")
    print("# splits:", np.sum(list(skeleton_metric.split_cnt.values())))
    print("# merges:", np.sum(list(skeleton_metric.merge_cnt.values())))


if __name__ == "__main__":
    # Paths to the predicted label volume and the zipped SWC files
    output_dir = "./"
    pred_labels_path = "./pred_labels.tif"
    pred_swcs_pointer = "./pred_swcs.zip"
    target_swcs_pointer = "./target_swcs.zip"

    # Run
    evaluate()
```

<p>
  <img src="imgs/printouts.png" width=750">
</p>

Note: this Python package can also be used to evaluate the accuracy of a segmentation in which split mistakes have been corrected.

## Installation
To use the software, run the following in the root directory
```bash
pip install -e .
```

To develop the code, run
```bash
pip install -e .[dev]
```

To install this package from PyPI, run
```bash
pip install segmentation-skeleton-metrics
```
