# retinal_thin_vessels
A Python package for computing recall and precision scores specifically on thin vessels in retinal images, and for generating weight masks for the BCE loss that improve model performance on these fine structures, as detailed in the paper "Vessel-Width-Based Metrics and Weight Masks for Retinal Blood Vessel Segmentation", published in WUW-SIBGRAPI 2025. The package also includes a function for visualizing the thickness-filtered masks that form the basis of the proposed metrics.
Note that the metric functions and the weight-mask function accept, as input, either:
- a batch of segmentation images, or
- a single segmentation image (with or without the channel dimension).

For details, see the documentation of these functions; a batched example is sketched at the end of the metrics demonstration below.
## Package installation
```bash
pip install retinal_thin_vessels
```
## Usage Demonstration with DRIVE and CHASEDB1
### Recall and Precision on Thin Vessels Metrics
To ensure the metrics are reliable, it is important to be able to visualize the specific thin-vessel mask used by the metric functions in their calculations. Therefore, a core function, get_thin_vessels_mask(), is also provided. It takes a standard segmentation mask and returns a new mask containing only the thin vessels.
The following code demonstrates how to generate this filtered mask using images from two public datasets: DRIVE and CHASEDB1.
```python
import numpy as np
from PIL import Image
from retinal_thin_vessels.core import get_thin_vessels_mask
from retinal_thin_vessels.metrics import recall_thin_vessels, precision_thin_vessels
from sklearn.metrics import recall_score, precision_score
```
```python
# Load the original segmentation masks
seg_DRIVE = Image.open("tests/imgs/DRIVE_seg_example.png")
seg_CDB1 = Image.open("tests/imgs/CHASEDB1_seg_example.png")

# Generate new masks containing only thin vessels
thin_vessels_seg_DRIVE = get_thin_vessels_mask(seg_DRIVE)
thin_vessels_seg_CDB1 = get_thin_vessels_mask(seg_CDB1)

# Display each original segmentation mask and the resulting thin-vessel-only mask for comparison
seg_DRIVE.show()
img_DRIVE = Image.fromarray(thin_vessels_seg_DRIVE)
img_DRIVE.show()

seg_CDB1.show()
img_CDB1 = Image.fromarray(thin_vessels_seg_CDB1)
img_CDB1.show()
```
<img src="tests/imgs/DRIVE_seg_example.png" alt="DRIVE_thin_vessels_example" width=450/>
<img src="tests/imgs/DRIVE_seg_thin_example.png" alt="DRIVE_thin_vessels_example" width=450/>
<img src="tests/imgs/CHASEDB1_seg_example.png" alt="CHASEDB1_thin_vessels_example" width=450/>
<img src="tests/imgs/CHASEDB1_seg_thin_example.png" alt="CHASEDB1_thin_vessels_example" width=450/>
Furthermore, to demonstrate the metric calculation functions, you can run the code below. It compares the overall metrics (calculated with scikit-learn) with the thin-vessel-specific metrics calculated by this package.
```python
# Load the ground truth segmentation mask and a sample prediction
pred = Image.open("tests/imgs/DRIVE_pred_example.png")
seg_DRIVE = Image.open("tests/imgs/DRIVE_seg_example.png").resize(pred.size, Image.NEAREST)
# Binarize images to a 0/1 format for scikit-learn compatibility
seg_DRIVE = np.where(np.array(seg_DRIVE) > 0, 1, 0)
pred = np.where(np.array(pred) > 0, 1, 0)
# Compute and print the metrics
print(f"Overall Recall score: {recall_score(seg_DRIVE.flatten(), pred.flatten())}")
print(f"Recall score on thin vessels: {recall_thin_vessels(seg_DRIVE, pred)}")
print("-" * 30)
print(f"Overall Precision score: {precision_score(seg_DRIVE.flatten(), pred.flatten())}")
print(f"Precision score on thin Vessels: {precision_thin_vessels(seg_DRIVE, pred)}")
```
If the program runs correctly with the provided sample images, the output should be similar to:
```bash
Overall Recall score: 0.8553852359822509
Recall score on thin vessels: 0.751244555071562
------------------------------
Overall Precision score: 0.8422369623068674
Precision score on thin Vessels: 0.6527915897144481
```
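As noted earlier, the metric functions also accept a batch of segmentation masks. Below is a minimal sketch of that batched usage; the shape convention (binarized masks stacked along a leading batch dimension of shape (N, H, W)) is an assumption for illustration, so check the functions' documentation for the exact accepted formats.
```python
import numpy as np
from PIL import Image
from retinal_thin_vessels.metrics import recall_thin_vessels, precision_thin_vessels

pred = Image.open("tests/imgs/DRIVE_pred_example.png")
seg = Image.open("tests/imgs/DRIVE_seg_example.png").resize(pred.size, Image.NEAREST)

# Binarize and stack along a leading batch dimension -> shape (N, H, W)
gt_batch = np.stack([np.where(np.array(seg) > 0, 1, 0)])
pred_batch = np.stack([np.where(np.array(pred) > 0, 1, 0)])

# The metrics are then computed over the whole batch at once
print(recall_thin_vessels(gt_batch, pred_batch))
print(precision_thin_vessels(gt_batch, pred_batch))
```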
### Weight masks
The paper proposes two weight-mask formulations for setting the weight of a pixel $i$:
- W0 formulation: $$W_i = \frac{2}{R^2}$$
- W1 formulation: $$W_i = \frac{D_i+1}{R^2}$$
where $R$ is the radius of the vessel to which the pixel belongs and $D_i$ is the pixel's distance to the background.
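To make the W1 formula concrete, here is a minimal sketch of how such weights could be computed from a single 2D binary mask with SciPy and scikit-image. The radius approximation (the distance value at the nearest skeleton pixel) is an assumption made for illustration; this is not the package's internal implementation, which is what get_weight_mask() below provides.
```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def w1_sketch(mask):
    """Illustrative W1 weights: (D_i + 1) / R^2 for vessel pixels, 0 elsewhere."""
    vessels = np.asarray(mask) > 0
    # D_i: Euclidean distance of each vessel pixel to the background
    dist = ndimage.distance_transform_edt(vessels)
    # Approximate the local vessel radius R by the distance value at the
    # nearest centerline (skeleton) pixel of the vessel
    skeleton = skeletonize(vessels)
    _, nearest = ndimage.distance_transform_edt(~skeleton, return_indices=True)
    radius = dist[nearest[0], nearest[1]]
    # Apply W_i = (D_i + 1) / R^2 on vessel pixels only
    weights = np.zeros(vessels.shape, dtype=float)
    inside = vessels & (radius > 0)
    weights[inside] = (dist[inside] + 1.0) / (radius[inside] ** 2)
    return weights
```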
The following code demonstrates how to generate weight masks using images from two public datasets: DRIVE and CHASEDB1.
```python
import numpy as np
from PIL import Image
from retinal_thin_vessels.weights import get_weight_mask
```
```python
# Load the original segmentation masks
seg_DRIVE = Image.open("tests/imgs/DRIVE_seg_example.png")
seg_CDB1 = Image.open("tests/imgs/CHASEDB1_seg_example.png")

# Generate the weight masks using the W1 formulation (as an example)
W_1_DRIVE = get_weight_mask(seg_DRIVE, weights_function=1)
W_1_CDB1 = get_weight_mask(seg_CDB1, weights_function=1)
print(f"Weights in the weight mask produced by W1 formulation over the DRIVE segmentation mask belong to the interval [{W_1_DRIVE.min()},{W_1_DRIVE.max()}]")
print(f"Weights in the weight mask produced by W1 formulation over the CHASEDB1 segmentation mask belong to the interval [{W_1_CDB1.min()},{W_1_CDB1.max()}]")

# Display each segmentation mask followed by a greyscale image of its weight mask
seg_DRIVE.show()
W_1_DRIVE_greyscale = 255*(W_1_DRIVE - W_1_DRIVE.min())/(W_1_DRIVE.max()-W_1_DRIVE.min())
img_DRIVE = Image.fromarray(W_1_DRIVE_greyscale.astype(np.uint8))
img_DRIVE.show()

seg_CDB1.show()
W_1_CDB1_greyscale = 255*(W_1_CDB1 - W_1_CDB1.min())/(W_1_CDB1.max()-W_1_CDB1.min())
img_CDB1 = Image.fromarray(W_1_CDB1_greyscale.astype(np.uint8))
img_CDB1.show()
```
If the program runs correctly with the provided sample images, the output should be similar to:
```bash
Weights in the weight mask produced by W1 formulation over the DRIVE segmentation mask belong to the interval [0.0,3.2360680103302]
Weights in the weight mask produced by W1 formulation over the CHASEDB1 segmentation mask belong to the interval [0.0,3.0]
```
<img src="tests/imgs/DRIVE_seg_example.png" alt="CHASEDB1_thin_vessels_example.png" width=450/>
<img src="tests/imgs/DRIVE_W1_grey_example.png" alt="DRIVE_W1_greyscale_weight_mask_example.png" width=450/>
<img src="tests/imgs/CHASEDB1_seg_example.png" alt="CHASEDB1_thin_vessels_example.png" width=450/>
<img src="tests/imgs/CHASEDB1_W1_grey_example.png" alt="CHASEDB1_W1_greyscale_weight_mask_example.png" width=450/>
## Overall view
According to the study in the paper (which used a U-Net architecture with the BCE loss), the expected effect of each weight mask is:
- W0 mask: enhances the model's ability to preserve vessel architecture (higher precision on thin vessels, lower recall on thin vessels)
- W1 mask: enhances the model's ability to correctly detect thin vessels, at the expense of anatomical fidelity (higher recall on thin vessels, lower precision on thin vessels)
In other words, the two masks induce roughly opposite behaviors. This conclusion and both statements above are supported by the results in the following table:
<p align="center">
<img src="tests/misc/table_weight_masks.png" alt="weight_masks_table.png" width=900/>
</p>
Note: the standard weight mask (Std) refers to scikit-learn's compute_class_weight function, which aims solely at balancing the impact of each class in the loss function; it only makes white and black pixels contribute equally to the loss and was used as the baseline in the paper. Moreover, "WBCE" stands for Weighted Binary Cross-Entropy loss.
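For reference, a minimal sketch of how such balanced class weights can be obtained with scikit-learn; the mapping of the two class weights onto the mask pixels is an assumption for illustration, not the paper's exact baseline code.
```python
import numpy as np
from PIL import Image
from sklearn.utils.class_weight import compute_class_weight

seg = np.where(np.array(Image.open("tests/imgs/DRIVE_seg_example.png")) > 0, 1, 0)

# One weight per class (background, vessel), inversely proportional to its frequency
class_weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=seg.flatten())

# Spread the two class weights over the mask to obtain a per-pixel weight map
std_weight_mask = np.where(seg == 1, class_weights[1], class_weights[0])
print(class_weights)
```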