ParticleSeg3D

Name: ParticleSeg3D
Version: 0.2.16
Summary: Scalable, out-of-the-box segmentation of individual particles from mineral samples acquired with micro CT
Author: Karol Gotkowski
Requires-Python: >=3.8
License: Apache-2.0
Upload time: 2024-01-16 14:10:52
# ParticleSeg3D

[![License Apache Software License 2.0](https://img.shields.io/pypi/l/ParticleSeg3D.svg?color=green)](https://github.com/Karol-G/ParticleSeg3D/raw/main/LICENSE)
[![PyPI](https://img.shields.io/pypi/v/ParticleSeg3D.svg?color=green)](https://pypi.org/project/ParticleSeg3D)
[![Python Version](https://img.shields.io/pypi/pyversions/ParticleSeg3D.svg?color=green)](https://python.org)
[![codecov](https://codecov.io/gh/Karol-G/ParticleSeg3D/branch/main/graph/badge.svg)](https://codecov.io/gh/Karol-G/ParticleSeg3D)

ParticleSeg3D is an instance segmentation method that extracts individual particles from large micro CT images taken from mineral samples embedded in an epoxy matrix. It is built on the powerful nnU-Net framework, introduces a particle size normalization, and makes use of a border-core representation to enable instance segmentation.
You can find the arXiv version of the paper [here](https://arxiv.org/abs/2301.13319) and the journal version [here](https://www.sciencedirect.com/science/article/abs/pii/S0032591023010690).

<p align="center">
  <img width="500" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExMDVjOThmZGU3ZmM1Yzg0YzFlNDQyYzViOWIyODdlYTE1ZmNjM2FiNSZlcD12MV9pbnRlcm5hbF9naWZzX2dpZklkJmN0PWc/GxoBNxpCt79Rxt0Ezj/giphy.gif">
</p>

## Features
- Robust instance segmentation of mineral particles in micro CT images
- Application of nnU-Net framework for reliable and scalable image processing
- Border-core representation for instance segmentation
- Particle size normalization to account for different mineral types
- Trained on a diverse set of particles from various materials and minerals
- Can be applied to a wide variety of particle types, without additional manual annotations or retraining
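The border-core idea can be illustrated on a toy example: the network predicts a semantic map with background, particle core, and particle border classes, and instances are then recovered by labeling connected core components and growing each one into the surrounding border. The following is a minimal pure-Python 2D sketch of this post-processing idea, not the actual ParticleSeg3D implementation (which operates on large 3D images):

```python
from collections import deque

def instances_from_border_core(sem):
    """Recover instance labels from a border-core semantic map.
    sem: 2D list of lists with 0=background, 1=core, 2=border."""
    h, w = len(sem), len(sem[0])
    inst = [[0] * w for _ in range(h)]
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    # 1) Label connected core components via flood fill.
    label = 0
    for y in range(h):
        for x in range(w):
            if sem[y][x] == 1 and inst[y][x] == 0:
                label += 1
                inst[y][x] = label
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and sem[ny][nx] == 1 and inst[ny][nx] == 0:
                            inst[ny][nx] = label
                            q.append((ny, nx))

    # 2) Grow each labeled core into adjacent border pixels (BFS)
    #    until every border pixel is assigned to the nearest core.
    frontier = deque((y, x) for y in range(h) for x in range(w) if inst[y][x])
    while frontier:
        cy, cx = frontier.popleft()
        for dy, dx in nbrs:
            ny, nx = cy + dy, cx + dx
            if 0 <= ny < h and 0 <= nx < w \
                    and sem[ny][nx] == 2 and inst[ny][nx] == 0:
                inst[ny][nx] = inst[cy][cx]
                frontier.append((ny, nx))
    return inst

# Two particles whose borders touch only background in between:
sem = [
    [2, 2, 2, 0, 2, 2, 2],
    [2, 1, 2, 0, 2, 1, 2],
    [2, 2, 2, 0, 2, 2, 2],
]
inst = instances_from_border_core(sem)
# inst labels the left particle 1 and the right particle 2.
```

Because touching particles are separated by their predicted borders, the core components stay disconnected, which is what makes instance recovery possible.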

## Installation
You can install `ParticleSeg3D` via [pip](https://pypi.org/project/ParticleSeg3D/):

    pip install ParticleSeg3D

You should now have the ParticleSeg3D package installed in your Python environment, and you'll be able to use all ParticleSeg3D commands from anywhere on your system.

If you intend to train ParticleSeg3D on new data, you will need to additionally install a modified version of the nnU-Net V1:
```cmd
pip install git+https://github.com/MIC-DKFZ/nnUNet.git@ParticleSeg3D
```

## Dataset

The sample dataset, consisting of the whole CT images, and the patch dataset, with patches extracted from these samples alongside their respective instance segmentations, can be found [here](https://syncandshare.desy.de/index.php/s/wjiDQ49KangiPj5).

## Usage - Inference

### Model download
ParticleSeg3D requires a trained model in order to run inference. The trained model can be downloaded [here](https://syncandshare.desy.de/index.php/s/id9D9pkATrFw65s). After downloading, the weights need to be unpacked and saved at a location of your choosing.

### Conversion to Zarr
To run inference on an image using ParticleSeg3D, the image must first be converted into the Zarr format. The Zarr format suits our purposes well as it is designed for very large N-dimensional images. In case of a series of TIFF image files, this conversion can be accomplished using the following command from anywhere on the system:
```cmd
ps3d_tiff2zarr -i /path/to/input -o /path/to/output
```

Here's a breakdown of relevant arguments you should provide:
- '-i', '--input': Required. Absolute input path to the folder that contains the TIFF image slices that should be converted to a Zarr image.
- '-o', '--output': Required. Absolute output path to the folder that should be used to save the Zarr image.

### Metadata preparation
ParticleSeg3D requires the image spacing and a rough mean particle diameter in millimeters for each image on which inference should be run.
This information needs to be provided in the form of a metadata.json file, as shown in this example:
```json
{
    "Ore1_Zone3_Concentrate": {
        "spacing": 0.01,
        "particle_size": 0.29292
    },
    "Recycling1": {
        "spacing": 0.011,
        "particle_size": 0.5082
    },
    "Ore2_PS850_VS10": {
        "spacing": 0.01,
        "particle_size": 1.2874
    },
    "Ore5": {
        "spacing": 0.0055,
        "particle_size": 0.2296
    },
    ...
}
```
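Such a file can also be generated programmatically. A small stdlib-only sketch (the image names and values are just the examples from above):

```python
import json

# Spacing and rough mean particle diameter, both in millimeters, per image name.
metadata = {
    "Ore1_Zone3_Concentrate": {"spacing": 0.01, "particle_size": 0.29292},
    "Recycling1": {"spacing": 0.011, "particle_size": 0.5082},
}

# Write the file next to the 'images' folder of the dataset.
with open("metadata.json", "w") as f:
    json.dump(metadata, f, indent=4)
```

The keys must match the image names without their file extension.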


### Inference
You can run inference on Zarr images from anywhere on the system using the `ps3d_inference` command. The Zarr images need to be located in a folder named 'images', and the 'metadata.json' needs to be placed next to that folder, such that the folder structure looks like this:
```
.
├── metadata.json
└── images
    ├── Ore1_Zone3_Concentrate.zarr
    ├── Recycling1.zarr
    ├── Ore2_PS850_VS10.zarr
    ├── Ore5.zarr
    └── ...
```


Here's an example of how to use the command:
```cmd
ps3d_inference -i /path/to/input -o /path/to/output -m /path/to/model
```

Here's a breakdown of relevant arguments you should provide:

- '-i', '--input': Required. Absolute input path to the base folder containing the dataset. The dataset should be structured with 'images' directory and metadata.json.
- '-o', '--output': Required. Absolute output path to the save folder.
- '-m', '--model': Required. Absolute path to the model directory.
- '-n', '--name': Optional. The name(s) without extension of the image(s) that should be used for inference. Multiple names must be separated by spaces.
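Before running inference, it can help to verify that every Zarr image has an entry in metadata.json. ParticleSeg3D does not ship such a helper; `missing_metadata` below is a stdlib-only sketch written for illustration (the demo builds a throwaway layout in a temporary directory):

```python
import json
import tempfile
from pathlib import Path

def missing_metadata(base_dir):
    """Names of Zarr images in 'images' that have no entry in metadata.json."""
    base = Path(base_dir)
    metadata = json.loads((base / "metadata.json").read_text())
    names = sorted(p.stem for p in (base / "images").iterdir()
                   if p.suffix == ".zarr")
    return [n for n in names if n not in metadata]

# Demo on a throwaway dataset layout (Zarr images are directories):
base = Path(tempfile.mkdtemp())
(base / "images").mkdir()
(base / "images" / "Ore5.zarr").mkdir()
(base / "images" / "Recycling1.zarr").mkdir()
(base / "metadata.json").write_text(
    json.dumps({"Ore5": {"spacing": 0.0055, "particle_size": 0.2296}})
)
missing = missing_metadata(base)  # -> ["Recycling1"]
```

Running such a check first avoids a failed inference run on a large image due to a missing metadata entry.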

### Conversion from Zarr
Zarr images or Zarr predictions can be converted to TIFF using the following command from anywhere on the system:
```cmd
ps3d_zarr2tiff -i /path/to/input -o /path/to/output
```

Here's a breakdown of relevant arguments you should provide:
- '-i', '--input': Required. Absolute input path to the folder that contains the Zarr image that should be converted to TIFF image slices.
- '-o', '--output': Required. Absolute output path to the folder that should be used to save the TIFF image slices.


## Usage - Training

### Conversion to NIFTI
To train a new ParticleSeg3D model on new training images, the training images must first be converted into the NIFTI format, which nnU-Net requires as its input format. In case of a series of TIFF image files, this conversion can be accomplished using the following command from anywhere on the system:
```cmd
ps3d_tiff2nifti -i /path/to/input -o /path/to/output -s 0.1 0.1 0.1
```

Here's a breakdown of relevant arguments you should provide:
- '-i', '--input': Required. Absolute input path to the folder that contains the TIFF image slices that should be converted to a NIFTI image.
- '-o', '--output': Required. Absolute output path to the folder that should be used to save the NIFTI image.
- '-s', '--spacing': Required. The image spacing given as three numbers separated by spaces.
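For context, the spacing passed via '-s' is what a NIFTI file encodes in its 4x4 affine: with identity orientation and zero origin, the per-axis voxel size sits on the affine's diagonal. A minimal numpy illustration (orientation and origin handling, which a real converter also performs, are omitted here):

```python
import numpy as np

# Voxel size in millimeters along each axis, as passed via -s 0.1 0.1 0.1.
spacing = (0.1, 0.1, 0.1)

# Simplest possible NIFTI-style affine: spacing on the diagonal,
# no rotation, origin at zero.
affine = np.diag(list(spacing) + [1.0])
```

Getting the spacing right matters because ParticleSeg3D's particle size normalization converts between physical and voxel units using it.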

### Metadata preparation
ParticleSeg3D requires the image spacing and a rough mean particle diameter in millimeters for each image that should be used for training.
This information needs to be provided in the form of a metadata.json file, as shown in this example:
```json
{
    "Ore1_Zone3_Concentrate": {
        "spacing": 0.01,
        "particle_size": 0.29292
    },
    "Recycling1": {
        "spacing": 0.011,
        "particle_size": 0.5082
    },
    "Ore2_PS850_VS10": {
        "spacing": 0.01,
        "particle_size": 1.2874
    },
    "Ore5": {
        "spacing": 0.0055,
        "particle_size": 0.2296
    },
    ...
}
```

### Z-Score preparation
ParticleSeg3D performs Z-score intensity normalization and thus requires the global mean and standard deviation of the entire dataset. These can either be computed exactly over all voxels of all images combined, or estimated by randomly sampling a subset of voxels from each image. The second option might be more convenient on larger images.
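The sampling-based estimate can be sketched with numpy as follows. The `images` list below is a stand-in of random synthetic volumes; in practice you would load (and ideally lazily iterate) the actual training images:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the training images; in practice load these from disk.
images = [rng.normal(100.0, 20.0, size=(64, 64, 64)) for _ in range(3)]

# Exact global statistics over all voxels of all images combined:
all_voxels = np.concatenate([img.ravel() for img in images])
exact_mean, exact_std = all_voxels.mean(), all_voxels.std()

# Estimate by randomly sampling a subset of voxels from each image:
samples = np.concatenate(
    [rng.choice(img.ravel(), size=10_000, replace=False) for img in images]
)
est_mean, est_std = samples.mean(), samples.std()
```

With a few thousand samples per image, the estimate is typically well within a fraction of a percent of the exact values, at a fraction of the memory cost.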

### Dataset preprocessing
The NIFTI images and reference instance segmentations need to be preprocessed into the dataset format expected by nnU-Net. The NIFTI images need to be located in a folder named 'images', the NIFTI instance segmentations in a folder named 'instance_seg', and the 'metadata.json' needs to be placed next to both folders. Further, images and their respective instance segmentations should have the same name. The folder structure should look like this:
```
.
├── metadata.json
├── images
│   ├── Ore1_Zone3_Concentrate.nii.gz
│   ├── Recycling1.nii.gz
│   ├── Ore2_PS850_VS10.nii.gz
│   ├── Ore5.nii.gz
│   └── ...
└── instance_seg
    ├── Ore1_Zone3_Concentrate.nii.gz
    ├── Recycling1.nii.gz
    ├── Ore2_PS850_VS10.nii.gz
    ├── Ore5.nii.gz
    └── ...
```
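Since images and segmentations are matched by name, a quick consistency check before preprocessing can save a failed run. ParticleSeg3D does not ship such a helper; `unpaired_names` below is a stdlib-only sketch written for illustration (the demo builds a throwaway layout in a temporary directory):

```python
import tempfile
from pathlib import Path

def unpaired_names(base_dir):
    """Filenames present in 'images' or 'instance_seg' but not in both."""
    base = Path(base_dir)
    images = {p.name for p in (base / "images").iterdir()}
    segs = {p.name for p in (base / "instance_seg").iterdir()}
    return sorted(images ^ segs)  # symmetric difference

# Demo on a throwaway dataset layout:
base = Path(tempfile.mkdtemp())
for sub, names in [("images", ["Ore5.nii.gz", "Recycling1.nii.gz"]),
                   ("instance_seg", ["Ore5.nii.gz"])]:
    (base / sub).mkdir()
    for name in names:
        (base / sub / name).touch()

unpaired = unpaired_names(base)  # -> ["Recycling1.nii.gz"]
```

An empty result means every image has a matching instance segmentation and the dataset is ready for preprocessing.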

The dataset can then be preprocessed into nnU-Net format with the following command:
```cmd
ps3d_train_preprocess -i /path/to/input -o /path/to/output -z 0.12345 0.6789
```

Here's a breakdown of relevant arguments you should provide:

- '-i', '--input': Required. Absolute input path to the base folder that contains the dataset structured in the form of the directories 'images' and 'instance_seg' and the file metadata.json.
- '-o', '--output': Required. Absolute output path to the preprocessed dataset directory.
- '-z', '--zscore': Required. The global mean and standard deviation used for Z-score intensity normalization, given as two numbers separated by a space.

### nnU-Net training

After the dataset has been preprocessed, training of the nnU-Net model can commence. To do this, it is best to follow the instructions in the official nnU-Net V1 [documentation](https://github.com/MIC-DKFZ/nnUNet/tree/nnunetv1). Once training has finished, the trained model can be used for inference on new images.

## License

Distributed under the terms of the [Apache Software License 2.0](http://www.apache.org/licenses/LICENSE-2.0) license,
"ParticleSeg3D" is free and open source software.

## Citations

If you use ParticleSeg3D in your work, please consider citing our paper:

```
@article{gotkowski2024particleseg3d,
  title={ParticleSeg3D: A scalable out-of-the-box deep learning segmentation solution for individual particle characterization from micro CT images in mineral processing and recycling},
  author={Gotkowski, Karol and Gupta, Shuvam and Godinho, Jose RA and Tochtrop, Camila GS and Maier-Hein, Klaus H and Isensee, Fabian},
  journal={Powder Technology},
  volume={434},
  pages={119286},
  year={2024},
  publisher={Elsevier}
}
```

## Acknowledgements
<img src="https://github.com/MIC-DKFZ/ParticleSeg3D/raw/main/HI_Logo.png" height="100px" />

<img src="https://github.com/MIC-DKFZ/ParticleSeg3D/raw/main/dkfz_logo.png" height="100px" />

ParticleSeg3D is developed and maintained by the Applied Computer Vision Lab (ACVL) of [Helmholtz Imaging](http://helmholtz-imaging.de) 
and the [Division of Medical Image Computing](https://www.dkfz.de/en/mic/index.php) at the 
[German Cancer Research Center (DKFZ)](https://www.dkfz.de/en/index.html).

            
