# dmri-rcnn

* Name: dmri-rcnn
* Version: 0.4.1
* Home page: https://github.com/m-lyon/dMRI-RCNN
* Summary: Diffusion MRI Recurrent CNN for Angular Super-resolution.
* Author: Matthew Lyon
* License: MIT License
* Requires Python: >=3.8
* Upload time: 2023-11-17 12:47:45
* Keywords: ai, cv, computer-vision, mri, dmri, super-resolution, cnn
# Angular Super-Resolution in Diffusion MRI with a 3D Recurrent Convolutional Autoencoder

![Model Architecture](resources/rcnn_dmri_model.png)

[![PyPI version](https://badge.fury.io/py/dmri-rcnn.svg)](https://badge.fury.io/py/dmri-rcnn)

This project enhances the angular resolution of dMRI data through the use of a Recurrent CNN. This codebase is associated with the
following paper. Please cite the paper if you use this model:

[Angular Super-Resolution in Diffusion MRI with a 3D Recurrent Convolutional Autoencoder](https://arxiv.org/abs/2203.15598) [MIDL 2022]

## Table of contents

* [Installation](#installation)
* [Inference](#inference)
* [Training](#training)
* [Docker](#docker)
* [Spherical Harmonic Baseline](#spherical-harmonic-baseline)

## Installation

`dMRI-RCNN` can be installed via pip:

```bash
pip install dmri-rcnn
```

### Requirements

`dMRI-RCNN` uses [TensorFlow](https://www.tensorflow.org/) as its deep learning framework. To enable [GPU usage within TensorFlow](https://www.tensorflow.org/install/gpu), ensure the appropriate prerequisites are installed.

Listed below are the requirements for this package.

* `tensorflow>=2.6.0`
* `numpy`
* `einops`
* `nibabel`
* `tqdm`

## Inference

Once installed, use `run_dmri_rcnn.py` to infer new dMRI volumes. The data requirements for the script and the command-line arguments available for inference are listed below.

### Data

To run this script, dMRI data is required in the following format:

* Context dMRI file. The dMRI data used as context within the model to infer the other volumes.
  * File format: `NIfTI`
  * Single-shell: containing only one b-value.
  * Dimensions: `(i, j, k, q_in)`.
    * `(i, j, k)` are the spatial dimensions of the data
    * `q_in` is the number of samples within the q-space dimension. This can be `6`, `10`, or `30`, and determines which of the trained models is used.
* Context b-vector file. The corresponding b-vectors for the context dMRI file.
  * File format: text file, whitespace delimited.
  * `3` rows corresponding to the `x, y, z` co-ordinates of q-space
  * `q_in` columns corresponding to the q-space directions sampled. `q_in` must either be `6`, `10`, or `30`.
* Target b-vector file. The corresponding b-vectors for the inferred dMRI data.
  * File format: text file, whitespace delimited.
  * `3` rows corresponding to the `x, y, z` co-ordinates of q-space
  * `q_out` columns corresponding to the q-space directions sampled.
* Brain mask file. Binary brain mask file for dMRI data.
  * File format: `NIfTI`
  * Dimensions: `(i, j, k)`. Same spatial dimensions as used in the dMRI data.

The script will create the following data:

* Inferred dMRI file. dMRI volumes inferred from the model as defined by the target b-vectors.
  * File format: `NIfTI`
  * Dimensions: `(i, j, k, q_out)`.
    * `q_out` is the number of samples within the q-space dimension. This can be any number, though higher values require more GPU memory.
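As a quick sanity check before running the script, the b-vector layout described above can be verified with a few lines of numpy. The `load_and_check_bvecs` helper below is hypothetical (it is not part of the package) and simply enforces the 3-rows-by-q-columns layout:

```python
import numpy as np

def load_and_check_bvecs(path, expected_q=None):
    """Load a whitespace-delimited b-vector file and check its layout.

    Expected layout: 3 rows (the x, y, z co-ordinates) by q columns
    (one column per sampled q-space direction).
    """
    bvecs = np.loadtxt(path)
    if bvecs.ndim != 2 or bvecs.shape[0] != 3:
        raise ValueError(f'Expected 3 rows, got array of shape {bvecs.shape}')
    if expected_q is not None and bvecs.shape[1] != expected_q:
        raise ValueError(f'Expected {expected_q} columns, got {bvecs.shape[1]}')
    return bvecs
```

For the context file, pass `expected_q=6`, `10`, or `30` to match the model being used; for the target file, any `q_out` is valid.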

### Command-line

Bring up the following help message via `run_dmri_rcnn.py -h`:

```
usage: run_dmri_rcnn.py [-h] -dmri_in DMRI_IN -bvec_in BVEC_IN -bvec_out BVEC_OUT -mask MASK -dmri_out DMRI_OUT -s {1000,2000,3000} [-m {1,3}] [-c] [-n] [-b BATCH_SIZE]

optional arguments:
  -h, --help            show this help message and exit
  -dmri_in DMRI_IN      Context dMRI NIfTI volume. Must be single-shell and contain q_in 3D volumes
  -bvec_in BVEC_IN      Context b-vector text file. Whitespace delimited with 3 rows and q_in columns
  -bvec_out BVEC_OUT    Target b-vector text file. Whitespace delimited with 3 rows and q_out columns
  -mask MASK            Brain mask NIfTI volume. Must have the same spatial dimensions as dmri_in.
  -dmri_out DMRI_OUT    Inferred dMRI NIfTI volume. This will contain q_out inferred volumes.
  -s {1000,2000,3000}, --shell {1000,2000,3000}
                        Shell to perform inference with. Must be same shell as context/target dMRI and b-vectors
  -m {1,3}, --model-dim {1,3}
                        Model dimensionality, choose either 1 or 3. Default: 3.
  -c, --combined        Use combined shell model. Currently only applicable with 3D model and 10 q_in.
  -n, --norm            Perform normalisation using the 99th percentile of the data. Only implemented with the --combined flag, and only for q_in = 10
  -b BATCH_SIZE, --batch-size BATCH_SIZE
                        Batch size to run model inference with.
```

***N.B.** Weights are downloaded and stored within `~/.dmri_rcnn` by default. To store weights in a different directory, set environment variable `DMRI_RCNN_DIR="/your/custom/directory"`*
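The lookup described above can be mirrored in a couple of lines. The `weights_dir` helper below is hypothetical and only illustrates the documented behaviour; the package's own resolution logic may differ:

```python
import os
from pathlib import Path

def weights_dir() -> Path:
    """Directory used to cache downloaded weights (default: ~/.dmri_rcnn).

    Overridden by the DMRI_RCNN_DIR environment variable when set.
    """
    return Path(os.environ.get('DMRI_RCNN_DIR', str(Path.home() / '.dmri_rcnn')))
```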

#### Example

The following example performs `b = 1000` inference with the 3D dMRI RCNN on **HCP data**.

```bash
run_dmri_rcnn.py -dmri_in context_dmri.nii.gz -bvec_in context_bvecs -bvec_out target_bvecs -mask brain_mask.nii.gz -dmri_out inferred_dmri.nii.gz -s 1000 -m 3
```

This example would take ~2 minutes to infer 80 volumes on an `NVIDIA RTX 3080`.

To perform inference on data outside of the HCP dataset, use the flags `-c` and `-n`. This is currently only implemented for $q_{in} = 10$.
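In spirit, the 99th-percentile normalisation enabled by `-n` amounts to something like the sketch below. This is an illustrative stand-alone version; the package's exact implementation (for instance, whether the brain mask is applied when computing the percentile) may differ:

```python
import numpy as np

def percentile_normalise(dmri, mask, q=99.0):
    """Scale intensities by the q-th percentile of values inside the brain mask.

    dmri: (i, j, k, q_in) array; mask: (i, j, k) binary array.
    Returns the rescaled data and the scale factor used.
    """
    scale = np.percentile(dmri[mask > 0], q)
    return dmri / scale, scale
```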

## Training

Below are details on how to train a given model, and the preprocessing steps involved.

### Data Pre-Processing

A training dataset is typically too large to fit into memory all at once. To overcome this, the project uses TensorFlow's `.tfrecord` file format and the
[tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) API, so training data should be saved in this format before starting. Below is an example of how to do this with `dMRI-RCNN`.

```python
import numpy as np

from dmri_rcnn.core import io
from dmri_rcnn.core.processing import save_tfrecord_data

# First load a subject into memory
dmri, _ = io.load_nifti('/path/to/dmri/data.nii.gz')
mask, _ = io.load_nifti('/path/to/brain/mask.nii.gz', dtype=np.int8)
bvecs = io.load_bvec('/path/to/dmri/bvecs') # bvecs & bvals should be in FSL format
bvals = io.load_bval('/path/to/dmri/bvals')

# Optionally crop image data
dmri, mask = io.autocrop_dmri(dmri, mask)

# The .tfrecord format has a maximum file size of 2 GiB, so high-resolution
# dMRI data may need to be split into smaller parts; the functions below do
# this. It is recommended to first try saving each subject whole before
# splitting the image into separate files.
dmri_list = io.split_image_to_octants(dmri)
mask_list = io.split_image_to_octants(mask)

# Now save data in .tfrecord format
save_tfrecord_data(dmri, bvecs, bvals, mask, '/path/to/saved/data.tfrecord')

# Alternatively save the list of image parts if dmri is too large
for i, (dmri_part, mask_part) in enumerate(zip(dmri_list, mask_list)):
  save_tfrecord_data(dmri_part, bvecs, bvals, mask_part, f'/path/to/saved/data{i}.tfrecord')
```
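Whether splitting is needed can be estimated up front from the raw array size, since the serialised record is at least as large as the array it holds. The `needs_split` helper below is a hypothetical pre-check, not part of the package:

```python
import numpy as np

TWO_GIB = 2 * 1024 ** 3  # the .tfrecord single-file ceiling noted above

def needs_split(dmri: np.ndarray, limit: int = TWO_GIB) -> bool:
    """Rough pre-check: the serialised record is at least dmri.nbytes."""
    return dmri.nbytes >= limit

# A typical 96x96x60 volume with 30 directions in float32 is ~63 MiB:
dmri = np.zeros((96, 96, 60, 30), dtype=np.float32)
print(needs_split(dmri))  # False
```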

### Training a Model

Once pre-processing is complete, you can then train a model.

```python
from dmri_rcnn.core.weights import get_weights
from dmri_rcnn.core.model import get_1d_autoencoder, get_3d_autoencoder
from dmri_rcnn.core.processing import TrainingProcessor, TrainingProcessorNorm

# If we want to fine-tune the model we can load the previously obtained weights.
# In this example we'll load the weights for the 3D RCNN trained on the b = 1000
# shell and 6 q-space samples per input.
weights = get_weights(model_dim=3, shell=1000, q_in=6)

# Now we can instantiate the pre-compiled 3D model
model = get_3d_autoencoder(weights) # Omit the weights argument to load without pre-trained weights

# Instantiate the training processor
processor = TrainingProcessor(shells=[1000], q_in=6)

# If using non-HCP data, the TrainingProcessorNorm should be used instead.
processor = TrainingProcessorNorm(shells=[1000], q_in=6)

# Important: here q_in = 6 and the processor's default q_out = 10, so the dMRI
# data must contain at least 16 volumes.

# Load dataset mapping
train_data = processor.load_data(['/path/to/train_data0.tfrecord', '/path/to/train_data1.tfrecord'])
validation_data = processor.load_data(['/path/to/val_data0.tfrecord'], validation=True)

# Begin training
model.fit(train_data, epochs=10, validation_data=validation_data)
```
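The volume-count constraint mentioned in the comments above (each training example consumes `q_in` context plus `q_out` target volumes) can be made explicit with a small check. The helper below is hypothetical, shown only to spell out the arithmetic:

```python
def check_qspace_budget(n_volumes: int, q_in: int, q_out: int) -> None:
    """Each training example consumes q_in context and q_out target volumes."""
    required = q_in + q_out
    if n_volumes < required:
        raise ValueError(f'Need at least {required} volumes, got {n_volumes}')

check_qspace_budget(n_volumes=18, q_in=6, q_out=10)  # passes: 18 >= 16
```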

## Docker

You can also use `dMRI-RCNN` directly via [Docker](https://www.docker.com/). Both a CPU and GPU version of the project are available.

### CPU

To use `dMRI-RCNN` with the CPU only, use:

```bash
sudo docker run -v /absolute/path/to/my/data/directory:/data -it mlyon93/dmri-rcnn-cpu:latest
```

### GPU

To use `dMRI-RCNN` with the GPU, first ensure the [appropriate NVIDIA prerequisites](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker) have been installed. Then use:

```bash
sudo docker run --gpus all -v /absolute/path/to/my/data/directory:/data -it mlyon93/dmri-rcnn-gpu:latest
```

## Spherical Harmonic Baseline

To run the Spherical Harmonic baseline model used in the paper, first ensure `dipy` is installed. You can install `dipy` directly via `pip`, or install this project with the `sh` extra:

```bash
pip install dmri-rcnn[sh]
```

### Command-line

Bring up the following help message via `dmri_sh_baseline.py -h`:

```
usage: dMRI Spherical Harmonic Baseline Inference [-h] -dmri_in DMRI_IN -bvec_in BVEC_IN -bvec_out BVEC_OUT -dmri_out DMRI_OUT -s SHELL

optional arguments:
  -h, --help            show this help message and exit
  -dmri_in DMRI_IN      Context dMRI NIfTI volume. Must be single-shell and contain q_in 3D volumes
  -bvec_in BVEC_IN      Context b-vector text file. Whitespace delimited with 3 rows and q_in columns
  -bvec_out BVEC_OUT    Target b-vector text file. Whitespace delimited with 3 rows and q_out columns
  -dmri_out DMRI_OUT    Inferred dMRI NIfTI volume. This will contain q_out inferred volumes.
  -s SHELL, --shell SHELL
                        Shell to perform inference on. Must be same shell as context/target dMRI and b-vecs
```

#### Example

The following example performs `b = 1000` spherical harmonic inference.

```bash
dmri_sh_baseline.py -dmri_in context_dmri.nii.gz -bvec_in context_bvecs -bvec_out target_bvecs -dmri_out inferred_dmri.nii.gz -s 1000
```

To use or inspect the spherical harmonic model, the code can be found within `dmri_rcnn.core.processing.sph_harmonic`.
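At its core, a spherical-harmonic baseline of this kind is a linear least-squares fit of the signal in an even-order real SH basis on the input directions, evaluated on the output directions. The stand-alone numpy/scipy sketch below illustrates that idea; it is not the package's implementation, and conventions (basis normalisation, regularisation) may differ from the code in `dmri_rcnn.core.processing.sph_harmonic`:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def real_sh_basis(bvecs, sh_order=4):
    """Design matrix of real, even-order spherical harmonics.

    bvecs: (3, N) unit direction vectors -> (N, n_coeffs) matrix.
    Even orders only, matching the antipodal symmetry of the dMRI signal.
    """
    x, y, z = bvecs
    az = np.arctan2(y, x)              # azimuthal angle
    cos_pol = np.clip(z, -1.0, 1.0)    # cosine of the polar angle
    cols = []
    for l in range(0, sh_order + 1, 2):
        for m in range(-l, l + 1):
            am = abs(m)
            norm = np.sqrt((2 * l + 1) / (4 * np.pi)
                           * factorial(l - am) / factorial(l + am))
            P = lpmv(am, l, cos_pol)   # associated Legendre function
            if m < 0:
                cols.append(np.sqrt(2) * norm * P * np.sin(am * az))
            elif m == 0:
                cols.append(norm * P)
            else:
                cols.append(np.sqrt(2) * norm * P * np.cos(am * az))
    return np.stack(cols, axis=-1)

def sh_interpolate(signal, bvecs_in, bvecs_out, sh_order=4):
    """Least-squares SH fit on bvecs_in, evaluated at bvecs_out."""
    coeffs, *_ = np.linalg.lstsq(real_sh_basis(bvecs_in, sh_order),
                                 signal, rcond=None)
    return real_sh_basis(bvecs_out, sh_order) @ coeffs
```

Here `signal` is the per-voxel vector of `q_in` measurements; applying `sh_interpolate` voxel-wise yields the `q_out` baseline predictions.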

## Roadmap

Future Additions & Improvements:

* Plot functionality

            
