mediffusion 0.7.1

- Summary: Diffusion Models for Medical Imaging
- Home page: https://github.com/BardiaKh/Mediffusion
- Author: Bardia Khosravi
- Requires Python: >=3.8
- Requirements: bkh_pytorch_utils==0.9.3, torchextractor>=0.3.0, OmegaConf>=2.3.0
- Uploaded: 2024-01-17 15:29:57
# Mediffusion

Diffusion models have significantly impacted image generation. To lower the entry barrier for the medical community, we introduce *mediffusion*, a user-friendly diffusion package that can be tailored to medical problems in fewer than 20 lines of code. It builds on several codebases, including [guided diffusion](https://github.com/openai/guided-diffusion) and [LDM](https://github.com/CompVis/latent-diffusion), with added robustness for medical use cases. We plan to update this package regularly. In the spirit of open science, if you use this package, please consider sharing a demo notebook of your work.

Happy Coding!

## Setup and Installation

### Step 1: Create a Conda Environment

If you haven't installed Conda yet, you can download it from [here](https://docs.anaconda.com/anaconda/install/). After installing, create a new Conda environment by running:

```bash
conda create --name mediffusion python=3.10
```

Activate the environment:

```bash
conda activate mediffusion
```

### Step 2: Install PyTorch

Install PyTorch specifically for CUDA 11.8 by running:

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
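
To confirm that the CUDA-enabled build is active before continuing, you can run a quick check like the one below (a minimal sketch; the exact version string depends on the wheel you installed):

```python
import torch

# The version string should end in "+cu118" for the CUDA 11.8 wheel, and CUDA
# should be reported as available on a machine with a working GPU driver.
print(torch.__version__)
print(torch.cuda.is_available())
```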

### Step 3: Install The Package

You can install the latest version from PyPI using:

```bash
pip install mediffusion
```

This will install all the necessary packages.

## Training 
### 1. Hyperparameters
Before starting the training, it is recommended that you set up some global constants and environment variables:

```python
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
os.environ['WANDB_API_KEY'] = "WANDB-API-KEY"

TOTAL_IMAGE_SEEN = 40e6
BATCH_SIZE = 36
NUM_DEVICES = 2 # number of devices in CUDA_VISIBLE_DEVICES
TRAIN_ITERATIONS = int(TOTAL_IMAGE_SEEN / (BATCH_SIZE * NUM_DEVICES))
```

### 2. Preparing Data

To prepare the data, you need to create a dataset where each element is a dictionary. The dictionary should have the key "img" and may also contain additional keys like "cls" and "concat" depending on the type of condition. One way to do this is by using MONAI. Below is a sample code snippet:

```python
import monai as mn
import torch

train_data_dicts = [
    {"img": "./image1.dcm", "cls": 2},
    {"img": "./image2.dcm", "cls": 0}
]

valid_data_dicts = [
    {"img": "./image9.dcm", "cls": 1}
]

transforms = mn.transforms.Compose([
    mn.transforms.LoadImageD(keys="img"),
    mn.transforms.SelectItemsD(keys=["img","cls"]),
    mn.transforms.ScaleIntensityD(keys=["img"], minv=-1, maxv=1),
    mn.transforms.ToTensorD(keys=["img","cls"], dtype=torch.float, track_meta=False),
])

train_ds = mn.data.Dataset(data=train_data_dicts, transform=transforms) 
valid_ds = mn.data.Dataset(data=valid_data_dicts, transform=transforms)
train_sampler = torch.utils.data.RandomSampler(train_ds, replacement=True, num_samples=TOTAL_IMAGE_SEEN)
```

At the end of this step, you should have `train_ds`, `valid_ds`, and `train_sampler`.
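
Before configuring the model, it can help to pull a single sample from `train_ds` and confirm that the transforms behave as expected (a minimal sanity-check sketch; the printed shape depends on your own images):

```python
# Optional sanity check: inspect one transformed training sample.
sample = train_ds[0]
print(sample["img"].shape)                        # image tensor shape after loading
print(sample["img"].min(), sample["img"].max())   # intensities should lie in [-1, 1] after ScaleIntensityD
print(sample["cls"])                              # class condition as a float tensor
```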

### 3. Configuring Model

#### Configuration Fields Explanation

Below is a table that provides descriptions for each element in the configuration file:

| Section    | Field                   | Description                                           |
|------------|-------------------------|-------------------------------------------------------|
| diffusion  | timesteps               | The number of timesteps in the diffusion process      |
|            | schedule_name           | The name of the schedule (e.g., "cosine")             |
|            | enforce_zero_terminal_snr | Whether to enforce zero terminal SNR (True/False)    |
|            | schedule_params         | Parameters related to the diffusion schedule          |
|            | -- beta_start           | Starting value for beta in the schedule               |
|            | -- beta_end             | Ending value for beta in the schedule                 |
|            | -- cosine_s             | Parameter for cosine schedule                         |
|            | timestep_respacing      | A list of respacings. For example, with 200 timesteps, [10, 20] means take 10 samples from the first 100 steps and 20 samples from the last 100. |
|            | mean_type               | Type of mean model (e.g., "VELOCITY")                 |
|            | var_type                | Type of variance model (e.g., "LEARNED_RANGE")        |
|            | loss_type               | The type of loss to use (e.g., "MSE")                 |
| optimizer  | lr                      | Learning rate                                         |
|            | type                    | The type of optimizer to use                          |
| validation | classifier_cond_scale   | Classifier-free guidance scale used for validation logging |
|            | protocol                | Inference protocol for logging validation results     |
|            | log_original            | Whether to log the original validation data (True/False)     | 
|            | log_concat              | Whether to log the concatenated images (True/False)     |
|            | log_cls_indices         | Which cls indices to log: -1 (default) logs the entire cls vector, or provide a list of specific cls indices to log |
| model      | input_size              | The input size of the model. Can be an integer for square and cube images or a list of integers for specific axes, like [64, 64, 32] |
|            | dims                    | Number of dimensions, 2 or 3 for 2D and 3D images     |
|            | attention_resolutions   | List of resolutions for attention layers              |
|            | channel_mult            | List of multipliers for each layer's channels         |
|            | dropout                 | Dropout rate                                          |
|            | in_channels             | Number of input channels (image channels + concat channels) |
|            | out_channels            | Number of output channels (image channels or image channels * 2 if learning the variance) |
|            | model_channels          | Number of convolution channels in the model           |
|            | num_head_channels       | Number of attention head channels                     |
|            | num_heads               | Number of attention heads                             |
|            | num_heads_upsample      | Number of attention heads after upsampling            |
|            | num_res_blocks          | List of the number of residual blocks for each layer  |
|            | resblock_updown         | Whether to use residual blocks for down/up sampling (True/False) |
|            | use_checkpoint          | Whether to use checkpointing (True/False)             |
|            | use_new_attention_order | Whether to use the new attention ordering (True/False) |
|            | use_scale_shift_norm    | Whether to use scale-shift normalization (True/False) |
|            | scale_skip_connection   | Whether to scale skip connections (True/False)        |
|            | num_classes             | Number of classes for conditioning                    |
|            | concat_channels         | Number of concatenated channels for conditioning (for super-resolution or inpainting) |
|            | guidance_drop_prob      | Condition drop probability for classifier-free guidance training |

For sample configurations, please check out the `sample_configs` directory.

**Note**: If a field is left out of the config file, its default value is inferred from `mediffusion/default_config/default.yaml`.
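
To make the table above more concrete, the snippet below sketches a small 2D configuration with OmegaConf (which mediffusion lists as a dependency) and writes it to `config.yaml`. The field names follow the table; the values are purely illustrative placeholders, not recommended settings, and any omitted field falls back to the defaults mentioned in the note above:

```python
from omegaconf import OmegaConf

# Illustrative configuration only -- field names follow the table above,
# values are placeholders that should be tuned for your own task.
config = OmegaConf.create({
    "diffusion": {
        "timesteps": 1000,
        "schedule_name": "cosine",
        "mean_type": "VELOCITY",
        "var_type": "LEARNED_RANGE",
        "loss_type": "MSE",
    },
    "optimizer": {"lr": 1e-4},
    "model": {
        "input_size": 256,
        "dims": 2,                      # 2D model
        "in_channels": 1,               # image channels + concat channels
        "out_channels": 2,              # image channels * 2 when the variance is learned
        "model_channels": 128,
        "attention_resolutions": [32, 16, 8],
        "channel_mult": [1, 2, 4, 8],
        "num_res_blocks": [2, 2, 2, 2],
        "num_classes": 3,               # number of classes for conditioning
    },
})

OmegaConf.save(config=config, f="./config.yaml")
```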

#### Instantiating Model

You can instantiate the model using the configuration file and dataset as follows:

```python
from mediffusion import DiffusionModule

model = DiffusionModule(
    "./config.yaml",
    train_ds=train_ds,
    val_ds=valid_ds,
    dl_workers=2,
    train_sampler=train_sampler,
    batch_size=32,               # train batch size
    val_batch_size=16            # validation batch size (recommended size is half of batch_size)
)
```

### 4. Setting Up Trainer
You can set up the trainer using the `Trainer` class:

```python
from mediffusion import Trainer

trainer = Trainer(
    max_steps=TRAIN_ITERATIONS,
    val_check_interval=5000,
    root_directory="./outputs", # where to save the weights and logs
    precision="16-mixed",       # mixed precision training
    devices=-1,                 # use all the devices in CUDA_VISIBLE_DEVICES
    nodes=1,
    wandb_project="Your_Project_Name",
    logger_instance="Your_Logger_Instance",
)
```

### 5. Training the Model

Finally, to train your model, you simply call:

```python
trainer.fit(model)
```

## Prediction 
### 1. Loading the Model

First, import the `DiffusionModule` class, build the model from its config file, and load the pre-trained checkpoint. Then move the model to the CUDA device and set it to inference mode. Optionally, you can enable half precision for faster inference:

```python
from mediffusion import DiffusionModule

model = DiffusionModule("./config.yaml")
model.load_ckpt("./outputs/pl/last.ckpt", ema=True)
model.cuda().half()
model.eval()
```

### 2. Preparing Input

Prepare the noise and model keyword arguments. Here, `"cls"` specifies the class condition and is set to 0:

```python
import torch

noise = torch.randn(1, 1, 256, 256)
model_kwargs = {"cls": torch.tensor([0]).cuda().half()}
```

**Note**: You can use other keys like `concat` and/or `cls_embed`. To find out more, look at the `tutorials` directory.
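
For example, a concat-conditioned model (such as one set up for super-resolution or inpainting via `concat_channels` in the config) would also receive a `concat` tensor with the same spatial size as the noise. This is only a sketch of the expected input shapes under that assumption; the exact keys your model accepts depend on how it was configured and trained:

```python
# Hypothetical concat conditioning: a single-channel guidance image with the same
# spatial size as the noise (assumes concat_channels=1 in the model config).
concat_img = torch.randn(1, 1, 256, 256).cuda().half()   # placeholder; use your real conditioning image
model_kwargs = {
    "cls": torch.tensor([0]).cuda().half(),
    "concat": concat_img,
}
```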

### 3. Making Predictions

To make a prediction, use the `predict` method from the `DiffusionModule` class:

```python
img = model.predict(
    noise, 
    model_kwargs=model_kwargs, 
    classifier_cond_scale=4, 
    inference_protocol="DDIM100"
)
```

- `noise`: The input noise tensor
- `model_kwargs`: A dictionary containing additional model configurations (e.g., class conditions)
- `classifier_cond_scale`: The classifier-free guidance scale used during inference
- `inference_protocol`: The inference protocol to be used (e.g., `"DDIM100"`)

The returned `img` is the generated output, with axes ordered as `C:H:W(:D)`. To save the image, you need to transpose it first because of the different axis conventions.
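
As a sketch of that post-processing for a single-channel 2D result (assuming `img` comes back as a `C x H x W` tensor on the GPU with intensities in [-1, 1]; adjust the transpose and rescaling for your own data and preferred image library):

```python
import numpy as np
from PIL import Image  # used here purely for illustration

# Move to CPU, drop the channel axis, transpose to the usual row/column
# convention, and rescale from [-1, 1] to [0, 255] before saving.
arr = img.float().cpu().numpy()[0]
arr = arr.T
arr = ((arr + 1) / 2 * 255).clip(0, 255).astype(np.uint8)
Image.fromarray(arr).save("./generated.png")
```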

**Note**: The model currently supports the following solvers: `DDPM`, `DDIM`, `IDDIM` (for inverse diffusion), and `PLMS`. As an example, `"PLMS100"` means using the `PLMS` solver for `100` steps.

## Tutorials

For more hands-on tutorials on how to effectively use this package, please check the `tutorials` folder in the GitHub repository. These tutorials provide step-by-step instructions, Colab notebooks, and explanations to help you get started with the software.

| File Name      | Description | Notebook Link |
|----------------|-------------|---------------|
| 01_2d_ddpm | Getting started with training a simple 2D class-conditioned DDPM. | [📓](https://github.com/BardiaKh/Mediffusion/tree/main/tutorials/01_2d_ddpm.ipynb) |
| 02_2d_inpainting | Image inpainting with a 2D diffusion model (RePaint method) | [📓](https://github.com/BardiaKh/Mediffusion/tree/main/tutorials/02_2d_inpainting.ipynb) |

## TO-DO

The following features and improvements are currently on our development roadmap:

- [ ] Cross-attention
- [ ] DPM-Solver
- [ ] VAE for LDM

We are actively working on these features and they will be available in future releases.

## Issues and Contributions

### Issues
If you encounter any issues while using this package, we encourage you to open an issue in the GitHub repository. Your feedback helps us to improve the software and resolve any bugs or limitations.

### Contributions
Contributions to the codebase are always welcome. If you have a feature request, bugfix, or any other contribution, feel free to submit a pull request.

### Development Opportunities
If you're interested in actively participating in the development of this package, please send us a Direct Message (DM). We're always open to collaboration and would be delighted to have you on board.

## Citation

If you find this work useful, please consider citing the parent project:

```bibtex
@article{KHOSRAVI2023107832,
    title = {Few-shot biomedical image segmentation using diffusion models: Beyond image generation},
    journal = {Computer Methods and Programs in Biomedicine},
    volume = {242},
    pages = {107832},
    year = {2023},
    issn = {0169-2607},
    doi = {10.1016/j.cmpb.2023.107832},
    url = {https://www.sciencedirect.com/science/article/pii/S0169260723004984},
    author = {Bardia Khosravi and Pouria Rouzrokh and John P. Mickley and Shahriar Faghani and Kellen Mulford and Linjun Yang and A. Noelle Larson and Benjamin M. Howe and Bradley J. Erickson and Michael J. Taunton and Cody C. Wyles},
}
```

            
