# UNet-based Denoising Diffusion Probabilistic Model (DDPM) in PyTorch
A **modular and customizable PyTorch implementation** of a **UNet-based Denoising Diffusion Probabilistic Model** for high-quality image generation.
Supports multiple beta schedules, flexible loss functions (MSE or L1), attention layers, and residual blocks for advanced denoising performance.
## 🚀 Features
- 🌀 **UNet backbone** for efficient denoising and image synthesis
- 📈 **Multiple beta schedules** (`linear`, `cosine`, etc.) for diffusion process customization (see the schedule sketch after this list)
- 🔁 **Residual and attention blocks** for better feature preservation
- ⚙️ **Configurable architecture** via channel multipliers and attention resolution
- 🧮 **Loss function options**: Mean Squared Error (MSE) or L1 loss
- 🧪 Clean and modular code for research and experimentation
- 📦 Production-ready with training and sampling APIs
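The `linear` and `cosine` schedules named above correspond to the standard formulations from the DDPM and improved-DDPM papers. The sketch below illustrates those formulas; it is not the library's internal code, and the function names are illustrative.
```python
import torch

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    # Linearly spaced betas, as in Ho et al. (2020).
    return torch.linspace(beta_start, beta_end, timesteps)

def cosine_beta_schedule(timesteps, s=0.008):
    # Cosine schedule from Nichol & Dhariwal (2021): derive betas from a
    # cosine-shaped cumulative product of alphas, then clamp for stability.
    steps = torch.linspace(0, timesteps, timesteps + 1) / timesteps
    alphas_cumprod = torch.cos((steps + s) / (1 + s) * torch.pi / 2) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - alphas_cumprod[1:] / alphas_cumprod[:-1]
    return betas.clamp(0, 0.999)
```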
## 📦 Installation
```bash
pip install diffusion-pytorch-lib
```
## 📁 Project Structure
```bash
diffusion-pytorch-lib/
├── diffusion_pytorch_lib/
│   ├── __init__.py
│   └── module.py        # All architecture classes and logic
├── pyproject.toml
├── LICENSE
└── README.md
```
## 🚀 Quick Start
### 1. Import and create the model
```python
import torch
from diffusion_pytorch_lib import UNet, GaussianDiffusion

# Define UNet
model = UNet(
    dim=64,
    dim_mults=(1, 2, 4, 8),
    channels=3,
)

# Define diffusion process
diffusion = GaussianDiffusion(
    model,
    image_size=256,
    timesteps=1000,
    beta_schedule="linear",  # or "cosine"
    loss_type="mse",         # "mse" (MSE) or "l1"
)
```
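Calling `diffusion(x)` on a batch of images applies the usual DDPM training objective: noise the clean images at a random timestep and regress the UNet's noise prediction against the true noise. A minimal sketch of that idea (not the library's internal implementation; the `model(x_t, t)` call signature is an assumption) looks like this:
```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod, loss_type="mse"):
    # alphas_cumprod = torch.cumprod(1 - betas, dim=0), with betas from a
    # schedule such as the linear/cosine sketches above.
    # Sample one random timestep per image and noise the clean batch x0:
    #   x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    eps = torch.randn_like(x0)
    a_bar = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps
    # The UNet predicts the added noise; compare it to the true noise.
    eps_pred = model(x_t, t)
    if loss_type == "l1":
        return F.l1_loss(eps_pred, eps)
    return F.mse_loss(eps_pred, eps)
```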
### 2. Training step (sample loop)
```python
optimizer = torch.optim.Adam(diffusion.parameters(), lr=1e-4)
def train_step(x):
    diffusion.train()
    optimizer.zero_grad()
    loss = diffusion(x)  # computes the diffusion loss internally
    loss.backward()
    optimizer.step()
    return loss.item()
```
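A full epoch loop built around `train_step` might look like the following. The dataset path, transforms, and value range are placeholders; depending on the library's conventions you may need to rescale images (e.g. to [-1, 1]).
```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical data pipeline: resize/crop images to the model's image_size.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("path/to/images", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
diffusion.to(device)

for epoch in range(10):
    for images, _ in loader:
        loss = train_step(images.to(device))
    print(f"epoch {epoch}: last batch loss = {loss:.4f}")
```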
### 3. Sampling new images
```python
diffusion.eval()
with torch.no_grad():
    samples = diffusion.sample(batch_size=8)  # (8, 3, 256, 256)
```
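To inspect the results, the returned tensor can be written straight to disk with torchvision. This assumes `sample()` returns values in [0, 1]; rescale first if the library returns images in [-1, 1].
```python
from torchvision.utils import save_image

# Arrange the 8 generated images into a 4x2 grid and write a single PNG.
save_image(samples, "samples.png", nrow=4)
```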
## ⚙️ Configuration Options
### 🧩 U-Net Architecture
| Argument | Type | Default | Description |
|--|--|--|--|
| `dim` | `int` | `64` | Base number of feature channels in the first layer. |
| `dim_mults` | `tuple` | `(1, 2, 4, 8)` | Multipliers for feature map dimensions at each U-Net stage. |
| `channels` | `int` | `3` | Number of input/output image channels (e.g., `3` for RGB). |
| `dropout` | `float` | `0.0` | Dropout rate for regularization. |
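Under the usual convention for this kind of UNet (an assumption, not verified against `module.py`), the per-stage channel widths are the base `dim` scaled by each entry of `dim_mults`:
```python
dim, dim_mults = 64, (1, 2, 4, 8)
stage_channels = [dim * m for m in dim_mults]
print(stage_channels)  # [64, 128, 256, 512]
```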
### 🌀 Diffusion Process
| Argument | Type | Default | Description |
|--|--|--|--|
| `image_size` | `int` | `256` | Target image resolution. |
| `timesteps` | `int` | `1000` | Number of diffusion steps; higher values improve quality but increase training time. |
| `beta_schedule` | `str` | `"linear"` | Noise schedule type (`"linear"`, `"cosine"`, etc.). |
| `loss_type` | `str` | `"mse"` | Loss type (`"mse"` or `"l1"`). |
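For example, a lighter configuration for 64x64 images with a cosine schedule and L1 loss (argument names taken from the tables above) could be set up like this:
```python
model = UNet(dim=32, dim_mults=(1, 2, 4), channels=3, dropout=0.1)

diffusion = GaussianDiffusion(
    model,
    image_size=64,
    timesteps=1000,
    beta_schedule="cosine",
    loss_type="l1",
)
```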
## 🙋‍♂️ Author
Developed by [Mehran Bazrafkan](mailto:mhrn.bzrafkn.dev@gmail.com)
> Built from scratch for research into diffusion models, with inspiration from modern generative modeling literature.
## ⭐️ Support & Contribute
If you find this project useful, consider:
- ⭐️ Starring the repo
- 🐛 Submitting issues
- 📦 Suggesting improvements
## 🔗 Related Projects
- [variational-autoencoder-pytorch-lib · PyPI (by me)](https://pypi.org/project/variational-autoencoder-pytorch-lib/)
- [Original DDPM Paper (external)](https://arxiv.org/abs/2006.11239)
## 📜 License
This project is licensed under the terms of the [`MIT LICENSE`](https://github.com/MehranBazrafkan/diffusion-pytorch-lib/blob/main/LICENSE).