![Azula's banner](https://raw.githubusercontent.com/probabilists/azula/master/docs/images/banner.svg)
# Azula - Diffusion models in PyTorch
Azula is a Python package that implements diffusion models in [PyTorch](https://pytorch.org). Its goal is to unify the different formalisms and notations of the generative diffusion models literature into a single, convenient and hackable interface.
> In the [Avatar](https://wikipedia.org/wiki/Avatar:_The_Last_Airbender) cartoon, [Azula](https://wikipedia.org/wiki/Azula) is a powerful fire and lightning bender ⚡️
## Installation
The `azula` package is available on [PyPI](https://pypi.org/project/azula), which means it is installable via `pip`.
```
pip install azula
```
Alternatively, if you need the latest features, you can install it from the repository.
```
pip install git+https://github.com/probabilists/azula
```
## Getting started
In Azula's formalism, a diffusion model is the composition of three elements: a noise schedule, a denoiser and a sampler.
* A noise schedule is a mapping from a time $t \in [0, 1]$ to the signal scale $\alpha_t$ and the noise scale $\sigma_t$ in a perturbation kernel $p(X_t \mid X) = \mathcal{N}(X_t \mid \alpha_t X, \sigma_t^2 I)$ from a "clean" random variable $X \sim p(X)$ to a "noisy" random variable $X_t$.
* A denoiser is a neural network trained to predict $X$ given $X_t$.
* A sampler defines a series of transition kernels $q(X_s \mid X_t)$ from $t$ to $s < t$ based on a noise schedule and a denoiser. Simulating these transitions from $t = 1$ to $0$ samples approximately from $p(X)$.
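In other words, the forward perturbation draws $X_t = \alpha_t X + \sigma_t \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, I)$, and the denoiser is trained with a regression objective of the form (possibly reweighted, depending on the preconditioning)

$$ \mathcal{L} = \mathbb{E}_{X, t, \varepsilon} \Big[ \big\| d_\phi(\alpha_t X + \sigma_t \varepsilon, t) - X \big\|^2 \Big] $$

where $d_\phi$ denotes the denoiser network.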
This formalism is closely followed by Azula's API.
```python
import torch

from azula.denoise import PreconditionedDenoiser
from azula.noise import VPSchedule
from azula.sample import DDPMSampler

# Choose the variance preserving (VP) noise schedule
schedule = VPSchedule()

# Initialize a denoiser
denoiser = PreconditionedDenoiser(
    backbone=CustomNN(in_features=5, out_features=5),
    schedule=schedule,
)

# Train to predict x given x_t
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for x in train_loader:
    t = torch.rand((batch_size,))

    loss = denoiser.loss(x, t).mean()
    loss.backward()

    optimizer.step()
    optimizer.zero_grad()

# Generate 64 points in 1000 steps
sampler = DDPMSampler(denoiser.eval(), steps=1000)

x1 = sampler.init((64, 5))
x0 = sampler(x1)
```
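In the snippet above, `CustomNN`, `train_loader` and `batch_size` are user-defined placeholders rather than part of Azula. Purely as an illustration for 5-dimensional data, the loader could be built as follows; the backbone itself is whatever network maps noisy points (and their noise level) back to clean ones.

```python
import torch

# Hypothetical toy dataset of 5-dimensional "clean" points
data = torch.randn(4096, 5)

batch_size = 64
train_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
```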
Alternatively, Azula's plugin interface lets you load pre-trained models and use them through the same convenient interface.
```python
import sys
import torch

sys.path.append("path/to/guided-diffusion")

from azula.plugins import adm
from azula.sample import DDIMSampler

# Download weights from openai/guided-diffusion
denoiser = adm.load_model("imagenet_256x256")

# Generate a batch of 4 images
sampler = DDIMSampler(denoiser, steps=64).cuda()

x1 = sampler.init((4, 3 * 256 * 256))
x0 = sampler(x1)

images = torch.clip((x0 + 1) / 2, min=0, max=1).reshape(4, 3, 256, 256)
```
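The resulting `images` tensor lies in $[0, 1]$, so it can be inspected with standard tooling. For instance, assuming `torchvision` is installed, the batch can be written to disk as an image grid.

```python
from torchvision.utils import save_image

# Arrange the 4 generated images in a 2x2 grid and save them
save_image(images, "samples.png", nrow=2)
```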
For more information, check out the documentation and tutorials at [azula.readthedocs.io](https://azula.readthedocs.io).
## Contributing
If you have a question, an issue or would like to contribute, please read our [contributing guidelines](https://github.com/probabilists/azula/blob/master/CONTRIBUTING.md).