![Zuko's banner](https://raw.githubusercontent.com/probabilists/zuko/master/docs/images/banner.svg)
# Zuko - Normalizing flows in PyTorch
Zuko is a Python package that implements normalizing flows in [PyTorch](https://pytorch.org). It relies as much as possible on distributions and transformations already provided by PyTorch. Unfortunately, the `Distribution` and `Transform` classes of `torch` are not subclasses of `torch.nn.Module`, which means you cannot send their internal tensors to GPU with `.to('cuda')` or retrieve their parameters with `.parameters()`. Worse, the concepts of conditional distribution and transformation, which are essential for probabilistic inference, are impossible to express.
To solve these problems, `zuko` defines two concepts: `LazyDistribution` and `LazyTransform`, which are any modules whose forward pass returns a `Distribution` or `Transform`, respectively. Because the creation of the actual distribution/transformation is delayed, a condition, if any, can easily be taken into account. This design enables lazy distributions, including normalizing flows, to act like distributions while retaining the features inherent to modules, such as trainable parameters. It also makes the implementations easy to understand and extend.
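To make the idea concrete, here is a minimal sketch of the lazy-distribution pattern: a module whose forward pass builds and returns a `torch.distributions.Distribution` conditioned on its input. The class `ConditionalGaussian` and its layer sizes are illustrative only, not part of zuko's API.

```python
import torch
import torch.nn as nn

class ConditionalGaussian(nn.Module):
    """A module whose forward pass returns a Distribution p(x | c)."""

    def __init__(self, context_features: int, sample_features: int):
        super().__init__()
        # A hyper-network mapping the context to the distribution's parameters
        self.hyper = nn.Linear(context_features, 2 * sample_features)

    def forward(self, c: torch.Tensor) -> torch.distributions.Distribution:
        loc, log_scale = self.hyper(c).chunk(2, dim=-1)
        # The actual distribution is only created here, once c is known
        return torch.distributions.Independent(
            torch.distributions.Normal(loc, log_scale.exp()), 1
        )

net = ConditionalGaussian(5, 3)
dist = net(torch.randn(5))   # a Distribution, built on demand
x = dist.sample()            # a sample of shape (3,)
log_p = dist.log_prob(x)     # a scalar log-density
```

Because `net` is an ordinary `nn.Module`, its parameters are trainable via `net.parameters()` and the whole thing moves to GPU with `net.to('cuda')`, while each call `net(c)` yields a regular PyTorch distribution.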
> In the [Avatar](https://wikipedia.org/wiki/Avatar:_The_Last_Airbender) cartoon, [Zuko](https://wikipedia.org/wiki/Zuko) is a powerful firebender 🔥
## Acknowledgements
Zuko takes significant inspiration from [nflows](https://github.com/bayesiains/nflows) and [Stefan Webb](https://github.com/stefanwebb)'s work in [Pyro](https://github.com/pyro-ppl/pyro) and [FlowTorch](https://github.com/facebookincubator/flowtorch).
## Installation
The `zuko` package is available on [PyPI](https://pypi.org/project/zuko), which means it is installable via `pip`.
```
pip install zuko
```
Alternatively, if you need the latest features, you can install it from the repository.
```
pip install git+https://github.com/probabilists/zuko
```
## Getting started
Normalizing flows are provided in the `zuko.flows` module. To build one, supply the number of sample and context features as well as the transformations' hyperparameters. Then, feeding a context $c$ to the flow returns a conditional distribution $p(x | c)$ which can be evaluated and sampled from.
```python
import torch
import zuko

# Neural spline flow (NSF) with 3 sample features and 5 context features
flow = zuko.flows.NSF(3, 5, transforms=3, hidden_features=[128] * 3)

# Train to maximize the log-likelihood
optimizer = torch.optim.Adam(flow.parameters(), lr=1e-3)

for x, c in trainset:
    loss = -flow(c).log_prob(x)  # -log p(x | c)
    loss = loss.mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Sample 64 points x ~ p(x | c*)
x = flow(c_star).sample((64,))
```
Alternatively, flows can be built as custom `Flow` objects.
```python
import torch

from zuko.flows import Flow, UnconditionalDistribution, UnconditionalTransform
from zuko.flows.autoregressive import MaskedAutoregressiveTransform
from zuko.distributions import DiagNormal
from zuko.transforms import RotationTransform

flow = Flow(
    transform=[
        MaskedAutoregressiveTransform(3, 5, hidden_features=(64, 64)),
        UnconditionalTransform(RotationTransform, torch.randn(3, 3)),
        MaskedAutoregressiveTransform(3, 5, hidden_features=(64, 64)),
    ],
    base=UnconditionalDistribution(
        DiagNormal,
        torch.zeros(3),
        torch.ones(3),
        buffer=True,
    ),
)
```
```
For more information, check out the documentation and tutorials at [zuko.readthedocs.io](https://zuko.readthedocs.io).
### Available flows
| Class | Year | Reference |
|:-------:|:----:|-----------|
| `GMM` | - | [Gaussian Mixture Model](https://wikipedia.org/wiki/Mixture_model#Gaussian_mixture_model) |
| `NICE` | 2014 | [Non-linear Independent Components Estimation](https://arxiv.org/abs/1410.8516) |
| `MAF` | 2017 | [Masked Autoregressive Flow for Density Estimation](https://arxiv.org/abs/1705.07057) |
| `NSF` | 2019 | [Neural Spline Flows](https://arxiv.org/abs/1906.04032) |
| `NCSF` | 2020 | [Normalizing Flows on Tori and Spheres](https://arxiv.org/abs/2002.02428) |
| `SOSPF` | 2019 | [Sum-of-Squares Polynomial Flow](https://arxiv.org/abs/1905.02325) |
| `NAF` | 2018 | [Neural Autoregressive Flows](https://arxiv.org/abs/1804.00779) |
| `UNAF` | 2019 | [Unconstrained Monotonic Neural Networks](https://arxiv.org/abs/1908.05164) |
| `CNF` | 2018 | [Neural Ordinary Differential Equations](https://arxiv.org/abs/1806.07366) |
| `GF` | 2020 | [Gaussianization Flows](https://arxiv.org/abs/2003.01941) |
| `BPF` | 2020 | [Bernstein-Polynomial Normalizing Flows](https://arxiv.org/abs/2004.00464) |
## Contributing
If you have a question, an issue or would like to contribute, please read our [contributing guidelines](https://github.com/probabilists/zuko/blob/master/CONTRIBUTING.md).