variational-autoencoder-pytorch-lib


Name: variational-autoencoder-pytorch-lib
Version: 0.1.2
Home page: None
Summary: A package to simplify implementing a variational-autoencoder model with spatial latent heads.
Upload time: 2025-08-03 08:06:40
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.10
License: None
Keywords: variational autoencoder, pytorch, ml, deep learning
Requirements: No requirements were recorded.
            
# 🧠 Variational Autoencoder (VAE) in PyTorch

A modular and customizable implementation of a **Convolutional Variational Autoencoder (VAE)** in PyTorch, designed for image reconstruction and unsupervised representation learning. Built with residual blocks, RMS normalization, and flexible architecture scaling.

## 🚀 Features

- 🔁 **Encoder–Decoder VAE** with reparameterization trick
- 🧱 **Residual blocks** with RMS normalization
- 🧩 Fully modular, easy to customize
- 🔄 **Downsampling/Upsampling** using `einops` and `nn.Conv2d` (see the sketch after this list)
- 🧪 **Dropout regularization** for improved generalization
- ⚡ Fast inference with `.reconstruct()` method
- 🧼 Clean, production-ready code

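The downsampling mentioned in the feature list is commonly built as a space-to-depth `Rearrange` followed by a 1x1 `nn.Conv2d`. The `downsample` helper below is only an illustration of that general pattern under those assumptions, not necessarily the package's exact layers:

```python
import torch
from torch import nn
from einops.layers.torch import Rearrange

def downsample(dim_in, dim_out):
    # Fold each 2x2 spatial patch into the channel dimension (space-to-depth),
    # then mix the expanded channels with a 1x1 convolution instead of a strided conv.
    return nn.Sequential(
        Rearrange("b c (h p1) (w p2) -> b (c p1 p2) h w", p1=2, p2=2),
        nn.Conv2d(dim_in * 4, dim_out, kernel_size=1),
    )

x = torch.randn(1, 64, 32, 32)
print(downsample(64, 128)(x).shape)  # torch.Size([1, 128, 16, 16])
```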
## 📦 Installation

```bash
pip install variational-autoencoder-pytorch-lib
```

## 📁 Project Structure

```bash
variational-autoencoder-pytorch/
├── variational_autoencoder_pytorch/
│   ├── __init__.py
│   └── module.py        # All architecture classes and logic
├── pyproject.toml
├── LICENSE
└── README.md
```

## 🚀 Quick Start

### 1. Import the package and create the model

```python
import torch
from variational_autoencoder_pytorch import VariationalAutoEncoder

model = VariationalAutoEncoder(
    dim=64,
    dim_mults=(1, 2, 4, 8),
    dim_latent=128,
    image_channels=3
)

```
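A quick sanity check on the configured model size uses plain PyTorch and nothing package-specific:

```python
n_params = sum(p.numel() for p in model.parameters())
print(f"Model has {n_params / 1e6:.1f}M parameters")
```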

### 2. Forward pass and reconstruction

```python
x = torch.randn(8, 3, 256, 256)  # batch of images
x_recon, mu, logvar = model(x)

# Or just get the reconstruction
x_recon = model.reconstruct(x)

```
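For pure inference it is standard PyTorch practice to switch to eval mode and disable gradient tracking; this relies only on the documented `.reconstruct()` method:

```python
model.eval()
with torch.no_grad():
    x_recon = model.reconstruct(x)  # no gradients stored: faster and lighter on memory
```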

### 3. Training step (sample loop)

```python
import torch.nn.functional as F

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(x):
    model.train()
    optimizer.zero_grad()
    x_recon, mu, logvar = model(x)
    loss = vae_loss(x, x_recon, mu, logvar)  # vae_loss is defined in the Loss Function section below
    loss.backward()
    optimizer.step()
    return loss.item()
```
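To run `train_step` over a dataset, a minimal outer loop might look like the sketch below. The random `TensorDataset` is a placeholder for a real image dataset and not part of the package, and `vae_loss` from the Loss Function section is assumed to be defined:

```python
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data purely for illustration; substitute a real image dataset.
dataset = TensorDataset(torch.randn(64, 3, 256, 256))
loader = DataLoader(dataset, batch_size=8, shuffle=True)

for epoch in range(5):
    epoch_loss = sum(train_step(x) for (x,) in loader) / len(loader)
    print(f"epoch {epoch}: mean loss {epoch_loss:.2f}")
```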

### 🧠 Model Output

- `x_recon`: Reconstructed image

- `mu`: Mean of the latent distribution

- `logvar`: Log-variance of the latent distribution

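Internally, a VAE samples its latent from `mu` and `logvar` via the reparameterization trick. The snippet below is a generic illustration of that step (assuming spatial latent maps, consistent with the `dim=[2, 3]` reduction in the loss example further down), not the package's exact code:

```python
std = torch.exp(0.5 * logvar)    # log-variance -> standard deviation
eps = torch.randn_like(std)      # noise from a standard normal
z = mu + eps * std               # differentiable latent sample fed to the decoder
```
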
## ⚙️ Configuration Options

| Argument | Type | Default | Description |
|--|--|--|--|
| `dim` | `int` | `64` | Base number of channels |
| `dim_mults` | `tuple` | `(1, 2, 4, 8)` | Multipliers for feature map dimensions |
| `dim_latent` | `int` | `64` | Latent space dimension |
| `image_channels` | `int` | `3` | Input/output image channels (e.g., 3) |
| `dropout` | `float` | `0.0` | Dropout probability |

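For example, a lighter configuration for grayscale inputs could look like the following sketch, assuming the constructor accepts exactly the arguments listed in the table above:

```python
small_model = VariationalAutoEncoder(
    dim=32,                 # narrower base width
    dim_mults=(1, 2, 4),    # one fewer resolution stage
    dim_latent=64,
    image_channels=1,       # grayscale input/output
    dropout=0.1,
)
```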
## 🧪 Example: Loss Function

Here's a basic VAE loss function combining reconstruction and KL divergence:

```python
def vae_loss(x, x_recon, mu, logvar):
    # Reconstruction term: summed squared error over the whole batch
    recon_loss = F.mse_loss(x_recon, x, reduction='sum')
    # KL divergence to a standard normal prior, averaged over the spatial
    # dimensions of the latent maps and summed over batch and channels
    kl_div = -0.5 * torch.sum(torch.mean(1 + logvar - mu.pow(2) - logvar.exp(), dim=[2, 3]))
    loss = recon_loss + (kl_div * 0.0001)  # beta = 0.0001
    return loss

```
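If you want to sweep the KL weight (beta-VAE style), the same loss can expose beta as an argument. This `vae_loss_beta` variant is only a convenience sketch, not part of the package:

```python
def vae_loss_beta(x, x_recon, mu, logvar, beta=1e-4):
    recon_loss = F.mse_loss(x_recon, x, reduction='sum')
    kl_div = -0.5 * torch.sum(torch.mean(1 + logvar - mu.pow(2) - logvar.exp(), dim=[2, 3]))
    return recon_loss + beta * kl_div  # beta=1e-4 reproduces the fixed weighting above
```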

## 🙋‍♂️ Author

Developed by [Mehran Bazrafkan](mailto:mhrn.bzrafkn.dev@gmail.com)

> Built from scratch with inspiration from modern deep generative modeling architectures. This package reflects personal experience with VAEs and convolutional design patterns.

## ⭐️ Support & Contribute

If you find this project useful, consider:

- ⭐️ Starring the repo

- 🐛 Submitting issues

- 📦 Suggesting improvements

## 🔗 Related Projects

- [convolutional-autoencoder-pytorch · PyPI (Implemented by me)](https://pypi.org/project/convolutional-autoencoder-pytorch/)

- [PyTorch VAE Tutorial (external)](https://github.com/pytorch/examples/tree/main/vae)

## 📜 License

This project is licensed under the terms of the [`MIT LICENSE`](https://github.com/MehranBazrafkan/convolutional-variational-autoencoder-pytorch/blob/main/LICENSE).

            
