convolutional-variational-autoencoder-pytorch


Name: convolutional-variational-autoencoder-pytorch
Version: 0.1.1
Summary: A package to simplify implementing a variational autoencoder model.
Upload time: 2025-07-31 16:50:39
Requires Python: >=3.10
Keywords: variational autoencoder, pytorch, ml, deep learning
            
# 🧠 Convolutional Variational Autoencoder (VAE) in PyTorch

A modular and customizable implementation of a **Convolutional Variational Autoencoder (VAE)** in PyTorch, designed for image reconstruction and unsupervised representation learning. Built with residual blocks, RMS normalization, and flexible architecture scaling.

## 🚀 Features

- πŸ” **Encoder–Decoder VAE** with reparameterization trick
- 🧱 **Residual blocks** with RMS normalization
- 🧩 Fully modular, easy to customize
- πŸ”„ **Downsampling/Upsampling** using `einops` and `nn.Conv2d`
- πŸ§ͺ **Dropout regularization** for improved generalization
- ⚑ Fast inference with `.reconstruct()` method
- 🧼 Clean, production-ready code

## 📦 Installation

```bash
pip install convolutional-variational-autoencoder-pytorch

```

## πŸ“ Project Structure

```bash
convolutional-variational-autoencoder-pytorch/
├── convolutional_variational_autoencoder_pytorch/
│   ├── __init__.py
│   └── module.py        # All architecture classes and logic
├── pyproject.toml
├── LICENSE
└── README.md

```

## 🚀 Quick Start

### 1. Import the package and create the model

```python
import torch
from convolutional_variational_autoencoder_pytorch import VariationalAutoEncoder

model = VariationalAutoEncoder(
    dim=64,
    dim_mults=(1, 2, 4, 8),
    dim_latent=128,
    image_channels=3
)

```

### 2. Forward pass and reconstruction

```python
x = torch.randn(8, 3, 256, 256)  # batch of images
x_recon, mu, logvar = model(x)

# Or just get the reconstruction
x_recon = model.reconstruct(x)

```
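
For inference, it is typical to switch to eval mode and disable gradient tracking. A minimal sketch using the `.reconstruct()` method shown above:

```python
# Inference sketch: eval() disables dropout, no_grad() skips gradient tracking
model.eval()
with torch.no_grad():
    x_recon = model.reconstruct(x)
```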

### 3. Training step (example loop)

```python
import torch.nn.functional as F
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(x):
    model.train()
    optimizer.zero_grad()
    x_recon, _, _ = model(x)
    loss = F.mse_loss(x_recon, x)  # reconstruction loss only; add the KL term for a full VAE objective (see below)
    loss.backward()
    optimizer.step()
    return loss.item()
    
```

### 🧠 Model Output

- `x_recon`: Reconstructed image

- `mu`: Mean of the latent distribution

- `logvar`: Log-variance of the latent distribution, used to sample the latent code (see the sketch below)
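
During the forward pass, the latent code is sampled from `mu` and `logvar` via the reparameterization trick. A minimal sketch of that sampling step (illustrative only, not tied to this package's internals):

```python
# Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I)
std = torch.exp(0.5 * logvar)  # logvar is log(sigma^2)
eps = torch.randn_like(std)
z = mu + eps * std             # differentiable sample from N(mu, sigma^2)
```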

## ⚙️ Configuration Options

| Argument | Type | Default | Description |
|--|--|--|--|
| `dim` | `int` | `64` | Base number of channels |
| `dim_mults` | `tuple` | `(1, 2, 4, 8)` | Multipliers for feature map dimensions |
| `dim_latent` | `int` | `64` | Latent space dimension |
| `image_channels` | `int` | `3` | Input/output image channels (e.g., 3) |
| `dropout` | `float` | `0.0` | Dropout probability |
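
As a rough illustration (all values here are arbitrary), a smaller model with dropout enabled could be configured like this:

```python
# Hypothetical configuration using the documented arguments above
model = VariationalAutoEncoder(
    dim=32,                # base number of channels
    dim_mults=(1, 2, 4),   # three resolution stages
    dim_latent=64,         # latent space dimension (the documented default)
    image_channels=1,      # e.g., grayscale images
    dropout=0.1,           # dropout probability
)
```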

## 🧪 Example: Loss Function

Here's a basic VAE loss function combining reconstruction and KL divergence:

```python
def vae_loss(x, x_recon, mu, logvar):
    recon_loss = F.mse_loss(x_recon, x, reduction='sum')
    kl_div = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon_loss + kl_div
    return loss

```
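
Wiring this loss into a training step mirrors the earlier loop; a sketch under the same setup (`model` and `optimizer` as defined above):

```python
def vae_train_step(x):
    model.train()
    optimizer.zero_grad()
    x_recon, mu, logvar = model(x)
    loss = vae_loss(x, x_recon, mu, logvar)  # reconstruction + KL divergence
    loss.backward()
    optimizer.step()
    return loss.item()
```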

## 🙋‍♂️ Author

Developed by [Mehran Bazrafkan](mailto:mhrn.bzrafkn.dev@gmail.com)

> Built from scratch with inspiration from modern deep generative modeling architectures. This package reflects personal experience with VAEs and convolutional design patterns.

## ⭐️ Support & Contribute

If you find this project useful, consider:

- ⭐️ Starring the repo

- πŸ› Submitting issues

- 📦 Suggesting improvements

## 🔗 Related Projects

- [convolutional-autoencoder-pytorch · PyPI (Implemented by me)](https://pypi.org/project/convolutional-autoencoder-pytorch/)

- [PyTorch VAE Tutorial (external)](https://github.com/pytorch/examples/tree/main/vae)

## 📜 License

This project is licensed under the terms of the [`MIT LICENSE`](https://github.com/MehranBazrafkan/convolutional-variational-autoencoder-pytorch/blob/main/LICENSE).

            
