cjm-diffusers-utils

- Name: cjm-diffusers-utils
- Version: 0.0.3
- Home page: https://github.com/cj-mills/cjm-diffusers-utils
- Summary: Some utility functions I frequently use with 🤗 diffusers.
- Author: cj-mills
- Upload time: 2023-02-09 23:19:32
- Requires Python: >=3.7
- License: Apache Software License 2.0
- Keywords: nbdev, jupyter, notebook, python

cjm-diffusers-utils
================

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Install

``` sh
pip install cjm_diffusers_utils
```

## How to use

``` python
import torch
from cjm_pytorch_utils.core import get_torch_device
device = get_torch_device()
dtype = torch.float16 if device == 'cuda' else torch.float32
device, dtype
```

    ('cuda', torch.float16)
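
`get_torch_device` comes from the companion `cjm-pytorch-utils` package. If you would rather not pull in that dependency, a minimal stand-in (a hypothetical helper, not part of either package) might look like this:

``` python
import torch

def pick_device() -> str:
    # Hypothetical stand-in for cjm_pytorch_utils.core.get_torch_device:
    # prefer CUDA, then Apple's MPS backend, then fall back to the CPU.
    if torch.cuda.is_available():
        return 'cuda'
    if torch.backends.mps.is_available():
        return 'mps'
    return 'cpu'

device = pick_device()
dtype = torch.float16 if device == 'cuda' else torch.float32
```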

### pil_to_latent

``` python
from cjm_diffusers_utils.core import pil_to_latent
from PIL import Image
from diffusers import AutoencoderKL
```

``` python
model_name = "stabilityai/stable-diffusion-2-1"
vae = AutoencoderKL.from_pretrained(model_name, subfolder="vae").to(device=device, dtype=dtype)
```

``` python
img_path = '../images/cat.jpg'
src_img = Image.open(img_path).convert('RGB')
print(f"Source Image Size: {src_img.size}")

img_latents = pil_to_latent(src_img, vae)
print(f"Latent Dimensions: {img_latents.shape}")
```

    Source Image Size: (768, 512)
    Latent Dimensions: torch.Size([1, 4, 64, 96])
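
`pil_to_latent` wraps the VAE encoding step. A rough sketch of the equivalent logic with plain `diffusers` and `torchvision`, assuming the standard Stable Diffusion latent scaling constant of 0.18215:

``` python
import torch
from torchvision.transforms.functional import pil_to_tensor

def encode_to_latents(img, vae, device='cuda', dtype=torch.float16):
    # Scale pixel values to [-1, 1] and add a batch dimension.
    x = pil_to_tensor(img).unsqueeze(0).to(device=device, dtype=dtype) / 127.5 - 1.0
    with torch.no_grad():
        # Encode with the VAE and apply the Stable Diffusion scaling constant.
        return vae.encode(x).latent_dist.sample() * 0.18215
```

The VAE downsamples each spatial dimension by a factor of 8, which is why a 768×512 image produces a `[1, 4, 64, 96]` latent.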

### latent_to_pil

``` python
from cjm_diffusers_utils.core import latent_to_pil
```

``` python
decoded_img = latent_to_pil(img_latents, vae)
print(f"Decoded Image Size: {decoded_img.size}")
```

    Decoded Image Size: (768, 512)
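
Decoding reverses the process. A hedged sketch of the equivalent steps, again assuming the 0.18215 scaling convention:

``` python
import torch
from torchvision.transforms.functional import to_pil_image

def decode_from_latents(latents, vae):
    with torch.no_grad():
        # Undo the latent scaling and decode back to pixel space.
        img = vae.decode(latents / 0.18215).sample
    # Map from [-1, 1] back to [0, 1] and convert to a PIL image.
    return to_pil_image((img / 2 + 0.5).clamp(0, 1).squeeze(0).float().cpu())
```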

### text_to_emb

``` python
from cjm_diffusers_utils.core import text_to_emb
from transformers import CLIPTextModel, CLIPTokenizer
```

``` python
# Load the tokenizer for the specified model
tokenizer = CLIPTokenizer.from_pretrained(model_name, subfolder="tokenizer")
# Load the text encoder for the specified model
text_encoder = CLIPTextModel.from_pretrained(model_name, subfolder="text_encoder").to(device=device, dtype=dtype)
```

``` python
prompt = "A cat sitting on the floor."
text_emb = text_to_emb(prompt, tokenizer, text_encoder)
text_emb.shape
```

    torch.Size([2, 77, 1024])
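
The leading dimension of 2 suggests the helper returns an unconditional (empty prompt) embedding alongside the prompt embedding, as needed for classifier-free guidance. A minimal sketch of that pattern with the CLIP tokenizer and text encoder:

``` python
import torch

def embed_prompt(prompt, tokenizer, text_encoder, device='cuda'):
    def encode(text):
        # Tokenize to the model's fixed context length (77 tokens for CLIP).
        tokens = tokenizer(text, padding="max_length",
                           max_length=tokenizer.model_max_length,
                           truncation=True, return_tensors="pt")
        with torch.no_grad():
            return text_encoder(tokens.input_ids.to(device))[0]
    # Stack the unconditional and conditional embeddings for classifier-free guidance.
    return torch.cat([encode(""), encode(prompt)])
```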

### prepare_noise_scheduler

``` python
from cjm_diffusers_utils.core import prepare_noise_scheduler
from diffusers import DEISMultistepScheduler
```

``` python
noise_scheduler = DEISMultistepScheduler.from_pretrained(model_name, subfolder='scheduler')
print(f"Number of timesteps: {len(noise_scheduler.timesteps)}")
print(noise_scheduler.timesteps[:10])

noise_scheduler = prepare_noise_scheduler(noise_scheduler, 70, 1.0)
print(f"Number of timesteps: {len(noise_scheduler.timesteps)}")
print(noise_scheduler.timesteps[:10])
```

    Number of timesteps: 1000
    tensor([999., 998., 997., 996., 995., 994., 993., 992., 991., 990.])
    Number of timesteps: 70
    tensor([999, 985, 970, 956, 942, 928, 913, 899, 885, 871])
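
The helper appears to rebuild the scheduler's timestep schedule for the requested number of inference steps, with the third argument looking like an img2img-style strength fraction. A hedged sketch of that behavior using the scheduler's public `set_timesteps` API:

``` python
def configure_scheduler(scheduler, num_inference_steps, strength=1.0):
    # Recompute the timestep schedule for the requested number of steps.
    scheduler.set_timesteps(num_inference_steps)
    # For strength < 1.0, keep only the tail of the schedule (img2img convention).
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    scheduler.timesteps = scheduler.timesteps[t_start:]
    return scheduler
```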

### prepare_depth_mask

``` python
from cjm_diffusers_utils.core import prepare_depth_mask
```

``` python
depth_map_path = '../images/depth-cat.png'
depth_map = Image.open(depth_map_path)
print(f"Depth map size: {depth_map.size}")

depth_mask = prepare_depth_mask(depth_map).to(device=device, dtype=dtype)
depth_mask.shape, depth_mask.min(), depth_mask.max()
```

    Depth map size: (768, 512)

    (torch.Size([1, 1, 64, 96]),
     tensor(-1., device='cuda:0', dtype=torch.float16),
     tensor(1., device='cuda:0', dtype=torch.float16))
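
Judging by the output, the depth map is resized to the latent resolution (1/8 of the image size) and normalized to the range [-1, 1]. A rough, hypothetical equivalent:

``` python
import numpy as np
import torch
import torch.nn.functional as F

def depth_to_mask(depth_img, latent_scale=8):
    # Convert the (possibly 16-bit) depth map to a float tensor with batch/channel dims.
    d = torch.from_numpy(np.array(depth_img).astype(np.float32))[None, None]
    # Downsample to the latent resolution used by the VAE (8x smaller per side).
    h, w = d.shape[-2] // latent_scale, d.shape[-1] // latent_scale
    d = F.interpolate(d, size=(h, w), mode='bilinear', align_corners=False)
    # Normalize to [-1, 1], matching the reported min/max of the prepared mask.
    return 2.0 * (d - d.min()) / (d.max() - d.min()) - 1.0
```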

            
