custom-diffusion


Name: custom-diffusion
Version: 0.1.9
Home page: https://github.com/kadirnar/Custom-Diffusion
Summary: Custom Diffusion: Creating Video from Frame Using Multiple Diffusion
Upload time: 2023-06-20 17:18:35
Author: kadirnar
Requires Python: >=3.6
License: Apache License 2.0
Keywords: machine-learning, deep-learning, pytorch, diffusion, diffusion models, controlnet, stable diffusion
Requirements: No requirements were recorded.
            <div align="center">
<h2>
     Custom Diffusion: Creating Video from Frame Using Diffusion
</h2>
<div>
    <a href="https://pepy.tech/project/custom_diffusion"><img src="https://pepy.tech/badge/custom_diffusion" alt="downloads"></a>
    <a href="https://badge.fury.io/py/custom_diffusion"><img src="https://badge.fury.io/py/custom_diffusion.svg" alt="pypi version"></a>
    <a href="https://huggingface.co/spaces/ArtGAN/Stable-Diffusion-ControlNet-WebUI"><img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="HuggingFace Spaces"></a>
</div>
</div>


### Installation
```bash
pip install custom_diffusion
```
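
If you want the latest development version instead, installing straight from the GitHub repository should also work (assuming the default branch is installable with pip):

```bash
pip install git+https://github.com/kadirnar/Custom-Diffusion.git
```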

### Usage
```python
# Import the generator, frame loading, and video encoding helpers
from custom_diffusion.utils.data_utils import load_images_from_folder
from custom_diffusion import StableDiffusionControlNetGenerator
from custom_diffusion.utils.video_utils import convert_images_to_video
from custom_diffusion.demo import video_pipeline

# Extract frames from the source video (seconds 0-5, 1 frame per second);
# returns the path to the folder of extracted frames
frames_path = video_pipeline(
    video_path="test.mp4",
    output_path="output.mp4",
    start_time=0,
    end_time=5,
    frame_rate=1,
)

# Load the extracted frames from the folder as a list of images
images_list = load_images_from_folder(frames_path)

prompt = "an anime boy"
negative_prompt = "bad"

# Use the same prompt and negative prompt for every frame
list_prompt = [prompt] * len(images_list)
list_negative_prompt = [negative_prompt] * len(images_list)

# Generate a stylized image for each frame with Stable Diffusion + ControlNet (Canny)
generator = StableDiffusionControlNetGenerator()

generated_image_list = generator.generate_image(
    stable_model_path="andite/anything-v4.0",
    controlnet_model_path="lllyasviel/control_v11p_sd15_canny",
    scheduler_name="EulerAncestralDiscrete",
    images_list=images_list,
    prompt=list_prompt,
    negative_prompt=list_negative_prompt,
    height=512,
    width=512,
    guess_mode=False,
    num_images_per_prompt=1,
    num_inference_steps=30,
    guidance_scale=7.0,
    controlnet_conditioning_scale=1.0,
    generator_seed=0,
    preprocess_type="Canny",
    resize_type="center_crop_and_resize",
    crop_size=512,
)

# Converting the generated images to a video
frame2video = convert_images_to_video(
    image_list=generated_image_list,
    output_file="output.mp4",
    frame_rate=5,
)
```
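
If you also want to keep the intermediate frames on disk before encoding the video, a minimal sketch like the one below should do it, assuming `generate_image` returns a list of PIL `Image` objects (as the underlying diffusers ControlNet pipeline does); the `generated_frames` folder name is only an example:

```python
import os

# Hypothetical output folder for the intermediate frames
output_dir = "generated_frames"
os.makedirs(output_dir, exist_ok=True)

for index, image in enumerate(generated_image_list):
    # Zero-padded filenames keep the frames in order for later encoding
    image.save(os.path.join(output_dir, f"frame_{index:04d}.png"))
```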

            
