controlnet-aux

Name: controlnet-aux
Version: 0.0.9
Home page: https://github.com/patrickvonplaten/controlnet_aux
Summary: Auxiliary models for ControlNet
Upload time: 2024-05-23 03:26:25
Author: The HuggingFace team
Requires Python: >=3.7.0
License: Apache
Keywords: deep learning
# ControlNet auxiliary models

This is a PyPI-installable package of [lllyasviel's ControlNet Annotators](https://github.com/lllyasviel/ControlNet/tree/main/annotator).

The code is copy-pasted from the respective folders in <https://github.com/lllyasviel/ControlNet/tree/main/annotator> and connected to [the 🤗 Hub](https://huggingface.co/lllyasviel/Annotators).

All credit & copyright go to <https://github.com/lllyasviel>.

## Install

```
pip install -U controlnet-aux
```

To support DWPose, which depends on MMDetection, MMCV, and MMPose, install them via OpenMIM:

```
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
```
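
After installing, a quick sanity check (a minimal sketch; it only assumes the three packages expose the standard `__version__` attribute) confirms the MM-family dependencies are importable and meet the minimum versions above:

```python
# Minimal sanity check: the MM packages DWPose depends on should import
# cleanly and report versions satisfying the pins used above.
import mmcv
import mmdet
import mmpose

print("mmcv:", mmcv.__version__)      # expect >= 2.0.1
print("mmdet:", mmdet.__version__)    # expect >= 3.1.0
print("mmpose:", mmpose.__version__)  # expect >= 1.1.0
```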

## Usage

You can use the `Processor` class, which loads each of the auxiliary models from a `processor_id`:

```python
import requests
from PIL import Image
from io import BytesIO

from controlnet_aux.processor import Processor

# load image
url = "https://huggingface.co/lllyasviel/sd-controlnet-openpose/resolve/main/images/pose.png"

response = requests.get(url)
img = Image.open(BytesIO(response.content)).convert("RGB").resize((512, 512))

# load processor from processor_id
# options are:
# ["canny", "depth_leres", "depth_leres++", "depth_midas", "depth_zoe", "lineart_anime",
#  "lineart_coarse", "lineart_realistic", "mediapipe_face", "mlsd", "normal_bae", "normal_midas",
#  "openpose", "openpose_face", "openpose_faceonly", "openpose_full", "openpose_hand",
#  "scribble_hed, "scribble_pidinet", "shuffle", "softedge_hed", "softedge_hedsafe",
#  "softedge_pidinet", "softedge_pidsafe", "dwpose"]
processor_id = 'scribble_hed'
processor = Processor(processor_id)

processed_image = processor(img, to_pil=True)
```
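
The same pattern extends to running several processors over one image. A minimal sketch (the output filenames are arbitrary) that saves each result as a PNG:

```python
# Run a few of the processors listed above over the same image and save
# each result; with to_pil=True the processor returns a PIL image.
for pid in ["canny", "depth_midas", "openpose_full"]:
    result = Processor(pid)(img, to_pil=True)
    result.save(f"{pid}.png")
```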

Each model can also be loaded individually by importing and instantiating it as follows:

```python
from PIL import Image
import requests
from io import BytesIO
from controlnet_aux import (
    HEDdetector, MidasDetector, MLSDdetector, OpenposeDetector, PidiNetDetector,
    NormalBaeDetector, LineartDetector, LineartAnimeDetector, CannyDetector,
    ContentShuffleDetector, ZoeDetector, MediapipeFaceDetector, SamDetector,
    LeresDetector, DWposeDetector, TEEDdetector, AnylineDetector, LineartStandardDetector,
)

# load image
url = "https://huggingface.co/lllyasviel/sd-controlnet-openpose/resolve/main/images/pose.png"

response = requests.get(url)
img = Image.open(BytesIO(response.content)).convert("RGB").resize((512, 512))

# load checkpoints
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
mlsd = MLSDdetector.from_pretrained("lllyasviel/Annotators")
open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pidi = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
normal_bae = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")
lineart_anime = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")
zoe = ZoeDetector.from_pretrained("lllyasviel/Annotators")
sam = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
mobile_sam = SamDetector.from_pretrained("dhkim2810/MobileSAM", model_type="vit_t", filename="mobile_sam.pt")
leres = LeresDetector.from_pretrained("lllyasviel/Annotators")
teed = TEEDdetector.from_pretrained("fal-ai/teed", filename="5_model.pth")
anyline = AnylineDetector.from_pretrained(
    "TheMistoAI/MistoLine", filename="MTEED.pth", subfolder="Anyline"
)

# DWPose: optionally pass det_config, det_ckpt, pose_config, pose_ckpt and device;
# otherwise the defaults below are downloaded automatically and CPU is used.
# det_config: ./src/controlnet_aux/dwpose/yolox_config/yolox_l_8xb8-300e_coco.py
# det_ckpt: https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_l_8x8_300e_coco/yolox_l_8x8_300e_coco_20211126_140236-d3bd2b23.pth
# pose_config: ./src/controlnet_aux/dwpose/dwpose_config/dwpose-l_384x288.py
# pose_ckpt: https://huggingface.co/wanghaofan/dw-ll_ucoco_384/resolve/main/dw-ll_ucoco_384.pth
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
dwpose = DWposeDetector(device=device)  # pass the config/ckpt kwargs above to override the defaults

# instantiate
canny = CannyDetector()
content = ContentShuffleDetector()
face_detector = MediapipeFaceDetector()
lineart_standard = LineartStandardDetector()


# process
processed_image_hed = hed(img)
processed_image_midas = midas(img)
processed_image_mlsd = mlsd(img)
processed_image_open_pose = open_pose(img, hand_and_face=True)
processed_image_pidi = pidi(img, safe=True)
processed_image_normal_bae = normal_bae(img)
processed_image_lineart = lineart(img, coarse=True)
processed_image_lineart_anime = lineart_anime(img)
processed_image_zoe = zoe(img)
processed_image_sam = sam(img)
processed_image_leres = leres(img)
processed_image_teed = teed(img, detect_resolution=1024)
processed_image_anyline = anyline(img, detect_resolution=1280)

processed_image_canny = canny(img)
processed_image_content = content(img)
processed_image_mediapipe_face = face_detector(img)
processed_image_dwpose = dwpose(img)
processed_image_lineart_standard = lineart_standard(img, detect_resolution=1024)
```
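
Model-backed detectors run on the CPU by default. A minimal sketch, assuming the `from_pretrained`-loaded detectors expose the package's torch-style `.to()` helper, moves one onto the device selected above before repeated calls:

```python
# Move a checkpoint-backed detector to the selected device so repeated
# processing calls avoid CPU inference (assumes .to() returns the detector).
hed = hed.to(device)
processed_image_hed_gpu = hed(img)
```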

### Image resolution

To preserve the image aspect ratio, `detect_resolution`, `image_resolution`, and the image size should all be multiples of `64`.
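
A minimal sketch of that constraint (`round_to_64` is a hypothetical helper, not part of the package) snaps an arbitrary size down to the nearest multiple of 64 before processing:

```python
# Hypothetical helper: round a dimension down to the nearest multiple of 64
# (floored at 64) so width and height both satisfy the constraint.
def round_to_64(x: int) -> int:
    return max(64, (x // 64) * 64)

w, h = img.size
img64 = img.resize((round_to_64(w), round_to_64(h)))
processed = canny(img64, detect_resolution=512, image_resolution=512)
```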

            
