# ControlNet auxiliary models
This is a PyPI-installable package of [lllyasviel's ControlNet Annotators](https://github.com/lllyasviel/ControlNet/tree/main/annotator).
The code is copy-pasted from the respective folders in https://github.com/lllyasviel/ControlNet/tree/main/annotator and connected to [the 🤗 Hub](https://huggingface.co/lllyasviel/Annotators).
All credit & copyright goes to https://github.com/lllyasviel.
## Install
```
pip install controlnet-aux==0.0.8
```
To use DWPose, which depends on MMDetection, MMCV, and MMPose, additionally install:
```
pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
```
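If the installation succeeded, the three MM packages should be importable. A minimal sanity check (purely illustrative; not part of this package):
```python
# Verify the optional DWPose dependencies installed via mim above
import mmcv
import mmdet
import mmpose

print(mmcv.__version__, mmdet.__version__, mmpose.__version__)
```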
## Usage
You can use the `Processor` class, which loads any of the auxiliary models by its `processor_id`:
```python
import requests
from PIL import Image
from io import BytesIO
from controlnet_aux.processor import Processor
# load image
url = "https://huggingface.co/lllyasviel/sd-controlnet-openpose/resolve/main/images/pose.png"
response = requests.get(url)
img = Image.open(BytesIO(response.content)).convert("RGB").resize((512, 512))
# load processor from processor_id
# options are:
# ["canny", "depth_leres", "depth_leres++", "depth_midas", "depth_zoe", "lineart_anime",
# "lineart_coarse", "lineart_realistic", "mediapipe_face", "mlsd", "normal_bae", "normal_midas",
# "openpose", "openpose_face", "openpose_faceonly", "openpose_full", "openpose_hand",
# "scribble_hed, "scribble_pidinet", "shuffle", "softedge_hed", "softedge_hedsafe",
# "softedge_pidinet", "softedge_pidsafe", "dwpose"]
processor_id = 'scribble_hed'
processor = Processor(processor_id)
processed_image = processor(img, to_pil=True)
```
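All processors share the same call signature, so several can be run over one image in a loop. A minimal sketch reusing `img` and `Processor` from the snippet above; the output filenames are illustrative:
```python
# Run several processors over the same image and save each control map
for processor_id in ["canny", "depth_midas", "openpose"]:
    processor = Processor(processor_id)
    control_image = processor(img, to_pil=True)
    control_image.save(f"control_{processor_id}.png")  # illustrative filename
```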
Each model can also be loaded individually by importing and instantiating it as follows:
```python
from PIL import Image
import requests
from io import BytesIO
from controlnet_aux import (
    HEDdetector, MidasDetector, MLSDdetector, OpenposeDetector, PidiNetDetector,
    NormalBaeDetector, LineartDetector, LineartAnimeDetector, CannyDetector,
    ContentShuffleDetector, ZoeDetector, MediapipeFaceDetector, SamDetector,
    LeresDetector, DWposeDetector,
)
# load image
url = "https://huggingface.co/lllyasviel/sd-controlnet-openpose/resolve/main/images/pose.png"
response = requests.get(url)
img = Image.open(BytesIO(response.content)).convert("RGB").resize((512, 512))
# load checkpoints
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
mlsd = MLSDdetector.from_pretrained("lllyasviel/Annotators")
open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pidi = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
normal_bae = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")
lineart_anime = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")
zoe = ZoeDetector.from_pretrained("lllyasviel/Annotators")
sam = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
mobile_sam = SamDetector.from_pretrained("dhkim2810/MobileSAM", model_type="vit_t", filename="mobile_sam.pt")
leres = LeresDetector.from_pretrained("lllyasviel/Annotators")
# specify configs, ckpts, and device explicitly, or omit them to download
# the defaults automatically and run on CPU
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
det_config = "./src/controlnet_aux/dwpose/yolox_config/yolox_l_8xb8-300e_coco.py"
det_ckpt = "https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_l_8x8_300e_coco/yolox_l_8x8_300e_coco_20211126_140236-d3bd2b23.pth"
pose_config = "./src/controlnet_aux/dwpose/dwpose_config/dwpose-l_384x288.py"
pose_ckpt = "https://huggingface.co/wanghaofan/dw-ll_ucoco_384/resolve/main/dw-ll_ucoco_384.pth"
dwpose = DWposeDetector(det_config=det_config, det_ckpt=det_ckpt, pose_config=pose_config, pose_ckpt=pose_ckpt, device=device)
# instantiate
canny = CannyDetector()
content = ContentShuffleDetector()
face_detector = MediapipeFaceDetector()
# process
processed_image_hed = hed(img)
processed_image_midas = midas(img)
processed_image_mlsd = mlsd(img)
processed_image_open_pose = open_pose(img, hand_and_face=True)
processed_image_pidi = pidi(img, safe=True)
processed_image_normal_bae = normal_bae(img)
processed_image_lineart = lineart(img, coarse=True)
processed_image_lineart_anime = lineart_anime(img)
processed_image_zoe = zoe(img)
processed_image_sam = sam(img)
processed_image_leres = leres(img)
processed_image_canny = canny(img)
processed_image_content = content(img)
processed_image_mediapipe_face = face_detector(img)
processed_image_dwpose = dwpose(img)
```
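The processed maps are typically fed to a ControlNet pipeline as conditioning images. A minimal downstream sketch using the separate `diffusers` library (not part of this package); the checkpoint pairing shown is one plausible choice for the OpenPose map computed above:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a ControlNet conditioned on OpenPose maps and attach it to SD 1.5
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Condition generation on the pose map computed above
result = pipe("a dancer on a stage", image=processed_image_open_pose).images[0]
result.save("dancer.png")
```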
### Image resolution
To preserve the image's aspect ratio, `detect_resolution`, `image_resolution`, and the input image dimensions should all be multiples of `64`.
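A minimal sketch of one way to enforce this, reusing `img` and `hed` from the example above (`snap64` is a hypothetical helper, not part of the package):
```python
def snap64(x: int) -> int:
    # Round down to the nearest multiple of 64 (floor at 64)
    return max(64, (x // 64) * 64)

w, h = img.size
img64 = img.resize((snap64(w), snap64(h)))
processed = hed(img64, detect_resolution=512, image_resolution=512)
```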