# Auto 1111 SDK: Stable Diffusion Python library
<p>
<a href="https://pepy.tech/project/auto1111sdk">
<img alt="GitHub release" src="https://static.pepy.tech/badge/auto1111sdk/month">
</a>
</p>
Auto 1111 SDK is a lightweight Python library for generating, upscaling, and editing images with Stable Diffusion models. It is designed as a modular, lightweight Python client that encapsulates the main features of the [Automatic 1111 Stable Diffusion Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui). Auto 1111 SDK currently offers three core features:
- Text-to-Image, Image-to-Image, Inpainting, and Outpainting pipelines. Our pipelines support the exact same parameters as the [Stable Diffusion Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui), so you can easily replicate creations from the Web UI on the SDK.
- Upscaling pipelines that can run inference for any ESRGAN or Real-ESRGAN upscaler in a few lines of code.
- An integration with Civit AI to download models directly from the website.
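Under the hood, Civitai serves model files from a public download endpoint keyed by model-version ID. A minimal sketch of how such a URL can be assembled (the `build_download_url` helper and its parameters are illustrative, not the SDK's actual API):

```python
# Illustrative helper showing Civitai's public download endpoint pattern;
# the SDK's Civit AI integration wraps this kind of request for you.
def build_download_url(version_id: int, fmt: str = "SafeTensor") -> str:
    """Build a download URL for a given Civitai model-version ID."""
    return (f"https://civitai.com/api/download/models/{version_id}"
            f"?type=Model&format={fmt}")
```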
Join our [Discord](https://discord.gg/S7wRQqt6QV)!
## Demo
We have a Colab demo where you can try many of Auto 1111 SDK's operations. Check it out [here](https://colab.research.google.com/drive/1SekiJ-mdB2V8ogWbyRyF_yDnoMuDGWTl?usp=sharing)!
## Installation
We recommend installing Auto 1111 SDK from PyPI in a virtual environment. Conda environments are not yet supported.
```bash
pip3 install auto1111sdk
```
To install the latest development version from GitHub, run:
```bash
pip3 install git+https://github.com/saketh12/Auto1111SDK.git
```
## Quickstart
Generating images with Auto 1111 SDK is straightforward. Text-to-Image, Image-to-Image, Inpainting, Outpainting, and Stable Diffusion Upscale all run through a single pipeline object, which saves a significant amount of RAM compared to solutions that require a separate pipeline per operation.
```python
from auto1111sdk import StableDiffusionPipeline

# Load a model from a local .safetensors or .ckpt file
pipe = StableDiffusionPipeline("<Path to your local safetensors or checkpoint file>")

prompt = "a picture of a brown dog"
output = pipe.generate_txt2img(prompt=prompt, height=1024, width=768, steps=10)

# generate_txt2img returns a list of PIL images
output[0].save("image.png")
```
## Running on Windows
Find the instructions [here](https://github.com/saketh12/Auto1111SDK/blob/main/automatic1111sdk_on_windows_w_gpu.md). Contributed by Marco Guardigli, mgua@tomware.it.
## Documentation
We have more detailed examples and documentation on how to use Auto 1111 SDK [here](https://flush-ai.gitbook.io/automatic-1111-sdk/).
For a detailed comparison between us and Hugging Face Diffusers, you can read [this](https://flush-ai.gitbook.io/automatic-1111-sdk/auto-1111-sdk-vs-huggingface-diffusers).
For a detailed guide on how to use SDXL, we recommend reading [this](https://flush-ai.gitbook.io/automatic-1111-sdk/pipelines/stable-diffusion-xl).
## Features
- Original txt2img and img2img modes
- Real-ESRGAN and ESRGAN upscaling (compatible with any `.pth` file)
- Outpainting
- Inpainting
- Stable Diffusion Upscale
- Attention: specify parts of the text that the model should pay more attention to
    - a man in a `((tuxedo))` - will pay more attention to tuxedo
    - a man in a `(tuxedo:1.21)` - alternative syntax
    - select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` on macOS) to automatically adjust attention to selected text (code contributed by an anonymous user)
- Composable Diffusion: a way to use multiple prompts at once
    - separate prompts using uppercase `AND`
    - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- Works with a variety of samplers
- Download models and Real-ESRGAN checkpoints directly from Civit AI
- Set custom VAE: works for any model including SDXL
- Support for SDXL with Stable Diffusion XL Pipelines
- Pass in custom arguments to the models
- No 77-token prompt limit (unlike Hugging Face Diffusers, which has this limit)
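As an illustration of the attention and Composable Diffusion syntax above, here is a small self-contained sketch (not the SDK's actual parser): in the Web UI syntax, each pair of parentheses multiplies attention by 1.1 (so `((tuxedo))` is 1.1² ≈ 1.21, matching the explicit `(tuxedo:1.21)` form), and `AND` splits a prompt into separately weighted sub-prompts.

```python
import re

def emphasis_weight(token: str) -> tuple[str, float]:
    """Return (text, weight) for one emphasized token.

    (text:1.21) sets an explicit weight; each nesting level of
    plain parentheses multiplies the weight by 1.1.
    """
    m = re.fullmatch(r"\((.+?):([\d.]+)\)", token)
    if m:
        return m.group(1), float(m.group(2))
    depth = 0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]
        depth += 1
    return token, round(1.1 ** depth, 2)

def split_composable(prompt: str) -> list[tuple[str, float]]:
    """Split a Composable Diffusion prompt on uppercase AND,
    honoring optional ':weight' suffixes (default weight 1.0)."""
    parts = []
    for chunk in prompt.split(" AND "):
        text, _, w = chunk.rpartition(":")
        if text and w.replace(".", "", 1).isdigit():
            parts.append((text.strip(), float(w)))
        else:
            parts.append((chunk.strip(), 1.0))
    return parts
```

For example, `emphasis_weight("((tuxedo))")` and `emphasis_weight("(tuxedo:1.21)")` both yield a weight of 1.21, and `split_composable("a cat :1.2 AND a dog AND a penguin :2.2")` yields three weighted sub-prompts.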
## Roadmap
- Adding support for Hires Fix and Refiner parameters for inference
- Adding support for LoRAs
- Adding support for face restoration
- Adding support for the Dreambooth training script
- Adding support for custom extensions like ControlNet
We plan to add these features very soon, and we welcome contributions toward any of them!
## Contributing
Auto1111 SDK is continuously evolving, and we appreciate community involvement. We welcome all forms of contributions - bug reports, feature requests, and code contributions.
Report bugs and request features by opening an issue on GitHub.
Contribute to the project by forking/cloning the repository and submitting a pull request with your changes.
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Automatic 1111 Stable Diffusion Web UI - https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Stable Diffusion - https://github.com/Stability-AI/stablediffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- ESRGAN - https://github.com/xinntao/ESRGAN
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)