onnx-web

- Name: onnx-web
- Version: 0.12.0
- Summary: web UI for running ONNX models
- Home page: https://github.com/ssube/onnx-web
- Author: ssube (seansube@gmail.com)
- Requires Python: >=3.8,<3.11
- Keywords: onnx
- Upload time: 2023-12-31 23:25:57

# onnx-web

onnx-web is a tool for running Stable Diffusion and other [ONNX models](https://onnx.ai/) with hardware acceleration,
on both AMD and Nvidia GPUs and with a CPU software fallback.

The GUI is [hosted on GitHub Pages](https://ssube.github.io/onnx-web/) and runs in all major browsers, including on
mobile devices. It allows you to select the model and accelerator being used for each image pipeline. Image parameters
are shown for each of the major modes, and you can either upload or paint the mask for inpainting and outpainting. The
last few output images are shown below the image controls, making it easy to refer back to previous parameters or save
an image from earlier.

The API runs on both Linux and Windows and provides a REST API to run many of the pipelines from [`diffusers`
](https://huggingface.co/docs/diffusers/main/en/index), along with metadata about the available models and accelerators,
and the output of previous runs. Hardware acceleration is supported on both AMD and Nvidia for both Linux and Windows,
with a CPU fallback capable of running on laptop-class machines.
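
As a concrete illustration of that workflow, here is a minimal client sketch using Python's `requests` library. The endpoint paths, parameter names, port, and response shape below are assumptions made for illustration, not a documented contract; see the user guide and server admin guide for the actual API.

```python
# Hypothetical session against a locally running onnx-web server.
# All routes, parameters, and the response shape are illustrative
# assumptions; check the server docs for the real API.
import time

import requests

SERVER = "http://127.0.0.1:5000"  # assumed default host and port

# Fetch metadata about the available models and accelerators.
models = requests.get(f"{SERVER}/api/settings/models").json()

# Submit a txt2img job; the server processes it in the background.
job = requests.post(
    f"{SERVER}/api/txt2img",
    params={"prompt": "an astronaut eating a hamburger", "steps": 25},
).json()

# Poll for completion, the same pattern the GUI uses behind a load balancer.
output = job["outputs"][0]  # assumed response field
while not requests.get(f"{SERVER}/api/ready", params={"output": output}).json().get("ready"):
    time.sleep(2)

image_bytes = requests.get(f"{SERVER}/api/output/{output}").content
```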

Please check out [the setup guide to get started](docs/setup-guide.md) and [the user guide for more
details](https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md).

![preview of txt2img tab using SDXL to generate ghostly astronauts eating weird hamburgers on an abandoned space station](./docs/readme-sdxl.png)

## Features

This is an incomplete list of new and interesting features, with links to the user guide:

- supports SDXL and SDXL Turbo
- wide variety of schedulers: DDIM, DEIS, DPM SDE, Euler Ancestral, LCM, UniPC, and more
- hardware acceleration on both AMD and Nvidia
  - tested on CUDA, DirectML, and ROCm
  - [half-precision support for low-memory GPUs](docs/user-guide.md#optimizing-models-for-lower-memory-usage) on both
    AMD and Nvidia
  - software fallback for CPU-only systems
- web app to generate and view images
  - [hosted on GitHub Pages](https://ssube.github.io/onnx-web), from your CDN, or locally
  - [persists your recent images and progress as you change tabs](docs/user-guide.md#image-history)
  - queue up multiple images and retry errors
  - translations available for English, French, German, and Spanish (please open an issue for more)
- supports many `diffusers` pipelines
  - [txt2img](docs/user-guide.md#txt2img-tab)
  - [img2img](docs/user-guide.md#img2img-tab)
  - [inpainting](docs/user-guide.md#inpaint-tab), with mask drawing and upload
  - [panorama](docs/user-guide.md#panorama-pipeline), for both SD v1.5 and SDXL
  - [upscaling](docs/user-guide.md#upscale-tab), with ONNX acceleration
- [add and use your own models](docs/user-guide.md#adding-your-own-models)
  - [convert models from diffusers and SD checkpoints](docs/converting-models.md)
  - [download models from the HuggingFace Hub, Civitai, and HTTPS sources](docs/user-guide.md#model-sources)
- blend in additional networks
  - [permanent and prompt-based blending](docs/user-guide.md#permanently-blending-additional-networks)
  - [supports LoRA and LyCORIS weights](docs/user-guide.md#lora-tokens)
  - [supports Textual Inversion concepts and embeddings](docs/user-guide.md#textual-inversion-tokens)
    - each layer of the embeddings can be controlled and used individually
- ControlNet
  - image filters for edge detection and other methods
  - with ONNX acceleration
- highres mode
  - runs img2img on the results of the other pipelines
  - multiple iterations can produce 8k images and larger (see the size sketch after this list)
- [multi-stage](docs/user-guide.md#prompt-stages) and [region prompts](docs/user-guide.md#region-tokens)
  - seamlessly combine multiple prompts in the same image
  - provide prompts for different areas in the image and blend them together
  - change the prompt for highres mode and refine details without recursion
- infinite prompt length
  - [with long prompt weighting](docs/user-guide.md#long-prompt-weighting)
- [image blending mode](docs/user-guide.md#blend-tab)
  - combine images from history
- upscaling and correction
  - upscaling with Real ESRGAN, SwinIR, and Stable Diffusion
  - face correction with CodeFormer and GFPGAN
- [API server can be run remotely](docs/server-admin.md)
  - REST API can be served over HTTPS or HTTP
  - background processing for all image pipelines
  - polling for image status, plays nice with load balancers
- OCI containers provided
  - for all supported hardware accelerators
  - includes both the API and GUI bundle in a single container
  - runs well on [RunPod](https://www.runpod.io/), [Vast.ai](https://vast.ai/), and other GPU container hosting services
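
To make the highres numbers concrete, here is a quick sketch of the size arithmetic. The base resolution and per-iteration scale are illustrative values, not defaults: since each iteration runs img2img on the previous output and upscales it, two 4x passes on a 512px base already exceed 8k.

```python
# Illustrative size arithmetic for iterated highres mode: each pass
# upscales the previous output, so resolution grows geometrically.
size, scale = 512, 4  # illustrative base resolution and scale factor
for iteration in (1, 2):
    size *= scale
    print(f"after iteration {iteration}: {size}x{size}")
# after iteration 1: 2048x2048
# after iteration 2: 8192x8192
```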

## Contents

- [onnx-web](#onnx-web)
  - [Features](#features)
  - [Contents](#contents)
  - [Setup](#setup)
    - [Adding your own models](#adding-your-own-models)
  - [Usage](#usage)
    - [Known errors and solutions](#known-errors-and-solutions)
    - [Running the containers](#running-the-containers)
  - [Credits](#credits)

## Setup

There are a few ways to run onnx-web:

- cross-platform:
  - [clone this repository, create a virtual environment, and run `pip install`](docs/setup-guide.md#cross-platform-method)
  - [pulling and running the OCI containers](docs/server-admin.md#running-the-containers)
- on Windows:
  - [clone this repository and run one of the `setup-*.bat` scripts](docs/setup-guide.md#windows-python-installer)
  - [download and run the experimental all-in-one bundle](docs/setup-guide.md#windows-all-in-one-bundle)

You only need to run the server and should not need to compile anything. The client GUI is hosted on GitHub Pages and
is included with the Windows all-in-one bundle.

The extended setup docs have been [moved to the setup guide](docs/setup-guide.md).
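
As a minimal sketch of the cross-platform method, assuming the published PyPI package (version 0.12.0, per this page) is sufficient for your accelerator; hardware-specific ONNX runtimes and the full dependency list are covered in the setup guide.

```python
# Minimal cross-platform setup sketch: create a virtual environment and
# install the PyPI package. Accelerator-specific runtimes (CUDA, DirectML,
# ROCm) are extra steps; docs/setup-guide.md is authoritative.
import subprocess
import sys

subprocess.run([sys.executable, "-m", "venv", ".venv"], check=True)
# On Windows the interpreter lives at .venv\Scripts\python.exe instead.
subprocess.run([".venv/bin/python", "-m", "pip", "install", "onnx-web==0.12.0"], check=True)
```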

### Adding your own models

You can [add your own models](./docs/user-guide.md#adding-your-own-models) by downloading them from the HuggingFace Hub
or Civitai or by converting them from local files, without making any code changes. You can also download and blend in
additional networks, such as LoRAs and Textual Inversions, using [tokens in the
prompt](docs/user-guide.md#prompt-tokens).
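
For illustration, a prompt that blends in extra networks might look like the sketch below. The `<lora:...>` and `<inversion:...>` token shapes are assumed from the user guide's token sections, which are authoritative for the exact syntax; the network names here are hypothetical.

```python
# Illustrative prompt blending a LoRA and a Textual Inversion via tokens.
# Token shape assumed from the user guide; "detail-tweaker" and "bad-hands"
# are hypothetical network names, and the trailing numbers are weights.
prompt = (
    "a portrait photo of an astronaut, highly detailed "
    "<lora:detail-tweaker:0.8> "
    "<inversion:bad-hands:1.0>"
)
```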

## Usage

### Known errors and solutions

Please see [the Known Errors section of the user guide](https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#known-errors).

### Running the containers

This has [been moved to the server admin guide](docs/server-admin.md#running-the-containers).

## Credits

Some of the conversion and pipeline code was copied or derived from code in:

- [`Amblyopius/Stable-Diffusion-ONNX-FP16`](https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16)
  - GPL v3: https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/LICENSE
  - https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/pipeline_onnx_stable_diffusion_controlnet.py
  - https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/pipeline_onnx_stable_diffusion_instruct_pix2pix.py
- [`d8ahazard/sd_dreambooth_extension`](https://github.com/d8ahazard/sd_dreambooth_extension)
  - Non-commercial license: https://github.com/d8ahazard/sd_dreambooth_extension/blob/main/license.md
  - https://github.com/d8ahazard/sd_dreambooth_extension/blob/main/dreambooth/sd_to_diff.py
- [`huggingface/diffusers`](https://github.com/huggingface/diffusers)
  - Apache v2: https://github.com/huggingface/diffusers/blob/main/LICENSE
  - https://github.com/huggingface/diffusers/blob/main/scripts/convert_stable_diffusion_checkpoint_to_onnx.py
- [`uchuusen/onnx_stable_diffusion_controlnet`](https://github.com/uchuusen/onnx_stable_diffusion_controlnet)
  - GPL v3: https://github.com/uchuusen/onnx_stable_diffusion_controlnet/blob/main/LICENSE
- [`uchuusen/pipeline_onnx_stable_diffusion_instruct_pix2pix`](https://github.com/uchuusen/pipeline_onnx_stable_diffusion_instruct_pix2pix)
  - Apache v2: https://github.com/uchuusen/pipeline_onnx_stable_diffusion_instruct_pix2pix/blob/main/LICENSE

Those parts have their own licenses with additional restrictions on commercial usage, modification, and redistribution.
The rest of the project is provided under the MIT license, and I am working to isolate these components into a library.

There are many other good options for using Stable Diffusion with hardware acceleration, including:

- https://github.com/Amblyopius/AMD-Stable-Diffusion-ONNX-FP16
- https://github.com/azuritecoin/OnnxDiffusersUI
- https://github.com/ForserX/StableDiffusionUI
- https://github.com/pingzing/stable-diffusion-playground
- https://github.com/quickwick/stable-diffusion-win-amd-ui

Getting this set up and running on AMD would not have been possible without guides by:

- https://gist.github.com/harishanand95/75f4515e6187a6aa3261af6ac6f61269
- https://gist.github.com/averad/256c507baa3dcc9464203dc14610d674
- https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs
- https://www.travelneil.com/stable-diffusion-updates.html

            
