# ONNX Web

onnx-web is a tool for running Stable Diffusion and other [ONNX models](https://onnx.ai/) with hardware acceleration,
on both AMD and Nvidia GPUs and with a CPU software fallback.
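
Accelerator selection comes down to ONNX Runtime execution providers. As a rough sketch (not onnx-web's actual selection code), using the standard provider names, the fallback order described above could look like this:

```python
def pick_provider(available, preferred=None):
    """Return the first preferred ONNX Runtime execution provider that is
    available, falling back to the CPU provider when no accelerator is."""
    if preferred is None:
        preferred = [
            "CUDAExecutionProvider",  # Nvidia GPUs via CUDA
            "ROCMExecutionProvider",  # AMD GPUs on Linux via ROCm
            "DmlExecutionProvider",   # AMD and other GPUs on Windows via DirectML
            "CPUExecutionProvider",   # software fallback
        ]
    for provider in preferred:
        if provider in available:
            return provider
    return "CPUExecutionProvider"

# with onnxruntime installed, the available list would come from
# onnxruntime.get_available_providers()
print(pick_provider(["DmlExecutionProvider", "CPUExecutionProvider"]))
# prints DmlExecutionProvider
```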
The GUI is [hosted on Github Pages](https://ssube.github.io/onnx-web/) and runs in all major browsers, including on
mobile devices. It allows you to select the model and accelerator being used for each image pipeline. Image parameters
are shown for each of the major modes, and you can either upload or paint the mask for inpainting and outpainting. The
last few output images are shown below the image controls, making it easy to refer back to previous parameters or save
an image from earlier.
The API runs on both Linux and Windows and provides a REST API to run many of the pipelines from [`diffusers`
](https://huggingface.co/docs/diffusers/main/en/index), along with metadata about the available models and accelerators,
and the output of previous runs. Hardware acceleration is supported on both AMD and Nvidia for both Linux and Windows,
with a CPU fallback capable of running on laptop-class machines.
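
As an illustration of how a client might talk to such an API, the sketch below builds a request URL and outlines the poll loop; the endpoint path and parameter names here are placeholders for illustration, not the documented routes (see the user guide for those):

```python
from urllib.parse import urlencode

def build_request(server, pipeline, params):
    """Build a request URL for an image pipeline.
    The /api/<pipeline> path is a placeholder, not a documented route."""
    return "%s/api/%s?%s" % (server.rstrip("/"), pipeline, urlencode(params))

url = build_request("http://localhost:5000", "txt2img", {
    "prompt": "an astronaut eating a hamburger",
    "steps": 25,
    "cfg": 6.0,
})
print(url)

# a real client would POST this, receive a job ID in the response, then
# periodically GET a status endpoint until the output image is ready;
# polling, rather than holding the connection open, is what keeps the
# API friendly to load balancers
```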
Please check out [the setup guide to get started](docs/setup-guide.md) and [the user guide for more
details](https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md).
![txt2img with detailed knollingcase renders of a soldier in a cloudy alien jungle](./docs/readme-preview.png)
## Features
This is an incomplete list of new and interesting features, with links to the user guide:
- hardware acceleration on both AMD and Nvidia
- tested on CUDA, DirectML, and ROCm
- [half-precision support for low-memory GPUs](docs/user-guide.md#optimizing-models-for-lower-memory-usage) on both
AMD and Nvidia
- software fallback for CPU-only systems
- web app to generate and view images
- [hosted on Github Pages](https://ssube.github.io/onnx-web), from your CDN, or locally
- [persists your recent images and progress as you change tabs](docs/user-guide.md#image-history)
- queue up multiple images and retry errors
- translations available for English, French, German, and Spanish (please open an issue for more)
- supports many `diffusers` pipelines
- [txt2img](docs/user-guide.md#txt2img-tab)
- [img2img](docs/user-guide.md#img2img-tab)
- [inpainting](docs/user-guide.md#inpaint-tab), with mask drawing and upload
- [upscaling](docs/user-guide.md#upscale-tab), with ONNX acceleration
- [add and use your own models](docs/user-guide.md#adding-your-own-models)
- [convert models from diffusers and SD checkpoints](docs/converting-models.md)
- [download models from HuggingFace hub, Civitai, and HTTPS sources](docs/user-guide.md#model-sources)
- blend in additional networks
- [permanent and prompt-based blending](docs/user-guide.md#permanently-blending-additional-networks)
- [supports LoRA and LyCORIS weights](docs/user-guide.md#lora-tokens)
- [supports Textual Inversion concepts and embeddings](docs/user-guide.md#textual-inversion-tokens)
- ControlNet
- image filters for edge detection and other methods
- with ONNX acceleration
- highres mode
- runs img2img on the results of the other pipelines
- multiple iterations can produce 8k images and larger
- infinite prompt length
- [with long prompt weighting](docs/user-guide.md#long-prompt-weighting)
- expand and control Textual Inversions per-layer
- [image blending mode](docs/user-guide.md#blend-tab)
- combine images from history
- upscaling and face correction
- upscaling with Real-ESRGAN or Stable Diffusion
- face correction with CodeFormer or GFPGAN
- [API server can be run remotely](docs/server-admin.md)
- REST API can be served over HTTPS or HTTP
- background processing for all image pipelines
- polling for image status, plays nice with load balancers
- OCI containers provided
- for all supported hardware accelerators
- includes both the API and GUI bundle in a single container
- runs well on [RunPod](https://www.runpod.io/) and other GPU container hosting services
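
The 8k figure in the highres bullets above is just compounding: as a back-of-the-envelope sketch, assuming each highres iteration multiplies the edge length by the upscaler's scale factor:

```python
def highres_size(base, scale, iterations):
    """Final edge length after repeated upscale-then-img2img passes."""
    return base * scale ** iterations

# a 512px base image with a 4x upscaler
print(highres_size(512, 4, 1))  # 2048
print(highres_size(512, 4, 2))  # 8192, wider than 8k UHD's 7680px
```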
## Contents
- [ONNX Web](#onnx-web)
- [Features](#features)
- [Contents](#contents)
- [Setup](#setup)
- [Adding your own models](#adding-your-own-models)
- [Usage](#usage)
- [Known errors and solutions](#known-errors-and-solutions)
- [Running the containers](#running-the-containers)
- [Credits](#credits)
## Setup
There are a few ways to run onnx-web:
- cross-platform:
- [clone this repository, create a virtual environment, and run `pip install`](docs/setup-guide.md#cross-platform-method)
- [pulling and running the OCI containers](docs/server-admin.md#running-the-containers)
- on Windows:
- [clone this repository and run one of the `setup-*.bat` scripts](docs/setup-guide.md#windows-python-installer)
- [download and run the experimental all-in-one bundle](docs/setup-guide.md#windows-all-in-one-bundle)
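
The cross-platform method above boils down to creating a virtual environment and installing the API's dependencies into it, which the standard library can sketch (the requirements path in the comment is an assumption about the repository layout, not a confirmed filename):

```python
import venv

# create a virtual environment for the server; pass with_pip=True to
# also bootstrap pip inside it
venv.create("onnx-env", with_pip=False)

# then, from inside that environment, install the server's dependencies;
# the requirements path below is a guess at the repo layout, so check
# the setup guide for the real file:
#   pip install -r api/requirements.txt
```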
You only need to run the server and should not need to compile anything. The client GUI is hosted on Github Pages and
is included with the Windows all-in-one bundle.
The extended setup docs have been [moved to the setup guide](docs/setup-guide.md).
### Adding your own models
You can [add your own models](./docs/user-guide.md#adding-your-own-models) by downloading them from the HuggingFace Hub
or Civitai, or by converting them from local files, without making any code changes. You can also download and blend in
additional networks, such as LoRAs and Textual Inversions, using [tokens in the
prompt](docs/user-guide.md#prompt-tokens).
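
For example, blending a LoRA and a Textual Inversion into a request could look like the sketch below, which assumes the `<lora:name:weight>` and `<inversion:name:weight>` token forms described in the user guide (check the prompt-tokens section for the exact syntax):

```python
def with_networks(prompt, loras=(), inversions=()):
    """Prefix a prompt with network-blending tokens; the token syntax
    is assumed from the user guide's prompt-tokens section."""
    tokens = ["<lora:%s:%s>" % pair for pair in loras]
    tokens += ["<inversion:%s:%s>" % pair for pair in inversions]
    return " ".join(tokens + [prompt])

print(with_networks("a cyberpunk alleyway at night", loras=[("neon-style", 1.0)]))
# <lora:neon-style:1.0> a cyberpunk alleyway at night
```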
## Usage
### Known errors and solutions
Please see [the Known Errors section of the user guide](https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#known-errors).
### Running the containers
This has [been moved to the server admin guide](docs/server-admin.md#running-the-containers).
## Credits
Some of the conversion and pipeline code was copied or derived from code in:
- [`Amblyopius/Stable-Diffusion-ONNX-FP16`](https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16)
- GPL v3: https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/LICENSE
- https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/pipeline_onnx_stable_diffusion_controlnet.py
- https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/pipeline_onnx_stable_diffusion_instruct_pix2pix.py
- [`d8ahazard/sd_dreambooth_extension`](https://github.com/d8ahazard/sd_dreambooth_extension)
- Non-commercial license: https://github.com/d8ahazard/sd_dreambooth_extension/blob/main/license.md
- https://github.com/d8ahazard/sd_dreambooth_extension/blob/main/dreambooth/sd_to_diff.py
- [`huggingface/diffusers`](https://github.com/huggingface/diffusers)
- Apache v2: https://github.com/huggingface/diffusers/blob/main/LICENSE
- https://github.com/huggingface/diffusers/blob/main/scripts/convert_stable_diffusion_checkpoint_to_onnx.py
- [`uchuusen/onnx_stable_diffusion_controlnet`](https://github.com/uchuusen/onnx_stable_diffusion_controlnet)
- GPL v3: https://github.com/uchuusen/onnx_stable_diffusion_controlnet/blob/main/LICENSE
- [`uchuusen/pipeline_onnx_stable_diffusion_instruct_pix2pix`](https://github.com/uchuusen/pipeline_onnx_stable_diffusion_instruct_pix2pix)
- Apache v2: https://github.com/uchuusen/pipeline_onnx_stable_diffusion_instruct_pix2pix/blob/main/LICENSE
Those parts have their own licenses with additional restrictions on commercial usage, modification, and redistribution.
The rest of the project is provided under the MIT license, and I am working to isolate these components into a library.
There are many other good options for using Stable Diffusion with hardware acceleration, including:
- https://github.com/Amblyopius/AMD-Stable-Diffusion-ONNX-FP16
- https://github.com/azuritecoin/OnnxDiffusersUI
- https://github.com/ForserX/StableDiffusionUI
- https://github.com/pingzing/stable-diffusion-playground
- https://github.com/quickwick/stable-diffusion-win-amd-ui
Getting this set up and running on AMD would not have been possible without guides by:
- https://gist.github.com/harishanand95/75f4515e6187a6aa3261af6ac6f61269
- https://gist.github.com/averad/256c507baa3dcc9464203dc14610d674
- https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs
- https://www.travelneil.com/stable-diffusion-updates.html