
# ONNX Web

onnx-web is a tool for running Stable Diffusion and other ONNX models with hardware acceleration on both AMD and Nvidia GPUs, with a CPU software fallback.

The GUI is hosted on GitHub Pages and runs in all major browsers, including on mobile devices. It allows you to select the model and accelerator being used for each image pipeline. Image parameters are shown for each of the major modes, and you can either upload or paint the mask for inpainting and outpainting. The last few output images are shown below the image controls, making it easy to refer back to previous parameters or save an image from earlier.

The API runs on both Linux and Windows and provides a REST API to run many of the pipelines from diffusers, along with metadata about the available models and accelerators, and the output of previous runs. Hardware acceleration is supported on both AMD and Nvidia for both Linux and Windows, with a CPU fallback capable of running on laptop-class machines.
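
For illustration, here is a rough sketch of driving the REST API from Python. The endpoint paths, parameter names, and response shape below are assumptions made for the sake of example, not a definitive reference; check the API docs and your server version for the exact routes and fields.

```python
# Sketch of submitting a txt2img job and fetching the result.
# Assumptions: the server listens on localhost:5000, accepts query parameters
# on POST /api/txt2img, exposes a readiness check at /api/ready, and serves
# finished images from /output/<name>. Verify these against the API docs.
import time

import requests

API = "http://localhost:5000"  # assumed default host and port

# Submit a txt2img job (parameters are illustrative, not exhaustive)
job = requests.post(f"{API}/api/txt2img", params={
    "prompt": "detailed knollingcase render of a soldier in a cloudy alien jungle",
    "model": "stable-diffusion-onnx-v1-5",  # assumed model name
    "platform": "amd",                      # accelerator selector
    "steps": 25,
    "cfg": 6.0,
    "width": 512,
    "height": 512,
}).json()

output = job["outputs"][0]  # assumed response shape

# Poll until the background worker finishes, then download the image
while not requests.get(f"{API}/api/ready", params={"output": output}).json().get("ready"):
    time.sleep(1)

image = requests.get(f"{API}/output/{output}")
with open(output, "wb") as f:
    f.write(image.content)
```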

Please check out the setup guide to get started and the user guide for more details.

*Preview: txt2img with detailed knollingcase renders of a soldier in a cloudy alien jungle*

## Features

This is an incomplete list of new and interesting features, with links to the user guide:

## Contents

## Setup

There are a few ways to run onnx-web:

You only need to run the server and should not need to compile anything. The client GUI is hosted on GitHub Pages and is included with the Windows all-in-one bundle.

The extended setup docs have been moved to the setup guide.

### Adding your own models

You can add your own models by downloading them from the HuggingFace Hub or Civitai, or by converting them from local files, without making any code changes. You can also download and blend in additional networks, such as LoRAs and Textual Inversions, using tokens in the prompt.
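
As a rough sketch of what blending networks in the prompt looks like, the snippet below builds a prompt with LoRA and Textual Inversion tokens. The network names and weights are placeholders; see the user guide for the exact token syntax and the supported network types.

```python
# Placeholder example of blending extra networks via prompt tokens.
# "my-style-lora" and "my-concept" are hypothetical network names; the token
# format and weight ranges are described in the user guide.
prompt = (
    "<lora:my-style-lora:0.8> "
    "<inversion:my-concept:1.0> "
    "a portrait in the blended style, highly detailed"
)
```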

## Usage

### Known errors and solutions

Please see the Known Errors section of the user guide.

### Running the containers

This has been moved to the server admin guide.

## Credits

Some of the conversion code was copied or derived from code in:

Those parts have their own licenses with additional restrictions and may require permission for commercial use.

Getting this set up and running on AMD would not have been possible without guides by:

There are many other good options for using Stable Diffusion with hardware acceleration, including: