
fix phrasing and order

Sean Sube 2023-03-30 21:51:59 -05:00
parent 968058bf35
commit 6e509c0bc4
Signed by: ssube
GPG Key ID: 3EED7B957D362AF1
1 changed file with 9 additions and 9 deletions


@@ -3,17 +3,17 @@
onnx-web is a tool for running Stable Diffusion and other [ONNX models](https://onnx.ai/) with hardware acceleration,
on both AMD and Nvidia GPUs and with a CPU software fallback.
-The API runs on both Linux and Windows and provides a REST API to run many of the pipelines from [`diffusers`
-](https://huggingface.co/docs/diffusers/main/en/index), along with metadata about the available models and accelerators,
-and the output of previous runs. Hardware acceleration is supported on both AMD and Nvidia for both Linux and Windows,
-with a CPU fallback capable of running on laptop-class machines.
The GUI is [hosted on Github Pages](https://ssube.github.io/onnx-web/) and runs in all major browsers, including on
mobile devices. It allows you to select the model and accelerator being used for each image pipeline. Image parameters
are shown for each of the major modes, and you can either upload or paint the mask for inpainting and outpainting. The
last few output images are shown below the image controls, making it easy to refer back to previous parameters or save
an image from earlier.
+The API runs on both Linux and Windows and provides a REST API to run many of the pipelines from [`diffusers`
+](https://huggingface.co/docs/diffusers/main/en/index), along with metadata about the available models and accelerators,
+and the output of previous runs. Hardware acceleration is supported on both AMD and Nvidia for both Linux and Windows,
+with a CPU fallback capable of running on laptop-class machines.
Please check out [the setup guide to get started](docs/setup-guide.md) and [the user guide for more
details](https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md).
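As a rough illustration of the kind of client that REST API description implies, here is a minimal Python sketch; the endpoint paths, parameter names, and response shape below are assumptions for illustration only and may not match the server's actual routes, so check the API documentation for the real names.

```python
# Sketch of submitting a txt2img job to an onnx-web style REST API and
# downloading the result. Endpoints, parameters, and the response shape
# are assumptions for illustration, not taken from this commit.
import time

import requests

API = "http://127.0.0.1:5000"  # assumed host/port for a locally running server

# submit a txt2img job (hypothetical endpoint and parameters)
job = requests.post(
    f"{API}/api/txt2img",
    params={
        "prompt": "an astronaut eating a hamburger",
        "steps": 25,
        "cfg": 7.0,
    },
).json()

output = job["outputs"][0]  # assumed response shape: list of output image names

# poll until the worker reports the image is ready (hypothetical route)
while not requests.get(f"{API}/api/ready", params={"output": output}).json().get("ready"):
    time.sleep(2)

# fetch the finished image (hypothetical output route)
image = requests.get(f"{API}/output/{output}")
with open("result.png", "wb") as f:
    f.write(image.content)
```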
@@ -92,10 +92,10 @@ The extended setup docs have been [moved to the setup guide](docs/setup-guide.md
### Adding your own models
-You can include your own models or download and convert models from the HuggingFace Hub or Civitai without making any
-code changes, by including them in the `extras.json` file. You can also download and blend in additional networks, such
-as LoRAs and Textual Inversions. For more details, please [see the user guide
-](./docs/user-guide.md#adding-your-own-models).
+You can [add your own models](./docs/user-guide.md#adding-your-own-models) by downloading them from the HuggingFace Hub
+or Civitai or by converting them from local files, without making any code changes. You can also download and blend in
+additional networks, such as LoRAs and Textual Inversions, using [tokens in the
+prompt](docs/user-guide.md#prompt-tokens).
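As a sketch of what this looks like in practice, the snippet below builds a small `extras.json` and a prompt that references an extra network; the schema keys, field names, and the `<lora:...>` / `<inversion:...>` token syntax are assumptions based on the text above, so see docs/user-guide.md for the authoritative format.

```python
# Sketch of registering an extra model and referencing networks from the prompt.
# The extras.json keys/fields and the prompt token syntax are assumptions for
# illustration; consult the user guide for the real schema.
import json

extras = {
    "diffusion": [
        {
            # hypothetical example entry: a checkpoint pulled from Civitai
            "name": "diffusion-example",
            "source": "civitai://123456",
        }
    ],
    "networks": [
        {
            # hypothetical LoRA network to blend in at prompt time
            "name": "lora-example",
            "source": "https://example.com/lora-example.safetensors",
            "type": "lora",
        }
    ],
}

with open("extras.json", "w") as f:
    json.dump(extras, f, indent=2)

# A prompt that blends in the extra network via a token (assumed syntax):
prompt = "<lora:lora-example:1.0> an astronaut eating a hamburger"
```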
## Usage