
chore(docs): emphasize testing section

Sean Sube 2023-01-12 18:33:03 -06:00
parent 4a8fbf514c
commit d1d079d9af

1 changed file with 16 additions and 6 deletions


@@ -7,8 +7,8 @@ along with metadata about the available models and accelerators, and the output
 is supported on both AMD and Nvidia, with a CPU fallback capable of running on laptop-class machines.
 The GUI runs in all major browsers, including on mobile devices, and allows you to select the model and accelerator
-being used, along with the prompt and other image parameters. The last few output images are shown, making it easy to
-refer back to previous parameters.
+being used, along with the prompt and other image parameters. The last few output images are shown below the image
+controls, making it easy to refer back to previous parameters or save an image from earlier.
 
 ![txt2img with example astronaut prompt and image](./docs/readme-preview.png)
@@ -22,12 +22,15 @@ Based on guides by:
 ## Features
 
 - REST API server capable of running ONNX models with DirectML acceleration
-  - AMD hardware acceleration
+  - AMD and Nvidia hardware acceleration
   - CPU software fallback
   - multiple schedulers
 - web app to generate and view images
   - can be hosted alongside API or on a CDN
   - built with React and MUI
+- OCI containers
+  - for both API and GUI
+  - for each hardware platform
 - txt2img mode
   - image controls and scheduler selection
   - with recent image history
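
The OCI container feature added in this hunk implies a containerized deployment roughly like the following. This is only a sketch: the image names, tags, and ports are hypothetical placeholders, since the diff does not name the published images.

```shell
# hypothetical image names and ports -- the diff does not specify the published tags
> docker run -p 5000:5000 onnx-web/api:directml
> docker run -p 8000:8000 onnx-web/gui:latest
```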
@ -53,6 +56,7 @@ Based on guides by:
- [For CPU on Windows: Install PyTorch CPU](#for-cpu-on-windows-install-pytorch-cpu) - [For CPU on Windows: Install PyTorch CPU](#for-cpu-on-windows-install-pytorch-cpu)
- [For Nvidia everywhere: Install PyTorch GPU and ONNX GPU](#for-nvidia-everywhere-install-pytorch-gpu-and-onnx-gpu) - [For Nvidia everywhere: Install PyTorch GPU and ONNX GPU](#for-nvidia-everywhere-install-pytorch-gpu-and-onnx-gpu)
- [Download and convert models](#download-and-convert-models) - [Download and convert models](#download-and-convert-models)
- [Test the models](#test-the-models)
- [Usage](#usage) - [Usage](#usage)
- [Configuring and running the server](#configuring-and-running-the-server) - [Configuring and running the server](#configuring-and-running-the-server)
- [Securing the server](#securing-the-server) - [Securing the server](#securing-the-server)
@@ -271,9 +275,6 @@ Run the conversion script with your desired model(s):
 
 This will take a little while to convert each model. Stable diffusion v1.4 is about 6GB, v1.5 is at least 10GB or so.
 
-You should verify that all of the steps up to this point have worked correctly by attempting to run the
-`api/test-setup.py` script, which is a slight variation on the original txt2img script.
-
 If you want to use inpainting, you will need a second model trained for that purpose:
 
 ```shell
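
For context, the conversion step referenced above uses the same script shown for inpainting in the next hunk. A sketch for Stable Diffusion v1.4, the model whose size is mentioned above; the `--model_path` and `--output_path` values are assumptions based on that pattern, not part of this diff:

```shell
# assumed invocation, mirroring the inpainting example in the next hunk
> python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path=".\models\stable-diffusion-v1-4"
```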
@@ -284,6 +285,15 @@ If you want to use inpainting, you will need a second model trained for that pur
 > python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="runwayml/stable-diffusion-inpainting" --output_path=".\models\stable-diffusion-inpainting"
 ```
 
+### Test the models
+
+You should verify that all of the steps up to this point have worked correctly by attempting to run the
+`api/test-setup.py` script, which is a slight variation on the original txt2img script.
+
+If the script works, there will be an image of an astronaut in `outputs/test.png`.
+
+If you get any errors, check [the known errors section](#known-errors-and-solutions).
 
 ## Usage
 
 ### Configuring and running the server
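
The new section only names the script, so as a minimal sketch, running the test presumably looks like this; the working directory and interpreter invocation are assumptions:

```shell
# assumed invocation from the repository root
> python api/test-setup.py
# per the new section, a successful run writes an astronaut image to outputs/test.png
```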