chore(docs): emphasize testing section

parent 4a8fbf514c
commit d1d079d9af

README.md (22 changed lines)
@@ -7,8 +7,8 @@ along with metadata about the available models and accelerators, and the output
 is supported on both AMD and Nvidia, with a CPU fallback capable of running on laptop-class machines.
 
 The GUI runs in all major browsers, including on mobile devices, and allows you to select the model and accelerator
-being used, along with the prompt and other image parameters. The last few output images are shown, making it easy to
-refer back to previous parameters.
+being used, along with the prompt and other image parameters. The last few output images are shown below the image
+controls, making it easy to refer back to previous parameters or save an image from earlier.
 
 ![txt2img with example astronaut prompt and image](./docs/readme-preview.png)
 
@@ -22,12 +22,15 @@ Based on guides by:
 ## Features
 
 - REST API server capable of running ONNX models with DirectML acceleration
-  - AMD hardware acceleration
+  - AMD and Nvidia hardware acceleration
   - CPU software fallback
+  - multiple schedulers
 - web app to generate and view images
   - can be hosted alongside API or on a CDN
   - built with React and MUI
 - OCI containers
   - for both API and GUI
   - for each hardware platform
 - txt2img mode
+  - image controls and scheduler selection
+  - with recent image history
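
The acceleration bullets above correspond to ONNX Runtime execution providers: DirectML covers both AMD and Nvidia GPUs on Windows, with the CPU provider as the software fallback. A rough sketch of provider selection, assuming the `onnxruntime-directml` package and a hypothetical model path (this is not the project's actual server code):

```python
# rough sketch of ONNX Runtime provider selection, not the project's server code;
# requires the onnxruntime-directml package, and the model path is hypothetical
import onnxruntime as ort

session = ort.InferenceSession(
    "./models/stable-diffusion-onnx/unet/model.onnx",  # hypothetical path
    # tried in order: DirectML (AMD/Nvidia on Windows) first, CPU fallback second
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # confirms which providers actually loaded
```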
@@ -53,6 +56,7 @@ Based on guides by:
     - [For CPU on Windows: Install PyTorch CPU](#for-cpu-on-windows-install-pytorch-cpu)
     - [For Nvidia everywhere: Install PyTorch GPU and ONNX GPU](#for-nvidia-everywhere-install-pytorch-gpu-and-onnx-gpu)
   - [Download and convert models](#download-and-convert-models)
+  - [Test the models](#test-the-models)
 - [Usage](#usage)
   - [Configuring and running the server](#configuring-and-running-the-server)
   - [Securing the server](#securing-the-server)
@@ -271,9 +275,6 @@ Run the conversion script with your desired model(s):
 
 This will take a little while to convert each model. Stable diffusion v1.4 is about 6GB, v1.5 is at least 10GB or so.
 
-You should verify that all of the steps up to this point have worked correctly by attempting to run the
-`api/test-setup.py` script, which is a slight variation on the original txt2img script.
-
 If you want to use inpainting, you will need a second model trained for that purpose:
 
 ```shell
@@ -284,6 +285,15 @@ If you want to use inpainting, you will need a second model trained for that purpose:
 > python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="runwayml/stable-diffusion-inpainting" --output_path=".\models\stable-diffusion-inpainting"
 ```
 
+### Test the models
+
+You should verify that all of the steps up to this point have worked correctly by attempting to run the
+`api/test-setup.py` script, which is a slight variation on the original txt2img script.
+
+If the script works, there will be an image of an astronaut in `outputs/test.png`.
+
+If you get any errors, check [the known errors section](#known-errors-and-solutions).
+
 ## Usage
 
 ### Configuring and running the server
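
The new testing section amounts to a single smoke test: load the converted pipeline and render the astronaut image. A minimal sketch in the spirit of `api/test-setup.py`, assuming the model was converted into a local directory as shown above; the path and prompt are illustrative, not the script's exact contents:

```python
# illustrative smoke test in the spirit of api/test-setup.py; the model path
# and prompt are assumptions, not the script's exact contents
import os

from diffusers import OnnxStableDiffusionPipeline

os.makedirs("outputs", exist_ok=True)

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "./models/stable-diffusion-onnx",  # assumed conversion output directory
    provider="DmlExecutionProvider",   # use "CPUExecutionProvider" without a GPU
)

image = pipe("an astronaut riding a horse").images[0]
image.save("outputs/test.png")  # the README checks for this file
```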