
update features

This commit is contained in:
Sean Sube 2023-11-10 04:33:29 -06:00
parent 3622ac4bfb
commit 01d8aabc42
Signed by: ssube
GPG Key ID: 3EED7B957D362AF1
2 changed files with 23 additions and 5 deletions

View File

@@ -23,6 +23,7 @@ details](https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md).
This is an incomplete list of new and interesting features, with links to the user guide:
- SDXL support
- hardware acceleration on both AMD and Nvidia
- tested on CUDA, DirectML, and ROCm
- [half-precision support for low-memory GPUs](docs/user-guide.md#optimizing-models-for-lower-memory-usage) on both
@@ -37,6 +38,7 @@ This is an incomplete list of new and interesting features, with links to the us
- [txt2img](docs/user-guide.md#txt2img-tab)
- [img2img](docs/user-guide.md#img2img-tab)
- [inpainting](docs/user-guide.md#inpaint-tab), with mask drawing and upload
- [panorama](docs/user-guide.md#panorama-pipeline)
- [upscaling](docs/user-guide.md#upscale-tab), with ONNX acceleration
- [add and use your own models](docs/user-guide.md#adding-your-own-models)
- [convert models from diffusers and SD checkpoints](docs/converting-models.md)
@@ -45,20 +47,24 @@ This is an incomplete list of new and interesting features, with links to the us
- [permanent and prompt-based blending](docs/user-guide.md#permanently-blending-additional-networks)
- [supports LoRA and LyCORIS weights](docs/user-guide.md#lora-tokens)
- [supports Textual Inversion concepts and embeddings](docs/user-guide.md#textual-inversion-tokens)
- each layer of the embeddings can be controlled and used individually
- ControlNet
- image filters for edge detection and other methods
- with ONNX acceleration
- highres mode
- runs img2img on the results of the other pipelines
- multiple iterations can produce 8k images and larger
- [multi-stage](docs/user-guide.md#prompt-stages) and [region prompts](docs/user-guide.md#region-tokens)
- combine multiple prompts in the same image
- provide prompts for different areas in the image and blend them together
- change the prompt for highres mode and refine details without recursion
- infinite prompt length
- [with long prompt weighting](docs/user-guide.md#long-prompt-weighting)
- expand and control Textual Inversions per-layer
- [image blending mode](docs/user-guide.md#blend-tab)
- combine images from history
- upscaling and correction
- upscaling with Real ESRGAN, SwinIR, and Stable Diffusion
- face correction with CodeFormer and GFPGAN
- [API server can be run remotely](docs/server-admin.md)
- REST API can be served over HTTPS or HTTP
- background processing for all image pipelines

View File

@@ -460,7 +460,19 @@ This makes your prompt less specific and some models have been trained to work b
#### Prompt stages
You can provide a different prompt for the highres and upscaling stages of an image using prompt stages. Each stage
of a prompt is separated by `||` and can include its own LoRAs, embeddings, and regions. If you are using multiple
iterations of highres, each iteration can have its own prompt stage. This can help you avoid recursive body parts
and some other weird mutations that can be caused by iterating over a subject prompt.
For example, a prompt like `human being sitting on wet grass, outdoors, bright sunny day` is likely to produce many
small people mixed in with the grass when used with highres. This becomes even worse with 2+ iterations. However,
changing that prompt to `human being sitting on wet grass, outdoors, bright sunny day || outdoors, bright sunny day, detailed, intricate, HDR`
will use the second stage as the prompt for highres: `outdoors, bright sunny day, detailed, intricate, HDR`.
This allows you to add and refine details, textures, and even the style of the image during the highres pass.
Prompt stages are only used during upscaling if you are using the Stable Diffusion upscaling model.
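The stage-selection behavior described above can be sketched roughly like this. This is only an illustration of the idea: the `||` separator comes from the docs, but `parse_prompt_stages` and `stage_for_iteration` are hypothetical helpers, not onnx-web's actual implementation.

```python
def parse_prompt_stages(prompt: str) -> list[str]:
    """Split a prompt into stages on the `||` separator."""
    return [stage.strip() for stage in prompt.split("||")]


def stage_for_iteration(stages: list[str], iteration: int) -> str:
    """Pick the prompt stage for a pass (0 = base pass, 1+ = highres).

    When there are fewer stages than iterations, later iterations
    reuse the last stage rather than falling back to the base prompt.
    """
    return stages[min(iteration, len(stages) - 1)]


prompt = (
    "human being sitting on wet grass, outdoors, bright sunny day"
    " || outdoors, bright sunny day, detailed, intricate, HDR"
)
stages = parse_prompt_stages(prompt)
print(stage_for_iteration(stages, 0))  # base pass: the subject prompt
print(stage_for_iteration(stages, 1))  # highres pass: detail prompt, no subject
```

Because the highres stage omits the subject, iterating on it adds texture and detail without repeating the subject into the image.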
### Long prompt weighting syntax