From 3cd4d607dd9ab8c0c325064d8ba673bd24bdc1a1 Mon Sep 17 00:00:00 2001
From: Sean Sube
Date: Sat, 16 Dec 2023 17:06:17 -0600
Subject: [PATCH] fix links in doc site readme

---
 docs/index.md | 58 +++++++++++++++++++++++++++++-----------------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/docs/index.md b/docs/index.md
index a8bb29ab..b651b7ec 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -10,12 +10,12 @@ last few output images are shown below the image controls, making it easy to ref
 an image from earlier.
 
 The API runs on both Linux and Windows and provides a REST API to run many of the pipelines from [`diffusers`
-](https://huggingface.co/docs/diffusers/main/en/index), along with metadata about the available models and accelerators,
+](https://huggingface.co/docs/diffusers/main/en/index), along with metadata about the available models and accelerators,
 and the output of previous runs. Hardware acceleration is supported on both AMD and Nvidia for both Linux and
 Windows, with a CPU fallback capable of running on laptop-class machines.
 
-Please check out [the setup guide to get started](docs/setup-guide.md) and [the user guide for more
-details](https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md).
+Please check out [the setup guide to get started](./setup-guide.md) and [the user guide for more
+details](https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md).
 
 ![preview of txt2img tab using SDXL to generate ghostly astronauts eating weird hamburgers on an abandoned space station](./readme-sdxl.png)
 
@@ -27,27 +27,27 @@ This is an incomplete list of new and interesting features, with links to the us
 - wide variety of schedulers: DDIM, DEIS, DPM SDE, Euler Ancestral, LCM, UniPC, and more
 - hardware acceleration on both AMD and Nvidia
   - tested on CUDA, DirectML, and ROCm
-  - [half-precision support for low-memory GPUs](docs/user-guide.md#optimizing-models-for-lower-memory-usage) on both
+  - [half-precision support for low-memory GPUs](./user-guide.md#optimizing-models-for-lower-memory-usage) on both
     AMD and Nvidia
   - software fallback for CPU-only systems
 - web app to generate and view images
   - [hosted on Github Pages](https://ssube.github.io/onnx-web), from your CDN, or locally
-  - [persists your recent images and progress as you change tabs](docs/user-guide.md#image-history)
+  - [persists your recent images and progress as you change tabs](./user-guide.md#image-history)
   - queue up multiple images and retry errors
   - translations available for English, French, German, and Spanish (please open an issue for more)
 - supports many `diffusers` pipelines
-  - [txt2img](docs/user-guide.md#txt2img-tab)
-  - [img2img](docs/user-guide.md#img2img-tab)
-  - [inpainting](docs/user-guide.md#inpaint-tab), with mask drawing and upload
-  - [panorama](docs/user-guide.md#panorama-pipeline), for both SD v1.5 and SDXL
-  - [upscaling](docs/user-guide.md#upscale-tab), with ONNX acceleration
-- [add and use your own models](docs/user-guide.md#adding-your-own-models)
-  - [convert models from diffusers and SD checkpoints](docs/converting-models.md)
-  - [download models from HuggingFace hub, Civitai, and HTTPS sources](docs/user-guide.md#model-sources)
+  - [txt2img](./user-guide.md#txt2img-tab)
+  - [img2img](./user-guide.md#img2img-tab)
+  - [inpainting](./user-guide.md#inpaint-tab), with mask drawing and upload
+  - [panorama](./user-guide.md#panorama-pipeline), for both SD v1.5 and SDXL
+  - [upscaling](./user-guide.md#upscale-tab), with ONNX acceleration
+- [add and use your own models](./user-guide.md#adding-your-own-models)
+  - [convert models from diffusers and SD checkpoints](./converting-models.md)
+  - [download models from HuggingFace hub, Civitai, and HTTPS sources](./user-guide.md#model-sources)
 - blend in additional networks
-  - [permanent and prompt-based blending](docs/user-guide.md#permanently-blending-additional-networks)
-  - [supports LoRA and LyCORIS weights](docs/user-guide.md#lora-tokens)
-  - [supports Textual Inversion concepts and embeddings](docs/user-guide.md#textual-inversion-tokens)
+  - [permanent and prompt-based blending](./user-guide.md#permanently-blending-additional-networks)
+  - [supports LoRA and LyCORIS weights](./user-guide.md#lora-tokens)
+  - [supports Textual Inversion concepts and embeddings](./user-guide.md#textual-inversion-tokens)
   - each layer of the embeddings can be controlled and used individually
 - ControlNet
   - image filters for edge detection and other methods
@@ -55,18 +55,18 @@ This is an incomplete list of new and interesting features, with links to the us
 - highres mode
   - runs img2img on the results of the other pipelines
   - multiple iterations can produce 8k images and larger
-- [multi-stage](docs/user-guide.md#prompt-stages) and [region prompts](docs/user-guide.md#region-tokens)
+- [multi-stage](./user-guide.md#prompt-stages) and [region prompts](./user-guide.md#region-tokens)
   - seamlessly combine multiple prompts in the same image
   - provide prompts for different areas in the image and blend them together
   - change the prompt for highres mode and refine details without recursion
 - infinite prompt length
-  - [with long prompt weighting](docs/user-guide.md#long-prompt-weighting)
-- [image blending mode](docs/user-guide.md#blend-tab)
+  - [with long prompt weighting](./user-guide.md#long-prompt-weighting)
+- [image blending mode](./user-guide.md#blend-tab)
   - combine images from history
 - upscaling and correction
   - upscaling with Real ESRGAN, SwinIR, and Stable Diffusion
   - face correction with CodeFormer and GFPGAN
-- [API server can be run remotely](docs/server-admin.md)
+- [API server can be run remotely](./server-admin.md)
   - REST API can be served over HTTPS or HTTP
   - background processing for all image pipelines
   - polling for image status, plays nice with load balancers
@@ -92,33 +92,33 @@ This is an incomplete list of new and interesting features, with links to the us
 There are a few ways to run onnx-web:
 
 - cross-platform:
-  - [clone this repository, create a virtual environment, and run `pip install`](docs/setup-guide.md#cross-platform-method)
-  - [pulling and running the OCI containers](docs/server-admin.md#running-the-containers)
+  - [clone this repository, create a virtual environment, and run `pip install`](./setup-guide.md#cross-platform-method)
+  - [pulling and running the OCI containers](./server-admin.md#running-the-containers)
 - on Windows:
-  - [clone this repository and run one of the `setup-*.bat` scripts](docs/setup-guide.md#windows-python-installer)
-  - [download and run the experimental all-in-one bundle](docs/setup-guide.md#windows-all-in-one-bundle)
+  - [clone this repository and run one of the `setup-*.bat` scripts](./setup-guide.md#windows-python-installer)
+  - [download and run the experimental all-in-one bundle](./setup-guide.md#windows-all-in-one-bundle)
 
 You only need to run the server and should not need to compile anything. The client GUI is hosted on Github Pages and
 is included with the Windows all-in-one bundle.
 
-The extended setup docs have been [moved to the setup guide](docs/setup-guide.md).
+The extended setup docs have been [moved to the setup guide](./setup-guide.md).
 
 ### Adding your own models
 
-You can [add your own models](./docs/user-guide.md#adding-your-own-models) by downloading them from the HuggingFace Hub
+You can [add your own models](./user-guide.md#adding-your-own-models) by downloading them from the HuggingFace Hub
 or Civitai or by converting them from local files, without making any code changes. You can also download and blend in
 additional networks, such as LoRAs and Textual Inversions, using [tokens in the
-prompt](docs/user-guide.md#prompt-tokens).
+prompt](./user-guide.md#prompt-tokens).
 
 ## Usage
 
 ### Known errors and solutions
 
-Please see [the Known Errors section of the user guide](https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#known-errors).
+Please see [the Known Errors section of the user guide](https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#known-errors).
 
 ### Running the containers
 
-This has [been moved to the server admin guide](docs/server-admin.md#running-the-containers).
+This has [been moved to the server admin guide](./server-admin.md#running-the-containers).
 
 ## Credits
 
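Bulk link rewrites like the ones in this patch are easy to over-apply, especially to absolute URLs that happen to contain a `docs/` segment. A lightweight checker could catch that class of mistake before the site is published. The following is only a sketch of such a helper, not a script that exists in the onnx-web tree: it assumes the markdown files sit directly under `docs/` and treats a `/./` segment in an absolute URL as the tell-tale sign of an over-eager `docs/` to `./` rewrite.

```python
#!/usr/bin/env python3
"""Sketch: verify markdown links under docs/ before publishing the site."""

import re
from pathlib import Path

# Capture the target of [text](target), splitting off any #anchor suffix.
LINK = re.compile(r"\]\(([^)#\s]+)[^)]*\)")


def check_links(docs_dir: str = "docs") -> int:
    """Return the number of broken or suspicious links found."""
    errors = 0
    for md_file in Path(docs_dir).glob("*.md"):
        for match in LINK.finditer(md_file.read_text(encoding="utf-8")):
            target = match.group(1)
            if target.startswith(("http://", "https://")):
                # A "/./" segment in an absolute URL suggests a docs/ prefix
                # was rewritten to ./ inside a link that should not change.
                if "/./" in target:
                    print(f"{md_file}: suspicious URL {target}")
                    errors += 1
            # Relative targets should resolve against the file's own folder.
            elif not (md_file.parent / target).exists():
                print(f"{md_file}: broken relative link {target}")
                errors += 1
    return errors


if __name__ == "__main__":
    raise SystemExit(1 if check_links() else 0)
```

Run from the repository root, the sketch exits non-zero when any markdown link under `docs/` fails to resolve, so it could gate a docs build in CI.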