
Recommitting everything

HoopyFreud 2023-04-21 15:17:31 -04:00
parent c7aea34b28
commit 21b53c1b7b
7 changed files with 112 additions and 3 deletions


@@ -1,9 +1,11 @@
 echo "Downloading and converting models to ONNX format..."
+IF "%ONNX_WEB_EXTRA_MODELS%"=="" (set ONNX_WEB_EXTRA_MODELS=..\models\extras.json)
 python -m onnx_web.convert ^
 --sources ^
 --diffusion ^
 --upscaling ^
 --correction ^
+--extras=%ONNX_WEB_EXTRA_MODELS% ^
 --token=%HF_TOKEN% %ONNX_WEB_EXTRA_ARGS%
 echo "Launching API server..."
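The `IF` guard added here only assigns the fallback path when `ONNX_WEB_EXTRA_MODELS` is empty, so a value set beforehand wins. A minimal sketch of that behavior, using a hypothetical path:

```shell
# hypothetical Windows session, before running the convert/launch script
> set ONNX_WEB_EXTRA_MODELS=D:\my-models\extras.json
# the guarded default from the script leaves an existing value untouched
> IF "%ONNX_WEB_EXTRA_MODELS%"=="" (set ONNX_WEB_EXTRA_MODELS=..\models\extras.json)
> echo %ONNX_WEB_EXTRA_MODELS%
D:\my-models\extras.json
```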


@@ -21,6 +21,7 @@ python3 -m onnx_web.convert \
 --diffusion \
 --upscaling \
 --correction \
+--extras=${ONNX_WEB_EXTRA_MODELS:-../models/extras.json} \
 --token=${HF_TOKEN:-} \
 ${ONNX_WEB_EXTRA_ARGS:-}
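The Linux script gets the same behavior from shell parameter expansion: `${ONNX_WEB_EXTRA_MODELS:-../models/extras.json}` falls back to the bundled file only when the variable is unset or empty. A small sketch with a hypothetical path:

```shell
# hypothetical Linux session, before running the convert/launch script
> export ONNX_WEB_EXTRA_MODELS=/home/user/my-extras.json
> echo "${ONNX_WEB_EXTRA_MODELS:-../models/extras.json}"
/home/user/my-extras.json
```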


@@ -71,6 +71,7 @@ Please see [the server admin guide](server-admin.md) for details on how to confi
 - [Optimizing models for lower memory usage](#optimizing-models-for-lower-memory-usage)
 - [Permanently blending additional networks](#permanently-blending-additional-networks)
 - [Extras file format](#extras-file-format)
+- [Environment variables](#environment-variables)
 - [Known errors](#known-errors)
 - [Check scripts](#check-scripts)
 - [Check environment script](#check-environment-script)
@@ -540,7 +541,7 @@ Resets the state of each tab to the default, if some controls become glitchy.
 ## Adding your own models
 
 You can convert and use your own models without making any code changes. Models are stored in
-[the `api/extras.json` file](../api/extras.json) - you can make a copy to avoid any updates replacing your models in
+[the `models/extras.json` file](../models/extras.json) - you can make a copy to avoid any updates replacing your models in
 the future. Add an entry for each of the models that you would like to use:
 
 ```json
@@ -608,8 +609,7 @@ See [the converting models guide](converting-models.md) for more details.
 Be careful loading pickle tensors, as they may contain unsafe code which will be executed on your machine. Use
 safetensors instead whenever possible.
 
-Set the `ONNX_WEB_EXTRA_MODELS` environment variable to the path to your file and make sure to use the `launch-extras`
-script. For example:
+Set the `ONNX_WEB_EXTRA_MODELS` environment variable to the path to your file if not using [the `models/extras.json` file](../models/extras.json). For example:
 
 ```shell
 # on Linux:
@@ -889,6 +889,104 @@ some common configurations in a server context.
 - strings
 - additional translation strings
 
+## Environment variables
+
+This section catalogs the environment variables that can be used to configure the server. They can be set in the server launch scripts as follows:
+
+```shell
+# on Linux:
+> export [ENVIRONMENT VARIABLE NAME]=[VALUE]
+
+# on Windows:
+> set [ENVIRONMENT VARIABLE NAME]=[VALUE]
+```
+
+The available environment variables are as follows:
+
+```shell
+ONNX_WEB_MODEL_PATH
+```
+
+The path to the models folder. Defaults to /models.
+
+```shell
+ONNX_WEB_EXTRA_MODELS
+```
+
+The path to the extra models JSON file. See [the Adding your own models section](#adding-your-own-models) for more information. Defaults to none.
+
+```shell
+ONNX_WEB_OUTPUT_PATH
+```
+
+The path to the output folder for generated images. Defaults to /outputs.
+
+```shell
+ONNX_WEB_PARAMS_PATH
+```
+
+The path to the params.json file that holds the model parameters currently in use. Defaults to /api. Not accessible by default in the Windows bundle; use the web interface or set another path.
+
+```shell
+ONNX_WEB_CORS_ORIGIN
+```
+
+The allowed origin for cross-origin (CORS) requests to the API server.
+
+```shell
+ONNX_WEB_ANY_PLATFORM
+```
+
+Whether to offer the `any` platform option in the client.
+
+```shell
+ONNX_WEB_BLOCK_PLATFORMS
+```
+
+The platforms to hide from the client (one of the bundle launch scripts sets this to `cpu`).
+
+```shell
+ONNX_WEB_DEFAULT_PLATFORM
+```
+
+The platform that will be selected by default in the client.
+
+```shell
+ONNX_WEB_IMAGE_FORMAT
+```
+
+The image format for output images. Defaults to .png.
+
+```shell
+ONNX_WEB_CACHE_MODELS
+```
+
+The number of models to cache. Decreasing this value may decrease VRAM usage and increase stability when switching models, but may also increase the time needed to load models when switching. Defaults to 5.
+
+```shell
+ONNX_WEB_SHOW_PROGRESS
+```
+
+Whether to show image generation progress in the web UI. Defaults to True.
+
+```shell
+ONNX_WEB_OPTIMIZATIONS
+```
+
+The model optimizations to enable. See [the Optimizing models for lower memory usage section](#optimizing-models-for-lower-memory-usage) for more information.
+
+```shell
+ONNX_WEB_JOB_LIMIT
+```
+
+The job limit for the server. Defaults to 10.
+
+```shell
+ONNX_WEB_MEMORY_LIMIT
+```
+
+The memory usage limit. Defaults to none.
+
 ## Known errors
 
 This section attempts to cover all of the known errors and their solutions.
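As a purely hypothetical illustration of the section added above, a Linux server could set a few of these variables before launch; the paths and values below are placeholders:

```shell
# hypothetical configuration; adjust paths for your own install
> export ONNX_WEB_MODEL_PATH=/opt/onnx-web/models
> export ONNX_WEB_OUTPUT_PATH=/opt/onnx-web/outputs
> export ONNX_WEB_EXTRA_MODELS=/opt/onnx-web/models/extras.json
# keep fewer models cached to reduce VRAM usage
> export ONNX_WEB_CACHE_MODELS=2
```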


@@ -1,6 +1,7 @@
 set ONNX_WEB_BASE_PATH=%~dp0
 set ONNX_WEB_BUNDLE_PATH=%ONNX_WEB_BASE_PATH%\client
 set ONNX_WEB_MODEL_PATH=%ONNX_WEB_BASE_PATH%\models
+set ONNX_WEB_EXTRA_MODELS=%ONNX_WEB_BASE_PATH%\models\onnx-web-extras.json
 set ONNX_WEB_OUTPUT_PATH=%ONNX_WEB_BASE_PATH%\outputs
 
 @echo Launching onnx-web in full-precision mode...


@@ -1,6 +1,7 @@
 set ONNX_WEB_BASE_PATH=%~dp0
 set ONNX_WEB_BUNDLE_PATH=%ONNX_WEB_BASE_PATH%\client
 set ONNX_WEB_MODEL_PATH=%ONNX_WEB_BASE_PATH%\models
+set ONNX_WEB_EXTRA_MODELS=%ONNX_WEB_BASE_PATH%\models\onnx-web-extras.json
 set ONNX_WEB_OUTPUT_PATH=%ONNX_WEB_BASE_PATH%\outputs
 set ONNX_WEB_BLOCK_PLATFORMS=cpu

models/.gitignore

@@ -1 +1,2 @@
 *
+!extras.json

models/extras.json

@@ -0,0 +1,5 @@
+{
+  "diffusion": [],
+  "correction": [],
+  "upscaling": []
+}
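After adding model entries to this file (as described in the user guide changes above), it can be worth checking that it still parses as JSON. One quick way, assuming Python is available on the PATH:

```shell
# pretty-prints the file, or exits with an error if the JSON is invalid
> python -m json.tool models/extras.json
{
    "diffusion": [],
    "correction": [],
    "upscaling": []
}
```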