onnx-web/exe
Latest commit: 35371d33fe by Sean Sube, "chore(release): 0.12.0", 2023-12-31 14:15:31 -06:00
File                        Last commit message                                                Date
gfpgan                      fix(exe): add missing gfpgan folders (#315)                        2023-04-21 23:45:34 -05:00
.gitignore                  fix(exe): add build script for Windows bundle                      2023-04-30 14:24:43 -05:00
README.txt                  indent bundle readme better                                        2023-03-30 21:12:31 -05:00
build.bat                   chore(release): 0.12.0                                             2023-12-31 14:15:31 -06:00
entry.py                    lint(exe): move Windows bundle entry script to correct folder      2023-04-23 18:22:50 -05:00
onnx-web-full.bat           fix: download LoRAs and other networks by default                  2023-12-30 19:33:40 -06:00
onnx-web-half.bat           fix: download LoRAs and other networks by default                  2023-12-30 19:33:40 -06:00
win10.directml.dir.spec     fix(exe): include necessary codeformer and timm sources in bundle  2023-12-22 13:31:56 -06:00
win10.directml.file.spec    fix(exe): include necessary codeformer and timm sources in bundle  2023-12-22 13:31:56 -06:00

README.txt

# onnx-web Windows bundle

This is the Windows all-in-one bundle for onnx-web, a tool for running Stable Diffusion and
other models using ONNX hardware acceleration: https://github.com/ssube/onnx-web

Please check the setup guide for the latest instructions; this copy may be from an older version:
https://github.com/ssube/onnx-web/blob/main/docs/setup-guide.md#windows-all-in-one-bundle

## Running onnx-web

You can run the local server using one of the launch scripts, onnx-web-full.bat or onnx-web-half.bat. Use
onnx-web-half.bat if you are running on a GPU with less than 12GB of VRAM. Use onnx-web-full.bat if you are
running in CPU mode or have 16GB or more of VRAM.
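The choice between the two scripts can be summarized as a simple rule. A minimal sketch in Python, assuming the thresholds above (the function name is illustrative, and the readme does not state a rule for the 12-16GB range, so the fallback branch here is an assumption):

```python
def pick_launch_script(cpu_mode: bool, vram_gb: float) -> str:
    """Suggest which bundle launch script to use, per the readme's guidance."""
    # full precision: CPU mode, or a GPU with plenty of VRAM
    if cpu_mode or vram_gb >= 16:
        return "onnx-web-full.bat"
    # half precision: GPU with limited VRAM
    if vram_gb < 12:
        return "onnx-web-half.bat"
    # 12-16 GB is not covered by the readme; defaulting to half precision (assumption)
    return "onnx-web-half.bat"
```

For example, a GPU with 8GB of VRAM gets onnx-web-half.bat, while CPU mode always gets onnx-web-full.bat.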

Once the server is running, the user interface should be available in your browser at
http://127.0.0.1:5000?api=http://127.0.0.1:5000. If your PC uses a different IP address, or if you are running
the server on one PC and accessing it from another, substitute that IP address in both places in the URL.
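Because the address appears twice in the URL (once as the page host and once in the api query parameter), it can help to build it programmatically. A small sketch, where the helper name and the LAN address are made-up examples:

```python
def build_ui_url(host: str, port: int = 5000) -> str:
    # the same address is used twice: once for the web UI page,
    # once in the ?api= parameter that tells the UI where the server is
    base = f"http://{host}:{port}"
    return f"{base}?api={base}"

# default local address
print(build_ui_url("127.0.0.1"))
# -> http://127.0.0.1:5000?api=http://127.0.0.1:5000

# hypothetical LAN address, for reaching the server from another PC
print(build_ui_url("192.168.1.20"))
# -> http://192.168.1.20:5000?api=http://192.168.1.20:5000
```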