
feat(docs): add platform/model compatibility list

Sean Sube 2023-01-24 08:39:25 -06:00
parent 6b2ae2aeab
commit b22f15600b
3 changed files with 70 additions and 9 deletions

@@ -1,5 +1,7 @@
# Very Rough Benchmarks
CUDA > ROCm > DirectML > drawing it yourself > CPU
Using 25 steps of Euler A in txt2img, 512x512; a rough conversion from these rates to seconds per image is sketched after the list.
- CPU:
@@ -9,6 +11,8 @@ Using 25 steps of Euler A in txt2img, 512x512.
- 7950X: 3.5s/it, 90sec/image
- GPU:
- AMD:
- 6900XT: 3.5it/s, 9sec/image
- 6900XT
- Win10, DirectML: 3.5it/s, 9sec/image
- Ubuntu 20.04, ROCm 5.2: 4.5it/s, 6sec/image
- Nvidia:
- 4090: 6.5it/s, 4sec/image
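These rates convert to per-image times with simple arithmetic: at 25 steps, seconds per image is roughly 25 divided by the it/s rate (or 25 times the s/it rate), with measured values running a little higher because of fixed overhead. A minimal sketch of that conversion, assuming the settings above:

```python
# Rough conversion between a reported it/s rate and seconds per image.
# Assumes the 25-step Euler A, 512x512 txt2img settings used above; measured
# times are slightly higher due to fixed overhead (model load, VAE decode,
# saving the output).

STEPS = 25

def seconds_per_image(it_per_sec: float) -> float:
    """Estimate seconds per image from an it/s rate at STEPS steps."""
    return STEPS / it_per_sec

print(seconds_per_image(3.5))  # ~7.1s, measured ~9s on the 6900XT (DirectML)
print(seconds_per_image(6.5))  # ~3.8s, measured ~4s on the 4090
```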

docs/compatibility.md Normal file
@@ -0,0 +1,55 @@
# Compatibility
## Contents
- [Compatibility](#compatibility)
- [Contents](#contents)
- [Driver Versions](#driver-versions)
- [Container/Platform Acceleration](#containerplatform-acceleration)
- [Container Notes](#container-notes)
- [Model/Platform Acceleration](#modelplatform-acceleration)
- [Model Notes](#model-notes)
## Driver Versions
- CUDA
- 11.6
- 11.7
- ROCm
- 5.2
- 5.4 seems like it might work
## Container/Platform Acceleration
| Runtime | CUDA | DirectML | ROCm | CPU |
| ------- | ---- | -------- | -------- | --- |
| docker | yes | no, 1 | maybe, 2 | yes |
| podman | 3 | no, 1 | maybe, 2 | yes |
### Container Notes
1. no package available: https://github.com/ssube/onnx-web/issues/63
2. should work but testing failed: https://github.com/ssube/onnx-web/issues/10
3. should work, not tested: https://gist.github.com/bernardomig/315534407585d5912f5616c35c7fe374
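One way to confirm which of these accelerators a runtime actually exposes is to ask ONNX Runtime from inside the container (or on the host). A minimal sketch, assuming whichever `onnxruntime` build (CUDA, DirectML, or ROCm) is installed in that environment:

```python
# List the execution providers ONNX Runtime can see in this environment.
# The names below are the standard ONNX Runtime identifiers for CUDA,
# DirectML, ROCm, and CPU.
import onnxruntime

available = onnxruntime.get_available_providers()
print("available providers:", available)

for provider in ("CUDAExecutionProvider", "DmlExecutionProvider",
                 "ROCMExecutionProvider", "CPUExecutionProvider"):
    print(f"{provider}: {'yes' if provider in available else 'no'}")
```

A provider missing from this list usually means the wrong `onnxruntime` package is installed for that platform, rather than a container problem.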
## Model/Platform Acceleration
| Model | CUDA | DirectML | ROCm | CPU |
| ---------------- | ---- | -------- | ----- | --- |
| Stable Diffusion | yes | yes | yes | yes |
| Real ESRGAN | yes | yes | no, 1 | yes |
| GFPGAN | 2 | 2 | 2 | yes |
### Model Notes
1. Real ESRGAN running on ROCm crashes with an error:
```none
File "/home/ssube/onnx-web/api/onnx_web/upscale.py", line 67, in __call__
output = self.session.run([output_name], {
File "/home/ssube/onnx-web/api/onnx_env/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running FusedConv node. Name:'/body/body.0/rdb1/conv1/Conv' Status Message: MIOPEN failure 1: miopenStatusNotInitialized ; GPU=0 ; hostname=ssube-notwin ; expr=status_;
```
2. GFPGAN seems to always run in CPU mode (a provider check sketch follows these notes)
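A quick way to see which accelerator a model is really using, for example to confirm whether GFPGAN has fallen back to CPU, is to request a preferred execution provider when creating the session and then inspect what was actually bound. This is a minimal sketch rather than the project's own loading code, and the model path is a hypothetical placeholder:

```python
# Pin a session to a preferred provider and verify what ONNX Runtime bound.
# If a GPU provider cannot be initialized, ONNX Runtime can fall back to the
# next entry in the list, which is how a model ends up running on CPU.
import onnxruntime

available = onnxruntime.get_available_providers()
preferred = [p for p in ("CUDAExecutionProvider", "DmlExecutionProvider",
                         "ROCMExecutionProvider") if p in available]

session = onnxruntime.InferenceSession(
    "models/gfpgan.onnx",  # hypothetical path; use any converted ONNX model
    providers=preferred + ["CPUExecutionProvider"],
)

# Shows the providers actually in use; only CPUExecutionProvider here means
# the GPU provider was unavailable or failed, as in the MIOpen error above.
print(session.get_providers())
```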

@@ -3,14 +3,16 @@
This is the user guide for ONNX web, a web GUI for running ONNX models with hardware acceleration on both AMD and Nvidia
systems, with a CPU software fallback.
The API runs on both Linux and Windows and provides access to the major functionality of diffusers, along with metadata
about the available models and accelerators, and the output of previous runs. Hardware acceleration is supported on both
AMD and Nvidia for both Linux and Windows, with a CPU fallback capable of running on laptop-class machines.
The API is written in Python; it runs on both Linux and Windows and provides access to the major functionality of
diffusers, along with metadata about the available models and accelerators, and the output of previous runs. Hardware
acceleration is supported on both AMD and Nvidia for both Linux and Windows, with a CPU fallback capable of running on
laptop-class machines.
The GUI is hosted on Github Pages and runs in all major browsers, including on mobile devices. It allows you to select
the model and accelerator being used for each image pipeline. Image parameters are shown for each of the major modes,
and you can either upload or paint the mask for inpainting and outpainting. The last few output images are shown below
the image controls, making it easy to refer back to previous parameters or save an image from earlier.
The GUI is written in JavaScript, hosted on GitHub Pages, and runs in all major browsers, including on mobile devices.
It allows you to select the model and accelerator being used for each image pipeline. Image parameters are shown for
each of the major modes, and you can either upload or paint the mask for inpainting and outpainting. The last few output
images are shown below the image controls, making it easy to refer back to previous parameters or save an image from
earlier.
Please see [the server admin guide](server-admin.md) for details on how to configure and run the server.
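As an illustration of the kind of access the API exposes, the sketch below queries the server for its available models and accelerator platforms before submitting any work. The base URL and the `/api/settings/...` paths are assumptions for illustration rather than a documented contract; see the server admin guide above for the real configuration:

```python
# Hypothetical client sketch: ask the API which models and accelerator
# platforms it offers. The endpoint paths are assumptions for illustration.
import requests

API = "http://127.0.0.1:5000"  # assumed local server address

models = requests.get(f"{API}/api/settings/models").json()        # hypothetical route
platforms = requests.get(f"{API}/api/settings/platforms").json()  # hypothetical route

print("models:", models)
print("platforms:", platforms)
```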
@@ -92,7 +94,7 @@ will need a simple drawing component, but anything more complicated, like layers
the Gimp, Krita, or Photoshop.
This is _not_ a tool for building new ML models. While I am open to some training features, like Dreambooth and anything
needed to convert models, that is not the focus and should be limited features that support the other tabs.
needed to convert models, that is not the focus and should be limited to features that support the other tabs.
### ONNX models