fix(docs): add conversion error to user guide (#404)

Sean Sube 2023-12-22 11:24:02 -06:00
parent 1f778abd69
commit 108c502869
Signed by: ssube
GPG Key ID: 3EED7B957D362AF1
2 changed files with 32 additions and 9 deletions

View File

@@ -4,6 +4,9 @@ This is a copy of MODEL TITLE converted to the ONNX format for use with tools th
https://github.com/ssube/onnx-web. If you have questions about using this model, please see
https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#pre-converted-models.
FP16 WARNING: This model has been converted to FP16 format and will not run correctly on the CPU platform. If you are
using the CPU platform, please use the FP32 model instead.
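If you are not sure which precision you have, you can inspect the tensor types in the ONNX graph. This is a minimal
sketch using the `onnx` Python package; the `unet/model.onnx` path is an assumption based on the folder structure
described below.

```python
import onnx

# Load the converted UNet and check the element type of its first input.
# The path is an example; point it at the folder where you extracted the model.
model = onnx.load("unet/model.onnx")
elem_type = model.graph.input[0].type.tensor_type.elem_type

if elem_type == onnx.TensorProto.FLOAT16:
    print("FP16 model: use a GPU platform, not the CPU platform")
elif elem_type == onnx.TensorProto.FLOAT:
    print("FP32 model: safe to run on the CPU platform")
```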
As a derivative of MODEL TITLE, the model files that came with this README are licensed under the terms of TODO. A copy
of the license was included in the archive. Please make sure to read and follow the terms before you use this model or
redistribute these files.
@@ -12,6 +15,13 @@ If you are the author of this model and have questions about ONNX models or woul
distribution or moved to another site, please contact ssube on https://github.com/ssube/onnx-web/issues or
https://discord.gg/7CdQmutGuw.
## Adding models
Extract the entire ZIP archive into the models folder of your onnx-web installation and restart the server or click the
Restart Workers button in the web UI and then refresh the page.
Please see https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#adding-your-own-models for more details.
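As an illustration, the extraction step could also be scripted with a short Python sketch; both paths below are
placeholders for your actual download and install locations.

```python
import zipfile

# Extract the downloaded model archive into the onnx-web models folder,
# keeping the folder structure described in the next section intact.
with zipfile.ZipFile("downloaded-model.zip") as archive:
    archive.extractall("onnx-web/models/")
```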
## Folder structure
- cnet
@@ -32,12 +42,4 @@ https://discord.gg/7CdQmutGuw.
- UNet model
- vae_decoder
- VAE decoder model
- vae_encoder
- VAE encoder model
## Adding models
Extract the entire ZIP archive into the models folder of your onnx-web installation and restart the server or click the
Restart Workers button in the web UI and then refresh the page.
Please see https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#adding-your-own-models for more details.

View File

@@ -126,6 +126,7 @@ Please see [the server admin guide](server-admin.md) for details on how to confi
- [The expanded size of the tensor must match the existing size](#the-expanded-size-of-the-tensor-must-match-the-existing-size)
- [Shape mismatch attempting to re-use buffer](#shape-mismatch-attempting-to-re-use-buffer)
- [Cannot read properties of undefined (reading 'default')](#cannot-read-properties-of-undefined-reading-default)
- [Missing key(s) in state\_dict](#missing-keys-in-state_dict)
- [Output Image Sizes](#output-image-sizes)
## Outline
@@ -1719,6 +1720,26 @@ Could not fetch parameters from the onnx-web API server at http://10.2.2.34:5000
Cannot read properties of undefined (reading 'default')
```
#### Missing key(s) in state_dict
This can happen when you try to convert a newer Stable Diffusion checkpoint with Torch model extraction enabled. The
code used for model extraction does not support some of the keys found in recent checkpoints and will raise an error.
To avoid this, make sure you have set the `ONNX_WEB_CONVERT_EXTRACT` environment variable to `FALSE`, which disables
Torch model extraction during conversion.
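As a sketch, the variable can also be set from Python before the conversion runs in the same process; most installs
would instead export it in the shell or service configuration that launches the server.

```python
import os

# Disable Torch model extraction so the converter takes the supported code path.
# This must be set before the conversion scripts read their environment.
os.environ["ONNX_WEB_CONVERT_EXTRACT"] = "FALSE"
```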
Example error:
```none
Traceback (most recent call last):
File "/opt/onnx-web/api/onnx_web/convert/diffusion/checkpoint.py", line 1570, in extract_checkpoint
vae.load_state_dict(converted_vae_checkpoint)
File "/home/ssube/miniconda3/envs/onnx-web-rocm-pytorch2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AutoencoderKL:
Missing key(s) in state_dict: "encoder.mid_block.attentions.0.to_q.weight", "encoder.mid_block.attentions.0.to_q.bias", "encoder.mid_block.attentions.0.to_k.weight", "encoder.mid_block.attentions.0.to_k.bias", "encoder.mid_block.attentions.0.to_v.weight", "encoder.mid_block.attentions.0.to_v.bias", "encoder.mid_block.attentions.0.to_out.0.weight", "encoder.mid_block.attentions.0.to_out.0.bias", "decoder.mid_block.attentions.0.to_q.weight", "decoder.mid_block.attentions.0.to_q.bias", "decoder.mid_block.attentions.0.to_k.weight", "decoder.mid_block.attentions.0.to_k.bias", "decoder.mid_block.attentions.0.to_v.weight", "decoder.mid_block.attentions.0.to_v.bias", "decoder.mid_block.attentions.0.to_out.0.weight", "decoder.mid_block.attentions.0.to_out.0.bias".
Unexpected key(s) in state_dict: "encoder.mid_block.attentions.0.key.bias", "encoder.mid_block.attentions.0.key.weight", "encoder.mid_block.attentions.0.proj_attn.bias", "encoder.mid_block.attentions.0.proj_attn.weight", "encoder.mid_block.attentions.0.query.bias", "encoder.mid_block.attentions.0.query.weight", "encoder.mid_block.attentions.0.value.bias", "encoder.mid_block.attentions.0.value.weight", "decoder.mid_block.attentions.0.key.bias", "decoder.mid_block.attentions.0.key.weight", "decoder.mid_block.attentions.0.proj_attn.bias", "decoder.mid_block.attentions.0.proj_attn.weight", "decoder.mid_block.attentions.0.query.bias", "decoder.mid_block.attentions.0.query.weight", "decoder.mid_block.attentions.0.value.bias", "decoder.mid_block.attentions.0.value.weight".
```
## Output Image Sizes
You can use this table to figure out the final size for each image, based on the combination of parameters that you are