feat(docs): cover pre-converted models in user guide
commit 99161c42de (parent 452e18daaa)
@@ -0,0 +1,13 @@
# MODEL TITLE
This is a copy of MODEL TITLE converted to the ONNX format for use with tools that use ONNX models,
such as https://github.com/ssube/onnx-web. If you have questions about using this model, please see
https://github.com/ssube/onnx-web/blob/main/docs/user-guide.md#pre-converted-models.
As a derivative of MODEL TITLE, the model files that came with this README are licensed under the
terms of TODO. A copy of the license was included in the archive. Please make sure to read and follow
the terms before you use this model or redistribute these files.
If you are the author of this model and have questions about ONNX models or would like to have
this model removed from distribution or moved to another site, please contact ssube on
https://github.com/ssube/onnx-web/issues or https://discord.gg/7CdQmutGuw.

@@ -99,6 +99,7 @@ Please see [the server admin guide](server-admin.md) for details on how to confi
- [Model sources](#model-sources)
- [Downloading models from Civitai](#downloading-models-from-civitai)
- [Downloading models from HuggingFace](#downloading-models-from-huggingface)
- [Pre-converted models](#pre-converted-models)
- [Using a custom VAE](#using-a-custom-vae)
- [Optimizing models for lower memory usage](#optimizing-models-for-lower-memory-usage)
- [Permanently blending additional networks](#permanently-blending-additional-networks)

@@ -1130,6 +1131,28 @@ When downloading models from HuggingFace, you can use the copy button next to th

![Stable Diffusion v1.5 model card with Copy model name option highlighted](guide-huggingface.png)
#### Pre-converted models
You can use models that have already been converted into the ONNX format and archived in a ZIP file. Using
pre-converted models is much faster and requires much less memory than converting models yourself, but limits the
optimizations that can be applied, since many optimizations are platform-specific.
To use a pre-converted model, put the URL or path to the ZIP archive in the `source` field and set the `pipeline` to
`archive`. For example:
```json
{
"diffusion": [
{
"name": "stable-diffusion-v1-5",
"source": "https://example.com/stable-diffusion-v1-5.zip",
"format": "safetensors",
"pipeline": "archive"
}
]
}
```
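If you are publishing a pre-converted model yourself, any ZIP tool can produce the archive. As a minimal sketch, assuming the archive root should match the converted model's root folder (the `pack_model` helper and that layout are illustrative assumptions, not part of the guide):

```python
import zipfile
from pathlib import Path

def pack_model(model_dir: str, archive_path: str) -> None:
    """Zip a converted model directory into a single archive.

    Assumes the unpacking tool expects files at the archive root;
    verify the expected layout before distributing the archive.
    """
    root = Path(model_dir)
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for file in sorted(root.rglob("*")):
            if file.is_file():
                # store paths relative to the model root so the archive
                # unpacks cleanly into one model folder
                zf.write(file, file.relative_to(root))
```

Remember to include the license file and the README described above alongside the model files before packing.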
### Using a custom VAE
You can use a custom VAE when converting models. Some models require a specific VAE, so if you get weird results,
|
||||
|
|
|
@ -3,6 +3,7 @@ site_name: onnx-web docs
|
|||
site_url: https://onnx-web.ai/docs
exclude_docs: |
runpod-readme.md
converted-readme.md
markdown_extensions:
- mdx_truly_sane_lists
plugins:
|
||||
|
|
Loading…
Reference in New Issue