From d26416465b35c8954f24d3471c012a43a81d70a3 Mon Sep 17 00:00:00 2001
From: Sean Sube
Date: Sun, 19 Mar 2023 22:30:24 -0500
Subject: [PATCH] add note about onnx-fp16 opt

---
 docs/converting-models.md | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/docs/converting-models.md b/docs/converting-models.md
index 30a5340a..333f9246 100644
--- a/docs/converting-models.md
+++ b/docs/converting-models.md
@@ -331,9 +331,15 @@ any new packages.
 
 ```shell
 > git clone https://github.com/microsoft/onnxruntime
 > cd onnxruntime/onnxruntime/python/tools/transformers/models/stable_diffusion
-> python3 optimize_pipeline.py -i /home/ssube/onnx-web/models/stable-diffusion
+> python3 optimize_pipeline.py \
+  -i /home/ssube/onnx-web/models/stable-diffusion-onnx-v1-5 \
+  -o /home/ssube/onnx-web/models/stable-diffusion-optimized-v1-5 \
+  --use_external_data_format
 ```
 
+You can convert the model into ONNX float16 using the `--float16` option. Models converted into ONNX float16 can only
+be run when using [the `onnx-fp16` optimization](server-admin#pipeline-optimizations).
+
 The `optimize_pipeline.py` script should work on any [diffusers directory with ONNX models](#converting-diffusers-models),
 but you will need to use the `--use_external_data_format` option if you are not using `--float16`. See `--help` for more details.