Commit Graph

16 Commits

Author SHA1 Message Date
Sean Sube 0315a8cbc6
fix(api): apply fp16 optimizations to LoRA and Textual Inversion blending 2023-03-21 21:45:27 -05:00
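
To illustrate the fp16 handling in blending, here is a minimal NumPy sketch (shapes, names, and scaling are illustrative, not the project's actual blending code): blend the LoRA delta in float32, then cast back to the fp16 base dtype so the patched tensor matches the converted model.

    import numpy as np

    rng = np.random.default_rng(0)
    # Illustrative fp16 initializer from a converted model plus an fp32 LoRA pair.
    base = rng.standard_normal((320, 320)).astype(np.float16)
    lora_down = rng.standard_normal((4, 320)).astype(np.float32)
    lora_up = rng.standard_normal((320, 4)).astype(np.float32)
    alpha, rank = 4.0, 4

    # Blend in float32 for accuracy, then cast back so the patched tensor still
    # matches the rest of the fp16 graph.
    delta = (lora_up @ lora_down) * (alpha / rank)
    blended = (base.astype(np.float32) + delta).astype(base.dtype)
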
Sean Sube 98e488319c
use fp16 optimization flag during conversion, add to admin docs 2023-03-19 10:33:46 -05:00
Sean Sube 243a2d9df6
apply lint 2023-03-19 09:29:06 -05:00
Sean Sube 2ef00599b6
experimental CLIP skip 2023-03-19 08:17:40 -05:00
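
CLIP skip takes the prompt embedding from an earlier layer of the text encoder instead of the last one. A minimal sketch with the transformers CLIP text encoder follows; the index convention and model ID are illustrative, and this is not the project's ONNX implementation.

    import torch
    from transformers import CLIPTextModel, CLIPTokenizer

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

    tokens = tokenizer("an astronaut riding a horse", return_tensors="pt")
    with torch.no_grad():
        output = text_encoder(**tokens, output_hidden_states=True)

    clip_skip = 2  # 1 = last layer (default), 2 = penultimate layer, and so on
    hidden = output.hidden_states[-clip_skip]
    # Re-apply the final layer norm so the earlier-layer embedding stays in distribution.
    prompt_embeds = text_encoder.text_model.final_layer_norm(hidden)
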
Sean Sube e3bf04ab8f
feat(api): add section to extras file for additional networks 2023-03-18 07:40:57 -05:00
Sean Sube c3979246df
make blending happen once after conversion 2023-03-18 07:14:22 -05:00
Sean Sube 843e2f1ff3
feat(api): look for an index file when checking for converted models (#222) 2023-03-07 23:40:04 -06:00
Sean Sube 10fbafaff0
fix(api): correct imports 2023-03-04 22:25:49 -06:00
Sean Sube 3f9f94fcb5
apply lint, remove unused 2023-02-28 23:05:17 -06:00
Sean Sube 2f4ab20f61
use filename for tensors 2023-02-28 22:49:53 -06:00
Sean Sube 74aae1b027
fix(api): write external weights into same directory as optimized model 2023-02-28 22:47:02 -06:00
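
A minimal sketch of saving external weights next to the model file with the onnx API (paths are illustrative): passing a bare filename as location keeps the tensor data in the same directory as the .onnx file.

    import onnx

    model = onnx.load("optimized/unet/model.onnx")
    onnx.save_model(
        model,
        "optimized/unet/model.onnx",
        save_as_external_data=True,
        all_tensors_to_one_file=True,
        location="weights.pb",  # a plain filename, resolved beside the model file
    )
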
Sean Sube dbf9eaf1a4
fix(api): run shape inference before converting models to fp16 2023-02-28 22:36:45 -06:00
per discussion in https://github.com/microsoft/onnxruntime/issues/14827
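
A minimal sketch of that ordering with the onnx API (paths are illustrative): run shape inference first so the fp16 converter sees value_info for every intermediate tensor, as discussed in the linked onnxruntime issue.

    import onnx
    from onnx.shape_inference import infer_shapes

    # Populate value_info for intermediate tensors before any fp16 conversion;
    # without it the converter can mis-handle nodes whose types it cannot see.
    model = onnx.load("unet/model.onnx")
    model = infer_shapes(model)
    onnx.save(model, "unet/model.onnx")
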
Sean Sube 9ef89db8b0
extract tensors after conversion 2023-02-28 22:36:33 -06:00
Sean Sube 7e65e21410
reload model from proto file before converting 2023-02-28 22:36:26 -06:00
Sean Sube 2210ee849b
only convert inner nodes with ORT conversion helpers 2023-02-28 22:26:04 -06:00
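
Restricting conversion to the inner nodes looks roughly like this with the convert_float_to_float16 helper (shown here from onnxconverter-common; ONNX Runtime ships an equivalent under onnxruntime.transformers.float16). The block list is illustrative.

    import onnx
    from onnxconverter_common.float16 import convert_float_to_float16

    model = onnx.load("unet/model.onnx")
    # keep_io_types leaves graph inputs/outputs in float32 and inserts Cast nodes
    # at the boundary, so only the interior of the graph is converted to fp16.
    model_fp16 = convert_float_to_float16(
        model,
        keep_io_types=True,
        op_block_list=["Resize"],  # example of an op kept in full precision
    )
    onnx.save(model_fp16, "unet/model.onnx")
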
Sean Sube a31f7b9e1f
feat(api): convert Textual Inversion weights 2023-02-25 11:53:13 -06:00
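
Converting a Textual Inversion generally means blending the learned embedding into the text encoder's token embedding matrix before export; a minimal sketch with transformers (the embedding file format and base model are illustrative, not the project's actual conversion code).

    import torch
    from transformers import CLIPTextModel, CLIPTokenizer

    # A diffusers-style learned embedding: a dict mapping the new token to its
    # vector (A1111-style .pt files nest the same data under "string_to_param").
    learned = torch.load("learned_embeds.bin", map_location="cpu")
    token, embedding = next(iter(learned.items()))

    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

    # Register the token, grow the embedding matrix, and copy the learned weights
    # in; the patched encoder can then be exported to ONNX like the original.
    tokenizer.add_tokens(token)
    text_encoder.resize_token_embeddings(len(tokenizer))
    token_id = tokenizer.convert_tokens_to_ids(token)
    text_encoder.get_input_embeddings().weight.data[token_id] = embedding
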