ONNX model format has become 2x faster #553
WEBPerformace started this discussion in General
ONNX has just become twice as fast as before, so I'm curious: are there any ONNX models on Civitai, and has anyone compared their performance with the SafeTensors versions? It looks very promising.
https://devblogs.microsoft.com/directx/dml-stable-diffusion/
https://www.tomshardware.com/news/nvidia-geforce-driver-promises-doubled-stable-diffusion-performance
https://build.microsoft.com/en-US/sessions/47fe414f-97b8-4b71-ae9e-be9602713667
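
Not from the thread, but to make the requested comparison concrete: a minimal benchmark sketch using Hugging Face diffusers (safetensors weights) against an ONNX Runtime export of the same checkpoint via optimum. The model ID, prompt, and step count are placeholders, and the PyTorch baseline assumes a CUDA GPU; absolute numbers depend entirely on your hardware and execution provider.

```python
# Rough A/B timing sketch: PyTorch/safetensors pipeline vs. an ONNX Runtime
# export of the same model. MODEL_ID and PROMPT are placeholders.
import time

import torch
from diffusers import StableDiffusionPipeline
from optimum.onnxruntime import ORTStableDiffusionPipeline

MODEL_ID = "runwayml/stable-diffusion-v1-5"  # placeholder checkpoint
PROMPT = "a photo of an astronaut riding a horse"

def time_pipeline(pipe, label, steps=25):
    # One warm-up pass so one-time compilation/caching doesn't skew timing.
    pipe(PROMPT, num_inference_steps=steps)
    start = time.perf_counter()
    pipe(PROMPT, num_inference_steps=steps)
    print(f"{label}: {time.perf_counter() - start:.1f}s for {steps} steps")

# Baseline: regular PyTorch pipeline loaded from safetensors weights.
pt_pipe = StableDiffusionPipeline.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
time_pipeline(pt_pipe, "PyTorch (safetensors)")

# ONNX: export the same checkpoint and run it through ONNX Runtime.
# (Pass provider="DmlExecutionProvider" here to test DirectML instead of
# the default CPU provider; that requires the onnxruntime-directml package.)
ort_pipe = ORTStableDiffusionPipeline.from_pretrained(MODEL_ID, export=True)
time_pipeline(ort_pipe, "ONNX Runtime")
```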
Replies: 1 comment

- The ONNX container format is nice because it enjoys wide support. In this case, though, the optimization is for DirectML, and it relies on a model optimized for DirectML.
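
For context on the DirectML point: an ONNX file is loaded the same way regardless of backend, and the DirectML speedup comes from selecting the DirectML execution provider in ONNX Runtime (plus a model graph optimized for it). A minimal sketch, assuming the onnxruntime-directml package; "model.onnx" is a placeholder path:

```python
# Run an ONNX model on the DirectML execution provider, falling back to CPU
# if DirectML isn't available on this machine.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # placeholder path to any exported ONNX model
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
print("Active providers:", session.get_providers())

# Feed a dummy tensor matching the model's first declared input
# (assumes a float32 input; dynamic dimensions are resolved to 1).
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)
outputs = session.run(None, {inp.name: dummy})
print("Output shapes:", [o.shape for o in outputs])
```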