The result is terrible
Wrong model, I believe.
See here:
https://huggingface.co/lightx2v/Hy1.5-Quantized-Models/discussions/2#6925c53a2b7e56f36e6eefea
Or make sure you download the files with "ComfyUI" in the filename here: https://huggingface.co/lightx2v/Hy1.5-Quantized-Models/tree/main
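To double-check you grabbed the right files, a minimal sketch that lists the repo and keeps only the ComfyUI builds (assumes `huggingface_hub` is installed; the example filenames in the demo list are hypothetical, check the actual repo listing):

```python
def pick_comfyui_files(names):
    """Keep only filenames that contain 'comfyui' (case-insensitive)."""
    return [n for n in names if "comfyui" in n.lower()]

# Live usage (uncomment; needs network and huggingface_hub installed):
# from huggingface_hub import list_repo_files
# files = list_repo_files("lightx2v/Hy1.5-Quantized-Models")
# print(pick_comfyui_files(files))

# Offline demo with hypothetical filenames:
example = [
    "hy15_q4_K_M.gguf",          # hypothetical non-ComfyUI build
    "hy15_comfyui_q4_K_M.gguf",  # hypothetical ComfyUI build
]
print(pick_comfyui_files(example))  # -> ['hy15_comfyui_q4_K_M.gguf']
```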
Thank you, but I can confirm that I did not use the wrong model in this case. The hy1.5 video GGUF q4_K_M quantized model performs very poorly. I am not certain of the reason, but the q4_K_M quantized models of wan2.1 and wan2.2 deliver results that are significantly better.
GGUF q4_K_M is quite "compressed", but I will give it a try here as well.
And note there is a difference between distilled models (which can run at cfg 1) and the regular models, which will not give good results at cfg 1 and instead need full steps and full cfg (as per the recommended settings).
The person who made the GGUF models has separate folders for the distilled and the regular models.
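The distilled-vs-regular rule of thumb above can be sketched as a tiny settings helper. This is not from the model card; the recommended cfg/steps values are parameters you should take from the repo's own documentation:

```python
def sampler_settings(is_distilled, recommended_cfg, recommended_steps):
    """Pick cfg/steps per the rule of thumb: distilled builds run at
    cfg 1, regular builds need the full recommended cfg and steps."""
    if is_distilled:
        return {"cfg": 1.0, "steps": recommended_steps}
    return {"cfg": recommended_cfg, "steps": recommended_steps}

# Usage: pass the values from the model card for the build you downloaded.
print(sampler_settings(True, 6.0, 50))   # distilled: cfg forced to 1
print(sampler_settings(False, 6.0, 50))  # regular: full recommended cfg
```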
