Q6_K does not play as well with LoRAs

#25
by SubtleOne - opened

I tested this multiple times with multiple LoRAs. For reasons unknown, Q5_K works fine with LoRAs, but Q6_K requires a heavier hand. I have not tried all models.

Here is Q5_K with the Art Nouveau LoRA applied at 1.2 strength:

Lora16-32test_00001_.png

And here is the Q6_K image with the same LoRA applied at 1.2 strength:

Lora16-32test_00002_.png

I do get a result if I increase the strength to 1.4 though:

Lora16-32test_00003_.png

Owner

If you trigger LowVRAM mode in ComfyUI, then part of the LoRA won't be applied. This is currently a known issue: https://github.com/city96/ComfyUI-GGUF/issues/33
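
Since the problem only appears once ComfyUI falls back to LowVRAM mode, one thing worth trying (an assumption on my part, not a confirmed fix from the linked issue) is to launch ComfyUI with a flag that keeps model weights on the GPU so the fallback never triggers:

```shell
# Launch ComfyUI with weights kept fully in VRAM.
# --highvram is a standard ComfyUI launch flag; whether it avoids this
# particular partial-LoRA issue is an assumption, not confirmed here.
python main.py --highvram
```

This only helps if your GPU actually has enough VRAM for the model plus the LoRA; otherwise ComfyUI may run out of memory instead.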
