I have managed to convert Mixtral to GGUF.
I managed to merge 8x of these into a Mixtral-style MoE, but I can't test it until I convert it to GGUF.
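For reference, the merge was done with a mergekit-moe config along these lines (a rough sketch only; the model IDs and prompts are placeholders, not the exact recipe, and the field names follow the mergekit-moe README as I understand it):

```python
# Rough sketch of a mergekit-moe config, written out as YAML from Python.
# Model IDs and prompts are placeholders; check field names against your mergekit version.
import yaml

config = {
    "base_model": "your-org/MistralTrix-v1",      # placeholder base model id
    "gate_mode": "hidden",                        # router init: hidden / cheap_embed / random
    "dtype": "bfloat16",
    "experts": [
        {
            "source_model": "your-org/MistralTrix-v1",  # placeholder expert id
            "positive_prompts": ["creative writing"],
        },
        # ...repeat for the remaining seven experts in an 8x merge
    ],
}

with open("moe-config.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# Then run (assumed CLI): mergekit-moe moe-config.yml ./merged-model
```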
Did you name your weights something strange? Here's the model I'm trying to quantize: https://huggingface.co/Kquant03/MistralTrix8x9B/blob/main/README.md
That tensor name "model.layers.0.block_sparse_moe.experts.0.w3.weight" is from Mixtral, not MistralTrix which uses standard Mistral tensor names aside from the extra layers.
For whatever reason, convert.py isn't expecting to handle Mixtral input there.
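If it helps, here's a quick way to check which tensor names the merged checkpoint actually contains (a minimal sketch, assuming the weights are sharded .safetensors files; the directory path is a placeholder):

```python
# List any Mixtral-style expert tensors present in a merged checkpoint.
# Assumes the model directory holds sharded .safetensors files.
from pathlib import Path
from safetensors import safe_open

model_dir = Path("./merged-model")  # placeholder path

for shard in sorted(model_dir.glob("*.safetensors")):
    with safe_open(str(shard), framework="pt") as f:
        for name in f.keys():
            # Mixtral names its expert weights ...block_sparse_moe.experts.N.w{1,2,3}.weight
            if "block_sparse_moe" in name:
                print(f"{shard.name}: {name}")
```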
That tensor name "model.layers.0.block_sparse_moe.experts.0.w3.weight" is from Mixtral, not MistralTrix which uses standard Mistral tensor names aside from the extra layers.
For whatever reason, convert.py isn't expecting to handle Mixtral input there.
Thanks for letting me know. Don't worry...I'll just leave it as the base float, then. I think I'm about to drop two back-to-back #1 spots on the Open LLM Leaderboard. That base float is nasty...it knows more than I ever thought it would.
To anyone worried about quantizing MoEs of this model: it's not just this model...it's mergekit-moe in general. Most merges larger than 8x7B will not convert to GGUF. Just letting everyone know.
This model is great, btw...I'm making a 4x MoE of it right now for roleplay haha
After days of merging, editing code, and trying new things...I found that convert-hf-to-gguf.py works, but it is very buggy when used on MoEs created by mergekit-moe. This thread will be closed now.
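For anyone who finds this later, this is roughly the conversion step that half-works (a sketch only; the script lives in the llama.cpp repo, the paths are placeholders, and the flags may differ in newer checkouts):

```python
# Invoke llama.cpp's convert-hf-to-gguf.py on the merged HF checkpoint.
# Flag names (--outfile, --outtype) are from the llama.cpp version I was
# using and may have changed since.
import subprocess

subprocess.run(
    [
        "python", "convert-hf-to-gguf.py",
        "./merged-model",                 # directory with the HF checkpoint
        "--outfile", "merged-f16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)
```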