huihui-ai/MiMo-7B-SFT-abliterated

#1398
by AIgotahole - opened

ERROR:hf-to-gguf:Model MiMoForCausalLM is not supported
Help...

Unfortunately, it is indeed not supported by llama.cpp: https://github.com/ggml-org/llama.cpp/issues/13218
I recommend you instead run it using vLLM which does support it. You can even use bitsandbytes to load it in 4-bit.
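A minimal sketch of what that could look like with vLLM's OpenAI-compatible server, assuming a recent vLLM version that includes MiMo support and has bitsandbytes installed (exact flags may differ between vLLM releases; older versions also required `--load-format bitsandbytes` alongside the quantization flag):

```shell
# Install vLLM plus bitsandbytes for in-flight 4-bit quantization
pip install vllm bitsandbytes

# Serve the model with bitsandbytes 4-bit quantization
vllm serve huihui-ai/MiMo-7B-SFT-abliterated \
    --quantization bitsandbytes \
    --max-model-len 8192
```

Once the server is up, it exposes an OpenAI-compatible API on port 8000 by default, so any OpenAI client can talk to it.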

Oh, I see it now. MiMo-VL-7B is actually based on Qwen2_5_VL, which is why it is supported. I think I confused it with MiMo-7B. :P