Is KAT-Dev supposed to be that much slower than QWEN?

#11
by sk0d - opened

Hello, I tried both KAT-Dev (32B) and QWEN3-Coder-30B-A3B to compare speed. QWEN is significantly faster. I can't speak to quality, but from what I've heard from others, I assume KAT-Dev is better.

My stats:

  • KAT-Dev GGUF Q5_K_M: 16 tk/s
  • KAT-Dev GGUF Q4_K_M: 27 tk/s
  • QWEN3-Coder GGUF Q5_K_XL: 201 tk/s

Load settings:

  • Both KAT-Dev GGUFs: only the V cache quantization type set to Q8_0 (with Flash Attention on), rest default
  • QWEN3-Coder GGUF: both K and V cache quantization types set to Q8_0 (with Flash Attention on), rest default
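For reference, here is the same load setup expressed in llama-cpp-python (a minimal sketch: the file name is a placeholder and the model is assumed to fit in VRAM):

```python
import llama_cpp

# KAT-Dev load settings: Flash Attention on, only the V cache quantized to
# Q8_0 (K cache left at the default F16).
llm = llama_cpp.Llama(
    model_path="KAT-Dev-Q4_K_M.gguf",   # placeholder path
    n_gpu_layers=-1,                    # offload all layers to the GPU
    flash_attn=True,
    type_v=llama_cpp.GGML_TYPE_Q8_0,    # V cache -> Q8_0
    # For the QWEN3-Coder run, the K cache was quantized as well:
    # type_k=llama_cpp.GGML_TYPE_Q8_0,
)
```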

Inference settings (KAT-Dev's favored settings):

  • All three runs used the same inference settings:
    Temp: 0.6,
    Top K: 20,
    Repeat Penalty: 1.05,
    Min P Sampling: 0,
    Top P Sampling: 0.95
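Those sampler values map directly onto llama-cpp-python's completion arguments. A sketch, continuing from the `llm` object in the load snippet above (the prompt is just an illustration):

```python
# Same sampler settings, passed per request.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    temperature=0.6,
    top_k=20,
    repeat_penalty=1.05,
    min_p=0.0,
    top_p=0.95,
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```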

System specs:
1 x RTX 5090 (32 GB total VRAM)
2 x 32 GB RAM (64 GB total RAM) @ 5600 MHz
1 x Ryzen 9 7950x

Kwaipilot org

Thanks for the benchmark! The speed difference is expected: KAT-Dev is a dense 32B model that runs all of its parameters for every token, while QWEN3-Coder is a MoE model that activates only ~3B parameters per token.
Your results (7.4x faster for QWEN) match the theory pretty well, since 32B/3B ≈ 10x. Dense models like KAT-Dev are more thorough but slower, while MoE gets you speed through selective activation.
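Spelled out with your numbers (a back-of-envelope check only; the measured gap comes in under the parameter ratio since decode speed also depends on memory bandwidth and overhead):

```python
# Dense-vs-MoE speed gap, using the numbers from the post above
# (KAT-Dev Q4_K_M vs QWEN3-Coder Q5_K_XL).
kat_dev = 27      # tk/s, dense: ~32B parameters touched per token
qwen_coder = 201  # tk/s, MoE: ~3B parameters active per token

print(f"measured speedup:       {qwen_coder / kat_dev:.1f}x")  # ~7.4x
print(f"active-parameter ratio: {32 / 3:.1f}x")                # ~10.7x ceiling
```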
Appreciate the detailed comparison!

Thanks, good to hear

@sk0d
If Kwaipilot releases a 0.6B or 14B model, you could use so-called "draft model" acceleration (speculative decoding), as in this write-up:
https://developer.nvidia.com/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/
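For a concrete picture, here is roughly what that would look like in vLLM, one engine with draft-model support (a sketch under assumptions: the small KAT-Dev checkpoint is hypothetical since none exists yet, the draft must share the target's tokenizer, and vLLM's argument names have shifted between versions):

```python
from vllm import LLM, SamplingParams

# Draft-model speculative decoding: a small model proposes several tokens,
# and the big model verifies them in a single pass.
llm = LLM(
    model="Kwaipilot/KAT-Dev",
    speculative_config={
        "model": "Kwaipilot/KAT-Dev-0.6B",  # hypothetical draft checkpoint
        "num_speculative_tokens": 5,        # draft tokens proposed per step
    },
)
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=256)
print(llm.generate(["Write a quicksort in C."], params)[0].outputs[0].text)
```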

Do you use LM Studio? Does it work with Roo Code/Cline for you?
