Disclaimer

Do not download this model: it appears to be missing a tensor ("blk.46.attn_norm.weight").
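
If you want to verify this yourself, here is a minimal sketch, assuming the `gguf` Python package that ships with llama.cpp; the quant filename below is a placeholder, so check the repo's file list for the exact name:

```python
# Minimal check for the reportedly missing tensor.
# The filename below is a placeholder; use an actual file name from this repo.
from huggingface_hub import hf_hub_download
from gguf import GGUFReader  # pip install gguf

path = hf_hub_download(
    repo_id="Akicou/INTELLECT-3-REAP-50-heretic-GGUF",
    filename="INTELLECT-3-REAP-50-heretic-Q4_K_S.gguf",  # placeholder name
)
tensor_names = {t.name for t in GGUFReader(path).tensors}
print("blk.46.attn_norm.weight present:", "blk.46.attn_norm.weight" in tensor_names)
```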

INTELLECT-3-REAP-50-heretic-GGUF

This model was converted to GGUF format from jtl11/INTELLECT-3-REAP-50-heretic using GGUF Forge.

Quants

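The following quants are available: Q4_K_S, Q4_K_M, Q5_0, Q5_K_M, Q5_K_S, Q6_K, Q8_0

If you still want to experiment with one of these files despite the disclaimer above, here is a hedged sketch using the llama-cpp-python bindings; the bindings, local path, and settings are assumptions, not something this repo ships:

```python
# Sketch: load a local quant with llama-cpp-python (pip install llama-cpp-python).
# If the tensor noted in the disclaimer is truly missing, loading should fail here.
from llama_cpp import Llama

llm = Llama(
    model_path="INTELLECT-3-REAP-50-heretic-Q4_K_M.gguf",  # placeholder local path
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers that fit; use 0 for CPU-only
)
out = llm("Summarize what GGUF quantization does.", max_tokens=64)
print(out["choices"][0]["text"])
```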

Conversion Stats

  • Job ID: 1a111945-aa42-4b90-b875-c486edc7f57d
  • GGUF Forge Version: v4.8
  • Total Time: 2.1h
  • Avg Time per Quant: 8.7min

Step Breakdown

  • Download: 14.5min
  • FP16 Conversion: 7.0min
  • Quantization: 1.7h
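
For context, the same three steps can be reproduced by hand with llama.cpp's own tooling. A rough sketch follows; the paths, output names, and target quant type are assumptions, not the exact commands GGUF Forge runs:

```python
# Sketch of the download -> FP16 conversion -> quantization pipeline using
# llama.cpp tooling. Assumes a local llama.cpp checkout with a built
# llama-quantize binary; all paths and filenames are placeholders.
import subprocess
from huggingface_hub import snapshot_download

# 1. Download the source model from the Hub.
src_dir = snapshot_download("jtl11/INTELLECT-3-REAP-50-heretic")

# 2. Convert the Hugging Face checkpoint to an FP16 GGUF.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", src_dir,
     "--outfile", "model-f16.gguf", "--outtype", "f16"],
    check=True,
)

# 3. Quantize the FP16 GGUF to one of the listed formats, e.g. Q4_K_M.
subprocess.run(
    ["llama.cpp/build/bin/llama-quantize",
     "model-f16.gguf", "model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```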

🚀 Convert Your Own Models

Want to convert more models to GGUF?

👉 gguforge.com: free hosted GGUF conversion service. Log in with HuggingFace and request conversions instantly!

Links

  • 🌐 Free Hosted Service: gguforge.com
  • 🛠️ Self-host GGUF Forge: GitHub
  • 📦 llama.cpp (quantization engine): GitHub
  • 💬 Community & Support: Discord

Converted automatically by GGUF Forge v4.8

Model Info

  • Model size: 57B params
  • Architecture: glm4moe
