# glm-edge-v-5b-GGUF
This model was converted to GGUF format from zai-org/glm-edge-v-5b using GGUF Forge.
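GGUF Forge's internal pipeline is not documented here, but a conversion like this typically starts from llama.cpp's `convert_hf_to_gguf.py` script, which turns the original safetensors checkpoint into an FP16 GGUF file. A minimal sketch, assuming a local llama.cpp checkout; paths and filenames are illustrative:

```python
import subprocess
from huggingface_hub import snapshot_download

# Fetch the original checkpoint from the Hub.
model_dir = snapshot_download(repo_id="zai-org/glm-edge-v-5b")

# Convert it to an FP16 GGUF file, the intermediate that all quants
# are derived from. convert_hf_to_gguf.py ships with llama.cpp.
subprocess.run(
    [
        "python", "convert_hf_to_gguf.py", model_dir,
        "--outfile", "glm-edge-v-5b-f16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)
```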
## Quants

The following quants are available: Q2_K, Q3_K_S, Q3_K_M, Q3_K_L, Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0.
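To use one of these quants from Python, you can download it with `huggingface_hub` and load it with `llama-cpp-python`. A minimal sketch; the exact filename is an assumption (check the repo's file list), and the vision side of glm-edge-v may require its projector file to be handled separately:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant; the filename follows the usual
# <model>-<quant>.gguf pattern and is assumed here.
gguf_path = hf_hub_download(
    repo_id="Akicou/glm-edge-v-5b-GGUF",
    filename="glm-edge-v-5b-Q4_K_M.gguf",
)

# Load the model and run a plain text completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Briefly introduce yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```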
## Ollama Support
Full Ollama support is provided by merging any sharded GGUF output into a single file after quantization.
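For reference, this is roughly what the merge-and-register flow looks like using llama.cpp's `llama-gguf-split` tool and a minimal Ollama Modelfile. The shard filenames are hypothetical; the files in this repo are already merged:

```python
import subprocess
from pathlib import Path

# Merge sharded GGUF output into a single file. Pass the first shard;
# llama-gguf-split discovers the remaining shards automatically.
subprocess.run(
    [
        "llama-gguf-split", "--merge",
        "glm-edge-v-5b-Q4_K_M-00001-of-00002.gguf",  # hypothetical shard name
        "glm-edge-v-5b-Q4_K_M.gguf",
    ],
    check=True,
)

# Register the merged file with Ollama via a minimal Modelfile.
Path("Modelfile").write_text("FROM ./glm-edge-v-5b-Q4_K_M.gguf\n")
subprocess.run(["ollama", "create", "glm-edge-v-5b", "-f", "Modelfile"], check=True)
```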
## Conversion Stats
| Metric | Value |
|---|---|
| Job ID | b240d127-8ad3-45e9-94c3-ca197e2ac431 |
| GGUF Forge Version | v5.8 |
| Total Time | 14.9 min |
| Avg Time per Quant | 1.9 min |
### Step Breakdown
- Download: 1.4 min
- FP16 Conversion: 1.2 min
- Quantization: 12.3 min (see the sketch below)
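The quantization step corresponds to running llama.cpp's `llama-quantize` once per quant type against the FP16 intermediate. A sketch of that loop, assuming the FP16 file produced earlier and an abbreviated list of types:

```python
import subprocess

# Derive each quant from the FP16 GGUF; one llama-quantize run per type.
for quant in ["Q2_K", "Q4_K_M", "Q8_0"]:  # abbreviated list
    subprocess.run(
        [
            "llama-quantize",
            "glm-edge-v-5b-f16.gguf",
            f"glm-edge-v-5b-{quant}.gguf",
            quant,
        ],
        check=True,
    )
```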
## Convert Your Own Models
Want to convert more models to GGUF?

[gguforge.com](https://gguforge.com): a free hosted GGUF conversion service. Log in with Hugging Face and request conversions instantly!
## Links

- Free Hosted Service: gguforge.com
- Self-host GGUF Forge: GitHub
- llama.cpp (quantization engine): GitHub
- Community & Support: Discord
Converted automatically by GGUF Forge v5.8