---
base_model: cerebras/Qwen3-Coder-REAP-25B-A3B
base_model_relation: quantized
quantized_by: ArtusDev
language:
- en
library_name: transformers
tags:
- qwen-coder
- MOE
- pruning
- compression
- exl3
license: apache-2.0
license_link: https://huggingface.co/cerebras/Qwen3-Coder-REAP-25B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
---

# ArtusDev/cerebras_Qwen3-Coder-REAP-25B-A3B-EXL3

EXL3 quants of [cerebras/Qwen3-Coder-REAP-25B-A3B](https://huggingface.co/cerebras/Qwen3-Coder-REAP-25B-A3B), quantized with exllamav3.

## Quants

| Quant                | BPW  | Head Bits | Size (GB) |
|----------------------|------|-----------|-----------|
| 2.5_H6               | 2.5  | 6         | 8.57      |
| 3.0_H6               | 3.0  | 6         | 10.10     |
| 3.14_H6 (optimized)  | 3.14 | 6         | 10.43     |
| 3.5_H6               | 3.5  | 6         | 11.60     |
| 3.63_H6 (optimized)  | 3.63 | 6         | 11.90     |
| 4.0_H6               | 4.0  | 6         | 13.13     |
| 4.5_H6               | 4.5  | 6         | 14.63     |
| 5.0_H6               | 5.0  | 6         | 16.16     |
| 6.0_H6               | 6.0  | 6         | 19.19     |
| 8.0_H8               | 8.0  | 8         | 25.32     |
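Each quant is published as its own revision of this repository, matching the `--revision` flag in the download command below. If you want to check which revisions are actually available, a minimal sketch using `huggingface_hub` (the repo ID is taken from this card; the one-branch-per-quant layout is an assumption based on the download command):

```python
from huggingface_hub import HfApi

REPO_ID = "ArtusDev/cerebras_Qwen3-Coder-REAP-25B-A3B-EXL3"

# List the repository's branches; each EXL3 quant is assumed to live on its own branch.
refs = HfApi().list_repo_refs(REPO_ID)
for branch in refs.branches:
    print(branch.name)
```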

## How to Download and Use Quants

You can download a specific quant by targeting its revision with the Hugging Face CLI.

<details>

<summary>Click for download commands</summary>

1. Install the Hugging Face CLI:

   ```shell
   pip install -U "huggingface_hub[cli]"
   ```

2. Download a specific quant:

   ```shell
   huggingface-cli download ArtusDev/cerebras_Qwen3-Coder-REAP-25B-A3B-EXL3 --revision "5.0bpw_H6" --local-dir ./
   ```

</details>
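If you prefer the Python API over the CLI, `snapshot_download` from `huggingface_hub` does the same thing. A minimal sketch, reusing the revision name from the CLI example above (the local directory name is just an illustrative choice):

```python
from huggingface_hub import snapshot_download

# Download one quant revision into a local directory
# (revision name matches the CLI example above).
snapshot_download(
    repo_id="ArtusDev/cerebras_Qwen3-Coder-REAP-25B-A3B-EXL3",
    revision="5.0bpw_H6",
    local_dir="./Qwen3-Coder-REAP-25B-A3B-EXL3-5.0bpw",
)
```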

EXL3 quants can be run with any inference client that supports the EXL3 format, such as TabbyAPI. Refer to its documentation for setup instructions.
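Once a quant is loaded, TabbyAPI serves it over an OpenAI-compatible API. A minimal sketch of a chat request, assuming the server is running locally on its default port 5000 and that `YOUR_API_KEY` and the model name match your TabbyAPI configuration (all of these are assumptions, adjust to your setup):

```python
from openai import OpenAI

# Point the OpenAI client at the local TabbyAPI server
# (port and API key are assumptions; use whatever your TabbyAPI config specifies).
client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    # Hypothetical model name; use the name your TabbyAPI instance reports.
    model="Qwen3-Coder-REAP-25B-A3B-EXL3-5.0bpw",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```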

## Quant Requests

Request EXL3 Quants

See the EXL community hub for request guidelines.

## Acknowledgements

Made possible with cloud compute from lium.io.