# Aya-Z GGUF Quantized Models

## Technical Details
- Quantization Tool: llama.cpp
- Version: 5287 (90703650)
## Model Information
- Base Model: matrixportal/Aya-Z
- Quantized by: matrixportal
## Available Files

| Download | Type | Description |
|---|---|---|
| Download | Q4_K_M | 4-bit balanced (recommended default) |

Q4_K_M provides the best balance for most use cases.
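A minimal sketch of running the Q4_K_M file locally with llama-cpp-python; the GGUF file name, context size, and generation parameters below are assumptions for illustration, not values taken from this card.

```python
# Minimal sketch: load the Q4_K_M quant and run a chat completion locally.
# The file name below is an assumption; point it at the GGUF you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Aya-Z.Q4_K_M.gguf",  # assumed local path to the downloaded quant
    n_ctx=4096,                      # context window; adjust to your hardware
    n_gpu_layers=-1,                 # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```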
## Model Tree for matrixportalx/Aya-Z-GGUF

- Base model: huihui-ai/aya-expanse-8b-abliterated
- Finetuned: matrixportalx/Aya-X-Mod