Model info updated
Qwen3-8B-Q2_K/README.md CHANGED (+1 -1)

@@ -21,7 +21,7 @@ Quantized version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) at **
 ## Model Info
 
 - **Format**: GGUF (for llama.cpp and compatible runtimes)
-- **Size**: 3.
+- **Size**: 3.28 GB
 - **Precision**: Q2_K
 - **Base Model**: [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)
 - **Conversion Tool**: [llama.cpp](https://github.com/ggerganov/llama.cpp)
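The card's Model Info line says the artifact is a GGUF file for llama.cpp and compatible runtimes. A minimal loading sketch using the llama-cpp-python bindings follows; the local file name `Qwen3-8B-Q2_K.gguf` and the context size are assumptions, not taken from the card.

```python
# Minimal sketch: load the Q2_K GGUF with llama-cpp-python (pip install llama-cpp-python).
# The path "./Qwen3-8B-Q2_K.gguf" is an assumed local file name; point it at the actual
# .gguf file downloaded from the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./Qwen3-8B-Q2_K.gguf",  # assumed path to the quantized model
    n_ctx=4096,                         # context window; tune to available RAM
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

The same file can also be served by any other GGUF-compatible runtime (for example the llama.cpp CLI or server binaries); the Python bindings are used here only to keep the example self-contained.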