ikaganacar committed on
Commit 01a554e · verified · 1 Parent(s): 4361dc8

Update README.md

Files changed (1):
  1. README.md +3 -0
README.md CHANGED
@@ -32,11 +32,14 @@ This repository contains GGUF quantized versions of the Kubernetes AI model, opt
  | Model | Size | Download |
  |-------|------|----------|
  | **Unquantized** | 22.0 GB | [kubernetes-ai.gguf](https://huggingface.co/aciklab/kubernetes-ai-GGUF/resolve/main/kubernetes-ai.gguf) |
+ | **Q8_0** | 12.5 GB | [kubernetes-ai-Q8_0.gguf](https://huggingface.co/aciklab/kubernetes-ai-GGUF/resolve/main/kubernetes-ai-Q8_0.gguf) |
+ | **Q5_K_M** | 8.45 GB | [kubernetes-ai-Q5_K_M.gguf](https://huggingface.co/aciklab/kubernetes-ai-GGUF/resolve/main/kubernetes-ai-Q5_K_M.gguf) |
  | **Q4_K_M** | 7.3 GB | [kubernetes-ai-Q4_K_M.gguf](https://huggingface.co/aciklab/kubernetes-ai-GGUF/resolve/main/kubernetes-ai-Q4_K_M.gguf) |
  | **Q4_K_S** | 6.9 GB | [kubernetes-ai-Q4_K_S.gguf](https://huggingface.co/aciklab/kubernetes-ai-GGUF/resolve/main/kubernetes-ai-Q4_K_S.gguf) |
  | **Q3_K_M** | 6.0 GB | [kubernetes-ai-Q3_K_M.gguf](https://huggingface.co/aciklab/kubernetes-ai-GGUF/resolve/main/kubernetes-ai-Q3_K_M.gguf) |
  | **IQ3_M** | 5.6 GB | [kubernetes-ai-IQ3_M.gguf](https://huggingface.co/aciklab/kubernetes-ai-GGUF/resolve/main/kubernetes-ai-IQ3_M.gguf) |
 
+
  **Recommended:** Q4_K_M for best balance of quality and size, or IQ3_M for low-end systems.
 
  ## Quick Start
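
As context for the Quick Start section these rows feed into, here is a minimal sketch of downloading the recommended Q4_K_M file and loading it locally. It assumes the `huggingface_hub` and `llama-cpp-python` packages; the prompt, context size, and token limit are illustrative placeholders rather than values taken from this repository.

```python
# Sketch: fetch the recommended Q4_K_M quant and run one completion.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the GGUF file listed in the table above (cached locally by huggingface_hub).
model_path = hf_hub_download(
    repo_id="aciklab/kubernetes-ai-GGUF",
    filename="kubernetes-ai-Q4_K_M.gguf",
)

# Load the quantized model; n_ctx is an illustrative context-window choice.
llm = Llama(model_path=model_path, n_ctx=4096)

# Example prompt; the wording is a placeholder, not taken from the README.
output = llm(
    "How do I restart a failing Kubernetes deployment?",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```

The same pattern applies to the other quants in the table; only the filename changes, with IQ3_M being the smallest download for low-end systems.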