aciklab/kubernetes-ai-GGUF
Tags: GGUF, Turkish, English, kubernetes, devops, quantized, gemma3, llama-cpp, ollama, imatrix, conversational
License: MIT
Branch: main
Repository size: 69.2 GB, 1 contributor, 4 commits
Latest commit: 01a554e (verified), "Update README.md" by ikaganacar, 23 days ago
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 1.94 kB | Upload kubernetes-ai-Q8_0.gguf with huggingface_hub | 23 days ago |
| README.md | 5.67 kB | Update README.md | 23 days ago |
| kubernetes-ai-IQ3_M.gguf | 5.66 GB | Upload folder using huggingface_hub | 27 days ago |
| kubernetes-ai-Q3_K_M.gguf | 6.01 GB | Upload folder using huggingface_hub | 27 days ago |
| kubernetes-ai-Q4_K_M.gguf | 7.3 GB | Upload folder using huggingface_hub | 27 days ago |
| kubernetes-ai-Q4_K_S.gguf | 6.94 GB | Upload folder using huggingface_hub | 27 days ago |
| kubernetes-ai-Q5_K_M.gguf | 8.45 GB | Upload folder using huggingface_hub | 27 days ago |
| kubernetes-ai-Q8_0.gguf | 12.5 GB | Upload kubernetes-ai-Q8_0.gguf with huggingface_hub | 23 days ago |
| kubernetes-ai.gguf | 22.4 GB | Upload folder using huggingface_hub | 27 days ago |
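
For reference, a minimal sketch of pulling one of the listed quantizations and running a chat turn locally. The repo id and filename come from the listing above; the choice of huggingface_hub plus llama-cpp-python (matching the repo's llama-cpp tag) and all parameters such as context size are assumptions, not instructions from the model card.

```python
# Minimal sketch: download one quantization from this repo and run a chat turn.
# Assumes `pip install huggingface_hub llama-cpp-python`; only the repo id and
# filename are taken from the file listing above, everything else is an assumption.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M (7.3 GB) sits between the smaller IQ3_M/Q3_K_M files and the larger
# Q8_0 / full GGUF, so it is used here as an illustrative middle ground.
model_path = hf_hub_download(
    repo_id="aciklab/kubernetes-ai-GGUF",
    filename="kubernetes-ai-Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context size chosen arbitrarily

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How do I restart a Kubernetes deployment?"}]
)
print(response["choices"][0]["message"]["content"])
```

The other files follow the same pattern: per the listing, Q8_0 (12.5 GB) trades more disk and memory for higher fidelity than Q4_K_M (7.3 GB), and kubernetes-ai.gguf (22.4 GB) appears to be the highest-precision export.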