---
license: apache-2.0
base_model: google/functiongemma-270m-it
library_name: gguf
language:
  - en
tags:
  - quantllm
  - gguf
  - llama-cpp
  - quantized
  - transformers
  - q4_k_m
---

# 🦙 functiongemma-270m-it-4bit-gguf

google/functiongemma-270m-it converted to GGUF format

QuantLLM Format Quantization

⭐ Star QuantLLM on GitHub


## 📖 About This Model

This model is google/functiongemma-270m-it converted to GGUF format for use with llama.cpp, Ollama, LM Studio, and other compatible inference engines.

| Property | Value |
|---|---|
| Base Model | google/functiongemma-270m-it |
| Format | GGUF |
| Quantization | Q4_K_M |
| License | apache-2.0 |
| Created With | QuantLLM |

## 🚀 Quick Start

### Option 1: Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the model directly from the Hub
llm = Llama.from_pretrained(
    repo_id="QuantLLM/functiongemma-270m-it-4bit-gguf",
    filename="functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf",
)

# Generate text
output = llm(
    "Write a short story about a robot learning to paint:",
    max_tokens=256,
    echo=True,
)
print(output["choices"][0]["text"])
```
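If you prefer chat-style prompting over raw completion, llama-cpp-python also provides `create_chat_completion`, which applies the chat template stored in the GGUF metadata when one is present. A minimal sketch reusing the `llm` object from above (the prompt is illustrative):

```python
# Chat-style generation with the same `llm` instance loaded above
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain what GGUF quantization is in two sentences."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```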

### Option 2: Ollama

```bash
# Download the model
huggingface-cli download QuantLLM/functiongemma-270m-it-4bit-gguf functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf --local-dir .

# Create Modelfile
echo 'FROM ./functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf' > Modelfile

# Import to Ollama
ollama create functiongemma-270m-it-4bit-gguf -f Modelfile

# Chat with the model
ollama run functiongemma-270m-it-4bit-gguf
```
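After importing the model, you can also call it from Python with the `ollama` package. A minimal sketch, assuming the Ollama server is running locally and the model name matches the one passed to `ollama create` above:

```python
import ollama  # pip install ollama

# Chat with the locally imported model; the name must match `ollama create` above
response = ollama.chat(
    model="functiongemma-270m-it-4bit-gguf",
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
)
print(response["message"]["content"])
```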

### Option 3: LM Studio

1. Download the `.gguf` file from the **Files** tab above
2. Open LM Studio → **My Models** → **Add Model**
3. Select the downloaded file
4. Start chatting, or query the local server from code (see the sketch below)
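LM Studio can also expose the loaded model through an OpenAI-compatible local server (its default port is 1234). A minimal sketch using the `openai` Python client; the model identifier below is an assumption, so copy the exact name LM Studio shows for the loaded model:

```python
from openai import OpenAI  # pip install openai

# Point the client at LM Studio's local server; the API key is a placeholder,
# since the local server does not check it.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="functiongemma-270m-it-4bit-gguf",  # assumption: use the identifier LM Studio displays
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```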

### Option 4: llama.cpp CLI

```bash
# Download
huggingface-cli download QuantLLM/functiongemma-270m-it-4bit-gguf functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf --local-dir .

# Run inference
./llama-cli -m functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf -p "Hello! " -n 128
```
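If you would rather download the file from Python than from the CLI, `huggingface_hub` provides `hf_hub_download`. A minimal sketch; the returned path points into the local Hugging Face cache and can be passed to `llama-cli` via `-m`:

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Download the quantized GGUF file (cached on subsequent calls) and print its local path
model_path = hf_hub_download(
    repo_id="QuantLLM/functiongemma-270m-it-4bit-gguf",
    filename="functiongemma-270m-it-4bit-gguf.Q4_K_M.gguf",
)
print(model_path)
```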

## 📊 Model Details

| Property | Value |
|---|---|
| Original Model | google/functiongemma-270m-it |
| Format | GGUF |
| Quantization | Q4_K_M |
| License | apache-2.0 |
| Export Date | 2025-12-21 |
| Exported By | QuantLLM v2.0 |

## 📦 Quantization Details

This model uses **Q4_K_M** quantization:

| Property | Value |
|---|---|
| Type | Q4_K_M |
| Bits | 4-bit |
| Quality | 🟢 ⭐ Recommended - best quality/size balance |

### All Available GGUF Quantizations

| Type | Bits | Quality | Best For |
|---|---|---|---|
| Q2_K | 2-bit | 🔴 Lowest | Extreme size constraints |
| Q3_K_M | 3-bit | 🟠 Low | Very limited memory |
| Q4_K_M | 4-bit | 🟢 Good | Most users ⭐ |
| Q5_K_M | 5-bit | 🟢 High | Quality-focused |
| Q6_K | 6-bit | 🔵 Very High | Near-original |
| Q8_0 | 8-bit | 🔵 Excellent | Maximum quality |

## 🚀 Created with QuantLLM


Convert any model to GGUF, ONNX, or MLX in one line!

```python
from quantllm import turbo

# Load any HuggingFace model
model = turbo("google/functiongemma-270m-it")

# Export to any format
model.export("gguf", quantization="Q4_K_M")

# Push to HuggingFace
model.push("your-repo", format="gguf")
```

📚 Documentation · 🐛 Report Issue · 💡 Request Feature