FunctionGemma Hermes Tool-Use (3K Fine-tuned)

This model is a fine-tuned version of Google’s FunctionGemma (270M), trained on a curated subset of the Hermes Tool-Use dataset to improve structured function calling.

The goal of this fine-tuning is higher accuracy and reliability when selecting the correct tool and emitting a valid function call in the expected format.

Fine-tuning script: https://www.kaggle.com/code/kingabzpro/finetuning-functiongemma

🚀 What’s Improved

Evaluation was run on a held-out validation set (50 examples):

Metric                    Before FT   After FT   Absolute Gain
Tool Selection Accuracy   88.0%       98.0%      +10.0 percentage points

Even though the base model already performs strongly, fine-tuning yields measurably more consistent tool selection and call formatting.
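For context, tool-selection accuracy here means the fraction of held-out examples whose generated call names the expected tool. A minimal sketch of that computation (illustrative only, not the actual evaluation script):

def tool_selection_accuracy(predicted_tools, expected_tools):
    # Fraction of validation examples where the generated call
    # names the expected tool.
    hits = sum(p == e for p, e in zip(predicted_tools, expected_tools))
    return hits / len(expected_tools)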

🧠 Supported Output Format

The model emits function calls in FunctionGemma-style tags:

<start_function_call>
call:tool_name{args:<escape>{...}<escape>}
<end_function_call>

This is compatible with downstream tool execution pipelines.
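To consume these calls downstream, a small parser along the following lines should suffice (a sketch assuming the <escape> markers always wrap a single JSON object; parse_function_call is an illustrative helper, not part of any library):

import json
import re

CALL_PATTERN = re.compile(
    r"<start_function_call>\s*call:(?P<name>[\w.-]+)"
    r"\{args:<escape>(?P<args>.*?)<escape>\}\s*<end_function_call>",
    re.DOTALL,
)

def parse_function_call(text):
    # Returns (tool_name, args_dict), or None if no call is found.
    match = CALL_PATTERN.search(text)
    if match is None:
        return None
    return match.group("name"), json.loads(match.group("args"))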

📦 Installation

pip install torch transformers accelerate sentencepiece

🔧 Usage Example (Function Calling)

from transformers import AutoProcessor, AutoModelForCausalLM

repo_id = "kingabzpro/functiongemma-hermes-3k-ft"

processor = AutoProcessor.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Tool definition (HF function schema)
tools = [
    {
        "type": "function",
        "function": {
            "name": "billboard_global_200",
            "description": "Fetch Billboard Global 200 chart information for a specific date.",
            "parameters": {
                "type": "object",
                "properties": {
                    "date": {
                        "type": "string",
                        "description": "Date in YYYY-MM-DD format",
                        "default": "2020-09-19",
                    }
                },
                "required": ["date"],
            },
        },
    }
]

messages = [
    {
        "role": "developer",
        "content": (
            "You are a function calling AI model. "
            "Each function call must be enclosed in <tool_call> XML tags."
        ),
    },
    {
        "role": "user",
        "content": (
            "Which songs were at positions 1, 11, 21, 31, and 41 "
            "on the Billboard Global 200 chart, and who sang them?"
        ),
    },
]

# Render the conversation and tool schemas with the model's chat template
inputs = processor.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)

# Move input tensors to the same device as the model
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    pad_token_id=processor.eos_token_id,
)

# Decode only the newly generated tokens, skipping the prompt
gen = processor.decode(
    outputs[0][inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
)

print(gen)

✅ Example Output

<start_function_call>
call:billboard_global_200{args:<escape>{"date": "2006-03-20"}<escape>}
<end_function_call>
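From here, the parsed call can be routed to a local function registry and executed. A minimal sketch (the billboard_global_200 body is a hypothetical stub, not a real API client; parse_function_call is the helper sketched in the output-format section above):

def billboard_global_200(date="2020-09-19"):
    # Stand-in stub; a real implementation would query a chart API.
    return {"date": date, "entries": []}

TOOLS = {"billboard_global_200": billboard_global_200}

parsed = parse_function_call(gen)  # `gen` from the usage example above
if parsed is not None:
    name, args = parsed
    print(TOOLS[name](**args))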

🎯 Intended Use

  • Tool / function calling research
  • Agent systems and planners
  • Structured API invocation
  • Evaluation of tool-selection accuracy
  • Lightweight function-calling demos (CPU / small GPU friendly)

⚠️ Limitations

  • Trained on a 3K-example subset of Hermes Tool-Use data
  • Focused on tool selection, not long-form reasoning
  • Not instruction-tuned for general chat beyond tool use

📜 Attribution

  • Base model: Google FunctionGemma
  • Dataset: Hermes Tool-Use
  • Fine-tuning & evaluation: kingabzpro