Llama-3.2-Gitara-3B

Gitara = git + ara (the parrot genus): your local stochastic parrot for git commands.

A 3B parameter function-calling model fine-tuned by Distil Labs to translate plain English into git commands. Optimized to run locally via Ollama with strong tool-calling accuracy that matches models 40x larger.

GitHub Demo and Code

Model Details

| | |
|---|---|
| **Developed by** | Distil Labs GmbH |
| **Model type** | Causal language model, fine-tuned for function calling |
| **Language** | English |
| **License** | Llama 3.2 Community License |
| **Fine-tuned from** | meta-llama/Llama-3.2-3B-Instruct |

Use Case

Given a natural language description of a git operation, the model outputs a structured JSON tool call that can be converted to an executable git command.

Supported commands: status · add · commit · push · pull · branch · switch · restore · merge · stash · rebase · reset · log

Example

Input:

push feature-x to origin, override any changes there and track it

Output:

{"name": "git_push", "parameters": {"remote": "origin", "branch": "feature-x", "force": true, "set_upstream": true}}

Resulting command:

git push origin feature-x --force --set-upstream
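Turning a tool call like the one above into an executable command is a small mapping exercise. The sketch below is illustrative, not the repository's actual converter: the `POSITIONAL` and `BOOL_FLAGS` tables and the `render_command` helper are hypothetical names covering only the `git_push` case.

```python
import json

# Hypothetical tables mapping tool-call parameters to git CLI syntax.
POSITIONAL = {"git_push": ["remote", "branch"]}   # ordered positional args per tool
BOOL_FLAGS = {"force": "--force", "set_upstream": "--set-upstream"}

def render_command(tool_call: dict) -> str:
    """Render a git_* tool call as an executable git command string."""
    name = tool_call["name"]
    params = tool_call["parameters"]
    parts = ["git", name.removeprefix("git_")]
    # Positional arguments first, in their declared order.
    for key in POSITIONAL.get(name, []):
        if key in params:
            parts.append(str(params[key]))
    # Boolean parameters become flags only when true.
    for key, flag in BOOL_FLAGS.items():
        if params.get(key):
            parts.append(flag)
    return " ".join(parts)

call = json.loads(
    '{"name": "git_push", "parameters": {"remote": "origin", '
    '"branch": "feature-x", "force": true, "set_upstream": true}}'
)
print(render_command(call))  # git push origin feature-x --force --set-upstream
```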

More Examples

| Query | Tool Call Output |
|---|---|
| what's in the latest stash, show diff | `{"name": "git_stash", "parameters": {"action": "show", "patch": true}}` |
| undo last commit but keep the changes | `{"name": "git_reset", "parameters": {"mode": "soft", "ref": "HEAD~1"}}` |
| show 8 commits for current branch with graph | `{"name": "git_log", "parameters": {"limit": 8, "graph": true}}` |
| merge vendor branch preferring ours | `{"name": "git_merge", "parameters": {"branch": "vendor", "strategy": "ours"}}` |

Training

The model was trained using knowledge distillation from the teacher model GPT-OSS-120B.

| | |
|---|---|
| **Training method** | LoRA fine-tuning |
| **Seed examples** | ~100 (manually validated) |
| **Synthetic examples** | 10,000 |
| **Teacher model** | GPT-OSS-120B |

Training Process

  1. Created ~100 seed examples covering all 13 git commands with realistic query phrasings
  2. Expanded seed data to 10,000 synthetic training examples using the Distil Labs platform
  3. Fine-tuned Llama 3.2 3B Instruct using LoRA
  4. Validated on held-out test set

Training data and configuration available in the GitHub repository.

Evaluation

Evaluated on 50 held-out test examples. Accuracy is measured by parsing outputs into normalized Python dicts and comparing for structural equality.
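The structural-equality check described above can be sketched as follows; the helper names are illustrative, not the repository's actual evaluation code. Parsing both strings into Python dicts makes the comparison insensitive to key order and whitespace.

```python
import json

def normalize(output: str) -> dict:
    """Parse a model output into a plain dict so key order and whitespace don't matter."""
    return json.loads(output)

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions structurally equal to the reference tool call."""
    correct = 0
    for pred, ref in zip(predictions, references):
        try:
            if normalize(pred) == normalize(ref):
                correct += 1
        except json.JSONDecodeError:
            pass  # unparseable output counts as incorrect
    return correct / len(references)

preds = ['{"name": "git_log", "parameters": {"limit": 8, "graph": true}}']
refs  = ['{"name": "git_log", "parameters": {"graph": true, "limit": 8}}']
print(accuracy(preds, refs))  # 1.0 — dict equality ignores key order
```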

| Model | Parameters | Accuracy |
|---|---|---|
| GPT-OSS-120B (teacher) | 120B | 0.92 ± 0.02 |
| Llama 3.2 3B Instruct (tuned) | 3B | 0.92 ± 0.01 |
| Llama 3.2 1B Instruct (tuned) | 1B | 0.90 ± 0.01 |
| Llama 3.2 3B Instruct (base) | 3B | 0.12 ± 0.05 |
| Llama 3.2 1B Instruct (base) | 1B | 0.00 ± 0.01 |

The tuned 3B model matches the 120B teacher while being 40x smaller. The base model achieves only 0.12 accuracy, confirming that fine-tuning is essential for this task.

How to Use

With Ollama (Recommended)

# Download model
huggingface-cli download distil-labs/Distil-gitara-v2-Llama-3.2-3B-Instruct --local-dir distil-model

# Build with Ollama
cd distil-model
ollama create gitara -f Modelfile

# Run
ollama run gitara "show staged changes with diffs"

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "distil-labs/Distil-gitara-v2-Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# See GitHub repo for full tool-calling implementation

For complete usage instructions, see the GitHub repository.

Inference Speed

On an M4 MacBook Pro, most queries return in under 2 seconds once the model is loaded.

Limitations

  • Accuracy is 0.92, meaning approximately 1 in 12 queries may produce incorrect output
  • Limited to the 13 supported git commands and their common options
  • Does not support git checkout (use switch and restore instead)
  • Single-turn only; does not support multi-step workflows

Citation

@misc{gitara2025,
  author = {Distil Labs},
  title = {Gitara: A Function-Calling Git Agent},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/distil-labs/Llama-3_2-gitara-3B}
}