Form Generator - Gemma 3 270M (LoRA Adapters)
LoRA adapters fine-tuned from google/gemma-3-270m-it to generate form definitions in JSON format from natural-language descriptions written in Bahasa Indonesia (Indonesian).
Model Description
This repository contains LoRA adapters (not a merged model) that can be loaded on top of the base Gemma 3 270M model. The adapters were trained to generate dynamic form definitions in JSON format.
Important Note
This is a LoRA adapter model, NOT a standalone model. You need to load it together with the base model google/gemma-3-270m-it.
Usage
Installation
pip install transformers peft torch
Loading the Model
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-270m-it",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
# Load LoRA adapters
model = PeftModel.from_pretrained(base_model, "bhismaperkasa/form-generator-lora-adapters")
tokenizer = AutoTokenizer.from_pretrained("bhismaperkasa/form-generator-lora-adapters")
# Generate
messages = [
    {"role": "system", "content": "You are a helpful assistant that generates form definitions in JSON format based on user requests."},
    # Indonesian prompt: "create an event registration form with name and email"
    {"role": "user", "content": "buatkan form pendaftaran event dengan nama dan email"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Keep only the model's reply (the text after the final "<start_of_turn>model" marker)
generated = result.split("<start_of_turn>model\n")[-1].strip()
print(generated)
Training Details
- Base Model: google/gemma-3-270m-it
- Method: QLoRA (4-bit quantization + LoRA; see the configuration sketch after the hyperparameters below)
- Dataset: bhismaperkasa/form_dinamis (10,000 samples)
- Training Samples: ~9,000
- Validation Samples: ~1,000
- Epochs: 3
- Final Eval Loss: 0.2256
- Token Accuracy: 93.5%
Hyperparameters
- LoRA Rank: 16
- LoRA Alpha: 32
- LoRA Dropout: 0.05
- Learning Rate: 5e-5
- Batch Size: 4
- Max Length: 512 tokens
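For reference, the hyperparameters above correspond roughly to the PEFT configuration sketched below. The actual training script is not included in this repository, so treat the snippet (including the target_modules="all-linear" shorthand) as an illustrative assumption rather than the exact code used.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: quantize the frozen base model to 4-bit NF4, then train LoRA adapters on top
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-270m-it",
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16,                         # LoRA rank
    lora_alpha=32,                # LoRA alpha
    lora_dropout=0.05,            # LoRA dropout
    target_modules="all-linear",  # adapt all linear layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports the few million trainable adapter parameters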
Performance
- Evaluation Loss: 0.2256
- Token Accuracy: 93.50%
- Train-Eval Gap: 4.9% (small gap, no sign of overfitting)
- Entropy: 0.1881 (low entropy, indicating high-confidence predictions)
Example Outputs
Input:
buatkan form login sederhana ("create a simple login form")
Output:
{
  "id": "form_login_sims",
  "title": "Form Login",
  "description": "Form untuk login akun",
  "formDefinition": {
    "fields": [
      {"fieldId": "email", "label": "Email", "fieldType": "EMAIL", "required": true},
      {"fieldId": "password", "label": "Password", "fieldType": "PASSWORD", "required": true}
    ]
  }
}
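Since the model returns plain text, downstream code usually needs to locate and parse the JSON object in the generated string. A minimal sketch is shown below; the parse_form_definition helper is hypothetical and not part of this repository.

import json

def parse_form_definition(generated: str) -> dict:
    # Take the substring between the first "{" and the last "}" so that any
    # surrounding text or code fences emitted by the model are ignored.
    start = generated.find("{")
    end = generated.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(generated[start:end + 1])  # raises JSONDecodeError if the output is malformed

form = parse_form_definition(generated)  # `generated` comes from the usage example above
print(form["title"], [f["fieldId"] for f in form["formDefinition"]["fields"]])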
Use Cases
- Dynamic form generation for web applications
- Survey and questionnaire creation
- User registration forms
- Data collection forms
- Event registration forms
Technical Details
Why LoRA Adapters?
- Smaller size: ~50 MB of adapters vs ~500 MB for a merged model
- Better quality: in practice the adapters work more reliably than the merged model
- Flexibility: the adapters can be combined with different variants of the base model
- Efficiency: faster to download and deploy
Model Architecture
- Base: Gemma 3 270M (270 million parameters)
- Adapters: LoRA with rank 16 (a few million trainable parameters)
- Target modules: All linear layers
- Quantization: 4-bit NormalFloat (NF4); a 4-bit inference-loading sketch follows below
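If memory is a concern, the same NF4 quantization can also be used at inference time: load the base model in 4-bit and attach the adapters on top. A sketch, assuming bitsandbytes is installed (the configuration values mirror the training setup described above):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-270m-it",
    quantization_config=bnb_config,
    device_map="auto",
)
# Attach the LoRA adapters to the quantized base model
model = PeftModel.from_pretrained(base_model, "bhismaperkasa/form-generator-lora-adapters")
tokenizer = AutoTokenizer.from_pretrained("bhismaperkasa/form-generator-lora-adapters")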
Citation
@misc{form-generator-gemma-lora,
author = {bhismaperkasa},
title = {Form Generator - Gemma 3 270M LoRA Adapters},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/bhismaperkasa/form-generator-lora-adapters}}
}
License
This model is based on Gemma 3 270M, which is released under Google's Gemma Terms of Use; use of these adapters is subject to the same terms.
Acknowledgments
- Google for the Gemma 3 270M model
- Hugging Face for the transformers, PEFT, and TRL libraries
- Dataset: bhismaperkasa/form_dinamis
Note: For production use, consider using these adapters instead of a merged model for better reliability and performance.