πŸ‡¦πŸ‡Ώ LIA-AzInstruction 🧠

✨ Fine-tuned Gemma 3 (270M) for Azerbaijani Instruction Understanding


πŸ“š Overview

LIA-AzInstruction is a fine-tuned version of Gemma 3 (270M), trained on a custom Azerbaijani instruction dataset πŸ—οΈ
The goal is a compact yet capable model that understands and responds in natural Azerbaijani.

🧩 This project bridges the gap between global LLM architectures and local Azerbaijani AI development πŸ‡¦πŸ‡Ώ


🧾 Dataset

πŸ—‚οΈ Total Samples: 400
πŸ“˜ Structure: Alpaca-style (input β†’ output)
πŸ“‘ Topics covered:

| # | Field | Description |
|---|-------|-------------|
| 1 | πŸ› Tarix (History) | Ancient states, historical facts, culture |
| 2 | βž— Riyazi Analiz (Mathematical Analysis) | Derivatives, integrals, limits |
| 3 | πŸ€– Mexatronika vΙ™ Robototexnika (Mechatronics & Robotics) | Sensors, actuators, control theory |
| 4 | πŸ’° Δ°qtisadiyyatΔ±n ƏsaslarΔ± (Fundamentals of Economics) | Supply, demand, inflation, markets |
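
For reference, a record in this Alpaca-style layout might look like the sketch below (a hypothetical example, not copied from the actual dataset):

```python
# Hypothetical Alpaca-style record (illustrative only, not an actual dataset sample)
sample = {
    "instruction": "TΓΆrΙ™mΙ™ nΙ™dir? QΔ±sa izah et.",  # "What is a derivative? Explain briefly."
    "input": "",                                    # optional supporting context
    "output": "TΓΆrΙ™mΙ™ funksiyanΔ±n dΙ™yiΕŸmΙ™ sΓΌrΙ™tini gΓΆstΙ™rir...",  # "The derivative shows a function's rate of change..."
}
```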

βš™οΈ Training Details

| Parameter | Value |
|-----------|-------|
| 🧱 Base Model | Gemma 3 (270M) |
| πŸŽ“ Fine-tuning Type | Instruction tuning |
| πŸ’Ύ Precision | BF16 |
| πŸ” Epochs | 3 |
| πŸš€ Learning Rate | 2e-5 |
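
The training script itself is not published with this card; a minimal sketch of matching hyperparameters using Hugging Face `transformers.TrainingArguments` (the `output_dir` and batch size below are assumptions) would be:

```python
from transformers import TrainingArguments

# Mirrors the table above; output_dir and batch size are assumptions, not from the card.
training_args = TrainingArguments(
    output_dir="lia-azinstruction-400",  # hypothetical output path
    num_train_epochs=3,
    learning_rate=2e-5,
    bf16=True,                           # BF16 mixed precision
    per_device_train_batch_size=4,       # assumed; not stated in the card
)
```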

🧠 Model Capabilities

βœ… Understands and answers πŸ‡¦πŸ‡Ώ Azerbaijani instructions naturally
βœ… Covers general academic and technical subjects
βœ… Handles simple reasoning and step-by-step explanations
βœ… Compact enough for local deployment (mobile & web apps)


πŸ’¬ Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Yusiko/LIA-AzInstruction-400")
tokenizer = AutoTokenizer.from_pretrained("Yusiko/LIA-AzInstruction-400")

# "Give brief information about Karabakh." (Azerbaijani)
prompt = "Qarabağ haqqında qısa məlumat ver."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
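
Gemma-family checkpoints are often prompted through the tokenizer's chat template. If this fine-tune ships one (an assumption; check `tokenizer.chat_template`), the call above could be written as:

```python
# Assumes the tokenizer provides a chat template; verify with tokenizer.chat_template.
messages = [{"role": "user", "content": "Qarabağ haqqΔ±nda qΔ±sa mΙ™lumat ver."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```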