# LIA-AzInstruction

Fine-tuned Gemma 3 (270M) for Azerbaijani instruction understanding
## Overview

LIA-AzInstruction is a fine-tuned version of Gemma 3 (270M) trained on a custom Azerbaijani instruction dataset. The goal is a compact yet capable model that understands and responds in natural Azerbaijani. This project bridges the gap between global LLM architectures and local Azerbaijani AI development.
## Dataset

- Total samples: 400
- Structure: Alpaca-style (input → output)
- Topics covered:

| # | Field | Description |
|---|---|---|
| 1 | History (Tarix) | Ancient states, historical facts, culture |
| 2 | Mathematical Analysis (Riyazi Analiz) | Derivatives, integrals, limits |
| 3 | Mechatronics and Robotics (Mexatronika və Robototexnika) | Sensors, actuators, control theory |
| 4 | Fundamentals of Economics (İqtisadiyyatın Əsasları) | Supply, demand, inflation, markets |
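For illustration, a single Alpaca-style record could look like the sketch below. The field names (`instruction`/`input`/`output`) and the sample text are assumptions based on the common Alpaca format, not drawn from the actual dataset:

```python
import json

# Hypothetical Alpaca-style record; field names and content are illustrative,
# not taken from the actual LIA-AzInstruction dataset.
record = {
    "instruction": "Törəmənin tərifini izah et.",  # "Explain the definition of the derivative."
    "input": "",
    "output": "Törəmə funksiyanın dəyişmə sürətini göstərir ...",
}

# Serialize without escaping Azerbaijani characters.
print(json.dumps(record, ensure_ascii=False, indent=2))
```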
## Training Details

| Parameter | Value |
|---|---|
| Base Model | Gemma 3 (270M) |
| Fine-tuning Type | Instruction tuning |
| Precision | BF16 |
| Epochs | 3 |
| Learning Rate | 2e-5 |
## Model Capabilities

- ✅ Understands and answers Azerbaijani instructions naturally
- ✅ Covers general academic and technical fields
- ✅ Performs simple reasoning and step-by-step explanations
- ✅ Compact enough for local deployment (mobile and web apps)
## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Yusiko/LIA-AzInstruction-400")
tokenizer = AutoTokenizer.from_pretrained("Yusiko/LIA-AzInstruction-400")

prompt = "Qarabağ haqqında qısa məlumat ver."  # "Give brief information about Karabakh."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
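Since the dataset is Alpaca-style, wrapping the raw instruction in an Alpaca-format prompt may yield better outputs. Below is a minimal formatting helper assuming the standard Alpaca template; the exact template used during fine-tuning is not documented here, so treat this as a sketch:

```python
def format_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Wrap an instruction in the standard Alpaca template.

    Assumption: the fine-tuning data used this widely adopted template;
    the model card does not confirm the exact prompt format.
    """
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = format_alpaca_prompt("Qarabağ haqqında qısa məlumat ver.")
print(prompt)
```

The formatted `prompt` string can then be passed to the tokenizer exactly as in the example above.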
## Model Tree for Yusiko/AzInstruction-400

Base model: google/gemma-3-270m