# SmolLM2-135M LoRA Adapter for GSM8K
This is a LoRA adapter for HuggingFaceTB/SmolLM2-135M, fine-tuned on the GSM8K dataset for mathematical reasoning.
## Model Description
- Base Model: HuggingFaceTB/SmolLM2-135M
- Training Method: Curriculum learning (complexity-score ordering)
- Dataset: GSM8K (Grade School Math 8K)
- Task: Mathematical word problem solving
- Exact Match Accuracy: 2.93%
## Training Details
This LoRA adapter for SmolLM2-135M was trained with curriculum learning using the complexity-score ordering method. The full configuration is listed below, followed by a `LoraConfig` sketch.
### Training Configuration
- Method: LoRA (Low-Rank Adaptation)
- Rank: 16
- Alpha: 32
- Target Modules: q_proj, k_proj, v_proj, o_proj
- Dropout: 0.1
- Epochs: 3
- Batch Size: 4 (gradient accumulation 4, effective batch size 16)
- Learning Rate: 3e-4
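The hyperparameters above map onto a `peft` `LoraConfig` roughly as follows. This is a minimal sketch: `bias` and `task_type` are assumed defaults for causal-LM LoRA training and are not documented on this card.

```python
from peft import LoraConfig

# Sketch of the LoRA configuration described above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.1,
    bias="none",            # assumption, not stated on this card
    task_type="CAUSAL_LM",  # assumption, standard for text generation
)
```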
### Curriculum Learning
This model was trained using curriculum learning, where the model is exposed to progressively harder problems:
- Easy Stage: Simple problems with fewer steps
- Normal Stage: Moderate complexity problems
- Difficult Stage: Complex multi-step problems
The curriculum was determined based on problem complexity (number of solution steps × operation complexity).
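To illustrate, a complexity-based curriculum can be built by scoring each training example and bucketing it into a stage before training stage by stage. The sketch below is hypothetical: the exact step-counting rule, operation weighting, and stage thresholds used for this adapter are not published, so the values here are placeholders.

```python
# Hypothetical complexity scoring for GSM8K-style solutions.
# The scoring rule and thresholds are illustrative assumptions.
def complexity_score(solution: str) -> int:
    steps = solution.count("\n") + 1                       # assume one step per line
    operations = sum(solution.count(op) for op in "+-*/")  # crude operation count
    return steps * max(operations, 1)

def assign_stage(score: int) -> str:
    if score <= 4:
        return "easy"
    if score <= 10:
        return "normal"
    return "difficult"

# Training then proceeds over the stages in order: easy -> normal -> difficult.
```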
## Usage
### Loading the Adapter
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-135M",
    device_map="auto",
    torch_dtype="auto",
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "CrystalRaindropsFall/smollm2-gsm8k-curriculum-complexity")

# Inference
prompt = "Question: Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
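If you want to serve the model without the PEFT wrapper, the adapter weights can also be merged into the base model. A short sketch (the output path is illustrative):

```python
# Merge the LoRA weights into the base model for standalone inference
merged_model = model.merge_and_unload()
merged_model.save_pretrained("smollm2-gsm8k-merged")  # illustrative path
tokenizer.save_pretrained("smollm2-gsm8k-merged")
```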
### Using with Pipeline
```python
from transformers import AutoTokenizer, pipeline
from peft import AutoPeftModelForCausalLM

# Load model with adapter (the base model is resolved automatically)
model = AutoPeftModelForCausalLM.from_pretrained(
    "CrystalRaindropsFall/smollm2-gsm8k-curriculum-complexity",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M")

# Create pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate
result = pipe(
    "Question: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?\nAnswer:",
    max_new_tokens=256,
)
print(result[0]["generated_text"])
```
## Performance

Evaluated on 512 samples from the GSM8K test set:
| Metric | Score |
|---|---|
| Exact Match | 2.93% |
| Format Correct | 100% |
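For context, GSM8K exact match is conventionally computed by comparing the final number in the model output against the reference answer that follows the `####` marker. A minimal sketch of such a check, assuming this card follows that convention:

```python
import re

def extract_answer(text: str) -> str | None:
    """Return the last number in the text (GSM8K puts the gold answer after '####')."""
    if "####" in text:
        text = text.split("####")[-1]
    numbers = re.findall(r"-?\d[\d,]*\.?\d*", text)
    return numbers[-1].replace(",", "") if numbers else None

def exact_match(prediction: str, reference: str) -> bool:
    pred = extract_answer(prediction)
    return pred is not None and pred == extract_answer(reference)
```

On this reading, "Format Correct" would measure whether an answer could be parsed from the output at all.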
## Limitations

- Trained only on grade-school-level math problems
- May struggle with problems requiring external knowledge
- Performance depends on problem complexity and wording
- Best used with the base model's standard generation settings
## Acknowledgments
- Base model: HuggingFaceTB/SmolLM2-135M
- Dataset: GSM8K by Cobbe et al.
- Training framework: HuggingFace PEFT
## License

Apache 2.0 (following the base model's license)