Welcome to the smollest course of fine-tuning! This module will guide you through instruction tuning using SmolLM3, Hugging Face’s latest 3B parameter model that achieves state-of-the-art performance for its size, while remaining accessible for learning and experimentation.
By the end of this course you will be fine-tuning an LLM with SFT. This course is smol but fast! If you'd like a smoother gradient, check out the LLM Course.
After completing this unit (and the assignment), don’t forget to test your knowledge with the quiz!
Instruction tuning is the process of adapting pre-trained language models to follow human instructions and engage in conversations. While base models like SmolLM3-3B-Base are trained to predict the next token, instruction-tuned models like SmolLM3-3B are specifically trained to follow instructions and respond as a helpful assistant.
This transformation from a text completion model to an instruction-following assistant is achieved through supervised fine-tuning on carefully curated datasets.
We dive deeper into instruction tuning in the LLM Course.
SmolLM3 is well suited for learning instruction tuning: it is small enough to fine-tune on accessible hardware, yet capable enough to produce realistic results.
In this comprehensive module, we will explore four key areas:
Chat templates are the foundation of instruction tuning: they structure interactions between users and AI models, ensuring consistent and contextually appropriate responses. You'll learn how to format conversations for SmolLM3 using its chat template.
For detailed information, see Chat Templates.
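To make the idea concrete, here is a minimal sketch of a ChatML-style template, which SmolLM3's own template resembles. The real template ships with the tokenizer and is applied via `tokenizer.apply_chat_template(messages)`; this standalone function only illustrates the structure.

```python
# Illustrative ChatML-style chat template. SmolLM3's actual template is
# bundled with its tokenizer and applied via tokenizer.apply_chat_template.

def apply_chat_template(messages, add_generation_prompt=True):
    """Render a list of {'role', 'content'} dicts into a single prompt string."""
    parts = []
    for message in messages:
        # Each turn is wrapped in role markers so the model can tell
        # speakers apart during training and generation.
        parts.append(f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Cue the model to produce the assistant's next turn.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is instruction tuning?"},
]
prompt = apply_chat_template(messages)
```

In practice you never hard-code this format: you call the tokenizer's own `apply_chat_template`, so the prompt always matches what the model saw during training.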
Supervised Fine-Tuning (SFT) is the core technique for adapting pre-trained models to follow instructions. You'll master the key tooling, including the SFTTrainer from TRL for efficient training.

For a comprehensive guide, see Supervised Fine-Tuning.
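As a preview, TRL's SFTTrainer accepts datasets in a conversational format, where each example carries a `messages` list of role/content turns. The sketch below converts raw prompt/response pairs (the pairs themselves are made up for illustration) into that shape:

```python
# Sketch: converting raw prompt/response pairs into the conversational
# "messages" format that TRL's SFTTrainer accepts. The raw_pairs data
# below is invented for illustration.

raw_pairs = [
    ("What is the capital of France?", "The capital of France is Paris."),
    ("Summarize SFT in one line.", "SFT trains a model on prompt-response pairs."),
]

def to_conversational(pairs):
    """Wrap each (prompt, response) pair as a chat with user/assistant turns."""
    return [
        {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": response},
            ]
        }
        for prompt, response in pairs
    ]

records = to_conversational(raw_pairs)
# Records in this shape can be loaded into a datasets.Dataset (e.g. with
# Dataset.from_list) and passed to SFTTrainer, which applies the model's
# chat template during tokenization.
```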
Put your knowledge into practice with progressively challenging exercises:
Complete exercises and examples are in Exercises.
Hugging Face Jobs provides fully managed cloud infrastructure for training models without the hassle of setting up GPUs, managing dependencies, or configuring environments locally. This is particularly valuable for SFT, which can be resource- and time-intensive.
For a comprehensive guide, see Hugging Face Jobs.
By the end of this module, you'll have hands-on experience instruction-tuning SmolLM3 with SFT.
Let’s dive into the fascinating world of instruction tuning!
(Figure: SFTTrainer in TRL)