Welcome to the 🤗 smol-course

Welcome to the comprehensive (and smollest) course on fine-tuning language models!
This free course will take you on a journey, from beginner to expert, in understanding, implementing, and optimizing fine-tuning techniques for large language models.
This first unit will help you onboard:
- Discover the course's syllabus.
- Get more information about the certification process and the schedule.
- Get to know the team behind the course.
- Create your account.
- Sign up for our Discord server, and meet your classmates and us.
Let's get started!
This course is smol but fast! It's for software developers and engineers looking to fast-track their LLM fine-tuning skills. If that's not you, check out the LLM Course.
What to expect from this course?
In this course, you will:
- 📖 Study instruction tuning, supervised fine-tuning, preference alignment, evaluation, vision language models… and more!
- 🧑‍💻 Learn to use established fine-tuning frameworks and tools like TRL and Transformers (see the sketch below).
- 💾 Share your projects and explore fine-tuning applications created by the community.
- 🏆 Participate in challenges where you will evaluate your fine-tuned models against those of other students.
- 🏅 Earn a certificate of completion by completing assignments.
At the end of this course, youāll understand how to fine-tune language models effectively and build specialized AI applications using the latest fine-tuning techniques.
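To give you a flavor of the tooling, here is a minimal sketch of supervised fine-tuning with TRL's `SFTTrainer`. The model and dataset names are illustrative placeholders; the course units walk through real configurations in detail.

```python
# Minimal supervised fine-tuning sketch with TRL.
# The model and dataset below are illustrative; swap in your own.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# A small chat dataset in a format SFTTrainer understands
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # any causal LM on the Hub
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-sft"),
)
trainer.train()
```

Don't worry about setting this up yourself yet: the hands-on sections come with pre-configured environments.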
Don't forget to sign up for the course!
What does the course look like?
The course is composed of:
- Foundational Units: where you learn fine-tuning concepts in theory.
- Hands-on: where you'll learn to use established fine-tuning frameworks to adapt your models. These hands-on sections will have pre-configured environments.
- Use case assignments: where you'll apply the concepts you've learned to solve a real-world problem of your choosing.
- Collaborations: where we work with Hugging Face's partners to bring you the latest fine-tuning implementations and tools.
This course is a living project, evolving with your feedback and contributions! Feel free to open issues and PRs on GitHub, and engage in discussions on our Discord server.
What's the syllabus?
Here is the general syllabus for the course. A more detailed list of topics will be released with each unit.
| # | Topic | Description | Released |
| --- | --- | --- | --- |
| 1 | Instruction Tuning | Supervised fine-tuning, chat templates, instruction following | ✅ |
| 2 | Evaluation | Benchmarks and custom domain evaluation | ✅ |
| 3 | Preference Alignment | Aligning models to human preferences with algorithms like DPO | ✅ |
| 4 | Vision Language Models | Adapting and using multimodal models | ✅ |
| 5 | Reinforcement Learning | Optimizing models with reinforcement learning | October |
| 6 | Synthetic Data | Generating synthetic datasets for custom domains | November |
| 7 | Award Ceremony | Showcase projects and celebrate | December |
What are the prerequisites?
To be able to follow this course, you should have:
- Basic understanding of AI and LLM concepts
- Familiarity with Python programming and machine learning fundamentals
- Experience with PyTorch or similar deep learning frameworks
- Basic understanding of the transformer architecture
If you're missing some of these, don't worry. Check out the LLM Course to get started.
The LLM Course is not a prerequisite in itself, so if you understand the concepts of LLMs and transformers, you can start this course now!
What tools do I need?
You only need two things:
- A computer with an internet connection and, preferably, access to a GPU (Hugging Face Pro works great).
- A Hugging Face account: to access the course resources and create projects. If you don't have an account yet, you can create one here (it's free).
The Certification Process
You can choose to follow this course in audit mode, or do the activities and earn one of the two certificates we'll issue. If you audit the course, you can still participate in all the challenges and do the assignments if you want; you don't need to notify us.
The certification process is completely free:
- To get a certification for fundamentals: you need to complete Unit 1 of the course. This is intended for students who want to understand instruction tuning basics without building advanced applications.
- To get a certificate of completion: you need to complete all course units and submit a final project. This is intended for students who want to demonstrate mastery of fine-tuning techniques.
What is the recommended pace?
Each chapter in this course is designed to be completed in one week, with approximately 3-4 hours of work per week.
Since there's a deadline, we provide a recommended pace:

How to get the most out of the course?
To get the most out of the course, we have some advice:
- Join study groups in Discord: Studying in groups is always easier. To do that, you need to join our Discord server and verify your account.
- Do the quizzes and assignments: The best way to learn is through hands-on practice and self-assessment.
- Define a schedule to stay in sync: You can use our recommended pace schedule below or create your own.

Who are we?
About the authors:
Ben Burtenshaw
Ben is a Machine Learning Engineer at Hugging Face who focuses on building LLM applications using post-training and agentic approaches. Follow Ben on the Hub to see his latest projects.
Acknowledgments
We would like to extend our gratitude to the following individuals and partners for their invaluable contributions and support:
I found a bug, or I want to improve the course
Contributions are welcome 🤗
- If you found a bug 🐛 in a notebook, please open an issue and describe the problem.
- If you want to improve the course, you can open a Pull Request.
- If you want to add a full section or a new unit, it's best to open an issue describing the content you want to add before you start writing, so that we can guide you.
I still have questions
Please ask your question in the #fine-tuning-course-questions channel on our Discord server.
Now that you have all the information, let's get on board ⛵