Understand when and why to use post-training methods like Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Online Reinforcement Learning.
Learn the concepts underlying the three post-training methods of SFT, DPO, and Online RL, their common use-cases, and how to curate high-quality data to effectively train a model using each method.
Download a pre-trained model and implement post-training pipelines to turn a base model into an instruct model, change the identity of a chat assistant, and improve a model's math capabilities.
Learn to post-train and customize an LLM in this short course, "Post-training of LLMs," taught by Banghua Zhu, Assistant Professor at the University of Washington, and co-founder of NexusFlow.
Before a large language model can follow instructions or answer questions, it undergoes two key stages: pre-training and post-training. In pre-training, it learns to predict the next word or token from large amounts of unlabeled text. In post-training, it learns useful behaviors such as instruction following, tool use, and reasoning.
Post-training transforms a general-purpose token predictor, trained on trillions of unlabeled text tokens, into an assistant that follows instructions and performs specific tasks.
In this course, you'll learn three common post-training methods: Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), and Online Reinforcement Learning (RL), along with how to use each one effectively. With SFT, you train the model on input-output pairs containing ideal responses. With DPO, you provide both a preferred ("chosen") and a less preferred ("rejected") response, and train the model to favor the preferred output. With Online RL, the model generates an output, receives a reward score based on human or automated feedback, and is updated to improve its performance.
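To make these three setups concrete, here is a minimal sketch of what a single training example could look like for each method. The field names and values are illustrative assumptions for this sketch, not the course's exact data schema.

```python
# Illustrative training examples for the three post-training methods.
# Field names ("prompt", "response", "chosen", "rejected", "reward") are
# assumptions for this sketch; real datasets and trainers may use other schemas.

# SFT: an input paired with one ideal output response.
sft_example = {
    "prompt": "Summarize photosynthesis in one sentence.",
    "response": "Photosynthesis is the process by which plants use light, "
                "water, and CO2 to produce sugars and oxygen.",
}

# DPO: the same prompt with a preferred ("chosen") and a less preferred
# ("rejected") response; the model is trained to favor the chosen one.
dpo_example = {
    "prompt": "Explain recursion to a beginner.",
    "chosen": "Recursion is when a function solves a problem by calling "
              "itself on a smaller version of the same problem.",
    "rejected": "Recursion is just a loop.",
}

# Online RL: the model generates a response during training, and a reward
# (from human feedback or an automated checker) scores it to drive the update.
online_rl_example = {
    "prompt": "What is 17 * 24?",
    "model_generation": "17 * 24 = 408",
    "reward": 1.0,  # e.g., 1.0 if the answer is verified correct, else 0.0
}
```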
You'll learn the basic concepts, common use-cases, and principles for curating high-quality data for effective training in each of these methods. Through hands-on labs, you'll download a pre-trained model from HuggingFace and post-train it using SFT, DPO, and RL to see how each technique shapes model behavior.
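As a preview of what such a lab involves, the sketch below fine-tunes a small pre-trained model on one SFT example using plain PyTorch and the Hugging Face transformers library. This is a minimal sketch, not the course's lab code: the model name, toy data, and hyperparameters are assumptions, and it omits common refinements such as masking the prompt tokens in the loss.

```python
# Minimal SFT sketch: download a small pre-trained causal LM and train it on
# prompt/response pairs with the standard next-token prediction loss.
# The model name, toy data, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # assumption: any small base model works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy SFT data: each example pairs a prompt with an ideal response.
examples = [
    {"prompt": "What is the capital of France?",
     "response": "The capital of France is Paris."},
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for ex in examples:
    text = ex["prompt"] + "\n" + ex["response"] + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt")
    # Using the input ids as labels gives a next-token prediction loss over the
    # whole sequence (a full pipeline would typically mask the prompt tokens).
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice, libraries such as Hugging Face TRL wrap this kind of loop, along with the DPO and online RL objectives, in ready-made trainers, which is closer to how production post-training pipelines are built.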
Post-training is one of the most rapidly developing areas of LLM training.
Whether you're looking to create a safer assistant, fine-tune a model's tone, or improve task-specific accuracy, this course gives you hands-on experience with the most important techniques shaping how LLMs are post-trained today.
This course is for AI builders who want to adapt language models for specific tasks or behaviors. If you're familiar with LLM basics and ready to go beyond pre-training, this course will help you understand and apply the key techniques that make LLMs truly useful.
Introduction
Introduction to Post-training
Basics of SFT
SFT in Practice
Basics of DPO
DPO in Practice
Basics of Online RL
Online RL in Practice
Conclusion
Quiz (Graded · 10 mins)
Instructor: Banghua Zhu, Assistant Professor at the University of Washington, Principal Research Scientist at Nvidia, and Co-founder of Nexusflow