In this talk, we will cover the fundamentals of modern LLM post-training at various scales, with concrete examples. High-quality data generation is at the core of this process, focusing on the accuracy, diversity, and complexity of the training samples. We will explore key training techniques, including supervised fine-tuning and preference alignment. The presentation will then examine evaluation frameworks, weighing their pros and cons for measuring model performance. We will conclude with an overview of emerging trends in post-training methodologies and their implications for the future of LLM development.
Learning Objectives and Tools: how to generate data for post-training, how to train LLMs (and which libraries to use), and how to evaluate the resulting models.
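To make the data-generation objective concrete, here is a minimal sketch of one common preprocessing step for supervised fine-tuning: rendering instruction/response pairs into a chat-formatted training string. The `format_chatml` helper and the ChatML-style template are illustrative assumptions, not a specific recipe from the talk; in practice, libraries such as Hugging Face Transformers apply a model's own chat template for you.

```python
# Illustrative sketch: turn raw instruction/response pairs into
# ChatML-style training strings for supervised fine-tuning (SFT).
# The template below is an assumption; real pipelines should use the
# chat template shipped with the target model's tokenizer.

def format_chatml(sample: dict) -> str:
    """Render one instruction/response pair as a ChatML-style string."""
    return (
        f"<|im_start|>user\n{sample['instruction']}<|im_end|>\n"
        f"<|im_start|>assistant\n{sample['response']}<|im_end|>\n"
    )

# A tiny hypothetical dataset of training samples.
dataset = [
    {
        "instruction": "What is post-training?",
        "response": "Post-training adapts a pretrained LLM via fine-tuning and alignment.",
    },
]

formatted = [format_chatml(s) for s in dataset]
print(formatted[0])
```

The same idea generalizes to preference alignment, where each sample instead carries a chosen and a rejected response formatted with the same template.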

Maxime Labonne, PhD
Head of Post-Training at Liquid AI
Maxime Labonne is Head of Post-Training at Liquid AI. He holds a Ph.D. in Machine Learning from the Polytechnic Institute of Paris and is a Google Developer Expert in AI/ML.
He has made significant contributions to the open-source community, including the LLM Course, tutorials on fine-tuning, tools such as LLM AutoEval, and best-in-class models like NeuralDaredevil. He is the author of the best-selling books “LLM Engineer’s Handbook” and “Hands-On Graph Neural Networks Using Python”.
