Description

In this talk, we will cover the fundamentals of modern LLM post-training at various scales, with concrete examples. High-quality data generation is at the core of this process, with a focus on the accuracy, diversity, and complexity of the training samples. We will explore key training techniques, including supervised fine-tuning and preference alignment. The presentation will then examine evaluation frameworks for measuring model performance, along with their respective pros and cons. We will conclude with an overview of emerging trends in post-training methodologies and their implications for the future of LLM development.
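
The description above names supervised fine-tuning as one of the core training techniques. As a rough illustration only, here is a minimal sketch of that step, assuming the Hugging Face TRL library and placeholder model and dataset names; the talk does not prescribe these specific tools.

```python
# Minimal supervised fine-tuning sketch (assumes the TRL library; the model and
# dataset identifiers are illustrative placeholders, not taken from the talk).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Instruction-tuning dataset with prompt/response conversations
dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(
    output_dir="sft-demo",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # small base model, used only as an example
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```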

Learning Objectives and Tools: How to generate data for post-training, how to train LLMs (and which libraries to use), and how to evaluate the resulting models.
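
As a companion to the objectives above, the sketch below shows preference alignment with Direct Preference Optimization (DPO), again assuming TRL and illustrative model and dataset names rather than anything mandated by the talk.

```python
# Minimal preference-alignment sketch using DPO (assumes the TRL library;
# all identifiers are illustrative placeholders).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example SFT checkpoint to align
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference dataset with "chosen" and "rejected" responses for each prompt
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(
    output_dir="dpo-demo",
    per_device_train_batch_size=2,
    beta=0.1,  # higher beta keeps the policy closer to the reference model
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```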

Instructor Bio

Maxime Labonne, PhD

Head of Post-Training at Liquid AI

Maxime Labonne is Head of Post-Training at Liquid AI. He holds a Ph.D. in Machine Learning from the Polytechnic Institute of Paris and is a Google Developer Expert in AI/ML.

He has made significant contributions to the open-source community, including the LLM Course, tutorials on fine-tuning, tools such as LLM AutoEval, and best-in-class models like NeuralDaredevil. He is the author of the best-selling books “LLM Engineer’s Handbook” and “Hands-On Graph Neural Networks Using Python”.

Unlock Premium Features with a Subscription

  • Live Training:

    Full access to all live workshops and training sessions.

  • 20+ Expert-Led Workshops:

    Dive deep into AI Agents, RAG, and the latest LLMs.

  • ODSC Conference Discounts:

    Receive extra discounts to attend ODSC conferences.