Parameter-Efficient Fine-Tuning: A Hands-on Tutorial
For this hands-on workshop, our focus will be on parameter-efficient fine-tuning (PEFT) techniques for large pre-trained language models such as GPT and BERT. PEFT is a powerful approach that adapts these pre-trained models to specific tasks while adding only a small number of new parameters. Instead of fine-tuning the entire massive model, PEFT trains compact, task-specific components, such as adapter modules inserted into the pre-trained architecture, while the original weights remain frozen. This balance between model size and adaptability makes PEFT a crucial technique for real-world applications where compute and memory are limited, while still maintaining competitive performance.
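To make the idea concrete, here is a minimal, self-contained sketch of a bottleneck adapter in plain PyTorch. It is illustrative only: the toy frozen layer, hidden size, and bottleneck width are assumptions for this sketch, not the workshop's code. The point is simply that the pre-trained weights stay frozen and only the small adapter receives gradients.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the frozen layer's output intact.
        return x + self.up(self.act(self.down(x)))

# Toy stand-in for one pre-trained transformer layer (hypothetical, for illustration).
hidden = 768
frozen_layer = nn.Linear(hidden, hidden)
for p in frozen_layer.parameters():
    p.requires_grad = False            # pre-trained weights stay fixed

adapter = Adapter(hidden)              # only these parameters would be trained

x = torch.randn(4, hidden)             # batch of 4 dummy hidden states
out = adapter(frozen_layer(x))

trainable = sum(p.numel() for p in adapter.parameters() if p.requires_grad)
total = trainable + sum(p.numel() for p in frozen_layer.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")
```

Because only the adapter's roughly 100K parameters receive gradients, the memory and compute cost of adapting to a new task is a small fraction of full fine-tuning.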
In this workshop, we will delve into the different PEFT methods, such as additive, selective, re-parameterization, adapter-based, and soft prompt-based approaches, exploring their characteristics, benefits, and practical applications. We will also demonstrate how to implement PEFT using the Hugging Face PEFT library, showcasing its effectiveness in adapting large pre-trained language models to specific tasks. Join us to discover how PEFT can make state-of-the-art language models more accessible and practical for a wide range of natural language processing tasks.
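As a preview of the library-based workflow, the snippet below shows one way to wrap a pre-trained model with a LoRA (re-parameterization) configuration using the Hugging Face PEFT library. The model name, rank, scaling factor, and target modules are illustrative choices for this sketch, not the tutorial's exact settings.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Load a pre-trained model; "bert-base-uncased" is just one possible choice.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# LoRA adds trainable low-rank updates to selected weight matrices.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,          # sequence classification task
    r=8,                                 # rank of the low-rank update matrices
    lora_alpha=16,                       # scaling factor for the updates
    lora_dropout=0.1,
    target_modules=["query", "value"],   # BERT attention projection layers
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # reports the trainable fraction of weights
```

With a configuration like this, `print_trainable_parameters()` typically reports that well under one percent of the model's weights are trainable, which is exactly the parameter saving PEFT is designed to deliver.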
Instructor: Mary Grace Moesta

Tutorial Topics
- Welcome to the tutorial!
- What You'll Learn in This Tutorial
- How to use this tutorial
- Tutorial Prerequisites
- Parameter Efficient Fine Tuning
- Lesson Notebook - Parameter Efficient Fine Tuning