Learn Generative AI and Large Language Models

Upskill to the Next Frontier of AI

Tutorial Overview

Parameter-Efficient Fine-Tuning

This hands-on workshop focuses on parameter-efficient fine-tuning (PEFT) techniques for large neural language models such as GPT and BERT. PEFT is a powerful approach that adapts these pre-trained models to specific tasks while keeping the additional parameter overhead small. Instead of fine-tuning the entire massive model, PEFT trains a small set of task-specific parameters, such as "adapters," inserted into the pre-trained model's architecture while the original weights stay frozen. PEFT strikes a balance between training cost and adaptability, making it a crucial technique for real-world applications where compute and memory are limited, while still maintaining competitive performance.
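To make the adapter idea concrete, here is a minimal sketch (not part of the workshop materials) of a LoRA-style adapter wrapped around a frozen linear layer, assuming PyTorch; the layer size, rank, and scaling factor are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a small low-rank adapter (illustrative sketch)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pre-trained weight: frozen, never updated during fine-tuning.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Adapter: two small matrices whose product forms a low-rank update to the base weight.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction; only lora_a and lora_b receive gradients.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


layer = LoRALinear(768, 768, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} of {total} ({100 * trainable / total:.2f}%)")
```

Only the two small adapter matrices are trained, which is what keeps the per-task parameter count a tiny fraction of the full model.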

In this workshop, we will delve into the different PEFT methods, such as additive, selective, re-parameterization, adapter-based, and soft prompt-based approaches, exploring their characteristics, benefits, and practical applications. We will also demonstrate how to implement PEFT using the Hugging Face PEFT library, showcasing its effectiveness in adapting large pre-trained language models to specific tasks. Join us to discover how PEFT can make state-of-the-art language models more accessible and practical for a wide range of natural language processing tasks.
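As a preview of the library usage covered in the workshop, the sketch below shows one way to wrap a pre-trained model with the Hugging Face PEFT library using a LoRA configuration; the base model name, rank, and task type are placeholder choices rather than the workshop's exact settings.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Load a pre-trained base model (placeholder checkpoint and task).
base_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Configure LoRA, one of the re-parameterization PEFT methods.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor applied to the update
    lora_dropout=0.1,
)

# Wrap the frozen base model with trainable adapter weights.
peft_model = get_peft_model(base_model, lora_config)

# Reports how small the trainable fraction is relative to the full model.
peft_model.print_trainable_parameters()
```

The wrapped model can then be fine-tuned with the usual Transformers training loop or Trainer, with only the adapter weights being updated.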

Tutorial Topics

  • Parameter-efficient fine-tuning (PEFT)
  • PEFT adapters
  • Different PEFT methods
  • Hugging Face PEFT library


Meet your instructor

Senior Machine Learning Engineer / Data Science Consultant

Mary Grace Moesta

Mary Grace Moesta is a senior data science consultant at Databricks. She has worked in the big data and data science space for several years, collaborating across multiple verticals, with the majority of her work focused on the Retail and CPG space. Prior to Databricks, Mary Grace contributed to several machine learning applications, including personalization use cases, forecasting, recommendation engines, and customer experience measures.

Course Curriculum

  • 1

    Welcome to the Tutorial!

    • Welcome to the tutorial!

    • What You'll Learn in This Tutorial

    • How to use this tutorial

    • Tutorial Prerequisites

  • 2

    Parameter-Efficient Fine-Tuning

    • Parameter-Efficient Fine-Tuning

    • Lesson Notebook - Parameter-Efficient Fine-Tuning

Code to Learn

A Hands-on Tutorial

This hands-on tutorial goes beyond the basics, offering an interactive Coding Notebook that is central to your learning journey. It immerses you in writing, generating, and executing code, letting you explore the tutorial's core concepts through practical exercises. By applying these concepts in real time, you'll see the immediate impact of your coding choices. This approach is not just about learning to code; it's about coding to learn, solidifying your understanding as you go.

Enroll now!

Accelerate your journey into Generative AI by enrolling in our program today!