Fine-Tuning an Existing Large Language Model: A Hands-On Tutorial
Mary Grace Moesta

Welcome to the tutorial!
This tutorial explores the process of fine-tuning Large Language Models (LLMs) for Natural Language Processing (NLP) tasks. It covers the motivations for fine-tuning, such as task adaptation, transfer learning, and handling low-data scenarios, working through a Yelp Reviews dataset. The lesson notebook uses the Hugging Face Transformers library, demonstrating tokenization with AutoTokenizer, selection of data subsets, and model choice (a BERT-based model), and introduces hyperparameter tuning, evaluation strategy, and metrics. It also briefly covers DeepSpeed for training optimization and Parameter-Efficient Fine-Tuning (PEFT) for resource-efficient fine-tuning.
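To make the workflow above concrete, here is a minimal end-to-end sketch using the Transformers Trainer API. It is illustrative, not the lesson notebook's exact code: the "yelp_review_full" dataset ID, the "bert-base-cased" checkpoint, and the subset sizes and hyperparameters are all assumptions chosen for a quick-running example.

```python
# A minimal sketch of the fine-tuning workflow described above.
# Assumptions (not taken from the lesson itself): the "yelp_review_full"
# dataset on the Hugging Face Hub, the "bert-base-cased" checkpoint, and
# placeholder subset sizes and hyperparameters.
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Load the Yelp Reviews dataset (text reviews labeled with 5 star-rating classes).
dataset = load_dataset("yelp_review_full")

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    # Pad/truncate every review to the model's maximum sequence length.
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)

# Select small subsets so the example runs quickly on modest hardware.
train_subset = tokenized["train"].shuffle(seed=42).select(range(1000))
eval_subset = tokenized["test"].shuffle(seed=42).select(range(1000))

# BERT-based encoder with a classification head, one output per star rating.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=5
)

def compute_metrics(eval_pred):
    # Accuracy computed from the raw logits returned at evaluation time.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}

training_args = TrainingArguments(
    output_dir="yelp-finetune",
    evaluation_strategy="epoch",  # named eval_strategy in recent releases
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_subset,
    eval_dataset=eval_subset,
    compute_metrics=compute_metrics,
)

trainer.train()
```

For the resource-efficient variants mentioned above, DeepSpeed integrates through TrainingArguments (its deepspeed argument takes a JSON config file), and PEFT methods such as LoRA wrap the base model (for example via peft's get_peft_model) before it is passed to the Trainer; both notes reflect general library usage rather than the lesson notebook's specific setup.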
Tutorial Topics
What You'll Learn in This Tutorial
How to Use This Tutorial
Tutorial Prerequisites
Fine-Tuning an Existing Large Language Model
Fine-Tuning Part II - Quiz
Lesson Notebook: Fine-Tuning Part II