Description
In the realm of Natural Language Processing (NLP) and Large Language Models (LLMs), leveraging pre-trained models has become a transformative practice, accelerating both research and real-world applications. Presented by Oracle Cloud Infrastructure and NVIDIA, this talk guides attendees through fine-tuning large pre-trained models available via Hugging Face's Transformers library, such as Falcon-40B, with a focus on question-answering and text-summarization tasks.
We will delve into both the theory and the practical aspects of fine-tuning these models, shedding light on the mechanisms and strategies for optimizing and tailoring a model to specific tasks. Attendees will gain a deeper understanding of how the pre-training and fine-tuning paradigms work and will acquire the know-how to apply them in their own NLP/LLM projects.
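To make the parameter-efficient fine-tuning (PEFT) idea concrete, here is a minimal NumPy sketch of the low-rank update at the heart of LoRA-style PEFT: the frozen pre-trained weight `W` is augmented by a trainable product `B @ A` of rank `r`, so only a small fraction of parameters is trained. All names, dimensions, and the scaling factor here are illustrative assumptions, not the talk's actual code.

```python
import numpy as np

# LoRA-style parameter-efficient update: instead of updating the full
# weight matrix W (d_out x d_in), train two small factors A (r x d_in)
# and B (d_out x r) with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8  # illustrative sizes, not Falcon-40B's

W = rng.normal(size=(d_out, d_in))     # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero-init so training starts from W

def adapted_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# At initialization the adapter is a no-op: output matches the base model.
print(np.allclose(adapted_forward(x), W @ x))

# Trainable parameters vs. full fine-tuning of this layer:
print(A.size + B.size, "vs", W.size)
```

With these toy dimensions, the adapter trains 512 parameters instead of the 4,096 in `W`; at real model scale this gap is what makes fine-tuning a 40B-parameter model tractable on modest hardware.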
Instructor's Bio
Dr. Sanjay Basu
Senior Director – AI/ML at Oracle Cloud Engineering
Dr. Sanjay Basu is a highly accomplished leader with strengths in setting technology direction, managing customer relationships, leading large, diverse teams, and delivering results. He is an industry-recognized subject-matter expert in Artificial Intelligence, Machine Learning, and Quantum Computing.
ON-DEMAND WEBINAR: Enhanced Fine-tuning (using PEFT) of Open Source Pre-trained LLMs for Q&A and Summarization Tasks
Ai+ Training
Webinar recording