Generative AI & Large Language Models

Learn Gen AI and Large Language Models: The Next Frontier of AI

Dive into the world of LLMs and generative AI with the "Generative AI Fundamentals" course. This API-driven journey begins with core principles, enabling you to work confidently with models like GPT-3. Learn their architecture, training methods, and applications across various domains. Hands-on tutorials let you interact directly with pre-trained models, building skills in text generation, contextual question answering, and ethical use. By the end, you'll have a strong foundation for applying cutting-edge language models to creative writing, business, or research, harnessing their potential responsibly and impactfully. Enroll now and unlock the creative power of generative AI!

Meet your instructor

Senior Machine Learning Engineer / Data Science Consultant

Mary Grace Moesta

Mary Grace Moesta is a senior data science consultant at Databricks. She has worked in the big data and data science space for several years, collaborating across multiple verticals, with the majority of her work focused on the Retail and CPG space. Prior to Databricks, Mary Grace contributed to several machine learning applications, including personalization use cases, forecasting, recommendation engines, and customer experience measures.

Course Curriculum

  • 1

    Welcome to the course!

    • Welcome to the course!

    • What You'll Learn in This Course

    • How to use this course

    • Course Prerequisites

  • 2

    LLM Basics Part I

    • How do LLMs Differ from Traditional Language Models?

    • Why LLMs are so powerful

    • The Transformer Architecture

    • The application of LLMs

    • Flow Chaining

    • Lesson Notebook: LLM Basics Part I

    • LLM Basics

  • 3

    Prompt Engineering Part I: Fundamentals

    • Introduction to prompt engineering

    • Guardrails for model responses

    • Temperature as a Means for Model Control

    • Prompt engineering as a mechanism for fine-tuning

    • Memorization

    • Tools for prompt engineering

    • Lesson Notebook: Prompt Engineering Part I: Fundamentals

    • Prompt Engineering Part I

  • 4

    Prompt Engineering Part II: Prompt Engineering with OpenAI

    • Best practices for prompting OpenAI models

    • Prompting safety guardrails

    • Lesson Notebook: Prompt Engineering Part II

    • Prompt Engineering Part II

  • 5

    Build a Question Answering Bot

    • Search

    • Vector search technologies

    • LangChain Chains

    • Lesson Notebook: QA Bot

    • QA Bot Quiz

  • 6

    Fine Tuning Part I: Embedding Models

    • Fine Tuning Part I: Embedding Models

    • Lesson Notebook: Fine Tuning Embedding Models

    • Fine Tuning Quiz

  • 7

    Fine Tuning Part II: Fine Tuning an Existing LLM

    • Fine Tuning Part II: Models + LLMs

    • Fine Tuning Part II - Quiz

    • Lesson Notebook: Fine Tuning Part II

  • 8

    LangChain Agents

    • LangChain Agents

    • Agents quiz

    • Lesson Notebook: LangChain Agents

  • 9

    Parameter-Efficient Fine-Tuning

    • Parameter-Efficient Fine-Tuning

    • Lesson Notebook: Parameter-Efficient Fine-Tuning

Learning Objectives

By the end of this course, you should be able to:

  • Cover the Basics: understand the fundamentals needed to start applying LLMs and generative AI to real-world applications

  • Discern the Right Approach: intelligently decide how to use LLMs via prompting, fine-tuning, chains, and agents

  • Get Hands-on Experience: each course comes with a detailed notebook so you can run your own models!

  • Grow Your Portfolio: modify the code to build your own portfolio projects, showcase your work, and try out new ideas

Course Bundle Outline

What you'll learn in this series of courses

LLM Basics Part I

  • Why LLMs are so powerful 
  • The Transformer architecture
  • Applications of LLMs
  • Zero-shot learning
  • Few-shot learning (contrasted with zero-shot prompting in the sketch after this list)
  • Fine-tuning
  • Flow chaining
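
To make the zero-shot versus few-shot distinction concrete, here is a minimal prompting sketch (not course material). It assumes the `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name and example reviews are illustrative only:

```python
from openai import OpenAI  # assumes the openai package and an OPENAI_API_KEY env var

client = OpenAI()

# Zero-shot: the prompt describes the task but gives no worked examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative: "
    "'The battery dies in an hour.'"
)

# Few-shot: the same task, preceded by a couple of labeled examples.
few_shot = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: 'Love the screen, super bright.' -> positive\n"
    "Review: 'Arrived broken and support never replied.' -> negative\n"
    "Review: 'The battery dies in an hour.' ->"
)

for name, prompt in [("zero-shot", zero_shot), ("few-shot", few_shot)]:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(name, "->", reply.choices[0].message.content)
```

The only difference between the two calls is whether the prompt carries worked examples; few-shot prompts usually make the expected output format much more predictable.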


Prompt Engineering Part I: Prompt Engineering Fundamentals

  • Introduction to prompt engineering 
  • Prompt engineering for text generation content
  • Prompt engineering for text generation tone 
  • Prompt engineering as a mechanism for fine-tuning 
  • Guardrails for model responses
  • Temperature in generative models (illustrated in the sketch after this list)
  • Memorization
  • Tools for prompt engineering
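
As a rough illustration of temperature as a means of model control (a sketch under stated assumptions, not the lesson notebook), the snippet below sends the same prompt at two temperatures using the `openai` client:

```python
from openai import OpenAI  # assumes an OPENAI_API_KEY environment variable

client = OpenAI()
prompt = [{"role": "user", "content": "Suggest a name for a coffee shop run by robots."}]

for temperature in (0.0, 1.2):
    # Same prompt, different temperature: higher values flatten the next-token
    # distribution, so repeated calls drift further from the most likely answer.
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=prompt,
        temperature=temperature,
    )
    print(f"temperature={temperature}: {reply.choices[0].message.content}")
```

Low temperatures concentrate probability on the most likely tokens, so repeated calls return nearly identical answers; higher temperatures produce more varied, creative, and occasionally less reliable output.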


Prompt Engineering Part II: Prompt Engineering with OpenAI

  • Best practices for prompting OpenAI models 
    • Prompt organization 
    • Prompt character distinctions 
    • Specifying prompt variables with LangChain (see the sketch after this list)
  • Prompting safety guardrails 
    • Prompts for preventing hallucination 
    • Prompts for preventing jailbreaking
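
A minimal sketch of specifying prompt variables with LangChain, assuming the `langchain` package is installed; the template text and variable names are invented for illustration:

```python
from langchain.prompts import PromptTemplate  # assumes the langchain package

# The template separates the fixed instruction (including a safety guardrail)
# from the variables that are filled in at run time.
template = PromptTemplate(
    input_variables=["product", "tone"],
    template=(
        "You are a marketing assistant. Write one sentence describing {product} "
        "in a {tone} tone. If you are unsure about a fact, say so instead of guessing."
    ),
)

prompt = template.format(product="a solar-powered phone charger", tone="playful")
print(prompt)  # the rendered string can then be sent to any chat or completion model
```

Separating the fixed instruction from the runtime variables keeps prompts consistent and easier to audit.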


Question Answering Bot Part I

  • Search 
  • Vector search technologies 
  • Search and retrieval with vector technologies 
  • LangChain Chains
    • Retrieval QA chain for quick development (sketched below)
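
The retrieval QA chain bullet can be pictured with a short sketch like the one below; it assumes an OpenAI API key, the `faiss-cpu` package, and a LangChain release where these imports are available at the top level. The documents and question are toy examples:

```python
# Assumes an OpenAI API key, the faiss-cpu package, and a LangChain release
# where these imports are available at the top level.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Toy "document store": a few strings embedded into a FAISS index.
docs = [
    "Our return window is 30 days from the delivery date.",
    "Refunds are issued to the original payment method within 5 business days.",
]
index = FAISS.from_texts(docs, OpenAIEmbeddings())

# The chain retrieves the most relevant chunks and asks the LLM to answer from them.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    retriever=index.as_retriever(),
)
print(qa.run("How long do I have to return an item?"))
```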


Fine Tuning Part I: Embedding Models

  • Embedding models and Large Language Models 
  • Why fine-tuning an embedding model is useful in application 
  • Existing fine-tuning solutions 
  • Hardware considerations
  • Fine-tuning a sentence transformer model (a minimal sketch follows this list)
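
As a rough sketch of what fine-tuning a sentence transformer looks like (assuming the `sentence-transformers` package; the model name and training pairs are placeholders, not course data):

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

# Pairs of texts that should land close together in embedding space.
train_examples = [
    InputExample(texts=["reset my password", "how do I change my login credentials"]),
    InputExample(texts=["track my order", "where is my package right now"]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)

# One quick pass shows the API; real fine-tuning needs far more data and epochs.
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("fine-tuned-embedder")
```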


Fine Tuning Part II: Models + LLMs

  • Fine-tuning an existing LLM
  • Tasks that could require fine-tuning 
  • A fine-tuning example using a Hugging Face dataset (sketched below)
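
A hedged sketch of fine-tuning an existing pretrained model with the Hugging Face `transformers` and `datasets` packages; the base model and dataset here are stand-ins, not necessarily the ones used in the lesson notebook:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# A small slice of a public sentiment dataset keeps the run quick.
dataset = load_dataset("imdb", split="train[:200]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llm-finetune-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()
```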


Agents Part I

  • Introduction to LangChain Agents and Tools
  • Using built-in tools 
  • Creating a custom tool 
  • Types of Agents 
  • Zero-shot learning with agents (a minimal sketch follows this list)
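
To give a flavor of a zero-shot agent with a custom tool, here is a minimal sketch assuming an OpenAI API key and a LangChain release exposing these top-level imports; the word-counting tool is invented purely for illustration:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

def word_count(text: str) -> str:
    """Custom tool: return how many words the input contains."""
    return str(len(text.split()))

tools = [
    Tool(
        name="word_counter",
        func=word_count,
        description="Counts the words in a piece of text.",
    )
]

# A zero-shot ReAct agent decides from the tool descriptions alone when to call them.
agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("How many words are in the sentence 'LangChain agents pick tools on the fly'?")
```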


Is this course for you?

If you want to explore AI-driven content creation, the "Generative AI Fundamentals" course is for you. It provides comprehensive insight into generative AI, including GPT-3 applications, with hands-on tutorials that build practical skills. Whether your focus is creative writing, business, or research, it equips you with the essential tools to use generative AI responsibly.

Enroll now!

Accelerate your journey into generative AI by enrolling in our program today!