Understand what we mean by ML explainability
What types of explainability exist
When to apply explainability (and when not)
Build an understanding of how to apply explainability approaches, from RuleFit, partial dependence plots, and individual conditional expectation (ICE) plots to global surrogate models, LIME, and Shapley values.
Violeta Misheva, PhD
Data Scientist | ABN AMRO Bank N.V.
Module 1. Explainability: nuts and bolts
Lesson 1. Who am I, why this course, and what are the prerequisites?
Lesson 2. What is explainability?
Lesson 3. Why explainability, and when not to use it?
Lesson 4. Types of explanations
Lesson 5. Explainability in the ML development process
Module 2. Explainability with Python: visual approaches
Lesson 1. Introduction to the use case
Lesson 2. Transparent approaches and RuleFit
Lesson 3. Visual explanations: PDP
Lesson 4. Visual explanations: ICE plots
Exercise 1. Apply PDP and ICE plots
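The exercise above can be sketched with scikit-learn's `inspection` module, which computes both partial dependence (PDP) and per-instance (ICE) curves. The synthetic dataset and model below are placeholders, not the course's actual use case:

```python
# Minimal sketch of PDP and ICE computation with scikit-learn.
# The dataset and estimator are illustrative assumptions, not the
# course's own use-case data.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# PDP: model output averaged over the data as feature 0 varies on a grid
pdp = partial_dependence(model, X, features=[0], kind="average")

# ICE: one curve per instance (the same quantity before averaging)
ice = partial_dependence(model, X, features=[0], kind="individual")

print(pdp["average"].shape)     # (n_outputs, n_grid_points)
print(ice["individual"].shape)  # (n_outputs, n_samples, n_grid_points)
```

`PartialDependenceDisplay.from_estimator(..., kind="both")` can then overlay the ICE curves on the averaged PDP in a single plot.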
Module 3. Global explanations
Lesson 1. Global surrogate models
Lesson 2. Feature importances
Exercise 2. Develop a global surrogate model and compute feature importances
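The core idea of Exercise 2 can be sketched as follows: train an interpretable model (here a shallow decision tree, an assumed choice) to mimic a black-box model's predictions, then check its fidelity and read off feature importances:

```python
# Sketch of a global surrogate: a shallow decision tree trained on the
# black box's predictions rather than the true labels. Dataset and
# models are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The surrogate learns to reproduce the black box's outputs
bb_preds = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_preds)

# Fidelity: how often the surrogate agrees with the black box
fidelity = accuracy_score(bb_preds, surrogate.predict(X))
print(f"fidelity: {fidelity:.2f}")

# Global feature importances from both models
print("black-box importances:", np.round(black_box.feature_importances_, 2))
print("surrogate importances:", np.round(surrogate.feature_importances_, 2))
```

A surrogate is only trustworthy to the extent its fidelity to the black box is high; the tree's depth trades interpretability against that fidelity.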
Module 4. Local explanations
Lesson 1. LIME
Lesson 2. Shapley values
Exercise 3. Apply LIME and SHAP
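The intuition behind LIME (as opposed to the `lime` library itself, which the exercise presumably uses) can be sketched in a few lines: perturb the instance to explain, weight the perturbations by proximity, and fit a weighted linear model whose coefficients act as the local explanation. Dataset, kernel width, and model choices below are assumptions:

```python
# Hand-rolled sketch of the LIME idea: a locally weighted linear
# surrogate around one instance. Not the lime library's API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                                # instance to explain
Z = x0 + rng.normal(scale=0.5, size=(1000, X.shape[1]))  # local perturbations
probs = model.predict_proba(Z)[:, 1]                     # black-box outputs

# Proximity kernel: perturbations closer to x0 get larger weight
dists = np.linalg.norm(Z - x0, axis=1)
weights = np.exp(-(dists ** 2) / 0.5)

local = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
print("local coefficients:", np.round(local.coef_, 3))
```

Shapley values take a different route to a local explanation, averaging a feature's marginal contribution over coalitions of features; in practice the `shap` library approximates this efficiently (e.g. `shap.TreeExplainer` for tree models).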
This course is for current or aspiring Data Scientists, Machine Learning Engineers, Software Engineers, and AI Product Managers
Knowledge of the following tools and concepts:
Intermediate machine learning
Beginner to intermediate Python