Supervised Learning 6: Interpretability
Training duration: 90 min (hands-on)
Summarize why it is important to explain models
Describe why additional tools are necessary to explain non-linear models
Review the difference between global and local feature importance metrics
Use the coefficients of linear models to measure feature importance
Apply permutation feature importance to calculate global feature importances
Describe some model-specific approaches to measure global feature importance
Describe the intuition behind SHAP values
Create force, dependence, and summary plots to aid local interpretability
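As a taste of the first objective above, the coefficients of a linear model can serve as a global feature importance measure, provided the features are first put on a comparable scale. A minimal sketch on synthetic data (the weights and feature count are illustrative, not from the course):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Synthetic data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Standardize the features so the coefficients share a scale; only then
# does |coefficient| work as a global importance metric.
X_std = StandardScaler().fit_transform(X)
model = LinearRegression().fit(X_std, y)

importance = np.abs(model.coef_)
ranking = np.argsort(importance)[::-1]
print(ranking)  # feature 0 should rank first, feature 2 last
```

Without the standardization step, a feature measured in large units would get a deceptively small coefficient, which is one reason raw coefficients alone can mislead.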
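Permutation feature importance, also listed above, works for any fitted model: shuffle one feature column at a time and record how much the score drops. A sketch with scikit-learn's `permutation_importance` on an assumed random forest and synthetic non-linear data (the course's actual datasets and models are not specified here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic non-linear target; feature 2 is deliberately irrelevant.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 3))
y = 3.0 * np.sin(X[:, 0]) + X[:, 1] ** 2

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature column in turn and measure the drop in test score;
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```

Because the importance is computed on held-out data, an irrelevant feature the model happened to overfit will still score near zero, which is the main advantage over model-internal importance measures.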
Andras Zsom, PhD
Module 1: Global feature importance metrics in linear models
Module 2: Global feature importance metrics in non-linear models
Module 3: Local feature importance metrics
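The intuition behind the SHAP values covered in Module 3 can be previewed by computing exact Shapley values by brute force for a tiny additive model: a feature's value is its average marginal contribution over all coalitions of the other features. The weights, instance, and background values below are illustrative, not from the course:

```python
from itertools import combinations
from math import comb

import numpy as np

# Toy additive model: f(x) = 2*x0 + 1*x1 - 3*x2 (illustrative weights).
w = np.array([2.0, 1.0, -3.0])
mu = np.array([0.5, -1.0, 2.0])   # background (average) feature values
x = np.array([1.0, 0.0, 1.0])     # the instance to explain

def f(coalition):
    """Model output with features outside the coalition held at their mean."""
    z = mu.copy()
    for j in coalition:
        z[j] = x[j]
    return float(w @ z)

n = len(w)
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for S in combinations(others, size):
            # Shapley weight |S|!(n-|S|-1)!/n! == 1 / (n * C(n-1, |S|))
            weight = 1.0 / (n * comb(n - 1, size))
            phi[i] += weight * (f(S + (i,)) - f(S))

print(phi)            # Shapley values of the three features
print(w * (x - mu))   # for an additive model these coincide
```

The per-instance values `phi` are what SHAP's force plots visualize for one prediction, while dependence and summary plots aggregate them across many instances.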
Python coding experience
Familiarity with pandas and numpy
Prior experience with scikit-learn and matplotlib is a plus but not required