Explaining and Interpreting Gradient Boosting Models in Machine Learning
This course is only available as part of a subscription plan.
Training duration: 90 minutes
DIFFICULTY LEVEL: INTERMEDIATE
What you will learn:
- How to approach data exploration
- How to assess the "coherence" of a model
- How to interpret complicated models (such as those from Gradient Boosting or Random Forests)
- How to ascribe reasons to individual predictions
Instructor Bio:
Brian Lucena, PhD
Module 1: Understanding the overall dynamics of your data and your model
- Using sophisticated modeling packages (like XGBoost) to understand more complicated dynamics in the data
- How to approach data exploration to understand more complicated relationships between the variables in your data
- Why the "coherence" of a model is important - arguably, on the same level as its predictive performance
- How to assess the "coherence" of a model using ICE plots
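To make the ICE-plot topic concrete, here is a minimal sketch, assuming scikit-learn (1.0 or later) and matplotlib are installed. It uses a GradientBoostingRegressor on synthetic data as a stand-in for the course's XGBoost models (an xgboost.XGBRegressor would plug in the same way via its scikit-learn interface); the data and feature choice are illustrative, not the course's own example.

```python
# Minimal sketch: ICE curves for a gradient boosting model.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Fit a gradient boosting model on synthetic data (illustrative only).
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# ICE curves: one line per observation showing how the prediction changes as
# feature 0 is varied; kind="both" overlays the average (the partial dependence).
PartialDependenceDisplay.from_estimator(model, X, features=[0], kind="both")
plt.show()
```

One informal reading of "coherence" here: if the ICE curves for a feature are roughly parallel, the model responds to that feature consistently across observations; widely crossing curves hint at interactions or instability worth investigating.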
Module 2: Understanding and explaining individual predictions from the model
- How to ascribe "reasons" to individual predictions
- How to "consolidate" features to make the reasons more coherent and understandable
- Using visualizations, both built independently and provided by the SHAP package
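As a concrete companion to the Module 2 topics, here is a minimal sketch, assuming the shap package and scikit-learn are installed. The model, data, and the feature grouping used to "consolidate" contributions are illustrative stand-ins, not the course's own examples or method.

```python
# Minimal sketch: SHAP values as per-prediction "reasons", plus a hand-rolled
# consolidation step. The feature groups below are hypothetical, chosen only
# to illustrate the idea of grouping related columns.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# shap.Explainer dispatches to the fast tree explainer for tree ensembles.
explainer = shap.Explainer(model)
explanation = explainer(X)   # Explanation object; .values has shape (n_samples, n_features)

row = explanation[0]         # explanation for a single prediction
print("Base value (expected model output):", row.base_values)
print("Per-feature contributions:", row.values)

# "Consolidate" features: sum contributions over groups of related columns so
# the explanation reads as a few coherent reasons rather than many small ones.
groups = {"group_a": [0, 1, 2], "group_b": [3, 4, 5]}  # hypothetical grouping
consolidated = {name: row.values[cols].sum() for name, cols in groups.items()}
print("Consolidated reasons:", consolidated)

# One of SHAP's built-in visualizations for the same single prediction.
shap.plots.waterfall(row)
```

The contributions sum (together with the base value) to the model's output for that row, which is what makes them usable as additive "reasons" for an individual prediction.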
Prerequisites: background in Python, NumPy, pandas, and scikit-learn