Course curriculum

Machine learning (ML) models have revolutionized several fields, including search and recommendation, finance, healthcare, and the fundamental sciences. Unfortunately, much of this progress has come at the cost of models becoming more complex and opaque. Despite widespread deployment, the practice of evaluating models remains limited to computing aggregate metrics on held-out test sets. In this talk, I will argue that this practice can fall short of surfacing failure modes that may otherwise show up during real-world usage. In light of this, I will discuss the importance of understanding model predictions by asking: why did the model make this prediction? One approach to answering this question is to attribute predictions to input features, a problem that has received a lot of attention in the last few years. I will describe an attribution method called Integrated Gradients (ICML 2017) that is applicable to a variety of deep neural networks (object recognition, text categorization, machine translation, etc.) and is backed by an axiomatic justification. I will discuss an evaluation workflow based on feature attributions and describe several applications of it. Finally, I will discuss how attributions can be used to monitor models in production. I will conclude with some caveats around using feature attributions. This talk is based on joint work with colleagues at Google.
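For context on the method named in the abstract, here is a minimal sketch of how an Integrated Gradients attribution can be approximated numerically: interpolate between a baseline and the input, average the gradients along that path, and scale by the input-minus-baseline difference. The helper names, the Riemann-sum approximation with a fixed number of steps, and the toy linear model are illustrative assumptions for this sketch, not code from the talk or the paper.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate Integrated Gradients with a simple Riemann sum.

    grad_fn  : callable returning the gradient of the model's scalar score
               with respect to its input (assumed available for the sketch).
    x        : input to attribute, shape (d,).
    baseline : reference input (e.g. all zeros), shape (d,).
    """
    # Interpolate between the baseline and the input along a straight line.
    alphas = np.linspace(0.0, 1.0, steps)
    total_grad = np.zeros_like(x, dtype=float)
    for alpha in alphas:
        point = baseline + alpha * (x - baseline)
        total_grad += grad_fn(point)
    avg_grad = total_grad / steps
    # Attribution of feature i: (x_i - baseline_i) * average gradient along the path.
    return (x - baseline) * avg_grad


if __name__ == "__main__":
    # Toy linear "model" with a known closed-form gradient (its weights).
    w = np.array([0.5, -1.0, 2.0])
    score = lambda v: float(np.dot(w, v))
    grad_fn = lambda v: w

    x = np.array([1.0, 2.0, 3.0])
    baseline = np.zeros(3)
    attributions = integrated_gradients(grad_fn, x, baseline)

    # Completeness axiom: attributions sum to score(x) - score(baseline).
    print(attributions, attributions.sum(), score(x) - score(baseline))
```

For a linear model the path gradient is constant, so the sum of attributions exactly equals the change in score between baseline and input, which illustrates the completeness axiom the talk refers to; for deep networks the same recipe is applied using the network's gradients.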

  • 1. Evaluating, Interpreting and Monitoring Machine Learning Models

Instructor

Ankur Taly, PhD

Staff Research Scientist, Google

Ankur Taly is a Staff Research Scientist at Google, where he carries out research in machine learning and explainable AI. Previously, he served as the Head of Data Science at Fiddler Labs, where he was responsible for developing, productionising, and evangelising core explainable AI technology. Ankur is best known for his contribution to developing and applying Integrated Gradients, a new interpretability algorithm for deep networks. His research in this area has resulted in publications at top-tier machine learning conferences and in prestigious journals such as the American Academy of Ophthalmology (AAO) journal and the Proceedings of the National Academy of Sciences (PNAS). Besides explainable AI, Ankur has a broad research background and has published 30+ papers in areas including computer security, programming languages, formal verification, and machine learning. He has served on several academic conference program committees and has taught short courses at summer schools and conferences. Ankur earned his PhD in computer science from Stanford University in 2012 and a B.Tech in computer science from IIT Bombay in 2007.
