Description

Are the machine learning models we build really reliable? Traditional model performance metrics offer only a limited view of a model's true accuracy. Errors in the model or data are common blind spots that can lead to inaccuracies or societal bias. In addition, understanding the features driving a model's outcome is becoming a necessity to meet industry regulations for transparency and accountability.

This session will illustrate how to use Error Analysis, Data Analysis, Explainability/Interpretability, Counterfactual/What-If analysis, and Causal analysis to debug and mitigate model issues faster. You will learn how to use Azure Machine Learning's Responsible AI dashboard to analyze and identify potential model issues, helping ML professionals produce AI solutions that are more trustworthy and less harmful to society.
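For a taste of that workflow, below is a minimal sketch using the open-source responsibleai and raiwidgets packages that back the Responsible AI dashboard. The dataset, model, and treatment feature are illustrative assumptions, not the session's actual example.

    import pandas as pd
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from responsibleai import RAIInsights
    from raiwidgets import ResponsibleAIDashboard

    # A small classification problem stands in for a real workload.
    df = load_breast_cancer(as_frame=True).frame  # features plus a 'target' column
    train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

    model = RandomForestClassifier(random_state=0)
    model.fit(train_df.drop(columns=["target"]), train_df["target"])

    # Collect all Responsible AI analyses in one insights object.
    rai_insights = RAIInsights(
        model, train_df, test_df,
        target_column="target",
        task_type="classification",
    )
    rai_insights.explainer.add()               # explainability/interpretability
    rai_insights.error_analysis.add()          # error analysis
    rai_insights.counterfactual.add(           # counterfactual/what-if
        total_CFs=10, desired_class="opposite")
    rai_insights.causal.add(                   # causal analysis
        treatment_features=["mean radius"])    # illustrative treatment feature
    rai_insights.compute()

    # Render the interactive dashboard with error, data, and feature views.
    ResponsibleAIDashboard(rai_insights)

The same insights object can be computed locally or registered in Azure Machine Learning, where the dashboard appears alongside the model.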


Instructor's Bio

Ruth Yakubu

Principal Cloud Advocate at Microsoft

Ruth specializes in Java, advanced analytics, data platforms, and artificial intelligence (AI). She has spoken at several conferences, including Microsoft Ignite, O'Reilly Velocity, Devoxx UK, Grace Hopper Dublin, TechSummit, Web Summit, and numerous other developer events. Prior to Microsoft, she worked at companies such as Unisys, Accenture, and DIRECTV, where she gained extensive experience in software architecture and programming. She has also been recognized as a DZone.com Most Valuable Blogger.

Webinar

  • ON-DEMAND WEBINAR: Responsible AI: Debugging AI models for errors, fairness and explainability (Ai+ Training webinar recording)