Description

Over the last two decades, machine learning has become an increasingly common part of daily life. While the benefits of these applications are wide-reaching, notable high-profile cases of machine learning bias have raised concerns that algorithms will serve to widen societal inequality.


The machine learning community has responded with a wide array of techniques aimed at measuring, and ultimately eliminating, such biases in machine learning models. Concerns about ML bias are particularly acute within healthcare. Healthcare, like many aspects of society, has underlying biases along lines of race, gender, and socioeconomic status, and these inequalities manifest in the data used to build population health models. If special care is not taken, it is all too easy for algorithms to perpetuate these inequalities under the guise of objectivity when guiding population health efforts.


A particular challenge is that the canonical examples of bias mitigation differ in key ways from typical population health models, so the standard techniques for ensuring model fairness are often inappropriate. Certain bias mitigation approaches, applied by well-meaning practitioners, can end up harming the very communities they are designed to protect.


The purpose of this talk is to introduce a novel measure of algorithmic fairness specifically designed for population health algorithms. The discussion will begin with a brief introduction to ML bias and fairness techniques. We'll contrast the canonical examples used to illustrate ML bias with population health models and contextualize how standard bias mitigation techniques fall short.
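For attendees new to these ideas, the short sketch below illustrates two canonical group fairness checks, demographic parity (the "80% rule" for disparate impact) and the equal opportunity gap, on hypothetical data. The function names, toy values, and thresholds are illustrative only and are not drawn from the talk.

```python
import numpy as np

def demographic_parity_ratio(y_pred, group):
    """Ratio of positive-prediction rates between two groups.

    A common rule of thumb (the "80% rule") treats ratios below 0.8
    as a potential disparate-impact concern.
    """
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    tpr = {}
    for g in ("A", "B"):
        mask = (group == g) & (y_true == 1)
        tpr[g] = y_pred[mask].mean()
    return abs(tpr["A"] - tpr["B"])

# Toy example with hypothetical predictions and group labels.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_ratio(y_pred, group))   # 1.0 on this toy data
print(equal_opportunity_gap(y_true, y_pred, group))
```

These are the kinds of off-the-shelf checks whose limitations for population health models the talk will examine.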


We will then introduce our novel measure, called Group Benefit Equality, and explore a case study where it was used as part of a larger effort to assess bias and fairness within models built for the CMS AI for Health Outcomes Challenge.
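As a rough sketch of the underlying idea (the precise definition of Group Benefit Equality is presented in the talk and the case study), a group-benefit-style check asks whether each group is flagged by a model at a rate proportional to the rate at which it actually experiences the outcome. The Python snippet below is a hypothetical illustration of that general idea, not the speaker's exact formula; the data and function names are made up.

```python
import numpy as np

def group_benefit_ratio(y_true, y_pred, group, g):
    """For group g: (share flagged by the model) / (share with the actual outcome).

    The intuition is that each group should be selected for an intervention
    roughly in proportion to its true need; a ratio near 1.0 suggests the
    group receives its "fair share" of the benefit.
    """
    mask = group == g
    flagged_rate = y_pred[mask].mean()   # fraction of the group the model flags
    outcome_rate = y_true[mask].mean()   # fraction of the group with the outcome
    return flagged_rate / outcome_rate

# Hypothetical data: 1 = flagged for outreach / experienced the adverse outcome.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in ("A", "B"):
    print(g, round(group_benefit_ratio(y_true, y_pred, group, g), 2))
# Group A is flagged at half the rate of its actual need (0.5), group B at parity (1.0).
```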


The speaker has prepared an informal survey to gauge the audience's background in ML fairness. Please take 1 minute to answer this (very) short survey: https://www.surveymonkey.com/r/D9QB2T7

Instructor's Bio

Joe Gartner, PhD

Director of Data Science at Closedloop.ai

Joe Gartner is the Director of Data Science at Closedloop.ai, a platform specifically designed for doing data science within the healthcare industry. His primary focus there is on explainable machine learning, model deployments, and algorithmic fairness. In previous roles, he worked as the lead instructor for the Galvanize data science immersive program in Austin and as a data scientist for Sotera Defense Systems. He holds a Ph.D. in physics from the University of Florida, where his research focused on analyzing Large Hadron Collider data to study fundamental physics. He enjoys a wide range of topics within data science, particularly making its mechanisms transparent and accessible to wide and diverse audiences. In his free time, he enjoys reading and Brazilian jiu jitsu.


Use discount code WEBINAR2021 to get your Virtual ODSC East 2021 pass with an additional 20% OFF

Webinar

  • A New Metric for Fairness in Healthcare AI
    • AI+ Training
    • AI+ Subscription Plans
    • Webinar recording