Description

Fairness is inherently contextual and difficult to scale, yet the scope of AI at tech companies requires processes, frameworks, and tools that meet the challenge. We will discuss Facebook Responsible AI’s approach to defining and measuring AI Fairness in our ML pipelines, including an overview of our internal Fairness Flow tooling.

Two important, though not exhaustive, sources of potential ML bias are the labels used to train a model and the model itself. Fairness Flow works by helping machine learning engineers detect certain forms of statistical bias in both. Model bias occurs when a model produces results or predictions that favor or disfavor certain groups over others, such as when a spam detection model disproportionately flags one group’s content as spam while similar content from another group goes unflagged. Label bias occurs when the labels the model is trained on do not accurately reflect the real world. While it may not be possible to eliminate model and label bias completely, it is possible to strive for systems that are fairer.
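
Fairness Flow’s internal interface is not public, but the core measurement idea can be sketched. The snippet below is a minimal illustration, with synthetic data and made-up rates (not Fairness Flow’s actual API): it computes a group-conditional false positive rate for a hypothetical spam classifier, the kind of statistical signal that surfaces the model bias described above.

```python
import numpy as np
import pandas as pd

# Synthetic evaluation data: true labels, model flags, and a group
# attribute per item. All names and rates here are illustrative.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),
    "is_spam": rng.random(n) < 0.10,
})

# Simulate a biased model that over-flags group B's non-spam content.
flag_prob = np.where(df["is_spam"], 0.85,
                     np.where(df["group"] == "B", 0.12, 0.04))
df["flagged"] = rng.random(n) < flag_prob

# Group-conditional false positive rate: among genuinely non-spam
# content, how often is each group's content flagged as spam?
fpr_by_group = df[~df["is_spam"]].groupby("group")["flagged"].mean()
print(fpr_by_group)
# A materially higher FPR for one group is the kind of disparity a
# tool like Fairness Flow is meant to surface.
```

The same group-by comparison extends naturally to label bias, for example by comparing training labels against a trusted review set within each group.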

There are a number of compelling metrics for measuring a system’s performance and its fairness, but unfortunately they sometimes give contradictory results; in particular, when groups have different base rates, a classifier cannot in general satisfy all of the common fairness criteria at once. It’s crucial to select the specific fairness metric for your product with the exact context of the product, and its impact on users, in mind. We will discuss a number of these metrics, the tensions between them, and the cases in which each might be appropriate.
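
To make that tension concrete, here is a small sketch (the `fairness_report` helper and all numbers are hypothetical, not from Fairness Flow): a classifier with identical true and false positive rates in both groups satisfies equalized odds, yet it selects the groups at different rates whenever their base rates differ, so demographic parity fails.

```python
import numpy as np
import pandas as pd

def fairness_report(df, group_col, label_col, pred_col):
    """Per-group selection rate, TPR, and FPR.

    These map onto three common fairness criteria: demographic parity
    compares selection rates, equal opportunity compares TPRs, and
    equalized odds additionally compares FPRs.
    """
    rows = {}
    for g, sub in df.groupby(group_col):
        pos = sub[sub[label_col] == 1]
        neg = sub[sub[label_col] == 0]
        rows[g] = {
            "selection_rate": sub[pred_col].mean(),
            "tpr": pos[pred_col].mean(),
            "fpr": neg[pred_col].mean(),
        }
    return pd.DataFrame(rows).T

# Toy data: the two groups have different base rates (40% vs. 10%
# positive), and the classifier has identical error rates in both.
rng = np.random.default_rng(1)
frames = []
for g, base_rate in [("A", 0.40), ("B", 0.10)]:
    y = (rng.random(5_000) < base_rate).astype(int)
    # Equal TPR (0.8) and FPR (0.1) by construction in both groups.
    pred = np.where(y == 1,
                    rng.random(5_000) < 0.8,
                    rng.random(5_000) < 0.1).astype(int)
    frames.append(pd.DataFrame({"group": g, "y": y, "pred": pred}))
df = pd.concat(frames, ignore_index=True)

print(fairness_report(df, "group", "y", "pred"))
# TPR and FPR match across groups, so equalized odds holds, but the
# selection rates differ (roughly 0.38 vs. 0.17) because the base
# rates differ, so demographic parity fails.
```

Which gap matters depends on the product: for enforcement systems like spam detection, equal false positive rates may be what users experience as fair, while for products that allocate opportunities, selection rates may matter more.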

Finally, we will discuss the ongoing work to scale this process to support product teams across the company, while preserving the depth necessary to answer inherently complicated, contextual concerns.


Instructor's Bio

Jonathan Tannen, Ph.D.

Research Engineering Manager at Facebook

Jonathan has a Ph.D. in Urban Demography and comes to ML research from the quantitative social sciences. He was a founding Research Scientist on Facebook's Responsible AI Fairness team, and now manages the team responsible for internal AI Fairness Products.

Webinar

  • Measuring AI Fairness at Facebook (webinar recording)