Description

Talk

Over the last 20 years, AI has scaled massively, but so have its "unintended behaviors." In this talk, we explore why we must move away from a single "gold standard" truth in AI safety. Learn about the "Data Cascades" problem and how to treat human variation and bias as a meaningful signal rather than noise.

Instructor Bio

Lora Aroyo, PhD

Senior Research Scientist at Google DeepMind

Dr. Lora Aroyo is a Research Scientist and senior team lead at Google DeepMind, where she drives a Data-Centric AI research agenda. Her core work focuses on advancing AI evaluation practices to incorporate the wide spectrum of human values and perspectives, fundamentally contributing to the field of Pluralistic AI Alignment. She articulates the crucial need for Data Excellence in AI, addressing the "Data Cascades" problem. Dr. Aroyo champions the principle that human disagreement and bias should be treated as meaningful signals, not noise, ensuring AI safety practices account for the often-neglected variation present in real-world use cases. Her contributions emphasize the scalable and repeatable measurement of data quality as a critical milestone for more efficient and effective AI evaluations.

Unlock Premium Features with a Subscription

  • Live Training:

    Full access to all live workshops and training sessions.

  • 20+ Expert-Led Workshops:

    Dive deep into AI Agents, RAG, and the latest LLMs.

  • ODSC Conference Discounts:

    Receive extra discounts to attend ODSC conferences.