Talk
Over the last 20 years, AI has scaled massively, but so have its "unintended behaviors." In this talk, we explore why we must move away from a single "gold standard" truth in AI safety. Learn about the "Data Cascades" problem and how to treat human variation and bias as a meaningful signal rather than noise.
Lora Aroyo, PhD
Senior Research Scientist at Google DeepMind
Dr. Lora Aroyo is a Research Scientist and senior team lead at Google DeepMind, where she drives a Data-Centric AI research agenda. Her core work focuses on advancing AI evaluation practices to incorporate the wide spectrum of human values and perspectives, contributing fundamentally to the field of Pluralistic AI Alignment. She articulates the crucial need for Data Excellence in AI, addressing the "Data Cascades" problem. Dr. Aroyo champions the principle that human disagreement and bias should be treated as meaningful signals, not noise, ensuring that AI safety practices account for the often-neglected variation present in real-world use cases. Her contributions emphasize scalable and repeatable measurement of data quality as a critical milestone for more efficient and effective AI evaluations.