AI Safety Sessions | 2025
Discover the 4 Best AI Safety Sessions
Most people generally agree that we want aligned AI. But what does it mean to be aligned to something? And what should we want AI to be aligned to?
AI safety discourse often splits into immediate-harm and catastrophic-risk framings. In this keynote, I argue that the two research streams would benefit from increased cross-talk and more synergistic projects. A zero-sum framing of attention and resources between the two communities is incorrect and does not serve either side's goals. Recent theoretical work, including on accumulative existential risk, unifies risk pathways between the two fields. Building on this, I point to concrete synergies that are already in place, as well as opportunities for future collaboration.
This talk explores known cases, the fraud tradecraft employed, open data sources, and how technology gets leveraged. There are multiple areas where multimodal agentic workflows (e.g., based on BAML) play important roles, both for handling unstructured data sources and for taking actions based on inference. Moreover, we'll look at where data professionals are very much needed, and where you can get involved.
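The two roles named above, turning unstructured data into structured records and then acting on what was inferred, can be sketched in plain Python. This is a minimal stand-in for the kind of pipeline the session describes, not the session's actual code or BAML's API; the `FraudLead` type, the field names, and the sample report text are all hypothetical, and a real workflow would use an LLM-backed extractor rather than regexes.

```python
import re
from dataclasses import dataclass


@dataclass
class FraudLead:
    """Hypothetical structured record extracted from an unstructured report."""
    case_id: str
    amount_usd: float
    flagged: bool


def extract_lead(text: str) -> FraudLead:
    """Extraction step: pull structured fields out of free text.

    A toy regex stand-in for a multimodal/LLM extractor such as one
    defined in BAML (assumption: the real pipeline returns a typed record).
    """
    case_id = re.search(r"case[:\s#]+(\w+)", text, re.I).group(1)
    raw_amount = re.search(r"\$([\d,]+(?:\.\d+)?)", text).group(1)
    amount = float(raw_amount.replace(",", ""))
    # Illustrative inference rule: flag large transfers for review.
    return FraudLead(case_id=case_id, amount_usd=amount, flagged=amount > 10_000)


def act_on(lead: FraudLead) -> str:
    """Action step: route the lead based on the inference above."""
    return f"escalate {lead.case_id}" if lead.flagged else f"log {lead.case_id}"


report = "Case: A417 - wire transfer of $12,500 to an unverified account."
lead = extract_lead(report)
print(act_on(lead))  # prints "escalate A417"
```

The design point is the typed boundary: once the extractor emits a `FraudLead` rather than raw text, the action logic can be tested and audited independently of whatever model produced it.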
When AI Outgrows Us: Risk, Safety, and the Future of Intelligence