Sessions

Talk

Most people agree that we want aligned AI. But what does it mean to be aligned to something? And what should we want AI to be aligned to?

Talk

AI safety discourse often splits into immediate-harm vs. catastrophic-risk framings. In this keynote, I argue that the two research streams would benefit from increased cross-talk and more synergistic projects. Treating attention and resources as zero-sum between the two communities is incorrect and serves neither side's goals. Recent theoretical work, including work on accumulative existential risk, unifies risk pathways between the two fields. Building on this, I highlight concrete synergies already in place, as well as opportunities for future collaboration.

Talk

This talk explores known cases, the fraud tradecraft employed, open data sources, and how technology is leveraged. There are multiple areas where multimodal agentic workflows (e.g., those based on BAML) play important roles, both for handling unstructured data sources and for taking actions based on inference. We'll also look at where data professionals are very much needed and how you can get involved.

Talk

When AI Outgrows Us: Risk, Safety, and the Future of Intelligence
