Session Overview

The real world is open and full of unknowns, presenting significant challenges for AI systems that must reliably handle diverse and sometimes anomalous inputs. Out-of-distribution (OOD) uncertainty arises when a machine learning model encounters a test-time input that differs from its training distribution and therefore should not be predicted with confidence. As ML is deployed in more safety-critical domains, the ability to handle out-of-distribution data becomes central to building open-world learning systems. In this talk, I will discuss methods, challenges, and opportunities toward building ROWL (Reliable Open-World Learning).

To tackle these challenges, I will first describe a mechanism that improves OOD uncertainty estimation using a calibrated softmax score and input preprocessing. I will then present recent advances in an energy-based OOD detection framework, which produces a theoretically grounded score aligned with the probability density of the input data. We show that the energy score is less susceptible to the overconfidence of softmax and leads to superior performance on common OOD detection benchmarks. Lastly, I will discuss how to scale out-of-distribution detection algorithms to real-world, large-scale classification problems.
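To make the first mechanism concrete, here is a minimal PyTorch sketch of temperature-scaled (calibrated) softmax scoring combined with gradient-based input preprocessing, in the spirit of the approach described above. The function name, the temperature T, and the perturbation magnitude eps are illustrative assumptions, not the exact recipe from the talk.

```python
import torch
import torch.nn.functional as F

def calibrated_softmax_score(model, x, T=1000.0, eps=0.0014):
    """Max softmax probability after temperature scaling and a small
    input perturbation that nudges x toward higher confidence.
    T and eps are illustrative values, tuned per dataset in practice."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=-1)
    # Loss of the predicted class under temperature-scaled logits
    loss = F.cross_entropy(logits / T, pred)
    loss.backward()
    # Step against the gradient sign to raise the predicted-class score;
    # in-distribution inputs tend to gain more confidence than OOD ones
    x_pert = x - eps * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / T, dim=-1)
    return probs.max(dim=-1).values  # higher -> more likely in-distribution
```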
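The energy-based score mentioned above can be computed directly from a classifier's logits. Below is a minimal sketch under the standard definition E(x) = -T * logsumexp(f(x)/T); the helper name and the threshold tau are hypothetical.

```python
import torch

def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    # E(x) = -T * log sum_i exp(f_i(x) / T); lower energy corresponds to
    # higher density under the model, i.e., more in-distribution.
    return -T * torch.logsumexp(logits / T, dim=-1)

# Hypothetical usage: flag an input as OOD when its energy exceeds a
# threshold tau chosen on held-out in-distribution data.
# logits = model(x)                      # shape: (batch, num_classes)
# is_ood = energy_score(logits) > tau
```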


Overview

  1. Reliable Open-World Learning Against Out-of-distribution Data
     • Abstract & Bio
     • Reliable Open-World Learning Against Out-of-distribution Data
