Overview

Testing is a critical part of the software development cycle. As your software project grows, bugs and regressions can consume your team unless you take a principled approach to testing, and traditional software testing methodologies are therefore well studied. Machine learning models, however, introduce complexities beyond traditional software: they depend on data as well as code. As a result, testing methodologies for machine learning systems are less well understood and less widely implemented in practice. In this talk, we argue for the importance of testing in ML, give an overview of the types of testing available to ML practitioners, and make recommendations for incorporating more robust testing into your ML projects.

Session Overview

  1. ODSC West 2020: Testing Production Machine Learning Systems

     • Overview and Author Bio

     • Testing Production Machine Learning Systems

Instructor Bio:

Josh Tobin, PhD

Founder, Stealth Startup | Former Research Scientist, OpenAI

Josh is the founder of a stealth startup. His research focuses on applying deep reinforcement learning, generative models, and synthetic data to problems in robotic perception and control. He also co-organizes Full Stack Deep Learning, a training program that teaches engineers how to build production-ready deep learning systems. Previously, Josh was a Research Scientist at OpenAI, working at the intersection of machine learning and robotics. He did his PhD in Computer Science at UC Berkeley, advised by Pieter Abbeel, and has also been a management consultant at McKinsey and an Investment Partner at Dorm Room Fund.