Description

    Taking a model from research to production is hard, and keeping it there is even harder! As more machine learning models are deployed into production, it is imperative to have tools to monitor, troubleshoot, and explain model decisions. Join Amber Roberts, Machine Learning Engineer at Arize AI, for an overview of Arize AI’s ML Observability platform, which enables ML teams to automatically surface, resolve, and improve model performance issues.

    Gain confidence taking your models from research to production with a deep dive into the Arize platform. Attendees will learn how to identify segments where a model is underperforming, troubleshoot and perform root-cause analysis, and proactively monitor their models for future degradations.


Instructor's Bio

Amber Roberts

Machine Learning Engineer at Arize

Amber is an astronomer and Machine Learning Engineer. She comes to Arize from Splunk’s ML product organization, where she built out ML feature solutions as an ML Product Manager. In her current role as a community-oriented Machine Learning Engineer, Amber helps teams across all industries build ML observability into their production AI environments.

Webinar

    ON-DEMAND WEBINAR: Build trust in models in production with ML observability and performance tracing

    • Ai+ Training

    • Webinar recording