Description
Taking a model from research to production is hard — and keeping it there is even harder. As more machine learning models are deployed into production, it is imperative to have tools to monitor, troubleshoot, and explain model decisions. Join Amber Roberts, Machine Learning Engineer at Arize AI, for an overview of Arize AI’s ML Observability platform, which enables ML teams to automatically surface, troubleshoot, and resolve model performance issues.
Gain confidence taking your models from research to production with a deep dive into the Arize platform. Attendees will learn how to identify segments where a model is underperforming, troubleshoot issues and perform root cause analysis, and proactively monitor their models for future degradations.
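As a rough illustration of the first of those ideas — surfacing underperforming segments — the sketch below slices a table of predictions by a feature and flags slices whose accuracy trails the overall baseline. This is a generic, hypothetical example, not Arize's API; all column names, thresholds, and function names are illustrative.

```python
# Minimal sketch (not the Arize API): surface underperforming segments by
# slicing predictions on a categorical feature and comparing per-slice
# accuracy against the overall baseline. Column names are illustrative.
import pandas as pd


def underperforming_segments(df: pd.DataFrame,
                             feature: str,
                             pred_col: str = "prediction",
                             actual_col: str = "actual",
                             min_gap: float = 0.05) -> pd.DataFrame:
    """Return feature slices whose accuracy trails the overall accuracy."""
    overall_acc = (df[pred_col] == df[actual_col]).mean()
    per_slice = (
        df.assign(correct=df[pred_col] == df[actual_col])
          .groupby(feature)["correct"]
          .agg(accuracy="mean", volume="size")
          .reset_index()
    )
    per_slice["gap"] = overall_acc - per_slice["accuracy"]
    # Keep only slices that lag the baseline by more than `min_gap`,
    # worst offenders first.
    return (per_slice[per_slice["gap"] > min_gap]
            .sort_values("gap", ascending=False))


if __name__ == "__main__":
    # Toy production log: predictions, ground truth, and one feature to slice on.
    df = pd.DataFrame({
        "prediction": [1, 0, 1, 1, 0, 1, 0, 0],
        "actual":     [1, 0, 0, 1, 0, 0, 0, 1],
        "region":     ["EU", "EU", "US", "US", "US", "APAC", "APAC", "APAC"],
    })
    print(underperforming_segments(df, feature="region"))
```

In practice, an observability platform runs this kind of slicing across many features and metrics automatically, then supports root cause analysis and monitoring on the flagged segments.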
Instructor's Bio
Amber Roberts
Machine Learning Engineer at Arize
Amber is an astronomer and Machine Learning Engineer. She comes to Arize from Splunk's ML product organization, where she built out ML feature solutions as an ML Product Manager. In her current role as a community-oriented Machine Learning Engineer, she helps teams across all industries build ML observability into their production AI environments.
On-demand webinar recording (Ai+ Training): Build trust in models in production with ML observability and performance tracing