Data scientists and machine learning engineers use a variety of open-source projects in their everyday tasks: scikit-learn, Spark MLlib, TensorFlow, Apache MXNet, PyTorch, and more. These tools make it very easy to get started, but as models become more complex and datasets grow larger, training time and prediction latency become significant concerns. Here too, containers can help, especially when combined with elastic on-demand compute services. In this session, we'll show you how to scale machine learning workloads using containers on AWS (Deep Learning AMIs and containers, ECS, EKS, SageMaker). We'll discuss the pros and cons of these different services from a technical, operational, and cost perspective. Of course, we'll run some demos.



Session Overview

  • ODSC East 2020: Scaling your ML workloads from 0 to millions of users

    • Overview and Author Bio

    • Scaling your ML workloads from 0 to millions of users

Instructor Bio:

Julien Simon

Principal Technical Evangelist at Amazon


Before joining Amazon Web Services, Julien served for 10 years as CTO/VP of Engineering at top-tier web startups, so he's particularly interested in all things architecture, deployment, performance, scalability, and data. As a Principal Technical Evangelist, Julien speaks frequently at conferences and technical workshops, where he meets developers and enterprises to help them bring their ideas to life on the Amazon Web Services infrastructure.