We know that model training and inference are faster on GPUs, but the slowest, most draining part of a data scientist’s typical day is processing data into the structure the model requires. Can GPUs help with this challenge as well? To answer that question, we’ll compare the cycles per second and costs of CPUs vs. GPUs, look at the speed gains GPU-accelerated frameworks deliver, and calculate the ROI as AI/ML models scale. Join us as we lay out the compelling case for using GPUs for your end-to-end data science workflows, including ETL jobs.

Join us and see: 

- A typical data science day with and without GPU acceleration 

- How to easily convert your code to take advantage of GPU-accelerated libraries 

- How to calculate cost savings of GPU clusters vs large CPU clusters
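The listing doesn’t name the GPU-accelerated library used in the session, but a common pattern for converting pandas ETL code is the RAPIDS cuDF drop-in, which mirrors the pandas API. A minimal sketch, assuming cuDF is the library in question (the import fallback lets the same code run on CPU-only machines):

```python
# Sketch: moving a pandas ETL step to GPU is often a one-line import swap.
# cuDF/RAPIDS is an assumption here -- this listing does not name the library.
try:
    import cudf as xd  # GPU-accelerated, pandas-compatible DataFrame library
except ImportError:
    import pandas as xd  # CPU fallback so the sketch runs anywhere

# The rest of the pipeline code is unchanged regardless of which backend loaded.
df = xd.DataFrame({"user": ["a", "a", "b"], "spend": [10.0, 5.0, 7.0]})
total = df.groupby("user").spend.sum()
print(total.loc["a"])  # 15.0
```

Because cuDF tracks the pandas API, groupbys, joins, and column arithmetic typically need no rewriting; only I/O paths and the import change.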
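The cost comparison in the last bullet comes down to simple arithmetic: total cost is node count × price per node-hour × wall-clock runtime, so a smaller, pricier GPU cluster can still win if it finishes much faster. All prices and runtimes below are illustrative assumptions, not figures from the webinar:

```python
# Illustrative only: instance prices and runtimes here are assumed, not taken
# from the session. The point is the shape of the calculation.
def cluster_cost(nodes: int, hourly_rate: float, runtime_hours: float) -> float:
    """Total cost = nodes x price per node-hour x wall-clock hours."""
    return nodes * hourly_rate * runtime_hours

# A CPU cluster that needs many nodes and a long runtime...
cpu_cost = cluster_cost(nodes=20, hourly_rate=1.50, runtime_hours=10)  # 300.0
# ...vs. a small GPU cluster with a higher rate but a much shorter runtime.
gpu_cost = cluster_cost(nodes=2, hourly_rate=4.00, runtime_hours=2)    # 16.0
print(cpu_cost, gpu_cost, cpu_cost / gpu_cost)
```

Under these assumed numbers the GPU cluster is cheaper despite the higher hourly rate, because the per-hour premium is outweighed by the reduction in node count and runtime.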

Local ODSC chapter in London, UK

Instructor's Bio

Jonathan Cosme

AI/ML Solutions Architect at Run:ai

He’s passionate about helping organizations leverage GPU computing for optimized AI performance, advanced machine learning modeling, and ETL. He has extensive experience architecting cloud-based computer vision prototypes, building parallelized NLP pipelines that extract and analyze enormous amounts of data in reduced time, and creating predictive machine learning models. He’s a graduate of Florida State University with a degree in Economics. Jonathan is an avid fencer and competed in high school and at the collegiate level.



ON-DEMAND WEBINAR: CPUs vs GPUs for Your End-to-End Data Science Workflows

- Ai+ Training

- Webinar recording
