Session Overview
Development tools such as Jupyter are prevalent among data scientists because they provide an environment to explore data visually and interactively. However, when deploying a project, the analysis must run reliably in a production environment such as Airflow or Argo, which forces data scientists to move code back and forth between their notebooks and these production tools. They also have to learn an unfamiliar framework and write pipeline code, which severely delays the deployment process.
Ploomber solves this problem by providing:
1. A workflow orchestrator that automatically infers task execution order using static analysis (see the sketch after this list).
2. A sensible layout to bootstrap projects.
3. A development environment integrated with Jupyter.
4. Capabilities to export to production systems (Airflow and Argo) without code changes.
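To make the idea concrete, below is a minimal sketch of a two-task Ploomber pipeline written with its Python API. The task functions, file names, and toy data are illustrative and not taken from the talk; the demo itself declares notebook-based tasks in a pipeline.yaml spec rather than building the DAG in code.

    # Minimal sketch of a Ploomber pipeline (illustrative names and toy data).
    from pathlib import Path

    import pandas as pd
    from ploomber import DAG
    from ploomber.products import File
    from ploomber.tasks import PythonCallable


    def get(product):
        # Create a toy raw dataset and write it to this task's product path.
        df = pd.DataFrame({'x': [1, 2, 3], 'y': [2, 4, 6]})
        df.to_csv(str(product), index=False)


    def clean(upstream, product):
        # Read the output of the "get" task, drop missing rows, and save.
        df = pd.read_csv(str(upstream['get']))
        df.dropna().to_csv(str(product), index=False)


    dag = DAG()
    get_task = PythonCallable(get, File('output/raw.csv'), dag, name='get')
    clean_task = PythonCallable(clean, File('output/clean.csv'), dag, name='clean')
    get_task >> clean_task  # clean depends on get

    if __name__ == '__main__':
        Path('output').mkdir(exist_ok=True)
        dag.build()  # runs tasks in dependency order

Running the script (or `ploomber build` when using a pipeline.yaml spec) executes the tasks in the inferred order, and Ploomber's incremental builds skip tasks that are already up to date on subsequent runs.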
In this talk, we develop and deploy a Machine Learning pipeline in 30 minutes to demonstrate how Ploomber streamlines the Machine Learning development and deployment process.
Who and why
This talk is for data scientists (with experience developing Machine Learning projects) looking to enhance their workflow. Experience with production tools such as Airflow or Argo is not necessary.
The talk has two objectives:
1. Advocate for more development-friendly tools that let data scientists focus on analyzing data by removing the overhead imposed by popular production tools.
2. Demonstrate an example workflow using Ploomber where a pipeline is developed interactively (using Jupyter) and deployed without code changes.
GitHub: https://github.com/ploomber/ploomber