Description
For many of us, the data ingestion journey begins with a single, magical line: df.to_sql().
This starting point works well for one-off ML experiments, but as the basis of a data ingestion pipeline it often becomes a production nightmare. The ad-hoc scripts are brittle, memory-hungry, and fail silently, creating a cycle of constant firefighting.
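The familiar pattern looks roughly like this (the file name and connection string below are placeholders, not course material):

```python
# The one-off approach: pull everything into memory, then push it into a table.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///analytics.db")  # placeholder; swap for your real database URL
df = pd.read_csv("events.csv")                    # the whole file is loaded into RAM at once
df.to_sql("events", engine, if_exists="append", index=False)  # re-runs silently append duplicates
```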
This hands-on workshop is a recovery plan designed to replace these bad habits with best-practice, professional, resilient patterns.
In a guided, interactive notebook, we will walk through the typical challenges of data ingestion and show how dlt solves them quickly and easily. You will learn how to build pipelines using:
- Schema evolution and self-healing
- Memory and disk management
- Async and parallelism
- Incremental loading and state management
- Declarative REST clients
You'll leave this session with a practical toolkit and a new default workflow (sketched roughly below), ready to build data systems you can finally trust.
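As a taste of that workflow, a dlt pipeline touching several of the topics above might look like the sketch below. The API URL, resource, and field names are illustrative assumptions rather than course material; dlt infers and evolves the destination schema for you.

```python
import dlt
from dlt.sources.helpers.rest_client import RESTClient

client = RESTClient(base_url="https://api.example.com")  # placeholder API

@dlt.resource(
    table_name="issues",
    write_disposition="merge",  # upsert on the primary key, so re-runs don't duplicate rows
    primary_key="id",
)
def issues(updated_at=dlt.sources.incremental("updated_at", initial_value="1970-01-01")):
    # dlt keeps the cursor in pipeline state, so only new or changed records are fetched next run
    for page in client.paginate("/issues", params={"since": updated_at.last_value}):
        yield page

pipeline = dlt.pipeline(
    pipeline_name="issues_pipeline",
    destination="duckdb",  # swap for your warehouse of choice
    dataset_name="raw_issues",
)
print(pipeline.run(issues))
```

Run it once, then run it again: the merge write disposition and the stored cursor keep the destination consistent instead of piling up duplicates.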
Instructor's Bio

Adrian Brudaru
Co-Founder of dltHub
Adrian spent five years building end-to-end data systems for startups in Berlin before moving into freelance work. Over the next five years, he led data projects focused on building teams and infrastructure. Drawing from this experience, he founded dltHub to tackle the challenges he repeatedly encountered - this time at scale.
Webinar
- Workshop "Production-Ready Data Ingestion for Recovering Pandas Users"

Ai+ Training
- Training recording (free preview)
- Slides
- Additional information