Course Abstract

Training duration: 90 minutes

Learn the basics of building a PyTorch model using a structured, incremental, first-principles approach. Find out why PyTorch is the fastest-growing deep learning framework and how to make use of its capabilities: autograd, the dynamic computation graph, model classes, data loaders, and more. The main goal of this training is to show you how PyTorch works: we will start with a simple and familiar example in NumPy and "torch" it! By the end, you should be able to understand PyTorch's key components and how to assemble them into a working model.

DIFFICULTY LEVEL: BEGINNER

Learning Objectives

  • Understand the basic building blocks of PyTorch: tensors, autograd, models, optimizers, losses, datasets, and data loaders

  • Identify the basic steps of gradient descent, and how to use PyTorch to make each one of them more automatic

  • Build, train, and evaluate a model using mini-batch gradient descent

Instructor

Instructor Bio:

Daniel Voigt Godoy

Manager, Financial Advisory Analytics | Deloitte; Dean | Data Science Retreat

Daniel is a data scientist, teacher, and author of "Deep Learning with PyTorch Step-by-Step: A Beginner's Guide". He has been teaching machine learning and distributed computing technologies at Data Science Retreat, the longest-running Berlin-based bootcamp, since 2016, helping more than 150 students advance their careers. Daniel is also the main contributor to two Python packages: HandySpark and DeepReplay. His professional background includes 20 years of experience working for companies in several industries: banking, government, fintech, retail, and mobility.

Course Outline

Module 1: PyTorch: tensors, tensors, tensors 

• Introducing a simple and familiar example: linear regression   

• Generating synthetic data

• Tensors: what they are and how to create them

• CUDA: GPU vs. CPU tensors

• Parameters: tensors meet gradients 
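
To give a flavor of this module, here is a minimal sketch covering the bullets above. The synthetic dataset and the true values (1.0 and 2.0) are illustrative, not necessarily the ones used in the training:

```python
import numpy as np
import torch

# Synthetic data for a linear regression: y = 1 + 2x + noise
np.random.seed(42)
x = np.random.rand(100, 1)
y = 1.0 + 2.0 * x + 0.1 * np.random.randn(100, 1)

# NumPy arrays become PyTorch tensors; .to(device) sends them to the GPU, if available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
x_tensor = torch.as_tensor(x).float().to(device)
y_tensor = torch.as_tensor(y).float().to(device)

# Parameters are tensors that require gradients
b = torch.randn(1, requires_grad=True, dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, dtype=torch.float, device=device)
```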


Module 2: Gradient Descent in Five Easy Steps 

• Step 0: initializing parameters    

• Step 1: making predictions in the forward pass

• Step 2: computing the loss, or “how bad is my model?”    

• Step 3: computing gradients, or “how to minimize the loss?”    

• Step 4: updating parameters    

• Bonus: learning rate, the most important hyper-parameter    

• Step 5: rinse and repeat 
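
A minimal sketch of the five steps, using plain NumPy and manually derived gradients for the linear regression example (names and values are illustrative):

```python
import numpy as np

# Same synthetic data as before: y = 1 + 2x + noise
np.random.seed(42)
x = np.random.rand(100, 1)
y = 1.0 + 2.0 * x + 0.1 * np.random.randn(100, 1)

# Step 0: initialize parameters randomly
b, w = np.random.randn(1), np.random.randn(1)

lr = 0.1                             # learning rate, the most important hyper-parameter
for epoch in range(1000):            # Step 5: rinse and repeat
    yhat = b + w * x                 # Step 1: forward pass (predictions)
    error = yhat - y
    loss = (error ** 2).mean()       # Step 2: loss (mean squared error)
    b_grad = 2 * error.mean()        # Step 3: gradients of the loss
    w_grad = 2 * (x * error).mean()  #         w.r.t. b and w
    b = b - lr * b_grad              # Step 4: update parameters
    w = w - lr * w_grad

print(b, w)  # should approach the true values (1.0 and 2.0)
```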


Module 3: Autograd, your companion for all your gradient needs! (15 min)    

• Computing gradients automatically with the backward method    

• Dynamic Computation Graph: what is that?   

• Optimizers: updating parameters, the PyTorch way

• Loss functions in PyTorch 
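
A sketch of the same loop, now letting autograd, an optimizer, and a built-in loss function do the heavy lifting. It assumes the x_tensor and y_tensor tensors from the Module 1 sketch:

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
b = torch.randn(1, requires_grad=True, dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, dtype=torch.float, device=device)

optimizer = torch.optim.SGD([b, w], lr=0.1)  # updates parameters the PyTorch way
loss_fn = torch.nn.MSELoss()                 # one of PyTorch's built-in loss functions

for epoch in range(1000):
    yhat = b + w * x_tensor        # the forward pass builds the dynamic computation graph
    loss = loss_fn(yhat, y_tensor)
    loss.backward()                # autograd computes all the gradients
    optimizer.step()               # updates b and w using those gradients
    optimizer.zero_grad()          # clears the gradients before the next iteration
```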


Module 4: Building a Model in PyTorch 

• Your first custom model in PyTorch    

• Peeking inside a model with state dictionaries

• The importance of setting a model to training mode    

• Nested models, layers, and sequential models

• Organizing our code: the training step 
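
A sketch of what a first custom model and a training-step helper might look like. The names ManualLinearRegression and make_train_step_fn are illustrative, not prescribed by the course:

```python
import torch
import torch.nn as nn

class ManualLinearRegression(nn.Module):
    """A first custom model: a single linear layer wrapped in a Module."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1)  # nested layers; nn.Sequential is an alternative

    def forward(self, x):
        return self.linear(x)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = ManualLinearRegression().to(device)
print(model.state_dict())              # peeking inside the model with its state dictionary

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

def make_train_step_fn(model, loss_fn, optimizer):
    """Organizing the code: builds a function that performs one training step."""
    def train_step(x, y):
        model.train()                  # sets the model to training mode
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        return loss.item()
    return train_step

train_step = make_train_step_fn(model, loss_fn, optimizer)
```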


Module 5: Datasets and data loaders    

• Your first custom dataset in PyTorch   

• Data loaders and mini-batches    

• Evaluation phase: setting up the stage   

• Organizing our code: the training loop   

• Putting it all together: data preparation, model configuration, and model training

• Taking a break: saving and loading models
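
A sketch putting the pieces together: a custom dataset, a data loader yielding mini-batches, the training loop, a quick evaluation pass, and checkpointing. It assumes the model, optimizer, device, and train_step objects from the earlier sketches, plus CPU tensors x_train and y_train holding the synthetic data:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class CustomDataset(Dataset):
    """A first custom dataset: simply wraps feature and label tensors."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __getitem__(self, index):
        return self.x[index], self.y[index]

    def __len__(self):
        return len(self.x)

# x_train / y_train are assumed to be CPU tensors holding the synthetic data
train_loader = DataLoader(CustomDataset(x_train, y_train), batch_size=16, shuffle=True)

losses = []
for epoch in range(200):                             # the training loop
    for x_batch, y_batch in train_loader:            # mini-batches, one at a time
        x_batch = x_batch.to(device)                 # batches are sent to the device here
        y_batch = y_batch.to(device)
        losses.append(train_step(x_batch, y_batch))  # train_step from the Module 4 sketch

# Evaluation phase: eval mode, no gradients
model.eval()
with torch.no_grad():
    val_pred = model(x_batch)

# Taking a break: saving (and later loading) a checkpoint
checkpoint = {'model_state_dict': model.state_dict(),
              'optimizer_state_dict': optimizer.state_dict()}
torch.save(checkpoint, 'model_checkpoint.pth')
model.load_state_dict(torch.load('model_checkpoint.pth')['model_state_dict'])
```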

Background knowledge

  • This course is for current or aspiring Data Scientists, Machine Learning Engineers, and Deep Learning Practitioners

  • Knowledge of the following tools and concepts:

  • Python, Jupyter notebooks, NumPy, and, preferably, object-oriented programming.

  • Familiarity with basic machine learning concepts may be helpful, but is not required.

Real-world applications

  • Several companies are already “powered by PyTorch”: Facebook, Tesla, OpenAI, and Uber, to name a few.

  • PyTorch can be used to develop deep learning models for a wide range of applications, from natural language processing to self-driving cars.