To safely navigate roads and adapt to our ever-evolving cityscape, self-driving vehicles need a deep understanding of the world around them. We’ll explore how Waymo uses real-world and simulated data with deep learning to unlock new capabilities and build safer autonomous vehicles. We’ll also cover how Waymo uses data to develop machine learning at scale as it expands to new cities and geographies. As the only autonomous vehicle company that designs the full suite of self-driving hardware and software in-house, Waymo has developed an integrated system that allows each of its vehicles to process a large and diverse set of sensor data and make informed, real-time decisions. We’ll touch on the benefits of designing and developing our own hardware (lidar, radar, and cameras), software, and manufacturing systems; how deep learning is used across the self-driving stack, from mapping to perception, behavior prediction, and more; and what machine learning infrastructure is required to develop self-driving technology at scale.
Chen Wu is a Senior Engineering Manager at Waymo, a self-driving technology company with a mission to make it safe and easy for people and things to move around. In her role, Chen leads a team responsible for how Waymo’s vehicles use the company’s custom sensors, including cameras, lidar, and radar, to see the world around them and recognize objects such as other cars, pedestrians, and cyclists. Her team develops a wide range of machine learning techniques across individual sensor modalities and sensor fusion, applying them in the vehicle’s perception system to enable real-time decisions. Prior to Waymo, Chen worked at Google on the algorithms that optimize photo speed and quality for Google Glass. Before that, she was at YouTube, where she used machine learning to enable 2D videos to be viewed in 3D. Chen holds a Ph.D. and an M.S. in electrical engineering from Stanford University and a B.S. in control theory from Tsinghua University in China.