Session Overview

In a recent experiment, we used an uncommon convolutional architecture to invert multi-channel 2D surface-acquired remote sensing data into 3D volumetric models. The traditional deterministic inversion often takes months of compute time; our deep learning model reduces it to tens of milliseconds. The experiment surfaced several useful techniques through empirical exploration: an embedding strategy based on eigenvector decomposition at feature-engineering time, a brute-force conversion of 2D information into 3D in latent space, and an exhaustive search across several model architectures, including generative and fully convolutional models.
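The abstract does not spell out the eigenvector-decomposition embedding, so the following is only a minimal sketch of one plausible reading: a per-pixel, cross-channel PCA-style projection of the multi-channel 2D survey onto its leading eigenvectors. The function name `eigen_embed`, the choice of `k`, and the array shapes are all illustrative assumptions, not the authors' actual feature transform.

```python
import numpy as np

def eigen_embed(survey: np.ndarray, k: int = 8) -> np.ndarray:
    """Hypothetical PCA-style embedding of a multi-channel 2D survey.

    survey : array of shape (C, H, W), C co-registered 2D channels.
    k      : number of leading eigenvectors to keep (illustrative choice).
    Returns an array of shape (k, H, W).
    """
    c, h, w = survey.shape
    # Flatten spatial dims: each pixel becomes one C-dimensional sample.
    x = survey.reshape(c, -1).T                # (H*W, C)
    x = x - x.mean(axis=0, keepdims=True)      # center each channel
    # Eigen-decompose the cross-channel covariance matrix.
    cov = np.cov(x, rowvar=False)              # (C, C)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :k]              # leading k eigenvectors
    embedded = x @ top                         # (H*W, k)
    return embedded.T.reshape(k, h, w)

# Example on stand-in random data: 16 channels on a 128x128 grid.
features = eigen_embed(np.random.randn(16, 128, 128), k=8)
print(features.shape)  # (8, 128, 128)
```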

Join me for a description of the use case, a detailed technical walkthrough of the methods used to solve the problem, and a demonstration of the system on physically realistic synthetic data. I'll describe the model architectures that failed, explain the most successful architecture, and show how we arrived at the non-intuitive feature transforms that worked best. As this is ongoing work, I'd welcome the audience's thoughts on our process and, in an extended question-and-answer session, want to brainstorm future work to improve accuracy and speed up training. These methods apply to any signal processing problem that requires reconstructing a higher-dimensional space, so the work should interest practitioners in computer vision, digital signal processing, and volumetric space manipulation.
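The winning architecture is not detailed in this abstract, so the sketch below shows only one common way to realize a latent 2D-to-3D "dimensionality exchange": a fully convolutional 2D encoder whose output channels are reinterpreted as a depth axis times a feature axis, followed by 3D convolutions that refine the volume. The class name `DimExchangeNet` and all layer widths, depths, and kernel sizes are assumptions for illustration, not the architecture presented in the talk.

```python
import torch
import torch.nn as nn

class DimExchangeNet(nn.Module):
    """Minimal sketch of a 2D-to-3D 'dimensionality exchange' network.

    A 2D fully convolutional encoder emits depth * feat channels per pixel;
    those channels are reshaped into a depth axis of length `depth` with
    `feat` features each, then refined by 3D convolutions into a
    single-channel volume. All sizes here are illustrative.
    """

    def __init__(self, in_ch: int = 8, depth: int = 32, feat: int = 16):
        super().__init__()
        self.depth, self.feat = depth, feat
        self.encoder2d = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(128, depth * feat, kernel_size=1),   # channels -> depth x features
        )
        self.decoder3d = nn.Sequential(
            nn.Conv3d(feat, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=3, padding=1),    # single-channel volume
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        z = self.encoder2d(x)                          # (B, depth*feat, H, W)
        z = z.view(b, self.feat, self.depth, h, w)     # (B, feat, depth, H, W)
        return self.decoder3d(z)                       # (B, 1, depth, H, W)

# Example: 8-channel 128x128 surface data -> a 32x128x128 volume.
model = DimExchangeNet(in_ch=8, depth=32, feat=16)
vol = model(torch.randn(1, 8, 128, 128))
print(vol.shape)  # torch.Size([1, 1, 32, 128, 128])
```

The reshape step is what trades channel capacity for an explicit depth axis; alternatives such as learned 3D upsampling from a bottleneck would serve the same purpose.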


Overview

  • Inversion of 2D Remote Sensing Data to 3D Volumetric Models Using Deep Dimensionality Exchange

    • Abstract & Bio