Session Overview

Advanced data science techniques such as machine learning are powerful tools for deriving valuable insights from existing data. Platforms like Spark and sophisticated libraries for R, Python, and Scala put these techniques at data scientists' fingertips. The challenge, however, is getting all of the required data into an integrated, central repository; as a result, data scientists can spend up to 80% of a project's time on data acquisition and preparation tasks.

Data virtualization is a modern data integration technique that provides access to data in real time, without physically replicating it. It seamlessly combines views over a wide variety of data sources and feeds AI/ML engines from a common data services layer.

Join us for this demo to learn:

How data virtualization can accelerate data acquisition and preparation, providing the data scientist with a powerful tool to complement their practice

How popular tools from the data science ecosystem (Spark, Python, Zeppelin, Jupyter, etc.) integrate with the Denodo Platform for Data Virtualization (a short sketch follows this list)

How you can use the Denodo Platform efficiently with large data volumes
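
To make the integration concrete, here is a minimal sketch of how a notebook (Jupyter or Zeppelin) might pull a Denodo virtual view into pandas over ODBC, and how Spark could read the same view over JDBC. This is not the demo code: the DSN name, credentials, virtual view name (bank_customers), JDBC URL, and driver class are illustrative assumptions; check the Denodo Platform documentation for the exact connection settings in your environment.

    # Sketch: querying a Denodo virtual view from a notebook.
    # All connection details (DSN, host, port, database, view name) are placeholders.
    import pandas as pd
    import pyodbc

    # Pull an integrated view into pandas through an ODBC DSN that points at
    # the Denodo server; "bank_customers" is a hypothetical virtual view that
    # already combines data from several underlying sources.
    conn = pyodbc.connect("DSN=DenodoODBC;UID=data_scientist;PWD=secret")
    df = pd.read_sql("SELECT customer_id, age, balance, churned FROM bank_customers", conn)
    conn.close()
    print(df.head())

    # The same view can be read from Spark over JDBC, so Denodo can delegate
    # (push down) work to the underlying sources. The URL format and driver
    # class are assumptions about the Denodo JDBC driver.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("denodo-demo").getOrCreate()
    spark_df = (
        spark.read.format("jdbc")
        .option("url", "jdbc:vdb://denodo-host:9999/customer_db")
        .option("driver", "com.denodo.vdp.jdbc.Driver")
        .option("dbtable", "bank_customers")
        .option("user", "data_scientist")
        .option("password", "secret")
        .load()
    )
    spark_df.printSchema()

Because the joins and transformations live in the virtual view, the notebook and Spark code stay the same even when the underlying sources change.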


Overview

  1. The Role of Data Virtualization in AI/ML Projects - A Demonstration

     • Abstract & Bio
