Description

Machine learning has the potential to transform businesses because it can deliver non-obvious, valuable insights from massive amounts of data. Programming languages and tools such as Python and TensorFlow are helping data scientists build ever more sophisticated machine learning models. However, putting these models into production in the enterprise, across hybrid architectures and multi-cloud environments, requires data science teams to tackle major issues, including scalability, speed, and massive data movements between platforms.

While there are distributed analytics databases designed specifically to handle massive workloads with high concurrency, they usually offer only a limited set of machine learning capabilities. What if there were a way to combine the power and flexibility of machine learning tools with the speed and scalability of massively parallel processing (MPP) analytical databases?

In this talk, we will demonstrate how you can combine machine learning tools like Python and TensorFlow with an MPP analytics platform to fully leverage the potential of your big data, breaking free of common speed, security, and deployment constraints by putting your models into production across hundreds of nodes in a cluster.
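To make the pattern concrete, here is a minimal sketch of what "training and scoring inside the MPP cluster" can look like when driven from Python. It assumes a Vertica cluster reachable through the vertica_python driver and uses Vertica's in-database logistic regression functions; the host name, credentials, and the table and column names (customer_features, churned, new_customers, and so on) are purely illustrative, and the exact function names and options depend on your platform and version.

```python
# Illustrative sketch: drive in-database training and scoring from Python.
# Assumes a Vertica cluster and the vertica_python client library;
# all table, column, and model names below are hypothetical.
import vertica_python

conn_info = {
    "host": "vertica-node-1.example.com",  # any node in the cluster
    "port": 5433,
    "user": "dbadmin",
    "password": "********",
    "database": "analytics",
}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()

# Train a logistic regression model in-database: the training data never
# leaves the cluster, and the computation is distributed across the nodes.
cur.execute("""
    SELECT LOGISTIC_REG('churn_model', 'customer_features',
                        'churned', 'tenure, monthly_charges, support_calls')
""")

# Score new rows in parallel on the same cluster and persist the results.
cur.execute("""
    CREATE TABLE churn_scores AS
    SELECT customer_id,
           PREDICT_LOGISTIC_REG(tenure, monthly_charges, support_calls
                                USING PARAMETERS model_name='churn_model') AS churn_pred
    FROM new_customers
""")
conn.commit()
conn.close()
```

The design point of the talk is visible in the sketch: Python orchestrates the workflow, while the heavy lifting (training and batch scoring) runs where the data already lives, avoiding large data movements between platforms.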

Instructor's Bio

Waqas Dhillon, Technical Product Manager for Machine Learning at Vertica

Waqas is a Technical Product Manager for Machine Learning at Vertica. He drives the development of distributed in-database machine learning algorithms and integrations with open-source tools and platforms, including Python and TensorFlow. He is passionate about the productization of machine learning models in enterprise environments. Previously, Waqas worked with CPG and telecommunications companies on the use of big data analytics for consumer research and revenue growth.


Webinar

  • Putting Machine Learning Models into Production on MPP Platforms