Course Abstract

Natural Language Processing (NLP) and Conversational AI have been transforming industries such as Search, Social Media, Automation, Contact Centers, Assistants, and eCommerce. The field has undergone several phases of research and development. Prior to the 1990s, most systems were purely rule-based. Machine-learning-based systems came next; however, it was still hard to manage multiple domains and scenarios. After 2013, transfer learning and deep learning substantially improved performance and allowed systems to scale to millions of users across a variety of applications. More recently, Transformer-based models such as BERT and GPT-3 have taken the AI research and product community by storm. Despite significant progress in the past decade, most systems still rely on large amounts of annotated data or require experts in the loop. With recent no-code and low-code tools, developers with minimal coding or NLP/Conversational AI background can now build such applications. In this course, I will start with background on NLP, Conversational AI, and deep learning, and then gradually move toward advanced topics such as Transformer-based large-scale language models (e.g., BERT) and building complex conversational bots. The objective is to start with really simple examples and illustrations and then gradually work up to the building blocks of state-of-the-art NLP and Conversational AI techniques such as Transformers. I will walk the audience through hands-on examples and show how to leverage these techniques in their own applications. Having been at the forefront of cutting-edge research and development as a leader at companies like Amazon Alexa, Uber AI, and Got It AI, I will share experiences that can help the audience quickly obtain a good overview of the field of NLP and Conversational AI.


Key Skills and Tools

  • Python

  • PyTorch

  • Huggingface

  • Rasa

Learning Objectives

  • Get to know the background of NLP and Conversational AI applications such as Alexa and Google Assistant, and the techniques behind them

  • Understand the techniques through hands-on examples and obtain guidance on how to leverage them in various applications

  • Build NLP and Conversational AI applications powered by state-of-the-art technologies

Instructor Bio:

Chandra Khatri

Chief Scientist and Head of AI Research | Got It AI

Chandra Khatri is one of the leading experts in the field of Conversational AI and Multi-modal AI. Currently, he is the Chief Scientist and Head of AI at Got It AI, while also serving as the CTO of the BITSAA Silicon Valley Chapter. He is best known for leveraging cutting-edge research and technologies to transform products, thereby impacting hundreds of millions of users. At Got It AI, he is leading efforts to transform the AI space by leveraging state-of-the-art technologies to deliver Self-Discovering, Self-Training, and Self-Optimizing products. Under his leadership, Got It AI is democratizing Conversational AI and related ecosystems through automation. Prior to Got It AI, Chandra led various applied research projects at Uber AI, such as Conversational AI, Multi-modal AI, and Recommendation Systems. Before Uber AI, he was a founding member of the Alexa Prize competition at Amazon, where he led R&D and had the opportunity to significantly advance the field of Conversational AI, particularly open-domain dialog systems, which are considered the holy grail of Conversational AI and one of the open-ended problems in AI. Before Alexa AI, he drove NLP, Deep Learning, and Recommendation Systems applied research at eBay. He graduated from Georgia Tech with a specialization in Deep Learning in 2015 and holds an undergraduate degree from BITS Pilani, India (2012). His current areas of research include Artificial and General Intelligence, Democratization of AI, Reinforcement Learning, Language Understanding, Conversational AI, Multi-modal and Human-agent Interactions, and introducing common sense within artificial agents.

Course Outline

1. Introduction 

  • Who am I?
  • Course Overview and Objectives

2. NLP and Conversational AI: Background and Motivation 

  • Motivation and Challenges
  • Standard Tasks in NLP: 
      • Classification
      • Generation
      • Parsing
      • Machine Translation
      • Summarization
  • Standard Applications and Tasks in Conversational AI: 
      • Types of Conversational AI Systems
      • Components of Conversational AI Systems

3. Deep Learning Background & Motivation 

  • Motivation
  • Standard Deep Learning Techniques: 
      • Neural Networks (NN)
  • Standard Deep Learning based NLP Tasks: 
      • Classification
      • Generation: Sequence-to-Sequence via Encoders and Decoders
  • Hands-on Code Examples
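To make the neural-network building block concrete before moving to full frameworks like PyTorch, a single sigmoid neuron trained with gradient descent can be sketched in plain Python. This is only an illustrative toy (the dataset, hyperparameters, and function names below are made up for this sketch, not taken from the course materials):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, epochs=2000, lr=0.5):
    """Train a single sigmoid neuron (the simplest 'NN') with gradient descent."""
    w = [0.0, 0.0]  # weights, initialized to zero for reproducibility
    b = 0.0         # bias
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)  # forward pass
            err = p - y                                  # log-loss gradient w.r.t. pre-activation
            w[0] -= lr * err * x[0]                      # gradient-descent updates
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Toy dataset: logical AND, which is linearly separable
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(data)
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(preds)  # the neuron reproduces AND: [0, 0, 0, 1]
```

Deep networks stack many such units and learn them with the same forward-pass/backward-pass pattern, which frameworks like PyTorch automate.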

4. Building Blocks of NLP 

  • Featurization & Embeddings: 
      • Classic: One-Hot and TF-IDF Featurization
      • Advantages of word embeddings such as Word2vec
  • Transformers: 
      • Contextual and Dynamic Embeddings through Transformers such as BERT
  • Language Models: 
      • Pre-trained Language Models and their applications (BERT, GPT-3)
  • Leveraging state-of-the-art NLP toolkits such as Huggingface
  • Hands-on Coding: 
      • Classification: Sentiment Analysis
      • Tagging: Named Entity Recognition
      • Generation: Question Answering
      • Machine Translation
      • Text Summarization
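The classic TF-IDF featurization listed above fits in a few lines of plain Python; the tiny corpus and helper name below are invented for illustration (in practice one would use a library such as scikit-learn or Huggingface tokenizers):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Compute TF-IDF vectors for a list of tokenized documents (classic featurization)."""
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(term for doc in docs for term in set(doc))
    vocab = sorted(df)
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vec = [(tf[t] / len(doc)) * math.log(n / df[t]) for t in vocab]  # tf * idf
        vectors.append(vec)
    return vocab, vectors

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "the bird and the cat are pets".split(),
]
vocab, vecs = tf_idf_vectors(docs)

# "the" appears in every document, so its idf = log(3/3) = 0 and it contributes
# nothing: TF-IDF down-weights uninformative words automatically.
print(vecs[0][vocab.index("the")])  # 0.0
print(vecs[0][vocab.index("cat")] > 0)  # True: "cat" is discriminative here
```

Unlike such static, count-based vectors (or even Word2vec, where each word has one fixed vector), Transformer models like BERT produce contextual embeddings: the same word gets different vectors depending on its surrounding sentence.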

5. Conversational AI

  • Evolution of Conversational AI
  • Components of Conversational AI in detail: 
      • Automatic Speech Recognition (ASR)
      • Natural Language Understanding (NLU)
      • Dialog Manager and Dialog State Tracking
      • Natural Language Generation (NLG)
      • Text-to-Speech (TTS)
  • Leveraging state-of-the-art Conversational AI toolkits such as Rasa
  • Hands-on Code: 
      • Rasa Tutorial
      • Building assistants such as Alexa and Google Assistant
      • Building bots leveraging state-of-the-art NLP models such as Transformers (e.g., BERT features)
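The NLU → dialog state tracking → NLG loop above can be sketched as a minimal text bot in pure Python. The intents, keywords, and response templates here are invented for illustration; real toolkits such as Rasa replace the keyword matcher with a trained NLU model (optionally using BERT features) and learn dialog policies from annotated conversations:

```python
# Keyword-based intent classifier: a stand-in for a trained NLU model.
INTENT_KEYWORDS = {
    "greet": {"hello", "hi", "hey"},
    "order_pizza": {"pizza", "order"},
    "goodbye": {"bye", "goodbye"},
}

# Template-based NLG: one canned response per intent.
RESPONSES = {
    "greet": "Hello! How can I help you?",
    "order_pizza": "Sure, what toppings would you like?",
    "goodbye": "Goodbye!",
    "fallback": "Sorry, I didn't understand that.",
}

def nlu(utterance):
    """Map an utterance to an intent by keyword overlap."""
    tokens = set(utterance.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if tokens & keywords:
            return intent
    return "fallback"

def bot_turn(utterance, state):
    """One dialog turn: understand, update the dialog state, generate a response."""
    intent = nlu(utterance)
    state["history"].append(intent)   # dialog state tracking
    return RESPONSES[intent]          # template-based NLG

state = {"history": []}
print(bot_turn("hi there", state))        # Hello! How can I help you?
print(bot_turn("I want a pizza", state))  # Sure, what toppings would you like?
print(state["history"])                   # ['greet', 'order_pizza']
```

A voice assistant such as Alexa wraps this same loop with ASR in front of the NLU and TTS after the NLG, which is why the components are taught in that order.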

6. Conclusion and Recap 

  • Overview of learnings
  • Best practices and recommendations
  • Additional resources

Background Knowledge

  • Python