Session Overview

Social media platforms have become a hotbed for hate speech: they are increasingly exploited to propagate harmful content and abusive language, and violence and hate crimes attributed to online hate speech have risen worldwide.

It has become increasingly important to build AI systems that can automatically identify hate speech in text content. However, most machine learning classifiers rely on supervised training, and the shortage of labeled training data is one of the biggest challenges in building highly accurate hate-speech detection models.

Self-supervised learning has become a prominent approach to this kind of problem, because it allows a model to learn from entirely unlabeled data.
One of the breakthroughs of self-supervised learning in NLP is the Transformer. To compute the representation of a text sequence, the Transformer relies entirely on a self-attention mechanism that relates different positions of the sequence to one another. The Transformer marks an important paradigm shift in how we understand and model language. It is also behind the recent NLP developments of 2019, including Google’s BERT self-supervised language understanding model and Facebook AI’s multilingual language model XLM/mBERT.
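To make the self-attention idea above concrete, here is a minimal sketch of scaled dot-product self-attention in NumPy. The dimensions, the random projection matrices, and the single attention head are illustrative assumptions; a real Transformer adds multi-head projections, masking, and learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv project tokens to
    queries, keys, and values. Every position attends to every other
    position, so each output row mixes information from the whole sequence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len) relevance
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

# Toy example: 4 tokens with 8-dim embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)             # (4, 8)
print(weights.sum(axis=-1))  # each attention row sums to 1
```

Relating "different positions of the sequence" is exactly the `Q @ K.T` step: it scores every token pair, and the softmax turns those scores into mixing weights.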

We will learn how the Transformer works and how it relates to language representation.
We will learn how to leverage self-supervised language understanding models, such as BERT’s embeddings, together with a small amount of labeled data to build ML models that automatically identify hate speech in text content with high accuracy.
We will also walk through the details of the different algorithms and code implementations to give you a hands-on learning experience.
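The "pretrained embeddings plus a small labeled set" recipe can be sketched as follows. To keep the example self-contained, the frozen BERT sentence embeddings are replaced by fabricated feature vectors (an assumption; in practice you would encode each text with a pretrained encoder and keep it frozen), and the classifier head is a hand-rolled logistic regression:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for frozen BERT sentence embeddings. In practice these would
# come from a pretrained encoder applied to each text; here we fabricate
# two class-dependent clusters so the example runs on its own.
d = 32                                   # hypothetical embedding size
n = 100                                  # small labeled training set
y = rng.integers(0, 2, size=n)           # 1 = hate speech, 0 = benign
X = rng.normal(size=(n, d)) + 2.0 * y[:, None]  # class-dependent shift

# Logistic-regression head trained by gradient descent. Only w and b are
# learned, which is why a small amount of labeled data can suffice when
# the (frozen) embeddings already encode linguistic meaning.
w, b = np.zeros(d), 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= lr * (X.T @ (p - y) / n)
    b -= lr * (p - y).mean()

acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The design point is the division of labor: the expensive representation learning happens once, self-supervised and label-free, and only the tiny linear head needs labeled hate-speech examples.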


Overview

    Self-Supervised Learning and Natural Language Processing for Hate Speech Detection

    • Abstract & Bio
