Description
In this session, you'll discover how to improve the security of Large Language Models (LLMs). We'll show you how to apply red-teaming techniques from cybersecurity to identify and evaluate vulnerabilities in LLM applications, ensuring their safety and reliability.
Additionally, you'll learn how Giskard's tools can be integrated into your workflow for automatic vulnerability detection, allowing you to scale your security efforts for Generative AI.
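As a rough illustration of the kind of integration covered in the session, the sketch below uses the Giskard Python library's scan API to probe a wrapped LLM application for vulnerabilities. The `my_llm_app` object, the prediction function, and the model metadata are hypothetical placeholders, and running the LLM scan also assumes an LLM client (e.g. an OpenAI API key) is configured for Giskard.

```python
import giskard
import pandas as pd

# Hypothetical prediction function wrapping your LLM application:
# it receives a DataFrame of inputs and returns the generated answers.
def predict(df: pd.DataFrame) -> list:
    return [my_llm_app.answer(question) for question in df["question"]]

# Wrap the application so Giskard knows how to call it.
model = giskard.Model(
    model=predict,
    model_type="text_generation",
    name="Customer support assistant",          # illustrative metadata
    description="Answers customer questions about our product.",
    feature_names=["question"],
)

# Run the automated scan, which probes the model with red-teaming-style
# prompts and reports detected issues (e.g. prompt injection, harmful
# content, hallucination).
scan_report = giskard.scan(model)

# Export the findings for review and sharing.
scan_report.to_html("llm_scan_report.html")
```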
Instructor's Bio

Alexandre Landeau
Data Science Lead at Giskard
Alexandre collaborates closely with clients to understand their needs and help them secure their models against performance issues, cybersecurity threats, and ethical biases. Previously, he spent five years as a Data Scientist and full-stack Software Engineer across several domains.
Webinar
ON-DEMAND WEBINAR: "Secure LLM App Deployments—Strategies and Tactics" (Ai+ Training webinar recording)