Description
In this session, you'll discover how to improve the security of Large Language Models (LLMs). We'll show you how to use red-teaming techniques from cybersecurity to identify and evaluate vulnerabilities in LLM applications, ensuring its safety and reliability.
Additionally, you'll learn how Giskard's tools can be integrated into your workflow for automatic vulnerability detection, allowing you to scale your security efforts for Generative AI.
Instructor's Bio
Alexandre Landeau
Data Science Lead at Giskard
Alexandre collaborates closely with clients to understand their needs and have them secure their models against performance issues, cybersecurity threats, and ethical biases. Previously, he spent 5 years as a Data Scientist and Full-stack Software Engineer in several domains.
Webinar
-
1
ON-DEMAND WEBINAR: "Secure LLM App Deployments—Strategies and Tactics"
-
Ai+ Training
-
Webinar recording
-
UPCOMING LIVE TRAINING
Register now to save 30%
-
All Courses
Generative AI Fundamentals
39 Lessons Free -
All Courses
Gradient Boosting Series - 4 courses Program
1 Lessons $137.00 -
All Courses, All Live Training
PAST LIVE TRAINING: Available On-Demand: Google BigQuery and Colab Notebooks: Develop Cloud, SQL, and Python Skills Using Public Data
2 Lessons $147.00