Duration: 2 Hours
"How do I know if what comes out of my agentic LLM application is correct?”
“What does a good output look like?”
“How can I avoid hallucinations and wrong answers?”
Just as in 2024, everyone working to develop production LLM applications is asking these questions, and rightly so!
This year, however, agents are on the rise, as is the number of companies building, shipping, and sharing LLM application prototypes.
In this event, we’ll explore the latest on agent evaluation from the leading LLM application evaluation framework: RAG Assessment (RAGAS).

📚 You’ll learn:
- How to think about assessing your agent applications quantitatively, with leading best-practice metrics (see the sketch below)
- How agentic workflows are being assessed at the LLM edge

🤓 Who should attend the event:
- Aspiring AI Engineers who want to build and evaluate production-grade agent applications
- AI Engineering leaders who want to instrument their agent deployments with leading evaluators
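To make the metrics discussion concrete, here is a minimal sketch of one such agent metric in RAGAS: scoring whether an agent made the tool calls we expected. It assumes a recent RAGAS release (v0.2+) that exposes `MultiTurnSample` and the `ToolCallAccuracy` metric; the `weather_check` tool and its arguments are purely illustrative, and exact import paths may differ across versions.

```python
import asyncio

from ragas.dataset_schema import MultiTurnSample
from ragas.messages import AIMessage, HumanMessage, ToolCall
from ragas.metrics import ToolCallAccuracy

# A multi-turn sample pairs the agent's actual conversation trace with the
# reference tool calls we expected it to make for this request.
# (weather_check is a hypothetical tool used only for illustration.)
sample = MultiTurnSample(
    user_input=[
        HumanMessage(content="What's the weather in New York right now?"),
        AIMessage(
            content="Let me look that up.",
            tool_calls=[ToolCall(name="weather_check", args={"location": "New York"})],
        ),
    ],
    reference_tool_calls=[
        ToolCall(name="weather_check", args={"location": "New York"}),
    ],
)

# ToolCallAccuracy compares the agent's tool calls against the reference,
# checking both tool names and arguments; scoring is async in RAGAS.
scorer = ToolCallAccuracy()
score = asyncio.run(scorer.multi_turn_ascore(sample))
print(score)  # 1.0 when the calls match the reference exactly
```

RAGAS ships similar multi-turn metrics for goal accuracy and topic adherence, which follow the same sample-plus-scorer pattern sketched above.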
Greg Loughnane
Co-Founder & CEO at AI Makerspace
Chris Alexiuk
Co-Founder & CTO at AI Makerspace | DL at NVIDIA