Description

Building proof-of-concept LLM/RAG apps is easy. The next step, which takes the most time and is the most challenging, is bringing the app to a production-ready level: you must increase accuracy, reduce latency and costs, and make results reproducible.

This workshop focuses on evaluating LLM/RAG apps. We will take a simple, predefined agentic RAG system built in LangGraph and learn how to evaluate and monitor it.

Instructor Bio

Paul Iusztin

Senior AI Engineer / Founder at Decoding ML

Paul Iusztin is a senior AI/ML engineer with over seven years of experience building GenAI, Computer Vision, and MLOps solutions. His latest contribution was at Metaphysic, where he was one of the core AI engineers who took large GPU-heavy models to production. He previously worked at CoreAI, Everseen, and Continental. He is the co-author of the LLM Engineer's Handbook, a bestseller on Amazon, which presents a hands-on framework for building LLM applications. Paul is the Founder of Decoding ML, an educational channel on GenAI and information retrieval that provides code, posts, articles, and courses teaching people to build production-ready AI systems that work. His contributions to the open-source community have sparked collaborations with industry leaders like MongoDB, Comet, Qdrant, ZenML, and 11 other AI companies.

Unlock Premium Features with a Subscription

  • Live Training:

    Full access to all live workshops and training sessions.

  • 20+ Expert-Led Workshops:

    Dive deep into AI Agents, RAG, and the latest LLMs.

  • ODSC Conference Discounts:

    Receive extra discounts to attend ODSC conferences.