ODSC AI Builders 2025 Summit - Mastering LLMs
Start 2025 Strong: Master LLMs
Workshops:
Then, we explore modern workflows for adapting SLMs (small language models) with domain-specific pre-training, instruction fine-tuning, and alignment. Along the way, we will introduce and demonstrate open-source tools like DistillKit, Spectrum, and MergeKit, which implement advanced techniques critical to achieving task-specific accuracy, optimizing computational costs, and building accurate and efficient agentic workflows. Join us to learn how smaller, efficient, and adaptable models can transform your AI applications.
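As a rough illustration of the instruction fine-tuning step, the sketch below attaches LoRA adapters to a small base model with Hugging Face PEFT. The checkpoint name and target modules are placeholders, and the workshop's own tools (DistillKit, Spectrum, MergeKit) are not shown here.

```python
# Minimal LoRA fine-tuning setup, assuming the transformers and peft packages.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B"  # placeholder small language model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()         # only the adapter weights are trainable
```

From here the wrapped model can be trained on an instruction dataset with any standard trainer; the base weights stay frozen and only the low-rank adapters are updated.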
There are many different approaches to fine-tuning. In this workshop, we will focus on Memory Tuning. This innovative approach combines two advanced techniques, LoRA (Low-Rank Adaptation) adapters and MoE (Mixture of Experts), to create the Mixture of Memory Experts (MoME) model, pronounced “mommy.”
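For intuition only, here is a toy PyTorch layer that blends several LoRA adapters under a learned router. It is a generic mixture-of-LoRA-experts sketch, not the workshop's MoME implementation; all dimensions and hyperparameters are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAExpert(nn.Module):
    """One low-rank adapter: the update B @ A is applied on top of a frozen base weight."""
    def __init__(self, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.A.T @ self.B.T

class MixtureOfLoRAExperts(nn.Module):
    """Illustrative mixture-of-LoRA-experts layer: a router chooses which
    adapters to blend on top of a frozen base linear projection."""
    def __init__(self, base_linear: nn.Linear, num_experts: int = 4, rank: int = 8, top_k: int = 2):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad_(False)  # base weights stay frozen
        d_in, d_out = base_linear.in_features, base_linear.out_features
        self.experts = nn.ModuleList(LoRAExpert(d_in, d_out, rank) for _ in range(num_experts))
        self.router = nn.Linear(d_in, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        scores = self.router(x)                          # (..., num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                # add each selected expert's contribution, weighted by the router
                mask = (idx[..., k] == e).unsqueeze(-1).float()
                out = out + weights[..., k:k + 1] * mask * expert(x)
        return out
```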
In this workshop, we will dive into the technical implementation details of the Mixture of Memory Experts (MoME) model that makes Memory Tuning computationally feasible at scale. We will step through a real-world text-to-SQL use case so you walk away with the knowledge to tune and evaluate your own models.
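One common way to evaluate a text-to-SQL model is execution accuracy: run the predicted and reference queries against the same database and compare the result sets. The helper below is a hypothetical sketch using SQLite; the function names and the pair format are assumptions, not the workshop's evaluation harness.

```python
import sqlite3

def execution_match(db_path: str, predicted_sql: str, reference_sql: str) -> bool:
    """A prediction counts as correct when it returns the same rows as the
    reference query against the same database (order-insensitive)."""
    conn = sqlite3.connect(db_path)
    try:
        ref_rows = set(conn.execute(reference_sql).fetchall())
        try:
            pred_rows = set(conn.execute(predicted_sql).fetchall())
        except sqlite3.Error:
            return False  # a query that fails to execute counts as wrong
        return pred_rows == ref_rows
    finally:
        conn.close()

def execution_accuracy(db_path: str, pairs: list[tuple[str, str]]) -> float:
    """Fraction of (predicted_sql, reference_sql) pairs that match on execution."""
    hits = sum(execution_match(db_path, pred, ref) for pred, ref in pairs)
    return hits / len(pairs) if pairs else 0.0
```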
Using real-world case studies, we'll cover how to design your evaluators and use them as part of an iterative development process. We'll look at code-based, LLM-as-judge, and human evaluators.
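To make the distinction concrete, here is a hedged sketch of two of those evaluator types: a deterministic code-based check and an LLM-as-judge wrapper. The `call_llm` parameter is a stand-in for whatever client you use to query a judge model; the prompt and scoring scale are illustrative.

```python
from typing import Callable

def exact_answer_evaluator(output: str, expected: str) -> float:
    """Code-based evaluator: a deterministic string comparison."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def llm_judge_evaluator(output: str, expected: str, call_llm: Callable[[str], str]) -> float:
    """LLM-as-judge evaluator: call_llm sends a prompt to a judge model
    and returns its text response (hypothetical plumbing)."""
    prompt = (
        "Score the candidate answer against the reference on a 1-5 scale.\n"
        f"Reference: {expected}\nCandidate: {output}\nReply with a single digit."
    )
    reply = call_llm(prompt)
    digits = [c for c in reply if c.isdigit()]
    return int(digits[0]) / 5.0 if digits else 0.0
```

Human evaluators fit the same interface: a person assigns the score instead of a function, and the results feed the same iteration loop.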
Walk away with a strategy for finding the AI model that's not just good, but perfect for your specific needs.
Our agents will include a frontier model, a proprietary QLoRA fine-tuned LLM, and a RAG pipeline. There's a lot to build, but we will draw on magical open-source libraries from HuggingFace, Gradio, and Chroma to get it all done in time, and the results will be rather satisfying.
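As a taste of the retrieval side, the sketch below builds a tiny in-memory Chroma collection and uses the retrieved passages to assemble a prompt for whichever LLM you pair it with. The documents, IDs, and prompt template are illustrative.

```python
import chromadb

# In-memory Chroma client; documents are embedded with Chroma's default embedder.
client = chromadb.Client()
collection = client.get_or_create_collection(name="workshop_docs")

collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "Memory Tuning combines LoRA adapters with a mixture-of-experts router.",
        "QLoRA fine-tunes a quantized base model with low-rank adapters.",
    ],
)

question = "How does Memory Tuning work?"
results = collection.query(query_texts=[question], n_results=2)
context = "\n".join(results["documents"][0])

# The retrieved context is stuffed into the prompt sent to your chosen LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```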
In addition to walking through a code sample that includes everything needed for deployment on a specific platform (Modal), we'll consider the general problem of serving LLM inference workloads, the available open-source software, and the hardware that software controls.
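The workshop's Modal deployment isn't reproduced here, but as one example of the open-source serving software in this space, the sketch below runs batched offline inference with vLLM. The model name is a placeholder; swap in whatever checkpoint you serve.

```python
# Minimal batched inference with vLLM, which manages batching and the KV cache internally.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder checkpoint
params = SamplingParams(temperature=0.7, max_tokens=64)

prompts = [
    "Explain KV caching in one sentence.",
    "Why does batching improve GPU utilization?",
]
for output in llm.generate(prompts, params):
    print(output.prompt, "->", output.outputs[0].text)
```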
Upcoming Events: