
Description

Live Training Overview

Duration: 2 Hours

Discover how to transform LLMs into powerful agentic systems. Starting with LLM fundamentals, you'll learn to build single agents with tools, then advance to developing a coordinated multi-agent system that can perform financial research, analysis, and visualization tasks.

Rick Chakra

Founder & CEO | Armada IQ

Rick Chakra is a founder, researcher, and builder with over 12 years of experience in designing and developing data and AI systems. He is the Founder & CEO of Armada IQ, an AI strategy and implementation consultancy, and was previously a Senior Consultant in Deloitte’s Applied AI practice. Rick is also an Adjunct Professor of Machine Learning and Computer Vision at UNC Charlotte’s School of Data Science. His current research focuses on exploring the boundaries of machine intelligence through simulation and evaluation techniques.

Training Outline:

Module 1 - LLM Foundations:

  • Start with a recap of core LLM functionality and API interfaces using OpenAI (see the API call sketch after this module's outline)

  • Review structured thinking, tool use, memory, and RAG concepts

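For reference, a minimal chat completion call with the OpenAI Python SDK, the kind of interface recapped in Module 1, looks like the sketch below. The model name is illustrative and not prescribed by the workshop.

```python
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment.
client = OpenAI()

# Minimal chat completion call; "gpt-4o-mini" is an illustrative model choice.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise financial research assistant."},
        {"role": "user", "content": "In one sentence, what is retrieval-augmented generation?"},
    ],
)
print(response.choices[0].message.content)
```
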
Module 2 - Introduction to Agents:

  • Introduce the building blocks of LangChain and LangGraph

  • Implement pre-built agent architectures with tool use and agent memory management (see the agent sketch after this module's outline)

  • Expand to custom agent architectures via graph-based agents

  • Explore system tracing / observability with LangSmith

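As a preview of Module 2's pre-built agent pattern, the sketch below uses LangGraph's create_react_agent with a single tool. The stock-price tool and model name are illustrative placeholders rather than the workshop's actual code.

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def get_stock_price(ticker: str) -> str:
    """Return the latest price for a ticker (stubbed for illustration)."""
    return f"{ticker}: 123.45 USD"

# A pre-built ReAct-style agent: the LLM decides when to call the tool.
llm = ChatOpenAI(model="gpt-4o-mini")
agent = create_react_agent(llm, tools=[get_stock_price])

result = agent.invoke({"messages": [("user", "What is AAPL trading at right now?")]})
print(result["messages"][-1].content)
```

With LANGCHAIN_TRACING_V2=true and a LangSmith API key set in the environment, runs like this are traced in LangSmith with no code changes.
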
Module 3 - Multi-Agent Systems:

  • Develop a specialized multi-agent system to support end-to-end financial research and analysis tasks

  • Create a vector database to house internal financial research

  • Integrate web search to access external financial information

  • Integrate a sandboxed code interpreter to generate analysis and visualizations

  • Develop system control and routing steps, including (see the routing sketch after this module's outline):

      • Routing to internal financial research vs. external financial information

      • Grading the relevance of retrieved documents

      • Detecting model hallucinations

      • Assessing the relevance of model outputs to the user's research question

  • Wrap the system with a Gradio user interface

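To preview Module 3's control and routing ideas, the hypothetical sketch below builds a small LangGraph graph in which a router node sends each question either to a stubbed internal-research branch or a stubbed web-search branch. The node names, state fields, and keyword-based routing rule are placeholders; the workshop's system performs routing, grading, and hallucination checks with LLM calls.

```python
from typing import Literal, TypedDict

from langgraph.graph import END, START, StateGraph

class ResearchState(TypedDict):
    question: str
    route: str
    answer: str

def route_question(state: ResearchState) -> dict:
    # Placeholder rule; a real router would use an LLM classification step.
    internal = "internal" in state["question"].lower()
    return {"route": "vectorstore" if internal else "websearch"}

def choose_route(state: ResearchState) -> Literal["vectorstore", "websearch"]:
    return state["route"]

def search_vectorstore(state: ResearchState) -> dict:
    return {"answer": "stub: retrieved from internal financial research"}

def search_web(state: ResearchState) -> dict:
    return {"answer": "stub: retrieved via web search"}

graph = StateGraph(ResearchState)
graph.add_node("router", route_question)
graph.add_node("vectorstore", search_vectorstore)
graph.add_node("websearch", search_web)
graph.add_edge(START, "router")
graph.add_conditional_edges("router", choose_route)
graph.add_edge("vectorstore", END)
graph.add_edge("websearch", END)

app = graph.compile()
print(app.invoke({"question": "What does our internal research say on Q3 margins?"}))
```
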
Technical requirements:

  • Beginner / intermediate Python experience

  • Basic experience with LLMs / LLM APIs

Required accounts and API keys:

  • OpenAI

  • LangChain (for LangSmith tracing / observability) - free tier

  • Tavily (for web search) - free tier

  • E2B (for sandboxed code interpreter) - free tier

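All of these services read their credentials from environment variables. A minimal notebook setup cell might look like the sketch below; the variable names follow each provider's documented convention, and LangSmith also accepts LANGSMITH_API_KEY.

```python
import os
from getpass import getpass

# Prompt for any missing keys rather than hard-coding them in the notebook.
for key in ["OPENAI_API_KEY", "LANGCHAIN_API_KEY", "TAVILY_API_KEY", "E2B_API_KEY"]:
    if not os.environ.get(key):
        os.environ[key] = getpass(f"Enter {key}: ")

# Optional: enable LangSmith tracing for the rest of the session.
os.environ.setdefault("LANGCHAIN_TRACING_V2", "true")
```
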
Development environment:

  • Participants will clone the student version of the notebooks from GitHub

  • The entire workshop can be run in Google Colab