Description

As enterprises adopt increasingly powerful AI systems, one critical challenge remains: understanding how and why these systems make decisions. Many modern models - especially large language models - operate as opaque “black boxes,” making transparency, compliance, and trust difficult to achieve.

This webinar introduces the AI Explainability Scorecard, a practical framework for evaluating the transparency of AI systems against five criteria: faithfulness, comprehensibility, consistency, accessibility, and optimization clarity. Attendees will learn how different model architectures - from K-Nearest Neighbors to neural networks and transformers - vary dramatically in explainability and risk.

The session will also explore real-world techniques for improving visibility into complex models, including surrogate monitoring approaches that map AI behavior to comparable examples. These methods help organizations build observable, auditable AI systems without sacrificing performance or scalability.
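As a rough illustration of the surrogate-monitoring idea (explaining a black-box decision by retrieving comparable, already-audited examples), here is a minimal sketch. The black-box function, the reference set, and the k-nearest-neighbor retrieval are all illustrative assumptions, not the presenter's actual method or AIceberg's product.

```python
import math

def black_box(x):
    # Stand-in for an opaque model: here, an arbitrary nonlinear rule
    # (predict 1 inside the unit circle). In practice this would be an
    # LLM or other model whose internals we cannot inspect.
    return 1 if x[0] ** 2 + x[1] ** 2 < 1.0 else 0

# Hypothetical reference set of previously audited, labeled examples
# that the surrogate can point a reviewer back to.
reference = [
    ((0.1, 0.2), 1), ((0.3, -0.4), 1), ((0.9, 0.9), 0),
    ((-1.2, 0.5), 0), ((0.0, 0.8), 1), ((1.5, -0.2), 0),
]

def explain(x, k=3):
    """Return the black-box prediction plus the k most comparable
    audited examples, giving a reviewer evidence to judge the decision."""
    pred = black_box(x)
    neighbors = sorted(reference, key=lambda item: math.dist(x, item[0]))[:k]
    return pred, neighbors

pred, support = explain((0.2, 0.1))
```

If the retrieved neighbors carry labels that disagree with the black-box prediction, that disagreement itself becomes an auditable signal, which is the observability benefit the session describes.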

By the end of this session, attendees will understand how to move beyond black-box AI toward transparent, accountable, and secure AI deployment - a critical step for enterprises scaling agentic AI in high-stakes environments.

You will learn:
- Why Explainability Is the Foundation of Trustworthy AI
Learn why transparency isn’t just a technical preference - it’s becoming a legal, ethical, and operational requirement for enterprise AI systems.

- The AI Explainability Scorecard Framework
Understand the five key criteria - faithfulness, comprehensibility, consistency, accessibility, and optimization clarity - for evaluating how explainable an AI model truly is.

- How Different AI Models Compare in Transparency
Discover why some models are inherently interpretable while others require advanced methods to understand their behavior.

- Practical Methods for Making Black-Box AI Observable
Explore modern techniques - including surrogate monitoring models - that help organizations understand and audit large language models at scale.
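To make the scorecard idea above concrete, here is a minimal sketch of how such a rubric might be tallied. The 0-5 scale, the equal weighting, and the sample ratings are all assumptions for illustration; the webinar's actual rubric is not reproduced here.

```python
# The five criteria come from the session description; everything else
# (scale, weights, example ratings) is a hypothetical sketch.
CRITERIA = ["faithfulness", "comprehensibility", "consistency",
            "accessibility", "optimization_clarity"]

def score_model(ratings):
    """Average per-criterion ratings (0-5) into one explainability score."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"missing criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Illustrative ratings only, reflecting the description's claim that
# simple models like K-Nearest Neighbors are far more interpretable
# than large transformer models.
knn_scorecard = {
    "faithfulness": 5, "comprehensibility": 4, "consistency": 5,
    "accessibility": 4, "optimization_clarity": 5,
}
llm_scorecard = {
    "faithfulness": 1, "comprehensibility": 2, "consistency": 2,
    "accessibility": 3, "optimization_clarity": 1,
}
```

A per-criterion breakdown like this is what lets teams compare architectures on transparency rather than relying on a single opaque "trust" rating.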



Instructor's Bio

Michael Novack

Solutions Architect at AIceberg

Webinar

UPCOMING WEBINAR: "Trust What You Can Trace: Making Agentic AI Explainable, Secure, and Enterprise-Ready" (Ai+ Training)