Description

As large language models (LLMs) become more widely adopted, it is crucial to understand how to use them effectively and how to develop, evaluate, operationalize, and monitor copilots in real-world applications. This session provides insights into incorporating responsible AI practices and safety features into your generative AI applications. You will learn how to assess your copilots and generative AI applications, mitigate content-related risks, address hallucinations, jailbreaks, and copyright issues, ensure fairness, and enhance the overall quality and safety of your copilot.


Local ODSC chapter in NYC, USA

Instructor's Bio

Mehrnoosh Sameki

Principal PM Manager, Responsible AI Tools Area Lead at Microsoft

Mehrnoosh is responsible for overseeing product initiatives that focus on responsible Artificial Intelligence and machine learning model understanding tools, such as interpretability, fairness, reliability, and decision-making, within the Open Source and Azure Machine Learning platforms.

She co-founded several open-source repositories, including Fairlearn, Error Analysis, and the Responsible-AI-Toolbox, and is also a contributor to the InterpretML offering. Mehrnoosh holds a Ph.D. in Computer Science from Boston University, where she is currently an Adjunct Assistant Professor teaching courses on responsible AI. Prior to her role at Microsoft, she worked as a Data Scientist in the retail industry, using data science and machine learning to improve customers' personalized shopping experiences.

Webinar

  • ON-DEMAND WEBINAR: Building Responsible and Safe Generative AI Applications

    • Ai+ Training

    • Webinar Recording