Description

Enabling responsible development of artificial intelligence technologies is one of the major challenges we face as the field moves from research to practice. Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learning in many current and future real-world applications. There are now calls from across academia, government, and industry for technology creators to ensure that AI is used only in ways that benefit people and “to engineer responsibility into the very fabric of the technology.” Overcoming these challenges and enabling responsible development is essential to a future where AI and machine learning can be widely used. In this talk we will cover six principles for the development and deployment of trustworthy AI systems: four core principles of fairness, reliability/safety, privacy/security, and inclusiveness, underpinned by two foundational principles of transparency and accountability. We present how each principle plays a key role in responsible AI and what it means to take these principles from theory to practice. We will also cover open source products across different areas of the responsible AI umbrella, particularly transparency and interpretability for tabular and text data and AI fairness, that aim to empower researchers, data scientists, and machine learning developers to take a significant step forward in this space, building trust between users and AI systems.


Responsible AI is an umbrella term for the many themes at the intersection of ethics and AI. One reasonable enumeration is Microsoft’s six principles for AI development: four core principles of fairness, reliability/safety, privacy/security, and inclusiveness, underpinned by two foundational principles of transparency and accountability. For this presentation, we focus on Transparency (Interpretability), Fairness and Inclusiveness, and Privacy as major principles of responsible AI, and we cover best practices and state-of-the-art open source toolkits and offerings that enable researchers, data scientists, machine learning developers, and business stakeholders to build more trustworthy, transparent AI systems.
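As one illustration of the transparency/interpretability theme, here is a minimal sketch using the open source InterpretML package on toy tabular data; the dataset, feature names, and choice of a glass-box model are illustrative assumptions, not examples taken from the talk.

# Minimal sketch (illustrative, not from the talk): a glass-box model whose
# predictions can be explained globally, using the open source InterpretML package.
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

# Toy tabular data with named features (assumed for illustration only).
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "age": rng.integers(18, 80, size=500),
    "income": rng.normal(50_000, 15_000, size=500),
})
y = ((X["age"] > 40) & (X["income"] > 50_000)).astype(int)

# An Explainable Boosting Machine is interpretable by design.
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Global explanation: which features drive predictions overall.
global_exp = ebm.explain_global()
print(dict(zip(global_exp.data()["names"], global_exp.data()["scores"])))

In an interactive setting the same explanation object can be rendered as a dashboard, which is the kind of workflow the presentation walks through in more depth.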


Attendees will leave the session with a basic understanding of responsible AI principles, best practices, and open source tools for the responsible development and deployment of AI systems. They will be able to incorporate the introduced tools and products into their machine learning life cycle, running them on previously trained models to understand the factors behind their models’ predictions, assess model fairness across protected attributes, and mitigate any bias they find.
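As a hedged illustration of that workflow, the sketch below uses the open source Fairlearn package to disaggregate a model’s accuracy across a protected attribute and report a demographic parity gap. The toy data, the "sex" column, and the logistic regression standing in for a previously trained model are assumptions made for illustration, not material from the session.

# Minimal sketch (illustrative): assessing fairness of a trained model across a
# protected attribute with the open source Fairlearn package.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Toy data: two numeric features plus a binary protected attribute "sex" (assumed).
rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "feature_a": rng.normal(size=n),
    "feature_b": rng.normal(size=n),
    "sex": rng.integers(0, 2, size=n),
})
y = (X["feature_a"] + 0.5 * X["sex"] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple model stands in for whatever previously trained model attendees bring.
model = LogisticRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# Disaggregate accuracy by the protected attribute to surface performance gaps.
mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=X_test["sex"],
)
print("Accuracy by group:\n", mf.by_group)
print("Demographic parity difference:",
      demographic_parity_difference(y_test, y_pred, sensitive_features=X_test["sex"]))

Large per-group gaps in the disaggregated metrics are the kind of signal that the mitigation techniques covered in the session are designed to address.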

Instructor's Bio

Mehrnoosh Sameki

Senior Technical Program Manager at Microsoft

Mehrnoosh Sameki is a senior technical program manager at Microsoft, responsible for leading the product efforts on machine learning interpretability and fairness within the Azure Machine Learning platform. She earned her PhD degree in computer science at Boston University. She also serves as an adjunct assistant professor at Boston University, offering courses in AI and responsible ML. Previously, she was a data scientist at Rue Gilt Groupe, incorporating data science and machine learning in the retail space to drive revenue and enhance customers’ personalized shopping experiences.


Minsoo Thigpen is a Program Manager on the Responsible AI team at Microsoft, focusing on building out offerings for the OSS Interpretability Toolkit and its integration into the Azure Machine Learning platform. She recently graduated from Microsoft's pilot AI rotation program as one of the first three PMs in its first cohort, working on a variety of ML/AI application projects within Microsoft to accelerate its adoption of the AI-first initiative. She holds Bachelor's degrees in Applied Math and Painting from Brown University and the Rhode Island School of Design (RISD). Coming from an interdisciplinary background with experience building models and applications, analyzing data, and designing UX, she looks to work at the intersection of AI/ML, design, and the social sciences to empower data practitioners to work ethically and responsibly end to end.


Local ODSC chapter in Zurich, Switzerland


Use discount code - Meetup2020 - to get an extra 10% off your pass for Virtual Conference West and Virtual Conference APAC.

Webinar

  • Responsible AI – State of the Art and Future Directions

    • AI+ Training

    • Webinar recording

    • AI+ Subscription Plans