Course curriculum

  • 1. Trustworthy AI

  • 2. Open-source Best Practices in Responsible AI

  • 3. ImageNet and its Discontents: The Case for Responsible Interpretation in ML

  • 4. Open-source Tools for Synthetic Data On-Demand

Each module pairs an "Abstract and Bio" page with the lecture itself.

Abstract and Speaker

Trustworthy AI

Under the umbrella of trustworthy computing, employing formal methods to ensure trust properties such as reliability and security has led to scalable success. Just as for trustworthy computing, formal methods could be an effective approach for building trust in AI-based systems. However, we would need to extend the set of properties to include fairness, robustness, and interpretability, and to develop new verification techniques to handle new kinds of artifacts, e.g., data distributions and machine-learned models. This talk poses a new research agenda, from a formal methods perspective, for increasing trust in AI systems.
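To give a flavour of the kind of verification the talk argues for, here is a minimal sketch (my own illustration, not material from the talk) that uses the z3 SMT solver to check a robustness property of a toy one-feature linear model. The model weights, input point, and perturbation bound are all invented assumptions:

```python
# Minimal robustness check with an SMT solver (illustrative only).
# The toy model, input, and epsilon are invented for this sketch.
from z3 import Reals, Solver, And, sat

x, x_adv = Reals("x x_adv")

w, b = 2.0, -1.0      # toy linear classifier: score(x) = w*x + b
x0, eps = 1.0, 0.25   # input under test and allowed perturbation radius

s = Solver()
s.add(x == x0)
s.add(And(x_adv >= x - eps, x_adv <= x + eps))  # bounded perturbation
# Ask for a counterexample: the original score is positive but some
# perturbed input flips the decision to non-positive.
s.add(w * x + b > 0, w * x_adv + b <= 0)

if s.check() == sat:
    print("Not robust: counterexample", s.model())
else:
    print("Robust: no decision flip within epsilon")
```

If the solver reports that no counterexample exists, then no perturbation within epsilon can flip the decision, which is the kind of guarantee that testing alone cannot provide.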


   Jeannette M. Wing, PhD, Executive Vice President of Research | Professor of Computer Science @ Columbia University

Open-source Best Practices in Responsible AI

We have started a non-profit organisation, the Foundation for Best Practices in Machine Learning. Our goal is to help data scientists, governance experts, managers, and other machine learning professionals implement ethical and responsible machine learning. We do that via our free, open-source technical and organisational Best Practices for Responsible AI, which set out both the technical and the institutional requirements needed to promote responsible ML. Both blueprints touch on subjects such as "Fairness & Non-Discrimination", "Representativeness & Specification", "Product Traceability", and "Explainability", among other topics. Where the organisational guide covers organisation-wide processes and responsibilities (e.g., the necessity of setting proper product definitions and risk portfolios), the model guide details issues ranging from cost function specification and optimisation to selection function characterisation, and from disparate impact metrics to local explanations and counterfactuals. It also addresses thorough product management.
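Among the model-guide topics above, disparate impact metrics lend themselves to a compact illustration. The following sketch is my own (not the Foundation's code) and uses made-up predictions and group labels to compute the disparate impact ratio: the rate of favourable outcomes for an unprivileged group divided by the rate for the privileged group.

```python
# Disparate impact ratio on invented example data (illustrative only).
def disparate_impact(y_pred, group):
    """Ratio of favourable-outcome rates: unprivileged (0) over privileged (1)."""
    def rate(g):
        members = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(members) / len(members)
    return rate(0) / rate(1)

# Made-up binary predictions for eight people, four per group.
preds  = [1, 0, 0, 1, 1, 1, 0, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(disparate_impact(preds, groups))  # 0.5 / 0.75 = 0.667
```

A ratio below roughly 0.8 is often flagged under the "four-fifths rule" commonly cited in disparate impact analysis.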


   Violeta Misheva, PhD, Senior Data Scientist @ ABN AMRO Bank | Vice-Chair @ The Foundation for Best Practices in ML

   Daniel Vale, Legal Counsel for AI & Data Science @ H&M Group

ImageNet and its Discontents: The Case for Responsible Interpretation in ML

Sociotechnical systems abound with examples of the harms they create for historically marginalized groups. In this context, the field of machine learning has seen a rapid proliferation of new methods, model architectures, and optimization techniques. Yet data, which remains the backbone of machine learning research and development, has received comparatively little research attention. My research hypothesis is that focusing exclusively on the content of training datasets, the data from which algorithms "learn" associations, captures only part of the problem. Instead, we should identify the historical and conceptual conditions that underlie how datasets are constructed. In this talk, I propose an analysis of datasets from the perspective of three techniques of interpretation: genealogy, problematization, and hermeneutics.


   Razvan Amironesei, PhD, Applied Data Ethicist | Visiting Researcher @ Google

Open-source Tools for Synthetic Data On-Demand



   Lipika Ramaswamy, Senior Applied Scientist @ Gretel.ai