The COVID-19 pandemic demonstrates the tremendous importance of data for research, cause analysis, government action, and medical progress. However, for understandable data protection reasons, individuals and decision-makers are often reluctant to share personal or sensitive data. To ensure sustainable progress, we need new practices that enable insights from personal data while reliably protecting individuals' privacy.
Pioneered by Microsoft Research and associates, differential privacy is the emerging gold standard for protecting data in applications like preparing and publishing statistical analyses. Differential privacy provides a mathematically measurable privacy guarantee to individuals by adding a carefully tuned amount of statistical noise to sensitive data. It promises significantly higher privacy protection levels than commonly used disclosure limitation practices like data anonymization.
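The core idea of "carefully tuned statistical noise" can be sketched with the classic Laplace mechanism. The snippet below is an illustrative toy, not the SmartNoise API: the function name and the toy dataset are invented for this example, and the noise scale follows the standard sensitivity/epsilon calibration.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a value with Laplace noise scaled to sensitivity / epsilon.

    Smaller epsilon means a stronger privacy guarantee and more noise.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Toy example: privately release a counting query over a small dataset.
# A count changes by at most 1 when one person is added or removed,
# so its sensitivity is 1.
rng = np.random.default_rng(0)
ages = [34, 29, 57, 41, 23, 65, 38]
true_count = sum(1 for a in ages if a >= 40)
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=1.0, rng=rng)
print(f"true count: {true_count}, private release: {private_count:.2f}")
```

Because the noise is centered at zero, repeated releases average out to the true answer; the privacy guarantee comes from the fact that any single individual's presence changes the output distribution only slightly.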
Join our demo-intensive session to learn about:
- What differential privacy is and how it works
- Microsoft's and Harvard's OpenDP initiative and the SmartNoise system
- Using SmartNoise to protect sensitive data against privacy attacks
- How to create differentially private synthetic data using the new SmartNoise synthesizers
- Performing analytics and machine learning (including deep learning) on sensitive data using differential privacy
- The trade-off between privacy guarantees and the accuracy of analytical results
- How to engage in our Early Adopter program
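The privacy/accuracy trade-off mentioned above can be made concrete with a small back-of-the-envelope calculation. This is an illustrative sketch, not SmartNoise code: for the Laplace mechanism with scale sensitivity/epsilon, the expected absolute error of a single noisy answer equals that scale, so halving epsilon (stronger privacy) doubles the expected error.

```python
# Expected absolute error of a Laplace-mechanism release: the mean absolute
# deviation of Laplace noise with scale b is exactly b = sensitivity / epsilon.
def expected_abs_error(sensitivity, epsilon):
    return sensitivity / epsilon

# Stronger privacy (smaller epsilon) => larger expected error, for a
# counting query with sensitivity 1.
for eps in [0.1, 0.5, 1.0, 5.0]:
    print(f"epsilon={eps:>4}: expected |error| = {expected_abs_error(1, eps):.2f}")
```

This is why choosing epsilon is a policy decision as much as a technical one: it directly sets how much accuracy is traded away for the privacy guarantee.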
Andreas Kopp, Digital Advisor for AI Solutions, Microsoft
As a Microsoft Digital Advisor, Andreas Kopp advises enterprise customers on the planning and implementation of digital business solutions. His focus is on applied business AI solutions, including medical imaging and fraud detection. He also specializes in practical solutions for the responsible use of AI systems, including AI interpretability, fairness, and differential privacy.
Principal Program Manager, Microsoft
Sarah’s work focuses on research and emerging technology strategy for AI products in Azure. Sarah works to accelerate the adoption and positive impact of AI by bringing together the latest innovations in research with the best of open source and product expertise to create new tools and technologies.
She is an active member of the Microsoft AETHER committee, where she works to develop and drive company-wide adoption of responsible AI principles, best practices, and technologies. Sarah was one of the founding researchers in the Microsoft FATE research group, and prior to joining Microsoft she worked on AI fairness at Facebook.
Sarah is an active contributor to the open-source ecosystem: she co-founded ONNX, Fairlearn, and OpenDP's SmartNoise, and was a leader in the PyTorch 1.0 and InterpretML projects. She was an early member of the machine learning systems research community and has been active in growing and shaping it, co-founding the MLSys research conference and the Learning Systems workshops. She has a Ph.D. in computer science from UC Berkeley, advised by Dave Patterson, Krste Asanovic, and Burton Smith.
Lucas Rosenblatt, ML Engineer (MAIDAP) at Microsoft
Lucas Rosenblatt is a Machine Learning Engineer and Researcher with Microsoft's AI Development Acceleration Program (MAIDAP). He works in responsible AI, focusing on privacy and fairness, and has contributed differentially private synthesizers to the SmartNoise toolkit. You can read about some of his work in a paper he and the team published on the subject: Differentially Private Synthetic Data: Applied Evaluations and Enhancements (arxiv.org). He graduated from Brown University in 2019 with Honors.
Use discount code WEBINAR2021 to get your Virtual ODSC East 2021 pass with an additional 10% OFF