2022 ACSW – Opening Plenary Session
Mathematical Reasoning for Explainable Data Privacy
School of Computing, Macquarie University, Australia. Email: email@example.com
The problem of data privacy — protecting sensitive or personal information from discovery — is a long-standing research issue, and is now also a matter of general public concern following several well-publicised privacy breaches. Organisations such as Google and Apple now protect customer data using a technique called differential privacy, originally designed for statistical datasets and considered by many in the research community to be a gold standard for privacy. Despite its popularity and widespread use, a number of questions about differential privacy’s explainability and application continue to be debated, including: What sort of protection does differential privacy provide against inference attacks? How can noise-adding mechanisms guarantee the release of useful information? And how can this privacy–utility balance be achieved?
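To make the idea of a noise-adding mechanism concrete, the sketch below implements the standard Laplace mechanism from the differential privacy literature. This is an illustrative example only, not a mechanism discussed in the talk itself; the function name and parameter choices are mine.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    the classic noise-adding mechanism for differential privacy.
    Smaller epsilon means stronger privacy but noisier (less useful) output,
    which is exactly the privacy-utility balance at issue."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse-transform sampling:
    # u is uniform on (-0.5, 0.5); sign(u) * -scale * ln(1 - 2|u|)
    # gives a two-sided exponential (Laplace) sample.
    u = random.random() - 0.5
    noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_value + noise
```

For example, releasing a count query with sensitivity 1 at epsilon = 1 adds noise with standard deviation about 1.41; halving epsilon doubles the noise scale.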
In this talk, I will give an overview of one approach to understanding these questions using the framework of Quantitative Information Flow (QIF). Designed for probabilistic systems, QIF is an algebraic framework for reasoning about adversarial threats modelled as inference attacks. QIF can naturally be extended to model privacy-preserving data releases and provides a unified framework for answering questions about privacy and utility. This approach leads to explainable models for risk assessment in private data releases.
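As a flavour of the kind of calculation QIF supports, the sketch below computes prior and posterior Bayes vulnerability for a small probabilistic channel, using the standard textbook definitions (the adversary's best single-guess success probability before and after observing an output). The channel numbers are hypothetical, chosen only to show a partial leak.

```python
def prior_vulnerability(prior):
    # Bayes vulnerability of the prior: the adversary's best chance
    # of guessing the secret before seeing any system output.
    return max(prior)

def posterior_vulnerability(prior, channel):
    # Expected Bayes vulnerability after observing the channel output:
    # for each output y, the adversary guesses the secret x maximising
    # the joint probability prior[x] * channel[x][y].
    n_outputs = len(channel[0])
    return sum(
        max(prior[x] * channel[x][y] for x in range(len(prior)))
        for y in range(n_outputs)
    )

# Hypothetical example: two equally likely secrets, and a channel
# (rows are P(y | x)) that reveals the secret with probability 0.75.
prior = [0.5, 0.5]
channel = [[0.75, 0.25],
           [0.25, 0.75]]

# Multiplicative Bayes leakage: how much the observation improves
# the adversary's guessing chances.
leakage = posterior_vulnerability(prior, channel) / prior_vulnerability(prior)
```

Here the observation raises the adversary's success probability from 0.5 to 0.75, a multiplicative leakage of 1.5; the same machinery, applied to the channel induced by a privacy mechanism, is what lets QIF quantify inference-attack risk and utility in one framework.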
Natasha is a postdoctoral researcher in cybersecurity at Macquarie University. Her research interests include differential privacy, privacy-preserving natural language processing, and quantitative information flow for security and privacy. Natasha received her undergraduate degree in Pure Mathematics and Computer Science from Sydney University, and recently completed a cotutelle PhD in Computing with Macquarie University and École Polytechnique in France. Her PhD focussed on information flow techniques for analysing differential privacy guarantees, and introduced new methods for reasoning about privacy for natural language processing using a metric-based version of differential privacy. Natasha’s current work is on metric-based reasoning for privacy-preserving natural language processing and machine learning.