In this meetup, we’ll make a connection between two important ideas in the Privacy Enhancing Technologies ecosystem – Homomorphic Encryption and Differential Privacy. While Homomorphic Encryption ensures that sensitive data is not exposed during computation, Differential Privacy guarantees that each data subject can maintain their privacy when we share the result of that computation.
During this talk, we’ll look at noise growth in Homomorphic Encryption (HE), and investigate the possibility that this inherent noise can give Differential Privacy (DP) “for free”. We will recap what we mean by HE, noise, and DP, before examining new results on the DP guarantees of the Approximate HE setting. We’ll finish by applying our results to a case study: Ridge Regression Training via Gradient Descent.
Tabitha is a PhD student in the Information Security Group at Royal Holloway, University of London, and has just completed a year-long internship at Intel as part of the Security and Privacy Research Group within Intel Labs. Her research area is Privacy Enhancing Technologies and Privacy Preserving Machine Learning, with a focus on Homomorphic Encryption.
The newsletter where we post community announcements: https://fheorg.substack.com/
The Discord server where you can discuss FHE-related topics with the community: https://discord.fhe.org
Make sure to join either (or both) of these to stay informed about future events!