
Paper 2022/1250

Eureka: A General Framework for Black-box Differential Privacy Estimators

Yun Lu, University of Victoria
Malik Magdon-Ismail, Rensselaer Polytechnic Institute
Yu Wei, Purdue University West Lafayette
Vassilis Zikas, Purdue University West Lafayette
Abstract

Differential privacy (DP) is a key tool in privacy-preserving data analysis. Yet it remains challenging for non-privacy-experts to prove the DP of their algorithms. We propose a methodology for domain experts with limited data privacy background to empirically estimate the privacy of an arbitrary mechanism. Our Eureka moment is a new link---which we prove---between the problems of DP parameter-estimation and Bayes optimal classifiers in ML, which we believe can be of independent interest. Our estimator uses this link to achieve two desirable properties: (1) black-box, i.e., it does not require knowledge of the underlying mechanism, and (2) it has a theoretically-proven accuracy, depending on the underlying classifier used, allowing plug-and-play use of different classifiers. More concretely, motivated by the impossibility of the above task for unrestricted input domains (which we prove), we introduce a natural, application-inspired relaxation of DP which we term relative DP. Intuitively, relative DP defines a mechanism's privacy relative to an input set $T$, circumventing the above impossibility when $T$ is finite. Importantly, it preserves the key intuitive privacy guarantee of DP while enjoying a number of desirable DP properties---scalability, composition, and robustness to post-processing. We then devise a black-box poly-time $(\epsilon,\delta)$-relative DP estimator for any poly-size $T$---the first privacy estimator to support mechanisms with large output spaces while having tight accuracy bounds. As a result of independent interest, we generalize our theory to develop the first Distributional Differential Privacy (DDP) estimator. We benchmark our estimator in a proof-of-concept implementation. 
First, using kNN as the classifier we show that our method (1) produces a tight, analytically computed $(\epsilon, \delta)$-DP trade-off of low-dimensional Laplace and Gaussian mechanisms---the first to do so, (2) accurately estimates the privacy spectrum of DDP mechanisms, and (3) can verify implementations of DP mechanisms, e.g., the Sparse Vector Technique, Noisy Histogram, and Noisy Max. Our implementation and experiments demonstrate the potential of our framework and highlight its computational bottlenecks in estimating DP, e.g., in terms of the size of $\delta$ and the data dimensionality. Our second, neural-network-based instantiation takes a first step toward showing that our method extends to mechanisms with high-dimensional outputs.
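To illustrate the classifier-to-DP link the abstract describes, the sketch below audits a one-dimensional Laplace mechanism with a from-scratch kNN distinguisher. It relies on the standard hypothesis-testing form of such a link---for an $(\epsilon,\delta)$-DP mechanism, any classifier distinguishing outputs on two adjacent inputs has accuracy at most $(e^\epsilon + \delta)/(1 + e^\epsilon)$, so (taking $\delta = 0$) the empirical accuracy yields the lower-bound estimate $\hat\epsilon = \ln(\mathrm{acc}/(1-\mathrm{acc}))$. This is our hedged illustration, not the paper's estimator; all function and parameter names here are ours.

```python
# Illustrative sketch (not the paper's actual algorithm): estimate a lower
# bound on epsilon for the Laplace mechanism from a kNN distinguisher's
# accuracy on outputs for two adjacent inputs.
import math
import random

random.seed(0)

def laplace_mech(true_value, scale):
    """Laplace mechanism: true_value + Lap(scale), via inverse-CDF sampling."""
    u = random.random() - 0.5
    return true_value - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def knn_predict(train, query, k):
    """1-D kNN majority vote; train is a list of (output, label) pairs."""
    nearest = sorted(train, key=lambda vl: abs(vl[0] - query))[:k]
    return 1 if sum(label for _, label in nearest) * 2 > k else 0

# Adjacent databases whose query answers differ by the sensitivity (here 1),
# so the mechanism is (1/scale)-DP; we expect eps_hat to come out near 1.
eps_true, scale, n, k = 1.0, 1.0, 1000, 25
samples = [(laplace_mech(b, scale), b) for b in (0, 1) for _ in range(n)]
train, test = samples[::2], samples[1::2]

acc = sum(knn_predict(train, x, k) == y for x, y in test) / len(test)
# Accuracy of any distinguisher is at most (e^eps + delta)/(1 + e^eps);
# inverting with delta = 0 gives an empirical lower-bound estimate of eps.
eps_hat = math.log(acc / (1 - acc))
print(f"accuracy={acc:.3f}  eps_hat={eps_hat:.3f}  (true eps={eps_true})")
```

With these settings the Bayes-optimal accuracy is roughly 0.70, so the kNN estimate lands somewhat below the true $\epsilon = 1$; this gap between the threshold-optimal accuracy and the worst-case event is exactly why estimators with proven accuracy bounds, rather than ad hoc distinguishers, are needed.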

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Published elsewhere. 45th IEEE Symposium on Security and Privacy
DOI
https://doi.ieeecomputersociety.org/10.1109/SP54263.2024.00166
Keywords
differential privacy, distributional differential privacy, privacy estimator, classification algorithms
Contact author(s)
yunlu @ uvic ca
magdon @ cs rpi edu
yuwei @ purdue edu
vzikas @ purdue edu
History
2024-05-29: last of 5 revisions
2022-09-20: received
See all versions
Short URL
https://ia.cr/2022/1250
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2022/1250,
      author = {Yun Lu and Malik Magdon-Ismail and Yu Wei and Vassilis Zikas},
      title = {Eureka: A General Framework for Black-box Differential Privacy Estimators},
      howpublished = {Cryptology {ePrint} Archive, Paper 2022/1250},
      year = {2022},
      doi = {https://doi.ieeecomputersociety.org/10.1109/SP54263.2024.00166},
      url = {https://eprint.iacr.org/2022/1250}
}