Privacy-enhanced generative adversarial network with adaptive noise allocation

Published: 19 July 2023

Abstract

Generative adversarial networks (GANs) have become hugely popular by virtue of their impressive ability to generate realistic samples. Although GANs alleviate the arduous data-collection problem, their complex model structure makes them prone to memorizing training samples. Thus, GANs may not provide sufficient privacy guarantees, and there is a considerable risk of inadvertently divulging private data. To alleviate this issue, we design a privacy-enhanced GAN based on differential privacy. We first integrate the truncated concentrated differential privacy technique into the GAN to mitigate privacy leakage under a tighter privacy bound. Then, according to the differing privacy demands of users in real-world scenarios, we design two adaptive noise allocation strategies that dynamically inject noise into the gradients at each iteration. These strategies provide an intuitive handle for selecting a suitable allocation and striking an elegant compromise between privacy and utility in distinct scenarios. Furthermore, we offer rigorous analyses from the perspectives of privacy preservation and privacy defense to demonstrate that our algorithm fulfills differential privacy guarantees. Extensive experiments on real-world datasets show that our algorithm can generate high-quality samples while achieving an excellent trade-off between model performance and privacy guarantees.
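The adaptive noise allocation described in the abstract acts on the discriminator's gradients at every training iteration. The sketch below is an illustration only: a simplified, differentially private discriminator update in PyTorch, where gradients are clipped to bound their sensitivity and then perturbed with Gaussian noise whose scale follows a hypothetical decaying schedule. The schedule, the clipping bound, and all function names are assumptions standing in for the paper's two allocation strategies, which are not reproduced here; a faithful implementation would additionally use per-sample gradient clipping and account for the privacy budget under truncated concentrated differential privacy.

import torch
import torch.nn as nn

# Hypothetical noise schedule: a stand-in for the paper's adaptive
# allocation strategies (more noise early in training, less later).
def noise_multiplier(t, sigma0=1.0, decay=0.999):
    return sigma0 * (decay ** t)

def private_discriminator_step(disc, opt, real, fake, t, clip_norm=1.0):
    """One discriminator update with clipped, noise-perturbed gradients.
    Simplified sketch: batch-level clipping only; a rigorous DP guarantee
    needs per-sample clipping and tCDP accounting, both omitted here."""
    criterion = nn.BCELoss()
    opt.zero_grad()
    loss = (criterion(disc(real), torch.ones(real.size(0), 1)) +
            criterion(disc(fake.detach()), torch.zeros(fake.size(0), 1)))
    loss.backward()
    # Bound gradient sensitivity by clipping the global gradient norm.
    torch.nn.utils.clip_grad_norm_(disc.parameters(), clip_norm)
    # Inject Gaussian noise whose scale is re-allocated at each iteration.
    sigma = noise_multiplier(t)
    for p in disc.parameters():
        if p.grad is not None:
            p.grad += torch.randn_like(p.grad) * sigma * clip_norm / real.size(0)
    opt.step()
    return loss.item()

if __name__ == "__main__":
    # Toy discriminator and random "images" purely to exercise the step.
    disc = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2),
                         nn.Linear(128, 1), nn.Sigmoid())
    opt = torch.optim.SGD(disc.parameters(), lr=0.05)
    for t in range(3):
        real, fake = torch.rand(64, 784), torch.rand(64, 784)
        print(f"iter {t}: loss = {private_discriminator_step(disc, opt, real, fake, t):.4f}")

In a full GAN training loop the generator update would remain unperturbed, since only the discriminator touches real training data; this matches the gradient-perturbation setting the abstract describes, but the concrete choices above are illustrative rather than the authors' method.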



Published In

Knowledge-Based Systems, Volume 272, Issue C (July 2023), 308 pages

Publisher

Elsevier Science Publishers B. V., Netherlands

Author Tags

  1. Generative adversarial network
  2. Privacy guarantees
  3. Differential privacy
  4. Adaptive noise allocation

Qualifiers

  • Research-article
