Abstract
Datasets containing sensitive information cannot be shared publicly, since their release exposes individuals to several types of privacy attacks. The data perturbation approach preserves privacy by adding random noise, but at the cost of distorting useful data. Studying and optimizing the privacy-utility tradeoff remains a challenge, especially when the statistical distributions of the data are unknown. This study introduces a novel information-theoretic framework for studying the privacy-utility tradeoff that is suitable for multivariate data and for cases with unknown statistical distributions. We take an information-theoretic approach, quantifying privacy leakage as the mutual information between sensitive data and released data. At the core of the privacy-preserving framework lies a variational Bayesian fuzzy model that approximates the uncertain mapping between the released noise-added data and the private data; this model is employed for a variational approximation of informational privacy. The suggested privacy-preserving framework consists of three components: 1) an optimal noise adding mechanism; 2) modeling of the uncertain mapping between released noise-added data and private data; and 3) variational approximation of informational privacy.
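To make the privacy-utility tradeoff concrete, the following is a minimal illustrative sketch, not the paper's variational method: for scalar Gaussian sensitive data released with additive Gaussian noise, the mutual-information leakage has a closed form, while utility loss can be measured as mean-squared distortion. The function names and the Gaussian assumption are illustrative choices, not taken from the paper.

```python
import numpy as np

# For sensitive x ~ N(0, var_x) released as y = x + n, n ~ N(0, noise_var),
# the privacy leakage I(x; y) = 0.5 * log(1 + var_x / noise_var) (in nats),
# while the distortion of the released data is E[(y - x)^2] = noise_var.
# Increasing the noise variance lowers leakage (more privacy) but raises
# distortion (less utility) -- the tradeoff the framework optimizes.

def privacy_leakage(var_x, noise_var):
    """Mutual information I(x; y) in nats for the Gaussian case."""
    return 0.5 * np.log(1.0 + var_x / noise_var)

def utility_distortion(noise_var):
    """Mean-squared distortion of the released data."""
    return noise_var

var_x = 1.0
for noise_var in (0.1, 1.0, 10.0):
    print(f"noise var {noise_var:5.1f}: "
          f"leakage {privacy_leakage(var_x, noise_var):.3f} nats, "
          f"distortion {utility_distortion(noise_var):.1f}")
```

For general multivariate data with unknown distributions no such closed form exists, which is why the paper approximates the leakage variationally instead.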
The research reported in this paper has been supported by the Austrian Research Promotion Agency (FFG) Grant 873979 “Privacy Preserving Machine Learning for Industrial Applications”, EU Horizon 2020 Grant 826278 “Securing Medical Data in Smart Patient-Centric Healthcare Systems” (Serums), and the Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry for Digital and Economic Affairs, and the Province of Upper Austria in the frame of the COMET center SCCH.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Kumar, M., Brunner, D., Moser, B.A., Freudenthaler, B. (2020). Variational Optimization of Informational Privacy. In: Kotsis, G., et al. Database and Expert Systems Applications. DEXA 2020. Communications in Computer and Information Science, vol 1285. Springer, Cham. https://doi.org/10.1007/978-3-030-59028-4_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-59027-7
Online ISBN: 978-3-030-59028-4
eBook Packages: Computer Science, Computer Science (R0)