
When Machine Learning Meets Privacy: A Survey and Outlook

Published: 05 March 2021

Abstract

Newly emerged machine learning methods (e.g., deep learning) have become a strong driving force revolutionizing a wide range of industries, such as smart healthcare, financial technology, and surveillance systems. Meanwhile, privacy has become a major concern in this era of machine learning-based artificial intelligence. It is important to note that privacy preservation in the context of machine learning differs substantially from traditional data privacy protection, as machine learning can act as both friend and foe. Research on privacy preservation in machine learning is still in its infancy, as most existing solutions focus only on privacy problems arising during the machine learning process. A comprehensive study of privacy preservation problems and machine learning is therefore required. This article surveys the state of the art in privacy issues and solutions for machine learning. The survey covers three categories of interactions between privacy and machine learning: (i) private machine learning, (ii) machine learning-aided privacy protection, and (iii) machine learning-based privacy attacks and the corresponding protection schemes. The current research progress in each category is reviewed and the key challenges are identified. Finally, based on our in-depth analysis of the area of privacy and machine learning, we point out future research directions in this field.

Published In

ACM Computing Surveys, Volume 54, Issue 2
March 2022
800 pages
ISSN:0360-0300
EISSN:1557-7341
DOI:10.1145/3450359
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 March 2021
Accepted: 01 November 2020
Revised: 01 November 2020
Received: 01 March 2019
Published in CSUR Volume 54, Issue 2

Author Tags

  1. Machine learning
  2. deep learning
  3. differential privacy
  4. privacy

Qualifiers

  • Research-article
  • Research
  • Refereed

Article Metrics

  • Downloads (Last 12 months)2,400
  • Downloads (Last 6 weeks)163
Reflects downloads up to 30 Aug 2024

Cited By

  • (2024) Soluções para Dados Heterogêneos em Aprendizado Federado através de Similaridade de Modelos e Agrupamento de Clientes. Revista Eletrônica de Iniciação Científica em Computação 22, 1, 61-70. DOI: 10.5753/reic.2024.4649. Online publication date: 28-Jun-2024.
  • (2024) Futuristic Chatbots. In Design and Development of Emerging Chatbot Technology, 317-345. DOI: 10.4018/979-8-3693-1830-0.ch018. Online publication date: 15-Mar-2024.
  • (2024) Dicing with data: the risks, benefits, tensions and tech of health data in the iToBoS project. Frontiers in Digital Health 6. DOI: 10.3389/fdgth.2024.1272709. Online publication date: 31-Jan-2024.
  • (2024) Smart connected farms and networked farmers to improve crop production, sustainability and profitability. Frontiers in Agronomy 6. DOI: 10.3389/fagro.2024.1410829. Online publication date: 8-Aug-2024.
  • (2024) Communicating the cultural other: trust and bias in generative AI and large language models. Applied Linguistics Review. DOI: 10.1515/applirev-2024-0196. Online publication date: 28-Jun-2024.
  • (2024) Where you go is who you are: a study on machine learning based semantic privacy attacks. Journal of Big Data 11, 1. DOI: 10.1186/s40537-024-00888-8. Online publication date: 12-Mar-2024.
  • (2024) From data to insights: the application and challenges of knowledge graphs in intelligent audit. Journal of Cloud Computing: Advances, Systems and Applications 13, 1. DOI: 10.1186/s13677-024-00674-0. Online publication date: 29-May-2024.
  • (2024) Privacy-preserving federated learning based on partial low-quality data. Journal of Cloud Computing: Advances, Systems and Applications 13, 1. DOI: 10.1186/s13677-024-00618-8. Online publication date: 18-Mar-2024.
  • (2024) Small-sample damage detection of bleacher structure based on GAN and MSS-CNN models. Structural Health Monitoring. DOI: 10.1177/14759217241252756. Online publication date: 21-May-2024.
  • (2024) A Systematic Review of Contemporary Applications of Privacy-Aware Graph Neural Networks in Smart Cities. In Proceedings of the 19th International Conference on Availability, Reliability and Security, 1-10. DOI: 10.1145/3664476.3669980. Online publication date: 30-Jul-2024.
