Abstract
Membership inference attack (MIA) aims to infer whether a given data sample belongs to the target model's training dataset, which poses a severe privacy risk in data-sensitive fields such as the military, national security, and enterprise. Observing that models obtained through adversarial training are more vulnerable to MIA, Song et al. recently proposed a novel attack method based on confidence thresholding. However, deploying such an attack in real-world application scenarios is not straightforward, since it requires shadow training and additional assumptions. To address these issues, in this paper we propose an improved confidence-thresholding method with a relaxed assumption, which uses the model's prediction accuracy as the threshold. Our attack can be mounted without shadow training or an additional dataset: instead of collecting auxiliary data, the attacker applies the attack directly to the target records whose membership is to be inferred. As a result, our attack against robust models achieves substantially higher recall with only a small loss in accuracy and precision. Extensive experiments on real-world data, i.e., the Yale Face dataset, show that the proposed attack is effective and feasible.
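The decision rule behind a confidence-thresholding MIA can be stated compactly. Below is a minimal Python sketch, assuming the attacker can query the target model for its confidence on each record's true class; all names and values are illustrative, and the threshold here merely stands in for the prediction-accuracy-based threshold the paper proposes, not the authors' exact procedure.

```python
import numpy as np

def confidence_threshold_mia(confidences: np.ndarray, tau: float) -> np.ndarray:
    """Label each record a member (1) of the training set when the target
    model's confidence on the record's true class reaches the threshold."""
    return (confidences >= tau).astype(int)

# Hypothetical inputs, not from the paper's experiments: `confidences` holds
# the target model's softmax output on the ground-truth label of each record
# whose membership is to be inferred, and `tau` stands in for a threshold
# derived from the model's prediction accuracy rather than shadow training.
confidences = np.array([0.97, 0.42, 0.88, 0.15, 0.73])
tau = 0.70
print(confidence_threshold_mia(confidences, tau))  # -> [1 0 1 0 1]
```

The appeal of this rule is that it needs no auxiliary dataset: records scored above the threshold are flagged as training members directly, which is consistent with the relaxed-assumption setting the abstract describes.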
References
Biggio, B., Nelson, B., Laskov, P.: Poisoning attacks against support vector machines. In: International Conference on Machine Learning (ICML), pp. 1467–1474 (2012)
Chen, X., Li, J., Ma, J., Tang, Q., Lou, W.: New algorithms for secure outsourcing of modular exponentiations. In: Foresti, S., Yung, M., Martinelli, F. (eds.) ESORICS 2012. LNCS, vol. 7459, pp. 541–556. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33167-1_31
Chen, X., Huang, X., Li, J., Ma, J., Lou, W., Wong, D.S.: New algorithms for secure outsourcing of large-scale systems of linear equations. IEEE Trans. Inf. Forensics Secur. 10(1), 69–78 (2015)
Ciolacu, M., Tehrani, A.F., Beer, R., Popp, H.: Education 4.0 – fostering student's performance with machine learning methods. In: Proceedings of the 2017 IEEE 23rd International Symposium for Design and Technology in Electronic Packaging (SIITME), pp. 438–443 (2017)
Elsayed, G., et al.: Adversarial examples that fool both computer vision and time-limited humans. In: Proceedings of the Advances in Neural Information Processing Systems, pp. 3910–3920 (2018)
Fredrikson, M., Jha, S., Ristenpart, T.: Model inversion attacks that exploit confidence information and basic countermeasures. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1322–1333 (2015)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (ICLR) (2015)
Guo, H., Tang, R., Ye, Y., Li, Z., He, X.: DeepFM: a factorization-machine based neural network for CTR prediction. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), pp. 1725–1731 (2017)
Isele, D., Cosgun, A., Subramanian, K., Fujimura, K.: Navigating intersections with autonomous vehicles using deep reinforcement learning. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 2034–2039 (2018)
Leino, K., Fredrikson, M.: Stolen memories: leveraging model memorization for calibrated white-box membership inference. arXiv preprint arXiv:1906.11798 (2019)
Long, Y., Bindschaedler, V., Gunter, C.A.: Towards measuring membership privacy. arXiv preprint arXiv:1712.09136 (2017)
Long, Y., et al.: Understanding membership inferences on well-generalized learning models. arXiv preprint arXiv:1802.04889 (2018)
Mao, Y., Zhu, X., Zheng, W., Yuan, D., Ma, J.: A novel user membership leakage attack in collaborative deep learning. In: Proceedings of the 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), pp. 1–6 (2019)
Melis, L., Song, C., De Cristofaro, E., Shmatikov, V.: Exploiting unintended feature leakage in collaborative learning. In: Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), pp. 691–706 (2019)
Nasr, M., Shokri, R., Houmansadr, A.: Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning. In: Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), pp. 739–753 (2019)
Rahman, M.A., Rahman, T., Laganière, R., Mohammed, N., Wang, Y.: Membership inference attack against differentially private deep learning model. Trans. Data Priv. 11(1), 61–79 (2018)
Ravì, D., et al.: Deep learning for health informatics. IEEE J. Biomed. Health Inform. 21(1), 4–21 (2017)
Salem, A., Zhang, Y., Humbert, M., Berrang, P., Fritz, M., Backes, M.: ML-Leaks: model and data independent membership inference attacks and defenses on machine learning models. In: 26th Annual Network and Distributed System Security Symposium (2019)
Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against machine learning models. In: Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), pp. 3–18 (2017)
Song, L., Shokri, R., Mittal, P.: Privacy risks of securing machine learning models against adversarial examples. In: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS), pp. 241–257 (2019)
Song, C., Ristenpart, T., Shmatikov, V.: Machine learning models that remember too much. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS), pp. 587–601 (2017)
Shan, Y., Hoens, T.R., Jiao, J., Wang, H., Yu, D., Mao, J.C.: Deep crossing: web-scale modeling without manually crafted combinatorial features. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 255–262 (2016)
Shafahi, A., et al.: Poison frogs! targeted clean-label poisoning attacks on neural networks. In: Proceedings of the Advances in Neural Information Processing Systems, pp. 6103–6113 (2018)
Wang, Z., Song, M., Zhang, Z., Song, Y., Wang, Q., Qi, H.: Beyond inferring class representatives: user-level privacy leakage from federated learning. In: Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications, pp. 2512–2520 (2019)
Yang, C., Wu, Q., Li, H., Chen, Y.: Generative poisoning attack method against neural networks. arXiv preprint arXiv:1703.01340 (2017)
Yeom, S., Giacomelli, I., Fredrikson, M., Jha, S.: Privacy risk in machine learning: analyzing the connection to overfitting. In: Proceedings of the 2018 IEEE 31st Computer Security Foundations Symposium (CSF), pp. 268–282 (2018)
Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: attacks and defenses for deep learning. IEEE Trans. Neural Netw. Learn. Syst. 30(9), 2805–2824 (2019)
Zhang, W., Du, T., Wang, J.: Deep learning over multi-field categorical data. In: Ferro, N., et al. (eds.) ECIR 2016. LNCS, vol. 9626, pp. 45–57. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-30671-1_4
Zhang, X., Chen, X., Liu, J., Xiang, Y.: DeepPAR and DeepDPA: privacy-preserving and asynchronous deep learning for industrial IoT. IEEE Trans. Ind. Inf. 16(3), 2081–2090 (2020)
Zhang, X., Jiang, T., Li, K.C., Castiglione, A., Chen, X.: New publicly verifiable computation for batch matrix multiplication. Inf. Sci. 479, 664–678 (2019)
Acknowledgment
The authors would like to acknowledge the support of the National Natural Science Foundation of China (No. 61902315).
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Wang, L., Zhang, X., Xie, Y., Ma, X., Miao, M. (2020). A High-Recall Membership Inference Attack Based on Confidence-Thresholding Method with Relaxed Assumption. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science, vol. 12487. Springer, Cham. https://doi.org/10.1007/978-3-030-62460-6_47
DOI: https://doi.org/10.1007/978-3-030-62460-6_47
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-62459-0
Online ISBN: 978-3-030-62460-6
eBook Packages: Computer Science, Computer Science (R0)