Abstract
With the rapid development of Graph Neural Network (GNN) technologies, ensuring their robustness is crucial for their adoption in practical applications. Although various methods for verifying and robustly training GNNs have been proposed, studies show that Graph Convolutional Networks (GCNs) remain vulnerable to adversarial attacks on both graph structure and node attributes. We propose a novel approach to verifying the robustness of GCNs against perturbations of node attributes, employing a dual approximation technique to convexify nonlinear activation functions. This transformation turns the original non-convex problem into a more tractable convex one. We first apply linear relaxation to convert the fixed-value features in each GCN layer into variables suitable for optimization. We then reformulate the task of identifying the worst-case margin of a target node as a linear program, which we solve with standard linear programming techniques. Because graph data are discrete, we define a perturbation space that extends the data domain from discrete to continuous values, and we use a dual approximation algorithm to tighten the bounds on the optimizable variables, improving the precision of the convex relaxation. Our method certifies the robustness of nodes against perturbations within a specified budget and, unlike most prior work, is specifically tailored to S-shaped (sigmoid-like) activation functions. Experimental results show that it significantly improves the precision of robustness verification for GCNs compared with previous approaches.
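To make the two optimization steps in the abstract concrete, the sketch below shows, in Python, (1) sound linear upper and lower bounds for a sigmoid activation on a pre-activation interval [l, u], in the spirit of CROWN-style relaxations of S-shaped functions, and (2) a worst-case classification margin solved exactly as a linear program for a toy one-layer model. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy weights, perturbation budget, class indices, and all function names are illustrative.

```python
# A minimal sketch, not the authors' implementation: sound linear bounds for
# an S-shaped activation (sigmoid) on [l, u], plus a worst-case-margin LP for
# a toy one-layer model. All concrete numbers here are illustrative.
import numpy as np
from scipy.optimize import linprog


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def dsigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)


def sigmoid_linear_bounds(l, u):
    """Return ((a_lo, b_lo), (a_up, b_up)) such that
    a_lo*x + b_lo <= sigmoid(x) <= a_up*x + b_up for all x in [l, u]."""
    chord_a = (sigmoid(u) - sigmoid(l)) / (u - l)
    chord = (chord_a, sigmoid(l) - chord_a * l)
    tangent = lambda d: (dsigmoid(d), sigmoid(d) - dsigmoid(d) * d)
    if u <= 0:            # sigmoid is convex here: tangent below, chord above
        return tangent(0.5 * (l + u)), chord
    if l >= 0:            # sigmoid is concave here: chord below, tangent above
        return chord, tangent(0.5 * (l + u))

    def bisect(f, lo, hi):  # root of a monotonically increasing f on [lo, hi]
        for _ in range(60):
            m = 0.5 * (lo + hi)
            lo, hi = (m, hi) if f(m) < 0 else (lo, m)
        return 0.5 * (lo + hi)

    # Mixed case l < 0 < u: use the tangent line that passes through the
    # opposite endpoint; when no such tangent exists, the chord is sound.
    t_up = lambda d: sigmoid(d) + dsigmoid(d) * (l - d) - sigmoid(l)
    t_lo = lambda d: sigmoid(d) + dsigmoid(d) * (u - d) - sigmoid(u)
    upper = tangent(bisect(t_up, 0.0, u)) if t_up(u) >= 0 else chord
    lower = tangent(bisect(t_lo, l, 0.0)) if t_lo(l) <= 0 else chord
    return lower, upper


# Worst-case margin of a toy linear classifier under an l_inf perturbation
# of the (relaxed, continuous-valued) node features, solved exactly as an LP.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))                  # 3 classes, 4 features (toy)
x0 = rng.uniform(size=4)                     # clean node features in [0, 1]
eps, y, k = 0.1, 0, 1                        # budget, true class, rival class
box = list(zip(np.maximum(x0 - eps, 0.0),    # perturbation space: a box
               np.minimum(x0 + eps, 1.0)))
res = linprog(c=W[y] - W[k], bounds=box, method="highs")
print("worst-case margin:", res.fun)         # > 0 certifies y beats k
```

In a full multi-layer certificate of the kind the abstract describes, the bound pairs returned by sigmoid_linear_bounds would become linear constraints of the margin LP, one pair per hidden unit, and the intervals [l, u] themselves would come from the bound-tightening dual approximation step.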
Acknowledgment
This work was sponsored in part by the National Key Research and Development Program of China under Grant Nos. 2022YFB4501704 and 2022YFC3302600; in part by the National Natural Science Foundation Youth Fund under Grant No. 62302308; in part by the National Natural Science Foundation of China under Grant Nos. 61972150, 62132014, 62302308, U2142206, 62372300, and 61702333; in part by the Shanghai Engineering Research Center of Intelligent Education and Big Data; and in part by the Research Base of Online Education for Shanghai Middle and Primary Schools.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
An, D. et al. (2024). Graph Convolutional Network Robustness Verification Algorithm Based on Dual Approximation. In: Ogata, K., Mery, D., Sun, M., Liu, S. (eds) Formal Methods and Software Engineering. ICFEM 2024. Lecture Notes in Computer Science, vol 15394. Springer, Singapore. https://doi.org/10.1007/978-981-96-0617-7_9
Publisher Name: Springer, Singapore
Print ISBN: 978-981-96-0616-0
Online ISBN: 978-981-96-0617-7