Abstract
The sensitivity of Graph Neural Networks (GNNs) to their input graph data has drawn increasing attention to adversarial graphs. Given the widespread use of GNNs across graph tasks, studying the principles and implementation of graph adversarial attacks is essential for understanding GNN robustness. Previous work reduces the prediction accuracy of GNNs by adding small perturbations to the graph structure or node features, but these methods typically constrain the perturbation to a single, fixed budget shared by all targets. In downstream node classification, however, different nodes require different perturbation strengths to be misclassified, so domain knowledge about nodes and edges should inform the perturbation vector. To address this, we propose DK-AdvGraph, a special adversarial graph whose perturbation vector is carefully tailored in a highly restricted black-box setting. To further confuse GNNs, we additionally enforce higher similarity between nodes after perturbation. Extensive experimental results demonstrate that DK-AdvGraph has practical value in pushing GNN research to account for graph domain knowledge.
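To make the abstract's two key ideas concrete, here is a minimal, hypothetical sketch. It is not the paper's DK-AdvGraph algorithm: it only illustrates (a) assigning each node its own perturbation budget from simple structural "domain knowledge" (node degree as a stand-in) and (b) preferring binary feature flips that keep the perturbed node similar to its neighbourhood. All function names (per_node_budgets, perturb_node) and heuristics are assumptions for illustration; a real black-box attack would additionally query the target model or a surrogate to drive misclassification.

```python
# Illustrative sketch only -- NOT the paper's DK-AdvGraph method.
import numpy as np

def per_node_budgets(adj, base_budget=2, scale=1.0):
    """Assumed heuristic: low-degree nodes get larger budgets, since
    weakly connected nodes plausibly need stronger perturbations."""
    degrees = adj.sum(axis=1)
    return np.maximum(base_budget,
                      np.round(scale * degrees.max() / (degrees + 1))).astype(int)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def perturb_node(x, neighbour_mean, budget):
    """Flip up to `budget` binary features, greedily choosing flips that
    keep the node closest to its neighbourhood mean (higher similarity)."""
    x = x.copy()
    flipped = set()
    for _ in range(budget):
        best_j, best_sim = None, -np.inf
        for j in range(len(x)):
            if j in flipped:          # never undo an earlier flip
                continue
            trial = x.copy()
            trial[j] = 1.0 - trial[j]
            sim = cosine(trial, neighbour_mean)
            if sim > best_sim:
                best_j, best_sim = j, sim
        x[best_j] = 1.0 - x[best_j]
        flipped.add(best_j)
    return x

# Toy usage: a 4-node graph with binary node features.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
X = np.random.default_rng(0).integers(0, 2, size=(4, 8)).astype(float)
budgets = per_node_budgets(adj)
target = 3
neigh_mean = X[adj[target] == 1].mean(axis=0)
x_adv = perturb_node(X[target], neigh_mean, budgets[target])
print("budgets:", budgets, "| similarity kept:", round(cosine(x_adv, neigh_mean), 3))
```

The similarity-preserving greedy step reflects the abstract's claim that perturbed nodes should stay close to their neighbours to better confuse the GNN; in the restricted black-box setting the flip selection would be scored against model queries rather than similarity alone.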
Acknowledgements
This work was funded in part by the National Natural Science Foundation of China (Grant 62032019) and the Capacity Development Grant of Southwest University (SWU116007).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Sun, Q., Yang, Z., Liu, Z., Zou, Q. (2023). Black-Box Adversarial Attack on Graph Neural Networks Based on Node Domain Knowledge. In: Jin, Z., Jiang, Y., Buchmann, R.A., Bi, Y., Ghiran, A.M., Ma, W. (eds) Knowledge Science, Engineering and Management. KSEM 2023. Lecture Notes in Computer Science, vol. 14117. Springer, Cham. https://doi.org/10.1007/978-3-031-40283-8_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-40282-1
Online ISBN: 978-3-031-40283-8
eBook Packages: Computer Science (R0)