Black-Box Adversarial Attack on Graph Neural Networks Based on Node Domain Knowledge

  • Conference paper
  • First Online:
Knowledge Science, Engineering and Management (KSEM 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14117)


Abstract

The sensitivity of Graph Neural Networks (GNNs) to their input graph data has drawn increasing attention to adversarial graphs. Given the widespread application of GNNs to graph tasks, studying the principles and implementation of graph adversarial attacks is particularly important for understanding GNN robustness. Previous studies have attempted to reduce the prediction accuracy of GNNs by adding small perturbations to the graph structure or node features. However, these methods typically constrain the perturbation strength to a small budget and fix that budget to a single constant for all structure or feature perturbations. In downstream node classification tasks, the perturbation strength required to misclassify a node varies from node to node. It is therefore important to take the domain knowledge of nodes and edges into account when setting the perturbation vector. To address this issue, we propose a special adversarial graph, DK-AdvGraph, in which we carefully tailor the perturbation vector of the adversarial graph under a highly restricted black-box setting. In addition, to better confuse GNNs, we enforce a higher similarity between nodes after perturbation when setting the perturbation vector. Extensive experimental results demonstrate that the proposed DK-AdvGraph has practical significance in advancing GNN research that accounts for graph domain knowledge.

Supported by Southwest University.
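The abstract outlines, but does not specify, how DK-AdvGraph derives a per-node perturbation vector from node domain knowledge while keeping perturbed nodes similar to their originals. The Python sketch below is purely illustrative of that general idea and is not the paper's algorithm: it scales each node's feature-perturbation budget by its degree (one simple form of node domain knowledge) and accepts a perturbation only if the perturbed feature vector stays above a cosine-similarity threshold. All names (degree_scaled_budget, perturb_features, sim_threshold) are hypothetical.

    # Hypothetical sketch: per-node perturbation budgets from simple node
    # "domain knowledge" (here, node degree), with a cosine-similarity
    # constraint on the result. Not the paper's DK-AdvGraph construction.
    import numpy as np

    def degree_scaled_budget(adj, base_budget=0.1):
        """Assign each node a feature-perturbation budget that grows with its degree,
        reflecting the idea that different nodes need different perturbation strengths."""
        degrees = adj.sum(axis=1)
        return base_budget * (1.0 + degrees / (degrees.max() + 1e-12))

    def perturb_features(x, budgets, sim_threshold=0.9, rng=None):
        """Add a random, budget-bounded perturbation per node, keeping each perturbed
        feature vector within cosine similarity >= sim_threshold of the original."""
        rng = np.random.default_rng() if rng is None else rng
        x_adv = x.copy()
        for i in range(x.shape[0]):
            delta = rng.normal(size=x.shape[1])
            delta = budgets[i] * delta / (np.linalg.norm(delta) + 1e-12)
            candidate = x[i] + delta
            cos = candidate @ x[i] / (np.linalg.norm(candidate) * np.linalg.norm(x[i]) + 1e-12)
            if cos >= sim_threshold:  # accept only if the node stays similar to itself
                x_adv[i] = candidate
        return x_adv

    # Toy usage on a 4-node graph with 8-dimensional features.
    adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
    x = np.random.default_rng(0).normal(size=(4, 8))
    x_adv = perturb_features(x, degree_scaled_budget(adj), sim_threshold=0.9)

The actual choice of domain knowledge, similarity measure, and budget rule in DK-AdvGraph is defined in the paper itself; this sketch only makes the per-node-budget idea concrete.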

Acknowledgements

This paper is funded in part by the National Natural Science Foundation of China (62032019) and the Capacity Development Grant of Southwest University (SWU116007).

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Sun, Q., Yang, Z., Liu, Z., Zou, Q. (2023). Black-Box Adversarial Attack on Graph Neural Networks Based on Node Domain Knowledge. In: Jin, Z., Jiang, Y., Buchmann, R.A., Bi, Y., Ghiran, A.M., Ma, W. (eds) Knowledge Science, Engineering and Management. KSEM 2023. Lecture Notes in Computer Science, vol 14117. Springer, Cham. https://doi.org/10.1007/978-3-031-40283-8_18

  • DOI: https://doi.org/10.1007/978-3-031-40283-8_18

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40282-1

  • Online ISBN: 978-3-031-40283-8

  • eBook Packages: Computer Science, Computer Science (R0)
