
Disttack: Graph Adversarial Attacks Toward Distributed GNN Training

Yuxiang Zhang, Xin Liu, Meng Wu, Wei Yan, Mingyu Yan, Xiaochun Ye, Dongrui Fan
May 10, 2024
  • Abstract
    This paper introduces Disttack, the first adversarial attack framework targeting distributed graph neural network (GNN) training, which exploits the frequent gradient updates characteristic of distributed systems. Specifically, Disttack corrupts distributed GNN training by injecting adversarial perturbations into a single computing node. The attacked subgraph is precisely perturbed to induce abnormal gradient ascent during backpropagation, disrupting gradient synchronization across computing nodes and causing a significant performance decline in the trained GNN. We evaluate Disttack on four large real-world graphs by attacking five widely adopted GNNs. Experimental results show that, compared with state-of-the-art adversarial attack methods, Disttack amplifies model accuracy degradation by 2.75x and achieves a 17.33x average speedup while remaining unnoticeable.
  • Problem addressed
    Adversarial attacks against distributed graph neural network (GNN) training.
  • Key idea
    Disttack is the first adversarial attack framework for distributed GNN training, and it leverages the frequent gradient updates characteristic of distributed systems. It corrupts distributed GNN training by injecting adversarial perturbations into a single computing node, disrupting gradient synchronization between computing nodes and causing a significant performance decline in the trained GNN.
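The gradient-synchronization poisoning described above can be sketched with a toy simulation. This is a minimal sketch under assumptions: a linear model stands in for a GNN, plain gradient averaging stands in for all-reduce, and a sign-based (FGSM-style) ascent step on one worker's local features stands in for the paper's precise subgraph perturbation. All function and parameter names here are hypothetical, not Disttack's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grads(W, X, y):
    """Squared loss of a linear stand-in model; returns (loss, dL/dW, dL/dX)."""
    err = X @ W - y
    n = X.shape[0]
    loss = 0.5 * np.mean(err ** 2)
    dW = X.T @ err / n              # gradient w.r.t. model weights
    dX = np.outer(err, W) / n       # gradient w.r.t. input features
    return loss, dW, dX

def sync_grad(W, shards, attack_eps=0.0):
    """Average per-worker gradients; optionally attack worker 0."""
    grads = []
    for i, (X, y) in enumerate(shards):
        if i == 0 and attack_eps > 0:
            # attacker perturbs worker 0's local features by gradient
            # *ascent* on the loss (FGSM-style sign step), so this
            # worker's contribution pushes the synchronized update uphill
            _, _, dX = loss_and_grads(W, X, y)
            X = X + attack_eps * np.sign(dX)
        _, dW, _ = loss_and_grads(W, X, y)
        grads.append(dW)
    return np.mean(grads, axis=0)   # stand-in for all-reduce averaging

def train(W0, shards, steps=200, lr=0.1, attack_eps=0.0):
    """Synchronous SGD; returns final loss measured on the clean shards."""
    W = W0.copy()
    for _ in range(steps):
        W = W - lr * sync_grad(W, shards, attack_eps)
    return np.mean([loss_and_grads(W, X, y)[0] for X, y in shards])

d = 4
W0 = rng.normal(size=d)
# two workers, each holding a local shard (stand-in for a graph partition)
shards = [(rng.normal(size=(8, d)), rng.normal(size=8)) for _ in range(2)]

clean_loss = train(W0, shards)
attacked_loss = train(W0, shards, attack_eps=1.0)
print(f"clean final loss:    {clean_loss:.4f}")
print(f"attacked final loss: {attacked_loss:.4f}")
```

Because the synchronized update is a plain average, corrupting a single worker is enough to bias every step, which is why the attacked run ends with a higher loss on clean data; the real attack additionally constrains the perturbation so it stays unnoticeable.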
  • Other highlights
    Disttack is evaluated on four large real-world graphs by attacking five widely adopted GNNs. Experimental results demonstrate that Disttack amplifies model accuracy degradation by 2.75x and achieves a 17.33x average speedup while maintaining unnoticeability.
  • Related work
    Related work in this field includes adversarial attacks on GNNs, distributed machine learning, and privacy-preserving distributed learning.