Research article

RustGraph: Robust Anomaly Detection in Dynamic Graphs by Jointly Learning Structural-Temporal Dependency

Published: 01 July 2024

Abstract

Dynamic graph data are ubiquitous in the real world, arising in social networks, financial systems, and traffic flow. Detecting anomalies in these dynamic graphs quickly and accurately is of vital importance. However, despite the promising results that current anomaly detection methods have achieved, two major limitations remain when coping with dynamic graphs. First, topological structure and temporal dynamics have been modeled separately, resulting in less expressive features for detection. Second, models have been trained on unreliable, noisy labels generated by random negative sampling, leaving them severely vulnerable to subtle perturbations. To overcome these limitations, we propose RustGraph, a robust anomaly detection framework that jointly learns structural-temporal dependency in dynamic graphs. To this end, we design a variational graph auto-encoder with an informative prior that simultaneously encodes both graph structural and temporal information. We then introduce a fine-grained contrastive learning method that learns better node representations by exploiting the temporal consistency between two snapshots. Furthermore, we formulate the noisy-label learning problem for anomaly detection in dynamic graphs and propose a robust anomaly detector that improves model performance by leveraging the learned graph structure signal. Extensive experiments on six real-world datasets demonstrate that RustGraph achieves state-of-the-art performance, with an average improvement of 3.64% in AUC-ROC over all baselines. The code is available at https://github.com/aubreygjh/RustGraph.
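To make the "jointly learning structural-temporal dependency" idea concrete, the following is a minimal numpy sketch of the general pattern the abstract describes, not the paper's actual architecture: a GCN layer reads each snapshot's structure, a simple recurrent update carries hidden state across snapshots, and the mixed state parameterizes a Gaussian posterior over node embeddings via the reparameterization trick. All weight names and the recurrent update are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(adj, feats, weight):
    # Symmetrically normalized propagation: D^-1/2 (A + I) D^-1/2 X W, then ReLU.
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weight, 0.0)

def encode_snapshot(adj, feats, hidden_prev, w_gcn, w_rec, w_mu, w_logvar):
    """One step of a (hypothetical) structural-temporal VGAE encoder: the GCN
    captures the current snapshot's topology, the recurrent term mixes in the
    previous snapshot's hidden state, and the result parameterizes a Gaussian
    posterior over node embeddings."""
    structural = gcn_layer(adj, feats, w_gcn)
    hidden = np.tanh(structural + hidden_prev @ w_rec)  # temporal mixing
    mu, logvar = hidden @ w_mu, hidden @ w_logvar
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)  # reparameterize
    return z, hidden

# Toy run: 4 nodes, 3 features, 2 snapshots of a random dynamic graph.
n, f, h = 4, 3, 5
w_gcn = rng.standard_normal((f, h)); w_rec = rng.standard_normal((h, h))
w_mu = rng.standard_normal((h, h)); w_logvar = 0.01 * rng.standard_normal((h, h))
hidden = np.zeros((n, h))
for _ in range(2):
    adj = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
    adj += adj.T  # undirected snapshot, zero diagonal
    feats = rng.standard_normal((n, f))
    z, hidden = encode_snapshot(adj, feats, hidden, w_gcn, w_rec, w_mu, w_logvar)
print(z.shape)  # (4, 5)
```

Because the hidden state is threaded through every snapshot, each node's latent code reflects both its current neighborhood and its history, which is the joint dependency the separate-modeling baselines miss.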
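The "temporal consistency between two snapshots" can be read as a contrastive objective. Here is a generic InfoNCE sketch of that reading (our illustration, not RustGraph's exact loss): each node's embeddings at consecutive snapshots form a positive pair, and all other nodes at the previous snapshot serve as negatives.

```python
import numpy as np

def temporal_infonce(z_t, z_prev, temperature=0.5):
    """InfoNCE over node embeddings of two consecutive snapshots: positives
    sit on the diagonal of the pairwise cosine-similarity matrix."""
    a = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    b = z_prev / np.linalg.norm(z_prev, axis=1, keepdims=True)
    sim = a @ b.T / temperature                # pairwise similarities
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(1)
z_prev = rng.standard_normal((8, 16))
# Temporally consistent embeddings (small drift) vs. mismatched node identities.
aligned = temporal_infonce(z_prev + 0.05 * rng.standard_normal((8, 16)), z_prev)
shuffled = temporal_infonce(rng.permutation(z_prev), z_prev)
print(aligned < shuffled)  # consistent snapshots incur the lower loss
```

Minimizing such a loss pulls each node toward its own recent past while pushing it away from other nodes, so nodes whose embeddings drift abruptly between snapshots stand out.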
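The noisy-label problem the abstract formulates (training labels produced by random negative sampling are unreliable) is commonly attacked with a small-loss selection heuristic. The snippet below illustrates that generic trick, not RustGraph's detector: keep the fraction of samples with the smallest loss, on the assumption that mislabeled samples tend to incur larger loss early in training.

```python
import numpy as np

def small_loss_selection(losses, noise_rate):
    """Return indices of the (1 - noise_rate) fraction of samples with the
    smallest loss, treating the rest as likely mislabeled."""
    keep = int(round(len(losses) * (1.0 - noise_rate)))
    return np.argsort(losses)[:keep]  # indices of trusted samples

# Per-sample losses from some detector; two samples look mislabeled.
losses = np.array([0.1, 2.3, 0.2, 1.9, 0.15, 0.05])
trusted = small_loss_selection(losses, noise_rate=1/3)
print(sorted(trusted.tolist()))  # → [0, 2, 4, 5]
```

RustGraph's detector additionally leverages the learned graph structure signal to decide which labels to trust, which this loss-only heuristic does not capture.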


Published In

IEEE Transactions on Knowledge and Data Engineering  Volume 36, Issue 7
July 2024
876 pages

Publisher

IEEE Educational Activities Department

United States
