
GraphPrior: Mutation-based Test Input Prioritization for Graph Neural Networks

Published: 24 November 2023

Abstract

Graph Neural Networks (GNNs) have achieved promising performance in a variety of practical applications. Similar to traditional DNNs, GNNs can exhibit incorrect behavior that may lead to severe consequences, so testing them is necessary and crucial. However, labeling all the test inputs for GNNs can be costly and time-consuming, especially when dealing with large and complex graphs, which seriously limits the efficiency of GNN testing. Existing studies have focused on test prioritization for DNNs, which aims to identify and prioritize fault-revealing tests (i.e., test inputs that are more likely to be misclassified) so that system bugs can be detected earlier within a limited time budget. Although some DNN prioritization approaches have been demonstrated to be effective, applying them to GNNs raises a significant problem: they do not take into account the connections (edges) between GNN test inputs (nodes), which play a significant role in GNN inference. In general, DNN test inputs are independent of each other, whereas GNN test inputs are usually represented as a graph with complex relationships among the tests. In this article, we propose GraphPrior (GNN-oriented Test Prioritization), a set of approaches that prioritize test inputs specifically for GNNs via mutation analysis. Inspired by mutation testing in traditional software engineering, in which test suites are evaluated based on the mutants they kill, GraphPrior generates mutated models for a GNN and regards test inputs that kill many mutated models as more likely to be misclassified. GraphPrior then leverages the mutation results in two ways: killing-based and feature-based methods. When scoring a test input, the killing-based method considers each mutated model equally important, while feature-based methods learn a different importance for each mutated model through ranking models. Finally, GraphPrior ranks all the test inputs based on their scores.
We conducted an extensive study based on 604 subjects to evaluate GraphPrior on both natural and adversarial test inputs. The results demonstrate that KMGP, the killing-based GraphPrior approach, outperforms the compared approaches in the majority of cases, with an average improvement of 4.76% to 49.60% in terms of APFD. Furthermore, the feature-based GraphPrior approach RFGP performs the best among all the GraphPrior approaches. On adversarial test inputs, RFGP outperforms the compared approaches across different adversarial attacks, with an average improvement of 2.95% to 46.69%.
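The killing-based idea described above can be illustrated compactly: generate mutated models, count for each test input how many mutants' predictions disagree with the original model, and rank the tests by that count. Below is a minimal sketch of this scoring scheme, not the authors' implementation; the toy parity "classifier", the example mutants, and the function names (`killing_scores`, `prioritize`) are hypothetical stand-ins for real GNN models and mutation operators.

```python
# Hypothetical sketch of killing-based mutation scoring (KMGP-style).
# A "model" here is any callable mapping a test input to a class label.
from typing import Callable, List, Sequence

Model = Callable[[int], int]

def killing_scores(original: Model,
                   mutants: Sequence[Model],
                   tests: Sequence[int]) -> List[int]:
    """A test input 'kills' a mutant when the mutant's prediction
    differs from the original model's prediction on that input."""
    scores = []
    for x in tests:
        base = original(x)
        scores.append(sum(1 for m in mutants if m(x) != base))
    return scores

def prioritize(tests: Sequence[int], scores: Sequence[int]) -> List[int]:
    """Rank tests by descending kill count: inputs that kill more
    mutants are treated as more likely to be misclassified."""
    return [t for _, t in sorted(zip(scores, tests), key=lambda p: -p[0])]

# Toy demo: integer inputs, a parity "classifier", and two mutants.
original = lambda x: x % 2
mutants = [
    lambda x: (x + 1) % 2,                          # disagrees everywhere
    lambda x: x % 2 if x < 3 else (x + 1) % 2,      # disagrees for x >= 3
]
tests = [1, 2, 5, 8]
ranked = prioritize(tests, killing_scores(original, mutants, tests))
print(ranked)  # inputs that kill both mutants (5 and 8) come first
```

A feature-based variant (such as RFGP) would instead treat each test's per-mutant kill vector as a feature vector and feed it to a ranking model (e.g., a random forest), letting the model learn a different importance for each mutant rather than weighting them equally.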



Published In

ACM Transactions on Software Engineering and Methodology, Volume 33, Issue 1
January 2024, 933 pages
EISSN: 1557-7392
DOI: 10.1145/3613536
Editor: Mauro Pezzè
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 24 November 2023
Online AM: 04 July 2023
Accepted: 13 June 2023
Revised: 12 June 2023
Received: 15 January 2023
Published in TOSEM Volume 33, Issue 1


Author Tags

  1. Test Input Prioritization
  2. Graph Neural Networks
  3. Mutation
  4. Labelling

Qualifiers

  • Research-article

Funding Sources

  • Luxembourg National Research Funds (FNR) AFR
  • European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program


Article Metrics

  • Downloads (Last 12 months)1,214
  • Downloads (Last 6 weeks)196
Reflects downloads up to 06 Jan 2025


Cited By

  • (2025) An empirical study of AI techniques in mobile applications. Journal of Systems and Software 219, 112233. DOI: 10.1016/j.jss.2024.112233. Online publication date: Jan-2025.
  • (2024) Deep Learning for Hyperspectral Image Classification: A Critical Evaluation via Mutation Testing. Remote Sensing 16, 24 (4695). DOI: 10.3390/rs16244695. Online publication date: 16-Dec-2024.
  • (2024) Test Optimization in DNN Testing: A Survey. ACM Transactions on Software Engineering and Methodology 33, 4 (1-42). DOI: 10.1145/3643678. Online publication date: 27-Jan-2024.
  • (2024) Mutation Testing Reinvented: How Artificial Intelligence Complements Classic Methods. In 2024 9th International Conference on Computer Science and Engineering (UBMK), 298-303. DOI: 10.1109/UBMK63289.2024.10773427. Online publication date: 26-Oct-2024.
  • (2024) Test Input Prioritization for Graph Neural Networks. IEEE Transactions on Software Engineering 50, 6 (1396-1424). DOI: 10.1109/TSE.2024.3385538. Online publication date: Jun-2024.
  • (2024) Test Input Prioritization for Machine Learning Classifiers. IEEE Transactions on Software Engineering 50, 3 (413-442). DOI: 10.1109/TSE.2024.3350019. Online publication date: Mar-2024.
  • (2024) A Survey on Test Input Selection and Prioritization for Deep Neural Networks. In 2024 10th International Symposium on System Security, Safety, and Reliability (ISSSR), 232-243. DOI: 10.1109/ISSSR61934.2024.00035. Online publication date: 16-Mar-2024.
  • (2024) Prioritizing test cases for deep learning-based video classifiers. Empirical Software Engineering 29, 5. DOI: 10.1007/s10664-024-10520-1. Online publication date: 22-Jul-2024.
