Abstract
Deep learning has demonstrated remarkable performance in source code vulnerability detection. However, significant challenges persist in generalization and in handling real-world samples. These challenges are frequently attributed to dataset distribution shift, such as spurious correlations. While previous research has explored spurious correlations in other tasks, such as text classification and function naming, vulnerability detection has yet to receive extensive study in this context. This paper proposes a novel approach called VulCausal, which integrates a causal inference framework into neural network models for vulnerability detection. VulCausal captures the spurious correlations present in API functions, user-defined identifiers, and code structure during the training phase, and mitigates them through backdoor adjustment in the inference phase, effectively reducing the influence of these confounding factors. Experimental results demonstrate that VulCausal significantly enhances the accuracy and robustness of vulnerability detection. It achieves state-of-the-art accuracy on the CodeXGLUE defect detection benchmark, ranking first on the leaderboard. Additionally, it reduces the attack success rate from 63.08% to 23.7% against ALERT, a state-of-the-art adversarial attack targeting pre-trained language models of code.
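The backdoor adjustment mentioned in the abstract can be illustrated with a minimal sketch (this is not the authors' implementation; the strata and probabilities below are hypothetical). Instead of conditioning a prediction on a confounder z (e.g., which API-usage stratum a code snippet falls into), backdoor adjustment marginalizes over it: P(y | do(x)) = Σ_z P(y | x, z) · P(z).

```python
def backdoor_adjusted_prediction(cond_probs, confounder_prior):
    """Apply backdoor adjustment to per-stratum predictions.

    cond_probs[i]       -- P(vulnerable | code, z=i), the model's prediction
                           within confounder stratum i (hypothetical values).
    confounder_prior[i] -- P(z=i), the stratum prior estimated from training
                           data, which must sum to 1.
    Returns P(vulnerable | do(code)), the deconfounded prediction.
    """
    assert len(cond_probs) == len(confounder_prior)
    assert abs(sum(confounder_prior) - 1.0) < 1e-9
    # Marginalize the confounder out instead of conditioning on it.
    return sum(p * w for p, w in zip(cond_probs, confounder_prior))


# Three hypothetical strata of a confounder (e.g., API-function usage).
p_y_given_x_z = [0.9, 0.2, 0.4]  # P(vulnerable | code, z) per stratum
p_z = [0.2, 0.5, 0.3]            # P(z) from the training distribution

adjusted = backdoor_adjusted_prediction(p_y_given_x_z, p_z)
print(round(adjusted, 3))  # 0.9*0.2 + 0.2*0.5 + 0.4*0.3 = 0.4
```

The key design point is that P(z) comes from the whole training distribution rather than from the individual sample, so a spurious association between a particular stratum and the "vulnerable" label cannot dominate the final prediction.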
References
Alon, U., Zilberstein, M., Levy, O., Yahav, E.: code2vec: Learning distributed representations of code. Proc. ACM Program. Lang. 3(POPL), 1–29 (2019)
Alyami, S., Alhothali, A., Jamal, A.: Systematic literature review of Arabic aspect-based sentiment analysis. J. King Saud Univ. Comput. Inform. Sci. 34(9), 6524–6551 (2022)
Chakraborty, S., Krishna, R., Ding, Y., Ray, B.: Deep learning based vulnerability detection: are we there yet? IEEE Trans. Softw. Eng. 48, 3280–3296 (2021)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
Duan, X., Wu, J., Ji, S., Rui, Z., Luo, T., Yang, M., Wu, Y.: VulSniper: focus your attention to shoot fine-grained vulnerabilities. In: IJCAI, pp. 4665–4671 (2019)
Feng, Z., et al.: CodeBERT: a pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155 (2020)
Gao, S., Gao, C., Wang, C., Sun, J., Lo, D., Yu, Y.: Two sides of the same coin: exploiting the impact of identifiers in neural code comprehension. In: 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 1933–1945. IEEE (2023)
Guo, D., Lu, S., Duan, N., Wang, Y., Zhou, M., Yin, J.: UniXcoder: unified cross-modal pre-training for code representation. arXiv preprint arXiv:2203.03850 (2022)
Guo, D., et al.: GraphCodeBERT: pre-training code representations with data flow. arXiv preprint arXiv:2009.08366 (2020)
Hanif, H., Maffeis, S.: VulBERTa: simplified source code pre-training for vulnerability detection. In: 2022 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2022)
Henkel, J., Ramakrishnan, G., Wang, Z., Albarghouthi, A., Jha, S., Reps, T.: Semantic robustness of models of source code. In: 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), pp. 526–537. IEEE (2022)
Kaushik, D., Hovy, E., Lipton, Z.C.: Learning the difference that makes a difference with counterfactually-augmented data. arXiv preprint arXiv:1909.12434 (2019)
Kim, S., Choi, J., Ahmed, M.E., Nepal, S., Kim, H.: VulDeBERT: a vulnerability detection system using BERT. In: 2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), pp. 69–74. IEEE (2022)
Landeiro, V., Culotta, A.: Robust text classification under confounding shift. J. Artif. Intell. Res. 63, 391–419 (2018)
Li, Z., et al.: Towards making deep learning-based vulnerability detectors robust. arXiv preprint arXiv:2108.00669 (2021)
Li, Z., et al.: VulDeePecker: a deep learning-based system for vulnerability detection. arXiv preprint arXiv:1801.01681 (2018)
Lu, S., et al.: CodeXGLUE: a machine learning benchmark dataset for code understanding and generation. arXiv preprint arXiv:2102.04664 (2021)
Phan, L., et al.: CoTexT: multi-task learning with code-text transformer. arXiv preprint arXiv:2105.08645 (2021)
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019)
Russell, R., et al.: Automated vulnerability detection in source code using deep representation learning. In: 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 757–762. IEEE (2018). https://doi.org/10.1109/ICMLA.2018.00120
Wang, J., Kuang, H., Li, R., Su, Y.: Software source code vulnerability detection based on CNN-GAP interpretability model. J. Electron. Inform. Technol. 44(7), 2568–2575 (2022)
Wang, Z., Culotta, A.: Identifying spurious correlations for robust text classification. arXiv preprint arXiv:2010.02458 (2020)
Wang, Z., Culotta, A.: Robustness to spurious correlations in text classification via automatically generated counterfactuals. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 14024–14031 (2021)
Yang, Z., Shi, J., He, J., Lo, D.: Natural attack for pre-trained models of code. In: Proceedings of the 44th International Conference on Software Engineering, pp. 1482–1493 (2022)
Zhang, H., Li, Z., Li, G., Ma, L., Liu, Y., Jin, Z.: Generating adversarial examples for holding robustness of source code processing models. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 1169–1176 (2020)
Zhou, Y., Liu, S., Siow, J., Du, X., Liu, Y.: Devign: effective vulnerability identification by learning comprehensive program semantics via graph neural networks. In: Advances in Neural Information Processing Systems, vol. 32 (2019)
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Kuang, H., Zhang, J., Yang, F., Zhang, L., Huang, Z., Yang, L. (2024). VulCausal: Robust Vulnerability Detection Using Neural Network Models from a Causal Perspective. In: Cao, C., Chen, H., Zhao, L., Arshad, J., Asyhari, T., Wang, Y. (eds) Knowledge Science, Engineering and Management. KSEM 2024. Lecture Notes in Computer Science(), vol 14886. Springer, Singapore. https://doi.org/10.1007/978-981-97-5498-4_4
Print ISBN: 978-981-97-5497-7
Online ISBN: 978-981-97-5498-4