DOI: 10.1145/3641181.3641182
AMNeuzz: A Strongly Directed Fuzz Testing Method Based on Attention Mechanism

Published: 11 April 2024

Abstract

Fuzz testing (fuzzing) is currently one of the most popular vulnerability discovery techniques, and it plays a major role in exposing software security vulnerabilities and improving software security. A fuzzer applies specific mutations to collected seeds to produce a large number of test cases, which are then executed against the target program to trigger potential crashes. However, traditional fuzzing generally suffers from a low level of test automation and detects only a limited range of vulnerability types. To address these problems, applying machine learning techniques to fuzzing has become a hot research topic in academia, yet recent studies still exhibit shortcomings such as low edge coverage and poor generalization ability. This paper therefore proposes a strongly directed fuzz testing method based on an attention mechanism; we name the resulting fuzzer AMNeuzz. AMNeuzz combines neural networks with attention mechanisms to build an automatic sample generation model. The model is trained to learn the intrinsic format features of the samples, so it can automatically generate test samples that conform to certain syntactic specifications and quickly probe program paths that may contain vulnerabilities, improving efficiency. In addition, fuzzer performance is improved by refining NEUZZ's gradient strategy. The experimental results show that, under the same time budget, AMNeuzz achieves higher edge coverage than NEUZZ.
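The abstract does not describe the model architecture, so the following is only a minimal numpy sketch of the general idea behind NEUZZ-style gradient-guided mutation with an attention-like weighting over byte positions: a smooth surrogate model predicts edge coverage from input bytes, the gradient of a target edge with respect to the bytes is softmax-normalised into per-byte importance scores, and the most influential bytes are mutated in the gradient direction. The seed length, edge count, linear surrogate, and softmax weighting are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 64-byte seeds, 32 coverage edges.
SEED_LEN, N_EDGES = 64, 32
# Toy linear surrogate standing in for the trained neural network.
W = rng.normal(scale=0.1, size=(SEED_LEN, N_EDGES))

def predict_coverage(x):
    """Sigmoid over a linear surrogate: predicted hit probability per edge."""
    return 1.0 / (1.0 + np.exp(-(x / 255.0) @ W))

def attention_scores(x, edge):
    """Softmax-normalised gradient magnitude per byte for one target edge.
    This plays the role of an attention weighting over byte positions."""
    y = predict_coverage(x)[edge]
    grad = W[:, edge] * y * (1.0 - y) / 255.0   # d(sigmoid)/d(byte)
    s = np.abs(grad)
    e = np.exp(s - s.max())
    return e / e.sum()

def mutate(x, edge, k=4):
    """Gradient-guided mutation: perturb the k bytes the surrogate deems
    most influential for the target edge, in the gradient's direction."""
    att = attention_scores(x, edge)
    hot = np.argsort(att)[-k:]                  # top-k byte positions
    y = predict_coverage(x)[edge]
    grad = W[:, edge] * y * (1.0 - y)
    child = x.copy()
    child[hot] = np.clip(child[hot] + np.sign(grad[hot]) * 32, 0, 255)
    return child

seed = rng.integers(0, 256, size=SEED_LEN, dtype=np.uint8)
child = mutate(seed, edge=0)
```

In a real fuzzer the surrogate would be retrained as new coverage is observed, and the mutated children would be executed against the instrumented target to collect fresh edge bitmaps.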

References

[1] Miller BP, Fredriksen L, So B. An empirical study of the reliability of UNIX utilities. Communications of the ACM, 1990, 33(12): 32-44.
[2] Henard C, Papadakis M, Harman M, et al. Comparing white-box and black-box test prioritization. International Conference on Software Engineering. IEEE, 2016: 523-534. DOI: 10.1145/2884781.2884791.
[3] Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 2011, 12: 2121-2159.
[4] Wang Y, Wu Z, Wei Q, et al. NeuFuzz: efficient fuzzing with deep neural network. IEEE Access, 2019, 7: 36340-36352.
[5] Chen Y, Ahmadi M, Wang B, et al. MEUZZ: smart seed scheduling for hybrid fuzzing. 23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID), 2020: 77-92.
[6] Lyu C, Ji S, Li Y, et al. SmartSeed: smart seed generation for efficient fuzzing. arXiv preprint arXiv:1807.02606, 2018.
[7] Godefroid P, Peleg H, Singh R. Learn&Fuzz: machine learning for input fuzzing. 32nd IEEE/ACM International Conference on Automated Software Engineering. IEEE, 2017: 50-59.
[8] Wang J, Chen B, Wei L, et al. Skyfire: data-driven seed generation for fuzzing. 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017: 579-594.
[9] Böttinger K, Godefroid P, Singh R. Deep reinforcement fuzzing. 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018: 116-122.
[10] Rajpal M, Blum W, Singh R. Not all bytes are equal: neural byte sieve for fuzzing. arXiv preprint arXiv:1711.04596, 2017.
[11] Zong P, Lv T, Wang D, et al. FuzzGuard: filtering out unreachable inputs in directed grey-box fuzzing through deep learning. 29th USENIX Security Symposium (USENIX Security 20), 2020: 2255-2269.
[12] Spieker H, Gotlieb A, Marijan D, et al. Reinforcement learning for automatic test case prioritization and selection in continuous integration. Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis. Santa Barbara: ACM, 2017: 12-22.
[13] Chen P, Chen H. Angora: efficient fuzzing by principled search. 2018 IEEE Symposium on Security and Privacy (SP). San Francisco: IEEE, 2018: 711-725.
[14] She D, Pei K, Epstein D, et al. NEUZZ: efficient fuzzing with neural program smoothing. arXiv preprint arXiv:1807.05620, 2018. https://arxiv.org/abs/1807.05620.
[15] Shin ECR, Song D, Moazzezi R. Recognizing functions in binaries with neural networks. Proceedings of the 24th USENIX Conference on Security Symposium. Washington, D.C.: USENIX Association, 2015: 611-626.
[16] Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[17] Luong MT, Pham H, Manning CD. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
[18] Salton G, Ross R, Kelleher J. Attentive language models. Proceedings of the 8th International Joint Conference on Natural Language Processing (Vol. 1: Long Papers), 2017: 441-450.
[19] Al-Rfou R, Choe D, Constant N, Guo M, Jones L. Character-level language modeling with deeper self-attention. Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 3159-3166.
[20] Zheng W, Chen JZ, Wu XX, Chen X, Xia X. Empirical studies on deep-learning-based security bug report prediction methods. Ruan Jian Xue Bao/Journal of Software, 2020, 31(5): 1294-1313.
[21] Rush AM, Chopra S, Weston J. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.
[22] Paulus R, Xiong C, Socher R. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017.


    Published In

    ICCDE '24: Proceedings of the 2024 10th International Conference on Computing and Data Engineering
    January 2024
    157 pages
    ISBN:9798400709319
    DOI:10.1145/3641181

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. attention mechanism
    2. fuzzing
    3. machine learning
    4. neural network

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICCDE 2024
