A Survey of Software Dynamic Analysis Methods

Programming and Computer Software

Abstract

A review of software dynamic analysis methods is presented, focusing mainly on methods supported by tools aimed at software security verification and applicable to system software. Fuzzing, runtime verification, and dynamic symbolic execution techniques are considered in detail. Dynamic taint analysis methods and tools are excluded, since technical details about them are difficult to gather. The review of fuzzing and dynamic symbolic execution concentrates on the techniques used to solve the various problems that arise during tool operation, rather than on the individual tools, of which there are more than 100. In addition, fuzzing counteraction (anti-fuzzing) techniques are considered.
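
As an illustration of the coverage-guided fuzzing workflow surveyed in the article, the sketch below shows what a minimal fuzz target looks like. It is not code from the article: it assumes a libFuzzer-style entry point (LLVMFuzzerTestOneInput) and a hypothetical function under test, parse_header, with a deliberately planted out-of-bounds read that AddressSanitizer would report once the fuzzer's mutations reach the guarded branch.

// Minimal coverage-guided fuzz target (illustrative sketch, not from the article).
// Build, assuming clang with libFuzzer support:
//   clang++ -g -fsanitize=fuzzer,address fuzz_target.cpp -o fuzz_target
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical function under test with a planted defect: when the input is
// exactly the 4-byte magic "FUZZ", data[4] reads one byte past the buffer.
static int parse_header(const uint8_t *data, size_t size) {
  if (size >= 4 && std::memcmp(data, "FUZZ", 4) == 0) {
    return data[4];  // out-of-bounds read when size == 4
  }
  return 0;
}

// libFuzzer calls this entry point repeatedly with mutated inputs and uses
// coverage feedback to discover the "FUZZ" branch; AddressSanitizer then
// reports the resulting heap-buffer-overflow.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  parse_header(data, size);
  return 0;
}

Running the resulting binary mutates inputs until the sanitizer flags the defect and saves the crashing input for later reproduction.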

Funding

This work was supported by ongoing institutional funding. No additional grants to carry out or direct this particular research were obtained.

Author information

Corresponding author

Correspondence to V. V. Kuliamin.

Ethics declarations

The author of this work declares that he has no conflicts of interest.

Additional information

Translated by M. Talacheva

Publisher’s Note.

Pleiades Publishing remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Kuliamin, V.V. A Survey of Software Dynamic Analysis Methods. Program Comput Soft 50, 90–114 (2024). https://doi.org/10.1134/S0361768824010079
