A Survey on Software Vulnerability Exploitability Assessment

Published: 26 April 2024
Abstract

    Knowing the exploitability and severity of software vulnerabilities helps practitioners prioritize vulnerability mitigation efforts. Researchers have proposed and evaluated many different exploitability assessment methods. The goal of this research is to assist practitioners and researchers in understanding existing methods for assessing vulnerability exploitability through a survey of the exploitability assessment literature. We identify three exploitability assessment approaches: assessments based on the original, manual Common Vulnerability Scoring System (CVSS); automated Deterministic assessments; and automated Probabilistic assessments. Other than the original CVSS, the two most common sub-categories are Deterministic Program State based assessments and Probabilistic learning model assessments.
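    As a concrete illustration of the first approach, the CVSS v3.1 base score includes an exploitability sub-score computed from four metrics (Attack Vector, Attack Complexity, Privileges Required, User Interaction). The sketch below is ours, not the survey's; the metric weights are taken from the FIRST v3.1 specification (Privileges Required weights shown for the Scope: Unchanged case), and the helper name is hypothetical:

    ```python
    # Metric weights from the CVSS v3.1 specification (FIRST).
    # Exploitability = 8.22 x AV x AC x PR x UI
    WEIGHTS = {
        "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},  # Attack Vector
        "AC": {"L": 0.77, "H": 0.44},                        # Attack Complexity
        "PR": {"N": 0.85, "L": 0.62, "H": 0.27},             # Privileges Required (Scope unchanged)
        "UI": {"N": 0.85, "R": 0.62},                        # User Interaction
    }

    def exploitability_subscore(av: str, ac: str, pr: str, ui: str) -> float:
        """Return the CVSS v3.1 exploitability sub-score for the given metric values."""
        return 8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac] * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui]

    # A network-reachable, low-complexity vulnerability requiring no privileges
    # and no user interaction yields the maximum sub-score, about 3.89.
    print(round(exploitability_subscore("N", "L", "N", "N"), 2))
    ```

    The Deterministic and Probabilistic approaches surveyed replace this fixed, analyst-assigned formula with program analysis or learned models, respectively.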

    References

    [1]
    Abeer Alhuzali, Birhanu Eshete, Rigel Gjomemo, and V. N. Venkatakrishnan. 2016. Chainsaw: Chained automated workflow-based exploit generation. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS’16). ACM, New York, NY, USA, 12.
    [2]
    Abeer Alhuzali, Rigel Gjomemo, Birhanu Eshete, and V. N. Venkatakrishnan. 2018. NAVEX: Precise and scalable exploit generation for dynamic web applications. In Proceedings of the 27th USENIX Security Symposium.
    [3]
    Luca Allodi, Sebastian Banescu, Henning Femmer, and Kristian Beckers. 2018. Identifying relevant information cues for vulnerability assessment using CVSS. In Proceedings of the 8th ACM Conference on Data and Application Security and Privacy (CODASPY’18). ACM, New York, NY, USA, 119–126.
    [4]
    Luca Allodi, Marco Cremonini, Fabio Massacci, and Woohyun Shim. 2020. Measuring the accuracy of software vulnerability assessments: Experiments with students and professionals. Empirical Software Engineering 25, 2 (2020), 1063–1094.
    [5]
    Luca Allodi and Fabio Massacci. 2012. A preliminary analysis of vulnerability scores for attacks in wild: The ekits and sym datasets. In Proceedings of the 2012 ACM Workshop on Building Analysis Datasets and Gathering Experience Returns for Security (BADGERS’12). ACM, New York, NY, USA, 17–24.
    [6]
    Luca Allodi and Fabio Massacci. 2014. Comparing vulnerability severity and exploits using case-control studies. ACM Transactions on Information and System Security 17, 1 (Aug. 2014), Article 1, 20 pages.
    [7]
    Mohammed Almukaynizi, Alexander Grimm, Eric Nunes, Jana Shakarian, and Paulo Shakarian. 2017. Predicting cyber threats through hacker social networks in darkweb and deepweb forums. In Proceedings of the 2017 International Conference of the Computational Social Science Society of the Americas. ACM, New York, NY, USA, Article 12, 7 pages.
    [8]
    Mohammed Almukaynizi, Eric Nunes, Krishna Dharaiya, Manoj Senguttuvan, Jana Shakarian, and Paulo Shakarian. 2017. Proactive identification of exploits in the wild through vulnerability mentions online. In Proceedings of the 2017 International Conference on Cyber Conflict (CyCon U.S.’17). IEEE, 82–88.
    [9]
    Noura Alomar, Primal Wijesekera, Edward Qiu, and Serge Egelman. 2020. “You’ve got your nice list of bugs, now what?” Vulnerability discovery and management processes in the wild. In Proceedings of the 16th Symposium on Usable Privacy and Security (SOUPS’20). 319–339.
    [10]
    Kenneth Alperin, Allan Wollaber, Dennis Ross, Pierre Trepagnier, and Leslie Leonard. 2019. Risk prioritization by leveraging latent vulnerability features in a contested environment. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security. ACM, New York, NY, USA.
    [11]
    Thanassis Avgerinos, Sang Kil Cha, Brent Lim Tze Hao, and David Brumley. 2011. AEG: Automatic exploit generation. In Proceedings of the 18th Annual Network and Distributed System Security Symposium (NDSS’11).
    [12]
    Thanassis Avgerinos, Sang Kil Cha, Alexandre Rebert, Edward J. Schwartz, Maverick Woo, and David Brumley. 2014. Automatic exploit generation. Communications of the ACM 57, 2 (2014), 74–84.
    [13]
    Bitdefender. n.d. What Is an Exploit? Exploit Prevention. Retrieved May 17, 2023 from https://www.bitdefender.com/consumer/support/answer/10556/
    [14]
    Mehran Bozorgi, Lawrence K. Saul, Stefan Savage, and Geoffrey M. Voelker. 2010. Beyond heuristics: Learning to classify vulnerabilities and predict exploits. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’10). ACM, New York, NY, USA.
    [15]
    Benjamin L. Bullough, Anna K. Yanchenko, Christopher L. Smith, and Joseph R. Zipkin. 2017. Predicting exploitation of disclosed software vulnerabilities using open-source data. In Proceedings of the 3rd ACM International Workshop on Security and Privacy Analytics (IWSPA’17). ACM, New York, NY, USA.
    [16]
    Sang Kil Cha, Thanassis Avgerinos, Alexandre Rebert, and David Brumley. 2012. Unleashing mayhem on binary code. In Proceedings of the 2012 IEEE Symposium on Security and Privacy.
    [17]
    Yueqi Chen and Xinyu Xing. 2019. SLAKE: Facilitating slab manipulation for exploiting vulnerabilities in the Linux kernel. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS’19). ACM, New York, NY, USA.
    [19]
    Clarivate. 2023. Journal Citation Reports. Retrieved August 16, 2023 from https://jcr.clarivate.com/jcr/home
    [20]
    Roddy Correa, Juan Ramón Bermejo Higuera, Javier Bermejo Higuera, Juan Antonio Sicilia Montalvo, Manuel Sánchez Rubio, and Á. Alberto Magreñán. 2021. Hybrid security assessment methodology for web applications. Computer Modeling in Engineering & Sciences 126 (2021), 89–124.
    [21]
    CVE. 2022. CVE Program (Website). Retrieved March 23, 2022 from https://www.cve.org/
    [22]
    CVE Details. 2022. How Does It Work? Retrieved October 27, 2022 from https://www.cvedetails.com/how-does-it-work.php
    [23]
    CVSS Special Interest Group (SIG). 2015. Common Vulnerability Scoring System v3.0: User Guide. Technical Report. Forum of Incident Response and Security Teams (FIRST). https://www.first.org/cvss/v3.0/cvss-v30-user_guide_v1.6.pdf
    [24]
    CVSS Special Interest Group (SIG). 2017. Common Vulnerability Scoring System v3.0: Examples. Technical Report. Forum of Incident Response and Security Teams (FIRST). https://www.first.org/cvss/v3.0/cvss-v30-examples_v1.5.pdf
    [25]
    CVSS Special Interest Group (SIG). 2019. Common Vulnerability Scoring System v3.1 Specification Document. Technical Report. Forum of Incident Response and Security Teams (FIRST). https://www.first.org/cvss/v3.1/specification-document
    [26]
    CVSS Special Interest Group (SIG). 2019. Common Vulnerability Scoring System v3.1: User Guide. Technical Report. Forum of Incident Response and Security Teams (FIRST). https://www.first.org/cvss/v3-1/cvss-v31-user-guide_r1.pdf
    [27]
    Brittany Day. 2003. OSVDB: An independent and open source vulnerability database. LinuxSecurity (Web) (Dec.2003). Retrieved April 11, 2022 from https://linuxsecurity.com/features/osvdb-an-independent-and-open-source-vulnerability-database
    [28]
    S. de Smale, R. van Dijk, X. Bouwman, J. van der Ham, and M. van Eeten. 2023. No one drinks from the firehose: How organizations filter and prioritize vulnerability information. In Proceedings of the IEEE Symposium on Security and Privacy. IEEE.
    [29]
    Fenglei Deng, Jian Wang, Bin Zhang, Chao Feng, Zhiyuan Jiang, and Yunfei Su. 2020. A pattern-based software testing framework for exploitability evaluation of metadata corruption vulnerabilities. Scientific Programming 2020, 5 (2020), 1–21.
    [30]
    Jay L. Devore. 2014. Probability and Statistics for Engineering and the Sciences (9th ed.). Cengage Learning.
    [31]
    Andrej Dobrovoljc, Denis Trček, and Borut Likar. 2017. Predicting exploitations of information systems vulnerabilities through attackers’ characteristics. IEEE Access 5 (2017), 26063–26075.
    [32]
    Thomas Dullien. 2017. Weird machines, exploitability, and provable unexploitability. IEEE Transactions on Emerging Topics in Computing 8, 2 (2017), 391–403.
    [33]
    Tudor Dumitraş and Darren Shou. 2011. Toward a standard benchmark for computer security research: The worldwide intelligence network environment (WINE). In Proceedings of the 1st Workshop on Building Analysis Datasets and Gathering Experience Returns for Security. ACM, New York, NY, USA, 89–96.
    [34]
    Michel Edkrantz. 2015. Predicting Exploit Likelihood for Cyber Vulnerabilities with Machine Learning. Master’s Thesis. Chalmers University of Technology.
    [35]
    Michel Edkrantz and Alan Said. 2015. Predicting cyber vulnerability exploits with machine learning. In Proceedings of the 13th Scandinavian Conference on Artificial Intelligence (SCAI’15).
    [36]
    Clément Elbaz, Louis Rilling, and Christine Morin. 2020. Fighting N-day vulnerabilities with automated CVSS vector prediction at disclosure. In Proceedings of the 15th International Conference on Availability, Reliability, and Security (ARES’20). ACM, New York, NY, USA, Article 26, 10 pages.
    [37]
    Khaled El Emam. 1999. Benchmarking kappa: Interrater agreement in software process assessments. Empirical Software Engineering 4 (1999), 113–133.
    [38]
    ExploitDB. 2022. Exploit Database History. Retrieved September 22, 2022 from https://www.exploit-db.com/history
    [39]
    Fortinet. n.d. Exploit Definition. Retrieved May 17, 2023 from https://www.fortinet.com/resources/cyberglossary/exploit
    [40]
    Forum of Incident Response and Security Teams (FIRST). 2022. The EPSS Model (Website). Forum of Incident Response and Security Teams (FIRST). https://www.first.org/epss/model
    [41]
    Stefan Frei, Martin May, Ulrich Fiedler, and Bernhard Plattner. 2006. Large-scale vulnerability analysis. In Proceedings of the 2006 SIGCOMM Workshop on Large-Scale Attack Defense. ACM, New York, NY, USA.
    [42]
    Stefan Frei, Dominik Schatzmann, Bernhard Plattner, and Brian Trammell. 2010. Modeling the security ecosystem-the dynamics of (in) security. In Economics of Information Security and Privacy. Springer, 79–106.
    [43]
    Christian Frühwirth and Tomi Mannisto. 2009. Improving CVSS-based vulnerability prioritization and response with context information. In Proceedings of the 3rd International Symposium on Empirical Software Engineering and Measurement. 535–544.
    [44]
    Daniele Gallingani, Rigel Gjomemo, V. N. Venkatakrishnan, and Stefano Zanero. 2015. Static detection and automatic exploitation of intent message vulnerabilities in Android applications. In Proceedings of the 2015 Conference on Mobile Security Technologies (MoST’15).
    [45]
    Laurent Gallon. 2011. Vulnerability Discrimination Using CVSS framework. In Proceedings of the 2011 4th IFIP International Conference on New Technologies, Mobility, and Security. 1–6.
    [46]
    Joshua Garcia, Mahmoud Hammad, Negar Ghorbani, and Sam Malek. 2017. Automatic generation of inter-component communication exploits for Android applications. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE’17). ACM, New York, NY, USA, 661–671.
    [47]
    Behrad Garmany, Martin Stoffel, Robert Gawlik, Philipp Koppe, Tim Blazytko, and Thorsten Holz. 2018. Towards automated generation of exploitation primitives for web browsers. In Proceedings of the 34th Annual Computer Security Applications Conference (ACSAC’18). ACM, New York, NY, USA, 300–312.
    [48]
    GII-GRIN-SCIE. 2022. The GII-GRIN-SCIE (GGS) Conference Rating. Retrieved August 16, 2023 from https://scie.lcc.uma.es/gii-grin-scie-rating/conferenceRating.jsf
    [49]
    Jon Gold. 2016. Open-source vulnerabilities database shuts down. CSO Online (Web). Retrieved April 11, 2022 from https://www.csoonline.com/article/3053549/open-source-vulnerabilities-database-shuts-down.html
    [50]
    Xi Gong, Zhenchang Xing, Xiaohong Li, Zhiyong Feng, and Zhuobing Han. 2019. Joint prediction of multiple vulnerability characteristics through multi-task learning. In Proceedings of the 24th International Conference on Engineering of Complex Computer Systems (ICECCS’19). IEEE, 31–40.
    [51]
    Liang He, Yan Cai, Hong Hu, Purui Su, Zhenkai Liang, Yi Yang, Huafeng Huang, Jia Yan, Xiangkun Jia, and Dengguo Feng. 2017. Automatically assessing crashes from heap overflows. In Proceedings of the 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE’17). 274–279.
    [52]
    Shashank Hedge. 2019. Linux permissions: An introduction to chmod. Red Hat. Retrieved August 16, 2023 from https://www.redhat.com/sysadmin/introduction-chmod
    [53]
    Sean Heelan and Agustin Gianni. 2012. Augmenting vulnerability analysis of binary code. In Proceedings of the 28th Annual Computer Security Applications Conference (ACSAC’12). ACM, New York, NY, USA, 199–208.
    [54]
    Sean Heelan, Tom Melham, and Daniel Kroening. 2018. Automatic heap layout manipulation for exploitation. In Proceedings of the 27th USENIX Security Symposium (USENIX Security’18).
    [55]
    Sean Heelan, Tom Melham, and Daniel Kroening. 2019. Gollum: Modular and greybox exploit generation for heap overflows in interpreters. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS’19). ACM, New York, NY, USA, 1689–1706.
    [56]
    Hannes Holm and Khalid Khan Afridi. 2015. An expert-based investigation of the Common Vulnerability Scoring System. Computers & Security 53 (2015), 18–30.
    [57]
    Chien-Cheng Huang, Feng-Yu Lin, Frank Yeong-Sung Lin, and Yeali S. Sun. 2013. A novel approach to evaluate software vulnerability prioritization. Journal of Systems and Software 86, 11 (2013), 2822–2840.
    [58]
    Shih-Kun Huang, Min-Hsiang Huang, Po-Yen Huang, Chung-Wei Lai, Han-Lin Lu, and Wai-Meng Leong. 2012. CRAX: Software crash analysis for automatic exploit generation by modeling attacks as symbolic continuations. In Proceedings of the 2012 IEEE 6th International Conference on Software Security and Reliability. 78–87.
    [59]
    Shih-Kun Huang, Min-Hsiang Huang, Po-Yen Huang, Han-Lin Lu, and Chung-Wei Lai. 2014. Software crash analysis for automatic exploit generation on binary programs. IEEE Transactions on Reliability 63, 1 (2014), 270–289.
    [60]
    Emanuele Iannone, Dario Di Nucci, Antonino Sabetta, and Andrea De Lucia. 2021. Toward automated exploit generation for known vulnerabilities in open-source libraries. In Proceedings of the 2021 IEEE/ACM 29th International Conference on Program Comprehension (ICPC’21). 396–400.
    [61]
    Jay Jacobs, Sasha Romanosky, Idris Adjerid, and Wade Baker. 2020. Improving vulnerability remediation through better exploit prediction. Journal of Cybersecurity 6, 1 (2020), tyaa015.
    [62]
    Jay Jacobs, Sasha Romanosky, Benjamin Edwards, Idris Adjerid, and Michael Roytman. 2021. Exploit prediction scoring system (EPSS). Digital Threats 2, 3 (July 2021), Article 20, 17 pages.
    [63]
    Yuning Jiang and Yacine Atif. 2021. A selective ensemble model for cognitive cybersecurity analysis. Journal of Network and Computer Applications 193 (2021), 103210.
    [64]
    Zhiyuan Jiang, Shuitao Gan, Adrian Herrera, Flavio Toffalini, Lucio Romerio, Chaojing Tang, Manuel Egele, Chao Zhang, and Mathias Payer. 2022. Evocatio: Conjuring bug capabilities from a single PoC. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS’22). ACM, New York, NY, USA, 1599–1613.
    [65]
    HyunChul Joh and Yashwant K. Malaiya. 2011. Defining and assessing quantitative security risk measures using vulnerability lifecycle and CVSS metrics. In Proceedings of the 2011 International Conference on Security and Management (SAM’11). 10–16.
    [66]
    Pontus Johnson, Robert Lagerström, Mathias Ekstedt, and Ulrik Franke. 2018. Can the common vulnerability scoring system be trusted? A Bayesian analysis. IEEE Transactions on Dependable and Secure Computing 15, 6 (2018), 1002–1015.
    [67]
    Hong Jin Kang, Truong Giang Nguyen, Bach Le, Corina S. Păsăreanu, and David Lo. 2022. Test mimicry to assess the exploitability of library vulnerabilities. In Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA’22). ACM, New York, NY, USA.
    [68]
    Barbara Ann Kitchenham, David Budgen, and Pearl Brereton. 2015. Evidence-Based Software Engineering and Systematic Reviews. Vol. 4. CRC Press.
    [69]
    Martijn Koster, Gary Illyes, Henner Zeller, and Lizzi Sassman. 2022. Robots Exclusion Protocol. Retrieved September 6, 2022 from https://www.rfc-editor.org/rfc/internet-drafts/draft-koster-rep-12.html
    [70]
    Igor Kotenko, Konstantin Izrailov, and Mikhail Buinevich. 2022. Static analysis of information systems for IoT cyber security: A survey of machine learning approaches. Sensors 22, 4 (2022), 1335.
    [71]
    Vadim Kotov and Fabio Massacci. 2013. Anatomy of exploit kits. In Engineering Secure Software and Systems, Jan Jürjens, Benjamin Livshits, and Riccardo Scandariato (Eds.). Springer, Berlin, Germany, 181–196.
    [72]
    Eduard Kovacs. 2016. OSVDB shut down permanently. Security Week (Web) (April2016). Retrieved April 11, 2022 from https://www.securityweek.com/osvdb-shut-down-permanently
    [73]
    J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics 33, 1 (1977), 159–174. http://www.jstor.org/stable/2529310
    [74]
    Arash Habibi Lashkari, Andi Fitriah A. Kadir, Laya Taheri, and Ali A. Ghorbani. 2018. Toward developing a systematic approach to generate benchmark Android malware datasets and classification. In Proceedings of the 2018 International Carnahan Conference on Security Technology (ICCST’18).
    [75]
    Triet Huynh Minh Le and M. Ali Babar. 2022. On the use of fine-grained vulnerable code statements for software vulnerability assessment models. In Proceedings of the 19th International Conference on Mining Software Repositories (MSR’22). ACM, New York, NY, USA.
    [76]
    Triet H. M. Le, Huaming Chen, and M. Ali Babar. 2022. A survey on data-driven software vulnerability assessment and prioritization. ACM Computing Surveys.Just Accepted.
    [77]
    Triet Huynh Minh Le, Bushra Sabir, and Muhammad Ali Babar. 2019. Automated software vulnerability assessment with concept drift. In Proceedings of the IEEE/ACM 16th International Conference on Mining Software Repositories (MSR’19). 371–382.
    [78]
    Yoochan Lee, Changwoo Min, and Byoungyoung Lee. 2021. ExpRace: Exploiting kernel races through raising interrupts. In Proceedings of the 30th USENIX Security Symposium (USENIX Security’21).
    [79]
    Qi Liu, Kaibin Bao, and Veit Hagenmeyer. 2022. Binary exploitation in industrial control systems: Past, present and future. IEEE Access 10 (2022), 48242–48273.
    [80]
    Qixu Liu and Yuqing Zhang. 2011. VRSS: A new system for rating and scoring vulnerabilities. Computer Communications 34, 3 (2011), 264–273. Special Issue of Computer Communications on Information and Future Communication Security.
    [81]
    Jian Luo, Kueiming Lo, and Haoran Qu. 2014. A software vulnerability rating approach based on the vulnerability database. Journal of Applied Mathematics 2014 (2014), 932397.
    [82]
    Peter Mell, Karen Scarfone, and Sasha Romanosky. 2006. Common vulnerability scoring system. IEEE Security & Privacy 4, 6 (2006), 85–89.
    [83]
    Peter Mell, Karen Scarfone, and Sasha Romansky. 2007. A Complete Guide to the Common Vulnerability Scoring System Version 2.0. Technical Report. Forum of Incident Response and Security Teams (FIRST). https://www.first.org/cvss/v2/cvss-v2-guide.pdf
    [84]
    Microsoft. 2022. Microsoft Exploitability Index (Website). Retrieved April 10, 2022 from https://www.microsoft.com/en-us/msrc/exploitability-index
    [85]
    Microsoft. 2022. Security Update Severity Rating System (Website). Retrieved September 30, 2022 from https://www.microsoft.com/en-us/msrc/security-update-severity-rating-system
    [86]
    Triet Huynh Minh Le, David Hin, Roland Croft, and M. Ali Babar. 2021. DeepCVA: Automated commit-level vulnerability assessment with deep multi-task learning. In Proceedings of the 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE’21). 717–729.
    [87]
    National Institute of Standards and Technology (NIST). 2022. Computer Security Resource Center (CSRC) Glossary. National Institute of Standards and Technology (NIST). https://csrc.nist.gov/glossary
    [88]
    Kartik Nayak, Daniel Marino, Petros Efstathopoulos, and Tudor Dumitraş. 2014. Some vulnerabilities are different than others. In Research in Attacks, Intrusions and Defenses, Angelos Stavrou, Herbert Bos, and Georgios Portokalidis (Eds.). Springer International Publishing, Cham, 426–446.
    [89]
    NIST. 2021. National Vulnerability Database. Retrieved March 30, 2024 from https://nvd.nist.gov/
    [90]
    Oxford English Dictionary (OED). 2022. Exploit, v. In OED Online. Oxford University Press. https://www.oed.com/view/Entry/66647
    [91]
    Marcus Pendleton, Richard Garcia-Lebron, Jin-Hee Cho, and Shouhuai Xu. 2016. A survey on systems security metrics. ACM Computing Surveys 49, 4 (Dec. 2016), Article 62, 35 pages.
    [92]
    Silvio Peroni and David Shotton. 2020. OpenCitations, an infrastructure organization for open scholarship. Quantitative Science Studies 1, 1 (2020), 428–444.
    [93]
    Anthony Peruma and Daniel Krutz. 2018. Understanding the relationship between quality and security: A large-scale analysis of Android applications. In Proceedings of the 2018 IEEE/ACM 1st International Workshop on Security Awareness from Design to Deployment (SEAD’18).
    [94]
    Kai Petersen, Robert Feldt, Shahid Mujtaba, and Michael Mattsson. 2008. Systematic mapping studies in software engineering. In Proceedings of the 12th International Conference on Evaluation and Assessment in Software Engineering (EASE’08). 1–10.
    [95]
    Henrik Plate, Serena Elisa Ponta, and Antonino Sabetta. 2015. Impact assessment for vulnerabilities in open-source software libraries. In Proceedings of the IEEE International Conference on Software Maintenance and Evolution (ICSME’15). 411–420.
    [96]
    Serena Elisa Ponta, Henrik Plate, and Antonino Sabetta. 2018. Beyond metadata: Code-centric and usage-based analysis of known vulnerabilities in open-source software. In Proceedings of the 2018 IEEE International Conference on Software Maintenance and Evolution (ICSME’18).
    [97]
    Serena Elisa Ponta, Henrik Plate, and Antonino Sabetta. 2020. Detection, assessment and mitigation of vulnerabilities in open source dependencies. Empirical Software Engineering 25, 5 (2020), 3175–3215.
    [98]
    Serena Elisa Ponta, Henrik Plate, Antonino Sabetta, Michele Bezzi, and Cédric Dangremont. 2019. A manually-curated dataset of fixes to vulnerabilities of open-source software. In Proceedings of the 2019 IEEE/ACM 16th International Conference on Mining Software Repositories (MSR’19).
    [99]
    Sasith M. Rajasooriya, Chris P. Tsokos, and Pubudu Kalpani Kaluarachchi. 2016. Stochastic modelling of vulnerability life cycle and security risk evaluation. Journal of Information Security 7, 4 (2016), 269–279.
    [100]
    Sasith M. Rajasooriya, Chris P. Tsokos, and Pubudu Kalpani Kaluarachchi. 2017. Cyber security: Nonlinear stochastic models for predicting the exploitability. Journal of Information Security 8, 2 (2017), 125–140.
    [101]
    Gavin Reid, Peter Mell, and Karen Scarfone. 2007. CVSS-SIG Version 2 History. Technical Report. Forum of Incident Response and Security Teams (FIRST). https://www.first.org/cvss/v2/history
    [102]
    Dusan Repel, Johannes Kinder, and Lorenzo Cavallaro. 2017. Modular synthesis of heap exploits. In Proceedings of the 2017 Workshop on Programming Languages and Analysis for Security (PLAS’17). ACM, New York, NY, USA, 25–35.
    [103]
    Yaman Roumani and Joseph Nwankpa. 2020. Examining exploitability risk of vulnerabilities: A hazard model. Communications of the Association for Information Systems 46, 1 (2020), 18.
    [104]
    Carl Sabottke, Octavian Suciu, and Tudor Dumitraş. 2015. Vulnerability disclosure in the age of social media: Exploiting Twitter for predicting real-world exploits. In Proceedings of the 24th USENIX Security Symposium (USENIX Security’15).
    [105]
    N. Amira M. Saffie, Nur’Amirah Mohd Shukor, and Khairul A. Rasmani. 2016. Fuzzy Delphi method: Issues and challenges. In Proceedings of the 2016 International Conference on Logistics, Informatics, and Service Sciences (LISS’16).
    [106]
    Karen Scarfone and Peter Mell. 2009. An analysis of CVSS version 2 vulnerability scoring. In Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement. 516–525.
    [107]
    Edward J. Schwartz, Thanassis Avgerinos, and David Brumley. 2011. Q: Exploit hardening made easy. In Proceedings of the 20th USENIX Security Symposium (USENIX Security’11).
    [108]
    SCImago. 2023. SCImago Journal & Country Rank [Portal]. Retrieved August 16, 2023 from https://www.scimagojr.com
    [109]
    SciTools Support. 2021. What Metrics Does Understand Have? Retrieved October 27, 2022 from https://support.scitools.com/support/solutions/articles/70000582223-what-metrics-does-understand-have-
    [110]
    Semantic Scholar. n.d. Semantic Scholar Academic Graph API. Retrieved July 18, 2023 from https://www.semanticscholar.org/product/api
    [111]
    Ravi Sen and Gregory R. Heim. 2016. Managing enterprise risks of technological systems: An exploratory empirical analysis of vulnerability characteristics as drivers of exploit publication. Decision Sciences 47, 6 (2016), 1073–1102.
    [112]
    Yan Shoshitaishvili, Ruoyu Wang, Christopher Salls, Nick Stephens, Mario Polino, Andrew Dutcher, John Grosen, Siji Feng, Christophe Hauser, Christopher Kruegel, and Giovanni Vigna. 2016. SOK: (State of) the art of war: Offensive techniques in binary analysis. In Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP’16). 138–157.
    [113]
    Eva Sotos Martínez, Nora M. Villanueva, and Lilian Adkinson Orellana. 2021. A survey on the state of the art of vulnerability assessment techniques. In 14th International Conference on Computational Intelligence in Security for Information Systems and 12th International Conference on European Transnational Educational (CISIS 2021 and ICEUTE 2021), Juan José Gude Prego, José Gaviria de la Puerta, Pablo García Bringas, Héctor Quintián, and Emilio Corchado (Eds.). Springer International Publishing, Cham, 203–213.
    [114]
    Georgios Spanos and Lefteris Angelis. 2018. A multi-target approach to estimate software vulnerability characteristics and severity scores. Journal of Systems and Software 146 (2018), 152–166.
    [115]
    Georgios Spanos, Angeliki Sioziou, and Lefteris Angelis. 2013. WIVSS: A new methodology for scoring information systems vulnerabilities. In Proceedings of the 17th Panhellenic Conference on Informatics (PCI’13). ACM, New York, NY, USA.
    [116]
    Octavian Suciu, Connor Nelson, Zhuoer Lyu, Tiffany Bao, and Tudor Dumitras. 2022. Expected exploitability: Predicting the development of functional vulnerability exploits. In Proceedings of the 31st USENIX Security Symposium. 377–394.
    [117]
    Dimitrios Toloudis, Georgios Spanos, and Lefteris Angelis. 2016. Associating the severity of vulnerabilities with their description. In Advanced Information Systems Engineering Workshops, John Krogstie, Haralambos Mouratidis, and Jianwen Su (Eds.). Springer International Publishing, Cham, 231–242.
    [118]
    TrendMicro. n.d. Exploit. Retrieved May 17, 2023 from https://www.trendmicro.com/vinfo/us/security/definition/exploit
    [119]
    Shubham Tripathi, Gustavo Grieco, and Sanjay Rawat. 2017. Exniffer: Learning to prioritize crashes by assessing the exploitability from memory dump. In Proceedings of the 2017 24th Asia-Pacific Software Engineering Conference (APSEC’17). 239–248.
    [120]
    Max van Haastrecht, Injy Sarhan, Bilge Yigit Ozkan, Matthieu Brinkhuis, and Marco Spruit. 2021. SYMBALS: A systematic review methodology blending active learning and snowballing. Frontiers in Research Metrics and Analytics 6 (2021), 685591.
    [121]
    Debbie Walkowski. 2021. F5: Threats, Vulnerabilities, Exploits and Their Relationship to Risk. Retrieved May 17, 2023 from https://www.f5.com/labs/learning-center/threats-vulnerabilities-exploits-and-their-relationship-to-risk
    [122]
    Yan Wang, Wei Wu, Chao Zhang, Xinyu Xing, Xiaorui Gong, and Wei Zou. 2019. From proof-of-concept to exploitable. Cybersecurity 2, 1 (2019), 1–25.
    [123]
    Yulong Wang, Fangchun Yang, and Qibo Sun. 2008. Measuring network vulnerability based on pathology. In Proceedings of the 2008 9th International Conference on Web-Age Information Management. 640–646.
    [124]
    Yan Wang, Chao Zhang, Xiaobo Xiang, Zixuan Zhao, Wenjie Li, Xiaorui Gong, Bingchang Liu, Kaixiang Chen, and Wei Zou. 2018. Revery: From proof-of-concept to exploitable. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS’18). ACM, New York, NY, USA, 1914–1927.
    [125]
    Thomas Wood, Sewwandi Perera, Shi Yan, Lin Padgha, and Alistair Moffat. 2022. CORE: Conference Details. Retrieved August 16, 2023 from https://www.core.edu.au/conference-portal#h.p_ID_44
    [126]
    Qiushi Wu, Yue Xiao, Xiaojing Liao, and Kangjie Lu. 2022. OS-aware vulnerability prioritization via differential severity analysis. In Proceedings of the 31st USENIX Security Symposium (USENIX Security’22).
    [127]
    Wei Wu, Yueqi Chen, Xinyu Xing, and Wei Zou. 2019. KEPLER: Facilitating control-flow hijacking primitive evaluation for Linux kernel vulnerabilities. In Proceedings of the 28th USENIX Security Symposium (USENIX Security’19).
    [128]
    Wei Wu, Yueqi Chen, Jun Xu, Xinyu Xing, Xiaorui Gong, and Wei Zou. 2018. FUZE: Towards facilitating exploit generation for kernel use-after-free vulnerabilities. In Proceedings of the 27th USENIX Security Symposium (USENIX Security’18).
    [129]
    Chaowei Xiao, Armin Sarabi, Yang Liu, Bo Li, Mingyan Liu, and Tudor Dumitras. 2018. From patching delays to infection symptoms: Using risk profiles for an early discovery of vulnerabilities exploited in the wild. In Proceedings of the 27th USENIX Security Symposium (USENIX Security’18).
    [130]
    Yasuhiro Yamamoto, Daisuke Miyamoto, and Masaya Nakayama. 2015. Text-mining approach for estimating vulnerability score. In Proceedings of the 2015 4th International Workshop on Building Analysis Datasets and Gathering Experience Returns for Security (BADGERS’15).
    [131]
    Guanhua Yan, Junchen Lu, Zhan Shu, and Yunus Kucuk. 2017. ExploitMeter: Combining fuzzing with machine learning for automated evaluation of software exploitability. In Proceedings of the 2017 IEEE Symposium on Privacy-Aware Computing (PAC’17). 164–175.
    [132]
    Heedong Yang, Seungsoo Park, Kangbin Yim, and Manhee Lee. 2020. Better not to use vulnerability’s reference for exploitability prediction. Applied Sciences 10, 7 (2020), 2555.
    [133]
    Fadi Yilmaz, Meera Sridhar, and Wontae Choi. 2020. Guide me to exploit: Assisted ROP exploit generation for actionscript virtual machine. In Proceedings of the Annual Computer Security Applications Conference. ACM, New York, NY, USA, 386–400.
    [134]
    Awad Younis, Yashwant Malaiya, Charles Anderson, and Indrajit Ray. 2016. To fear or not to fear that is the question: Code characteristics of a vulnerable function with an existing exploit. In Proceedings of the 6th ACM Conference on Data and Application Security and Privacy. ACM, New York, NY, USA, 97–104.
    [135]
    Awad Younis, Yashwant K. Malaiya, and Indrajit Ray. 2016. Assessing vulnerability exploitability risk using software properties. Software Quality Journal 24, 1 (2016), 159–202.
    [136]
    Awad A. Younis and Yashwant K. Malaiya. 2015. Comparing and evaluating CVSS base metrics and Microsoft rating system. In Proceedings of the 2015 IEEE International Conference on Software Quality, Reliability, and Security. 252–261.
    [137]
    Awad A. Younis, Yashwant K. Malaiya, and Indrajit Ray. 2014. Using attack surface entry points and reachability analysis to assess the risk of software vulnerability exploitability. In Proceedings of the IEEE 15th International Symposium on High-Assurance Systems Engineering. 1–8.
    [138]
    Zhe Yu, Nicholas A. Kraft, and Tim Menzies. 2018. Finding better active learners for faster literature reviews. Empirical Software Engineering 23 (2018), 3161–3186.
    [139]
    Zhe Yu and Tim Menzies. 2019. FAST2: An intelligent assistant for finding relevant papers. Expert Systems with Applications 120 (2019), 57–71.
    [140]
    Kyle Zeng, Yueqi Chen, Haehyun Cho, Xinyu Xing, Adam Doupé, Yan Shoshitaishvili, and Tiffany Bao. 2022. Playing for K(H)eaps: Understanding and improving Linux kernel exploit reliability. In Proceedings of the 31st USENIX Security Symposium (USENIX Security’22).
    [141]
    Fengli Zhang and Qinghua Li. 2020. Dynamic risk-aware patch scheduling. In Proceedings of the 2020 IEEE Conference on Communications and Network Security (CNS’20). 1–9.
    [142]
    Zixuan Zhao, Yan Wang, and Xiaorui Gong. 2020. HAEPG: An automatic multi-hop exploitation generation framework. In Detection of Intrusions and Malware, and Vulnerability Assessment. Springer, Cham, 89–109.
    [143]
    Yajin Zhou, Zhi Wang, Wu Zhou, and Xuxian Jiang. 2012. Hey, you, get off of my market: Detecting malicious apps in official and alternative Android markets. In Proceedings of the 19th Annual Network and Distributed System Security Symposium (NDSS’12).
    [144]
    Deqing Zou, Ju Yang, Zhen Li, Hai Jin, and Xiaojing Ma. 2019. AutoCVSS: An approach for automatic assessment of vulnerability severity based on attack process. In Proceedings of the International Conference on Green, Pervasive, and Cloud Computing. 238–253.

    Cited By

    • (2024) Leveraging Digital Twin Technology for Enhanced Cybersecurity in Cyber–Physical Production Systems. Future Internet 16, 4 (2024), 134. DOI: 10.3390/fi16040134. Online publication date: 17-Apr-2024.

    Published In

    ACM Computing Surveys, Volume 56, Issue 8
    August 2024, 963 pages
    ISSN: 0360-0300
    EISSN: 1557-7341
    DOI: 10.1145/3613627
    Editors: David Atienza, Michela Milano

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Received: 28 November 2022
    Revised: 22 December 2023
    Accepted: 31 January 2024
    Online AM: 20 March 2024
    Published: 26 April 2024
    Published in CSUR Volume 56, Issue 8

    Author Tags

    1. Exploitability
    2. software vulnerability

    Qualifiers

    • Survey

    Funding Sources

    • National Science Foundation
