
Comparing Vulnerability Severity and Exploits Using Case-Control Studies

Published: 15 August 2014

Abstract

(U.S.) Rule-based policies for mitigating software risk suggest using the CVSS score to measure the risk of an individual vulnerability and act accordingly. A key issue is whether the ‘danger’ score actually matches the risk of exploitation in the wild, and if and how such a score could be improved. To address this question, we propose using a case-control study methodology similar to the procedure used to link lung cancer and smoking in the 1950s. A case-control study allows the researcher to draw conclusions on the relation between some risk factor (e.g., smoking) and an effect (e.g., cancer) by looking backward at the cases (e.g., patients) and comparing them with controls (e.g., randomly selected patients with similar characteristics). The methodology also allows us to quantify the risk reduction achievable by acting on the risk factor. We illustrate the methodology by using publicly available data on vulnerabilities, exploits, and exploits in the wild to (1) evaluate the performance of the current risk factor in the industry, the CVSS base score, and (2) determine whether it can be improved by considering additional factors such as the existence of a proof-of-concept exploit or of an exploit in the black markets. Our analysis reveals that (a) fixing a vulnerability just because it was assigned a high CVSS score is equivalent to randomly picking vulnerabilities to fix; (b) the existence of a proof-of-concept exploit is a significantly better risk factor; and (c) fixing in response to exploit presence in the black markets yields the largest risk reduction.
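At its core, the case-control design described above compares the odds of exposure to a risk factor among cases against the odds among controls, summarized by an odds ratio. The following is a minimal Python sketch of that computation; the function name and all counts are purely illustrative and are not taken from the paper's data.

```python
# Sketch of the core case-control statistic: the odds ratio (OR) for a
# 2x2 table. Counts here are hypothetical, for illustration only.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio for a 2x2 case-control table.

    cases    = vulnerabilities exploited in the wild
    controls = comparable vulnerabilities with no known exploitation
    exposure = the risk factor under test, e.g. "high CVSS base score"
               or "a proof-of-concept exploit exists"
    """
    odds_in_cases = exposed_cases / unexposed_cases        # odds of exposure among cases
    odds_in_controls = exposed_controls / unexposed_controls  # odds of exposure among controls
    return odds_in_cases / odds_in_controls

# Hypothetical counts: 80 of 100 exploited vulnerabilities carried the risk
# factor, versus 30 of 100 matched controls.
or_value = odds_ratio(80, 20, 30, 70)
print(round(or_value, 2))  # prints 9.33; an OR well above 1 suggests the factor is informative
```

An odds ratio near 1 means the factor does not separate exploited from non-exploited vulnerabilities, which is the paper's finding (a) for the CVSS base score alone.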

Supplementary Material

a1-allodi-apndx.pdf (allodi.zip)
Supplemental appendix file for "Comparing Vulnerability Severity and Exploits Using Case-Control Studies"

Published In

ACM Transactions on Information and System Security  Volume 17, Issue 1
August 2014
118 pages
ISSN:1094-9224
EISSN:1557-7406
DOI:10.1145/2660572
Editor: Gene Tsudik
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 15 August 2014
Accepted: 01 May 2014
Revised: 01 February 2014
Received: 01 September 2013
Published in TISSEC Volume 17, Issue 1

Author Tags

  1. CVSS
  2. Software vulnerability
  3. compliance
  4. exploitation
  5. patching

Qualifiers

  • Research-article
  • Research
  • Refereed

Cited By

  • (2024) Early and Realistic Exploitability Prediction of Just-Disclosed Software Vulnerabilities: How Reliable Can It Be? ACM Transactions on Software Engineering and Methodology 33, 6, 1-41. DOI: 10.1145/3654443. Online publication date: 27-Jun-2024.
  • (2024) A Survey on Software Vulnerability Exploitability Assessment. ACM Computing Surveys 56, 8, 1-41. DOI: 10.1145/3648610. Online publication date: 26-Apr-2024.
  • (2024) Cognition in Social Engineering Empirical Research: A Systematic Literature Review. ACM Transactions on Computer-Human Interaction 31, 2, 1-55. DOI: 10.1145/3635149. Online publication date: 29-Jan-2024.
  • (2024) A Case-Control Study to Measure Behavioral Risks of Malware Encounters in Organizations. IEEE Transactions on Information Forensics and Security 19, 9419-9432. DOI: 10.1109/TIFS.2024.3456960. Online publication date: 2024.
  • (2024) ILLATION: Improving Vulnerability Risk Prioritization by Learning From Network. IEEE Transactions on Dependable and Secure Computing 21, 4, 1890-1901. DOI: 10.1109/TDSC.2023.3294433. Online publication date: 1-Jul-2024.
  • (2024) Patchy Performance? Uncovering the Vulnerability Management Practices of IoT-Centric Vendors. 2024 IEEE Symposium on Security and Privacy (SP), 1198-1216. DOI: 10.1109/SP54263.2024.00154. Online publication date: 19-May-2024.
  • (2024) The Holy Grail of Vulnerability Predictions. IEEE Security and Privacy 22, 1, 4-6. DOI: 10.1109/MSEC.2023.3333936. Online publication date: 1-Jan-2024.
  • (2024) A risk estimation study of native code vulnerabilities in Android applications. Journal of Cybersecurity 10, 1. DOI: 10.1093/cybsec/tyae015. Online publication date: 29-Aug-2024.
  • (2024) A robust statistical framework for cyber-vulnerability prioritisation under partial information in threat intelligence. Expert Systems with Applications 255, 124572. DOI: 10.1016/j.eswa.2024.124572. Online publication date: Dec-2024.
  • (2024) Advancing Software Vulnerability Scoring: A Statistical Approach with Machine Learning Techniques and GridSearchCV Parameter Tuning. SN Computer Science 5, 5. DOI: 10.1007/s42979-024-02942-x. Online publication date: 28-May-2024.
