Towards a Technique to Detect Weaknesses in C Programs

Published: 05 October 2021
Abstract

    Several critical systems, such as Linux, are implemented in the C language, and a security flaw in these systems may impact a vast number of users. Despite efforts to provide security support, these systems still contain weaknesses that lead to vulnerable code. In fact, the number of reported vulnerabilities has increased in recent years: more than 18,000 vulnerabilities were reported to the National Vulnerability Database (NVD) in 2020. Static analysis tools, such as Flawfinder and Cppcheck, can help with this problem by reporting some kinds of weaknesses. However, they present a high rate of false alarms, i.e., issues reported in a program when no problem actually exists. We present a technique that combines static analysis with software testing to detect weaknesses introduced in the code during the earlier development stages of C programs. The technique is implemented in a framework named WTT. To assess our technique's relevance, we evaluated 103 warnings from 6 different projects and detected 22 weaknesses of three different kinds: Buffer Overflow, Format String, and Integer Overflow. The results show evidence that our technique may help developers anticipate weakness detection in C programs, reducing the occurrence of vulnerabilities in operational versions.
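    The WTT framework itself is not reproduced here, but the kind of weakness the abstract names can be made concrete. The sketch below (illustrative only; the function names are hypothetical and not from the paper) shows the classic Buffer Overflow pattern (CWE-120) that static analyzers such as Flawfinder flag, alongside a bounded rewrite that removes the overflow:

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Weak pattern: strcpy() copies without checking the destination size,
       so any input longer than 15 bytes overflows buf. Flawfinder reports
       strcpy() hits for exactly this reason (CWE-120). */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);               /* no bounds check */
        printf("hello, %s\n", buf);
    }

    /* Hardened version: snprintf() never writes more than outsz bytes and
       always NUL-terminates, so over-long input is truncated, not overflowed. */
    int greet_safe(char *out, size_t outsz, const char *name) {
        return snprintf(out, outsz, "hello, %s", name);
    }

    int main(void) {
        char out[16];

        greet_unsafe("ok");              /* safe only because the input is short */

        int n = greet_safe(out, sizeof out, "world");
        assert(n == 12);                 /* "hello, world" fits in 16 bytes */
        assert(strcmp(out, "hello, world") == 0);

        /* Over-long input is truncated instead of smashing the stack. */
        greet_safe(out, sizeof out, "aaaaaaaaaaaaaaaaaaaaaaaa");
        assert(strlen(out) == 15);       /* 15 chars + terminating NUL */
        return 0;
    }
    ```

    A pure static analyzer reports both call sites that use risky functions; the paper's contribution is to follow such warnings with testing so that only reachable, exploitable cases (like `greet_unsafe` fed attacker-controlled input) survive as true positives.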



    Published In

    SBES '21: Proceedings of the XXXV Brazilian Symposium on Software Engineering
    September 2021
    473 pages
    ISBN:9781450390613
    DOI:10.1145/3474624

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. C Programs
    2. Security
    3. Software Testing
    4. Weakness

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    SBES '21: Brazilian Symposium on Software Engineering
    September 27 - October 1, 2021
    Joinville, Brazil

    Acceptance Rates

    Overall Acceptance Rate 147 of 427 submissions, 34%
