DOI:10.1145/2338965.2336789

Residual investigation: predictive and precise bug detection

Published: 15 July 2012

Abstract

We introduce the concept of “residual investigation” for program analysis. A residual investigation is a dynamic check installed as a result of running a static analysis that reports a possible program error. The purpose is to observe conditions that indicate whether the statically predicted program fault is likely to be realizable and relevant. The key feature of a residual investigation is that it has to be much more precise (i.e., with fewer false warnings) than the static analysis alone, yet significantly more general (i.e., reporting more errors) than the dynamic tests in the program's test suite pertinent to the statically reported error. That is, good residual investigations encode dynamic conditions that, when taken in conjunction with the static error report, increase confidence in the existence of an error, as well as its severity, without needing to directly observe a fault resulting from the error.
We enhance the static analyzer FindBugs with several residual investigations, appropriately tuned to the static error patterns in FindBugs, and apply it to 7 large open-source systems and their native test suites. The result is an analysis with a low occurrence of false warnings (“false positives”) while reporting several actual errors that would not have been detected by mere execution of a program's test suite.
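To make the idea concrete, here is a minimal hypothetical sketch (not the paper's implementation) of a residual investigation for FindBugs' "class overrides equals() but not hashCode()" pattern. The static warning alone is often irrelevant; the dynamic check observes whether an instance of the flagged class actually flows into a hash-based collection, the condition under which the latent error can bite. The class names `Point`, `ResidualSketch`, and the `monitoredAdd` helper are illustrative inventions; in the paper's setting, such a check would be installed by instrumentation rather than written by hand.

```java
import java.util.HashSet;
import java.util.Set;

// A class FindBugs would flag: it overrides equals() but not hashCode().
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    @Override public boolean equals(Object o) {
        return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
    }
    // hashCode() deliberately not overridden: this triggers the static warning.
}

public class ResidualSketch {
    // In practice this flag would be set by injected instrumentation
    // (e.g., advice woven around HashSet.add); here we set it by hand.
    static boolean flaggedClassUsedAsHashKey = false;

    // Stand-in for an instrumented collection operation: record the dynamic
    // condition ("flagged class used as a hash key") and then delegate.
    static <T> void monitoredAdd(Set<T> set, T elem) {
        if (elem instanceof Point) {
            flaggedClassUsedAsHashKey = true; // dynamic evidence: warning is relevant
        }
        set.add(elem);
    }

    public static void main(String[] args) {
        Set<Point> seen = new HashSet<>();
        monitoredAdd(seen, new Point(1, 2));
        // A second, equal Point almost certainly lands in a different bucket
        // (distinct identity hash codes), silently duplicating the entry.
        monitoredAdd(seen, new Point(1, 2));
        System.out.println("residual check fired: " + flaggedClassUsedAsHashKey);
    }
}
```

Note the key property: the check fires without the program's test suite having to directly observe a fault (a lookup miss), yet it never fires for flagged classes that are never used as hash keys, which is where the precision over the raw static report comes from.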


Published In

ISSTA 2012: Proceedings of the 2012 International Symposium on Software Testing and Analysis
July 2012
341 pages
ISBN:9781450314541
DOI:10.1145/2338965

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 58 of 213 submissions (27%)

