Symbolic execution algorithms for test generation
Publisher:
  • University of California at Los Angeles
  • Computer Science Department, 405 Hilgard Avenue, Los Angeles, CA
  • United States
ISBN:978-1-124-00942-1
Order Number:AAI3410346
Pages:135
Abstract

Correctness of software has become increasingly important and difficult to achieve as programs grow more complicated and have more impact on our day-to-day lives. There are two approaches to ensuring the correctness of software. Testing is the approach widely used in industry; today, testing is tedious, expensive, and prone to leaving errors undetected. The other approach is to verify the correctness, or guarantee the proper behavior, of software through static analysis and model checking. However, these techniques do not scale well, are restricted to simple properties, or overwhelm the user with many false alarms. In recent years, testing and verification have come closer together. Directed testing, or concolic testing, generates tests from constraints collected through both symbolic and concrete executions. However, the basic concolic execution algorithms do not scale to larger programs and cannot identify or seek out many types of bugs. This dissertation extends the basic concolic execution algorithm to scale to larger programs and more complex properties.
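To make the concolic loop concrete, here is a minimal illustrative sketch, not the dissertation's algorithm: real tools such as SPLAT instrument compiled C programs and query an SMT solver, whereas here the toy `program`, its named branch predicates, and the brute-force `solve` over a small integer range are all hypothetical stand-ins. The core idea is the same: run the program concretely, record the path constraints of each branch taken, then negate a branch and solve for a new input that drives execution down an unexplored path.

```python
def program(x, trace):
    """Toy program under test; records each branch outcome as (predicate, taken)."""
    trace.append(("x > 10", x > 10))
    if x > 10:
        trace.append(("x < 20", x < 20))
        if x < 20:
            return "bug"      # the defect-triggering path testing should uncover
        return "big"
    return "small"

def solve(constraints):
    """Stand-in for an SMT solver: brute-force an int satisfying (pred, want) pairs."""
    preds = {"x > 10": lambda x: x > 10, "x < 20": lambda x: x < 20}
    for x in range(-50, 50):
        if all(preds[p](x) == want for p, want in constraints):
            return x
    return None  # path condition unsatisfiable in this domain

def concolic(seed):
    """Explore paths by negating each recorded branch and solving for new inputs."""
    worklist, seen, results = [seed], set(), {}
    while worklist:
        x = worklist.pop()
        trace = []
        results[x] = program(x, trace)
        for i in range(len(trace)):
            # Keep the prefix of the path condition, flip branch i.
            prefix = trace[:i] + [(trace[i][0], not trace[i][1])]
            key = tuple(prefix)
            if key in seen:
                continue
            seen.add(key)
            new_input = solve(prefix)
            if new_input is not None and new_input not in results:
                worklist.append(new_input)
    return results

outcomes = concolic(0)
print(outcomes)  # maps each generated input to the path it exercised
```

Starting from the arbitrary seed `0`, the loop discovers inputs reaching all three paths, including the buggy one, which is exactly the systematic-coverage property the dissertation's extensions build on.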

Specifically, this dissertation presents four symbolic execution algorithms that automatically and systematically generate tests. These algorithms reduce the input space of automated testing and find different classes of errors. Symbolic grammars are introduced to generate orders of magnitude fewer input strings without sacrificing coverage. Symmetry reduces redundant tests by showing that some parts of the input are independent of other parts. Ideas from liveness allow test generation to find errors that lead to non-termination. Abstraction allows larger inputs to be generated that lead to memory safety violations, and thus stops security holes before they happen.

This work has resulted in SPLAT, a tool that generates tests for C programs. SPLAT was applied to a wide variety of open-source programs to compare these techniques with conventional industry practice and state-of-the-art research. Preliminary studies show that these ideas are effective at finding new bugs more quickly and can explore more of the program than other approaches.

Contributors
  • Max Planck Institute for Software Systems
  • University of California, Los Angeles
