Reusing Predicate Precision in Value Analysis

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13274)

Included in the conference series: Integrated Formal Methods (IFM 2022)

Abstract

Software verification allows one to examine the reliability of software. In this process, analyses exchange information to become more effective or more efficient, or to eliminate false results and thus increase trust in the analysis result. One type of information that analyses provide is the precision, which describes an analysis's degree of abstraction (tracked predicates, etc.). So far, analyses have mainly reused their own precision to reverify a changed program. In contrast, we aim to reuse the precision of a predicate analysis within a value analysis. To this end, we propose 13 options for converting a predicate precision into a precision for value analysis. The options compute precisions with various degrees of abstraction, and we evaluate all of them broadly on three applications (cooperative verification, result validation, and regression verification). In addition, we compare our options against using the coarsest and the finest precision, as well as against a state-of-the-art approach for each application. Our evaluation reveals that coarser precisions work better for proof detection, while finer precisions perform better for alarm detection. Moreover, reusing a predicate precision in value analysis can be beneficial in cooperative verification and works well for validating and reverifying programs without property violations.
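
To make the conversion concrete, the following minimal C sketch uses hypothetical data structures (the tool used in the paper, CPAchecker, is implemented in Java, and its actual API differs). It illustrates one plausible conversion option in the spirit of the paper: at each program location, the value analysis tracks exactly those variables that occur in some predicate of the predicate precision at that location. The paper's 13 options vary such choices to obtain coarser or finer precisions.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical representation: a predicate is reduced to the set of
     * variables occurring in it; the predicate precision at one location
     * is a set of such predicates. */
    struct predicate { const char *vars[4]; int nvars; };
    struct loc_precision { struct predicate preds[4]; int npreds; };

    /* Conversion: at this location, track every variable occurring in some
     * predicate. Writes the variables to out and returns their number. */
    static int to_value_precision(const struct loc_precision *pp,
                                  const char *out[], int max) {
      int n = 0;
      for (int i = 0; i < pp->npreds; i++)
        for (int j = 0; j < pp->preds[i].nvars; j++) {
          int seen = 0;
          for (int k = 0; k < n; k++)
            if (strcmp(out[k], pp->preds[i].vars[j]) == 0) seen = 1;
          if (!seen && n < max) out[n++] = pp->preds[i].vars[j];
        }
      return n;
    }

    int main(void) {
      /* predicate precision {x > y, y == 0} at one location */
      struct loc_precision pp = { { { {"x", "y"}, 2 }, { {"y"}, 1 } }, 2 };
      const char *tracked[8];
      int n = to_value_precision(&pp, tracked, 8);
      for (int i = 0; i < n; i++)
        printf("track %s\n", tracked[i]);   /* prints: track x, track y */
      return 0;
    }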

This work was funded by the Hessian LOEWE initiative within the Software-Factory 4.0 project.


Data Availability Statement

All experimental data is made publicly available in a replication package [69].

Notes

  1. Our implementation supports C programs.

  2. Many safety properties can be encoded by the unreachability of an error location [61]; a minimal example is sketched after these notes.

  3. In our scenarios, the predicates have been used for the (incomplete) verification of the same or a previous version of the program and are therefore likely to be relevant.

  4. https://cpachecker.sosy-lab.org/

  5. Configurations \(\texttt{config}=(\bot,\bot,\bot,\bot,\bot)\), \(\texttt{config}=(\top,\cdot,\bot,\bot,\cdot)\), \(\texttt{config}=(\top,\cdot,\top,\bot,\cdot)\), and \(\texttt{config}=(\top,\cdot,\top,\top,\cdot)\), where \(\cdot\) is either \(\top\) (true) or \(\bot\) (false).

  6. https://github.com/sosy-lab/sv-benchmarks/

  7. https://www.sosy-lab.org/research/cpa-reuse/supplementary-archive.zip

  8. Since we did not retrieve the temporary predicate precision, we checked that \(\varPi_\mathcal{V}^0 \ne \emptyset\) if the value analysis with configuration no adapt (\((\bot,\bot,\bot,\bot,\bot)\)) reports a result.

  9. We use a CPU time limit of 900 s for all unsolved programs.

  10. We checked that the predicate analysis refined at least once and that \(\varPi_\mathcal{V}^0 \ne \emptyset\) if the value analysis with configuration no adapt (\((\bot,\bot,\bot,\bot,\bot)\)) reports a result.

  11. We checked that the predicate analysis refined at least once and that \(\varPi_\mathcal{V}^0 \ne \emptyset\) if the value analysis with configuration no adapt (\((\bot,\bot,\bot,\bot,\bot)\)) reports a result.
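
As a concrete illustration of note 2, the following minimal C sketch (hypothetical; it follows the reach_error convention used by the SV-COMP benchmark programs referenced in note 6) encodes a safety property as the unreachability of an error location. The program is safe if and only if the call to reach_error is unreachable:

    extern void reach_error(void);   /* the error location */
    extern int nondet_int(void);     /* models arbitrary input */

    int main(void) {
      int x = nondet_int();
      if (x > 0 && x < 0)   /* infeasible condition */
        reach_error();      /* safe: this call is unreachable */
      return 0;
    }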

References

  1. Ádám, Z., Sallai, G., Hajdu, Á.: Gazer-Theta: LLVM-based verifier portfolio with BMC/CEGAR (Competition Contribution). In: Groote, J.F., Larsen, K.G. (eds.) TACAS 2021. LNCS, vol. 12652, pp. 433–437. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72013-1_27

  2. Afzal, M., et al.: VeriAbs: Verification by abstraction and test generation. In: ASE, pp. 1138–1141. IEEE (2019). https://doi.org/10.1109/ASE.2019.00121

  3. Albarghouthi, A., Gurfinkel, A., Chechik, M.: From under-approximations to over-approximations and back. In: Flanagan, C., König, B. (eds.) TACAS 2012. LNCS, vol. 7214, pp. 157–172. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-28756-5_12

  4. Albert, E., Puebla, G., Hermenegildo, M.: Abstraction-carrying code. In: Baader, F., Voronkov, A. (eds.) LPAR 2005. LNCS (LNAI), vol. 3452, pp. 380–397. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-32275-7_25

  5. Alhawi, O.M., Rocha, H., Gadelha, M.R., Cordeiro, L.C., de Lima Filho, E.B.: Verification and refutation of C programs based on k-induction and invariant inference. STTT 23(2), 115–135 (2021). https://doi.org/10.1007/s10009-020-00564-1

  6. Amme, W., Möller, M., Adler, P.: Data flow analysis as a general concept for the transport of verifiable program annotations. Electron. Notes Theor. Comput. Sci. 176(3), 97–108 (2007). https://doi.org/10.1016/j.entcs.2006.06.019

  7. Aquino, A., Bianchi, F.A., Chen, M., Denaro, G., Pezzè, M.: Reusing constraint proofs in program analysis. In: ISSTA, pp. 305–315. ACM (2015). https://doi.org/10.1145/2771783.2771802

  8. Aquino, A., Denaro, G., Pezzè, M.: Heuristically matching solution spaces of arithmetic formulas to efficiently reuse solutions. In: ICSE, pp. 427–437. IEEE (2017). https://doi.org/10.1109/ICSE.2017.46

  9. Arzt, S., Bodden, E.: Reviser: Efficiently updating IDE-/IFDS-based data-flow analyses in response to incremental program changes. In: ICSE, pp. 288–298. ACM (2014). https://doi.org/10.1145/2568225.2568243

  10. Barthe, G., Crespo, J.M., Kunz, C.: Relational verification using product programs. In: Butler, M., Schulte, W. (eds.) FM 2011. LNCS, vol. 6664, pp. 200–214. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21437-0_17

  11. Beckman, N.E., Nori, A.V., Rajamani, S.K., Simmons, R.J.: Proofs from tests. In: ISSTA, pp. 3–14. ACM (2008). https://doi.org/10.1145/1390630.1390634

  12. Besson, F., Jensen, T.P., Pichardie, D.: Proof-carrying code from certified abstract interpretation and fixpoint compression. Theor. Comput. Sci. 364(3), 273–291 (2006). https://doi.org/10.1016/j.tcs.2006.08.012

  13. Beyer, D.: Software verification: 10th comparative evaluation (SV-COMP 2021). In: Groote, J.F., Larsen, K.G. (eds.) TACAS 2021. LNCS, vol. 12652, pp. 401–422. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72013-1_24

  14. Beyer, D., Dangl, M.: Strategy selection for software verification based on boolean features. In: Margaria, T., Steffen, B. (eds.) ISoLA 2018. LNCS, vol. 11245, pp. 144–159. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-03421-4_11

  15. Beyer, D., Dangl, M., Dietsch, D., Heizmann, M.: Correctness witnesses: exchanging verification results between verifiers. In: FSE, pp. 326–337. ACM (2016). https://doi.org/10.1145/2950290.2950351

  16. Beyer, D., Dangl, M., Dietsch, D., Heizmann, M., Stahlbauer, A.: Witness validation and stepwise testification across software verifiers. In: FSE, pp. 721–733. ACM (2015). https://doi.org/10.1145/2786805.2786867

  17. Beyer, D., Dangl, M., Lemberger, T., Tautschnig, M.: Tests from witnesses. In: Dubois, C., Wolff, B. (eds.) TAP 2018. LNCS, vol. 10889, pp. 3–23. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-92994-1_1

  18. Beyer, D., Dangl, M., Wendler, P.: Boosting k-induction with continuously-refined invariants. In: Kroening, D., Păsăreanu, C.S. (eds.) CAV 2015. LNCS, vol. 9206, pp. 622–640. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-21690-4_42

  19. Beyer, D., Friedberger, K.: Violation witnesses and result validation for multi-threaded programs. In: Margaria, T., Steffen, B. (eds.) ISoLA 2020. LNCS, vol. 12476, pp. 449–470. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61362-4_26

  20. Beyer, D., Henzinger, T.A., Keremoglu, M.E., Wendler, P.: Conditional model checking: A technique to pass information between verifiers. In: FSE, pp. 57:1–57:11. ACM (2012). https://doi.org/10.1145/2393596.2393664

  21. Beyer, D., Henzinger, T.A., Théoduloz, G.: Program analysis with dynamic precision adjustment. In: ASE, pp. 29–38. IEEE (2008). https://doi.org/10.1109/ASE.2008.13

  22. Beyer, D., Holzer, A., Tautschnig, M., Veith, H.: Information reuse for multi-goal reachability analyses. In: Felleisen, M., Gardner, P. (eds.) ESOP 2013. LNCS, vol. 7792, pp. 472–491. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-37036-6_26

  23. Beyer, D., Jakobs, M.-C.: FRed: Conditional model checking via reducers and folders. In: de Boer, F., Cerone, A. (eds.) SEFM 2020. LNCS, vol. 12310, pp. 113–132. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58768-0_7

  24. Beyer, D., Jakobs, M.-C.: Cooperative verifier-based testing with CoVeriTest. STTT 23(3), 313–333 (2021). https://doi.org/10.1007/s10009-020-00587-8

  25. Beyer, D., Jakobs, M.-C., Lemberger, T.: Difference verification with conditions. In: de Boer, F., Cerone, A. (eds.) SEFM 2020. LNCS, vol. 12310, pp. 133–154. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58768-0_8

  26. Beyer, D., Jakobs, M.-C., Lemberger, T., Wehrheim, H.: Reducer-based construction of conditional verifiers. In: ICSE, pp. 1182–1193. ACM (2018). https://doi.org/10.1145/3180155.3180259

  27. Beyer, D., Keremoglu, M.E.: CPAchecker: A tool for configurable software verification. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 184–190. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_16

  28. Beyer, D., Keremoglu, M.E., Wendler, P.: Predicate abstraction with adjustable-block encoding. In: FMCAD, pp. 189–197. IEEE (2010). https://ieeexplore.ieee.org/document/5770949/

  29. Beyer, D., Lemberger, T.: Conditional testing. In: Chen, Y.-F., Cheng, C.-H., Esparza, J. (eds.) ATVA 2019. LNCS, vol. 11781, pp. 189–208. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31784-3_11

  30. Beyer, D., Löwe, S.: Explicit-state software model checking based on CEGAR and interpolation. In: Cortellessa, V., Varró, D. (eds.) FASE 2013. LNCS, vol. 7793, pp. 146–162. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-37057-1_11

  31. Beyer, D., Löwe, S., Novikov, E., Stahlbauer, A., Wendler, P.: Precision reuse for efficient regression verification. In: FSE, pp. 389–399. ACM (2013). https://doi.org/10.1145/2491411.2491429

  32. Beyer, D., Löwe, S., Wendler, P.: Refinement selection. In: Fischer, B., Geldenhuys, J. (eds.) SPIN 2015. LNCS, vol. 9232, pp. 20–38. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-23404-5_3

  33. Beyer, D., Löwe, S., Wendler, P.: Sliced path prefixes: An effective method to enable refinement selection. In: Graf, S., Viswanathan, M. (eds.) FORTE 2015. LNCS, vol. 9039, pp. 228–243. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19195-9_15

  34. Beyer, D., Löwe, S., Wendler, P.: Reliable benchmarking: Requirements and solutions. STTT 21(1), 1–29 (2019). https://doi.org/10.1007/s10009-017-0469-y

  35. Beyer, D., Spiessl, M.: MetaVal: Witness validation via verification. In: Lahiri, S.K., Wang, C. (eds.) CAV 2020. LNCS, vol. 12225, pp. 165–177. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-53291-8_10

  36. Böhme, M., Oliveira, B.C.d.S., Roychoudhury, A.: Partition-based regression verification. In: ICSE, pp. 302–311. IEEE (2013). https://doi.org/10.1109/ICSE.2013.6606576

  37. Chaieb, A.: Proof-producing program analysis. In: Barkaoui, K., Cavalcanti, A., Cerone, A. (eds.) ICTAC 2006. LNCS, vol. 4281, pp. 287–301. Springer, Heidelberg (2006). https://doi.org/10.1007/11921240_20

  38. Chalupa, M., Jašek, T., Novák, J., Řechtáčková, A., Šoková, V., Strejček, J.: Symbiotic 8: Beyond symbolic execution. In: Groote, J.F., Larsen, K.G. (eds.) TACAS 2021. LNCS, vol. 12652, pp. 453–457. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72013-1_31

  39. Chebaro, O., Kosmatov, N., Giorgetti, A., Julliand, J.: Program slicing enhances a verification technique combining static and dynamic analysis. In: SAC, pp. 1284–1291. ACM (2012). https://doi.org/10.1145/2245276.2231980

  40. Christakis, M., Müller, P., Wüstholz, V.: Collaborative verification and testing with explicit assumptions. In: Giannakopoulou, D., Méry, D. (eds.) FM 2012. LNCS, vol. 7436, pp. 132–146. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32759-9_13

  41. Christakis, M., Müller, P., Wüstholz, V.: Guiding dynamic symbolic execution toward unverified program executions. In: ICSE, pp. 144–155. ACM (2016). https://doi.org/10.1145/2884781.2884843

  42. Clarke, E.M., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-guided abstraction refinement for symbolic model checking. J. ACM 50(5), 752–794 (2003). https://doi.org/10.1145/876638.876643

  43. Cousot, P., Cousot, R.: Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: POPL, pp. 238–252. ACM (1977). https://doi.org/10.1145/512950.512973

  44. Cousot, P., et al.: Combination of abstractions in the ASTRÉE static analyzer. In: Okada, M., Satoh, I. (eds.) ASIAN 2006. LNCS, vol. 4435, pp. 272–300. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-77505-8_23

  45. Csallner, C., Smaragdakis, Y.: Check ‘n’ crash: Combining static checking and testing. In: ICSE, pp. 422–431. ACM (2005). https://doi.org/10.1145/1062455.1062533

  46. Czech, M., Jakobs, M.-C., Wehrheim, H.: Just test what you cannot verify! In: Egyed, A., Schaefer, I. (eds.) FASE 2015. LNCS, vol. 9033, pp. 100–114. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46675-9_7

  47. Daca, P., Gupta, A., Henzinger, T.A.: Abstraction-driven concolic testing. In: Jobstmann, B., Leino, K.R.M. (eds.) VMCAI 2016. LNCS, vol. 9583, pp. 328–347. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49122-5_16

  48. Dams, D.R., Namjoshi, K.S.: Orion: High-precision methods for static error analysis of C and C++ Programs. In: de Boer, F.S., Bonsangue, M.M., Graf, S., de Roever, W.-P. (eds.) FMCO 2005. LNCS, vol. 4111, pp. 138–160. Springer, Heidelberg (2006). https://doi.org/10.1007/11804192_7

  49. Dangl, M., Löwe, S., Wendler, P.: CPAchecker with support for recursive programs and floating-point arithmetic. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 423–425. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46681-0_34

  50. Demyanova, Y., Pani, T., Veith, H., Zuleger, F.: Empirical software metrics for benchmarking of verification tools. In: Kroening, D., Păsăreanu, C.S. (eds.) CAV 2015. LNCS, vol. 9206, pp. 561–579. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-21690-4_39

  51. Felsing, D., Grebing, S., Klebanov, V., Rümmer, P., Ulbrich, M.: Automating regression verification. In: ASE, pp. 349–360. ACM (2014). https://doi.org/10.1145/2642937.2642987

  52. Ferles, K., Wüstholz, V., Christakis, M., Dillig, I.: Failure-directed program trimming. In: FSE, pp. 174–185. ACM (2017). https://doi.org/10.1145/3106237.3106249

  53. Ge, X., Taneja, K., Xie, T., Tillmann, N.: Dyta: Dynamic symbolic execution guided with static verification results. In: ICSE, pp. 992–994. ACM (2011). https://doi.org/10.1145/1985793.1985971

  54. Gerrard, M.J., Dwyer, M.B.: ALPACA: A large portfolio-based alternating conditional analysis. In: ICSE, pp. 35–38. IEEE/ACM (2019). https://doi.org/10.1109/ICSE-Companion.2019.00032

  55. Godefroid, P., Nori, A.V., Rajamani, S.K., Tetali, S.: Compositional may-must program analysis: Unleashing the power of alternation. In: POPL, pp. 43–56. ACM (2010). https://doi.org/10.1145/1706299.1706307

  56. Godlin, B., Strichman, O.: Regression verification. In: DAC, pp. 466–471. ACM (2009). https://doi.org/10.1145/1629911.1630034

  57. Gulavani, B.S., Henzinger, T.A., Kannan, Y., Nori, A.V., Rajamani, S.K.: SYNERGY: A new algorithm for property checking. In: FSE, pp. 117–127. ACM (2006). https://doi.org/10.1145/1181775.1181790

  58. Haltermann, J., Wehrheim, H.: CoVEGI: Cooperative verification via externally generated invariants. In: Guerra, E., Stoelinga, M. (eds.) FASE 2021. LNCS, vol. 12649, pp. 108–129. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-71500-7_6

  59. He, F., Yu, Q., Cai, L.: Efficient summary reuse for software regression verification. TSE (2020). https://doi.org/10.1109/TSE.2020.3021477

  60. Helm, D., Kübler, F., Reif, M., Eichberg, M., Mezini, M.: Modular collaborative program analysis in OPAL. In: FSE, pp. 184–196. ACM (2020). https://doi.org/10.1145/3368089.3409765

  61. Henzinger, T.A., Necula, G.C., Jhala, R., Sutre, G., Majumdar, R., Weimer, W.: Temporal-safety proofs for systems code. In: Brinksma, E., Larsen, K.G. (eds.) CAV 2002. LNCS, vol. 2404, pp. 526–538. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45657-0_45

  62. Henzinger, T.A., Jhala, R., Majumdar, R., Sanvido, M.A.A.: Extreme model checking. In: Dershowitz, N. (ed.) Verification: Theory and Practice. LNCS, vol. 2772, pp. 332–358. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-39910-0_16

  63. Henzinger, T.A., Jhala, R., Majumdar, R., Sutre, G.: Lazy abstraction. In: POPL, pp. 58–70. ACM (2002). https://doi.org/10.1145/503272.503279

  64. Holík, L., Kotoun, M., Peringer, P., Soková, V., Trtík, M., Vojnar, T.: Predator shape analysis tool suite. In: HVC 2016. LNCS, vol. 10028, pp. 202–209. Springer (2016). https://doi.org/10.1007/978-3-319-49052-6_13

  65. Holzmann, G.J., Joshi, R., Groce, A.: Swarm verification. In: ASE, pp. 1–6. IEEE (2008). https://doi.org/10.1109/ASE.2008.9

  66. Inkumsah, K., Xie, T.: Improving structural testing of object-oriented programs via integrating evolutionary testing and symbolic execution. In: ASE, pp. 297–306. IEEE (2008). https://doi.org/10.1109/ASE.2008.40

  67. Jakobs, M.-C.: Speed up configurable certificate validation by certificate reduction and partitioning. In: Calinescu, R., Rumpe, B. (eds.) SEFM 2015. LNCS, vol. 9276, pp. 159–174. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22969-0_12

  68. Jakobs, M.-C.: PEQcheck: Localized and context-aware checking of functional equivalence. In: FormaliSE, pp. 130–140. IEEE (2021). https://doi.ieeecomputersociety.org/10.1109/FormaliSE52586.2021.00019

  69. Jakobs, M.-C.: Replication package for article ‘Reusing Predicate Precision in Value Analysis’. In: IFM 2022 (2022). https://doi.org/10.5281/zenodo.5645043

  70. Jakobs, M.-C., Wehrheim, H.: Certification for configurable program analysis. In: SPIN, pp. 30–39. ACM (2014). https://doi.org/10.1145/2632362.2632372

  71. Lauterburg, S., Sobeih, A., Marinov, D., Viswanathan, M.: Incremental state-space exploration for programs with dynamically allocated data. In: ICSE, pp. 291–300. ACM (2008). https://doi.org/10.1145/1368088.1368128

  72. Li, K., Reichenbach, C., Csallner, C., Smaragdakis, Y.: Residual investigation: Predictive and precise bug detection. In: ISSTA, pp. 298–308. ACM (2012)


  73. Majumdar, R., Sen, K.: Hybrid concolic testing. In: ICSE, pp. 416–426. IEEE (2007). https://doi.org/10.1109/ICSE.2007.41

  74. Mudduluru, R., Ramanathan, M.K.: Efficient incremental static analysis using path abstraction. In: Gnesi, S., Rensink, A. (eds.) FASE 2014. LNCS, vol. 8411, pp. 125–139. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54804-8_9

  75. Necula, G.C.: Proof-carrying code. In: POPL, pp. 106–119. ACM (1997). https://doi.org/10.1145/263699.263712

  76. Nguyen, T.L., Schrammel, P., Fischer, B., Torre, S.L., Parlato, G.: Parallel bug-finding in concurrent programs via reduced interleaving instances. In: ASE, pp. 753–764. IEEE (2017). https://doi.org/10.1109/ASE.2017.8115686

  77. Noller, Y., Kersten, R., Pasareanu, C.S.: Badger: Complexity analysis with fuzzing and symbolic execution. In: ISSTA, pp. 322–332. ACM (2018). https://doi.org/10.1145/3213846.3213868

  78. Noller, Y., Pasareanu, C.S., Böhme, M., Sun, Y., Nguyen, H.L., Grunske, L.: HyDiff: Hybrid differential software analysis. In: ICSE, pp. 1273–1285. ACM (2020). https://doi.org/10.1145/3377811.3380363

  79. Palikareva, H., Kuchta, T., Cadar, C.: Shadow of a doubt: Testing for divergences between software versions. In: ICSE, pp. 1181–1192. ACM (2016). https://doi.org/10.1145/2884781.2884845

  80. Person, S., Dwyer, M.B., Elbaum, S.G., Pasareanu, C.S.: Differential symbolic execution. In: FSE, pp. 226–237. ACM (2008). https://doi.org/10.1145/1453101.1453131

  81. Person, S., Yang, G., Rungta, N., Khurshid, S.: Directed incremental symbolic execution. In: PLDI, pp. 504–515. ACM (2011). https://doi.org/10.1145/1993498.1993558

  82. Post, H., Sinz, C., Kaiser, A., Gorges, T.: Reducing false positives by combining abstract interpretation and bounded model checking. In: ASE, pp. 188–197. IEEE (2008). https://doi.org/10.1109/ASE.2008.29

  83. Richter, C., Hüllermeier, E., Jakobs, M.-C., Wehrheim, H.: Algorithm selection for software validation based on graph kernels. JASE 27(1), 153–186 (2020). https://doi.org/10.1007/s10515-020-00270-x

  84. Rose, E.: Lightweight bytecode verification. JAR 31(3–4), 303–334 (2003). https://doi.org/10.1023/B:JARS.0000021015.15794.82

  85. Rothenberg, B.-C., Dietsch, D., Heizmann, M.: Incremental verification using trace abstraction. In: Podelski, A. (ed.) SAS 2018. LNCS, vol. 11002, pp. 364–382. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99725-4_22

  86. Seidl, H., Erhard, J., Vogler, R.: Incremental abstract interpretation. In: Di Pierro, A., Malacaria, P., Nagarajan, R. (eds.) From Lambda Calculus to Cybersecurity Through Program Analysis. LNCS, vol. 12065, pp. 132–148. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-41103-9_5

  87. Seo, S., Yang, H., Yi, K.: Automatic construction of Hoare proofs from abstract interpretation results. In: Ohori, A. (ed.) APLAS 2003. LNCS, vol. 2895, pp. 230–245. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-40018-9_16

  88. Sery, O., Fedyukovich, G., Sharygina, N.: Incremental upgrade checking by means of interpolation-based function summaries. In: FMCAD, pp. 114–121. FMCAD Inc. (2012). http://ieeexplore.ieee.org/document/6462563/

  89. Sherman, E., Dwyer, M.B.: Structurally defined conditional data-flow static analysis. In: Beyer, D., Huisman, M. (eds.) TACAS 2018. LNCS, vol. 10806, pp. 249–265. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-89963-3_15

  90. Siddiqui, J.H., Khurshid, S.: Scaling symbolic execution using ranged analysis. In: Leavens, G.T., Dwyer, M.B. (eds.) SPLASH, pp. 523–536. ACM (2012). https://doi.org/10.1145/2384616.2384654

  91. Staats, M., Pasareanu, C.S.: Parallel symbolic execution for structural test generation. In: ISSTA, pp. 183–194. ACM (2010). https://doi.org/10.1145/1831708.1831732

  92. Stephens, N., et al.: Driller: Augmenting fuzzing through selective symbolic execution. In: NDSS. The Internet Society (2016)


  93. Švejda, J., Berger, P., Katoen, J.-P.: Interpretation-based violation witness validation for C: NITWIT. In: Biere, A., Parker, D. (eds.) TACAS 2020. LNCS, vol. 12078, pp. 40–57. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-45190-5_3

  94. Szabó, T., Erdweg, S., Voelter, M.: IncA: A DSL for the definition of incremental program analyses. In: ASE, pp. 320–331. ACM (2016). https://doi.org/10.1145/2970276.2970298

  95. Trostanetski, A., Grumberg, O., Kroening, D.: Modular demand-driven analysis of semantic difference for program versions. In: Ranzato, F. (ed.) SAS 2017. LNCS, vol. 10422, pp. 405–427. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66706-5_20

  96. Tulsian, V., Kanade, A., Kumar, R., Lal, A., Nori, A.V.: MUX: Algorithm selection for software model checkers. In: MSR, pp. 132–141. ACM (2014). https://doi.org/10.1145/2597073.2597080

  97. Visser, W., Geldenhuys, J., Dwyer, M.B.: Green: Reducing, reusing, and recycling constraints in program analysis. In: FSE, pp. 58:1–58:11. ACM (2012). https://doi.org/10.1145/2393596.2393665

  98. Yang, G., Dwyer, M.B., Rothermel, G.: Regression model checking. In: ICSM, pp. 115–124. IEEE (2009). https://doi.org/10.1109/ICSM.2009.5306334

  99. Yang, G., Păsăreanu, C.S., Khurshid, S.: Memoized symbolic execution. In: ISSTA, pp. 144–154. ACM (2012). https://doi.org/10.1145/2338965.2336771

  100. Yorsh, G., Ball, T., Sagiv, M.: Testing, abstraction, theorem proving: Better together! In: ISSTA, pp. 145–156. ACM (2006). https://doi.org/10.1145/1146238.1146255

  101. Yu, Q., He, F., Wang, B.: Incremental predicate analysis for regression verification. Proc. ACM Program. Lang. 4(OOPSLA), 184:1–184:25 (2020). https://doi.org/10.1145/3428252

Author information

Correspondence to Marie-Christine Jakobs.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Jakobs, MC. (2022). Reusing Predicate Precision in Value Analysis. In: ter Beek, M.H., Monahan, R. (eds) Integrated Formal Methods. IFM 2022. Lecture Notes in Computer Science, vol 13274. Springer, Cham. https://doi.org/10.1007/978-3-031-07727-2_5


  • DOI: https://doi.org/10.1007/978-3-031-07727-2_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-07726-5

  • Online ISBN: 978-3-031-07727-2

  • eBook Packages: Computer Science, Computer Science (R0)
