
Reevaluating the Small-Scope Testing Hypothesis of Answer Set Programs

  • Conference paper
  • In: Testing Software and Systems (ICTSS 2024)

Abstract

As we increasingly rely on artificial intelligence systems, we must ensure that these systems are reliable and know how much we can rely on them. In software quality assurance, testing is a useful method for uncovering and fixing issues during development, avoiding unexpected behavior after a system has been deployed, and artificial intelligence engineers are increasingly becoming aware of quality assurance as a requirement. Previous results in the area of answer set programming suggest that a high proportion of errors can be found by testing a program against a small scope, i.e., with inputs drawn from a small domain. However, these results are based on assumptions that may be impractical for testing. To find out whether small scopes remain sufficient in practice, we evaluate several benchmarks against actual test oracles. Our findings suggest that small scopes can indeed find a high proportion of errors, but the results depend on the benchmark under observation, and appropriate test oracles are required to achieve reliable scores.
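
To make “testing against a small scope” concrete, here is a minimal sketch that exhaustively checks a toy graph 2-coloring encoding against a brute-force oracle on every graph with at most three nodes, using clingo's Python API. The encoding, the oracle, and the scope bound are illustrative assumptions, not the paper's benchmarks or test harness.

    import clingo
    from itertools import combinations

    # Toy ASP encoding of graph 2-coloring (illustrative, not from the paper).
    ENCODING = """
    1 { color(X,r) ; color(X,b) } 1 :- node(X).
    :- edge(X,Y), color(X,C), color(Y,C).
    """

    def oracle_two_colorable(nodes, edges):
        # Brute-force oracle: try every red/blue assignment.
        for bits in range(2 ** len(nodes)):
            coloring = {n: (bits >> i) & 1 for i, n in enumerate(nodes)}
            if all(coloring[x] != coloring[y] for x, y in edges):
                return True
        return False

    def encoding_two_colorable(nodes, edges):
        # Ask clingo whether the encoding plus the instance facts
        # admits at least one answer set.
        facts = "".join(f"node({n})." for n in nodes)
        facts += "".join(f"edge({x},{y})." for x, y in edges)
        ctl = clingo.Control()
        ctl.add("base", [], ENCODING + facts)
        ctl.ground([("base", [])])
        return ctl.solve().satisfiable

    SCOPE = 3  # the small scope: all graphs with at most three nodes
    for k in range(SCOPE + 1):
        nodes = list(range(k))
        possible = list(combinations(nodes, 2))
        for mask in range(2 ** len(possible)):
            edges = [e for i, e in enumerate(possible) if (mask >> i) & 1]
            assert encoding_two_colorable(nodes, edges) == oracle_two_colorable(nodes, edges)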


Notes

  1. https://git.ist.tugraz.at/clingabomino/clingabomino.

  2. Clingo represents the empty head using an invisible NOGOOD literal (see the first sketch after these notes).

  3. These execution times need to be taken with a spoonful of salt and cannot be compared to [13], as we generate more mutants and run the tests on all instances while limiting only the time allowed to generate the answer sets, not the time for validating them in a test script (see the second sketch after these notes).

  4. You may imagine the floor to be lava.

  5. QuickCheck would need to be adapted to answer set programming and would also require a “shrinker”, as, unlike Harvey, it starts with large instances and then shrinks them if a bug is found (see the third sketch after these notes).
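
To illustrate note 2, the following minimal sketch, written against clingo's Python API with a program of our own choosing, shows a rule with an empty head, i.e. an integrity constraint; it eliminates every answer set containing both a and b:

    import clingo

    # "{a; b}." freely chooses atoms; the empty-head rule ":- a, b."
    # forbids choosing both at once.
    ctl = clingo.Control(["0"])               # "0": enumerate all answer sets
    ctl.add("base", [], "{a; b}. :- a, b.")
    ctl.ground([("base", [])])
    ctl.solve(on_model=print)                 # prints: (empty), a, b -- never "a b"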
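
The timing setup described in note 3, capping only the generation of answer sets but not their validation, could look roughly like the following sketch. The file names and the validator are hypothetical placeholders; clingo's --time-limit option and its "Answer: N" output format are real.

    import subprocess

    def solve_with_time_limit(files, seconds=60):
        # Cap only the solving step: --time-limit stops the search after
        # the given number of seconds.
        result = subprocess.run(
            ["clingo", *files, "0", f"--time-limit={seconds}"],
            capture_output=True, text=True)
        lines = result.stdout.splitlines()
        # Each model is printed on the line following "Answer: N".
        return [lines[i + 1].split()
                for i, line in enumerate(lines) if line.startswith("Answer:")]

    def validate(answer_set):
        return True  # placeholder oracle; deliberately not time-limited

    for model in solve_with_time_limit(["encoding.lp", "instance.lp"]):
        assert validate(model)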
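
Finally, to illustrate the “shrinker” mentioned in note 5: a shrinker greedily removes parts of a failing instance for as long as the test keeps failing, ending at a smaller counterexample. The sketch below is generic, not QuickCheck's or Harvey's implementation, and fails is a hypothetical test predicate.

    def shrink(instance, fails):
        # Greedily drop one element at a time while the failure persists.
        changed = True
        while changed:
            changed = False
            for i in range(len(instance)):
                candidate = instance[:i] + instance[i + 1:]
                if fails(candidate):        # still a counterexample?
                    instance = candidate    # keep the smaller instance
                    changed = True
                    break
        return instance

    # Example: a minimal sublist whose sum still exceeds 10.
    print(shrink([7, 1, 2, 5, 9], lambda xs: sum(xs) > 10))  # prints [5, 9]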

References

  1. Alblwi, S., Ayad, A., Mili, A.: Mutation coverage is not strongly correlated with mutation coverage. In: Proceedings of the 5th ACM/IEEE International Conference on Automation of Software Test (AST 2024), pp. 1–11. Association for Computing Machinery, New York (2024). https://doi.org/10.1145/3644032.3644442

  2. Amendola, G., Berei, T., Ricca, F.: Testing in ASP: revisited language and programming environment. In: Faber, W., Friedrich, G., Gebser, M., Morak, M. (eds.) JELIA 2021. LNCS (LNAI), vol. 12678, pp. 362–376. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-75775-5_24

  3. Calimeri, F., Gebser, M., Maratea, M., Ricca, F.: Design and results of the fifth answer set programming competition. Artif. Intell. 231, 151–181 (2016). https://doi.org/10.1016/j.artint.2015.09.008

  4. DeMillo, R., Lipton, R., Sayward, F.: Hints on test data selection: help for the practicing programmer. Computer 11(4), 34–41 (1978). https://doi.org/10.1109/C-M.1978.218136

  5. Febbraro, O., Leone, N., Reale, K., Ricca, F.: Unit testing in ASPIDE. In: Tompits, H., et al. (eds.) INAP/WLP -2011. LNCS (LNAI), vol. 7773, pp. 345–364. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41524-1_21

  6. Gebser, M., Harrison, A., Kaminski, R., Lifschitz, V., Schaub, T.: Abstract gringo. Theory Pract. Logic Program. 15(4–5), 449–463 (2015). https://doi.org/10.1017/S1471068415000150

  7. Gebser, M., Kaminski, R., Kaufmann, B., Schaub, T.: Multi-shot ASP solving with clingo. Theory Pract. Logic Program. 19(1), 27–82 (2019). https://doi.org/10.1017/S1471068418000054

  8. Greßler, A., Oetsch, J., Tompits, H.: Harvey: a system for random testing in ASP. In: Balduccini, M., Janhunen, T. (eds.) LPNMR 2017. LNCS (LNAI), vol. 10377, pp. 229–235. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-61660-5_21

  9. Hughes, J.: QuickCheck testing for fun and profit. In: Hanus, M. (ed.) Practical Aspects of Declarative Languages, pp. 1–32. Springer, Heidelberg (2007)

  10. Jackson, D., Damon, C.A.: Elements of style: analyzing a software design feature with a counterexample detector. ACM SIGSOFT Softw. Eng. Notes 21(3), 239–249 (1996). https://doi.org/10.1145/226295.226322

  11. Janhunen, T., Niemelä, I., Oetsch, J., Pührer, J., Tompits, H.: On testing answer-set programs. In: Coelho, H., Studer, R., Wooldridge, M.J. (eds.) ECAI 2010 - 19th European Conference on Artificial Intelligence, Lisbon, Portugal, 16–20 August 2010. Frontiers in Artificial Intelligence and Applications, vol. 215, pp. 951–956. IOS Press (2010). https://doi.org/10.3233/978-1-60750-606-5-951

  12. Janhunen, T., Niemelä, I., Oetsch, J., Pührer, J., Tompits, H.: Random vs. structure-based testing of answer-set programs: an experimental comparison. In: Delgrande, J.P., Faber, W. (eds.) LPNMR 2011. LNCS (LNAI), vol. 6645, pp. 242–247. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-20895-9_26

  13. Oetsch, J., Prischink, M., Pührer, J., Schwengerer, M., Tompits, H.: On the small-scope hypothesis for testing answer-set programs. In: Brewka, G., Eiter, T., McIlraith, S.A. (eds.) Principles of Knowledge Representation and Reasoning: Proceedings of the Thirteenth International Conference, KR 2012, Rome, Italy, 10–14 June 2012. AAAI Press (2012). http://www.aaai.org/ocs/index.php/KR/KR12/paper/view/4550

  14. Offutt, A., Rothermel, G., Zapf, C.: An experimental evaluation of selective mutation. In: Proceedings of 1993 15th International Conference on Software Engineering, pp. 100–107 (1993). https://doi.org/10.1109/ICSE.1993.346062

  15. Papadakis, M., Henard, C., Harman, M., Jia, Y., Le Traon, Y.: Threats to the validity of mutation-based test assessment. In: Proceedings of the 25th International Symposium on Software Testing and Analysis, ISSTA 2016, pp. 354–365. Association for Computing Machinery, New York (2016). https://doi.org/10.1145/2931037.2931040

  16. Rosa, A.: On certain valuations of the vertices of a graph. In: Theory of Graphs, International Symposium, pp. 349–355. Gordon and Breach (1966)

  17. Tange, O.: GNU Parallel 20240222 (2024). https://doi.org/10.5281/zenodo.10719803. GNU Parallel is a general parallelizer to run multiple serial command line programs in parallel without changing them

  18. Viola Pizzoleto, A., Cutigi Ferrari, F., Offutt, J., Fernandes, L., Ribeiro, M.: A systematic literature review of techniques and metrics to reduce the cost of mutation testing. J. Syst. Softw. 157, 110388 (2019). https://doi.org/10.1016/j.jss.2019.07.100

  19. Wotawa, F., Tazl, O.: On the verification of diagnosis models. In: Industrial Artificial Intelligence Technologies and Applications, pp. 189–203 (2022). https://doi.org/10.13052/rp-9788770227902

Acknowledgments

This paper is part of the AI4CSM project that has received funding within the ECSEL JU in collaboration with the European Union’s H2020 Framework Programme (H2020/2014-2020) and National Authorities, under grant agreement No. 101007326. The work was partially funded by the Austrian Federal Ministry of Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK) under the program “ICT of the Future” project 877587.

Author information

Correspondence to Liliana Marie Prikler.


Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare relevant to this article’s content.


Copyright information

© 2025 IFIP International Federation for Information Processing

About this paper

Cite this paper

Prikler, L.M., Wotawa, F. (2025). Reevaluating the Small-Scope Testing Hypothesis of Answer Set Programs. In: Menéndez, H.D., et al. Testing Software and Systems. ICTSS 2024. Lecture Notes in Computer Science, vol 15383. Springer, Cham. https://doi.org/10.1007/978-3-031-80889-0_6

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-80889-0_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-80888-3

  • Online ISBN: 978-3-031-80889-0

  • eBook Packages: Computer Science, Computer Science (R0)
