DOI: 10.1145/3464974.3468446

Test’n’Mo: a collaborative platform for human testers and intelligent monitoring agents

Published: 11 July 2021

Abstract

Many software bugs have disruptive consequences, both in financial terms and in loss of life. Software Testing is a widely used approach to detect software bugs and ensure software quality, but the testing activity, conducted either manually or using testing frameworks, is repetitive and expensive. Runtime Monitoring, unlike Software Testing, does not require test cases to be designed and executed; once the property to be monitored has been specified, it does not rely on human beings performing any further action unless a violation is detected. However, the property to be monitored, which must be fed to the monitor along with the trace or stream of observed events, may be very hard to identify and specify. In this extended abstract we present the Test'n'Mo vision, which exploits Artificial Intelligence and Machine Learning as enabling techniques for a hybrid platform combining Software Testing and Runtime Monitoring. In Test'n'Mo, human testers and software agents of different kinds – 'Learning Agents' and 'Runtime Monitoring and Testing Agents' – collaborate to achieve their common testing goal. Although Test'n'Mo is meant to address User Interface testing of web/mobile apps, its approach may be adapted to other software testing activities.
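The runtime-monitoring setting the abstract describes – a specified property checked against a trace or stream of observed events, with no human action needed until a violation occurs – can be sketched minimally. The property below ("every 'open' event is eventually matched by a 'close' before the trace ends") and the event names are hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch of a runtime monitor (illustrative, not the paper's system).
# The monitor consumes an event trace and reports the index of the first
# violation of a hypothetical property: every 'open' must be matched by a
# later 'close', and no 'close' may occur without a pending 'open'.

def monitor(events):
    """Return the index of the first violation, or None if the trace satisfies the property."""
    open_depth = 0
    for i, ev in enumerate(events):
        if ev == "open":
            open_depth += 1
        elif ev == "close":
            if open_depth == 0:
                return i  # 'close' without a matching 'open': violation here
            open_depth -= 1
    # An 'open' left pending at the end of the trace is also a violation.
    return None if open_depth == 0 else len(events)

print(monitor(["open", "click", "close"]))  # None: property holds
print(monitor(["open", "click"]))           # 2: the 'open' is never closed
```

Specifying such properties by hand is exactly the burden the abstract highlights; in the Test'n'Mo vision, learning agents would help identify and specify them.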


Cited By

  • (2023) Exploiting Logic Programming for Runtime Verification: Current and Future Perspectives. In Prolog: The Next 50 Years, 300–317. DOI: 10.1007/978-3-031-35254-6_25. Online publication date: 17-Jun-2023.

Published In

VORTEX 2021: Proceedings of the 5th ACM International Workshop on Verification and mOnitoring at Runtime EXecution
July 2021
39 pages
ISBN:9781450385466
DOI:10.1145/3464974

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. collaborative human-agent testing platform
  2. runtime monitoring agents
  3. test agents

Qualifiers

  • Research-article

Conference

ISSTA '21
