DOI: 10.1145/3430665.3456316

Research article
Open access

Reproduction for Insight: Towards Better Understanding the Quality of Students Tests

Published: 26 June 2021

Abstract

Unit testing is an essential part of computer science education. Students leaving university should be able to write tests of high quality. Traditionally, teaching assistants or the teachers themselves check projects and assignments manually. Clearly, this does not scale well, and such methods make it hard to get an overview of overall student performance. A more efficient approach is the so-called all-pairs method, applied in the study by Edwards and Shams (ITiCSE, 2014): each student implements both the unit (e.g., a class) and a corresponding test suite, and the teacher adds a reference implementation. The measurement is done by running every test suite against every implementation. The present study builds on that research.
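To make the all-pairs procedure concrete, here is a minimal Python sketch. It assumes a hypothetical layout with one folder per student (containing src/ and tests/ subfolders) and pytest as the test runner; the authors' actual tooling is the Students Tests Analysis Tool [18], so treat the names and structure here as illustrative only.

```python
import os
import subprocess
import sys
from pathlib import Path

def run_suite(test_dir: Path, impl_dir: Path) -> bool:
    """Run one student's test suite against one implementation.

    Hypothetical harness: each suite is a pytest directory, and the
    implementation under test is selected via PYTHONPATH.
    Returns True when the whole suite passes.
    """
    env = {**os.environ, "PYTHONPATH": str(impl_dir)}
    result = subprocess.run(
        [sys.executable, "-m", "pytest", str(test_dir), "-q"],
        env=env,
        capture_output=True,
    )
    return result.returncode == 0

def all_pairs(students: dict[str, Path], reference: Path) -> dict[tuple[str, str], bool]:
    """Run every student's test suite against every implementation.

    The implementations include the teacher's reference: a suite that
    fails against the reference is itself suspect, while a suite that
    passes a peer's faulty implementation has missed a defect.
    """
    impls = {name: path / "src" for name, path in students.items()}
    impls["reference"] = reference
    return {
        (tester, impl_name): run_suite(path / "tests", impl_dir)
        for tester, path in students.items()
        for impl_name, impl_dir in impls.items()
    }
```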
Our fundamental question is how teachers can gain insight into the testing ability of their students. In particular, we develop a software framework and a replicable method. Students solve an implementation assignment; by assembling the results, the software gives teachers an overview of the test quality of each student's work as well as of the students' overall testing ability. In this study, we replicate and improve the method of Edwards and Shams and perform measurements to better understand the testing ability of our students (N = 98). Additionally, we lay the foundations for making the measurement of students' test quality replicable; this will later enable us to compare different study programs, or the same student population before and after a given course.
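One way to turn the all-pairs matrix into the kind of per-student overview described above is sketched below. The scoring is our assumption, not necessarily the paper's exact metric: a suite should pass the teacher's reference implementation, and the more peer implementations it fails, the more potential defects it detects.

```python
def suite_quality(matrix: dict[tuple[str, str], bool], students: list[str]) -> dict[str, dict]:
    """Summarize each student's test-suite quality from the all-pairs matrix.

    Hypothetical scoring: a suite is 'valid' if it passes the reference
    implementation, and its 'detection_rate' is the fraction of peer
    implementations it fails, each failure counting as a defect found.
    """
    scores = {}
    for s in students:
        peers = [p for p in students if p != s]
        detected = sum(1 for p in peers if not matrix[(s, p)])
        scores[s] = {
            "valid": matrix[(s, "reference")],
            "detection_rate": detected / len(peers) if peers else 0.0,
        }
    return scores
```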

References

[1]
Samantha F Anderson and Scott E Maxwell. 2016. There's more than one way to conduct a replication study: Beyond statistical significance. Psychological Methods, Vol. 21, 1 (2016), 1.
[2]
Gustavo MN Avellar, Rogério F da Silva, Lilian P Scatalon, Stevão A Andrade, Márcio E Delamaro, and Ellen F Barbosa. 2019. Integration of Software Testing to Programming Assignments: An Experimental Study. In 2019 IEEE Frontiers in Education Conference (FIE). IEEE, 1--9.
[3]
Victor R Basili, Forrest Shull, and Filippo Lanubile. 1999. Building knowledge through families of experiments. IEEE Transactions on Software Engineering, Vol. 25, 4 (1999), 456--473.
[4]
Juan C Burguillo. 2010. Using game theory and competition-based learning to stimulate student motivation and performance. Computers & Education, Vol. 55, 2 (2010), 566--575.
[5]
Henrik Bærbak Christensen. 2016. Teaching DevOps and cloud computing using a cognitive apprenticeship and story-telling approach. In Proceedings of the 2016 ACM conference on innovation and technology in computer science education. 174--179.
[6]
Fabio QB Da Silva, Marcos Suassuna, A César C França, Alicia M Grubb, Tatiana B Gouveia, Cleviton VF Monteiro, and Igor Ebrahim dos Santos. 2014. Replication of empirical studies in software engineering research: a systematic mapping study. Empirical Software Engineering, Vol. 19, 3 (2014), 501--557.
[7]
Stephen H Edwards. 2004. Using software testing to move students from trial-and-error to reflection-in-action. In ACM SIGCSE Bulletin, Vol. 36. ACM, 26--30.
[8]
Stephen H Edwards and Zalia Shams. 2014. Do student programmers all tend to write the same software tests?. In Proceedings of the 2014 conference on Innovation & technology in computer science education. ACM, 171--176.
[9]
Colin Fidge, Jim Hogan, and Raymond Lister. 2013. What vs. how: comparing students' testing and coding skills. In Conferences in Research and Practice in Information Technology Series.
[10]
Michael H Goldwasser. 2002. A gimmick to integrate software testing throughout the curriculum. In ACM SIGCSE Bulletin, Vol. 34. ACM, 271--275.
[11]
Matthew Hertz. 2010. What do “CS1” and “CS2” mean? Investigating differences in the early courses. In Proceedings of the 41st ACM technical symposium on Computer science education. Milwaukee, Wisconsin, USA, 199--203.
[12]
Edward L Jones. 2001. Grading student programs-a software testing approach. Journal of Computing Sciences in Colleges, Vol. 16, 2 (2001), 185--192.
[13]
Natalia Juristo and Omar S Gómez. 2010. Replication of software engineering experiments. In Empirical software engineering and verification. Springer, 60--88.
[14]
Natalia Juristo and Sira Vegas. 2011. The role of non-exact replications in software engineering experiments. Empirical Software Engineering, Vol. 16, 3 (2011), 295--324.
[15]
Ville Karavirta and Cliff Shaffer. 2016. OpenDSA. https://opendsa-server.cs.vt.edu/
[16]
Jonathan L Krein and Charles D Knutson. 2010. A case for replication: synthesizing research methodologies in software engineering. In RESER2010: proceedings of the 1st international workshop on replication in empirical software engineering research. Citeseer.
[17]
Daniel E Krutz, Samuel A Malachowsky, and Thomas Reichlmayr. 2014. Using a real world project in a software testing course. In Proceedings of the 45th ACM technical symposium on Computer science education. ACM, 49--54.
[18]
M. Lawende. 2021. Students Tests Analysis Tool. https://github.com/mauritsl/studenttests
[19]
Jonathan Lung, Jorge Aranda, Steve Easterbrook, and Gregory Wilson. 2008. On the difficulty of replicating human subjects studies in software engineering. In 2008 ACM/IEEE 30th International Conference on Software Engineering. IEEE, 191--200.
[20]
Roy P. Pargas, Joe C. Lundy, and John N. Underwood. 1997. Tournament Play in CS1. SIGCSE Bull., Vol. 29, 1 (March 1997), 214--218. https://doi.org/10.1145/268085.268166
[21]
Keith Quille and Susan Bergin. 2018. Programming: predicting student success early in CS1. a re-validation and replication study. In Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education. 15--20.
[22]
Giuseppe Scanniello, Simone Romano, Davide Fucci, Burak Turhan, and Natalia Juristo. 2016. Students' and Professionals' Perceptions of Test-driven Development: A Focus Group Study. In Proceedings of the 31st Annual ACM Symposium on Applied Computing (Pisa, Italy) (SAC '16). ACM, New York, NY, USA, 1422--1427. https://doi.org/10.1145/2851613.2851778
[23]
Zalia Shams and Stephen H. Edwards. 2013. Toward Practical Mutation Analysis for Evaluating the Quality of Student-written Software Tests. In Proceedings of the Ninth Annual International ACM Conference on International Computing Education Research (San Diego, California, USA) (ICER '13). ACM, New York, NY, USA, 53--58. https://doi.org/10.1145/2493394.2493402
[24]
Martin Shepperd. 2018. Replication studies considered harmful. In 2018 IEEE/ACM 40th International Conference on Software Engineering: New Ideas and Emerging Technologies Results (ICSE-NIER). IEEE, 73--76.
[25]
Jacqueline L Whalley and Anne Philpott. 2011. A unit testing approach to building novice programmers' skills and confidence. In Proceedings of the Thirteenth Australasian Computing Education Conference, Vol. 114. 113--118.
[26]
Daniel Zingaro, Michelle Craig, Leo Porter, Brett A Becker, Yingjun Cao, Phill Conrad, Diana Cukierman, Arto Hellas, Dastyni Loksa, and Neena Thota. 2018. Achievement goals in CS1: Replication and extension. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education. 687--692.


Published In

ITiCSE '21: Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education V. 1
June 2021
611 pages
ISBN:9781450382144
DOI:10.1145/3430665
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. assessment
  2. automatic grading
  3. replication
  4. software testing

Qualifiers

  • Research-article

Funding Sources

  • Radboud University, Nijmegen, The Netherlands
  • Open Universiteit, Heerlen, The Netherlands

Conference

ITiCSE 2021

Acceptance Rates

Overall Acceptance Rate 552 of 1,613 submissions, 34%


