A Systematic Literature Review of Automated Feedback Generation for Programming Exercises

Published: 28 September 2018

Abstract

Formative feedback, aimed at helping students to improve their work, is an important factor in learning. Many tools that offer programming exercises provide automated feedback on student solutions. We have performed a systematic literature review to find out what kind of feedback is provided, which techniques are used to generate the feedback, how adaptable the feedback is, and how these tools are evaluated. We have designed a labelling to classify the tools, and use Narciss’ feedback content categories to classify feedback messages. We report on the results of coding a total of 101 tools. We have found that feedback mostly focuses on identifying mistakes and less on fixing problems and taking a next step. Furthermore, teachers cannot easily adapt tools to their own needs. However, the diversity of feedback types has increased over the past decades and new techniques are being applied to generate feedback that is increasingly helpful for students.
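
As a concrete illustration of the labelling idea mentioned above, the sketch below (Python, not taken from the paper) shows how individual feedback messages could be tagged with Narciss-style content categories. The category names follow common summaries of Narciss (2008); the example messages, the FeedbackMessage class, and the labels assigned to them are hypothetical and only illustrate the kind of coding the review performs.

```python
# Hypothetical sketch: labelling automated feedback messages with
# Narciss-style content categories. Category names follow common summaries
# of Narciss (2008); the messages and labels below are illustrative only
# and are NOT the coding procedure or data used in the review.
from dataclasses import dataclass
from enum import Enum


class FeedbackCategory(Enum):
    KR = "knowledge of result (correct/incorrect)"
    KCR = "knowledge of the correct result (e.g., a model solution)"
    KM = "knowledge about mistakes (where/why the solution is wrong)"
    KH = "knowledge about how to proceed (hints, next steps, fixes)"
    KTC = "knowledge about task constraints (requirements of the exercise)"
    KC = "knowledge about concepts (explanations of underlying concepts)"
    KMC = "knowledge about metacognition (reflection prompts)"


@dataclass
class FeedbackMessage:
    text: str
    categories: set[FeedbackCategory]  # a message may carry several categories


# Illustrative messages a programming tutor might emit, with manual labels.
examples = [
    FeedbackMessage("2 of 5 test cases failed.", {FeedbackCategory.KR}),
    FeedbackMessage("The loop on line 7 never terminates for empty input.",
                    {FeedbackCategory.KM}),
    FeedbackMessage("Consider checking the list length before indexing.",
                    {FeedbackCategory.KH}),
    FeedbackMessage("Your solution must not use built-in sorting functions.",
                    {FeedbackCategory.KTC}),
]

if __name__ == "__main__":
    for msg in examples:
        labels = ", ".join(c.name for c in msg.categories)
        print(f"[{labels}] {msg.text}")
```

Tagging messages this way makes it possible to compare, across tools, how much feedback merely reports correctness (KR) versus helping students locate mistakes (KM) or take a next step (KH).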

Supplementary Material

a3-keuning-apndx.pdf (keuning.zip)
Supplemental movie, appendix, image and software files for A Systematic Literature Review of Automated Feedback Generation for Programming Exercises

References

[1]
Anne Adam and Jean-Pierre Laurent. 1980. LAURA, a system to debug student programs. Artific. Intell. 15, 1--2 (1980), 75--122.
[2]
Kirsti Ala-Mutka. 2005. A survey of automated assessment approaches for programming assignments. Comput. Sci. Edu. 15, 2 (2005), 83--102.
[3]
Kirsti Ala-Mutka and Hannu Matti Järvinen. 2004. Assessment process for programming assignments. In Proceedings of the IEEE Conference on Advanced Learning Technologies. 181--185.
[4]
Vincent Aleven, Bruce Mclaren, Jonathan Sewall, and Kenneth Koedinger. 2009. A new paradigm for intelligent tutoring systems: Example-tracing tutors. Int. J. Artific. Intell. Edu. 19 (2009), 105--154.
[5]
Dean Allemang. 1991. Using functional models in automatic debugging. IEEE Expert 6, 6 (1991), 13--18.
[6]
John R. Anderson. 1983. The Architecture of Cognition. Lawrence Erlbaum Associates, Inc.
[7]
John R. Anderson and Edward Skwarecki. 1986. The automated tutoring of introductory computer programming. Commun. ACM 29, 9 (1986), 842--849.
[8]
Paolo Antonucci. 2014. AutoTeach: Incremental Hints For Programming Exercises. Master’s thesis. ETH Zurich.
[9]
Paolo Antonucci, Christian Estler, Đurica Nikolić, Marco Piccioni, and Bertrand Meyer. 2015. An incremental hint system for automated programming assignments. In Innovation and Technology in Computer Science Education. 320--325.
[10]
David Arnow and Oleg Barshay. 1999. WebToTeach: An interactive focused programming exercise system. In Proceedings of the Frontiers in Education Conference, Vol. 1. 39--44.
[11]
Avron Barr and Marian Beard. 1976. An instructional interpreter for BASIC. ACM SIGCSE Bull. 8, 1 (1976), 325--334.
[12]
Avron Barr, Marian Beard, and Richard C. Atkinson. 1975. A rationale and description of a CAI program to teach the BASIC programming language. Instruct. Sci. 4, 1 (1975), 1--31.
[13]
Avron Barr, Marian Beard, and Richard C. Atkinson. 1976. The computer as a tutorial laboratory: The Stanford BIP project. Int. J. Man-Mach. Studies 8, 5 (1976), 567--596.
[14]
María Lucía Barrón-Estrada, Ramón Zatarain-Cabada, Francisco González Hernández, Raúl Oramas Bustillos, and Carlos A. Reyes-García. 2015. An affective and cognitive tutoring system for learning programming. In Advances in Artificial Intelligence and Its Applications. Vol. 9414, LNCS. 171--182.
[15]
Christoph Beierle, Marija Kulaš, and Manfred Widera. 2003. Automatic analysis of programming assignments. In DeLFI: Die 1. e-Learning Fachtagung Informatik. 144--153.
[16]
Christoph Beierle, Marija Kulaš, and Manfred Widera. 2004. Partial specifications of program properties. In Proceedings of the International Workshop on Teaching Logic Programming. 18--34.
[17]
Steve Benford, Edmund Burke, and Eric Foxley. 1993. Learning to construct quality software with the ceilidh system. Softw. Qual. J. 2, 3 (1993), 177--197.
[18]
Steve Benford, Edmund Burke, Eric Foxley, Neil Gutteridge, and Abdullah Mohd Zin. 1993. Early experiences of computer-aided assessment and administration when teaching computer programming. Res. Learn. Technol. 1, 2 (1993), 55--70.
[19]
Steve Benford, Edmund Burke, Eric Foxley, and Colin Higgins. 1995. The ceilidh system for the automatic grading of students on programming courses. In Proceedings of the ACM Southeast Conference. 176--182.
[20]
Jens Bennedsen and Michael E. Caspersen. 2007. Failure rates in introductory programming. ACM SIGCSE Bull. 39, 2 (2007), 32--36.
[21]
Michael Blumenstein, Steve Green, Shoshana Fogelman, Ann Nguyen, and Vallipuram Muthukkumarasamy. 2008. Performance analysis of GAME: A generic automated marking environment. Comput. Edu. 50 (2008), 1203--1216.
[22]
Michael Blumenstein, Steve Green, Ann Nguyen, and Vallipuram Muthukkumarasamy. 2004. GAME: A generic automated marking environment for programming assessment. In Proceedings of the Conference on Information Technology: Coding and Computing, Vol. 1. 212--216.
[23]
Jeffrey G. Bonar and Robert Cunningham. 1988. Bridge: Intelligent Tutoring with Intermediate Representations. Technical Report. Carnegie Mellon University, University of Pittsburgh.
[24]
David Boud and Elizabeth Molloy (Eds.). 2012. Feedback in Higher and Professional Education: Understanding it and Doing it Well. Routledge.
[25]
Peter Brusilovsky. 1992. Intelligent tutor, environment and manual for introductory programming. Innovat. Edu. Train. Int. 29, 1 (1992), 26--34.
[26]
Peter Brusilovsky, Stephen Edwards, Amruth Kumar, Lauri Malmi, Luciana Benotti, Duane Buck, Petri Ihantola, Rikki Prince, Teemu Sirkiä, Sergey Sosnovsky et al. 2014. Increasing adoption of smart learning content for computer science education. In Proceedings of the Working Group Reports of Innovation and Technology in Computer Science Education. 31--57.
[27]
Peter Brusilovsky and Gerhard Weber. 1996. Collaborative example selection in an intelligent example-based programming environment. In Proceedings of the Conference on Learning Sciences. 357--362.
[28]
Julio C. Caiza and Jose M. Del Alamo. 2013. Programming assignments automatic grading: Review of tools and implementations. In Proceedings of the International Technology, Education and Development Conference. 5691--5700.
[29]
Michael E. Caspersen and Jens Bennedsen. 2007. Instructional design of a programming course: A learning theoretic approach. In Proceedings of the Workshop on Computing Education Research. 111--122.
[30]
Kuo En Chang, Bea Chu Chiao, Sei Wang Chen, and Rong Shue Hsiao. 2000. A programming learning system for beginners—A completion strategy approach. IEEE Trans. Edu. 43, 2 (2000), 211--220.
[31]
Brenda Cheang, Andy Kurnia, Andrew Lim, and Wee-Chong Oon. 2003. On automated grading of programming assignments in an academic institution. Comput. Edu. 41, 2 (2003), 121--131.
[32]
Yam San Chee. 1995. Cognitive apprenticeship and its application to the teaching of Smalltalk in a multimedia interactive learning environment. Instruction. Sci. 23, 1-3 (1995), 133--161.
[33]
Koen Claessen and John Hughes. 2011. QuickCheck: A lightweight tool for random testing of Haskell programs. ACM SIGPLAN Notices 46, 4 (2011), 53--64.
[34]
Albert T. Corbett and John R. Anderson. 1993. Student modeling in an intelligent programming tutor. In Cognitive Models and Intelligent Environments for Learning Programming. Vol. 111. Springer, 135--144.
[35]
Albert T. Corbett and John R. Anderson. 1994. Knowledge tracing: Modeling the acquisition of procedural knowledge. User Model. User-Adapt. Interact. 4, 4 (1994), 253--278.
[36]
Albert T. Corbett and John R. Anderson. 2001. Locus of feedback control in computer-based tutoring: Impact on learning rate, achievement and attitudes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 245--252.
[37]
Albert T. Corbett, John R. Anderson, and Eric J. Patterson. 1990. Student modeling and tutoring flexibility in the lisp intelligent tutoring system. In Intelligent Tutoring Systems. Ablex, 83--106.
[38]
Tonci Dadic. 2011. Intelligent tutoring system for learning programming. In Intelligent Tutoring Systems in E-Learning Environments. IGI Global, 166--186.
[39]
Tonci Dadic, Slavomir Stankov, and Marko Rosic. 2008. Meaningful learning in the tutoring system for programming. In Proceedings of the Conference on Information Technology Interfaces. 483--488.
[40]
Ronald Lee Danielson. 1975. Pattie: An Automated Tutor for Top-down Programming. Ph.D. Dissertation. University of Illinois at Urbana-Champaign.
[41]
Leliane Nunes de Barros and Karina Valdivia Delgado. 2006. Model based diagnosis of student programs. In Proceedings of the Monet Workshop on Model-Based System at ECAI.
[42]
Draylson M. De Souza, Seiji Isotani, and Ellen F. Barbosa. 2015. Teaching novice programmers using ProgTest. Int. J. Knowl. Learn. 10, 1 (2015), 60--77.
[43]
Draylson M. De Souza, José Carlos Maldonado, and Ellen F. Barbosa. 2011. ProgTest: An environment for the submission and evaluation of programming assignments based on testing activities. In Proceedings of the IEEE Conference on Software Engineering Education and Training. 1--10.
[44]
Draylson M. De Souza, Bruno H. Oliveira, José C. Maldonado, Simone R. S. Souza, and Ellen F. Barbosa. 2014. Towards the use of an automatic assessment system in the teaching of software testing. In Proceedings of the Frontiers in Education Conference. 1--8.
[45]
Fadi P. Deek, Ki-Wang Ho, and Haider Ramadhan. 2000. A critical analysis and evaluation of web-based environments for program development. Internet Higher Edu. 3, 4 (2000), 223--269.
[46]
Fadi P. Deek and James A. McHugh. 1998. A survey and critical analysis of tools for learning programming. Comput. Sci. Edu. 8, 2 (1998), 130--178.
[47]
Christopher Douce, David Livingstone, and James Orwell. 2005. Automatic test-based assessment of programming: A review. J. Edu. Res. Comput. 5, 3 (2005).
[48]
Stephen H. Edwards. 2003. Improving student performance by evaluating how well students test their own programs. J. Edu. Res. Comput. 3, 3 (2003), 1--24.
[49]
Stephen H. Edwards and Manuel A. Pérez-Quiñones. 2007. Experiences using test-driven development with an automated grader. J. Comput. Sci. Colleges 22, 3 (2007), 44--50.
[50]
John English and Tammy English. 2015. Experiences of using automated assessment in computer science courses. J. Info. Technol. Edu.: Innovat. Pract. 14 (2015), 237--254.
[51]
Gregor Fischer and Jürgen Wolff von Gudenberg. 2006. Improving the quality of programming education by online assessment. In Proceedings of the Symposium on Principles and Practice of Programming in Java. 208--211.
[52]
Eric Foxley and Colin A. Higgins. 2001. The CourseMaster CBA system: Improvements over ceilidh. In Proceedings of the CAA Conference.
[53]
Timothy S. Gegg-Harrison. 1993. Exploiting Program Schemata in a Prolog Tutoring System. Ph.D. Dissertation. Duke University.
[54]
Alex Gerdes, Johan Jeuring, and Bastiaan Heeren. 2010. Using strategies for assessment of programming exercises. In Proceedings of the SIGCSE Technical Symposium on Computer Science Education. 441--445.
[55]
Alex Gerdes, Johan Jeuring, and Bastiaan Heeren. 2012. An interactive functional programming tutor. In Innovation and Technology in Computer Science Education. 250--255.
[56]
Moumita Ghosh, Brijesh Kumar Verma, and Anne T. Nguyen. 2002. An automatic assessment marking and plagiarism detection. In Proceedings of the Conference on Information Technology and Applications.
[57]
Michael Goedicke, Michael Striewe, and Moritz Balz. 2008. Computer Aided Assessments and Programming Exercises with JACK. Technical Report 28. ICB, University Duisburg-Essen.
[58]
Mercedes Gómez-Albarrán. 2005. The teaching and learning of programming: A survey of supporting software tools. Comput. J. 48, 2 (2005), 130--144.
[59]
Olly Gotel, Christelle Scharff, and Andrew Wildenberg. 2008. Teaching software quality assurance by encouraging student contributions to an open source web-based system for the assessment of programming assignments. ACM SIGCSE Bull. 40, 3 (2008), 214--218.
[60]
Olly Gotel, Christelle Scharff, Andrew Wildenberg, Mamadou Bousso, Chim Bunthoeurn, Phal Des, Vidya Kulkarni, Srisupa Palakvangsa Na Ayudhya, Cheikh Sarr, and Thanwadee Sunetnanta. 2008. Global perceptions on the use of WeBWorK as an online tutor for computer science. In Proceedings of the Frontiers in Education Conference. 5--10.
[61]
Paul Gross and Kris Powers. 2005. Evaluating assessments of novice programming environments. In Proceedings of the International Workshop on Computing Education Research. 99--110.
[62]
Sebastian Gross, Bassam Mokbel, Barbara Hammer, and Niels Pinkwart. 2015. Learning feedback in intelligent tutoring systems. Künstliche Intelligenz 29, 4 (2015), 413--418.
[63]
Sebastian Gross, Bassam Mokbel, Benjamin Paassen, Barbara Hammer, and Niels Pinkwart. 2014. Example-based feedback provision using structured solution spaces. Int. J. Learn. Technol. 9, 3 (2014), 248--280.
[64]
Sebastian Gross and Niels Pinkwart. 2015. Towards an integrative learning environment for java programming. In Proceedings of the IEEE Conference on Advanced Learning Technologies. 24--28.
[65]
Sumit Gulwani, Ivan Radiček, and Florian Zuleger. 2014. Feedback generation for performance problems in introductory programming assignments. In Proceedings of the SIGSOFT International Symposium on Foundations of Software Engineering. 41--51.
[66]
Mark Guzdial. 2004. Programming environments for novices. In Computer Science Education Research, Sally Fincher and Marian Petre (Eds.). CRC Press, 127--154.
[67]
Budi Hartanto. 2014. Incorporating Anchored Learning in a C# Intelligent Tutoring System. Ph.D. Dissertation. Queensland University of Technology.
[68]
Budi Hartanto and Jim Reye. 2013. CSTutor: An intelligent tutoring system that supports natural learning. In Proceedings of the Conference on Computer Science Education Innovation and Technology. 19--26.
[69]
Helen M. Hasan. 1988. Assessment of student programming assignments in COBOL. Edu. Comput. 4 (1988), 99--107.
[70]
John Hattie and Helen Timperley. 2007. The power of feedback. Rev. Edu. Res. 77, 1 (2007), 81--112.
[71]
Yu He, Mitsuru Ikeda, and Riichiro Mizoguchi. 1994. Helping novice programmers bridge the conceptual gap. In Proceedings of the Conference on Expert Systems for Development. 192--197.
[72]
Michael T. Helmick. 2007. Interface-based programming assignments and automatic grading of Java programs. ACM SIGCSE Bull. 39, 3 (2007), 63--67.
[73]
Colin A. Higgins, Geoffrey Gray, Pavlos Symeonidis, and Athanasios Tsintsifas. 2005. Automated assessment and experiences of teaching programming. J. Edu. Res. Comput. 5, 3 (2005).
[74]
Colin A. Higgins and Fatima Z. Mansouri. 2000. PRAM: A courseware system for the automatic assessment of AI programs. In Innovative Teaching and Learning. Vol. 1. Springer, 311--329.
[75]
Colin A. Higgins, Pavlos Symeonidis, and Athanasios Tsintsifas. 2002. The marking system for CourseMaster. ACM SIGCSE Bull. 34, 3 (2002), 46--50.
[76]
Jay Holland, Antonija Mitrovic, and Brent Martin. 2009. J-LATTE: A constraint-based tutor for Java. In Proceedings of the Conference on Computers in Education. 142--146.
[77]
Jun Hong. 2004. Guided programming and automated error analysis in an intelligent Prolog tutor. Int. J. Hum.-Comput. Studies 61, 4 (2004), 505--534.
[78]
David Hovemeyer and William Pugh. 2004. Finding bugs is easy. ACM SIGPLAN Notices 39, 12 (2004), 92--106.
[79]
Petri Ihantola, Tuukka Ahoniemi, Ville Karavirta, and Otto Seppälä. 2010. Review of recent systems for automatic assessment of programming assignments. In Proceedings of the Koli Calling International Conference on Computing Education Research. 86--93.
[80]
Carlo Innocenti, Claudio Massucco, Donatella Persico, and Luigi Sarti. 1991. Ugo: An intelligent tutoring system for prolog. In Proceedings of the PEG Conference on Knowledge Based Environments for Teaching and Learning. 322--329.
[81]
David Jackson. 1996. A software system for grading student computer programs. Comput. Edu. 27, 3--4 (1996), 171--180.
[82]
David Jackson. 2000. A semi-automated approach to online assessment. ACM SIGCSE Bull. 32, 3 (2000), 164--167.
[83]
David Jackson and Michelle Usher. 1997. Grading student programs using ASSYST. ACM SIGCSE Bull. 29, 1 (1997), 335--339.
[84]
Johan Jeuring, L. Thomas van Binsbergen, Alex Gerdes, and Bastiaan Heeren. 2014. Model solutions and properties for diagnosing student programs in Ask-Elle. In Proceedings of the Computer Science Education Research Conference. 31--40.
[85]
Wei Jin, Tiffany Barnes, and John Stamper. 2012. Program representation for automatic hint generation for a data-driven novice programming tutor. In Intelligent Tutoring Systems. Springer, 304--309.
[86]
Wei Jin, Albert Corbett, Will Lloyd, Lewis Baumstark, and Christine Rolka. 2014. Evaluation of guided-planning and assisted-coding with task relevant dynamic hinting. In Intelligent Tutoring Systems. Springer, 318--328.
[87]
W. Lewis Johnson. 1990. Understanding and debugging novice programs. Artific. Intell. 42, 1 (1990), 51--97.
[88]
W. Lewis Johnson and Elliot Soloway. 1984. Intention-based diagnosis of novice programming errors. In Proceedings of the AAAI Conference. 162--168.
[89]
W. Lewis Johnson and Elliot Soloway. 1985. PROUST: Knowledge-based program understanding. IEEE Trans. Softw. Eng. 11, 3 (1985), 267--275.
[90]
Joint Task Force on Computing Curricula, ACM and IEEE Computer Society. 2013. Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science. ACM.
[91]
Francisco Jurado, Miguel Redondo, and Manuel Ortega. 2012. Using fuzzy logic applied to software metrics and test cases to assess programming assignments and give advice. J. Netw. Comput. Appl. 35, 2 (2012), 695--712.
[92]
Francisco Jurado, Miguel Redondo, and Manuel Ortega. 2014. eLearning standards and automatic assessment in a distributed eclipse based environment for learning computer programming. Comput. Appl. Eng. Edu. 22, 4 (2014), 774--787.
[93]
Sokratis Karkalas and Sergio Gutierrez-Santos. 2014. Enhanced javascript learning using code quality tools and a rule-based system in the FLIP exploratory learning environment. In Proceedings of the IEEE Conference on Advanced Learning Technologies. 84--88.
[94]
Caitlin Kelleher and Randy Pausch. 2005. Lowering the barriers to programming: A taxonomy of programming environments and languages for novice programmers. Comput. Surveys 37, 2 (2005), 83--137.
[95]
Hieke Keuning, Bastiaan Heeren, and Johan Jeuring. 2014. Strategy-based feedback in a programming tutor. In Proceedings of the Computer Science Education Research Conference. 43--54.
[96]
Hieke Keuning, Johan Jeuring, and Bastiaan Heeren. 2016. Towards a systematic review of automated feedback generation for programming exercises. In Innovation and Technology in Computer Science Education. ACM, 41--46.
[97]
Hieke Keuning, Johan Jeuring, and Bastiaan Heeren. 2016. Towards a Systematic Review of Automated Feedback Generation for Programming Exercises—Extended Version. Technical Report UU-CS-2016-001.
[98]
Seon-Man Kim and Jin H. Kim. 1998. A hybrid approach for program understanding based on graph parsing and expectation-driven analysis. Appl. Artific. Intell. 12, 6 (1998), 521--546.
[99]
Barbara Kitchenham and Stuart Charters. 2007. Guidelines for performing systematic literature reviews in software engineering. Technical Report EBSE-2007-01.
[100]
Hikyoo Koh and Daniel Ming-Jen Wu. 1988. Goal-directed semantic tutor. In Proceedings of the Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems. 171--176.
[101]
Carsten Köllmann and Michael Goedicke. 2006. Automation of java code analysis for programming exercises. In Proceedings of the Workshop on Graph Based Tools, Electronic Communications of the EASST, Vol. 1. 1--12.
[102]
Carsten Köllmann and Michael Goedicke. 2008. A specification language for static analysis of student exercises. In Proceedings of the Conference on Automated Software Engineering. 355--358.
[103]
Utku Kose and Omer Deperlioglu. 2012. Intelligent learning environments within blended learning for ensuring effective C programming course. Int. J. Artific. Intell. Appl. 3, 1 (2012), 105--124.
[104]
Angelo Kyrilov and David C. Noelle. 2015. Binary instant feedback on programming exercises can reduce student engagement and promote cheating. In Proceedings of the Koli Calling International Conference on Computing Education Research. 122--126.
[105]
H. Chad Lane and Kurt VanLehn. 2005. Teaching the tacit knowledge of programming to novices with natural language tutoring. Comput. Sci. Edu. 15, 3 (2005), 183--201.
[106]
Timotej Lazar and Ivan Bratko. 2014. Data-driven program synthesis for hint generation in programming tutors. In Intelligent Tutoring Systems. Springer, 306--311.
[107]
Nguyen-Thinh Le. 2016. A classification of adaptive feedback in educational systems for programming. Systems 4, 2 (2016).
[108]
Nguyen-Thinh Le and Wolfgang Menzel. 2006. Problem solving process oriented diagnosis in logic programming. In Proceedings of the Conference on Computers in Education. 63--70.
[109]
Nguyen-Thinh Le, Wolfgang Menzel, and Niels Pinkwart. 2009. Evaluation of a constraint-based homework assistance system for logic programming. In Proceedings of the Conference on Computers in Education. 51--58.
[110]
Nguyen-Thinh Le and Niels Pinkwart. 2011. Adding weights to constraints in intelligent tutoring systems: Does it improve the error diagnosis? In Towards Ubiquitous Learning. LNCS 6964. 233--247.
[111]
Nguyen-Thinh Le and Niels Pinkwart. 2011. INCOM: A web-based homework coaching system for logic programming. In Proceedings of the Conference on Cognition and Exploratory Learning in Digital Age. 43--50.
[112]
Nguyen-Thinh Le and Niels Pinkwart. 2014. Towards a classification for programming exercises. In Proceedings of the Workshop on AI-supported Education for Computer Science. 51--60.
[113]
Nguyen-Thinh Le, Sven Strickroth, Sebastian Gross, and Niels Pinkwart. 2013. A review of AI-supported tutoring approaches for learning programming. In Advanced Computational Methods for Knowledge Engineering. Springer, 267--279.
[114]
Chee-Kit Looi. 1991. Automatic debugging of prolog programs in a prolog intelligent tutoring system. Instruct. Sci. 20, 2--3 (1991), 215--263.
[115]
Susan Lowes. 2007. Online Teaching and Classroom Change: The Impact of Virtual High School on Its Teachers and Their Schools. Technical Report. Columbia University, Institute for Learning Technologies.
[116]
Cara MacNish. 2000. Java facilities for automating analysis, feedback and assessment of laboratory work. Comput. Sci. Edu. 10, 2 (2000), 147--163.
[117]
Cara MacNish. 2002. Machine learning and visualisation techniques for inferring logical errors in student code submissions. In Proceedings of the IEEE Conference on Advanced Learning Technologies. 317--321.
[118]
Tim A. Majchrzak and Claus A. Usener. 2013. Evaluating the synergies of integrating e-assessment and software testing. In Information Systems Development. Springer, New York, NY, 179--193.
[119]
Amit Kumar Mandal, Chittaranjan Mandal, and Chris Reade. 2007. A system for automatic evaluation of programs for correctness and performance. In Proceedings of the Conferences on Web Information Systems and Technologies 2005 and 2006. 367--380.
[120]
Fatima Z. Mansouri, Cleveland A. Gibbon, and Colin A. Higgins. 1998. PRAM: Prolog automatic marker. In Innovation and Technology in Computer Science Education. ACM, 166--170.
[121]
Roozbeh Matloobi, Michael Blumenstein, and Steve Green. 2007. An enhanced generic automated marking environment: GAME-2. IEEE Multidisc. Eng. Edu. Mag. 2, 2 (2007), 55--60.
[122]
Roozbeh Matloobi, Michael Blumenstein, and Steve Green. 2009. Extensions to generic automated marking environment: Game-2+. In Proceedings of the Interactive Computer Aided Learning Conference, Vol. 1. 1069--1076.
[123]
Gordon McCalla, Richard Bunt, and Janelle Harms. 1986. The design of the SCENT automated advisor. Comput. Intell. 2, 1 (1986), 76--92.
[124]
Gordon McCalla, Jim Greer, Bryce Barrie, and Paul Pospisil. 1992. Granularity hierarchies. Comput. Math. Appl. 23, 2--5 (1992), 363--375.
[125]
Michael McCracken, Vicki Almstrum, Danny Diaz, Mark Guzdial, Dianne Hagan, Yifat Ben-David Kolikant, Cary Laxer, Lynda Thomas, Ian Utting, and Tadeusz Wilusz. 2001. A multi-national, multi-institutional study of assessment of programming skills of first-year CS students. In Proceedings of the Working Group Reports of Innovation and Technology in Computer Science Education. 125--180.
[126]
Jean McKendree, Bob Radlinski, and Michael E. Atwood. 1992. The grace tutor: A qualified success. In Intelligent Tutoring Systems. Springer, 677--684.
[127]
Douglas C. Merrill, Brian J. Reiser, Michael Ranney, and J. Gregory Trafton. 1992. Effective tutoring techniques: A comparison of human tutors and intelligent tutoring systems. J. Learn. Sci. 2, 3 (1992), 277--305.
[128]
Antonija Mitrovic, Kenneth Koedinger, and Brent Martin. 2003. A comparative analysis of cognitive tutoring and constraint-based modeling. In User Modeling. Springer, 313--322.
[129]
Joseph Moghadam, Rohan Roy Choudhury, HeZheng Yin, and Armando Fox. 2015. AutoStyle: Toward coding style feedback at scale. In Proceedings of the ACM Conference on Learning at Scale. 261--266.
[130]
William R. Murray. 1987. Automatic program debugging for intelligent tutoring systems. Comput. Intell. 3, 1 (1987), 1--16.
[131]
Susanne Narciss. 2008. Feedback strategies for interactive learning tasks. Handbook of Research on Educational Communications and Technology. Routledge, 125--144.
[132]
Peter Naur. 1964. Automatic grading of student’s ALGOL programming. BIT Numer. Math. 4, 3 (1964), 177--188.
[133]
Andy Nguyen, Christopher Piech, Jonathan Huang, and Leonidas Guibas. 2014. Codewebs: Scalable homework search for massive open online programming courses. In Proceedings of the Conference on World Wide Web. 491--502.
[134]
Elizabeth Odekirk-Hash and Joseph L. Zachary. 2001. Automated feedback on programs means students need less help from teachers. ACM SIGCSE Bull. 33, 1 (2001), 55--59.
[135]
Claudia Ott, Anthony Robins, and Kerry Shephard. 2016. Translating principles of effective feedback for students into the CS1 context. ACM Trans. Comput. Edu. 16, 1, Article 1 (2016), 27 pages.
[136]
Martin Pärtel, Matti Luukkainen, Arto Vihavainen, and Thomas Vikberg. 2013. Test my code. Int. J. Technol. Enhanced Learn. 5, 3--4 (2013), 271--283.
[137]
Arnold Pears, Stephen Seidman, Lauri Malmi, Linda Mannila, Elizabeth Adams, Jens Bennedsen, Marie Devlin, and James Paterson. 2007. A survey of literature on the teaching of introductory programming. ACM SIGCSE Bull. 39, 4 (2007), 204--223.
[138]
Daniel Perelman, Judith Bishop, Sumit Gulwani, and Dan Grossman. 2015. Automated Feedback and Recognition through Data Mining in Code Hunt, MSR-TR-2015-57. Technical Report. Microsoft Research.
[139]
Daniel Perelman, Sumit Gulwani, and Dan Grossman. 2014. Test-driven synthesis for automated feedback for introductory computer science assignments. In Proceedings of the Workshop on Data Mining for Educational Assessment and Feedback.
[140]
Raymond Pettit and James Prather. 2017. Automated assessment tools: Too many cooks, not enough collaboration. J. Comput. Sci. Colleges 32, 4 (2017), 113--121.
[141]
Christoph Peylo, Tobias Thelen, Claus Rollinger, and Helmar Gust. 2000. A web-based intelligent educational system for PROLOG. In Proceedings of the Workshop on Adaptive and Intelligent Web-Based Education Systems, ITS. 70--80.
[142]
Nelishia Pillay. 2003. Developing intelligent programming tutors for novice programmers. ACM SIGCSE Bull. 35, 2 (2003), 78--82.
[143]
Yusuf Pisan, Debbie Richards, Anthony Sloane, Helena Koncek, and Simon Mitchell. 2002. Submit! A web-based system for automatic program critiquing. In Proceedings of the Australasian Conference on Computing Education. 59--68.
[144]
Ivan Pribela, Mirjana Ivanović, and Zoran Budimac. 2011. System for testing different kinds of students’ programming assignments. In Proceedings of the Conference on Information Technology.
[145]
Yizhou Qian and James Lehman. 2017. Students’ misconceptions and other difficulties in introductory programming: A literature review. ACM Trans. Comput. Edu. 18, 1 (2017), 1.
[146]
Rob Radlinski and Jean McKendree. 1992. Grace meets the real world: Tutoring COBOL as a second language. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 343--350.
[147]
Khirulnizam Abd Rahman and Md Jan Nordin. 2007. A review on the static analysis approach in the automated programming assessment systems. In Proceedings of the National Conference on Programming.
[148]
Haider A. Ramadhan, Fadi Deek, and Khalil Shihab. 2001. Incorporating software visualization in the design of intelligent diagnosis systems for user programming. Artific. Intell. Rev. 16, 1 (2001), 61--84.
[149]
Kelly Rivers and Kenneth Koedinger. 2015. Data-driven hint generation in vast solution spaces: A self-improving python programming tutor. Int. J. Artific. Intell. Edu. 27, 1 (2015), 37--64.
[150]
Juan Carlos Rodríguez-del Pino, Enrique Rubio-Royo, and Zenón Hernández-Figueroa. 2012. A virtual programming lab for Moodle with automatic assessment and anti-plagiarism features. In Proceedings of the Conference on e-Learning, e-Business, Enterprise Information Systems, and e-Government.
[151]
Rohaida Romli, Shahida Sulaiman, and Kamal Zuhairi Zamli. 2010. Automatic programming assessment and test data generation. In Proceedings of the International Symposium in Information Technology. 1186--1192.
[152]
Tammy Rosenthal, Patrick Suppes, and Nava Ben-Zvi. 2002. Automated evaluation methods with attention to individual differences—A study of a computer-based course in C. In Proceedings of the Frontiers in Education Conference, Vol. 1. 7--12.
[153]
Gregory R. Ruth. 1976. Intelligent program analysis. Artific. Intell. 7 (1976), 65--85.
[154]
Warren Sack. 1992. Knowledge base compilation and the language design game. In Intelligent Tutoring Systems. 225--233.
[155]
Warren Sack and Elliot Soloway. 1992. From PROUST to CHIRON: ITS design as iterative engineering; intermediate results are important! Comput.-Assist. Instruct. Intell. Tutor. Syst.: Shared Goals Complement. Approaches (1992), 239--274.
[156]
Riku Saikkonen, Lauri Malmi, and Ari Korhonen. 2001. Fully automatic assessment of programming exercises. In ACM SIGCSE Bulletin, Vol. 33. 133--136.
[157]
Joseph A. Sant. 2009. Mailing it in: Email-centric automated assessment. ACM SIGCSE Bull. 41, 3 (2009), 308--312.
[158]
Steven C. Shaffer. 2005. Ludwig: An online programming tutoring and assessment system. ACM SIGCSE Bull. 37, 2 (2005), 56--60.
[159]
Goran Shimic and Aleksandar Jevremovic. 2012. Problem-based learning in formal and informal learning environments. Interact. Learn. Environ. 20, 4 (2012), 351--367.
[160]
Valerie J. Shute. 2008. Focus on formative feedback. Rev. Edu. Res. 78, 1 (2008), 153--189.
[161]
Rishabh Singh, Sumit Gulwani, and Armando Solar-Lezama. 2013. Automated feedback generation for introductory programming assignments. ACM SIGPLAN Notices 48, 6 (2013), 15--26.
[162]
Matthew Z. Smith and Joseph J. Ekstrom. 2004. String of perls: Using perl to teach perl. In Proceedings of the ASEE Annual Conference and Exposition.
[163]
Elliot Soloway, Eric Rubin, Beverly Woolf, Jeffrey Bonar, and W. Lewis Johnson. 1983. Meno-II: An AI-based programming tutor. J. Comput.-Based Instruct. 10, 1 (1983).
[164]
J. S. Song, S. H. Hahn, K. Y. Tak, and J. H. Kim. 1997. An intelligent tutoring system for introductory C language course. Comput. Edu. 28, 2 (1997), 93--102.
[165]
Juha Sorva, Ville Karavirta, and Lauri Malmi. 2013. A review of generic program visualization systems for introductory programming education. ACM Trans. Comput. Edu. 13, 4 (2013), 1--64.
[166]
Jaime Spacco, David Hovemeyer, William Pugh, Fawzi Emad, Jeffrey K. Hollingsworth, and Nelson Padua-Perez. 2006. Experiences with marmoset: Designing and using an advanced submission and testing system for programming courses. ACM SIGCSE Bull. 38, 3 (2006), 13--17.
[167]
Michael Striewe, Moritz Balz, and Michael Goedicke. 2009. A flexible and modular software architecture for computer aided assessments and automated marking. Proceedings of the Conference on Computer Supported Education 2 (2009), 54--61.
[168]
Michael Striewe and Michael Goedicke. 2011. Using run time traces in automated programming tutoring. In Innovation and Technology in Computer Science Education. ACM, 303--307.
[169]
Michael Striewe and Michael Goedicke. 2014. A review of static analysis approaches for programming exercises. In Computer Assisted Assessment. Research into E-Assessment. Springer, 100--113.
[170]
Ryo Suzuki, Gustavo Soares, Elena Glassman, Andrew Head, Loris D’Antoni, and Björn Hartmann. 2017. Exploring the design space of automatically synthesized hints for introductory programming assignments. In Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems. 2951--2958.
[171]
Edward Sykes. 2005. Qualitative evaluation of the java intelligent tutoring system. J. System., Cybernet. Info. 3, 5 (2005), 49--60.
[172]
Edward Sykes. 2010. Design, development and evaluation of the java intelligent tutoring system. Technol., Instruct., Cogn. Learn. 8, 1 (2010), 25--65.
[173]
Gareth Thorburn and Glenn Rowe. 1997. PASS: An automated system for program assessment. Comput. Edu. 29, 4 (1997), 195--206.
[174]
Nikolai Tillmann, Jonathan de Halleux, Tao Xie, Sumit Gulwani, and Judith Bishop. 2013. Teaching and learning programming and software engineering via interactive gaming. In Proceedings of the Conference on Software Engineering. 1117--1126.
[175]
Nghi Truong, Peter Bancroft, and Paul Roe. 2005. Learning to program through the web. ACM SIGCSE Bull. 37, 3 (2005), 9--13.
[176]
Nghi Truong, Paul Roe, and Peter Bancroft. 2004. Static analysis of students’ Java programs. In Proceedings of the Australasian Conference on Computing Education, Vol. 30. 317--325.
[177]
Haruki Ueno. 2000. A generalized knowledge-based approach to comprehend pascal and C programs. IEICE Trans. Info. Syst. 83, 4 (2000), 591--598.
[178]
Miguel Ulloa. 1980. Teaching and learning computer programming: A survey of student problems, teaching methods, and automated instructional tools. ACM SIGCSE Bull. 12, 2 (1980), 48--64.
[179]
Alexandria Katarina Vail and Kristy Elizabeth Boyer. 2014. Identifying effective moves in tutoring: On the refinement of dialogue act annotation schemes. In Intelligent Tutoring Systems. Springer, 199--209.
[180]
Jeroen Van Merriënboer and Marcel De Croock. 1992. Strategies for computer-based programming instruction: Program completion vs. program generation. J. Edu. Comput. Res. 8, 3 (1992), 365--394.
[181]
Kurt VanLehn. 2006. The behavior of tutoring systems. Int. J. Artific. Intell. Edu. 16, 3 (2006), 227--265.
[182]
Philip Vanneste, Koen Bertels, and Bart De Decker. 1996. The use of reverse engineering to analyse student computer programs. Instruct. Sci. 24 (1996), 197--221.
[183]
Anne Venables and Liz Haywood. 2003. Programming students NEED instant feedback! In Proceedings of the Australasian Conference on Computing Education, Vol. 20. 267--272.
[184]
Arto Vihavainen, Thomas Vikberg, Matti Luukkainen, and Martin Pärtel. 2013. Scaffolding students’ learning using test my code. In Innovation and Technology in Computer Science Education. ACM, 117--122.
[185]
Aurora Vizcaíno. 2005. A simulated student can improve collaborative learning. Int. J. Artific. Intell. Edu. 15 (2005), 3--40.
[186]
Aurora Vizcaíno, Juan Contreras, Jesús Favela, and Manuel Prieto. 2000. An adaptive, collaborative environment to develop good habits in programming. In Intelligent Tutoring Systems. LNCS 1839. 262--271.
[187]
Milena Vujošević-Janičić, Mladen Nikolić, Dušan Tošić, and V. Kuncak. 2013. Software verification and graph similarity for automated evaluation of students’ assignments. Info. Softw. Technol. 55, 6 (2013), 1004--1016.
[188]
Tiantian Wang, Xiaohong Su, Peijun Ma, Yuying Wang, and Kuanquan Wang. 2011. Ability-training-oriented automated assessment in introductory programming course. Comput. Edu. 56, 1 (2011), 220--226.
[189]
Gerhard Weber. 1996. Episodic learner modeling. Cogn. Sci. 20, 2 (1996), 195--236.
[190]
Gerhard Weber and Peter Brusilovsky. 2001. ELM-ART: An adaptive versatile system for web-based instruction. Int. J. Artific. Intell. Edu. 12 (2001), 351--384.
[191]
Gerhard Weber and Marcus Specht. 1997. User modeling and adaptive navigation support in WWW-based tutoring systems. In Proceedings of the Conference on User Modeling. 289--300.
[192]
Dinesha Weragama. 2013. Intelligent Tutoring System for Learning PHP. Ph.D. Dissertation. Queensland University of Technology.
[193]
Dinesha Weragama and Jim Reye. 2014. Analysing student programs in the PHP intelligent tutoring system. Int. J. Artific. Intell. Edu. 24, 2 (2014), 162--188.
[194]
Weimin Wu, Guangqiang Li, Yinai Sun, Jing Wang, and Tianwu Lai. 2007. AnalyseC: A framework for assessing students’ programs at structural and semantic level. In Proceedings of the Conference on Control and Automation. 742--747.
[195]
Songwen Xu and Yam San Chee. 2003. Transformation-based diagnosis of student programs for programming tutoring systems. IEEE Trans. Softw. Eng. 29, 4 (2003), 360--384.
[196]
Cheng Yongqing, Hu Qing, and Yang Jingyu. 1988. An expert system for education: IPTS. In Proceedings of the Conference on Systems, Man, and Cybernetics. 930--933.


        Published In

        ACM Transactions on Computing Education  Volume 19, Issue 1
        March 2019
        156 pages
        EISSN:1946-6226
        DOI:10.1145/3282284

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 28 September 2018
        Accepted: 01 May 2018
        Revised: 01 April 2018
        Received: 01 January 2018
        Published in TOCE Volume 19, Issue 1


        Author Tags

        1. Systematic literature review
        2. automated feedback
        3. learning programming
        4. programming tools

        Qualifiers

        • Research-article
        • Research
        • Refereed

        Funding Sources

        • Netherlands Organisation for Scientific Research (NWO)

        Cited By

• (2024) An Image-Based User Interface Testing Method for Flutter Programming Learning Assistant System. Information 15:8 (464), DOI 10.3390/info15080464. Online publication date: 3-Aug-2024
• (2024) ASSIST: Automated Feedback Generation for Syntax and Logical Errors in Programming Exercises. Proceedings of the 2024 ACM SIGPLAN International Symposium on SPLASH-E, 66-76, DOI 10.1145/3689493.3689981. Online publication date: 17-Oct-2024
• (2024) Non-Expert Programmers in the Generative AI Future. Proceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction for Work, 1-19, DOI 10.1145/3663384.3663393. Online publication date: 25-Jun-2024
• (2024) Combining LLM-Generated and Test-Based Feedback in a MOOC for Programming. Proceedings of the Eleventh ACM Conference on Learning @ Scale, 177-187, DOI 10.1145/3657604.3662040. Online publication date: 9-Jul-2024
• (2024) CodeTailor: LLM-Powered Personalized Parsons Puzzles for Engaging Support While Learning Programming. Proceedings of the Eleventh ACM Conference on Learning @ Scale, 51-62, DOI 10.1145/3657604.3662032. Online publication date: 9-Jul-2024
• (2024) Integrating Automated Feedback into a Creative Coding Course. Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 2, 799-799, DOI 10.1145/3649405.3659490. Online publication date: 8-Jul-2024
• (2024) Scalable Feedback for Student Live Coding in Large Courses Using Automatic Error Grouping. Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1, 499-505, DOI 10.1145/3649217.3653620. Online publication date: 3-Jul-2024
• (2024) Open Source Language Models Can Provide Feedback: Evaluating LLMs' Ability to Help Students Using GPT-4-As-A-Judge. Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1, 52-58, DOI 10.1145/3649217.3653612. Online publication date: 3-Jul-2024
• (2024) Improving Student Learning with Automated Assessment. Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1, 464-470, DOI 10.1145/3649217.3653603. Online publication date: 3-Jul-2024
• (2024) Feedback-Generation for Programming Exercises With GPT-4. Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1, 31-37, DOI 10.1145/3649217.3653594. Online publication date: 3-Jul-2024
