Explaining program code: giving students the answer helps - but only just

Published: 08 August 2011

Abstract

Of the students who pass introductory programming courses, many appear unable to explain the purpose of simple code fragments such as a loop to find the greatest element in an array. It has never been established whether this is because the students are unable to determine the purpose of the code or because they can determine the purpose but lack the ability to express that purpose. This study explores that question by comparing the answers of students in several offerings of an introductory programming course. In the earlier offerings students were asked to express the purpose in their own words; in the later offerings they were asked to choose the purpose from several options in a multiple-choice question. At an overseas campus, students performed significantly better on the multiple-choice version of the question; at a domestic campus, performance was better, but not significantly so. Many students were unable to identify the correct purpose of small fragments of code when given that purpose and some alternatives. The conclusion is that students' failure to perform well in code-explaining questions is not because they cannot express the purpose of the code, but because they are truly unable to determine the purpose of the code - or even to recognize it from a short list.
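For illustration, the kind of fragment the abstract refers to, a loop that finds the greatest element of an array, might look like the following sketch. This is not taken from the study's instruments; the function name, the use of Python, and the comment are hypothetical.

```python
def mystery(values):
    """A typical code-explaining target: students are asked what this does."""
    result = values[0]
    for v in values[1:]:
        if v > result:
            result = v  # keep the largest value seen so far
    return result
```

The intended "explain in plain English" answer is simply that the function returns the largest element of the list, stated as a purpose rather than as a line-by-line description of the code.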



Published In

ICER '11: Proceedings of the Seventh International Workshop on Computing Education Research
August 2011, 156 pages
ISBN: 9781450308298
DOI: 10.1145/2016911
General Chair: Kate Sanders
Program Chairs: Michael E. Caspersen, Alison Clear, Kate Sanders

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

assessment; code-explaining questions; introductory programming; multiple choice

          Qualifiers

          • Research-article

Conference

ICER '11: International Computing Education Research Workshop
August 8-9, 2011, Providence, Rhode Island, USA

          Acceptance Rates

          Overall Acceptance Rate 189 of 803 submissions, 24%



Cited By

• (2023) "There is no ambiguity on what to return": Investigating the Prevalence of SQL Misconceptions. Proceedings of the 23rd Koli Calling International Conference on Computing Education Research, 1-12. DOI: 10.1145/3631802.3631821
• (2022) Predicting Student Success in CS2. Proceedings of the 53rd ACM Technical Symposium on Computer Science Education (Volume 1), 140-146. DOI: 10.1145/3478431.3499276
• (2021) Looking at the main Method: An Educator's Perspective. Proceedings of the 21st Koli Calling International Conference on Computing Education Research, 1-10. DOI: 10.1145/3488042.3488068
• (2021) Refute: An Alternative to 'Explain in Plain English' Questions. Proceedings of the 17th ACM Conference on International Computing Education Research, 438-440. DOI: 10.1145/3446871.3469791
• (2021) Trade-offs for Substituting a Human with an Agent in a Pair Programming Context: The Good, the Bad, and the Ugly. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1-20. DOI: 10.1145/3411764.3445659
• (2020) Engaging Students with Instructor Solutions in Online Programming Homework. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1-7. DOI: 10.1145/3313831.3376857
• (2020) When Does Scaffolding Provide Too Much Assistance? A Code-Tracing Tutor Investigation. International Journal of Artificial Intelligence in Education 31(4), 784-819. DOI: 10.1007/s40593-020-00217-z
• (2019) On the Frequency of Words Used in Answers to Explain in Plain English Questions by Novice Programmers. Proceedings of the Twenty-First Australasian Computing Education Conference, 11-20. DOI: 10.1145/3286960.3286962
• (2016) Benchmarking Introductory Programming Exams. Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education, 154-159. DOI: 10.1145/2899415.2899473
• (2015) Benefits of Self-explanation in Introductory Programming. Proceedings of the 46th ACM Technical Symposium on Computer Science Education, 284-289. DOI: 10.1145/2676723.2677260
