Identifying Difficult Questions and Student Difficulties in a Spanish Version of a Programming Assessment Instrument (SCS1)

Published: 19 August 2024

Abstract

Currently, there is little evidence about how non-English-speaking students learn computer programming. For example, few validated assessment instruments exist to measure the development of programming skills, especially for the Spanish-speaking population. Valid assessment instruments are essential to identify students’ difficulties, make formative decisions, and evaluate the impact of pedagogical interventions. This article presents the outcomes of the early steps toward a Spanish validation of the SCS1 assessment, derived from the first concept inventory to measure programming knowledge decoupled from any particular language. The goal of this initial step is twofold: (1) identifying the questions in this version that are too difficult and might need to be adapted, based on empirical evidence of how students answer them, and (2) identifying students’ common difficulties with the most challenging questions. The study started with a “think-aloud” protocol to validate whether the Spanish version was comparable to the original instrument and whether students understood the translation as intended. The instrument was then administered to a sample of 71 university students who had previously taken an introductory programming course using Python. We computed the difficulty index for each question, identified the difficulties students faced in answering them, and propose potential improvements for these items. The main contributions of this study include (1) progress toward the validation of an instrument for evaluating programming skills in Spanish, (2) evidence of the advantage that students with prior Python experience have in completing the instrument, (3) identification of students’ tendency to translate the pseudo-code from Spanish into English, and (4) identification of common mistakes, such as misunderstanding the scope of local variables or confusing returning a value with printing it.
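The difficulty index mentioned in the abstract is a standard classical-test-theory statistic: the proportion of respondents who answer an item correctly. A minimal sketch of that computation, using hypothetical response data rather than the study’s actual dataset:

```python
def difficulty_index(responses):
    """Classical test theory item difficulty: p = (# correct) / (# respondents).

    responses: list of 0/1 scores for one item across all students.
    """
    return sum(responses) / len(responses)

# Hypothetical example: 71 students, of whom 18 answer the item correctly.
item_scores = [1] * 18 + [0] * 53
p = difficulty_index(item_scores)
print(round(p, 2))  # prints 0.25
```

By convention, a lower p value indicates a harder item, so items with very low indices are the candidates for adaptation that the study flags.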


    Published In

    ACM Transactions on Computing Education  Volume 24, Issue 3
    September 2024
    411 pages
    EISSN:1946-6226
    DOI:10.1145/3613728
    Editor: Amy J. Ko

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 19 August 2024
    Online AM: 14 June 2024
    Accepted: 22 February 2024
    Revised: 10 November 2023
    Received: 04 October 2022
    Published in TOCE Volume 24, Issue 3


    Author Tags

    1. Programming
    2. SCS1
    3. assessment
    4. validation
    5. instrument

    Qualifiers

    • Research-article
