DOI: 10.1145/3626252.3630773
Research article

AI Teaches the Art of Elegant Coding: Timely, Fair, and Helpful Style Feedback in a Global Course

Published: 07 March 2024

Abstract

Teaching students how to write code that is elegant, reusable, and comprehensible is a fundamental part of CS1 education. However, providing this "style feedback" in a timely manner has proven difficult to scale. In this paper, we present our experience deploying a novel, real-time style feedback tool in Code in Place, a large-scale online CS1 course. Our tool is based on the latest breakthroughs in large language models (LLMs) and was carefully designed to be safe and helpful for students. We used our Real-Time Style Feedback tool (RTSF) in a class with over 8,000 diverse students from across the globe and ran a randomized controlled trial to understand its benefits. We show that students who received style feedback in real time were five times more likely to view and engage with their feedback than students who received delayed feedback. Moreover, those who viewed feedback were more likely to make significant style-related edits to their code, with over 79% of these edits directly incorporating their feedback. We also discuss the practicality and dangers of LLM-based feedback tools, investigating the quality of the generated feedback, LLM limitations, and techniques for consistency, standardization, and safeguarding against demographic bias, all of which are crucial for a tool used by students.


Cited By

  • (2024) Socratic Mind: Scalable Oral Assessment Powered by AI. In Proceedings of the Eleventh ACM Conference on Learning @ Scale, 340--345. https://doi.org/10.1145/3657604.3664661. Online publication date: 9 July 2024.

Published In

SIGCSE 2024: Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1
March 2024
1583 pages
ISBN:9798400704239
DOI:10.1145/3626252

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. cs1
  2. deployed at scale
  3. gpt
  4. llms
  5. real time
  6. style feedback


Conference

SIGCSE 2024

Acceptance Rates

Overall Acceptance Rate: 1,595 of 4,542 submissions, 35%



Article Metrics

  • Downloads (last 12 months): 259
  • Downloads (last 6 weeks): 38
Reflects downloads up to 21 September 2024.
