DOI: 10.1145/3576882.3617930
Research article | Open access

Experiences with TA-Bot in CS1

Published: 05 December 2023

Abstract

Automated Assessment Tools (AATs) have been used in undergraduate CS education for decades. TA-Bot, a modular AAT, has existed in some form for 25 years, serving thousands of students across multiple universities. Class sizes have continued to grow throughout the last decade while the number of instructors has remained stagnant; AATs help instructors absorb this growth without additional resources while providing students with timely feedback. The research team implemented novel features in the new, web-based TA-Bot, including dynamic rate limiting between submissions, custom code-style feedback, and a gamified points system. The experiment discussed in this paper deployed TA-Bot over three semesters of CS1, involving 145 students. During the first semester, student and instructor feedback was collected on how to improve the tool. During the second semester, submissions were throttled using the new dynamic rate-limiting system. Finally, the third semester served as a control group, with TA-Bot performing simple input/output checking of submissions. Instructors found that TA-Bot helped mitigate the issues caused by continual increases in class size. When using TA-Bot with a dynamic rate limit, students were more inclined to start their assignments earlier. In addition, TA-Bot lets students compare their solutions against test cases while providing code-style advice drawn from curated, novice-friendly examples.
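The abstract does not spell out how TA-Bot's dynamic rate limiting works. As a minimal sketch of the general idea, one plausible policy (an assumption, not the paper's actual algorithm) makes the required wait between submissions grow with the number of a student's recent submissions, discouraging rapid trial-and-error against the autograder:

```python
import time

class DynamicRateLimiter:
    """Sketch of per-student dynamic rate limiting between submissions.

    Hypothetical policy: the required wait doubles with each submission
    inside a sliding window. TA-Bot's real policy is not described in
    the abstract; this only illustrates the concept.
    """

    def __init__(self, base_delay=60, window=3600, now=time.time):
        self.base_delay = base_delay  # seconds of delay after one submission
        self.window = window          # sliding-window length in seconds
        self.now = now                # injectable clock (eases testing)
        self.history = {}             # student id -> list of submission times

    def seconds_until_allowed(self, student):
        t = self.now()
        # Keep only submissions inside the sliding window.
        recent = [s for s in self.history.get(student, []) if t - s < self.window]
        self.history[student] = recent
        if not recent:
            return 0
        # Delay doubles with each recent submission: 60 s, 120 s, 240 s, ...
        required = self.base_delay * 2 ** (len(recent) - 1)
        elapsed = t - recent[-1]
        return max(0, required - elapsed)

    def submit(self, student):
        """Record a submission if allowed; return (accepted, wait_seconds)."""
        wait = self.seconds_until_allowed(student)
        if wait > 0:
            return False, wait
        self.history.setdefault(student, []).append(self.now())
        return True, 0
```

A fixed delay would be "static" rate limiting; making the delay a function of recent activity is what makes it dynamic, and it is consistent with the paper's finding that rate-limited students started assignments earlier, since burst resubmission near the deadline becomes expensive.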


Cited By

  • (2024) MiniJava on RISC-V: A Game of Global Compilers Domination. Proceedings of the Workshop Dedicated to Jens Palsberg on the Occasion of His 60th Birthday, 21-29. https://doi.org/10.1145/3694848.3694854 Online publication date: 22-Oct-2024.


Published In

CompEd 2023: Proceedings of the ACM Conference on Global Computing Education Vol 1
December 2023
180 pages
ISBN:9798400700484
DOI:10.1145/3576882
This work is licensed under a Creative Commons Attribution International 4.0 License.


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. CS1
  2. automated assessment tools
  3. gamification
  4. study behaviors
  5. unit testing

Qualifiers

  • Research-article

Conference

CompEd 2023

Acceptance Rates

Overall Acceptance Rate 33 of 100 submissions, 33%



Article Metrics

  • Downloads (Last 12 months)157
  • Downloads (Last 6 weeks)20
Reflects downloads up to 10 Nov 2024

