DOI: 10.1145/3105726.3106181

Comparison of Time Metrics in Programming

Published: 14 August 2017
    Abstract

    Research on the indicators of student performance in introductory programming courses has traditionally focused on individual metrics and specific behaviors. These metrics include the amount of time and the quantity of steps such as code compilations, the number of completed assignments, and metrics that one cannot acquire from a programming environment. However, the differences in the predictive powers of different metrics and the cross-metric correlations are unclear, and thus there is no generally preferred metric of choice for examining time on task or effort in programming. In this work, we contribute to the stream of research on student time on task indicators through the analysis of a multi-source dataset that contains information about students' use of a programming environment, their use of the learning material as well as self-reported data on the amount of time that the students invested in the course and per-assignment perceptions on workload, educational value and difficulty. We compare and contrast metrics from the dataset with course performance. Our results indicate that traditionally used metrics from the same data source tend to form clusters that are highly correlated with each other, but correlate poorly with metrics from other data sources. Thus, researchers should utilize multiple data sources to gain a more accurate picture of students' learning.
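The comparison the abstract describes reduces to computing rank correlations between time metrics drawn from different data sources. A minimal sketch of such an analysis using pandas, with entirely invented metric names and toy data (not the paper's dataset):

```python
# Hedged sketch: pairwise Spearman correlations between hypothetical
# time-on-task metrics from different data sources. All column names
# and values below are illustrative inventions, not the study's data.
import pandas as pd

# One row per student; columns mix programming-environment metrics,
# learning-material metrics, and self-reported effort.
metrics = pd.DataFrame({
    "ide_active_minutes":    [310, 120, 450, 200, 380, 90],
    "compilations":          [145,  60, 210,  95, 180, 40],
    "material_view_minutes": [ 80, 200,  60, 150,  70, 220],
    "self_reported_hours":   [ 12,   9,  15,   8,  14,  6],
    "exam_score":            [ 78,  65,  88,  70,  84, 55],
})

# Spearman rank correlation is robust to the skewed, non-normal
# distributions typical of usage-log data.
corr = metrics.corr(method="spearman")
print(corr.round(2))
```

In this toy data the two environment-derived metrics share an identical rank order, so they correlate perfectly with each other, mirroring the abstract's observation that metrics from the same data source tend to cluster.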



    Published In

    ICER '17: Proceedings of the 2017 ACM Conference on International Computing Education Research
    August 2017
    316 pages
    ISBN: 9781450349680
    DOI: 10.1145/3105726


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. academic success prediction
    2. educational data mining
    3. multi-source data analysis
    4. time metrics
    5. time on task

    Qualifiers

    • Research-article


    Conference

    ICER '17: International Computing Education Research Conference
    August 18-20, 2017
    Tacoma, Washington, USA

    Acceptance Rates

    ICER '17 Paper Acceptance Rate: 29 of 180 submissions, 16%
    Overall Acceptance Rate: 189 of 803 submissions, 24%


    Cited By

    • (2024) Writing Between the Lines: How Novices Construct Java Programs. Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1, 165-171. DOI: 10.1145/3626252.3630968. Online publication date: 7-Mar-2024.
    • (2024) Effect of Deadlines on Student Submission Timelines and Success in a Fully-Online Self-Paced Course. Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1, 207-213. DOI: 10.1145/3626252.3630837. Online publication date: 7-Mar-2024.
    • (2023) Exploring the Responses of Large Language Models to Beginner Programmers' Help Requests. Proceedings of the 2023 ACM Conference on International Computing Education Research - Volume 1, 93-105. DOI: 10.1145/3568813.3600139. Online publication date: 7-Aug-2023.
    • (2023) Developing Novice Programmers' Self-Regulation Skills with Code Replays. Proceedings of the 2023 ACM Conference on International Computing Education Research - Volume 1, 298-313. DOI: 10.1145/3568813.3600127. Online publication date: 7-Aug-2023.
    • (2023) G is for Generalisation. Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, 1028-1034. DOI: 10.1145/3545945.3569824. Online publication date: 2-Mar-2023.
    • (2023) Accurate Estimation of Time-on-Task While Programming. Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, 708-714. DOI: 10.1145/3545945.3569804. Online publication date: 2-Mar-2023.
    • (2022) Experiences With and Lessons Learned on Deadlines and Submission Behavior. Proceedings of the 22nd Koli Calling International Conference on Computing Education Research, 1-13. DOI: 10.1145/3564721.3564728. Online publication date: 17-Nov-2022.
    • (2022) Time-on-task metrics for predicting performance. ACM Inroads 13:2, 42-49. DOI: 10.1145/3534564. Online publication date: 17-May-2022.
    • (2022) Methodological Considerations for Predicting At-risk Students. Proceedings of the 24th Australasian Computing Education Conference, 105-113. DOI: 10.1145/3511861.3511873. Online publication date: 14-Feb-2022.
    • (2022) CodeProcess Charts: Visualizing the Process of Writing Code. Proceedings of the 24th Australasian Computing Education Conference, 46-55. DOI: 10.1145/3511861.3511867. Online publication date: 14-Feb-2022.
