Evaluating Crowdworkers as a Proxy for Online Learners in Video-Based Learning Contexts

Published: 01 November 2018
  Abstract

    Crowdsourcing has emerged as an effective method of scaling up tasks previously reserved for a small set of experts. Accordingly, researchers in the large-scale online learning space have begun to employ crowdworkers to conduct research about large-scale, open online learning. Here we report results from a crowdsourcing study (N=135) evaluating the extent to which crowdworkers and MOOC learners behave comparably on lecture viewing and quiz tasks---the most utilized learning activities in MOOCs. This serves to (i) validate the assumption of previous research that crowdworkers are indeed reliable proxies for online learners and (ii) assess the potential of employing crowdworkers as a means of testing online learning environments. Overall, we observe mixed results: in certain contexts (quiz performance and video watching behavior) crowdworkers appear to behave comparably to MOOC learners, whereas in other situations (interactions with in-video quizzes) their behaviors appear disparate. We conclude that future research should be cautious when employing crowdworkers to carry out learning tasks, as the two populations do not behave comparably on all learning-related activities.
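    The abstract does not describe the statistical analysis itself. As a purely illustrative aside, one common way to operationalize "behave comparably" between two groups is an equivalence test on their scores, for example two one-sided t-tests (TOST). The Python sketch below is hypothetical and is not the authors' procedure; the simulated data, the 0.05 equivalence margin, and all variable names are assumptions introduced only for illustration.

    import numpy as np
    from scipy import stats

    # Hypothetical quiz-score samples for the two populations (not the paper's data).
    rng = np.random.default_rng(0)
    mooc_scores = rng.normal(loc=0.72, scale=0.15, size=200)   # simulated MOOC learners
    crowd_scores = rng.normal(loc=0.70, scale=0.15, size=135)  # simulated crowdworkers (N=135, matching the study size)

    delta = 0.05  # assumed equivalence margin: mean differences below this count as "comparable"

    # Two one-sided t-tests (TOST): shifting one sample by +/- delta turns each
    # one-sided equivalence hypothesis into an ordinary independent-samples t-test.
    _, p_lower = stats.ttest_ind(crowd_scores + delta, mooc_scores, alternative="greater")
    _, p_upper = stats.ttest_ind(crowd_scores - delta, mooc_scores, alternative="less")

    p_tost = max(p_lower, p_upper)  # equivalence is claimed only if both one-sided tests reject
    print(f"TOST p-value: {p_tost:.3f} -> "
          f"{'comparable within the margin' if p_tost < 0.05 else 'equivalence not established'}")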





      Published In

      Proceedings of the ACM on Human-Computer Interaction, Volume 2, Issue CSCW
      November 2018
      4104 pages
      EISSN: 2573-0142
      DOI: 10.1145/3290265
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 01 November 2018
      Published in PACMHCI Volume 2, Issue CSCW


      Author Tags

      1. crowdwork
      2. learning analytics
      3. moocs
      4. replication

      Qualifiers

      • Research-article


      Article Metrics

      • Downloads (Last 12 months): 12
      • Downloads (Last 6 weeks): 0
      Reflects downloads up to 27 Jul 2024

      Cited By

      • (2024) Enhancing student experience in remote computer programming course practice: A case of the Java language. E-Learning and Digital Media. DOI: 10.1177/20427530241262485. Online publication date: 14-Jun-2024.
      • (2024) The State of Pilot Study Reporting in Crowdsourcing: A Reflection on Best Practices and Guidelines. Proceedings of the ACM on Human-Computer Interaction, Vol. 8, Issue CSCW1, 1--45. DOI: 10.1145/3641023. Online publication date: 26-Apr-2024.
      • (2021) The State-of-the-Art on Collective Intelligence in Online Educational Technologies. IEEE Transactions on Learning Technologies, Vol. 14, 2, 257--271. DOI: 10.1109/TLT.2021.3073559. Online publication date: 1-Apr-2021.
      • (2021) Bandit algorithms to personalize educational chatbots. Machine Learning, Vol. 110, 9, 2389--2418. DOI: 10.1007/s10994-021-05983-y. Online publication date: 25-May-2021.
      • (2019) An Evaluation of the Impact of Automated Programming Hints on Performance and Learning. Proceedings of the 2019 ACM Conference on International Computing Education Research, 61--70. DOI: 10.1145/3291279.3339420. Online publication date: 30-Jul-2019.
