DOI: 10.1145/2993148.2993166

Semi-supervised model personalization for improved detection of learner's emotional engagement

Published: 31 October 2016

Abstract

Affective states play a crucial role in learning. Existing Intelligent Tutoring Systems (ITSs) fail to track learners' affective states accurately, and without accurate detection of such states, ITSs are limited in providing a truly personalized learning experience. In our longitudinal research, we have been working towards developing an empathic autonomous 'tutor' that closely monitors students in real time, using multiple sources of data to understand the affective states corresponding to their emotional engagement. We focus on detecting learning-related states (i.e., 'Satisfied', 'Bored', and 'Confused'). We have collected 210 hours of data through authentic classroom pilots of 17 sessions, covering two modalities: (1) appearance, captured by the camera, and (2) context-performance, derived from the content platform. The learning content on the platform consists of two section types: (1) instructional sections, where students watch instructional videos, and (2) assessment sections, where students solve exercise questions. Since there are individual differences in how affective states are expressed, the detection of emotional engagement needs to be customized for each individual. In this paper, we propose a hierarchical semi-supervised model adaptation method to achieve highly accurate emotional engagement detectors. In the initial calibration phase, a personalized context-performance classifier is obtained. In the online usage phase, the appearance classifier is automatically personalized using the labels generated by the context-performance model. The experimental results show that personalization improves the performance of our generic emotional engagement detectors: the proposed semi-supervised hierarchical personalization method results in F1 measures of 89.23% and 75.20% for the instructional and assessment sections, respectively.
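The abstract describes a two-phase, label-transfer scheme: a context-performance classifier is personalized with a small amount of calibration data, and during online use it supplies labels that are then used to personalize the appearance classifier. The page contains no code, so the sketch below is only an illustration of that idea, assuming scikit-learn-style classifiers; the class name, the confidence threshold, and the simple re-fit strategy are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of hierarchical semi-supervised personalization
# (not the authors' implementation): a personalized context-performance
# classifier pseudo-labels unlabeled segments, and the confidently
# labeled subset is used to personalize the appearance classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Learning-related target states named in the abstract.
STATES = ["Satisfied", "Bored", "Confused"]


class PersonalizedEngagementDetector:
    def __init__(self, appearance_clf=None, context_clf=None, confidence_threshold=0.8):
        # Generic (population-level) detectors would normally be trained
        # offline; random forests are used here purely as placeholders.
        self.appearance_clf = appearance_clf or RandomForestClassifier(n_estimators=100)
        self.context_clf = context_clf or RandomForestClassifier(n_estimators=100)
        self.confidence_threshold = confidence_threshold

    def calibrate(self, context_features, labels):
        # Calibration phase: personalize the context-performance classifier
        # with a small amount of labeled data from this learner.
        self.context_clf.fit(context_features, labels)

    def personalize_online(self, appearance_features, context_features):
        # Online phase: pseudo-label unlabeled segments with the personalized
        # context-performance model, keep only confident predictions, and
        # re-fit the appearance classifier on that subset. (For simplicity
        # this sketch re-fits from scratch rather than adapting incrementally.)
        appearance_features = np.asarray(appearance_features)
        proba = self.context_clf.predict_proba(context_features)
        confident = proba.max(axis=1) >= self.confidence_threshold
        if confident.any():
            pseudo_labels = self.context_clf.classes_[proba.argmax(axis=1)]
            self.appearance_clf.fit(
                appearance_features[confident], pseudo_labels[confident]
            )

    def predict(self, appearance_features):
        # Detect emotional engagement from appearance features alone.
        return self.appearance_clf.predict(appearance_features)
```

A confidence threshold of this kind is one common way to keep noisy pseudo-labels from degrading the appearance model; how the original work filters or weights the transferred labels is not stated in the abstract.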



      Published In

      ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction
      October 2016
      605 pages
      ISBN:9781450345569
      DOI:10.1145/2993148

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. Emotional engagement detection
      2. adaptive learning
      3. affective computing
      4. intelligent tutoring systems
      5. personalization

      Qualifiers

      • Research-article

      Conference

      ICMI '16

      Acceptance Rates

      Overall Acceptance Rate 453 of 1,080 submissions, 42%
