Dynamic Handwriting Signal Features Predict Domain Expertise

Published: 24 July 2018

Abstract

    As commercial pen-centric systems proliferate, they create a parallel need for analytic techniques based on dynamic writing. Within educational applications, recent empirical research has shown that signal-level features of students’ writing, such as stroke distance, pressure and duration, are adapted to conserve total energy expenditure as they consolidate expertise in a domain. The present research examined how accurately three different machine-learning algorithms could automatically classify users’ domain expertise based on signal features of their writing, without any content analysis. Compared with an unguided machine-learning classification accuracy of 71%, hybrid methods using empirical-statistical guidance correctly classified 79–92% of students by their domain expertise level. In addition to improved accuracy, the hybrid approach contributed a causal understanding of prediction success and generalization to new data. These novel findings open up opportunities to design new automated learning analytic systems and student-adaptive educational technologies for the rapidly expanding sector of commercial pen systems.
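
    To ground this in something concrete, the following is a minimal sketch, not the authors' method: it trains an off-the-shelf classifier on synthetic per-student aggregates of the three signal features named above (stroke distance, pressure, and stroke duration). The feature values, the energy-based labels, and the model choice are all illustrative assumptions; the paper's hybrid empirical-statistical guidance is not reproduced here.

```python
# A minimal sketch (not the paper's pipeline) of classifying domain
# expertise from stroke-level handwriting signal features.
# All data here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_students = 40

# Hypothetical per-student means of three pen-signal features:
# stroke distance (mm), pen pressure (normalized), stroke duration (s).
X = rng.normal(loc=[12.0, 0.5, 0.30],
               scale=[3.0, 0.1, 0.08],
               size=(n_students, 3))

# Synthetic expertise labels (1 = expert, 0 = novice), loosely modeled
# on the abstract's claim that experts adapt their writing to conserve
# total energy expenditure (shorter, lighter, briefer strokes).
energy = X[:, 0] * X[:, 1] * X[:, 2]
y = (energy < np.median(energy)).astype(int)

# An unguided classifier over the raw signal features; five-fold
# cross-validation estimates accuracy on held-out students.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

    Note that this corresponds only to the unguided baseline; the hybrid approach reported above additionally constrains the learner with empirically derived knowledge of the features, which is what raises classification accuracy into the 79–92% range.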

      Published In

      ACM Transactions on Interactive Intelligent Systems, Volume 8, Issue 3
      September 2018
      235 pages
      ISSN:2160-6455
      EISSN:2160-6463
      DOI:10.1145/3236465

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 24 July 2018
      Accepted: 01 April 2018
      Revised: 01 January 2018
      Received: 01 April 2017
      Published in TIIS Volume 8, Issue 3

      Author Tags

      1. Multimodal learning analytics
      2. dynamic handwriting
      3. empirical and statistical sciences
      4. hybrid techniques
      5. machine learning
      6. pen signal features
      7. prediction of domain expertise
      8. total energy expenditure

      Qualifiers

      • Research-article
      • Research
      • Refereed

      Funding Sources

      • Eminent Visiting Scholars Program in the Faculty of Engineering at the University of New South Wales in Sydney, Australia
      • Data61/CSIRO research internship
      • Incaa Designs, Data61/CSIRO, and UNSW

