DOI: 10.1145/3524273.3532899

A QoE evaluation of procedural and example instruction formats for procedure training in augmented reality

Published: 05 August 2022
Abstract

    Augmented reality (AR) has significant potential as a training platform. The pedagogical purpose of training is learning or transfer. Learning is the acquisition of the ability to perform a procedure as taught, while transfer involves generalising that knowledge to similar procedures in the same domain. Quality of experience (QoE) concerns how well an application, system or service fulfils the user's pragmatic and hedonic needs and expectations. Learning and transfer fulfil the AR trainee's pragmatic needs. Training instructions can be presented in procedural and example formats: procedural instructions tell the trainee what to do, while examples show the trainee how to do it. These two instruction formats can influence learning, transfer, and hardware resource availability differently, and the impact of an instruction format's resource consumption on system performance may in turn influence the AR trainee's hedonic needs and expectations. Delivering effective training efficiently is therefore a design concern for mobile AR training applications. This work aims to inform AR training application design by evaluating the influence of procedural and example instruction formats on AR trainee QoE.
    In this demo, an AR GoCube™ solver training application will be exhibited on the state-of-the-art HoloLens 2 (HL2) mixed reality (MR) headset. This AR training app is part of a test framework that will be used in a between-groups study to evaluate the influence of text-based and animated 3D model instruction formats on AR trainee QoE. The framework will record the trainee's physiological ratings, eye gaze features and facial expressions. Learning will be evaluated in a post-training recall phase, while transfer will be evaluated using a pre- and post-training comparison of mental rotation skills. Application profiling code will monitor AR headset resource consumption.
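    As an illustration of the kind of gaze processing such a framework might perform, the sketch below implements a simple dispersion-threshold (I-DT) fixation detector in Python. This is a hypothetical example rather than the authors' implementation: the GazeSample type, the coordinate space, and the threshold defaults are assumptions made for the sketch.

```python
# Hypothetical sketch: dispersion-threshold (I-DT) fixation detection over
# eye-gaze samples, one plausible way to derive eye gaze features (fixation
# count, fixation duration) from headset eye-tracking logs.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GazeSample:          # assumed log format, not from the paper
    t: float               # timestamp in seconds
    x: float               # normalised horizontal gaze coordinate
    y: float               # normalised vertical gaze coordinate

def detect_fixations(samples: List[GazeSample],
                     max_dispersion: float = 0.01,   # assumed threshold
                     min_duration: float = 0.10      # assumed threshold
                     ) -> List[Tuple[float, float]]:
    """Return (start, end) times of fixations: spans lasting at least
    min_duration whose combined x/y dispersion stays below max_dispersion."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while its spatial dispersion stays within bounds.
        while j + 1 < len(samples):
            window = samples[i:j + 2]
            xs = [s.x for s in window]
            ys = [s.y for s in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        if samples[j].t - samples[i].t >= min_duration:
            fixations.append((samples[i].t, samples[j].t))
            i = j + 1      # resume after the accepted fixation
        else:
            i += 1
    return fixations
```

    Fixation counts and mean fixation durations computed this way are common proxies for visual attention and cognitive load in eye-tracking studies, which is why gaze features of this kind are plausible inputs to a QoE evaluation.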

    Cited By

    • (2023) Exercisable Learning-Theory and Evidence-Based Andragogy for Training Effectiveness using XR (ELEVATE-XR): Elevating the ROI of Immersive Technologies. International Journal of Human–Computer Interaction, 39(11), 2177–2198. DOI: 10.1080/10447318.2023.2188529. Online publication date: 19-Mar-2023.

      Published In

      MMSys '22: Proceedings of the 13th ACM Multimedia Systems Conference
      June 2022
      432 pages
      ISBN:9781450392839
      DOI:10.1145/3524273
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. augmented reality
      2. cognitive load
      3. eye gaze
      4. learning
      5. memory
      6. micro facial expressions
      7. quality of experience
      8. training

      Qualifiers

      • Research-article

      Conference

      MMSys '22: 13th ACM Multimedia Systems Conference
      June 14 - 17, 2022
      Athlone, Ireland

      Acceptance Rates

      Overall Acceptance Rate 176 of 530 submissions, 33%
