DOI: 10.1145/3351529.3360662

Multimodal Assessment on Teaching Skills via Neural Networks

Published: 14 October 2019

Abstract

Repeated training of teaching skills for student teachers is difficult because many collaborators are needed to set up a rehearsal environment. We are therefore studying a teacher training system based on a virtual classroom in which the students are virtual agents. Building such a training system requires the system to automatically assess the trainee's behaviour. However, assessing teaching skills in an educational setting is difficult because of the complex interactions with many students and the subjectivity of the assessment task. In this study, we propose a neural network (NN) assessment model that learns latent assessment features from multimodal information: gesture and prosodic information, facial expressions, and the teacher's intention.

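The abstract describes the model only at a high level. As a rough illustration of the general idea, the sketch below (PyTorch) wires per-modality encoders for gesture, prosodic, facial-expression, and intention features into a fused regression head that outputs a single assessment score. The class name, feature dimensions, layer sizes, and the single-score regression head are assumptions made for illustration, not the authors' actual architecture.

```python
# Minimal sketch of a multimodal assessment network (illustrative only;
# not the paper's implementation). Each modality gets a small encoder,
# the embeddings are concatenated, and a regression head predicts a score.
import torch
import torch.nn as nn

class MultimodalAssessmentNet(nn.Module):
    def __init__(self, gesture_dim=30, prosody_dim=88, face_dim=35,
                 intention_dim=8, hidden_dim=64):
        super().__init__()
        # One small encoder per modality (dimensions are assumptions).
        self.gesture_enc = nn.Sequential(nn.Linear(gesture_dim, hidden_dim), nn.ReLU())
        self.prosody_enc = nn.Sequential(nn.Linear(prosody_dim, hidden_dim), nn.ReLU())
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
        self.intention_enc = nn.Sequential(nn.Linear(intention_dim, hidden_dim), nn.ReLU())
        # Fusion over concatenated modality embeddings, then a scalar score.
        self.head = nn.Sequential(
            nn.Linear(hidden_dim * 4, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, gesture, prosody, face, intention):
        fused = torch.cat([
            self.gesture_enc(gesture),
            self.prosody_enc(prosody),
            self.face_enc(face),
            self.intention_enc(intention),
        ], dim=-1)
        return self.head(fused)  # predicted teaching-skill score

# Usage example: a batch of 4 segments with random feature vectors.
model = MultimodalAssessmentNet()
score = model(torch.randn(4, 30), torch.randn(4, 88),
              torch.randn(4, 35), torch.randn(4, 8))
print(score.shape)  # torch.Size([4, 1])
```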

Information

Published In

ICMI '19: Adjunct of the 2019 International Conference on Multimodal Interaction
October 2019
86 pages
ISBN: 9781450369374
DOI: 10.1145/3351529

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 14 October 2019

Author Tags

  1. education application
  2. multimodal interaction
  3. user assessment
  4. virtual classroom
  5. virtual reality

Qualifiers

  • Abstract
  • Research
  • Refereed limited

Conference

ICMI '19

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%
