Journal Description
Multimodal Technologies and Interaction
is an international, scientific, peer-reviewed, open access journal of multimodal technologies and interaction published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Inspec, dblp Computer Science Bibliography, and many other databases.
- Journal Rank: CiteScore - Q2 (Computer Science Applications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision provided to authors approximately 21 days after submission; acceptance to publication is undertaken in 6.5 days (median values for papers published in this journal in the second half of 2021).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Latest Articles
A Survey on Databases for Multimodal Emotion Recognition and an Introduction to the VIRI (Visible and InfraRed Image) Database
Multimodal Technol. Interact. 2022, 6(6), 47; https://doi.org/10.3390/mti6060047 - 17 Jun 2022
Abstract
Multimodal human–computer interaction (HCI) systems promise interaction between machines and humans that more closely resembles interaction between humans. By supporting unambiguous information exchange between the two, these systems are more reliable, efficient, less error-prone, and capable of solving complex tasks. Emotion recognition is a realm of HCI that employs multimodality to achieve accurate and natural results. The widespread use of affective identification in e-learning, marketing, security, health sciences, etc., has increased the demand for high-precision emotion recognition systems. Machine learning (ML) is increasingly applied to improve the process, whether by refining architectures or by exploiting high-quality databases (DBs). This paper presents a survey of the DBs used to develop multimodal emotion recognition (MER) systems. The survey covers DBs that contain multi-channel data, such as facial expressions, speech, physiological signals, body movements, gestures, and lexical features. A few unimodal DBs that work in conjunction with other DBs for affect recognition are also discussed. Further, VIRI, a new DB of visible and infrared (IR) images of subjects expressing five emotions in an uncontrolled, real-world environment, is presented, along with a rationale for its advantages over existing corpora.
Full article
Open AccessArticle
The Quantitative Case-by-Case Analyses of the Socio-Emotional Outcomes of Children with ASD in Robot-Assisted Autism Therapy
Multimodal Technol. Interact. 2022, 6(6), 46; https://doi.org/10.3390/mti6060046 - 15 Jun 2022
Abstract
With its focus on robot-assisted autism therapy, this paper presents case-by-case analyses of the socio-emotional outcomes of 34 children aged 3–12 years with different presentations of Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD). We grouped children by the following characteristics: ASD alone (n = 22), ASD+ADHD (n = 12), verbal (n = 11), non-verbal (n = 23), low-functioning autism (n = 24), and high-functioning autism (n = 10). This paper provides a series of separate quantitative analyses across the first and last sessions, adaptive and non-adaptive sessions, and parent and no-parent sessions, to present child experiences with the NAO robot during play-based activities. The results suggest that robots are able to interact with children in social ways and influence their social behaviors over time. Each child with ASD is a unique case and needs an individualized approach to practice and learn social skills with the robot. Finally, we present specific child–robot intricacies that affect how children engage and learn over time as well as across different sessions.
Full article
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)
Open AccessReview
Human–Machine Interface for Remote Crane Operation: A Review
Multimodal Technol. Interact. 2022, 6(6), 45; https://doi.org/10.3390/mti6060045 - 10 Jun 2022
Abstract
Cranes are traditionally controlled by operators who are present on-site. While this operation mode is still common, considerable progress has been made toward moving operators away from their cranes so that they are not exposed to hazardous situations that may occur in the workplace. Despite its apparent benefits, remote operation poses a major challenge that does not exist in on-site operation: the amount of information operators can receive remotely is more limited than what they could receive on-site. Since operators and their cranes are located separately, the human–machine interface plays an important role in facilitating information exchange between operators and their machines. This article examines the various kinds of human–machine interfaces for remote crane operation that have been proposed within the scientific community, discusses their possible benefits, and highlights opportunities for future research.
Full article
Open AccessArticle
Smartphone Usage and Studying: Investigating Relationships between Type of Use and Self-Regulatory Skills
Multimodal Technol. Interact. 2022, 6(6), 44; https://doi.org/10.3390/mti6060044 - 07 Jun 2022
Abstract
The purpose of this study is to investigate the relationships between self-regulated learning skills and smartphone usage in relation to studying. It is unclear whether poor learning habits related to smartphone usage are unique traits or a reflection of existing self-regulated learning skills. The self-regulatory skills (a) regulation, (b) knowledge, and (c) management of cognition were measured and compared to the smartphone practices (a) multitasking, (b) avoiding distractions, and (c) mindful use. First-year undergraduates (n = 227) completed an online survey of self-regulatory skills and common phone practices. The results support the predictions that self-regulatory skills are negatively correlated with multitasking while studying and are positively correlated with distraction avoidance and mindful use of the phone. The management of cognition factor, which includes effort, time, and planning, was strongly correlated with multitasking (r = −0.20) and avoiding distractions (r = 0.45). Regulation of cognition was strongly correlated with mindful use (r = 0.33). These results support the need to consider the relationship between self-regulation and smartphone use as it relates to learning.
Full article
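The correlational analysis reported in the abstract above (e.g., r = 0.45 between management of cognition and distraction avoidance) rests on the Pearson correlation coefficient. The following minimal sketch computes it from scratch; the survey scores below are hypothetical illustrations, not data from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()  # center both samples
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Hypothetical Likert-style scores: self-regulatory skill vs. reported
# distraction avoidance for eight respondents (illustrative values only).
skill = [3.2, 4.1, 2.8, 3.9, 4.5, 2.5, 3.7, 4.0]
avoid = [2.9, 4.0, 2.5, 3.6, 4.4, 2.2, 3.5, 4.1]
r = pearson_r(skill, avoid)  # positive r: the two scores rise together
```

In practice one would also report a p-value (e.g., via `scipy.stats.pearsonr`) rather than the coefficient alone.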
Open AccessArticle
Design, Development, and a Pilot Study of a Low-Cost Robot for Child–Robot Interaction in Autism Interventions
Multimodal Technol. Interact. 2022, 6(6), 43; https://doi.org/10.3390/mti6060043 - 06 Jun 2022
Abstract
Socially assistive robots are widely deployed in interventions with children on the autism spectrum, exploiting the benefits of this technology in social behavior intervention plans while reducing autistic behaviors. Furthermore, innovations in modern technologies such as machine learning give these robots greater capabilities. Although the results of such deployments are promising, the robots' total cost makes them unaffordable for some organizations, even as needs grow steadily. In this paper, a low-cost robot for autism interventions is proposed, benefiting from the advantages of machine learning and low-cost hardware. The mechanical design of the robot and the development of machine learning models are presented. The robot was evaluated by a small group of educators for children with ASD. The results of various model implementations, together with the design evaluation of the robot, are encouraging and indicate that this technology would be advantageous for deployment in child–robot interaction scenarios.
Full article
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)
Open AccessArticle
Interactive Visualizations of Transparent User Models for Self-Actualization: A Human-Centered Design Approach
Multimodal Technol. Interact. 2022, 6(6), 42; https://doi.org/10.3390/mti6060042 - 30 May 2022
Abstract
This contribution sheds light on the potential of transparent user models for self-actualization. It discusses the development of EDUSS, a conceptual framework for the self-actualization goals of transparent user modeling. Drawing on a qualitative research approach, the framework investigates self-actualization from the psychology and computer science disciplines and derives a set of self-actualization goals and mechanisms. Following a human-centered design (HCD) approach, the framework was applied in an iterative process to systematically design a set of interactive visualizations that help users achieve different self-actualization goals in the scientific research domain. For this purpose, an explainable user interest model within a recommender system is utilized to provide various information on how the interest models are generated from users' publication data. The main contributions are threefold: first, a synthesis of research on self-actualization from different domains; second, EDUSS, a theoretically sound self-actualization framework for transparent user modeling consisting of five main goals, namely Explore, Develop, Understand, Scrutinize, and Socialize; third, an instantiation of the proposed framework to effectively design interactive visualizations that can support the different self-actualization goals, following an HCD approach.
Full article
(This article belongs to the Special Issue Explainable User Models)
Open AccessArticle
Designing with Genius Loci: An Approach to Polyvocality in Interactive Heritage Interpretation
Multimodal Technol. Interact. 2022, 6(6), 41; https://doi.org/10.3390/mti6060041 - 24 May 2022
Abstract
Co-design with communities interested in heritage has oriented itself towards designing for polyvocality to diversify the accepted knowledges, values and stories associated with heritage places. However, engagement with heritage theory has only recently been addressed in HCI design, resulting in some previous work reinforcing the same realities that designers set out to challenge. There is a need for an approach that supports designers in heritage settings in working critically with polyvocality to capture values, knowledges, and authorised narratives, and to reflect on how these are negotiated and presented in the designs created. We contribute "Designing with Genius Loci" (DwGL), our proposed approach to co-design for polyvocality. We conceptualised DwGL through long-term engagement with volunteers and staff at a UK heritage site. First, we used ongoing recruitment to incentivise participation. We held a series of making workshops to explore participants' attitudes towards authorised narratives. We built participants' commitments to collaboration by introducing the common goal of creating an interactive digital design. Finally, as we designed, we enacted our own commitments to the heritage research and to participants' experiences. These four steps form the backbone of our proposed approach and serve as points of reflexivity. We applied DwGL to co-creating three designs: Un/Authorised View, SDH Palimpsest and Loci Stories, which we present in an annotated portfolio. Grounded in research through design, we reflect on working with the proposed approach and provide three lessons learned, guiding further research efforts in this design space: (1) creating a conversation between authorised and personal heritage stories; (2) designing using polyvocality negotiates voices; and (3) designs engender existing qualities and values. The proposed approach places polyvocality foremost in interactive heritage interpretation and facilitates valuable discussions between the designers and communities involved.
Full article
(This article belongs to the Special Issue Co-Design Within and Between Communities in Cultural Heritage)
Open AccessArticle
Medievals and Moderns in Conversation: Co-Designing Creative Futures for Underused Historic Churches in Rural Communities
Multimodal Technol. Interact. 2022, 6(5), 40; https://doi.org/10.3390/mti6050040 - 18 May 2022
Abstract
For many living in rural areas, the loss of traditional community assets and increased social fragmentation are a common feature of everyday life. The empty village church is a poignant symbol of these challenges; yet, these are sites that hold considerable potential for new placemaking solutions that respond to the needs of communities today. This means looking beyond “the traditional village church” to recognise a longer history of church adaptation and resilience within the lives of communities. In this paper we ask: how can co-design, projected through a Wicked problems and Clumsy solutions lens, help imagine new futures for communities and their historic churches today? Clumsy solutions consider a plurality of different perspectives on the nature of problems and their resolution to deliver more effective solutions with broad appeal. In the search for clumsiness, we turn to ‘long history’ and ‘slow technology’ for inspiration, uncovering deeper resonance with historical communities of place and anchoring that continuity within church sites themselves. Our paper demonstrates how Wicked/Clumsy thinking can account for the challenges faced by rural communities today, bootstrap co-design activities in the development of clumsy solutions, and uncover clumsiness in long history and slow technology dimensions—together laying the foundation for new placemaking strategies.
Full article
(This article belongs to the Special Issue Co-Design Within and Between Communities in Cultural Heritage)
Open AccessArticle
Parental Influence in Disengagement during Robot-Assisted Activities: A Case Study of a Parent and Child with Autism Spectrum Disorder
Multimodal Technol. Interact. 2022, 6(5), 39; https://doi.org/10.3390/mti6050039 - 18 May 2022
Abstract
We examined the influence of a parent on robot-assisted activities for a child with Autism Spectrum Disorder. We observed the interactions between a robot and the child, who wore a wearable device, during free play sessions. The child participated in four sessions with the parent and interacted willingly with the robot, therapist, and parent. The parent intervened when the child did not interact with the robot, which was considered "disengagement from the robot". The number and method of interventions were decided solely by the parent. This study adopted video recording for behavioral observations, specifically observing the situations preceding disengagement from the robot, the child's behaviors during disengagement, and the parent's interventions. The results showed that the child mostly discontinued interactions with the robot abruptly, without being stimulated by the surrounding environment. The second most common reason was distraction by the various devices in the play sessions, such as the wearable device, a video camera, and a laptop. Once disengaged from the robot, the child primarily exhibited inappropriate and repetitive behaviors accentuating the symptoms of autism spectrum disorder. Through the parent's interventions, the child re-initiated interaction with the robot 80% of the time. This suggests that engagement with a robot may differ depending on the parent's participation. Moreover, the type of parental feedback used to re-initiate engagement with the robot must be considered for the therapy to be adequately beneficial. In addition, environmental distractions must be considered, especially when multiple devices are used for therapy.
Full article
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)
Open AccessArticle
Didactic Use of Virtual Reality in Colombian Universities: Professors’ Perspective
Multimodal Technol. Interact. 2022, 6(5), 38; https://doi.org/10.3390/mti6050038 - 16 May 2022
Abstract
This paper presents quantitative research on the perception of the didactic use of virtual reality by university professors in Colombia, with special attention to differences according to their area of knowledge, as the main variable, and gender and digital generation, as secondary variables. The study involved 204 professors from different Colombian universities. As an instrument, a purpose-designed survey with four scales was used to measure, on a Likert scale, different dimensions of the participants' perception of the use of virtual reality in the classroom. The answers were analyzed statistically, and differences in perception were identified by means of parametric statistical tests according to the following: (i) area of knowledge, (ii) gender, and (iii) digital generation of the participants. The results showed that the participants expressed high valuations of virtual reality, despite having intermediate or low levels of digital competence. Gaps were identified in terms of area of knowledge, gender, and digital generation (digital natives or immigrants) with respect to opinions of virtual reality and digital competence. The highest valuations of virtual reality were given by professors of Humanities and by digital natives. It is suggested that Colombian universities implement training plans on digital competence for professors, aimed at strengthening knowledge of virtual reality.
Full article
(This article belongs to the Special Issue Virtual Reality and Augmented Reality)
Open AccessArticle
A Web-Based Platform for Traditional Craft Documentation
Multimodal Technol. Interact. 2022, 6(5), 37; https://doi.org/10.3390/mti6050037 - 10 May 2022
Abstract
A web-based authoring platform for the representation of traditional crafts is proposed. This platform is rooted in a systematic method for craft representation, the adoption of knowledge and representation standards from the cultural heritage (CH) domain, and the integration of outcomes from advanced digitization techniques. In this paper, we present the implementation of this method in an online, collaborative documentation platform where digital assets are curated into digitally preservable craft representations. The approach is demonstrated through the representation of three traditional crafts as use cases, and the lessons learned from this endeavor are presented.
Full article
(This article belongs to the Special Issue Digital Cultural Heritage (Volume II))
Open AccessArticle
Co-Designing the User Experience of Location-Based Games for a Network of Museums: Involving Cultural Heritage Professionals and Local Communities
Multimodal Technol. Interact. 2022, 6(5), 36; https://doi.org/10.3390/mti6050036 - 04 May 2022
Abstract
The design of location-based games (LBGs) for cultural heritage should ensure the active participation and contribution of local communities and heritage professionals to achieve contextual relevance, importance, and content validity. This paper presents an approach and methods of the participatory and co-design of LBGs that promote awareness and learning about the intangible cultural heritage of craftsmanship and artisanal technology throughout a long-term project from sensitization to implementation. Following the design thinking process, we outline the participatory methods (and reflect on results and lessons learnt) of involving cultural heritage professionals, local communities, and visitors (users) of museums and cultural settlements, mainly: field visits, design workshops, field playtesting, and field studies. We discuss issues of participatory design that we experienced throughout the project such as participant centrality and representativeness, producing tangible output from meetings, co-creation of content via playtesting, and implications from the pandemic. This work contributes a case of participatory and co-design of LBGs for cultural heritage that is characterized by longevity and engagement throughout the design process for three LBGs of a museum network in different cultural sites.
Full article
(This article belongs to the Special Issue Co-Design Within and Between Communities in Cultural Heritage)
Open AccessArticle
Brain Melody Interaction: Understanding Effects of Music on Cerebral Hemodynamic Responses
Multimodal Technol. Interact. 2022, 6(5), 35; https://doi.org/10.3390/mti6050035 - 04 May 2022
Abstract
Music elicits strong emotional reactions in people, regardless of their gender, age or cultural background. Understanding the effects of music on brain activity can enhance existing music therapy techniques and lead to improvements in medical and affective computing research. We explore the effects of three different music genres on people's cerebral hemodynamic responses. Functional near-infrared spectroscopy (fNIRS) signals were collected from 27 participants while they listened to 12 different pieces of music. The signals were pre-processed to reflect oxyhemoglobin (HbO2) and deoxyhemoglobin (HbR) concentrations in the brain. K-nearest neighbor (KNN), random forest (RF) and a one-dimensional (1D) convolutional neural network (CNN) were used to classify the signals, using music genre and the subjective responses provided by the participants as labels. The results show that the highest accuracy in distinguishing the three music genres was achieved by the deep learning model (73.4% accuracy in music genre classification and 80.5% accuracy when predicting participants' subjective rating of the emotional content of music). This study provides strong motivation for using fNIRS signals to detect people's emotional state while listening to music. It could also be beneficial for giving personalised music recommendations based on people's brain activity to improve their emotional well-being.
Full article
(This article belongs to the Special Issue Musical Interactions (Volume II))
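The classification setup described in the abstract above (features extracted from pre-processed fNIRS signals, labeled by music genre) can be sketched with one of the paper's simpler baselines, a k-nearest-neighbor classifier. The feature matrix below is synthetic and purely illustrative; real inputs would be HbO2/HbR-derived features from the recording pipeline:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Majority-vote k-nearest-neighbor classifier (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)      # distance to each training row
        nearest = y_train[np.argsort(d)[:k]]         # labels of the k closest rows
        preds.append(np.bincount(nearest).argmax())  # most common label wins
    return np.array(preds)

rng = np.random.default_rng(0)
# Hypothetical stand-in for fNIRS features: one row per listening trial,
# eight summary statistics per trial; three music-genre labels (0, 1, 2).
X = rng.normal(size=(120, 8))
y = rng.integers(0, 3, size=120)
X += y[:, None] * 2.0  # shift each class so the classes are separable

pred = knn_predict(X[:90], y[:90], X[90:], k=5)  # train on 90, test on 30
acc = (pred == y[90:]).mean()
```

A CNN baseline, as used in the paper, would instead consume the raw multi-channel time series rather than per-trial summary features.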
Open AccessArticle
When Digital Doesn’t Work: Experiences of Co-Designing an Indigenous Community Museum
Multimodal Technol. Interact. 2022, 6(5), 34; https://doi.org/10.3390/mti6050034 - 03 May 2022
Abstract
The challenges of implementing digital technologies in community-based projects are exposed in a case study of co-designing an indigenous community museum situated in the Kelabit Highlands of Borneo, Malaysia. Over a five-year period, this co-design project consisted of field trips, community engagements, and the creation of a documentary film and an inaugural exhibition in the newly constructed Kelabit Museum. This article highlights the limitations of digital technologies in museum contexts. Co-designing with stakeholders resulted in the decision to take a non-digital approach to the museum development to encourage greater community agency and prevent disengagement, as it incorporated heritage values into local community developments and cultural tourism plans. The findings demonstrate that community self-determination conflicted with preconceived outcomes, resulting in a need to re-evaluate the goals of the project. Instead, the ambition of cultural heritage preservation that maintained community participation emerged as the central goal. Removing the focus on a digital solution expanded community participation, a finding that should be used to frame other community cultural developments.
Full article
(This article belongs to the Special Issue Co-Design Within and Between Communities in Cultural Heritage)
Open AccessConcept Paper
Too Low Motivation, Too High Authority? Digital Media Support for Co-Curation in Local Cultural Heritage Communities
Multimodal Technol. Interact. 2022, 6(5), 33; https://doi.org/10.3390/mti6050033 - 01 May 2022
Abstract
Over the last decades, a shift towards participatory approaches has been observed in cultural heritage institutions. In co-curation processes, museums collaborate with public audiences to identify, select, prepare, and interpret cultural materials. This article focuses on the question of how to engage and motivate local communities or individuals to rethink dominant discourses or expert narratives regarding cultural heritage and to bring in their own experiences and knowledge. Based on four case studies of cultural co-curation, we delineate two basic challenges for this process: (1) Authority: even though museums strive to involve the public, there is still an imbalance in participation due to the museums' authoritative status. (2) Motivation: participation in co-curation processes requires high levels of motivation, which are difficult to achieve. Based on media synchronicity theory, we discuss which characteristics of new media technologies can help overcome these challenges. Media can increase awareness of counternarratives and blind spots in cultural collections. They can provide a setting where participants can easily contribute, feel competent to do so, are empowered to rethink dominant discourses, develop a sense of relatedness with other contributors, and maintain autonomy in how and to what degree they engage in the discourse.
Full article
(This article belongs to the Special Issue Co-Design Within and Between Communities in Cultural Heritage)
Open AccessArticle
An Enactivist Account of Mind Reading in Natural Language Understanding
Multimodal Technol. Interact. 2022, 6(5), 32; https://doi.org/10.3390/mti6050032 - 29 Apr 2022
Abstract
In this paper we apply our understanding of the radical enactivist agenda to the classic AI-hard problem of Natural Language Understanding. When Turing devised his famous test, the assumption was that a computer could use language, and the challenge would be to mimic human intelligence. It turned out that playing chess and doing formal logic were easy compared to understanding what people say. The techniques of good old-fashioned AI (GOFAI) assume that symbolic representation is the core of reasoning, and in that paradigm human communication consists of transferring representations from one mind to another. However, one finds that representations appear in another's mind without appearing in the intermediary language. People communicate by mind reading, it seems. Systems with speech interfaces such as Alexa and Siri are of course common, but they are limited. Rather than adding mind-reading skills, we introduced a "cheat" that enabled our systems to fake it. The cheat is simple and only slightly interesting to computer scientists and not at all interesting to philosophers. However, on reading about the enactivist idea that we "directly perceive" the intentions of others, our cheat took on a new light, and in this paper we look again at how natural language understanding might actually work between humans.
(This article belongs to the Special Issue Speech-Based Interaction)
Open Access Article
Temporal Development of Sense of Presence and Cybersickness during an Immersive VR Experience
Multimodal Technol. Interact. 2022, 6(5), 31; https://doi.org/10.3390/mti6050031 - 22 Apr 2022
Abstract
Following the advances in modern head-mounted displays, research exploring the human experience of virtual environments (VEs) has seen a surge in interest. Researchers have examined how to promote individuals' sense of presence, i.e., their experience of "being" in the VE, as well as how to diminish the negative side effects of cybersickness. Studies investigating the relationship between sense of presence and cybersickness have reported heterogeneous results. Authors who found a positive relation have argued that the two phenomena have shared cognitive underpinnings; however, recent literature suggests that positive associations can be explained by the confounding factor of immersion. The current study investigates how cybersickness and sense of presence are associated and how they develop over time. During the experiment, participants were exposed to a virtual roller coaster and orally presented with questions designed to quantify their perceived sense of presence and cybersickness. The results indicate that cybersickness and sense of presence are both modulated by the time spent in the virtual setting. The short measures used for sense of presence and cybersickness proved to be reliable alternatives to multi-item questionnaires.
(This article belongs to the Special Issue 3D Human–Computer Interaction (Volume II))
Open Access Article
Designing Tangible as an Orchestration Tool for Collaborative Activities
Multimodal Technol. Interact. 2022, 6(5), 30; https://doi.org/10.3390/mti6050030 - 19 Apr 2022
Abstract
Orchestrating collaborative learning activities is a challenge, even with the support of technology. Tangibles as orchestration tools represent an ambient and embodied approach to sharing information about the learning content and the flow of the activity, thus facilitating both collaboration and its orchestration. We therefore propose tangibles as a solution for orchestrating productive collaborative learning. Concretely, this paper makes three contributions toward this end. First, we analyze the design space of tangibles as orchestration tools for collaborative learning and identify twelve essential dimensions. Second, we present five tangible tools for collaborative learning activities in face-to-face and online classrooms. Third, we present principles and challenges for designing tangibles that orchestrate collaborative learning, based on findings from ten educational experts who evaluated these tools using a usability scale and open questions. The key findings were: (1) the tools had good usability; (2) their main advantages are ease of use and support for collaborative learning; (3) their main disadvantages are limited functionality and the difficulty of scaling to more users. We conclude with reflections and recommendations for the future design of tangibles for orchestration.
Open Access Article
A Trustworthy Robot Buddy for Primary School Children
Multimodal Technol. Interact. 2022, 6(4), 29; https://doi.org/10.3390/mti6040029 - 14 Apr 2022
Abstract
Social robots hold potential for supporting children's well-being in classrooms. However, it is unclear which robot features contribute to a trustworthy relationship between a child and a robot, and whether social robots can reduce stress as well as traditional interventions such as listening to classical music. We set up two experiments in which children interacted with a robot in a real-life school environment. Our main results show that, regardless of the robotic features tested (intonation, male/female voice, and humor), most children tend to trust a robot during their first interaction. Adding humor to the robot's dialogue seems to have a negative impact on children's trust, especially for girls and children without prior experience with robots. Comparing a classical-music session with a social robot interaction, we found no significant differences: both interventions lowered children's stress levels, though not significantly. Our results show the potential for robots to build trustworthy interactions with children and to lower children's stress levels. Considering these results, we believe that social robots provide a new tool for children to make their feelings explicit, enabling them to share negative experiences (such as bullying) that would otherwise go unnoticed.
(This article belongs to the Special Issue Intricacies of Child–Robot Interaction)
Open Access Article
Emotion Classification from Speech and Text in Videos Using a Multimodal Approach
Multimodal Technol. Interact. 2022, 6(4), 28; https://doi.org/10.3390/mti6040028 - 12 Apr 2022
Abstract
Emotion classification is a research area with a very active literature spanning natural language processing, multimedia data, semantic knowledge discovery, social network mining, and text and multimedia data mining. This paper addresses the problem of emotion classification and proposes a method for classifying the emotions expressed in multimodal data extracted from videos. The proposed method models multimodal data as a sequence of features extracted from facial expressions, speech, gestures, and text, using a linguistic approach. Each emotion is modeled by a hidden Markov model, and each multimodal sequence is assigned to the emotion whose model fits it best. The trained models are evaluated on samples of multimodal sentences associated with seven basic emotions. The experimental results demonstrate a good classification rate for emotions.
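The per-emotion HMM scheme this abstract describes can be sketched in a few lines: one HMM is trained per emotion, and a new observation sequence is assigned to the emotion whose model gives it the highest likelihood. The sketch below is purely illustrative and is not the paper's pipeline: it uses discrete toy observation symbols and made-up parameters, whereas the actual method operates on continuous multimodal features.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-symbol HMM."""
    alpha = start * emit[:, obs[0]]          # joint prob of first symbol and state
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()              # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o] # predict next state, weight by emission
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

def classify(obs, models):
    """Assign the emotion whose HMM gives the sequence the highest likelihood."""
    return max(models, key=lambda emotion: forward_loglik(obs, *models[emotion]))

# Toy two-state models over a 3-symbol feature alphabet (hypothetical parameters):
# each entry is (initial distribution, transition matrix, emission matrix).
models = {
    "happy": (np.array([0.6, 0.4]),
              np.array([[0.7, 0.3], [0.3, 0.7]]),
              np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])),
    "sad":   (np.array([0.6, 0.4]),
              np.array([[0.7, 0.3], [0.3, 0.7]]),
              np.array([[0.1, 0.1, 0.8], [0.1, 0.1, 0.8]])),
}

print(classify([0, 0, 1, 0], models))  # symbols the "happy" model emits most often
```

In a full system, one such model would exist for each of the seven basic emotions, with its parameters fit on labeled multimodal sequences (e.g., via Baum-Welch) rather than set by hand.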
(This article belongs to the Special Issue Feature Papers of MTI in 2021)
Topics
Topic in
Entropy, Future Internet, Algorithms, Computation, MAKE, MTI
Interactive Artificial Intelligence and Man-Machine Communication
Topic Editors: Christos Troussas, Cleo Sgouropoulou, Akrivi Krouska, Ioannis Voyiatzis, Athanasios Voulodimos
Deadline: 11 December 2022
Topic in
AI, Algorithms, Information, MTI, Sensors
Lightweight Deep Neural Networks for Video Analytics
Topic Editors: Amin Ullah, Tanveer Hussain, Mohammad Farhad Bulbul
Deadline: 31 December 2023
Special Issues
Special Issue in
MTI
Multimodal Conversational Interaction and Interfaces, Volume II
Guest Editors: Gabriel Murray, Catharine Oertel, Yukiko I. Nakano
Deadline: 30 June 2022
Special Issue in
MTI
Cooperative Intelligence in Automated Driving
Guest Editors: Andreas Riener, Myounghoon Jeon (Philart), Ronald Schroeter
Deadline: 22 July 2022
Special Issue in
MTI
Design for Wellbeing at Scale
Guest Editors: Naseem Ahmadpour, Michael Burmester, Dorian Peters, Anja Thieme
Deadline: 31 July 2022
Special Issue in
MTI
User Interfaces for Cyclists
Guest Editors: Andrii Matviienko, Markus Löchtefeld
Deadline: 31 August 2022