Search Results (1,410)

Search Parameters:
Keywords = emotion recognition

20 pages, 1112 KiB  
Article
Multimodal Emotion Recognition Method Based on Domain Generalization and Graph Neural Networks
by Jinbao Xie, Yulong Wang, Tianxin Meng, Jianqiao Tai, Yueqian Zheng and Yury I. Varatnitski
Electronics 2025, 14(5), 885; https://doi.org/10.3390/electronics14050885 - 23 Feb 2025
Abstract
In recent years, multimodal sentiment analysis has attracted increasing attention from researchers owing to the rapid development of human–computer interactions. Sentiment analysis is an important task for understanding dialogues. However, with the increase of multimodal data, the processing of individual modality features and the methods for multimodal feature fusion have become more significant for research. Existing methods that handle the features of each modality separately are not suitable for subsequent multimodal fusion and often fail to capture sufficient global and local information. Therefore, this study proposes a novel multimodal sentiment analysis method based on domain generalization and graph neural networks. The main characteristic of this method is that it considers the features of each modality as domains. It extracts domain-specific and cross-domain-invariant features, thereby facilitating cross-domain generalization. Generalized features are more suitable for multimodal fusion. Graph neural networks were employed to extract global and local information from the dialogue to capture the emotional changes of the speakers. Specifically, global representations were captured by modeling cross-modal interactions at the dialogue level, whereas local information was typically inferred from temporal information or the emotional changes of the speakers. The method proposed in this study outperformed existing models on the IEMOCAP, CMU-MOSEI, and MELD datasets by 0.97%, 1.09% (for seven-class classification), and 0.65% in terms of weighted F1 score, respectively. This clearly demonstrates that the domain-generalized features proposed in this study are better suited for subsequent multimodal fusion, and that the model developed here is more effective at capturing both global and local information. Full article
(This article belongs to the Special Issue Multimodal Learning and Transfer Learning)
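
The abstract above describes treating each modality as a domain and extracting domain-specific plus cross-domain-invariant features before fusion. As a rough illustration only (the paper's actual encoders, graph neural network components, and losses are not reproduced here), a minimal PyTorch sketch of that private/shared split might look like the following; all layer sizes, modality names, and the toy invariance penalty are assumptions:

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, hid_dim))

    def forward(self, x):
        return self.net(x)

class DomainGeneralizedFusion(nn.Module):
    def __init__(self, dims, hid_dim=128, n_classes=7):
        super().__init__()
        # One private (domain-specific) encoder per modality ("domain").
        self.private = nn.ModuleDict({m: ModalityEncoder(d, hid_dim) for m, d in dims.items()})
        # Per-modality projection into a common space, then one shared encoder
        # whose outputs are encouraged to be cross-domain-invariant.
        self.proj = nn.ModuleDict({m: nn.Linear(d, hid_dim) for m, d in dims.items()})
        self.shared = ModalityEncoder(hid_dim, hid_dim)
        self.classifier = nn.Linear(2 * hid_dim * len(dims), n_classes)

    def forward(self, feats):
        private = [self.private[m](x) for m, x in feats.items()]
        shared = [self.shared(torch.relu(self.proj[m](x))) for m, x in feats.items()]
        logits = self.classifier(torch.cat(private + shared, dim=-1))
        # Toy invariance penalty: pull the shared representations of different
        # modalities together so that fusion sees domain-generalized features.
        inv = sum(((a - b) ** 2).mean() for a in shared for b in shared) / len(shared) ** 2
        return logits, inv

# Utterance-level feature vectors for three modalities (sizes are arbitrary).
feats = {"text": torch.randn(4, 300), "audio": torch.randn(4, 74), "video": torch.randn(4, 35)}
model = DomainGeneralizedFusion({m: x.shape[1] for m, x in feats.items()})
logits, inv = model(feats)
print(logits.shape, inv.item())
```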

16 pages, 603 KiB  
Article
Overcoming Challenges in Video-Based Health Monitoring: Real-World Implementation, Ethics, and Data Considerations
by Simão Ferreira, Catarina Marinheiro, Catarina Mateus, Pedro Pereira Rodrigues, Matilde A. Rodrigues and Nuno Rocha
Sensors 2025, 25(5), 1357; https://doi.org/10.3390/s25051357 - 22 Feb 2025
Abstract
In the context of evolving healthcare technologies, this study investigates the application of AI and machine learning in video-based health monitoring systems, focusing on the challenges and potential of implementing such systems in real-world scenarios, specifically for knowledge workers. The research underscores the criticality of addressing technological, ethical, and practical hurdles in deploying these systems outside controlled laboratory environments. Methodologically, the study spanned three months and employed advanced facial recognition technology embedded in participants’ computing devices to collect physiological metrics such as heart rate, blinking frequency, and emotional states, thereby contributing to a stress detection dataset. This approach ensured data privacy and aligns with ethical standards. The results reveal significant challenges in data collection and processing, including biases in video datasets, the need for high-resolution videos, and the complexities of maintaining data quality and consistency, with 42% (after adjustments) of data lost. In conclusion, this research emphasizes the necessity for rigorous, ethical, and technologically adapted methodologies to fully realize the benefits of these systems in diverse healthcare contexts. Full article
29 pages, 3542 KiB  
Article
Gamified Engagement for Data Crowdsourcing and AI Literacy: An Investigation in Affective Communication Through Speech Emotion Recognition
by Eleni Siamtanidou, Lazaros Vrysis, Nikolaos Vryzas and Charalampos A. Dimoulas
Societies 2025, 15(3), 54; https://doi.org/10.3390/soc15030054 - 22 Feb 2025
Abstract
This research investigates the utilization of entertainment approaches, such as serious games and gamification technologies, to address various challenges and implement targeted tasks. Specifically, it details the design and development of an innovative gamified application named “J-Plus”, aimed at both professionals and non-professionals in journalism. This application facilitates the enjoyable, efficient, and high-quality collection of emotionally tagged speech samples, enhancing the performance and robustness of speech emotion recognition (SER) systems. Additionally, these approaches offer significant educational benefits, providing users with knowledge about emotional speech and artificial intelligence (AI) mechanisms while promoting digital skills. This project was evaluated by 48 participants, with 44 engaging in quantitative assessments and 4 forming an expert group for qualitative methodologies. This evaluation validated the research questions and hypotheses, demonstrating the application’s diverse benefits. Key findings indicate that gamified features can effectively support learning and attract users, with approximately 70% of participants agreeing that serious games and gamification could enhance their motivation to practice and improve their emotional speech. Additionally, 50% of participants identified social interaction features, such as collaboration, as most beneficial for fostering motivation and commitment. The integration of these elements supports reliable and extensive data collection and the advancement of AI algorithms while concurrently developing various skills, such as emotional speech articulation and digital literacy. This paper advocates for the creation of collaborative environments and digital communities through crowdsourcing, balancing technological innovation in the SER sector. Full article

23 pages, 619 KiB  
Article
Electroencephalogram Based Emotion Recognition Using Hybrid Intelligent Method and Discrete Wavelet Transform
by Duy Nguyen, Minh Tuan Nguyen and Kou Yamada
Appl. Sci. 2025, 15(5), 2328; https://doi.org/10.3390/app15052328 - 21 Feb 2025
Abstract
Electroencephalography-based emotion recognition is essential for brain-computer interfaces combined with artificial intelligence. This paper proposes a novel algorithm for human emotion detection using a hybrid paradigm of convolutional neural networks and a boosting model. The proposed algorithm employs two subsets of 18 and 14 features extracted from four sub-bands using the discrete wavelet transform. These subsets are identified as the most relevant among 42 original input features, which are extracted from two subsets of 8 and 6 productive channels using a dual genetic algorithm combined with a subject-wise 5-fold cross-validation procedure, in which the first and second genetic algorithms select the efficient channels and the optimal feature subsets, respectively. The feature subsets are evaluated with different intelligent models and a subject-wise 5-fold cross-validation procedure on the validation set. The proposed algorithm produces an accuracy of 70.43%/76.05%, precision of 69.88%/74.57%, recall of 98.70%/99.17%, and an F1 score of 81.83%/85.13% for valence/arousal classification, which suggests that the frontal and left regions of the cortex are especially associated with human emotions. Full article
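
To make the feature-extraction step concrete, here is a minimal sketch of four-level discrete wavelet decomposition of a single EEG channel with a few per-sub-band statistics, using the pywt library; the wavelet choice, the 42-feature set, the channel selection, and the genetic-algorithm search from the paper are not reproduced:

```python
# Illustrative only: four-level DWT of one EEG channel plus simple
# per-sub-band statistics, not the authors' exact pipeline.
import numpy as np
import pywt

def dwt_subband_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA4, cD4, cD3, cD2, cD1]
    feats = []
    for c in coeffs:
        feats.extend([float(np.sum(c ** 2)),   # sub-band energy
                      float(np.mean(c)),
                      float(np.std(c))])
    return np.array(feats)

eeg = np.random.randn(128 * 60)          # 60 s of a single channel at 128 Hz (synthetic)
print(dwt_subband_features(eeg).shape)   # 15 features from 5 coefficient arrays
```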

20 pages, 4882 KiB  
Article
Empowering Recovery: The T-Rehab System’s Semi-Immersive Approach to Emotional and Physical Well-Being in Tele-Rehabilitation
by Hayette Hadjar, Binh Vu and Matthias Hemmje
Electronics 2025, 14(5), 852; https://doi.org/10.3390/electronics14050852 - 21 Feb 2025
Abstract
The T-Rehab System delivers a semi-immersive tele-rehabilitation experience by integrating Affective Computing (AC) through facial expression analysis and contactless heartbeat monitoring. T-Rehab closely monitors patients’ mental health as they engage in a personalized, semi-immersive Virtual Reality (VR) game on a desktop PC, using a webcam with MediaPipe to track their hand movements for interactive exercises, allowing the system to tailor treatment content for increased engagement and comfort. T-Rehab’s evaluation comprises two assessments: system performance and cognitive walkthroughs. The first evaluation focuses on system performance, assessing the tested game, middleware, and facial emotion monitoring to ensure hardware compatibility and effective support for AC, gaming, and tele-rehabilitation. The second evaluation uses cognitive walkthroughs to examine usability, identifying potential issues in emotion detection and tele-rehabilitation. Together, these evaluations provide insights into T-Rehab’s functionality, usability, and impact in supporting both physical rehabilitation and emotional well-being. The thorough integration of technology within T-Rehab ensures a holistic approach to tele-rehabilitation, allowing patients to participate comfortably and efficiently from anywhere. This technique not only improves physical therapy outcomes but also promotes mental resilience, marking an important step forward in tele-rehabilitation practices. Full article
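
The abstract mentions webcam-based hand tracking with MediaPipe to drive the exercises. A bare-bones capture loop of that kind is sketched below; it is not T-Rehab's code, and the game logic, facial analysis, and heartbeat monitoring are omitted:

```python
# Minimal webcam hand-tracking loop with MediaPipe, similar in spirit to the
# interaction capture described above.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            # Landmark 8 is the index fingertip; its normalized x/y could drive an exercise cursor.
            tip = hand.landmark[8]
            print(f"index tip at ({tip.x:.2f}, {tip.y:.2f})")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```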

21 pages, 7761 KiB  
Article
Acoustic Feature Excitation-and-Aggregation Network Based on Multi-Task Learning for Speech Emotion Recognition
by Xin Qi, Qing Song, Guowei Chen, Pengzhou Zhang and Yao Fu
Electronics 2025, 14(5), 844; https://doi.org/10.3390/electronics14050844 - 21 Feb 2025
Abstract
In recent years, substantial research has focused on emotion recognition using multi-stream speech representations. In existing multi-stream speech emotion recognition (SER) approaches, effectively extracting and fusing speech features is crucial. To overcome the bottleneck in SER caused by the fusion of inter-feature information, including challenges like modeling complex feature relations and the inefficiency of fusion methods, this paper proposes an SER framework based on multi-task learning, named AFEA-Net. The framework consists of a speech emotion alignment learning (SEAL), an acoustic feature excitation-and-aggregation mechanism (AFEA), and a continuity learning. First, SEAL aligns sentiment information between WavLM and Fbank features. Then, we design an acoustic feature excitation-and-aggregation mechanism to adaptively calibrate and merge the two features. Furthermore, we introduce a continuity learning strategy to explore the distinctiveness and complementarity of dual-stream features from intra- and inter-speech. Experimental results on the publicly available IEMOCAP and RAVDESS sentiment datasets show that our proposed approach outperforms state-of-the-art SER approaches. Specifically, we achieve 75.1% WA, 75.3% UAR, 76% precision, and 75.4% F1-score on IEMOCAP, and 80.3%, 80.6%, 80.8%, and 80.4% on RAVDESS, respectively. Full article
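
The excitation-and-aggregation idea can be illustrated with a generic squeeze-and-excitation-style gate over the concatenated WavLM and Fbank streams. This is only a sketch under assumed tensor shapes, not the paper's AFEA module or its alignment and continuity losses:

```python
import torch
import torch.nn as nn

class ExciteAndAggregate(nn.Module):
    """Sketch of channel-wise excitation followed by aggregation of two streams.
    Dimensions and the gating form are assumptions, not the paper's design."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim // reduction), nn.ReLU(),
                                  nn.Linear(dim // reduction, 2 * dim), nn.Sigmoid())
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, wavlm_feat, fbank_feat):
        # wavlm_feat, fbank_feat: (batch, time, dim), assumed already aligned in time.
        x = torch.cat([wavlm_feat, fbank_feat], dim=-1)
        squeeze = x.mean(dim=1)                    # "squeeze": pool over time
        weights = self.gate(squeeze).unsqueeze(1)  # "excite": per-channel weights
        return self.out(x * weights)               # "aggregate": recalibrate and merge

fused = ExciteAndAggregate(dim=256)(torch.randn(2, 100, 256), torch.randn(2, 100, 256))
print(fused.shape)  # torch.Size([2, 100, 256])
```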

73 pages, 4804 KiB  
Systematic Review
From Neural Networks to Emotional Networks: A Systematic Review of EEG-Based Emotion Recognition in Cognitive Neuroscience and Real-World Applications
by Evgenia Gkintoni, Anthimos Aroutzidis, Hera Antonopoulou and Constantinos Halkiopoulos
Brain Sci. 2025, 15(3), 220; https://doi.org/10.3390/brainsci15030220 - 20 Feb 2025
Abstract
Background/Objectives: This systematic review presents how neural and emotional networks are integrated into EEG-based emotion recognition, bridging the gap between cognitive neuroscience and practical applications. Methods: Following PRISMA, 64 studies were reviewed that outlined the latest feature extraction and classification developments using deep learning models such as CNNs and RNNs. Results: Indeed, the findings showed that the multimodal approaches were practical, especially the combinations involving EEG with physiological signals, thus improving the accuracy of classification, even surpassing 90% in some studies. Key signal processing techniques used during this process include spectral features, connectivity analysis, and frontal asymmetry detection, which helped enhance the performance of recognition. Despite these advances, challenges remain more significant in real-time EEG processing, where a trade-off between accuracy and computational efficiency limits practical implementation. High computational cost is prohibitive to the use of deep learning models in real-world applications, therefore indicating a need for the development and application of optimization techniques. Aside from this, the significant obstacles are inconsistency in labeling emotions, variation in experimental protocols, and the use of non-standardized datasets regarding the generalizability of EEG-based emotion recognition systems. Discussion: These challenges include developing adaptive, real-time processing algorithms, integrating EEG with other inputs like facial expressions and physiological sensors, and a need for standardized protocols for emotion elicitation and classification. Further, related ethical issues with respect to privacy, data security, and machine learning model biases need to be much more proclaimed to responsibly apply research on emotions to areas such as healthcare, human–computer interaction, and marketing. Conclusions: This review provides critical insight into and suggestions for further development in the field of EEG-based emotion recognition toward more robust, scalable, and ethical applications by consolidating current methodologies and identifying their key limitations. Full article
(This article belongs to the Section Computational Neuroscience and Neuroinformatics)
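
One of the hand-crafted markers the review lists, frontal asymmetry, is commonly computed as the log-power difference of the alpha band between right and left frontal electrodes. A minimal version using Welch spectra is shown below; the channel names, band edges, and sampling rate are illustrative and not taken from any specific reviewed study:

```python
# Frontal alpha asymmetry: ln(right alpha power) - ln(left alpha power).
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs, band=(8.0, 13.0)):
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(pxx[mask], f[mask])

fs = 256
f3 = np.random.randn(fs * 30)   # left frontal channel (synthetic stand-in)
f4 = np.random.randn(fs * 30)   # right frontal channel (synthetic stand-in)
faa = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
print(f"frontal alpha asymmetry: {faa:.3f}")
```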

30 pages, 5210 KiB  
Review
Transformers in EEG Analysis: A Review of Architectures and Applications in Motor Imagery, Seizure, and Emotion Classification
by Elnaz Vafaei and Mohammad Hosseini
Sensors 2025, 25(5), 1293; https://doi.org/10.3390/s25051293 - 20 Feb 2025
Abstract
Transformers have rapidly influenced research across various domains. With their superior capability to encode long sequences, they have demonstrated exceptional performance, outperforming existing machine learning methods. There has been a rapid increase in the development of transformer-based models for EEG analysis. The high volumes of recently published papers highlight the need for further studies exploring transformer architectures, key components, and models employed particularly in EEG studies. This paper aims to explore four major transformer architectures: Time Series Transformer, Vision Transformer, Graph Attention Transformer, and hybrid models, along with their variants in recent EEG analysis. We categorize transformer-based EEG studies according to the most frequent applications in motor imagery classification, emotion recognition, and seizure detection. This paper also highlights the challenges of applying transformers to EEG datasets and reviews data augmentation and transfer learning as potential solutions explored in recent years. Finally, we provide a summarized comparison of the most recent reported results. We hope this paper serves as a roadmap for researchers interested in employing transformer architectures in EEG analysis. Full article
(This article belongs to the Section Biomedical Sensors)
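
For readers unfamiliar with the reviewed architectures, a bare-bones time-series transformer over EEG epochs (tokens are time steps, input features are channels) looks roughly like this; the layer sizes, pooling, and classification head are arbitrary choices and do not correspond to any particular model in the review:

```python
import torch
import torch.nn as nn

class TinyEEGTransformer(nn.Module):
    def __init__(self, n_channels=32, d_model=64, n_classes=3):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                # x: (batch, time, channels)
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1))  # pool over time, then classify

logits = TinyEEGTransformer()(torch.randn(8, 256, 32))
print(logits.shape)  # torch.Size([8, 3])
```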

15 pages, 587 KiB  
Systematic Review
AI Applications to Reduce Loneliness Among Older Adults: A Systematic Review of Effectiveness and Technologies
by Yuyi Yang, Chenyu Wang, Xiaoling Xiang and Ruopeng An
Healthcare 2025, 13(5), 446; https://doi.org/10.3390/healthcare13050446 - 20 Feb 2025
Abstract
Background/Objectives: Loneliness among older adults is a prevalent issue, significantly impacting their quality of life and increasing the risk of physical and mental health complications. The application of artificial intelligence (AI) technologies in behavioral interventions offers a promising avenue to overcome challenges in designing and implementing interventions to reduce loneliness by enabling personalized and scalable solutions. This study systematically reviews the AI-enabled interventions in addressing loneliness among older adults, focusing on the effectiveness and underlying technologies used. Methods: A systematic search was conducted across eight electronic databases, including PubMed and Web of Science, for studies published up to 31 January 2024. Inclusion criteria were experimental studies involving AI applications to mitigate loneliness among adults aged 55 and older. Data on participant demographics, intervention characteristics, AI methodologies, and effectiveness outcomes were extracted and synthesized. Results: Nine studies were included, comprising six randomized controlled trials and three pre–post designs. The most frequently implemented AI technologies included speech recognition (n = 6) and emotion recognition and simulation (n = 5). Intervention types varied, with six studies employing social robots, two utilizing personal voice assistants, and one using a digital human facilitator. Six studies reported significant reductions in loneliness, particularly those utilizing social robots, which demonstrated emotional engagement and personalized interactions. Three studies reported non-significant effects, often due to shorter intervention durations or limited interaction frequencies. Conclusions: AI-driven interventions show promise in reducing loneliness among older adults. Future research should focus on long-term, culturally competent solutions that integrate quantitative and qualitative findings to optimize intervention design and scalability. Full article

24 pages, 1339 KiB  
Article
Bridging Neuroscience and Machine Learning: A Gender-Based Electroencephalogram Framework for Guilt Emotion Identification
by Saima Raza Zaidi, Najeed Ahmed Khan and Muhammad Abul Hasan
Sensors 2025, 25(4), 1222; https://doi.org/10.3390/s25041222 - 17 Feb 2025
Abstract
This study explores the link between the emotion “guilt” and human EEG data, and investigates the influence of gender differences on the expression of guilt and neutral emotions in response to visual stimuli. Additionally, the stimuli used in the study were developed to ignite guilt and neutral emotions. Two emotions, “guilt” and “neutral”, were recorded from 16 participants after these emotions were induced using storyboards as pictorial stimuli. These storyboards were developed based on various guilt-provoking events shared by another group of participants. In the pre-processing step, collected data were de-noised using bandpass filters and ICA, then segmented into smaller sections for further analysis. Two approaches were used to feed these data to the SVM classifier. First, the novel approach employed involved feeding the data to SVM classifier without computing any features. This method provided an average accuracy of 83%. In the second approach, data were divided into Alpha, Beta, Gamma, Theta and Delta frequency bands using Discrete Wavelet Decomposition. Afterward, the computed features, including entropy, Hjorth parameters and Band Power, were fed to SVM classifiers. This approach achieved an average accuracy of 63%. The findings of both classification methodologies indicate that females are more expressive in response to depicted stimuli and that their brain cells exhibit higher feature values. Moreover, females displayed higher accuracy than males in all bands except the Delta band. Full article
(This article belongs to the Section Intelligent Sensors)
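
The second pipeline described above (band features fed to an SVM) can be sketched as follows, with Hjorth parameters and Welch band power as example features; the wavelet-based band splitting, the entropy feature, segment lengths, and band edges used by the authors are simplified away, and the data below are synthetic:

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def hjorth(x):
    # Hjorth activity, mobility, and complexity of a 1-D segment.
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def band_power(x, fs, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), fs * 2))
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

def features(segment, fs=128):
    return np.array([*hjorth(segment),
                     band_power(segment, fs, 8, 13),    # alpha
                     band_power(segment, fs, 13, 30)])  # beta

# Synthetic stand-in for labeled EEG segments (guilt = 1, neutral = 0).
rng = np.random.default_rng(0)
X = np.array([features(rng.standard_normal(128 * 4)) for _ in range(60)])
y = rng.integers(0, 2, size=60)
clf = SVC(kernel="rbf").fit(X[:40], y[:40])
print("toy accuracy:", clf.score(X[40:], y[40:]))
```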

16 pages, 2242 KiB  
Article
Effective Data Augmentation Techniques for Arabic Speech Emotion Recognition Using Convolutional Neural Networks
by Wided Bouchelligua, Reham Al-Dayil and Areej Algaith
Appl. Sci. 2025, 15(4), 2114; https://doi.org/10.3390/app15042114 - 17 Feb 2025
Abstract
This paper investigates the effectiveness of various data augmentation techniques for enhancing Arabic speech emotion recognition (SER) using convolutional neural networks (CNNs). Utilizing the Saudi Dialect and BAVED datasets, we address the challenges of limited and imbalanced data commonly found in Arabic SER. To improve model performance, we apply augmentation techniques such as noise addition, time shifting, increasing volume, and reducing volume. Additionally, we examine the optimal number of augmentations required to achieve the best results. Our experiments reveal that these augmentations significantly enhance the CNN’s ability to recognize emotions, with certain techniques proving more effective than others. Furthermore, the number of augmentations plays a critical role in balancing model accuracy. The Saudi Dialect dataset achieved its best results with two augmentations (increasing volume and decreasing volume), reaching an accuracy of 96.81%. Similarly, the BAVED dataset demonstrated optimal performance with a combination of three augmentations (noise addition, increasing volume, and reducing volume), achieving an accuracy of 92.60%. These findings indicate that carefully selected augmentation strategies can greatly improve the performance of CNN-based SER systems, particularly in the context of Arabic speech. This research underscores the importance of tailored augmentation techniques to enhance SER performance and sets a foundation for future advancements in this field. Full article
(This article belongs to the Special Issue Natural Language Processing: Novel Methods and Applications)
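
The four waveform-level augmentations named in the abstract are straightforward to sketch; the exact noise level, shift range, and gain factors used in the paper are not specified here, so the values below are placeholders:

```python
import numpy as np

def add_noise(x, noise_scale=0.005):
    return x + noise_scale * np.random.randn(len(x))

def time_shift(x, max_shift=1600):
    return np.roll(x, np.random.randint(-max_shift, max_shift))

def change_volume(x, gain):          # gain > 1 increases volume, < 1 reduces it
    return x * gain

y = np.random.randn(16000)           # 1 s of audio at 16 kHz (synthetic stand-in)
augmented = [add_noise(y), time_shift(y), change_volume(y, 1.5), change_volume(y, 0.7)]
print([a.shape for a in augmented])
```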

21 pages, 417 KiB  
Article
Modeling and Adaptive Resource Management for Voice-Based Speaker and Emotion Identification Through Smart Badges
by Xiaowei Liu and Alex Doboli
Electronics 2025, 14(4), 781; https://doi.org/10.3390/electronics14040781 - 17 Feb 2025
Abstract
The number of new applications addressing human activities in social settings, like groups and organizations, is on the rise. Devising an effective data collection infrastructure is critical for such applications. This paper describes a computational model and the related algorithms to design a sociometric badge for efficient data collection in applications in which speaker and emotion recognition and tracking are essential. A new computational model describes the characteristics of verbal and emotional interactions in a group. To address the requirements of changing group interactions, a self-adaptation module optimizes badge resource management to minimize data loss and modeling errors. Experiments considered scenarios for slow and regular shifts in group interactions. The proposed self-adaptation method reduces data loss by 51% to 90%, modeling errors by 28% to 44%, and computing load by 38% to 52%. Full article
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)

17 pages, 3001 KiB  
Article
Performance Improvement of Speech Emotion Recognition Using ResNet Model with Data Augmentation–Saturation
by Minjeong Lee and Miran Lee
Appl. Sci. 2025, 15(4), 2088; https://doi.org/10.3390/app15042088 - 17 Feb 2025
Abstract
Over the past five years, the proliferation of virtual reality platforms and the advancement of metahuman technologies have underscored the importance of natural interaction and emotional expression. As a result, there has been significant research activity focused on developing emotion recognition techniques based on speech data. Despite significant progress in emotion recognition research for the Korean language, a shortage of speech databases applicable to such research has been regarded as the most critical problem in this field, leading to overfitting issues in several models developed by previous studies. To address the issue of overfitting caused by limited data availability in the field of Korean speech emotion recognition (SER), this study focuses on integrating the data augmentation–saturation (DA-S) technique into a traditional ResNet model to enhance SER performance. The DA-S technique enhances data augmentation by adjusting the saturation of an image. We used 11,192 utterances provided by AI-HUB, which were converted into images to extract features such as the pitch and intensity of speech. The DA-S technique was then applied to this dataset, using weights of 0 and 2, to augment the number of utterances to 33,576. This augmented dataset was used to classify four emotion categories: happiness, sadness, anger, and neutrality. The results of this study showed that the proposed model using the DA-S technique overcame overfitting issues. Furthermore, its performance for SER increased by 34.19% compared to that of existing ResNet models not using the DA-S technique. This demonstrates that the DA-S technique effectively enhances model performance with limited data and may be applicable to specific areas such as stress monitoring and mental health support. Full article
(This article belongs to the Special Issue Advanced Technologies and Applications of Emotion Recognition)
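
A saturation-based duplication of feature images, in the spirit of DA-S with the factors 0 and 2 mentioned above, can be sketched with Pillow; the image size and the pitch/intensity rendering step are assumptions, not the authors' pipeline:

```python
from PIL import Image, ImageEnhance
import numpy as np

def saturate(img, factor):
    # factor 0 = grayscale, 1 = original saturation, 2 = doubled saturation.
    return ImageEnhance.Color(img).enhance(factor)

# Stand-in for a pitch/intensity feature image rendered from one utterance.
img = Image.fromarray((np.random.rand(128, 128, 3) * 255).astype("uint8"))
augmented = [img, saturate(img, 0.0), saturate(img, 2.0)]  # 1 utterance -> 3 training images
print(len(augmented))
```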

16 pages, 16807 KiB  
Article
Comparative Analysis of Product Information Provision Methods: Traditional E-Commerce vs. 3D VR Shopping
by Hui-Jun Kim, Seok Chan Jeong and Sung-Hee Kim
Appl. Sci. 2025, 15(4), 2089; https://doi.org/10.3390/app15042089 - 17 Feb 2025
Abstract
VR shopping combines the advantages of both online and offline shopping, demonstrating significant potential. In online settings, where consumers cannot directly experience products, providing detailed product information is essential. However, research on the impact of product information provision methods in VR shopping on perceived product emotion and ease of product recognition is limited. Therefore, we compare the effects of the existing e-commerce product information provision method and the VR shopping method on perceived product emotion and product information recognition. We focus on shoes as a product where emotion and detailed information heavily influence the final purchase decision. The results showed that the VR shopping method delivered product emotions more consistently and demonstrated higher product information recognition ease. This study is significant as it provides practical verification of the effectiveness of product information provision methods in VR shopping and suggests directions for future research in this field. Full article

23 pages, 2838 KiB  
Article
Investigating Eye Movements to Examine Attachment-Related Differences in Facial Emotion Perception and Face Memory
by Karolin Török-Suri, Kornél Németh, Máté Baradits and Gábor Csukly
J. Imaging 2025, 11(2), 60; https://doi.org/10.3390/jimaging11020060 - 16 Feb 2025
Abstract
Individual differences in attachment orientations may influence how we process emotionally significant stimuli. As one of the most important sources of emotional information are facial expressions, we examined whether there is an association between adult attachment styles (i.e., scores on the ECR questionnaire, which measures the avoidance and anxiety dimensions of attachment), facial emotion perception and face memory in a neurotypical sample. Trait and state anxiety were also measured as covariates. Eye-tracking was used during the emotion decision task (happy vs. sad faces) and the subsequent facial recognition task; the length of fixations to different face regions was measured as the dependent variable. Linear mixed models suggested that differences during emotion perception may result from longer fixations in individuals with insecure (anxious or avoidant) attachment orientations. This effect was also influenced by individual state and trait anxiety measures. Eye movements during the recognition memory task, however, were not related to either of the attachment dimensions; only trait anxiety had a significant effect on the length of fixations in this condition. The results of our research may contribute to a more accurate understanding of facial emotion perception in the light of attachment styles, and their interaction with anxiety characteristics. Full article
(This article belongs to the Special Issue Human Attention and Visual Cognition (2nd Edition))
