Search Results (2,201)

Search Parameters:
Keywords = augmented reality

17 pages, 3569 KiB  
Article
Wearable Biosensor Smart Glasses Based on Augmented Reality and Eye Tracking
by Lina Gao, Changyuan Wang and Gongpu Wu
Sensors 2024, 24(20), 6740; https://doi.org/10.3390/s24206740 - 20 Oct 2024
Abstract
With the rapid development of wearable biosensor technology, the combination of head-mounted displays and augmented reality (AR) technology has shown great potential for health monitoring and biomedical diagnosis applications. However, further optimizing its performance and improving data interaction accuracy remain crucial issues that must be addressed. In this study, we develop smart glasses based on augmented reality and eye tracking technology. Through real-time information interaction with the server, the smart glasses realize accurate scene perception and analysis of the user’s intention and combine with mixed-reality display technology to provide dynamic and real-time intelligent interaction services. A multi-level hardware architecture and optimized data processing process are adopted during the research process to enhance the system’s real-time accuracy. Meanwhile, combining the deep learning method with the geometric model significantly improves the system’s ability to perceive user behavior and environmental information in complex environments. The experimental results show that when the distance between the subject and the display is 1 m, the eye tracking accuracy of the smart glasses can reach 1.0° with an error of no more than ±0.1°. This study demonstrates that the effective integration of AR and eye tracking technology dramatically improves the functional performance of smart glasses in multiple scenarios. Future research will further optimize smart glasses’ algorithms and hardware performance, enhance their application potential in daily health monitoring and medical diagnosis, and provide more possibilities for the innovative development of wearable devices in medical and health management. Full article
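The reported figures (1.0° gaze accuracy with ±0.1° error at a 1 m viewing distance) can be converted into an on-screen distance with basic trigonometry. The sketch below is an illustration of that conversion, not code from the paper:

```python
import math

def gaze_error_mm(viewing_distance_m: float, angle_deg: float) -> float:
    """Linear offset on the display (in mm) of a gaze estimate that is
    off by angle_deg at the given viewing distance."""
    return viewing_distance_m * math.tan(math.radians(angle_deg)) * 1000

# 1.0 degree of angular error at 1 m corresponds to roughly 17.5 mm on the
# display; the +/-0.1 degree error band adds roughly +/-1.7 mm on top.
print(round(gaze_error_mm(1.0, 1.0), 1))
print(round(gaze_error_mm(1.0, 0.1), 1))
```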

23 pages, 702 KiB  
Article
VonEdgeSim: A Framework for Simulating IoT application in Volunteer Edge Computing
by Yousef Alsenani
Electronics 2024, 13(20), 4124; https://doi.org/10.3390/electronics13204124 - 19 Oct 2024
Abstract
Recently, various emerging technologies have been introduced to host IoT applications. Edge computing, utilizing volunteer devices, could be a feasible solution due to the significant and underutilized resources at the edge. However, cloud providers are still reluctant to offer it as an edge infrastructure service because of the unpredictable nature of volunteer resources. Volunteer edge computing introduces challenges such as reliability, trust, and availability. Testing this infrastructure is prohibitively expensive and not feasible in real-world scenarios. This emerging technology will not be fully realized until dedicated research and development efforts have substantiated its potential for running reliable services. Therefore, this paper proposes VonEdgeSim, a simulation of volunteer edge computing. To the best of our knowledge, it is the first and only simulation capable of mimicking volunteer behavior at the edge. Researchers and developers can utilize this simulation to test and develop resource management models. We conduct experiments with various IoT applications, including Augmented Reality, Infotainment, and Health Monitoring. Our results show that incorporating volunteer devices at the edge can significantly enhance system performance by reducing total task delay, and improving task execution time. This emphasizes the potential of volunteers to provide reliable services in an edge computing environment. The simulation code is publicly available for further development and testing. Full article
(This article belongs to the Section Computer Science & Engineering)
22 pages, 3711 KiB  
Article
Offload Shaping for Wearable Cognitive Assistance
by Roger Iyengar, Qifei Dong, Chanh Nguyen, Padmanabhan Pillai and Mahadev Satyanarayanan
Electronics 2024, 13(20), 4083; https://doi.org/10.3390/electronics13204083 - 17 Oct 2024
Abstract
Edge computing has much lower elasticity than cloud computing because cloudlets have much smaller physical and electrical footprints than a data center. This hurts the scalability of applications that involve low-latency edge offload. We show how this problem can be addressed by leveraging the growing sophistication and compute capability of recent wearable devices. We investigate four Wearable Cognitive Assistance applications on three wearable devices, and show that the technique of offload shaping can significantly reduce network utilization and cloudlet load without compromising accuracy or performance. Our investigation considers the offload shaping strategies of mapping processes to different computing tiers, gating, and decluttering. We find that all three strategies offer a significant bandwidth savings compared to transmitting full camera images to a cloudlet. Two out of the three devices we test are capable of running all offload shaping strategies within a reasonable latency bound. Full article
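One of the strategies named above, gating, means running a cheap on-device check to decide whether a frame is worth transmitting at all. The sketch below illustrates the general idea with a simple sharpness gate; the metric and threshold are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def variance_of_laplacian(gray: np.ndarray) -> float:
    """Cheap sharpness proxy: variance of a 3x3 Laplacian response."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def should_offload(gray: np.ndarray, threshold: float = 0.5) -> bool:
    """Gate: only frames that pass the cheap on-device check are sent to
    the cloudlet, saving bandwidth on blurry or featureless frames."""
    return variance_of_laplacian(gray) > threshold

rng = np.random.default_rng(1)
flat_frame = np.full((64, 64), 0.5)   # featureless frame: dropped on-device
busy_frame = rng.random((64, 64))     # textured frame: offloaded
print(should_offload(flat_frame), should_offload(busy_frame))
```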
(This article belongs to the Special Issue AI for Edge Computing)

14 pages, 4454 KiB  
Case Report
Pioneering Augmented and Mixed Reality in Cranial Surgery: The First Latin American Experience
by Alberto Ramírez Romero, Andrea Rebeca Rodríguez Herrera, José Francisco Sánchez Cuellar, Raúl Enrique Cevallos Delgado and Edith Elizabeth Ochoa Martínez
Brain Sci. 2024, 14(10), 1025; https://doi.org/10.3390/brainsci14101025 - 16 Oct 2024
Abstract
Introduction: Augmented reality (AR) and mixed reality (MR) technologies have revolutionized cranial neurosurgery by overlaying digital information onto the surgical field, enhancing visualization, precision, and training. These technologies enable the real-time integration of preoperative imaging data, aiding in better decision-making and reducing operative risks. Despite challenges such as cost and specialized training needs, AR and MR offer significant benefits, including improved surgical outcomes and personalized surgical plans based on individual patient anatomy. Materials and Methods: This study describes three intracranial surgeries using AR and MR technologies at Hospital Ángeles Universidad, Mexico City, in 2023. Surgeries were performed with VisAR software 3 version and Microsoft HoloLens 2, transforming DICOM images into 3D models. Preoperative MRI and CT scans facilitated planning, and radiopaque tags ensured accurate image registration during surgery. Postoperative outcomes were assessed through clinical and imaging follow-up. Results: Three intracranial surgeries were performed with AR and MR assistance, resulting in successful outcomes with minimal postoperative complications. Case 1 achieved 80% tumor resection, Case 2 achieved near-total tumor resection, and Case 3 achieved complete lesion resection. All patients experienced significant symptom relief and favorable recoveries, demonstrating the precision and effectiveness of AR and MR in cranial surgery. Conclusions: This study demonstrates the successful use of AR and MR in cranial surgery, enhancing precision and clinical outcomes. Despite challenges like training and costs, these technologies offer significant benefits. Future research should focus on long-term outcomes and broader applications to validate their efficacy and cost-effectiveness in neurosurgery. Full article
(This article belongs to the Special Issue New Trends and Technologies in Modern Neurosurgery)

23 pages, 17790 KiB  
Technical Note
Development of a Modular Adjustable Wearable Haptic Device for XR Applications
by Ali Najm, Domna Banakou and Despina Michael-Grigoriou
Virtual Worlds 2024, 3(4), 436-458; https://doi.org/10.3390/virtualworlds3040024 - 16 Oct 2024
Abstract
Current XR applications move beyond audiovisual information, with haptic feedback rapidly gaining ground. However, current haptic devices are still evolving and often struggle to combine key desired features in a balanced way. In this paper, we propose the development of a high-resolution haptic (HRH) system for perception enhancement, a wearable technology designed to augment extended reality (XR) experiences through precise and localized tactile feedback. The HRH system features a modular design with 58 individually addressable actuators, enabling intricate haptic interactions within a compact wearable form. Dual ESP32-S3 microcontrollers and a custom-designed system ensure robust processing and low-latency performance, crucial for real-time applications. Integration with the Unity game engine provides developers with a user-friendly and dynamic environment for accurate, simple control and customization. The modular design, utilizing a flexible PCB, supports a wide range of actuators, enhancing its versatility for various applications. A comparison of our proposed system with existing solutions indicates that the HRH system outperforms other devices by encapsulating several key features, including adjustability, affordability, modularity, and high-resolution feedback. The HRH system not only aims to advance the field of haptic feedback but also introduces an intuitive tool for exploring new methods of human–computer and XR interactions. Future work will focus on refining and exploring the haptic feedback communication methods used to convey information and expand the system’s applications. Full article

20 pages, 4362 KiB  
Article
Mechanisms for Securing Autonomous Shipping Services and Machine Learning Algorithms for Misbehaviour Detection
by Marwan Haruna, Kaleb Gebremichael Gebremeskel, Martina Troscia, Alexandr Tardo and Paolo Pagano
Telecom 2024, 5(4), 1031-1050; https://doi.org/10.3390/telecom5040053 - 15 Oct 2024
Abstract
Technological developments within the maritime sector are resulting in rapid progress that will see the commercial use of autonomous vessels, known as Maritime Autonomous Surface Ships (MASSs). Such ships are equipped with a range of advanced technologies, such as IoT devices, artificial intelligence (AI) systems, machine learning (ML)-based algorithms, and augmented reality (AR) tools. Through such technologies, the autonomous vessels can be remotely controlled from Shore Control Centres (SCCs) by using real-time data to optimise their operations, enhance safety, and reduce the possibility of human error. Apart from the regulatory aspects, which are under definition by the International Maritime Organisation (IMO), cybersecurity vulnerabilities must be considered and properly addressed to prevent such complex systems from being tampered with. This paper proposes an approach that operates on two different levels to address cybersecurity. On one side, our solution is intended to secure communication channels between the SCCs and the vessels using Secure Exchange and COMmunication (SECOM) standard; on the other side, it aims to secure the underlying digital infrastructure in charge of data collection, storage and processing by relying on a set of machine learning (ML) algorithms for anomaly and intrusion detection. The proposed approach is validated against a real implementation of the SCC deployed in the Livorno seaport premises. Finally, the experimental results and the performance evaluation are provided to assess its effectiveness accordingly. Full article
(This article belongs to the Special Issue Digitalization, Information Technology and Social Development)

31 pages, 5975 KiB  
Article
Introducing Digitized Cultural Heritage to Wider Audiences by Employing Virtual and Augmented Reality Experiences: The Case of the v-Corfu Project
by Vasileios Komianos, Athanasios Tsipis and Katerina Kontopanagou
Technologies 2024, 12(10), 196; https://doi.org/10.3390/technologies12100196 - 13 Oct 2024
Abstract
In recent years, cultural projects utilizing digital applications and immersive technologies (VR, AR, MR) have grown significantly, enhancing cultural heritage experiences. Research emphasizes the importance of usability, user experience, and accessibility, yet holistic approaches remain underexplored and many projects fail to reach their audience. This article aims to bridge this gap by presenting a complete workflow including systematic requirements analysis, design guidelines, and development solutions based on knowledge extracted from previous relevant projects. The article focuses on virtual museums covering key challenges including compatibility, accessibility, usability, navigation, interaction, computational performance and graphics quality, and provides a design schema for integrating virtual museums into such projects. Following this approach, a number of applications are presented. Their performance with respect to the aforementioned key challenges is evaluated. Users are invited to assess them, providing positive results. To assess the virtual museum’s ability to attract a broader audience beyond the usual target group, a group of underserved minorities are also invited to use and evaluate it, generating encouraging outcomes. Concluding, results show that the presented workflow succeeds in yielding high-quality applications for cultural heritage communication and attraction of wider audiences, and outlines directions for further improvements in digitized heritage applications. Full article
(This article belongs to the Special Issue Immersive Technologies and Applications on Arts, Culture and Tourism)

21 pages, 1550 KiB  
Article
Using 3D Hand Pose Data in Recognizing Human–Object Interaction and User Identification for Extended Reality Systems
by Danish Hamid, Muhammad Ehatisham Ul Haq, Amanullah Yasin, Fiza Murtaza and Muhammad Awais Azam
Information 2024, 15(10), 629; https://doi.org/10.3390/info15100629 - 12 Oct 2024
Abstract
Object detection and action/gesture recognition have become imperative in security and surveillance fields, finding extensive applications in everyday life. Advancement in such technologies will help in furthering cybersecurity and extended reality systems through the accurate identification of users and their interactions, which plays a pivotal role in the security management of an entity and providing an immersive experience. Essentially, it enables the identification of human–object interaction to track actions and behaviors along with user identification. Yet, it is performed by traditional camera-based methods with high difficulties and challenges since occlusion, different camera viewpoints, and background noise lead to significant appearance variation. Deep learning techniques also demand large and labeled datasets and a large amount of computational power. In this paper, a novel approach to the recognition of human–object interactions and the identification of interacting users is proposed, based on three-dimensional hand pose data from an egocentric camera view. A multistage approach that integrates object detection with interaction recognition and user identification using the data from hand joints and vertices is proposed. Our approach uses a statistical attribute-based model for feature extraction and representation. The proposed technique is tested on the HOI4D dataset using the XGBoost classifier, achieving an average F1-score of 81% for human–object interaction and an average F1-score of 80% for user identification, hence proving to be effective. This technique is mostly targeted for extended reality systems, as proper interaction recognition and users identification are the keys to keeping systems secure and personalized. Its relevance extends into cybersecurity, augmented reality, virtual reality, and human–robot interactions, offering a potent solution for security enhancement along with enhancing interactivity in such systems. Full article
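The F1-scores reported above are averages across interaction and user classes. As a reminder of what a macro-averaged F1 computes, here is a minimal pure-Python implementation; the labels below are hypothetical examples, not data from the HOI4D dataset:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal class weight."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# hypothetical interaction labels for illustration only
y_true = ["grasp", "grasp", "pour", "pour", "open"]
y_pred = ["grasp", "pour", "pour", "pour", "open"]
print(round(macro_f1(y_true, y_pred), 3))
```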
(This article belongs to the Special Issue Extended Reality and Cybersecurity)

21 pages, 3805 KiB  
Article
Hospital Web Quality Multicriteria Analysis Model (HWQ): Development and Application Test in Spanish Hospitals
by Santiago Tejedor and Luis M. Romero-Rodríguez
Big Data Cogn. Comput. 2024, 8(10), 131; https://doi.org/10.3390/bdcc8100131 - 8 Oct 2024
Abstract
The Hospital Web Quality Multicriteria Analysis Model (HWQ) is constructed, designed, and validated in this research. For this purpose, we examined the web quality analysis models specialized in hospitals and health centers through a literature review and the most current taxonomies to analyze digital media. Based on the benchmarking and walkthrough methods, the analysis model was built and validated by a panel of experts (X = 3.54, CVI = 0.88, Score Σ = 45.58). To test its applicability and reliability, the model was pilot-tested on the websites of the ten public and private hospitals with the best reputation in Spain in 2022, according to the Merco Sanitario ranking. The results showed very similar web structures divided by specific proposals or sections of some centers. In this regard, this study identifies a general communication proposal in hospitals that does not adapt to the guidelines of screen-mediated communication, as well as a lack of personalization and disruptive storytelling ideation. In addition, the work concludes that Spanish hospitals, for the moment, have not opted for formats and technological developments derived from the possibilities of gamified content, 360° immersion, Virtual Reality (V.R), or Augmented Reality (A.R). Full article

21 pages, 10416 KiB  
Review
Examining the Role of Augmented Reality and Virtual Reality in Safety Training
by Georgios Lampropoulos, Pablo Fernández-Arias, Álvaro Antón-Sancho and Diego Vergara
Electronics 2024, 13(19), 3952; https://doi.org/10.3390/electronics13193952 - 7 Oct 2024
Abstract
This study aims to provide a review of the existing literature regarding the use of extended reality technologies and the metaverse focusing on virtual reality (VR), augmented reality (AR), and mixed reality (MR) in safety training. Based on the outcomes, VR was predominantly used in the context of safety training with immersive VR yielding the best outcomes. In comparison, only recently has AR been introduced in safety training but with positive outcomes. Both AR and VR can be effectively adopted and integrated in safety training and render the learning experiences and environments more realistic, secure, intense, interactive, and personalized, which are crucial aspects to ensure high-quality safety training. Their ability to provide safe virtual learning environments in which individuals can practice and develop their skills and knowledge in real-life simulated working settings that do not involve any risks emerged as one of the main benefits. Their ability to support social and collaborative learning and offer experiential learning significantly contributed to the learning outcomes. Therefore, it was concluded that VR and AR emerged as effective tools that can support and enrich safety training and, in turn, increase occupational health and safety. Full article

24 pages, 34599 KiB  
Article
Diverse Humanoid Robot Pose Estimation from Images Using Only Sparse Datasets
by Seokhyeon Heo, Youngdae Cho, Jeongwoo Park, Seokhyun Cho, Ziya Tsoy, Hwasup Lim and Youngwoon Cha
Appl. Sci. 2024, 14(19), 9042; https://doi.org/10.3390/app14199042 - 7 Oct 2024
Abstract
We present a novel dataset for humanoid robot pose estimation from images, addressing the critical need for accurate pose estimation to enhance human–robot interaction in extended reality (XR) applications. Despite the importance of this task, large-scale pose datasets for diverse humanoid robots remain scarce. To overcome this limitation, we collected sparse pose datasets for commercially available humanoid robots and augmented them through various synthetic data generation techniques, including AI-assisted image synthesis, foreground removal, and 3D character simulations. Our dataset is the first to provide full-body pose annotations for a wide range of humanoid robots exhibiting diverse motions, including side and back movements, in real-world scenarios. Furthermore, we introduce a new benchmark method for real-time full-body 2D keypoint estimation from a single image. Extensive experiments demonstrate that our extended dataset-based pose estimation approach achieves over 33.9% improvement in accuracy compared to using only sparse datasets. Additionally, our method demonstrates the real-time capability of 42 frames per second (FPS) and maintains full-body pose estimation consistency in side and back motions across 11 differently shaped humanoid robots, utilizing approximately 350 training images per robot. Full article
(This article belongs to the Special Issue Computer Vision, Robotics and Intelligent Systems)

15 pages, 807 KiB  
Article
PointCloud-At: Point Cloud Convolutional Neural Networks with Attention for 3D Data Processing
by Saidu Umar and Aboozar Taherkhani
Sensors 2024, 24(19), 6446; https://doi.org/10.3390/s24196446 - 5 Oct 2024
Abstract
The rapid growth in 3D sensor technologies has made point cloud data increasingly available in applications such as autonomous driving, robotics, and virtual and augmented reality, raising a growing need for deep learning methods to process the data. Point clouds are difficult to use directly as inputs to many deep learning techniques because of their unstructured and unordered nature, so machine learning models built for images or videos cannot be applied to them directly. Although point cloud research has attracted considerable attention and many methods have been developed over the past decade, few works operate directly on point cloud data; most convert the points into 2D images or voxels through pre-processing that causes information loss. Methods that work directly on point clouds are still at an early stage, which limits the performance and accuracy of the models. Advanced techniques from classical convolutional neural networks, such as the attention mechanism, need to be transferred to methods that work directly with point clouds. In this research, an attention mechanism is proposed for deep convolutional neural networks that process point clouds directly. The attention module is based on pooling operations designed to be applied directly to point clouds to extract vital information from them. Segmentation of the ShapeNet dataset was performed to evaluate the method. The mean intersection over union (mIoU) score of the proposed framework increased after applying the attention method, compared to a base state-of-the-art framework without the attention mechanism. Full article
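The abstract describes attention built from pooling operations applied directly to the point set. The paper's exact module is not reproduced here; the NumPy sketch below shows the generic pattern of attention pooling as a permutation-invariant alternative to max pooling, with random arrays standing in for learned parameters and per-point features:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(128, 16))  # per-point features for a 128-point cloud

# Attention pooling: score each point, softmax over the unordered set, then
# take the weighted sum. Like max pooling, the result does not depend on the
# order of the points, which is what makes it usable on raw point clouds.
w = rng.normal(size=(16,))             # stand-in for learned attention weights
scores = features @ w
weights = np.exp(scores - scores.max())
weights /= weights.sum()
attended = weights @ features          # global descriptor, shape (16,)

max_pooled = features.max(axis=0)      # plain max pooling baseline, shape (16,)
print(attended.shape == max_pooled.shape)
```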

13 pages, 485 KiB  
Review
Beyond Presence: Exploring Empathy within the Metaverse
by Anjitha Divakaran, Hyung-Jeong Yang, Seung-won Kim, Ji-eun Shin and Soo-Hyung Kim
Appl. Sci. 2024, 14(19), 8958; https://doi.org/10.3390/app14198958 - 4 Oct 2024
Abstract
As the metaverse evolves, characterized by its immersive and interactive landscapes, it presents novel opportunities for empathy research. This study aims to systematically review how empathy manifests in metaverse environments, focusing on two distinct forms: specific empathy (context-based) and universal empathy (generalized). Our analysis reveals a predominant focus on specific empathy, driven by the immersive nature of virtual settings, such as virtual reality (VR) and augmented reality (AR). However, we argue that such immersive scenarios alone are insufficient for a comprehensive exploration of empathy. To deepen empathetic engagement, we propose the integration of advanced sensory feedback mechanisms, such as haptic feedback and biometric sensing. This paper examines the current state of empathy in virtual environments, contrasts it with the potential for enriched empathetic connections through technological enhancements, and proposes future research directions. By fostering both specific and universal empathy, we envision a metaverse that not only bridges gaps but also cultivates meaningful, empathetic connections across its diverse user base. Full article

34 pages, 39851 KiB  
Article
Supporting Human–Robot Interaction in Manufacturing with Augmented Reality and Effective Human–Computer Interaction: A Review and Framework
by Karthik Subramanian, Liya Thomas, Melis Sahin and Ferat Sahin
Machines 2024, 12(10), 706; https://doi.org/10.3390/machines12100706 - 4 Oct 2024
Abstract
The integration of Augmented Reality (AR) into Human–Robot Interaction (HRI) represents a significant advancement in collaborative technologies. This paper provides a comprehensive review of AR applications within HRI with a focus on manufacturing, emphasizing their role in enhancing collaboration, trust, and safety. By aggregating findings from numerous studies, this research highlights key challenges, including the need for improved Situational Awareness, enhanced safety, and more effective communication between humans and robots. A framework developed from the literature is presented, detailing the critical elements of AR necessary for advancing HRI. The framework outlines effective methods for continuously evaluating AR systems for HRI. The framework is supported with the help of two case studies and another ongoing research endeavor presented in this paper. This structured approach focuses on enhancing collaboration and safety, with a strong emphasis on integrating best practices from Human–Computer Interaction (HCI) centered around user experience and design. Full article
(This article belongs to the Special Issue Recent Developments in Machine Design, Automation and Robotics)

25 pages, 1188 KiB  
Article
Adoption and Continuance in the Metaverse
by Donghyuk Shin and Hyeon Jo
Electronics 2024, 13(19), 3917; https://doi.org/10.3390/electronics13193917 - 3 Oct 2024
Abstract
The burgeoning metaverse market, encompassing virtual and augmented reality, gaming, and manufacturing processes, presents a unique domain for studying user behavior. This study delineates a research framework to investigate the antecedents of behavioral intention, bifurcating users into inexperienced and experienced cohorts. Utilizing a cross-sectional survey, empirical data were amassed and analyzed using structural equation modeling, encompassing 372 responses from 131 inexperienced and 241 experienced users. For inexperienced users, the analysis underscored the significant impact of perceived usefulness on both satisfaction and adoption intention, while perceived enjoyment was found to bolster only satisfaction. Innovativeness and satisfaction do not drive adoption intention. Conversely, for experienced users, satisfaction was significantly influenced by perceived ease of use, perceived usefulness, and perceived enjoyment. Continuance intention was positively affected by perceived usefulness, perceived enjoyment, trust, innovativeness, and satisfaction. This research extends valuable insights for both theoretical advancements and practical implementations in the burgeoning metaverse landscape. Full article
(This article belongs to the Special Issue Metaverse and Digital Twins, 2nd Edition)