
Evaluating Grasping Visualizations and Control Modes in a VR Game

Published: 28 October 2021

Abstract

A primary goal of the Virtual Reality (VR) community is to build fully immersive and presence-inducing environments with seamless and natural interactions. To reach this goal, researchers are investigating how to best directly use our hands to interact with a virtual environment using hand tracking. Most studies in this field require participants to perform repetitive tasks. In this article, we investigate if results of such studies translate into a real application and game-like experience. We designed a virtual escape room in which participants interact with various objects to gather clues and complete puzzles. In a between-subjects study, we examine the effects of two input modalities (controllers vs. hand tracking) and two grasping visualizations (continuously tracked hands vs. virtual hands that disappear when grasping) on ownership, realism, efficiency, enjoyment, and presence.
Our results show that ownership, realism, enjoyment, and presence increased when using hand tracking compared to controllers. Visualizing the tracked hands during grasps leads to higher ratings in one of our ownership questions and one of our enjoyment questions compared to having the virtual hands disappear during grasps, as is common in many applications. We also confirm, in a more realistic gaming scenario that might be closer to a typical user experience, some of the main results of two studies that have a repetitive design.

1 Introduction

With the growing prevalence of Virtual Reality (VR) technology and applications, developers and researchers must understand how people interact with VR environments. The most common control method for interactions in VR has been tracked controllers such as the Oculus Touch and HTC Vive controllers, but advancing technology is enabling real-time hand tracking in virtual environments [Han et al. 2018], which is now available in commercial VR headsets such as the Valve Index and Oculus Quest. As the use of hand tracking in VR becomes more widespread, research into how it affects VR experiences has grown [Dewez et al. 2021; Seinfeld et al. 2020].
Although using hand tracking in VR can feel more natural than controllers, it typically lacks haptic feedback and the tracked fingers can intersect with the geometry of the virtual objects. The lack of haptic feedback makes it difficult to know if an object has been grasped, and the intersections with virtual objects look unrealistic and might reduce immersion. Visual feedback when grasping can help mitigate these issues. An example visualization used in VR applications is to hide the virtual hand as long as an object is grasped and to constrain the object to the position of the hand during that time.
Based on these observations, we study the effects of two control modes (controllers vs. hand tracking) and two grasping visualizations (continuously tracked hands vs. virtual hands that disappear when grasping) on ownership, realism, efficiency, enjoyment, and presence. Previous research has studied these or very similar effects [Lin et al. 2019; Canales et al. 2019; Prachyabrued and Borst 2012] using more typical experimental designs with short tasks that are repeated in several conditions. These tasks do not reflect our experience in VR applications or games, and participants are often aware of the concepts studied and experience all conditions. In this article, our goal is to investigate these effects during an experience that might be closer to a typical VR experience, where the user’s attention is focused not on the interaction conditions but on gameplay. To this end, we designed a VR Escape Room game (see Figure 1). Can we still observe similar effects when the participants are not aware of the purpose of the experiment, when they are not able to compare different conditions, and when they might be distracted and not even pay attention to the interaction being used? Our design furthermore allows us to study effects that would be difficult to examine in a repetitive task, such as the influence of control modes and grasping visualizations on enjoyment.
Fig. 1. Left: Conceptual drawing of Escape Room game; the player avatar sits in the chair. Right: Panorama third-person view of the final game from behind the chair; the avatar is hidden in this image so that the environment can be seen.

2 Related Work

In this section, we describe research related to investigating control modes and grasping visualizations for VR applications as well as research using games to explore similar concepts.

2.1 VR Interaction Modes and Visual Feedback

While controllers are the current consumer industry standard in VR, other modes of interaction are constantly emerging and being explored with the goal of improving presence and ownership. Presence is the “sense of being” in a virtual space [Sheridan 1992] and ownership is the illusion of owning a virtual limb or virtual body [Slater et al. 2008]. Many researchers suggest that more natural interaction techniques increase enjoyment, presence, and ownership even if performance might be reduced [Lougiakis et al. 2020; McMahan et al. 2010; Moehring and Froehlich 2011; Skalski et al. 2011], which could lead to the conclusion that using accurately tracked hand motions would be preferred to controllers. The two control modes (controllers vs. accurate hand tracking) that we are testing have been previously examined in a study by Lin et al. [2019], where participants were asked to build towers out of blocks. They measured the effect of using controllers or tracked hands and of having different hand sizes on ownership and the virtual hand illusion, which is the illusion that a virtual hand is part of a user’s body [Slater et al. 2008]. Among other results, they found that tracked hands lead to higher levels of ownership and perceived realism but to poorer perceived efficiency and longer task times.
Since the haptic feeling of grasping a real object is missing in VR, several ways of giving users feedback when grasping have been suggested. Multiple studies have demonstrated that visual feedback has several advantages over no feedback [Prachyabrued and Borst 2014; Vosinakis and Koutsabasis 2018; Lam et al. 2018; Canales et al. 2019]. For example, users tend to prefer visual feedback over no feedback while grasping a virtual object [Prachyabrued and Borst 2014; Vosinakis and Koutsabasis 2018], and visual grasping feedback can improve efficiency [Lam et al. 2018]. Prachyabrued and Borst [2014] investigated several visual feedback techniques for virtual grasping with a repetitive grasp and release task and found that preventing the virtual hand from entering the virtual object while grasping was preferred. A subsequent study found that having a tracked hand with hand-object interpenetrations improved efficiency [Prachyabrued and Borst 2016]. Canales et al. [2019] confirmed those findings with a similar repetitive procedure: displaying the virtual hands when grasping led to higher performance and perceived hand ownership than hiding the hands during a grasp. We compare our results for the Tracked Hand and Disappearing Hand conditions to theirs.
Control modes and grasping visualizations might influence each other, leading to interaction effects; for example, a grasping visualization might not be needed if the user can accurately trigger a grasp through the buttons of a controller. In contrast to previous work, we investigate if control modes and grasping visualizations influence the participants’ experience in a game setting.

2.2 Experiments with Game-Like Experiences

Numerous VR studies include repetitive tasks. Of course, such studies are a fundamental part of the way we gain knowledge through controlled methods. They attempt to limit confounding factors, avoid distractions, and focus on the specific concept(s) being scrutinized. However, it is not always clear if results acquired through such procedures would still be valid in real applications or game-like experiences.
Experiments designed as non-repetitive and often entertaining experiences are not a new concept and have been conducted to study many factors. Lugrin et al. [2018] designed a game that mixes elements of a fast-paced first-person shooter and strategy battle game play. Their hypotheses about a virtual body’s influence on ownership or performance were not confirmed, which they attribute to participants focusing on game completion. Normoyle et al. [2014] investigated the effect of delays in character control on player enjoyment, frustration, performance, and experience and found a negative effect when jitter was added to a large delay. Lin et al. [2017] gained new insights when evaluating the effect of avatar customization on learning outcomes in a 7-hour-long curriculum using a virtual environment over a 2-week period. Finally, Ali and Cardona-Rivera [2020] developed a VR first-person shooter to compare presence, engagement, and performance when playing with an HTC VIVE wand or a gamepad.
A similar, but slightly different, idea is to run experiments “in the wild,” where users download or use an app or device outside of the lab environment. Experiments in the wild are more common in areas of human-computer interaction when VR is not involved [Henze et al. 2011; Brown et al. 2011] as they can give access to a larger group of participants and lead to results that relatively short experiments in a lab environment cannot give. An example of a VR experiment in the wild is Steed et al.’s study [2016] on how presence and ownership are influenced by a self-avatar, an induction phase, and the attention of another avatar. A public VR application was created for users to download and experience on their own VR devices. While our study uses a game at its core, we do not run it in the wild, but run it in a more controlled lab environment where participants have access to accurate real-time hand tracking technology that has not yet reached the consumer market.

3 Experimental Design

3.1 Conditions

Our study uses a between-subjects experimental design comparing the independent variables of Control Modes (conditions: Controllers vs. tracked Gloves) and Grasping Visualizations (conditions: Tracked Hand vs. Disappearing Hand); see Table 1.
Table 1. The Four Conditions, with the Control Modes Represented at the Top and the Grasping Visualizations on the Left

                                          Control Modes
                                          Controllers      Gloves
Grasping         Tracked Hand             ControllersTH    GlovesTH
Visualizations   Disappearing Hand        ControllersDH    GlovesDH
In the Control Modes conditions, participants either use Oculus Touch Controllers to interact with the scene or have their hands tracked by wearing Gloves with 19 motion capture markers attached to the finger joints and the back of each hand (Figure 2). The markers are tracked at 120 fps using 15 OptiTrack Prime 17W cameras and labeled in real time using Han et al.’s [2018] optical marker-based hand tracking algorithm. Participants can freely move their hands, and the movements of the avatar’s hands mimic their own. In the Controllers condition, the fingers are driven by a thumb button, an index finger trigger, and a hand trigger (typically activated with the middle finger); fingers are extended if the buttons are untouched, partially extended if they are being touched, and pinched if the buttons are pressed. The avatar’s arms are animated using inverse kinematics based on the position of the hands. Prior to entering the virtual environment, participants choose the tightest-fitting gloves out of six different sizes; the hand size of the avatar is then adjusted accordingly.
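To make the controller-to-hand-pose mapping above concrete, here is a minimal illustrative sketch in Python; the game itself was implemented in Unity, and the function and parameter names below are our own assumptions rather than the authors’ code.

```python
# Illustrative sketch, not the authors' implementation: map the state of the
# thumb button, index trigger, and hand (grip) trigger to finger extension
# values as described above. Names and the 0-1 scale are assumptions.

def finger_extension(touched: bool, pressed: bool) -> float:
    """1.0 = fully extended (untouched), 0.5 = partially extended (touched),
    0.0 = pinched/curled (pressed)."""
    if pressed:
        return 0.0
    if touched:
        return 0.5
    return 1.0

def controller_hand_pose(thumb_touched, thumb_pressed,
                         index_touched, index_pressed,
                         grip_touched, grip_pressed):
    """Return extension values for the thumb, the index finger, and the
    remaining fingers driven by the hand trigger."""
    return {
        "thumb": finger_extension(thumb_touched, thumb_pressed),
        "index": finger_extension(index_touched, index_pressed),
        "middle_ring_pinky": finger_extension(grip_touched, grip_pressed),
    }
```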
Fig. 2. In the Gloves condition (A, B), the virtual hand follows the participants’ hand motions. In the Controllers condition (C, D), a set of default poses are used.
We create two types of Grasping Visualizations: Tracked Hand and Disappearing Hand (Figure 3). In the Tracked Hand condition the virtual hands are always visible and follow the players’ hands or the controller motions.
Fig. 3. (A) The Disappearing Hand visualization during a grasp in both control modes. (B) The Tracked Hand condition when using Gloves. (C) The Tracked Hand condition with Controllers.
In the Disappearing Hand condition, the virtual hands disappear once a participant grabs an item and reappear upon release. We chose the Disappearing Hand condition as it imitates the grasping visualization used in VR games such as Job Simulator or I Expect You to Die. It is simple to implement, as the hand pose does not need to be adjusted based on the object, which might be why it is a popular approach. The Disappearing Hand is furthermore investigated in Canales et al.’s work [2019], where it was rated significantly lower in questions related to ownership than some of the other tested conditions and was preferred least on average out of all tested conditions.
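The paper does not describe its implementation at this level, but the two Grasping Visualizations can be illustrated with a short sketch: in both, the grabbed object is constrained to the hand, and in the Disappearing Hand condition the virtual hand is additionally hidden while the object is held. The hand and object interfaces below are hypothetical.

```python
# Illustrative sketch with hypothetical hand/object interfaces; not the
# authors' Unity code.

class GraspVisualization:
    def __init__(self, mode: str):
        assert mode in ("tracked_hand", "disappearing_hand")
        self.mode = mode

    def on_grab(self, hand, obj):
        obj.attach_to(hand)               # constrain the object to the hand
        if self.mode == "disappearing_hand":
            hand.set_visible(False)       # hide the virtual hand while grasping

    def on_release(self, hand, obj):
        obj.detach()                      # the object stays where it is released
        if self.mode == "disappearing_hand":
            hand.set_visible(True)        # show the virtual hand again
```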
Whether an item is grasped or not in the Gloves condition depends on the positions and velocities of the thumb and index fingers in relation to each other and on the number of contacts between a hand and an object. An item is detected as “grabbed” if the distance between the index finger and thumb is below 5 mm or if the velocity between those two digits is greater than 15 cm/s; additionally, the nearby item needs at least two points of contact with the hand. An item is released when the thumb and the index finger move apart at a velocity above a threshold of 30 cm/s. The thresholds were adjusted through tests with multiple pilot participants.
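The thresholds above can be summarized in a short sketch. This is our own rendering of the stated heuristic, not the authors’ code; in particular, treating the 15 cm/s criterion as a closing speed between thumb and index finger is our interpretation.

```python
# Illustrative sketch of the grasp/release heuristic described above.
# Units: metres and metres per second.
import numpy as np

GRAB_DISTANCE = 0.005       # 5 mm thumb-index pinch distance
GRAB_CLOSING_SPEED = 0.15   # 15 cm/s thumb-index closing speed (our interpretation)
RELEASE_SPEED = 0.30        # 30 cm/s thumb-index separation speed
MIN_CONTACTS = 2            # minimum hand-object contact points

def pinch_state(thumb_pos, index_pos, thumb_vel, index_vel):
    """Return the thumb-index distance and its rate of change
    (negative = fingers closing, positive = fingers opening)."""
    offset = np.asarray(index_pos, dtype=float) - np.asarray(thumb_pos, dtype=float)
    distance = float(np.linalg.norm(offset))
    rel_vel = np.asarray(index_vel, dtype=float) - np.asarray(thumb_vel, dtype=float)
    d_distance = float(np.dot(rel_vel, offset) / distance) if distance > 1e-6 else 0.0
    return distance, d_distance

def should_grab(distance, d_distance, contact_count):
    """Grab if the fingers are pinched or closing fast enough and the nearby
    object has at least two contact points with the hand."""
    return (distance < GRAB_DISTANCE or -d_distance > GRAB_CLOSING_SPEED) \
        and contact_count >= MIN_CONTACTS

def should_release(d_distance):
    """Release when thumb and index finger move apart fast enough."""
    return d_distance > RELEASE_SPEED
```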

3.2 Hypotheses

Our hypotheses on ownership, realism, and efficiency are based on Lin et al.’s [2019] and Canales et al.’s [2019] work. We anticipate higher presence and thus higher game enjoyment [Tamborini and Skalski 2006] with the Tracked Hand and the Gloves conditions. We therefore formulate our hypotheses as follows:
H1. Ownership:
  (a) Greater ownership in the Gloves condition than in the Controllers condition [Lin et al. 2019].
  (b) Greater ownership in the Tracked Hand condition than in the Disappearing Hand condition [Canales et al. 2019].
H2. Realism: Greater realism in the Gloves condition than in the Controllers condition [Lin et al. 2019].
H3. Efficiency: Greater efficiency in the Controllers condition than in the Gloves condition [Lin et al. 2019].
H4. Enjoyment:
  (a) Greater enjoyment for the Tracked Hand than for the Disappearing Hand, as we assume presence increases for the Tracked Hand and increased presence leads to increased enjoyment.
  (b) Greater enjoyment in the Gloves condition compared to the Controllers condition, as the Gloves are the more “natural” control mode.
H5. Presence:
  (a) Greater presence for the Tracked Hand than for the Disappearing Hand, as the Disappearing Hand may be slightly jarring and thus reduce presence.
  (b) Greater presence in the Gloves condition compared to the Controllers condition, due to the gloves’ more accurate tracking of the hand motions.

3.3 Participants

A total of 72 participants were recruited for this IRB-approved study, 17 for each of the Controllers conditions and 19 for each of the Gloves conditions, as we expected technical issues; 62.5% identified as male, 36.1% as female, and 1.4% as other. Ages ranged from 19 to 69, with a mean of 26. All participants were locally recruited through email, reddit, and word of mouth, with a majority being university students. Participants were assigned to conditions in sequential order, round-robin style, with the two extra participants assigned at the end. A total of eight participants were eliminated from the analysis: three because the motion capture system was not well calibrated, two for different unique technical errors, and three because they had difficulties understanding how to play the game in general. This left 64 participants for analysis; demographics are detailed in Table 2.
Table 2. Distribution of Participants’ Gender and Age Throughout Conditions

Condition        Total   Gender (F / M / O)   Age (Mean / Min / Max)
ControllersTH    15      7 / 7 / 1            24.67 / 20 / 35
ControllersDH    15      2 / 13 / 0           27.67 / 19 / 69
GlovesTH         15      5 / 10 / 0           24 / 19 / 29
GlovesDH         19      9 / 10 / 0           28.1 / 20 / 62

3.4 The Game

For this experiment, we designed an Escape Room type video game, modeled after the popular live-action activity, where a person or a group of people is locked in a room and has to get out by finding clues and solving puzzles. In our case, the player was locked to a chair in an escape pod in space and had to solve puzzles to find the key to the lock.
Advantages of this specific genre are that it uses a first-person player perspective and that the player does not need to walk or run around. Participants stayed seated for the duration of the game, and all necessary puzzle-solving objects were provided within our tracking space. The puzzles allowed us to create a variety of interactions and to design a fun experience in which players would use their virtual hands. Early pilots showed that placing all clues in front of the players at the same time was confusing. Therefore, only the objects related to the current puzzle were placed on a table in front of the participant. When a puzzle was solved, the table surface was lowered, then rose again with the next puzzle’s objects in place. A total of seven puzzles were implemented, four of which were primary, more complex puzzles. A quick playthrough of the game can be seen in the accompanying video; impressions of the game are shown in Figures 1, 4, and 5.
Fig. 4. A (near) first-person view of a participant in the Controllers condition during the experiment.
Fig. 5. Examples of puzzles. Left: Participants must match the cage combination lock to the cards. Right: A mimic that “bites” any hand that tries to retrieve the statue.
For one of the puzzles, we implemented a mimic (a box with teeth that suddenly closes when one tries to retrieve the object inside) as a threat condition. The mimic was used as an indication of the strength of the virtual hand illusion, in a similar way as in other studies [Lin and Jörg 2016; Lin et al. 2019; Canales et al. 2019; Argelaguet et al. 2016; Ma and Hommel 2013; Yuan and Steed 2010].
Objects became highlighted when picked up or when the players’ hands were within touching distance. In addition, objects that could interact with a held object also became highlighted when the two objects touched. There was no gravity: if an object was released in mid-air, it stayed there until it was picked up again.
As a neutral actor, we used a robot from Unity’s 4.0 Mecanim Animation Tutorial [Unity 2012], which we modified in Maya 2017 and Unity 5.6.1 to allow resizable hands. The participant could look down and see their virtual body. The avatar hand provided all degrees of freedom for movement of the 20 finger joints, but did not perform subtler movements such as skin stretching and palm flexing. The game models were created in Maya 2017, textures were designed in Adobe Photoshop CC 2017 and PaintTool SAI, and game functionality was implemented in Unity 5.6.1.

3.5 Procedure

At the start of the experiment, participants are asked to sign a consent form and answer a pre-experiment cybersickness questionnaire. Participants who answer “yes” to more than two of the four cybersickness questions would be eliminated from the study; none were.
Before putting on the Oculus Rift headset, participants are guided on how to adjust the spacing between the lenses to match their interocular distance and, if necessary, how to put on the headset with glasses. Participants are assisted with tightening and adjusting the headset for a satisfactory fit.
Prior to entering the VR environment, participants in the Gloves condition are instructed to pick up items by pinching with their thumb, index, and middle fingers. Participants in the Controllers condition are instructed to pick up items by grabbing with the primary thumb button and index finger trigger, resulting in a motion similar to the pinching action of the Gloves condition. Participants in all conditions determine their hand size by trying on the tracking gloves; the size of their virtual hands in the game environment is then adjusted to match their real-world hand size for increased presence. The avatar height and arm length are also adjusted to match those of each participant.
Participants are introduced to the concept of the experiment, an Escape Room video game in VR, described in Section 3.4, at the start of the study. During the course of the game, participants who take more than a set amount of time to solve a puzzle (dependent on the puzzle and determined in pilot tests; max: 255 s, min: 27 s, mean: 137 s) are prompted with situational clues such as “There is something below you that can be interacted with” or “That stove could use some fuel” and the key puzzle items also flash briefly.
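A minimal sketch of the hint logic described above is shown below; the names are hypothetical, and the per-puzzle timeouts in the study were tuned in pilot tests.

```python
# Illustrative sketch of per-puzzle hint prompting; not the authors' code.
import time

class HintTimer:
    def __init__(self, timeout_s: float, hint_text: str):
        self.timeout_s = timeout_s    # puzzle-specific threshold (27-255 s in the study)
        self.hint_text = hint_text    # e.g., "That stove could use some fuel"
        self.start = time.monotonic()
        self.shown = False

    def update(self, show_hint, flash_key_items):
        """Call once per frame; shows the hint once the timeout has elapsed."""
        if not self.shown and time.monotonic() - self.start > self.timeout_s:
            show_hint(self.hint_text)     # display the situational clue
            flash_key_items()             # briefly flash the key puzzle items
            self.shown = True
```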
Finally, participants are given time to explore and practice grasping, moving, and placing items in a training phase. Participants can color-match simple shapes and blocks to grow comfortable with the interaction methods and the virtual environment.
Game completion takes 7 minutes and 33 seconds on average, not including the average 92 seconds needed for participant calibration and training. Once participants finish the game, they are congratulated and directed to complete a post-experiment questionnaire (Table 3) on a nearby desktop computer. When the questionnaire is completed, participants are asked whether they noticed the threat condition (the toothy mimic that tries to bite their hand), what they thought of it, and what they thought of the game. After these questions, participants choose whether to sign a release of information form for the data gathered during the experiment (all participants signed) and then receive their incentive card.
Table 3. Our Questions and Main Results

Ownership [Lin et al. 2019; Canales et al. 2019]
  O1. I felt as if the virtual hands were part of my body. Overall 5.19 (1.08).
  O2. It sometimes seemed my own hands were located on the screen. TH > DH: TH 5.47 (1.25), DH 4.53 (1.44).
  O3. It sometimes seemed like my own hands came into contact with the virtual object. Glv > Ctrl: Glv 4.91 (1.64), Ctrl 3.7 (1.8).
  O4. I thought that the virtual hands could be harmed by the virtual danger. Glv > Ctrl: Glv 4.44 (2.16), Ctrl 3.07 (2).
Realism [Lin et al. 2019]
  R1. I thought the virtual hands looked realistic. Overall 4.31 (1.73).
  R2. I thought the movement of the virtual hands looked realistic. Glv > Ctrl: Glv 5.74 (1.05), Ctrl 4.8 (1.67).
Efficiency [Lin et al. 2019]
  F1. I felt like I could very efficiently use my virtual hands to interact with the environment. Overall 5.53 (1.3).
Enjoyment (Not true at all - Very true) [IMI Enjoyment 2000]
  E1. While I was playing this game, I was thinking about how much I enjoyed it. Overall 5.92 (1.15).
  E2. This game did not hold my attention at all. (R) Glv > Ctrl and TH > DH: TH 6.83 (0.38), DH 6.53 (0.83), Glv 6.85 (0.36), Ctrl 6.47 (0.86).
  E3. I would describe this game as very interesting. Glv > Ctrl: Glv 6.21 (0.81), Ctrl 5.6 (1.19).
  E4. I enjoyed playing this game very much. Glv > Ctrl: Glv 6.47 (0.66), Ctrl 6.03 (1).
  E5. This game was fun to play. Overall 6.28 (0.93).
Presence (Do Not Agree - Strongly Agree) [PENS Presence; Rigby and Ryan 2007]
  P1. When playing the game, I feel transported to another time and place. Overall 5.56 (1.1).
  P2. Exploring the game world feels like taking an actual trip to a new place. Overall 5.25 (1.33).
  P3. When moving through the game world I feel as if I am actually there. Overall 5.31 (1.42).
  P4. I am not impacted emotionally by events in the game. (R) Overall 3.61 (1.84).
  P5. The game was emotionally engaging. Glv > Ctrl: Glv 4.65 (1.69), Ctrl 3.57 (1.65).
  P6. I experience feelings as deeply in the game as I have in real life. Overall 3.27 (1.97).
  P7. When playing the game I feel as if I was part of the story. Glv > Ctrl: Glv 5.47 (1.31), Ctrl 4.6 (1.63).
  P8. When I accomplished something in the game I experienced genuine pride. Glv > Ctrl: Glv 6.12 (0.95), Ctrl 5.57 (1.19).
  P9. I had reactions to events and characters in the game as if they were real. Glv > Ctrl: Glv 5.18 (1.7), Ctrl 4.2 (1.73).

Each entry lists the question, the significant effects (Glv = Gloves, Ctrl = Controllers, TH = Tracked Hand, DH = Disappearing Hand), and the mean (standard deviation) of the ratings; the source of each set of questions is given in brackets after the measure. All measures were on a 7-pt. Likert scale with values from “Strongly agree” to “Strongly disagree,” with the exception of the Enjoyment and Presence questions, whose anchors are given above. Values from questions marked with (R) were reversed before analysis.

3.6 Measures

We investigate the influence of our four interaction conditions on the players’ feeling of ownership of the virtual hands, the perceived realism, the perceived efficiency of the interactions, the players’ enjoyment, and the players’ feeling of presence. The effect of different interaction types on ownership, realism, and efficiency has been investigated in two recent studies [Lin et al. 2019; Canales et al. 2019], and we compare our results to theirs. We furthermore examine the effect of our interaction conditions on presence and enjoyment, which are typical measurements for game experiences. Our questions are listed in Table 3. The questions on ownership, realism, and efficiency were adapted from previous studies. We use the PENS Presence questionnaire [Rigby and Ryan 2007] as a measure of game presence, as PENS is statistically validated and generally comparable to other popular questionnaires such as the IEQ and EEngQ [Denisova et al. 2016]. Seven items slightly altered from the Intrinsic Motivation Inventory [Center for Self-Determination Theory 2000] are used to measure game enjoyment.
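As a small illustration of how the questionnaire data can be prepared, the sketch below reverse-scores the items marked (R) in Table 3 on the 7-point scale and averages a subscale. The paper analyzes most questions individually and only averages the ownership items when comparing to Lin et al. [2019]; the helper names here are our own.

```python
# Illustrative sketch: reverse-scoring 7-point Likert items marked (R) and
# averaging a subscale. Not the authors' analysis scripts.

def reverse_7pt(score: int) -> int:
    """Reverse-code a 7-point response: 1 <-> 7, 2 <-> 6, ..."""
    return 8 - score

def subscale_mean(responses: dict, items: list, reversed_items: set) -> float:
    """Average one participant's responses over the items of a measure."""
    values = [reverse_7pt(responses[q]) if q in reversed_items else responses[q]
              for q in items]
    return sum(values) / len(values)

# Example with the enjoyment items, where E2 is reverse-scored.
answers = {"E1": 6, "E2": 1, "E3": 7, "E4": 6, "E5": 7}
print(subscale_mean(answers, ["E1", "E2", "E3", "E4", "E5"], {"E2"}))  # 6.6
```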

4 Results

Results of our experiment were analyzed with a two-way independent ANOVA. Levene’s test was used to assess the homogeneity of variance across groups. A significant difference in variance was found for one measure, E2 of the IMI Enjoyment questionnaire (Table 3). All measures were significantly non-normal. We therefore also ran a robust ANOVA with trimmed means, but it did not yield any differences in significant results compared to the two-way ANOVA and is thus not reported separately. All questionnaire results are summarized in Table 3.
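The article does not name the statistics software used; as an illustration, the same analysis (Levene’s test for homogeneity of variance plus a two-way independent ANOVA with an interaction term) could be run in Python with scipy and statsmodels, assuming a long-format table with one row per participant and hypothetical column names.

```python
# Illustrative analysis sketch; column names and the input file are assumptions.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("questionnaire_scores.csv")   # one row per participant

# Levene's test: homogeneity of variance across the four condition groups.
groups = [g["score"].values for _, g in df.groupby(["control_mode", "grasp_vis"])]
print(stats.levene(*groups))

# Two-way independent ANOVA with interaction (Type II sums of squares).
model = smf.ols("score ~ C(control_mode) * C(grasp_vis)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```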
Ownership. We found a significant main effect of Control Mode for questions O3 and O4. As expected, participants reported higher levels of ownership when using gloves compared to using controllers. A significant main effect of Grasping Visualization was found for question O2: ownership was perceived to be greater when participants used the Tracked Hand visualization for grasping (see Figure 6).
Fig. 6. Left: Realism and Ownership were rated higher in the Gloves conditions than in the Controllers conditions. Right: Ownership was rated as greater in the Tracked Hand conditions than in the Disappearing Hand conditions.
Of the 52 (out of 64) participants who responded when asked about the threatening mimic, 27 (51%) reported that it was frightening in some way. Of those 27, 7 (26%) used controllers and 20 (74%) used gloves. Of the 25 (49%) who reported it as non-frightening or did not notice it, 14 (56%) used controllers and 11 (44%) used gloves. Pearson’s chi-squared test showed a significant association between Control Mode and whether participants reported the mimic as scary. The odds of reporting the mimic as frightening were 3.5 (CI: 0.99, 13.9) times higher in the Gloves condition than in the Controllers condition.
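For the mimic reactions, the chi-squared test and a sample odds ratio can be computed directly from the counts above. This is an illustrative sketch rather than the authors’ analysis; the exact test variant and confidence-interval method are not stated in the text, so small numerical differences from the reported odds ratio of 3.5 are possible.

```python
# Illustrative sketch using the counts reported above; not the authors' analysis.
import numpy as np
from scipy.stats import chi2_contingency

#                  frightened   not frightened / unnoticed
table = np.array([[20,          11],    # Gloves
                  [ 7,          14]])   # Controllers

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Sample odds ratio: odds of reporting the mimic as frightening, Gloves vs. Controllers.
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"odds ratio = {odds_ratio:.2f}")
```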
Participants in both conditions made comments such as “I didn’t want to lose my hand” or “I hesitated until I remembered it was VR.” Twenty-two participants visibly jumped or exclaimed when the mimic chomped.
Realism. A significant main effect of Control Mode was present for question R2; see Figure 6. The motion of the hands was perceived to be more realistic in the Gloves condition than in the Controllers condition.
Efficiency. We found no significant effects on perceived efficiency. However, analysis of game completion time showed a significant main effect of Grasping Visualization, with participants in the Disappearing Hand condition taking longer to complete the game than those in the Tracked Hand condition.
Enjoyment. Significant main effects of Control Mode were present for three of the five game enjoyment questions, E2, E3, and E4, with enjoyment being rated higher by participants who used the gloves. Additionally, an effect of Grasping Visualization was found for E2, with participants reporting enjoying the Tracked Hand visualization more. A significant interaction effect was found for question E1, but a Tukey HSD post-hoc test did not show any significant results. Main effects on enjoyment can be seen in Figure 7.
Fig. 7. Left: Enjoyment was rated greater in the Gloves conditions than when using controllers. Right: Enjoyment was rated higher in the Tracked Hand conditions than in the Disappearing Hand conditions.
Presence. Measuring presence yielded significant main effects of Control Mode for four of the nine presence questions: P5, P7, P8, and P9. For all of these effects, the Gloves condition induced higher perceived presence than the Controllers condition (see Figure 8). An additional interaction effect was found for question P3; however, a Tukey HSD post-hoc test did not show any significant differences between conditions.
Fig. 8. For all significant effects of Control Mode on presence, the Gloves condition resulted in higher ratings than the Controllers condition.

5 Discussion

In this section, we discuss if our hypotheses are supported, compare our results to previous studies, discuss the advantages and disadvantages of using a game-like experience to study interactions in VR, and give tips for preparing such studies.
Ownership and Realism. Our results confirm hypothesis H1 (a) based on the significant differences for questions O3 and O4 as well as the reactions to the threat. In all cases, ownership was perceived to be higher in the Gloves condition, where the motions of the virtual fingers corresponded to the players’ motions, than in the Controllers condition, which only displayed base poses. H1 (b) is supported by only one question (O2), so our evidence is weak in this case: participants in the Tracked Hand condition experienced higher ownership than those in the Disappearing Hand condition.
Realism metric R2 showed that participants perceived the movement of the virtual hands to be more realistic in the Gloves condition than in the Controllers condition, confirming hypothesis H2.
These overall results correspond to the findings from Lin et al. [2019] and Canales et al. [2019]. However, the results for each individual question are not always the same. Lin et al. averaged the answers to their ownership questions in their analysis and found a significant effect. We ran that analysis and also find a significant effect in that case. However, they did not find a significant effect for O3 when considered individually, which we do. Lin et al. also find a significant effect for question R1 (Gloves rated as more realistic than Controllers), which we do not (they did not ask R2). Canales et al. find a significant difference for O1 and two ownership questions that we did not ask, but not for O3 or O4. Findings from the different studies are shown in direct comparison in Figure 9.
Fig. 9. Left: Both our work and Canales et al. [2019] had significant main effects of Grasping Visualization on ownership, where the Tracked Hand condition (called Inner Hand in Canales et al.’s work) resulted in higher perceived ownership. Center: Taking the mean of all ownership averages shows a significant main effect of Control Mode in both Lin et al. [2019] and our work. Using the Gloves caused higher perceived ownership. Right: Realism was perceived as significantly higher when using the Gloves as opposed to the Controllers in both Lin et al. [2019] and our work. These graphs show that similar results were found in these experiments with different experimental setups and procedures.
Efficiency. We did not find significant differences between the Controllers condition and the Gloves condition in either perceived efficiency or actual game completion time. Thus, we cannot confirm our hypothesis H3. Lin et al. found that the controllers were perceived to be more efficient than the gloves. In a simpler grasping task and in direct comparison, differences in efficiency might be more noticeable than in a relatively slow-paced game such as ours that is focused on solving puzzles.
Enjoyment and Presence. We can confirm hypothesis H4 (b), that enjoyment was higher in the Gloves condition than in the Controllers condition, based on the significant differences in the answers to E2, E3, and E4. Enjoyment was rated very high in general, which shows that we successfully created an enjoyable game experience. Hypothesis H4 (a), that enjoyment would be greater for the Tracked Hand than for the Disappearing Hand, was supported only by a significant effect for E2, so the evidence in this case is too weak to draw confident conclusions.
We find evidence to support Hypothesis H5 (b) but not H5 (a). Presence questions P5, P7, P8, and P9 all showed that using gloves to interact in VR leads to a greater feeling of presence when compared to using controllers.
Experiments with Game-Like Experiences. As a goal of VR research is to understand our perception in order to create better VR experiences, our findings and hypotheses should be confirmed in scenarios that are similar to actual user experiences outside of lab settings, in addition to (not instead of) experiments with repetitive tasks. However, the design of such studies also presents many challenges: the development of a suitable game can be very laborious; the variance between participants’ reactions can be increased through further confounding factors, such as how skilled participants are at playing specific games; and the number of participants needed is typically larger (based on the estimated variance and the fact that such studies might require between-subjects designs). Furthermore, effects might become diluted in some types of games. For example, it is more difficult to measure efficiency and performance in a game that focuses on slow-speed puzzles than in a first-person shooter where speed is key to success. Being able to compare different conditions without distractions in a repetitive design might lead participants to “recalibrate their scale” and reveal more subtle differences; however, these differences might then not be important in a more immersive application. Despite the challenges, we consider experiments using more realistic applications a necessary and important addition to studies with repetitive tasks because they can provide more true-to-life observations of immersive virtual experiences.
When planning such an experiment, we recommend adjusting the game type to the concepts being studied. Different types of games might need to be used to evaluate different concepts, and the same concepts would ideally be tested in several scenarios. Ideally, a series of applications of different types would be accessible to the community for experiments, so that hypotheses can be tested in a variety of genres.

6 Conclusion, Limitations, and Future Work

In this article, we present a study that investigates the effect of two control modes (Gloves vs. Controllers) and two grasping visualizations (Tracked Hands vs. Disappearing Hands) on ownership, realism, efficiency, enjoyment, and presence when playing an Escape Room game in which players interact with objects to solve puzzles. Our results show that ownership, realism, enjoyment, and presence significantly increased when using hand tracking (Gloves) as an input modality compared to controllers. We also found limited evidence that a Tracked Hand visualization increases ownership and enjoyment compared to a virtual hand that disappears during grasps.
We therefore recommend considering hand tracking as an input modality instead of controllers when creating VR applications, and continuing to improve this technology and increase its accuracy for consumers. Our results were obtained using a motion capture system that was specifically developed to track hand motions in real time. Further studies would need to demonstrate whether our findings hold with current commercially available hand tracking devices.
A limitation of this work is that the user’s hands are represented by a robot model that is low in realism. While this model is in line with the models used in previous studies and allows for better comparison, the results might differ with a more realistic hand model. Additionally, our grasping representations are not realistic, as the participants’ fingers intersect with the object when grasping if they do not disappear. Interestingly, none of the participants commented on the hands moving through the objects. Future work could investigate the effect of hand model and grasping representation realism in game-like experiences. It would also be interesting to investigate whether visualizing the hands with controllers in the Controllers condition would affect results. Finally, we only tested one game and cannot generalize our results to other games or genres. Exploring our results with experiments that use other game genres of varying levels of immersion, players of different experience levels, or existing games with modifications would further the generalizability of our findings.
While our results cannot be generalized to other games, one also has to be cautious when generalizing studies with a repetitive design. We often cannot confirm with certainty that such results will remain the same with an altered task or a different participant sample, who might, for example, have more experience with virtual reality. Most research progress in our field (and in any other field) is not made through individual studies but through many studies. Findings need to be replicated and validated in different contexts. While we do not replicate other studies (we would need to follow exactly the same protocol to do so), verifying how specific conditions are perceived in different situations can reinforce and strengthen findings, which is one of the main contributions of this article.

References

[1]
Monthir Ali and Rogelio E. Cardona-Rivera. 2020. Comparing gamepad and naturally-mapped controller effects on perceived virtual reality experiences. In ACM Symposium on Applied Perception 2020 (SAP’20). Article 10, 10 pages. https://doi.org/10.1145/3385955.3407923
[2]
Ferran Argelaguet, Ludovic Hoyet, Michaël Trico, and Anatole Lécuyer. 2016. The role of interaction in virtual embodiment: Effects of the virtual hand representation. In 2016 IEEE Virtual Reality (VR). 3–10. https://doi.org/10.1109/VR.2016.7504682
[3]
Barry Brown, Stuart Reeves, and Scott Sherwood. 2011. Into the wild: Challenges and opportunities for field trial methods. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI’11). 1657–1666. https://doi.org/10.1145/1978942.1979185
[4]
Ryan Canales, Aline Normoyle, Yu Sun, Yuting Ye, Massimiliano Di Luca, and Sophie Jörg. 2019. Virtual grasping feedback and virtual hand ownership. In ACM Symposium on Applied Perception 2019. 1–9. https://doi.org/10.1145/3343036.3343132
[5]
Center for Self-Determination Theory. 2000. Intrinsic Motivation Inventory (IMI). Retrieved 2021 from https://selfdeterminationtheory.org/intrinsic-motivation-inventory/.
[6]
Alena Denisova, A. Imran Nordin, and Paul Cairns. 2016. The convergence of player experience questionnaires. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. ACM, 33–37. https://doi.org/10.1145/2967934.2968095
[7]
Diane Dewez, Ludovic Hoyet, Anatole Lécuyer, and Ferran Argelaguet Sanz. 2021. Towards “avatar-friendly” 3D manipulation techniques: Bridging the gap between sense of embodiment and interaction in virtual reality. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI’21). Article 264, 14 pages. https://doi.org/10.1145/3411764.3445379
[8]
Shangchen Han, Beibei Liu, Robert Wang, Yuting Ye, Christopher D. Twigg, and Kenrick Kin. 2018. Online optical marker-based hand tracking with deep labels. ACM Transactions on Graphics 37, 4, Article 166 (July 2018), 10 pages. https://doi.org/10.1145/3197517.3201399
[9]
Niels Henze, Martin Pielot, Benjamin Poppinga, Torben Schinke, and Susanne Boll. 2011. My app is an experiment: Experience from user studies in mobile app stores. International Journal of Human Computer Interaction 3 (Oct. 2011), 71–91. https://doi.org/10.4018/jmhci.2011100105
[10]
Meng Chun Lam, Haslina Arshad, Anton Satria Prabuwono, Siok Yee Tan, and Seyed Mostafa Mousavi Kahaki. 2018. Interaction techniques in desktop virtual environment: The study of visual feedback and precise manipulation method. Multimedia Tools and Applications 77, 13 (2018), 16367–16398. https://doi.org/10.1007/s11042-017-5205-9
[11]
Lorraine Lin and Sophie Jörg. 2016. Need a hand? How appearance affects the virtual hand illusion. In Proceedings of the ACM Symposium on Applied Perception (SAP’16). 69–76. https://doi.org/10.1145/2931002.2931006
[12]
Lorraine Lin, Aline Normoyle, Alexandra Adkins, Yu Sun, Andrew Robb, Yuting Ye, Massimiliano Di Luca, and Sophie Jörg. 2019. The effect of hand size and interaction modality on the virtual hand illusion. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). 510–518. https://doi.org/10.1109/VR.2019.8797787
[13]
Lorraine Lin, Dhaval Parmar, Sabarish V. Babu, Alison E. Leonard, Shaundra B. Daily, and Sophie Jörg. 2017. How character customization affects learning in computational thinking. In Proceedings of the ACM Symposium on Applied Perception (SAP’17). Article 1, 8 pages. https://doi.org/10.1145/3119881.3119884
[14]
Christos Lougiakis, Akrivi Katifori, Maria Roussou, and Ioannis-Panagiotis Ioannidis. 2020. Effects of virtual hand representation on interaction and embodiment in HMD-based virtual environments using controllers. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). 510–518. https://doi.org/10.1109/VR46266.2020.00072
[15]
Jean-Luc Lugrin, Maximilian Ertl, Philipp Krop, Richard Klüpfel, Sebastian Stierstorfer, Bianka Weisz, Maximilian Rück, Johann Schmitt, Nina Schmidt, and Marc Erich Latoschik. 2018. Any “body” there? Avatar visibility effects in a virtual reality game. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). 17–24. https://doi.org/10.1109/VR.2018.8446229
[16]
Ke Ma and Bernhard Hommel. 2013. The virtual-hand illusion: Effects of impact and threat on perceived ownership and affective resonance. Frontiers in Psychology 4 (2013), 604. https://doi.org/10.3389/fpsyg.2013.00604
[17]
Ryan P. McMahan, Alexander Joel D. Alon, Shaimaa Lazem, Robert J. Beaton, David Machaj, Michael Schaefer, Mara G. Silva, Anamary Leal, Robert Hagan, and Doug A. Bowman. 2010. Evaluating natural interaction techniques in video games. In 2010 IEEE Symposium on 3D User Interfaces (3DUI). 11–14. https://doi.org/10.1109/3DUI.2010.5444727
[18]
M. Moehring and B. Froehlich. 2011. Effective manipulation of virtual objects within arm’s reach. In 2011 IEEE Virtual Reality Conference. 131–138. https://doi.org/10.1109/VR.2011.5759451
[19]
Aline Normoyle, Gina Guerrero, and Sophie Jörg. 2014. Player perception of delays and jitter in character responsiveness. In Proceedings of the ACM Symposium on Applied Perception (SAP’14). 117–124. https://doi.org/10.1145/2628257.2628263
[20]
Mores Prachyabrued and Christoph W. Borst. 2012. Visual interpenetration tradeoffs in whole-hand virtual grasping. In 2012 IEEE Symposium on 3D User Interfaces (3DUI). 39–42. https://doi.org/10.1109/3DUI.2012.6184182
[21]
Mores Prachyabrued and Christoph W. Borst. 2014. Visual feedback for virtual grasping. In 2014 IEEE Symposium on 3D User Interfaces (3DUI). 19–26. https://doi.org/10.1109/3DUI.2014.6798835
[22]
Mores Prachyabrued and Christoph W. Borst. 2016. Design and evaluation of visual interpenetration cues in virtual grasping. IEEE Transactions on Visualization and Computer Graphics 22, 6 (2016), 1718–1731. https://doi.org/10.1109/TVCG.2015.2456917
[23]
Scott Rigby and Richard Ryan. 2007. The Player Experience of Need Satisfaction (PENS). https://immersyve.com/white-paper-the-player-experience-of-need-satisfaction-pens-2007/.
[24]
Sofia Seinfeld, Tiare Feuchtner, Johannes Pinzek, and Jörg Müller. 2020. Impact of information placement and user representations in VR on performance and embodiment. IEEE Transactions on Visualization and Computer Graphics (2020), 1–13. https://doi.org/10.1109/TVCG.2020.3021342
[25]
Thomas B. Sheridan. 1992. Musings on telepresence and virtual presence. Presence: Teleoperators & Virtual Environments 1, 1 (1992), 120–126. https://doi.org/10.1162/pres.1992.1.1.120
[26]
Paul Skalski, Ron Tamborini, Ashleigh Shelton, Michael Buncher, and Pete Lindmark. 2011. Mapping the road to fun: Natural video game controllers, presence, and game enjoyment. New Media & Society 13, 2 (2011), 224–242. https://doi.org/10.1177/1461444810370949
[27]
Mel Slater, Daniel Pérez Marcos, Henrik Ehrsson, and Maria V. Sanchez-Vives. 2008. Towards a digital body: The virtual arm illusion. Frontiers in Human Neuroscience 2 (2008), 6. https://doi.org/10.3389/neuro.09.006.2008
[28]
Anthony Steed, Sebastian Friston, María Murcia López, Jason Drummond, Ye Pan, and David Swapp. 2016. An ‘in the wild’ experiment on presence and embodiment using consumer virtual reality equipment. IEEE Transactions on Visualization and Computer Graphics 22, 4 (April 2016), 1406–1414. https://doi.org/10.1109/TVCG.2016.2518135
[29]
Ron Tamborini and Paul Skalski. 2006. The role of presence in the experience of electronic games. Playing Video Games: Motives, Responses, and Consequences 1 (Jan. 2006), 225–240.
[30]
Unity. 2012. Unity 4.0 - Mecanim Animation Tutorial. https://www.youtube.com/watch?v=Xx21y9eJq1U.
[31]
Spyros Vosinakis and Panayiotis Koutsabasis. 2018. Evaluation of visual feedback techniques for virtual grasping with bare hands using leap motion and oculus rift. Virtual Reality 22, 1 (Mar 2018), 47–62. https://doi.org/10.1007/s10055-017-0313-4
[32]
Ye Yuan and Anthony Steed. 2010. Is the rubber hand illusion induced by immersive virtual reality? In 2010 IEEE Virtual Reality Conference (VR). 95–102. https://doi.org/10.1109/VR.2010.5444807
