
Calibrated Passability Perception in Virtual Reality Transfers to Augmented Reality

Published: 25 October 2023, ACM Transactions on Applied Perception, Volume 20, Issue 4 (Special Issue on SAP 2023)

Abstract

As applications for virtual reality (VR) and augmented reality (AR) technology increase, it will be important to understand how users perceive their action capabilities in virtual environments. Feedback about actions may help to calibrate perception for action opportunities (affordances) so that action judgments in VR and AR mirror actors’ real abilities. Previous work indicates that walking through a virtual doorway while wielding an object can calibrate the perception of one’s passability through feedback from collisions. In the current study, we aimed to replicate this calibration through feedback using a different paradigm in VR while also testing whether this calibration transfers to AR. Participants held a pole at 45° and made passability judgments in AR (pretest phase). Then, they made passability judgments in VR and received feedback on those judgments by walking through a virtual doorway while holding the pole (calibration phase). Participants then returned to AR to make posttest passability judgments. Results indicate that feedback calibrated participants’ judgments in VR. Moreover, this calibration transferred to the AR environment. In other words, after experiencing feedback in VR, passability judgments in VR and in AR became closer to an actor’s actual ability, which could make training applications in these technologies more effective.

1 Introduction

In training situations, individuals are often asked to wield an object that requires them to understand the object’s size and extent in order to perform accurately. Some argue that once training progresses, trainees will start to incorporate the object into their body schema—or their understanding of the position and configuration of their physical body as a three-dimensional object in space [35]. In doing so, individuals would recalibrate their perceived action capabilities to reflect their new abilities with the additional object. But what type of experience or training is needed to ensure accurate perception of wielded objects? Previous research in the real world, virtual reality (VR), and augmented reality (AR) suggests that receiving various types of feedback about action capabilities over time, as in training, can calibrate action perception (e.g., Franchak [12]). For example, observers could make a judgment about their action capability (“I can walk through the doorway”) and then receive verbal feedback about its accuracy. Alternatively, they could make a judgment about their capability and then attempt to perform the action (e.g., walk through the doorway to see if they fit) to gain feedback. Although many types of feedback have been developed to calibrate actions, it is still unclear which are most effective for training in VR and AR.
In addition to knowing which types of feedback are effective, knowing whether training will transfer across environments is also important. Training in both VR and AR can transfer to the real world for situations like surgeries, first responder training, and more [29, 49]. However, little, if anything, is known about whether training in a fully virtual environment can transfer to a partially virtual (or augmented) environment or vice versa. To build effective training programs for any such environment, it would be ideal for the training to generalize to as many other types of environments as possible.
Understanding whether calibrated action perception transfers from VR to AR is important for training situations in which a user must ultimately perform actions in AR. The present study investigates action calibration and transfer in the context of walking through doorways. Many everyday tasks require us to pass through apertures, such as doorways, which becomes more difficult to do effectively and without collision when wielding large objects. For example, military personnel who frequently walk through doorways carrying a rifle must have a precise understanding of the extent of the rifle to avoid collision. In the current experiment, we first determine whether providing feedback about the outcome of passing through an aperture (i.e., experiencing a collision or not) while holding a long object can calibrate performance over time in VR. We then assess whether calibration of this action transfers to actions in AR. Training an action in AR may be difficult for several reasons: the available action space may be limited, AR hardware may be harder to access for remote training, or the intended AR application may involve a risky environment. Calibration transfer between VR and AR is also interesting from a theoretical perspective. In VR, the entire visual scene is virtual, but in AR, virtual objects are presented in the context of real-world objects. Whether perceptual calibration is sensitive to these environmental differences is important for designers of training environments that need to be presented using different technologies for various reasons.

2 Background

2.1 Affordances and Extensions to the Body

The relationships between one’s body and the environment present opportunities for action, or affordances [20, 21]. For example, a doorway affords passage if it is wider than the shoulders, and an object affords grasping if it is smaller than the hand. In other words, the perception of affordances is calibrated, or scaled, to relevant properties of the body [46]. We are able to flexibly adjust affordance perception to account for changes in body dimensions or abilities in everyday life, such as carrying a wide object through a doorway [43] or locomoting in a wheelchair [26]. Studies on the perception of reaching capability indicate that when users wield a tool, the tool becomes incorporated into their body schemas. Thus, their reaching space becomes “re-mapped” to reflect their new ability [3, 35], and objects that the tool may interact with can be perceived as closer [47]. For the affordance of walking through a doorway, when users hold a rod that effectively widens their body, affordance perception recalibrates to accommodate the person-plus-object (PPO) system (e.g., Wagman and Taylor [45]), and the apertures that they are asked to consider walking through are perceived as smaller [43]. Users can also adjust their affordance perception to accommodate locomoting through an aperture in a wheelchair [25, 26] and squeezing sideways through an aperture while wearing a backpack [12, 13, 14]. This research, all conducted in the real world, provides evidence that observers can flexibly incorporate new objects into their body schemas and that perception of action capabilities is updated accordingly.
Extended or mixed realities (XR) have offered new tools to test the perception of affordances with changes to the body. XR encompasses AR and VR, including environments that contain both real and virtual components [38, 42]. AR superimposes virtual objects onto the real world, whereas VR immerses the user in a visually virtual environment. Given its ability to portray a virtual body with different dimensions than an observer’s real physical body, VR has been used most often to manipulate body extensions or changes in size in order to assess the effect on affordance perception. For example, VR has been used to present larger hands [34], longer arms [6], and bigger feet [28] in order to test the effects of displaying a change to virtual body size on affordance perception for grasping, reaching, and stepping. Most relevant to the current experiment is the work of Bhargava and colleagues who tested whether the presence of an avatar affected judgments of passability through a virtual aperture while wielding a virtual object [5]. They found that rendering a self-avatar produced more realistic passability judgments compared to the no-avatar condition. These types of body manipulations would be very difficult, if not impossible, to achieve in the real world where users can see their true physical body size at all times. In AR, users see their bodies and can ascertain how they relate to an object they are carrying. In fully virtual environments, the body must be rendered. In the current study, we examine whether rendering just the object in VR will allow for an understanding of body size through feedback from action that may transfer to an AR environment.

2.2 Testing Feedback Effects on Perceived Affordances in XR

When an actor interacts with their environment, they receive information about the relationships between their body and features of the environment after performing an action. This information, or feedback, calibrates affordance perception. For example, in real environments participants who wore blocks on their feet, thereby increasing their height, could calibrate their affordance perception for sitting in a chair [36] or walking under a barrier [44] through feedback from postural sway about their new height. Postural sway is an example of exploratory feedback—a type of feedback that does not provide explicit information about the outcome of the afforded action. Participants wearing blocks in Mark et al. [36] did not need to practice the action of sitting nor did they receive feedback about the accuracy of making sit-ability judgments. Yet their judgments did calibrate to their new height while wearing the blocks. However, exploratory feedback is not always sufficient for action calibration to occur [12]. Often, feedback that explicitly reveals the outcome of the action is required for calibration. Thus, the bulk of research on affordance perception calibration has focused on outcome feedback, which provides explicit information about the outcome of an action that can be judged statically or through action. With static outcome feedback, participants make a judgment about whether they can or cannot complete the action (without actually performing it), and then they receive visual or auditory feedback about the accuracy of that judgment. For action outcome feedback, the participant makes a judgment and then performs the afforded action, which provides feedback about the success or failure of their judgment as that action unfolds. Franchak et al. [15] showed that action-based outcome feedback may calibrate passing-through judgments better than static feedback.
Affordance perception calibration for walking through a doorway without turning, but while holding an object, has been studied in mediated environments (VR and AR) as well as in the real world. At baseline, viewers tend to overestimate the smallest doorway width that affords passage when not holding an object [32, 46]. Previous work in the real world and in VR shows that participants wielding an object are sensitive to the addition of an object to their bodies and can rescale judgments of passability accordingly [5, 23, 45] (but see Petrucci et al. [39]). Participants judging passability from a static viewpoint in the real world can accurately calibrate their judgments when either seeing a rod that increases their horizontal width or carrying it without seeing it [45]. These findings suggest that only visual or only haptic information may be sufficient feedback to adjust passability estimates. Bhargava et al. [5] examined judgments of passability through a doorway while wielding an object in VR. Practice walking through doorways with the object improved the accuracy of subsequent affordance judgments. Moreover, the calibration phase had a larger influence on judgments when an avatar was not present, suggesting that action outcome feedback may be especially important for calibration when visual representation of a virtual body is absent.
An important aspect of successfully judging passability and calibrating judgments is not colliding with the aperture during passage. Kondo et al. [30] investigated collision-avoidance behavior in older adults who wielded a rod horizontally while walking through real and virtual apertures. In their study, participants completed a pretest in the real world where they practiced walking through apertures while holding a rod without receiving feedback. They then completed a training phase in VR, with the virtual aperture presented via a 3D stereo projection screen rather than an immersive head-mounted display (as in Bhargava et al.). Participants walked in place while the VR image moved to simulate them walking through the virtual aperture. Virtual collisions were indicated visually on the virtual walls that were hit. They then completed a posttest phase in the real world. The researchers found that the number of collisions did not change between pretest and posttest, but participants significantly reduced their body rotation angles for passing through the apertures. Mestre et al. [37] compared vibrotactile feedback and avatar conditions for calibrating passing-through judgments without an additional object. They found that in the absence of any feedback (avatar, haptic), subjects collided with the small virtual apertures in almost 50% of trials. This number of collisions was reduced when feedback or an avatar was added, with the smallest number of collisions occurring when both were present.
Affordance perception and recalibration are just beginning to be explored in AR. By recalibration, we mean the use of feedback to adapt an affordance judgment. Existing affordance perception research in AR has primarily been conducted in optical see-through AR [16, 17, 18, 40, 41, 48]. Zhao and colleagues used mobile augmented reality (displayed via handheld devices) to assess a visual cue representing the width of a user, shown in the context of a set of apertures to be judged for passability [50]. They found that participants’ judgments of whether apertures were passable were improved after viewing the augmented visual cue, which provided feedback about the width of one’s shoulders in the context of the virtual aperture. Gagnon and colleagues also recently investigated the role of static and action outcome feedback on passability judgments in the HoloLens 1, an optical see-through AR device [16, 17]. Gagnon et al. [16] used the method of adjustment to assess participants’ perception of the smallest aperture width that would be passable from a static position. In feedback trials, participants were presented with an aperture, made a yes/no judgment on whether they could successfully pass, and then the HoloLens provided auditory feedback (“correct” or “incorrect”) on the accuracy of that judgment. Baseline aperture width judgments were overestimated compared to shoulder width, but adjustments got closer to shoulder width following feedback. Thus, the results of this study provide evidence that static verbal feedback can successfully calibrate passing-through judgments in AR.
Gagnon et al. [17] implemented the same stimuli and trial procedure as their prior work [16] to investigate whether perceptual-motor collision-based feedback would calibrate passing-through affordance perception similarly to static verbal feedback in AR. Walking through an aperture provides additional perceptual cues, such as optic flow; perhaps the greater availability of cues could calibrate affordance judgments more efficiently than static feedback. However, providing collision-based outcome feedback in XR is challenging due to the virtual nature of the aperture walls. It is obvious if a collision is made while passing through an aperture in the real world because the actor can haptically feel contact with the wall, but this sensory cue is often absent in XR. It can also be difficult to visually perceive a collision in AR due to the opacity of the aperture walls and the limited field of view of the device. In the feedback trials of Gagnon et al. [17], participants walked through the AR aperture and received auditory outcome feedback indicating whether a collision occurred during passage. Baseline perception of the just-passable aperture width was overestimated, but, contrary to Gagnon et al. [16], the perceived just-passable aperture width increased following the collision-based feedback blocks. The authors conjectured that this increase in overestimation was due to the high number of collisions that participants experienced. Taken together, the results of studies by Gagnon and colleagues ([16, 17]) demonstrate that affordance perception can be recalibrated in AR, but the direction of calibration may depend on the type of feedback that is provided. Additional research is needed to determine how different affordances are perceived in AR and to further examine the extent and type of feedback that is sufficient for recalibration.

2.3 Does Feedback Transfer to Other Environments?

XR can be useful for training skills that may be inconvenient to train in the real world [29]. These include situations that are unknown or that may be unsafe, such as a future mission to Mars [24], practice with difficult surgical procedures [2], or scenarios involving complex manufacturing [22]. For example, when training collision-avoidance behavior for walking through an aperture, it may be sub-optimal or dangerous for participants to collide with a physical doorway [30]. If XR is used for training, it is important to evaluate how and when calibration transfers between virtual and real environments as well as whether it transfers across different types of mediated environments in order for the training to be most effective and generalizable.
The existing research on affordance perception calibration transfer has primarily focused on whether calibration transfers to a functionally similar action. In some studies, researchers have found that calibrating perception for one affordance transfers to a similar affordance in the real world. For example, participants who have calibrated their maximum leaping distance show more accurate judgments of maximum stepping distance [7]. In other cases, calibration does not transfer. As previously described, Kondo et al. [30] also used a real-world pretest, VR training, and real-world posttest design for training collision-avoidance behavior for walking through virtual apertures. They found that training in VR did not reduce collisions in the real-world posttest, but the training did result in participants making smaller body rotations during passing, suggesting that training in VR does have the potential to transfer some aspects of the perception of action capabilities to a real environment. However, the VR setup in this study was more similar to AR in the sense that a participant saw their real body and a real rod when completing the training task.
To the best of our knowledge, research on the transfer of affordance perception between VR and AR does not exist. However, some research shows that training in VR can transfer to the real world for police training [4], maintenance procedures [19], surgical skills [27], and wayfinding tasks [31]. Most relevant to the current study, Day et al. [6] found that calibration to altered reaching capabilities in VR transferred to the real world. Participants completed a pretest phase in the real world, a calibration phase in VR in which they practiced reaching with an extended avatar arm or a normal avatar arm (action outcome feedback), and a posttest phase in the real world. Participants who practiced reaching with the extended arm in VR believed they could reach farther in the posttest phase compared to participants who experienced the unaltered avatar arm. The present research expands upon Day et al. [6] by investigating whether calibration from action outcome feedback with a different affordance—passability—in VR transfers to AR.

3 Experiment

The current experiment tested affordance judgments for passing through a doorway while wielding an object in VR and AR, the effect of providing collision-based outcome feedback on affordance perception calibration in VR, and whether calibration transferred from VR to AR. Participants held a PVC pole at a 45° angle. Participants verbally judged whether they could walk through virtual doorways from a static position, but then proceeded to physically walk through those “doorways” to receive feedback. Based on prior work in real and virtual environments, we predicted that baseline (i.e., prefeedback) judgments of the smallest just-passable doorway width would be overestimated [17]. We made the following predictions:
H1.
Collision-based action outcome feedback presented in VR will calibrate judgments of passability in VR while holding an object.
H2.
Any calibration resulting from training in VR will transfer to AR.

3.1 Participants

Data were collected from 36 participants at the University of Utah. All participants provided informed consent and volunteered or were compensated with course credit for their time. Participants had normal or corrected-to-normal vision. Six participants were excluded due to technical difficulties or a misunderstanding of instructions. This left 30 participants for analysis (19 female; mean age = 22.13 years, SD = 4.99). The average shoulder width of participants was 0.45 m (SD = 0.06 m).

3.2 Materials

The AR portion of the experiment was built with Unity (version 2018.2.2) on a Windows 10 laptop and ran as a stand-alone application on the Microsoft HoloLens 1. The HoloLens weighs approximately 579 g and has a graphical field of view of 30° \(\times\) 17°. For this experiment, the HoloLens presented two virtual walls with a space between them, creating an aperture. The virtual walls extended up to the ceiling of the real room and to the outside walls of the real room (see Figure 1). Participants stood at a virtual red line that was shown on the ground 3.20 m from the virtual aperture to make their static judgments of passability (as per [16]).
The VR portion of the experiment was created using Unity (version 2021.2.0) and ran as a stand-alone application on a Windows 10 computer. The virtual environment was presented in the HTC Vive Pro head-mounted display (HMD), which has a 110° diagonal field of view and weighs 555 g. The Vive presented a virtual aperture that was created by displaying two virtual walls that extended infinitely upward and to the left and right. The walls were extended in this way so that nothing of the virtual world was visible beyond them, analogous to how the real walls of the room blocked participants’ views in the AR environment. The VR walls are also depicted in Figure 1.
Fig. 1. The AR (left) and VR (right) walls.
In both conditions (VR and AR), all participants held a 1.22-m-long PVC pole with a 1-inch (2.54 cm) diameter. Two Vive controllers were attached to the pole, positioned 25 cm from its ends, to track its position so that it could be displayed virtually in VR. The pole was held at a 45° angle with the participants’ dominant hand by their hip (Figure 2). The angle of the pole was checked at the beginning of every trial using a square tool to ensure that participants continued to hold the pole at this angle throughout all trials in both the VR and AR conditions. When held at 45°, the effective width of the observer and pole was 0.86 m for every participant, given that the pole’s horizontal extent exceeded participants’ shoulder widths. In VR, the virtual pole was rendered to resemble the real-world pole (Figure 3). An avatar was not rendered.
For the VR feedback trials, collisions were detected using invisible colliders positioned at the ends of the virtual pole. The positions of the colliders were determined from the angle of the two attached Vive controllers relative to each other, and the colliders were always set 1.22 m apart in the virtual environment, matching the length of the real pole. If the virtual pole collided with the virtual walls, the corresponding controller vibrated (e.g., if the right wall was hit, the right controller vibrated).
Fig. 2. Participant holding the pole at 45°.
Fig. 3. Rendering of the virtual pole used in the VR program.
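To make the collider logic concrete, the following is a minimal geometric sketch under our own simplifying assumptions (2D positions in meters, aperture centered at x = 0, walls in the plane y = 0); the actual experiment used Unity physics colliders driven by the tracked controllers, not code like this:

```python
import numpy as np

POLE_LENGTH = 1.22  # end colliders were fixed 1.22 m apart, matching the pole

def pole_endpoints(left_ctrl, right_ctrl):
    """Place the two end colliders 1.22 m apart along the controller axis."""
    left_ctrl = np.asarray(left_ctrl, dtype=float)
    right_ctrl = np.asarray(right_ctrl, dtype=float)
    axis = right_ctrl - left_ctrl
    axis /= np.linalg.norm(axis)                 # unit vector along the pole
    center = (left_ctrl + right_ctrl) / 2.0
    return (center - axis * POLE_LENGTH / 2.0,
            center + axis * POLE_LENGTH / 2.0)

def collision_side(endpoints, aperture_width, tol=0.05):
    """Return 'left', 'right', or None: which wall a pole end hits.

    A collision occurs when an endpoint crosses the wall plane (y ~ 0)
    while laterally outside the opening; the matching controller vibrates.
    """
    half_gap = aperture_width / 2.0
    for end in endpoints:
        if abs(end[1]) < tol and abs(end[0]) > half_gap:
            return "left" if end[0] < 0 else "right"
    return None

# Example: controllers 0.72 m apart (25 cm from each pole end); the right
# pole end sits 0.79 m from center while crossing a 0.90 m aperture -> "right".
ends = pole_endpoints((-0.18, 0.015), (0.54, 0.015))
print(collision_side(ends, aperture_width=0.90))
```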
The feedback trial doorway widths were chosen based on the effective width (0.86 m) that represented the person holding the pole at the correct angle. The smallest feedback trial doorway width was 90% of the effective width and the largest was 190%. We then chose distances evenly spaced between the smallest and largest widths to yield a total of eight feedback trial widths: 0.777 m, 0.900 m, 1.024 m, 1.147 m, 1.270 m, 1.393 m, 1.516 m, and 1.640 m. These feedback trial widths were also, in part, chosen to accommodate what we thought was the actual smallest doorway width that would be just-passable. Based on pilot testing with three participants, we anticipated that the smallest just-passable doorway width would be greater than the effective width of 0.86 m due to body sway and possible movement of the pole while walking. The smallest doorway that these pilot participants could pass through was approximately 1.2–1.3 m. We chose 1.22 m given that this was also the effective width if participants held the pole horizontally across their body. Of the eight feedback trial widths, four were less than 1.22 m and four were greater than 1.22 m. With this design, we expected participants to typically have four successful passes and four unsuccessful passes, which is what we observed in the current experiment, as reported in the Results section.
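A short sketch (our reconstruction, not published code) reproduces the eight widths. Note that the endpoints match the reported 0.777 m and 1.640 m only if the unrounded horizontal projection of the pole, 1.22 m × cos 45° ≈ 0.863 m, is used in place of the rounded 0.86 m:

```python
import numpy as np

# Horizontal extent of a 1.22 m pole held at 45 deg (rounded to 0.86 m in text).
effective = 1.22 * np.cos(np.radians(45))               # ~0.863 m
widths = np.linspace(0.9 * effective, 1.9 * effective, num=8)
print(np.round(widths, 3))
# -> [0.776 0.9 1.023 1.146 1.269 1.393 1.516 1.639]
# matches the reported 0.777 ... 1.640 m to within a millimeter of rounding
```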

3.3 Procedure

After providing informed consent, participants completed a stereo vision test. Next, their interpupillary distance, height, eye height, and shoulder width were measured. Shoulder width was measured by having participants stand perpendicular to a wall with their shoulder against the wall. They held a ruler against the shoulder that was not touching the wall, and the experimenter measured the distance between the wall and the ruler. This procedure better standardized measurement of body width across participants.
The trial progression is depicted in Figure 4. Participants donned the HoloLens and stood 3.20 m from the virtual aperture to begin the AR portion of the experiment. They also held the pole at the appropriate angle. Participants completed two AR adjustment trials where they were instructed to adjust the aperture until they believed it was the smallest aperture that they could walk through while holding the pole without turning their body or colliding with the walls. They adjusted the walls by saying the vocal commands “bigger” and “smaller,” which increased or decreased the width of the aperture in 0.02 m increments, respectively. Within each pair of adjustment trials, one trial started with an aperture width of 0.854 m (70% of the just-passable width) and the other started with a width of 2.196 m (180% of the just-passable width).
Fig. 4. Trial progression. Participants completed adjustment trials in AR, then adjustment and feedback trials in VR, and then adjustment trials in AR.
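The adjustment procedure described above amounts to a simple command loop. The sketch below is illustrative only: get_voice_command is a hypothetical stand-in for the HoloLens voice recognizer, while the 0.02 m step and the two start widths follow the text:

```python
STEP = 0.02                        # m per "bigger"/"smaller" command
JUST_PASSABLE = 1.22               # anticipated just-passable width (m)
START_WIDTHS = (0.70 * JUST_PASSABLE, 1.80 * JUST_PASSABLE)  # 0.854 m, 2.196 m

def run_adjustment_trial(start_width, get_voice_command):
    """Adjust the aperture until the participant settles on a width."""
    width = start_width
    while True:
        cmd = get_voice_command()  # hypothetical: "bigger", "smaller", "done"
        if cmd == "bigger":
            width += STEP
        elif cmd == "smaller":
            width = max(STEP, width - STEP)
        else:
            return width           # participant's just-passable estimate

# Example with a scripted participant who shrinks the wide-start aperture:
commands = iter(["smaller"] * 40 + ["done"])
print(round(run_adjustment_trial(START_WIDTHS[1], lambda: next(commands)), 3))
# -> 1.396 (2.196 m minus forty 0.02 m steps)
```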
After completing the adjustment trials, participants donned the Vive HMD to enter the virtual environment. They continued to hold the pole during the VR trials. First, they completed two adjustment trials similar to those in AR. To adjust the width of the aperture, they pressed the grip buttons on the sides of the Vive controllers: pressing the buttons on the right controller increased the aperture width by 0.02 m, and pressing the buttons on the left controller decreased it by 0.02 m. Participants then completed eight feedback trials, in which each of the eight feedback doorway widths (from Section 3.2) was presented once in random order. In each feedback trial, a virtual aperture was presented, and participants responded by saying “yes” if they thought they would be able to walk through it while holding the pole without turning their body or colliding with the walls, and “no” if they did not think they could. After making their verbal response, participants physically walked through the virtual aperture. If the pole collided with the walls during passage, the corresponding controller vibrated (e.g., if the right wall was hit, the right controller vibrated). Participants were instructed to always walk through the aperture, even if they thought it would be too small to pass, to ensure that all participants received the same amount of perceptual-motor and outcome feedback. After completing the eight feedback trials, they completed two adjustment trials, eight more feedback trials (same widths as the first eight), and then two more adjustment trials. Finally, participants returned to the HoloLens AR environment, where they completed two final adjustment trials.
Fig. 5. Graph of the feedback effect. As participants received more feedback, affordance ratios decreased. Error bars represent ±1 standard error of the mean.

4 Results

The doorway widths set during the adjustment trials for each block in VR were recorded in meters and averaged together, resulting in three data points per participant. Affordance ratios were calculated by dividing the participant’s set doorway width by 1.22 m. This denominator was chosen since it was the value used to determine the anticipated just-passable aperture width. Thus, a ratio greater than 1 means that the doorway width was set larger than the just-passable width, and a ratio less than 1 means that the doorway width was set smaller than the just-passable width.

4.1 Does Outcome Feedback Calibrate Judgments of Passability in VR?

On average, participants collided on 3.6 of the 8 feedback trials, meaning that they collided on about half of the trials, as we anticipated. We first tested whether the outcome feedback presented in VR calibrated judgments of passability in VR while holding the object. We predicted that affordance ratios would initially be overestimated and then become closer to 1 following repeated trials with feedback. A multilevel model was used to determine whether there was an effect of feedback. The dependent variable was the average affordance ratio for each block for each participant. Block number was an independent variable treated as continuous, and the model included a random intercept for participant. A baseline model with only the random effect was run in order to calculate the intraclass correlation coefficient (ICC). The ICC was 0.40, meaning that 40% of the variation in affordance ratios was accounted for by differences between participants; this finding supports the inclusion of the random effect in the full model. The full model revealed a significant effect of Block Number, \(B = -0.05\), \(SE = 0.02\), \(p \lt 0.05\). As block number increased, ratios decreased, becoming closer to 1 (see Figure 5). The model was rerun with Block designated as a factor in order to determine differences between individual blocks, that is, to identify the time point at which feedback started calibrating judgments. Planned comparisons revealed that Block 1 (mean ratio = 1.19) significantly differed from Block 0 (mean ratio = 1.30), \(t(58) = 2.82\), \(p \lt 0.05\). Block 2 (mean ratio = 1.20) also differed significantly from Block 0, \(t(58) = 2.45\), \(p \lt 0.05\), but not from Block 1, \(t(58) = -0.37\), \(p = 0.93\). Thus, the results suggest that calibration occurred between Block 0 and Block 1, with no further change between Block 1 and Block 2. This finding is potentially useful for estimating the training time needed to calibrate these types of judgments.
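The model structure can be sketched in a few lines of Python with statsmodels. This is our reconstruction, not the authors' analysis code; the data frame, column names, and effect sizes below are synthetic stand-ins shaped like the real design (30 participants × 3 blocks):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per participant x block, where 'ratio'
# is that block's mean affordance ratio (set doorway width / 1.22 m).
rng = np.random.default_rng(0)
pid = np.repeat(np.arange(30), 3)
block = np.tile([0, 1, 2], 30)
ratio = (1.30 - 0.05 * block                        # fixed feedback effect
         + np.repeat(rng.normal(0, 0.06, 30), 3)    # per-participant intercept
         + rng.normal(0, 0.07, 90))                 # residual noise
df = pd.DataFrame({"pid": pid, "block": block, "ratio": ratio})

# Null model (random intercept only) for the intraclass correlation.
null = smf.mixedlm("ratio ~ 1", df, groups=df["pid"]).fit()
var_between = float(null.cov_re.iloc[0, 0])         # between-participant variance
icc = var_between / (var_between + null.scale)      # paper reports ICC = 0.40

# Full model: block number as a continuous fixed effect.
full = smf.mixedlm("ratio ~ block", df, groups=df["pid"]).fit()
print(round(icc, 2), full.params["block"], full.bse["block"])
# paper reports B = -0.05, SE = 0.02 for the block effect
```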

4.2 Does Calibration of Passability Judgments Learned in VR Transfer to AR?

Our second hypothesis was that the calibration experienced in VR would transfer to judgments made in AR. At first glance, it appeared that the AR pre-VR and AR post-VR judgments did not differ. The average pre-VR affordance ratio was 0.97 (SD = 0.11), and the average post-VR affordance ratio was 0.98 (SD = 0.08). The results of a paired t-test confirmed that there was no difference between these ratios (\(t(29) = -0.27\), \(p = 0.79\)). However, these averages did not take into account differences in the amount of change between individual participants’ pre-VR and post-VR judgments. For example, some participants might start with underestimated judgments and increase their judgments following feedback, whereas others could start with overestimated judgments and decrease their judgments following feedback. In fact, about half of participants increased their judgments after feedback, and about half decreased their judgments after feedback (see Figure 6). Therefore, we used scaled difference scores to test for an effect of calibration transfer.
Fig. 6. AR affordance ratios before and after experiencing VR feedback. Individual data points are presented alongside the boxplots. Red lines indicate participants whose affordance ratios decreased following feedback; blue lines indicate participants whose ratios increased after feedback.
For each participant, we created a scaled AR difference score by subtracting the first AR adjustment phase average affordance ratios from the post-VR AR adjustment phase affordance ratios, then dividing this value by the average affordance ratios from the first AR adjustment phase:
\begin{equation*} \text{AR scaled difference} = \frac{\mathrm{AR}_{\text{post-VR}} - \mathrm{AR}_{\text{pre-VR}}}{\mathrm{AR}_{\text{pre-VR}}}. \end{equation*}
The VR data for this analysis were scaled difference scores created in a similar fashion: the Block 0 average adjustment-trial affordance ratios were subtracted from the Block 2 averages, and the difference was divided by the Block 0 averages:
\begin{equation*} \text{VR scaled difference} = \frac{\mathrm{VR}_{\text{Block 2}} - \mathrm{VR}_{\text{Block 0}}}{\mathrm{VR}_{\text{Block 0}}}. \end{equation*}
Thus, a scaled score of 0 indicates that there was no change between pre-feedback and post-feedback judgments. A positive score means that participants set wider apertures after receiving feedback, and a negative score means that they set smaller apertures after receiving feedback. A correlation was run on these scaled values to determine whether the feedback effect observed in Section 4.1 transferred from VR to AR. The scaled AR and VR values had a strong, positive correlation, \(r(28) = 0.51\), \(p \lt 0.05\).¹ This correlation suggests that the feedback effect (setting smaller doorway widths after feedback) in VR did transfer to judgments made in AR (see Figure 7): as VR scaled difference scores increased, AR scaled difference scores also increased.
Fig. 7. Correlation between the VR and AR scaled difference scores. The scores were significantly correlated, indicating that transfer occurred.
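As a sketch of the transfer analysis (our reconstruction, with hypothetical variable names and synthetic stand-in data, one value per participant), the scaled difference scores and their correlation reduce to a few lines:

```python
import numpy as np
from scipy import stats

def scaled_difference(post, pre):
    """(post - pre) / pre: 0 = no change; >0 wider, <0 narrower post-feedback."""
    post, pre = np.asarray(post, float), np.asarray(pre, float)
    return (post - pre) / pre

# Synthetic stand-ins: mean adjustment-trial affordance ratios per participant.
rng = np.random.default_rng(1)
vr_block0 = rng.normal(1.30, 0.10, 30)                 # VR pre-feedback (Block 0)
vr_block2 = vr_block0 - rng.normal(0.10, 0.08, 30)     # VR post-feedback (Block 2)
ar_pre = rng.normal(0.97, 0.11, 30)                    # AR pretest
ar_post = ar_pre + 0.5 * (vr_block2 - vr_block0) + rng.normal(0, 0.05, 30)

vr_diff = scaled_difference(vr_block2, vr_block0)
ar_diff = scaled_difference(ar_post, ar_pre)
r, p = stats.pearsonr(vr_diff, ar_diff)                # paper: r(28) = 0.51, p < 0.05
print(round(r, 2), p)
```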

5 Discussion

The current experiment tested whether calibration of affordance judgments for passing through an aperture (made in both AR and VR) while wielding an object can be accomplished by providing feedback in VR. The results of this study show that feedback associated with performing the action of passing through in VR (i.e., receiving feedback about whether a collision occurred) did calibrate judgments to be more accurate, which supports our first hypothesis. Participants’ baseline judgments of the just-passable aperture width while wielding the pole were overestimated, with a ratio of about 1.3. This finding is generally consistent with the previous passability literature (e.g., [16, 17, 46]). However, the 1.3 ratio found in Warren and Whang [46] was based on participants judging when they would need to turn their shoulders to pass through, which is slightly different from the task that participants completed in the current study (judging whether they could pass through without a collision). After receiving feedback, participants’ judgments about the aperture size that they could just pass through in the current study became more accurate by about 10% (i.e., closer to their actual ability taking the rod into account). This finding is consistent with the results of Bhargava et al. [5], but it differs from Gagnon et al. [17]. Gagnon and colleagues [17] did not ask participants to wield an object, but they did use collision-based feedback similar to the feedback used in the current study to assess judgments of passability for walking straight through an aperture without turning. They found that collision-based feedback actually increased judgments of just-passable aperture width. However, this increase may have been due to the smaller aperture widths and large number of collisions that participants experienced (about 65% of trials) in that study. In other words, because participants experienced many collisions, they judged that they needed wider doorways. In the current study, we anticipated the need for larger apertures during training, and participants experienced collisions on only about half of trials. Further work could manipulate the number of collisions experienced during training in a more controlled manner to test its effects on calibration.
The calibration of passability judgments across trials in VR was also quite quick. The results showed that calibration occurred between the first and second block of trials, and did not change after that. This is useful to know for future training situations for affordances in XR, but it also has theoretical implications. Specifically, it seems as though participants easily incorporated the size of the pole into their understanding of their width while wielding it with little feedback. This result is consistent with Wagman and Taylor [45] who found that participants were sensitive to their perceived passability while wielding a rod, even without vision of the rod. Simply wielding a rod provides information about weight and inertia through dynamic touch, and this information may be used to calibrate affordance perception [45].
In the current experiment, affordance ratios in VR were overestimated compared to affordance ratios in AR. One possible explanation is that we did not render an avatar in the virtual environment. The presence of an avatar or a cue that indicates body size has been shown to increase the accuracy of affordance judgments in VR [5, 50]. Specific to the current study, Bhargava and colleagues found that participants who viewed a virtual avatar while wielding an object in VR had more accurate passability judgments compared to participants who did not have an avatar [5]. Another possible explanation for the difference in affordance ratios in VR and AR observed in the current experiment could be the virtuality of the pole as it related to the context in which judgments were made. In VR, participants compared a virtual pole to virtual walls to make their judgments. In AR, participants viewed the real pole while making judgments about whether they could pass through virtual walls. Perhaps participants were able to make more accurate judgments in AR because they could see the real pole along with their real body. Thus, the AR environment provided more visual information about the size of the pole and the body as compared to VR. Further, the field of view of the VR head-mounted display, albeit larger than that of the HoloLens, could have restricted visibility of the aperture and the pole. In contrast, participants could see the real pole in AR outside of the display, so any restriction from the field of view would have only influenced the perception of the aperture’s size. Future work could replicate our study and include an avatar in the virtual environment to see if affordance ratios change with more information about body size and location during training. The effect of restricted field of view on judgments in either environment could also be explored further.
In support of our second hypothesis, the current study found that calibration experienced in VR transferred to AR, as demonstrated by the positive correlation between the change in VR estimates due to feedback and the change in AR estimates post VR. To our knowledge, this is the first study to show that affordance calibration does transfer from VR to AR. Our finding is consistent with Kondo et al. [30] who found that calibrated body rotation angles transferred from VR to the real world for a passability scenario. However, the VR condition in Kondo et al. [30] was not immersive VR, so participants were still able to view their body and the real rod they were wielding. We found that transfer from VR to AR occurred even in the absence of seeing the body during feedback. It is also possible that the mere experience of being in VR led to the improved passability judgments through more generalized adaptation effects, rather than the outcome feedback that was received. A future study could employ a control condition in which participants make judgments in VR but do not receive feedback to see if practice making judgments in VR improves judgments in AR. Regardless, the potential for transfer supports the utility of training action judgments in XR more broadly. Additional studies could examine whether training in VR also transfers to video see-through AR, mobile AR, or to the real world.
There are some limitations to the current study that should be considered for future work. As is apparent in the descriptions of the different XRs, there are notable differences between the AR and VR experiences. These include the presence of the real body in AR but no avatar in VR, as well as the full field-of-view visibility of the real pole in AR versus the reduced field-of-view visibility of the virtual pole in VR. The walls and environment were also slightly different across AR and VR. Future study of transfer of calibration effects across different realities will benefit from more closely equating the environments. Furthermore, we developed a feedback paradigm that was tightly controlled, requiring the pole to be held constantly at 45° and for participants to walk through every virtual aperture, even if they determined that they could not pass through, so that each participant received the same amount of feedback. This allowed us to more easily analyze calibration and transfer across participants. However, this type of task is not always realistic—in many situations, actors would be able to move an object or extension of their body to fit through and might choose not to pass through in some circumstances. Future investigations of feedback and transfer could consider more flexible and dynamic measures of actual walking through apertures to assess behavior.
Future work could also employ different forms of feedback and analyze their effect on affordance perception calibration for the current affordance and for others. Gagnon and colleagues [16] found that providing static outcome feedback calibrated judgments of passability in AR (when not wielding an object). Studies could examine how static outcome feedback calibrates passability judgments while holding an object in VR and whether the type of feedback affects calibration transfer between VR and AR and/or the real world. Static outcome feedback can be convenient for situations where dynamic movement is not feasible. Additionally, if static outcome feedback is sufficient for calibration to occur and transfer, this would suggest that receiving perceptual-motor and haptic collision cues are not driving the mechanism underlying passability affordance perception calibration.
Future research could also measure affordance judgments differently. Instead of participants providing a yes or no judgment, they could instead walk through the apertures, and the frequency with which they turn their body to pass could be measured. Another way to manipulate affordance judgments in future work could be to provide conflicting cues: for example, the virtual pole in VR could be rendered at a different size than the real pole. Would participants change their judgments to be more consistent with the VR pole if that is what the feedback was based upon? The research presented in the current study could also be applied to other affordances. For example, prior work shows that participants calibrate to altered reaching abilities in VR [1, 6, 8, 9, 10, 11, 33]. However, affordance perception for reaching toward AR objects has not yet been studied. The types of feedback and the time taken to calibrate other judgments will be important to understand in order to build the most effective training for affordance perception.

6 Conclusions

In summary, the presented work demonstrates that providing collision-based feedback in VR will calibrate judgments of passability when wielding an object, and that this calibration will transfer to AR. Further, the calibration happens with just a few trials of feedback. These results can inform the design of training applications where training needs to occur in VR but may be applied to a future AR setting. Future work should further explore perceptual calibration for other affordances and objects in virtual environments and the effects of providing different types of feedback on calibration and calibration transfer. The current results provide a first assessment of the transfer of training across XR environments, but more work will be needed to ascertain the generalizability of these findings to other affordances and environments as well as the specific factors during training that may change judgments.

Acknowledgments

We thank Josh Butner and Hunter Finney for building the VR program.

Footnote

1. One participant had an AR scaled difference score more than two standard deviations from the mean AR scaled score, and one participant had a VR scaled difference score more than two standard deviations from the mean VR scaled score. Removing these participants did not change the significance of the correlation, so they were retained in the analysis.

References

[1] Bliss M. Altenhoff, Phillip E. Napieralski, Lindsay O. Long, Jeffrey W. Bertrand, Christopher C. Pagano, Sabarish V. Babu, and Timothy A. Davis. 2012. Effects of calibration to visual and haptic feedback on near-field depth perception in an immersive virtual environment. In Proceedings of the ACM Symposium on Applied Perception. ACM, 71–78.
[2] Steven Arild Wuyts Andersen, Peter Trier Mikkelsen, Lars Konge, Per Cayé-Thomasen, and Mads Sølvsten Sørensen. 2016. Cognitive load in mastoidectomy skills training: Virtual reality simulation and traditional dissection compared. Journal of Surgical Education 73, 1 (2016), 45–50.
[3] Anna Berti and Francesca Frassinetti. 2000. When far becomes near: Remapping of space by tool use. Journal of Cognitive Neuroscience 12, 3 (2000), 415–420.
[4] Johanna Bertram, Johannes Moskaliuk, and Ulrike Cress. 2015. Virtual training: Making reality work? Computers in Human Behavior 43 (2015), 284–292.
[5] Ayush Bhargava, Roshan Venkatakrishnan, Rohith Venkatakrishnan, Hannah Solini, Kathryn M. Lucaites, Andrew Robb, Christopher Pagano, and Sabarish Babu. 2022. Did I hit the door? Effects of self-avatars and calibration in a person-plus-virtual-object system on perceived frontal passability in VR. IEEE Transactions on Visualization and Computer Graphics 28 (2022), 4198–4210.
[6] Brian Day, Elham Ebrahimi, Leah S. Hartman, Christopher C. Pagano, Andrew C. Robb, and Sabarish V. Babu. 2019. Examining the effects of altered avatars on perception-action in virtual reality. Journal of Experimental Psychology: Applied 25, 1 (2019), 1.
[7] Brian M. Day, Jeffrey B. Wagman, and Peter J. K. Smith. 2015. Perception of maximum stepping and leaping distance: Stepping affordances as a special case of leaping affordances. Acta Psychologica 158 (2015), 26–35.
[8] Elham Ebrahimi, Bliss Altenhoff, Leah Hartman, J. Adam Jones, Sabarish V. Babu, Christopher C. Pagano, and Timothy A. Davis. 2014. Effects of visual and proprioceptive information in visuo-motor calibration during a closed-loop physical reach task in immersive virtual environments. In Proceedings of the ACM Symposium on Applied Perception (SAP’14). ACM, New York, NY, 103–110.
[9] Elham Ebrahimi, Bliss M. Altenhoff, Christopher C. Pagano, and Sabarish V. Babu. 2015. Carryover effects of calibration to visual and proprioceptive information on near field distance judgments in 3D user interaction. In Proceedings of the 2015 IEEE Symposium on 3D User Interfaces (3DUI’15). 97–104.
[10] Elham Ebrahimi, Sabarish V. Babu, Christopher C. Pagano, and Sophie Jörg. 2016. An empirical evaluation of visuo-haptic feedback on physical reaching behaviors during 3D interaction in real and immersive virtual environments. ACM Transactions on Applied Perception 13, 4, Article 19 (July 2016), 21 pages.
[11] Elham Ebrahimi, Andrew Robb, Leah S. Hartman, Christopher C. Pagano, and Sabarish V. Babu. 2018. Effects of anthropomorphic fidelity of self-avatars on reach boundary estimation in immersive virtual environments. In Proceedings of the 15th ACM Symposium on Applied Perception. 1–8.
[12] John M. Franchak. 2017. Exploratory behaviors and recalibration: What processes are shared between functionally similar affordances? Attention, Perception, & Psychophysics 79 (2017), 1816–1829.
[13] John M. Franchak and Karen E. Adolph. 2014. Gut estimates: Pregnant women adapt to changing possibilities for squeezing through doorways. Attention, Perception, & Psychophysics 76, 2 (2014), 460–472.
[14] John M. Franchak and Frank A. Somoano. 2018. Rate of recalibration to changing affordances for squeezing through doorways reveals the role of feedback. Experimental Brain Research 236, 6 (2018), 1699–1711.
[15] John M. Franchak, Dina J. van der Zalm, and Karen E. Adolph. 2010. Learning by doing: Action performance facilitates affordance perception. Vision Research 50, 24 (2010), 2758–2765.
[16] Holly C. Gagnon, Dun Na, Keith Heiner, Jeanine Stefanucci, Sarah Creem-Regehr, and Bobby Bodenheimer. 2020. The role of viewing distance and feedback on affordance judgments in augmented reality. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR’20). 922–929.
[17] Holly C. Gagnon, Dun Na, Keith Heiner, Jeanine Stefanucci, Sarah Creem-Regehr, and Bobby Bodenheimer. 2021. Walking through walls: The effect of collision-based feedback on affordance judgments in augmented reality. In Proceedings of the 2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct’21). IEEE, 266–267.
[18] Holly C. Gagnon, Yu Zhao, Matthew Richardson, Grant D. Pointon, Jeanine Stefanucci, Sarah H. Creem-Regehr, and Bobby Bodenheimer. 2021. Gap affordance judgments in mixed reality: Testing the role of display weight and field of view. Frontiers in Virtual Reality 2 (2021), 22.
[19] Franck Ganier, Charlotte Hoareau, and Jacques Tisseau. 2014. Evaluation of procedural learning transfer from a virtual environment to a real situation: A case study on tank maintenance training. Ergonomics 57, 6 (2014), 828–843.
[20] James J. Gibson. 1966. The Senses Considered as Perceptual Systems. Houghton Mifflin, Boston, MA.
[21] James J. Gibson. 1979. The Ecological Approach to Visual Perception. Houghton Mifflin, Boston, MA.
[22] Mar Gonzalez-Franco, Rodrigo Pizarro, Julio Cermeron, Katie Li, Jacob Thorn, Windo Hutabarat, Ashutosh Tiwari, and Pablo Bermell-Garcia. 2017. Immersive mixed reality for manufacturing training. Frontiers in Robotics and AI 4 (2017), 3.
[23] Amy L. Hackney, Michael E. Cinelli, and Jim S. Frank. 2014. Is the critical point for aperture crossing adapted to the person-plus-object system? Journal of Motor Behavior 46, 5 (2014), 319–327.
[24] Peter A. Hancock. 2017. On bored to Mars. Journal of Astro-Sociology 2 (2017), 103–120.
[25] Takahiro Higuchi, Michael E. Cinelli, Michael A. Greig, and Aftab E. Patla. 2006. Locomotion through apertures when wider space for locomotion is necessary: Adaptation to artificially altered bodily states. Experimental Brain Research 175, 1 (2006), 50–59.
[26] Takahiro Higuchi, Hajime Takada, Yoshifusa Matsuura, and Kuniyasu Imanaka. 2004. Visual estimation of spatial requirements for locomotion in novice wheelchair users. Journal of Experimental Psychology: Applied 10, 1 (2004), 55.
[27] A. Hyltander, E. Liljegren, P. H. Rhodin, and H. Lönroth. 2002. The transfer of basic skills learned in a laparoscopic simulator to the operating room. Surgical Endoscopy and Other Interventional Techniques 16, 9 (2002), 1324–1328.
[28] Eunice Jun, Jeanine K. Stefanucci, Sarah H. Creem-Regehr, Michael N. Geuss, and William B. Thompson. 2015. Big foot: Using the size of a virtual foot to scale gap width. ACM Transactions on Applied Perception 12, 4 (2015), 16:1–16:12.
[29] Alexandra D. Kaplan, Jessica Cruit, Mica Endsley, Suzanne M. Beers, Ben D. Sawyer, and Peter A. Hancock. 2021. The effects of virtual reality, augmented reality, and mixed reality as training enhancement methods: A meta-analysis. Human Factors 63, 4 (2021), 706–726.
[30] Yuki Kondo, Kazunobu Fukuhara, Yuki Suda, and Takahiro Higuchi. 2021. Training older adults with virtual reality use to improve collision-avoidance behavior when walking through an aperture. Archives of Gerontology and Geriatrics 92 (2021), 104265.
[31] Florian Larrue, Hélène Sauzeon, Gregory Wallet, Déborah Foloppe, Jean-René Cazalets, Christian Gross, and Bernard N’Kaoua. 2014. Influence of body-centered information on the transfer of spatial learning from a virtual to a real environment. Journal of Cognitive Psychology 26, 8 (2014), 906–918.
[32] Jean-Claude Lepecq, Lionel Bringoux, Jean-Marie Pergandi, Thelma Coyle, and Daniel Mestre. 2009. Afforded actions as a behavioral assessment of physical presence in virtual environments. Virtual Reality 13, 3 (2009), 141–151.
[33] Lisa P. Y. Lin, Neil M. McLatchie, and Sally A. Linkenauger. 2020. The influence of perceptual–motor variability on the perception of action boundaries for reaching. Journal of Experimental Psychology: Human Perception and Performance 46, 5 (2020), 474.
[34] Sally A. Linkenauger, Markus Leyrer, Heinrich H. Bülthoff, and Betty J. Mohler. 2013. Welcome to wonderland: The influence of the size and shape of a virtual hand on the perceived size and shape of virtual objects. PLoS One 8, 7 (July 2013), e68594.
[35] Angelo Maravita and Atsushi Iriki. 2004. Tools for the body (schema). Trends in Cognitive Sciences 8, 2 (2004), 79–86.
[36] Leonard S. Mark, James A. Balliett, Kent D. Craver, Stephen D. Douglas, and Teresa Fox. 1990. What an actor must do in order to perceive the affordance for sitting. Ecological Psychology 2, 4 (1990), 325–366.
[37] Daniel R. Mestre, Céphise Louison, and Fabien Ferlay. 2016. The contribution of a virtual self and vibrotactile feedback to walking through virtual apertures. In International Conference on Human-Computer Interaction. Springer, 222–232.
[38] Paul Milgram and Fumio Kishino. 1994. A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems 77, 12 (1994), 1321–1329.
[39] Matthew N. Petrucci, Gavin P. Horn, Karl S. Rosengren, and Elizabeth T. Hsiao-Wecksler. 2016. Inaccuracy of affordance judgments for firefighters wearing personal protective equipment. Ecological Psychology 28, 2 (2016), 108–126.
[40] Grant Pointon, Chelsey Thompson, Sarah Creem-Regehr, Jeanine Stefanucci, and Bobby Bodenheimer. 2018. Affordances as a measure of perceptual fidelity in augmented reality. In Proceedings of the 2018 IEEE VR Workshop on Perceptual and Cognitive Issues in AR (PERCAR’18). 1–6.
[41] Grant Pointon, Chelsey Thompson, Sarah Creem-Regehr, Jeanine Stefanucci, Miti Joshi, Richard Paris, and Bobby Bodenheimer. 2018. Judging action capabilities in augmented reality. In Proceedings of the 15th ACM Symposium on Applied Perception. 1–8.
[42] Richard Skarbez, Missie Smith, and Mary C. Whitton. 2021. Revisiting Milgram and Kishino’s reality-virtuality continuum. Frontiers in Virtual Reality 2 (2021), 647997.
[43] Jeanine K. Stefanucci and Michael N. Geuss. 2009. Big people, little world: The body influences size perception. Perception 38, 12 (2009), 1782–1795.
[44] Jeanine K. Stefanucci and Michael N. Geuss. 2010. Duck! Scaling the height of a horizontal barrier to body height. Attention, Perception, & Psychophysics 72, 5 (2010), 1338–1349.
[45] Jeffrey B. Wagman and Kona R. Taylor. 2005. Perceiving affordances for aperture crossing for the person-plus-object system. Ecological Psychology 17, 2 (2005), 105–130.
[46] W. H. Warren and S. Whang. 1987. Visual guidance of walking through apertures: Body-scaled information for affordances. Journal of Experimental Psychology: Human Perception and Performance 13 (1987), 371–383.
[47] J. K. Witt, D. R. Proffitt, and W. Epstein. 2005. Tool use affects perceived distance, but only when you intend to use it. Journal of Experimental Psychology: Human Perception and Performance 31 (2005), 880–888.
[48] Hansen Wu, Haley Adams, Grant Pointon, Jeanine Stefanucci, Sarah Creem-Regehr, and Bobby Bodenheimer. 2019. Danger from the deep: A gap affordance study in augmented reality. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR’19). IEEE, 1775–1779.
[49] Biao Xie, Huimin Liu, Rawan Alghofaili, Yongqi Zhang, Yeling Jiang, Flavio Destri Lobo, Changyang Li, Wanwan Li, Haikun Huang, Mesut Akdere, Christos Mousas, and Lap-Fai Yu. 2021. A review on virtual reality skill training applications. Frontiers in Virtual Reality 2 (2021), 645153.
[50] Yu Zhao, Jeanine Stefanucci, Sarah H. Creem-Regehr, and Bobby Bodenheimer. 2021. The perception of affordances in mobile augmented reality. In Proceedings of the ACM Symposium on Applied Perception 2021. 1–10.
