
Designing Hand-held Controller-based Handshake Interaction in Social VR and Metaverse

Filippo Gabriele Pratticò, Control and Computer Engineering (DAUIN), Politecnico di Torino, Italy, filippogabriele.prattico@polito.it
Irene Checo, Politecnico di Torino, Italy, irene.checo@studenti.polito.it
Alessandro Visconti, Politecnico di Torino, Italy, alessandro.visconti@polito.it
Fabrizio Lamberti, Politecnico di Torino, Italy, fabrizio.lamberti@polito.it

This work presents four possible designs for the handshake interaction in a Social VR-like virtual environment in which the user operates using hand-held controllers: a first design based on a graphical user interface (GUI), a second design leveraging a physical button on hand-held controllers, and two designs based on recreating the handshaking gesture by grabbing the other party's hand and shaking it. The four designs were evaluated and compared through a user study involving 24 participants, analyzing factors pertaining to embodiment, presence and social presence, usability, and handshake quality of experience. Results indicated that the gesture-based design was preferred overall.

CCS Concepts: • Human-centered computing → Interaction paradigms; User studies; • Human-centered computing → Usability testing; Walkthrough evaluations; • Human-centered computing → Gestural input; Empirical studies in HCI; Empirical studies in collaborative and social computing; • Human-centered computing → Computer supported cooperative work;

Keywords: Non-verbal communication, handshaking, virtual environments.

ACM Reference Format:
Filippo Gabriele Pratticò, Irene Checo, Alessandro Visconti, Adalberto Simeone, and Fabrizio Lamberti. 2023. Designing Hand-held Controller-based Handshake Interaction in Social VR and Metaverse. In ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG '23), November 15–17, 2023, Rennes, France. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3623264.3624464

Figure 1: Example of handshake gesture performed with one of the interaction designs: a) user starting the handshake, b) user observing a request of handshake initiated by another party, c) handshake granted and feedback received.

1 INTRODUCTION AND BACKGROUND

The availability of affordable hardware products for immersive VR and the growing interest in the definition and development of the so-called Metaverse [Mann et al. 2023] are paving the way for the increasing prominence of Social VR platforms (like Facebook Horizon, Mozilla Hubs, Rec Room, VRChat, etc.) [Tanenbaum et al. 2020]. The purpose of these platforms is to enable people to communicate synchronously at a distance by embodying avatars in virtual environments (VEs) [Maloney et al. 2020; Sykownik and Masuch 2020; Tanenbaum et al. 2020]. With the aim of increasing the capabilities of Social VR platforms to foster social relationships, research efforts in the field have focused not only on verbal communication, but also on the nonverbal aspects of the experience [Imaizumi et al. 2022; Kasapakis et al. 2021; Sykownik and Masuch 2020]. Indeed, nonverbal communication and social touch are attributed a fundamental role in supporting meaningful interactions among humans [Tanenbaum et al. 2014; 2020]. These two components, however, are by no means easy to replicate or mediate in VEs [Sykownik and Masuch 2020]. In this respect, it is worth noting that some Social VR platforms already offer ways to implement nonverbal communication, e.g., by supporting handshaking, a symbolic gesture embedded in multiple cultures [Tanenbaum et al. 2014]. As a matter of fact, the act of handshaking can bond people more than the exchange of a few words [Burgoon and Walther 1990]. Despite this relevance, few studies have investigated handshaking and social touch in Social VR settings.

Mediated handshaking [Haans and IJsselsteijn 2006] has been studied in the context of communication between humans and robots [Wang et al. 2011], in video-based telepresence platforms [Nakanishi et al. 2014], and even in non-immersive VEs [Tanenbaum et al. 2014]. However, the findings obtained are neither immediate nor easy to transpose to the Social VR context. In fact, with handshaking robots and video-telepresence, a strong component of the experience was the kinesthetic haptic feedback. Furthermore, in the first case, users were found to experience heterogeneous levels of anthropomorphism, whereas in the second case, interaction was not mediated by an avatar. Avatar-mediated interaction was instead studied by Tanenbaum et al. [Tanenbaum et al. 2014], but the lower levels of embodiment that users perceive when interacting in a non-immersive VE might make the findings hard to generalize.

Recent studies are moving in the direction of filling this gap. For instance, Kasapakis et al. [Kasapakis et al. 2021] proposed a system that enables the use of handshaking and other nonverbal cues in VEs with a high degree of fidelity. Despite preliminary favourable results in terms of usability and user experience, the analysis still needs to be extended to a larger sample size. Moreover, the system relies on hand-tracking gloves, which are not yet widespread among everyday Social VR users. Sykownik and Masuch used a more conventional setup, leveraging just the hand-held controllers bundled with the headset, to investigate the impact of touch in the context of Social VR [Sykownik and Masuch 2020]. In particular, they studied how the building of intimacy is affected by different nonverbal gestures. Yet, handshaking was not included in the set of social gestures considered in the study.

The handshaking gesture in Social VR was instead the focus of the study by Imaizumi et al. [Imaizumi et al. 2022]. The authors showed that altering the hand colour of the user-controlled avatar (UCA) can be an effective way to provide pseudo-haptic feedback capable of conveying an illusion of temperature and improving the quality of the communication. This is particularly relevant when finger/hand-tracking is supported without relying on hand-held controllers (the UltraLeap hardware was used in the study), since in such a case there is no direct way to provide haptic feedback to the user's hands. Nevertheless, hand-held controllers are still the most common interface for experiencing Social VR, even though they trade sense of body ownership and naturalness for tracking robustness [Lin et al. 2019]. Finally, the most common implementation of handshaking in commercial Social VR platforms is based on gestures triggered by interacting with a GUI.

In this context, the present work proposes four possible handshake interaction designs based on the use of hand-held controllers: a first design based on a graphical user interface (GUI), a second design leveraging a physical button on the hand-held controllers, and two designs based on recreating the handshaking gesture by grabbing the other party's hand and shaking it. The four designs are also evaluated through a user study to investigate the impact of these different modalities on the user experience in the context of Social VR.

2 MATERIALS AND METHODS

This section describes the four variants of the handshake interaction that were proposed and evaluated. To perform the experimental analysis under controllable and repeatable conditions, a Social VR testbed scenario was arranged in which the users could experience all the variants with both non-player characters (NPCs) and UCAs.

2.1 Technology

The Social VR testbed was implemented using the Unity (v2021.3 LTS) game engine and its XR Interaction Toolkit plug-in. The VE was populated with 3D assets available free of charge and with custom ones modelled in Blender (v3.3 LTS). To let the users move in the VE, the joystick-based continuous locomotion method was used (as implemented in [Cannavò et al. 2020]). The Photon PUN 2 framework and its voice module were exploited to let multiple users operate together and talk in the VE.
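To give a concrete idea of how handshake events can be shared between clients in such a setup, the following minimal sketch propagates start/grant notifications through Photon PUN 2 RPCs. The class and method names are hypothetical, and this is not the authors' actual implementation:

```csharp
using Photon.Pun;
using UnityEngine;

// Hypothetical sketch: propagates handshake events to the other party via
// Photon PUN 2 RPCs (all names are illustrative only).
public class HandshakeSync : MonoBehaviourPun
{
    public void RequestStart()
    {
        // Tell the other clients that this avatar initiated a handshake.
        photonView.RPC(nameof(OnHandshakeStarted), RpcTarget.Others);
    }

    public void RequestGrant()
    {
        // Tell the initiator that the handshake was accepted.
        photonView.RPC(nameof(OnHandshakeGranted), RpcTarget.Others);
    }

    [PunRPC]
    private void OnHandshakeStarted()
    {
        Debug.Log("Other party started a handshake: show the grant/decline UI.");
    }

    [PunRPC]
    private void OnHandshakeGranted()
    {
        Debug.Log("Handshake granted: play the completion feedback.");
    }
}
```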

The experience was deployed using the OpenXR framework, targeting a Meta Quest 2 HMD (desktop-tethered via Air Link) and its bundled controllers, which were used to interact with the VE. This HMD features a display resolution of 1832 × 1920 pixels per eye, a horizontal FOV of about 90°, and a refresh rate of up to 120 Hz.

2.2 Testbed Scenario

The testbed scenario was designed with a low-poly/cartoonish aesthetic, taking inspiration from popular Social VR platforms like Mozilla Hubs, Roblox, Rec Room, etc. The VE layout was deliberately modelled on a little village square surrounded by shops (Fig. 2c).

Figure 2: Two NPCs in the VE (a)-(b), and schematic representation of the VE layout (c). The red star and circle indicate the position of the places shown in (a) and (b), respectively.

The avatars follow the same essential, minimalist style, with sphere-shaped heads, no body, and stylized hands (Figs. 1, 2a, 2b), so as to compel the user to focus on the gestural part of the handshaking experience.

2.3 Variants Design

In order to design the variants of the handshake interaction, the following phases of the gesture were identified and considered:

  • Handshake Start (S): One of the two parties gets close to the other and initiates the handshake, while the other party is notified. The first party can decide to wait for the other party to grant the handshake (next phase) or cancel it before it happens. The other party can decide to decline the invitation or accept it (next phase);
  • Handshake Grant (G): The other party accepts the invitation to handshake and completes it;
  • Handshake Feedback (F): Both parties receive feedback on the successfully completed handshake.

For the implementation, a distance of 1.5 m between the parties was set as the threshold within which a handshake can be started. If one of the parties moves outside this range, the handshake is automatically considered declined/cancelled. When designing the variants, the S and G phases were kept mutually coherent within each design, resulting in the four designs described below (an example is given in Fig. 1).
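A minimal sketch of the phase logic and the distance rule just described is given below; the structure and names are illustrative assumptions, not the authors' code:

```csharp
using UnityEngine;

// Hypothetical sketch of the handshake phases (S, G, F) and the 1.5 m rule:
// a handshake can only be started within the threshold distance, and it is
// cancelled automatically if the parties move apart.
public enum HandshakePhase { Idle, Started, Granted, Feedback }

public class HandshakeStateMachine : MonoBehaviour
{
    private const float ThresholdMeters = 1.5f;

    public Transform ownAvatar;   // this party's avatar
    public Transform otherAvatar; // the other party's avatar
    public HandshakePhase Phase { get; private set; } = HandshakePhase.Idle;

    private bool WithinRange() =>
        Vector3.Distance(ownAvatar.position, otherAvatar.position) <= ThresholdMeters;

    public void TryStart()
    {
        if (Phase == HandshakePhase.Idle && WithinRange())
            Phase = HandshakePhase.Started; // phase S: the other party is notified
    }

    public void Grant()
    {
        if (Phase == HandshakePhase.Started)
            Phase = HandshakePhase.Granted; // phase G: both parties then receive F
    }

    private void Update()
    {
        // Leaving the threshold range counts as declining/cancelling.
        if (Phase != HandshakePhase.Idle && !WithinRange())
            Phase = HandshakePhase.Idle;
    }
}
```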

  • Handshake 1 (HS1): this variant was designed following the most common mechanics in Social VR platforms, in which interaction with a user interface is used to S/G the handshake [Haans and IJsselsteijn 2006]. When the parties are within the handshake threshold distance, a floating menu is shown above the dominant hand of the user. The menu carries a button that can be pressed via ray-cast interaction to perform the S. The button turns into a cancel button for the party that initiated the handshake. Upon S, an animation of the first party's right hand extending and changing colour to light blue is played, and a menu with a grant button to be pressed is shown to the other party, under the head of the avatar that initiated the handshake. For F, a predefined animation is used in which both right hands get in touch and shake three times, accompanied by a sound effect.
  • Handshake 2 (HS2): this variant was designed by applying minimal modifications to HS1, potentially allowing higher comfort and efficiency. In particular, the GUI-based ray-cast interactions were replaced with interactions based on physical buttons on the controllers. The menus (S/G) show up as before, but the user can S by pressing and holding one of the top buttons on the controller. If the button is released, the handshake is cancelled. The other party can G by pressing the same button on the controller. F is as before.
  • Handshake 3 (HS3): this variant was designed with naturalness and fidelity of the handshake in mind, at the cost of potentially sacrificing learnability and control. The idea was to allow the user to directly grab the other party's hand to S. To perform the grabbing, the user is asked to reach the hand and then press and hold the grip button on the controller. As before, if the button is released, the handshake is cancelled. It is worth noting that, once the other party's hand is grabbed, the two users are provided with a discordant visualisation of the virtual hands. From the perspective of the S user, the grabbed hand appears to be the one controlled by the other user, but it is actually a placeholder. The other party still retains control over his/her own hand (seeing neither the placeholder nor his/her own hand as controlled by the S user), and is notified via a GUI (and via the extending blue hand animation of HS1) that it is possible to G. To G, the party must grab the S user's extended hand. This implementation was devised to provide a more polite experience and to reduce the risk of the G user perceiving lower control and embodiment. F was kept as before (the hands are already in the correct position for the animation), so as not to introduce further modifications in this variant.
  • Handshake 4 (HS4): this latter variant was implemented as HS3 but, with the aim of maximising naturalness and fidelity, F was modified by allowing the handshake to be performed freely, without a predefined animation. Specifically, G is slightly modified so that the handshake is considered granted not as soon as the hands are mutually grabbed, but when both users initiate a handshake gesture, i.e., moving the hand up and down while both parties press and hold the grip button (a possible detection heuristic is sketched after this list). F retains the sound effect notification, whereas the animation is replaced by the possibility for the parties to shake (i.e., control and move) their own hands at will while seeing the hand of the other avatar attached to their own. This variant is shown in Fig. 1.
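As an illustration of how the up-and-down motion of HS4 could be detected, the heuristic below counts vertical direction reversals of the hand while the grip button is held; all thresholds and names are hypothetical assumptions, not taken from the paper:

```csharp
using UnityEngine;

// Hypothetical shake-detection heuristic for HS4: while the grip button is
// held, count vertical direction reversals of the hand; a couple of
// reversals within a short time window are treated as a shake gesture.
public class ShakeDetector : MonoBehaviour
{
    public Transform hand;

    private const float MinDelta = 0.02f;   // metres of vertical travel to register movement
    private const int NeededReversals = 2;  // up-down-up counts as a shake
    private const float Window = 1.5f;      // seconds to accumulate the reversals

    private float lastY;
    private int lastDirection; // +1 moving up, -1 moving down, 0 unknown
    private int reversals;
    private float windowStart;

    private void Awake() => lastY = hand.position.y;

    public bool DetectShake(bool gripHeld)
    {
        if (!gripHeld) { reversals = 0; lastDirection = 0; return false; }

        float dy = hand.position.y - lastY;
        if (Mathf.Abs(dy) > MinDelta)
        {
            int direction = dy > 0 ? 1 : -1;
            if (lastDirection != 0 && direction != lastDirection)
            {
                if (reversals == 0) windowStart = Time.time;
                reversals++;
            }
            lastDirection = direction;
            lastY = hand.position.y;
        }

        // Forget stale reversals so slow drifting does not count as a shake.
        if (reversals > 0 && Time.time - windowStart > Window) reversals = 0;

        return reversals >= NeededReversals;
    }
}
```

Per the HS4 description, such a condition would have to hold for both parties (each press-holding the grip button) before the handshake is considered granted.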

Videos depicting the four handshakes from both parties’ perspectives are available as supplemental material.

3 EXPERIMENT

This section presents the exploratory user study that was run to compare and evaluate the four proposed variants.

3.1 Handshaking Activity

A handshaking activity was arranged in the testbed so that all the participants in the user study experienced all the possible actions of a given design. The participants were briefly instructed on the use of the variant using pre-made videos illustrating the functioning of the interaction. However, they were not allowed to practice with another avatar in this tutorial stage, since factors such as learnability and intuitiveness were going to be measured. After the tutorial, the participants were allowed to experience the handshake with both two NPCs and a UCA (operated by a confederate during the experimental evaluation). The two NPCs were programmed so that the first one (acting as the village's mayor) never starts a handshake but can grant it (or decline it, 25% of the times), whereas the other one (acting as a retailer) always starts the handshake and lets the user grant (or decline) it. Likewise, the UCA was operated to let the participants experience the whole spectrum of actions. Furthermore, it was decided that, once a handshake is successfully completed, the UCA moves away and comes back after 30 seconds. However, to prime the illusion that, each time, the participants are interacting with an avatar controlled by a different human being, the colour of the avatar changes (in a predefined order, identical for all the participants) at every new appearance.

The spectrum of actions (six), obtained by combining the possible events, is as follows: the participant starts and cancels the handshake before the UCA/NPC grants it; the participant starts and the UCA/NPC declines; the participant starts and the UCA/NPC grants (successful handshake); the UCA/NPC starts and cancels before the participant grants; the UCA/NPC starts and the participant declines; the UCA/NPC starts and the participant grants (successful handshake). It was decided to have the participants experience each action at least twice with both the NPCs and the UCA (six actions × two repetitions × three counterparts), for a total of 36 actions. The average time to complete the handshaking activity was about 12 minutes.

3.2 Experiment Design

The study was arranged following a within-subject design. Prior to performing the handshaking activity, the participants were allowed to familiarise themselves with locomotion in the VE using a sandbox scenario (the same used for the handshaking activity, but without other avatars). Afterwards, they were asked to perform the handshaking activity with the four variants as described in Section 3.1. A Latin square order of exposure was adopted to minimise possible biases and counterbalance learning effects. An a-priori power analysis was performed using the G*Power tool [Faul et al. 2009] to identify the required sample size. Setting α = 0.05 and aiming to detect at least a medium effect size (Cohen's f ≥ 0.25), it was found that a sample of 24 participants was adequate to reach a power of (1 − β) = 0.81 for the arranged study design [Cohen 1977]. Hence, 24 volunteers were recruited from the authors' network of contacts, as well as from among the staff and students at the authors' university.
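For reference, a balanced Latin square for four conditions (each condition appears once per position, and each ordered pair of adjacent conditions occurs exactly once across rows) can be generated with the classic alternating construction. The paper does not detail its assignment procedure, so the sketch below is purely illustrative:

```csharp
using System;

// Hypothetical sketch: rows of a balanced Latin square for an even number
// of conditions, using the 0, n-1, 1, n-2, ... interleaving plus a per-row
// offset. Participants would be assigned rows cyclically.
static class Counterbalance
{
    public static int[] RowFor(int row, int n)
    {
        var order = new int[n];
        for (int i = 0; i < n; i++)
        {
            int baseValue = (i % 2 == 0) ? i / 2 : n - (i + 1) / 2;
            order[i] = (baseValue + row) % n;
        }
        return order;
    }

    public static void Main()
    {
        string[] variants = { "HS1", "HS2", "HS3", "HS4" };
        for (int row = 0; row < 4; row++)
            Console.WriteLine(string.Join(" -> ",
                Array.ConvertAll(RowFor(row, 4), v => variants[v])));
        // Output:
        // HS1 -> HS4 -> HS2 -> HS3
        // HS2 -> HS1 -> HS3 -> HS4
        // HS3 -> HS2 -> HS4 -> HS1
        // HS4 -> HS3 -> HS1 -> HS2
    }
}
```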

3.2.1 Sample. A before-experience questionnaire (BEQ) was administered before starting the experiment, including items pertaining to demographics, previous knowledge of and expertise with technologies related to the experiment, and real-life handshaking attitude. According to the data collected, the sample was made of individuals aged between 21 and 35 ($\bar{x}=25.70$ y.o., s.d. = 2.95 y.o.); 25% were females, 75% males. Among them, 54% were moderately to very familiar with the use of immersive VR, whereas 46% were little to not at all familiar with it. Moreover, 30% of the sample reported having used a Social VR platform at least once in the last six months. Only 8% of the sample reported being moderately frightened or annoyed by the handshaking practice (in real life), and 70% were used to shaking hands when introducing themselves to someone else.

3.2.2 Measures. Subjective feedback was collected by means of a multi-section questionnaire (81 items), administered after experiencing each variant and structured as follows.

Embodiment. The influence of the variant on the embodiment level w.r.t. the user-controlled avatar was measured using the scale in [Peck and Gonzalez-Franco 2021], reduced to the overall, location, ownership, and agency factors.

Presence and Social Presence. The level of presence was measured using the igroup Presence Questionnaire (IPQ) [Schubert 2003], whereas social presence was measured with the Networked Minds Measure of Social presence (NMMS) [Harms and Biocca 2004].

Usability. Usability was measured from multiple perspectives. The System Usability Scale (SUS) [Brooke 1996] was used as an overall indicator. Furthermore, sections 1 and 8 of the VR-USE questionnaire [Kalawsky 1999] were used to assess the variants in terms of functionality, as well as of error correction and robustness. The related subscales (appropriateness, ease of use, intuitiveness, learnability, and system performance) were also analysed.
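For reference, the standard SUS scoring procedure (not restated in the paper) maps the ten 5-point items $s_1,\dots,s_{10}$ onto a 0-100 scale:

$$\mathrm{SUS} = 2.5\left[\sum_{i\in\{1,3,5,7,9\}}(s_i-1)+\sum_{i\in\{2,4,6,8,10\}}(5-s_i)\right]$$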

Handshake experience. A custom set of questions (available as supplemental material) was used to obtain information in terms of similarity with real-life handshakes. Specifically, it was measured whether the participants were able to tell who was leading the handshake, and whether the personality of the other avatar was perceived differently among the variants.

At the end of the experience, the participants were asked to explicitly rank the four variants, by specifying which of the handshakes was: easier to start, easier to complete, easier to cancel (when started by them), easier to cancel (when the other started), easier to decline. Also, a rank for the following dimensions was collected: naturalness, control over the task, least distracting, embodiment (of the hands), and overall preference.

3.3 Results and Discussion

The statistical significance of the results was investigated using the Friedman test with Conover correction, with the Wilcoxon signed-ranks test as post-hoc, by means of the Real Statistics tool (v7.3).
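For reference, the Friedman statistic for $n$ participants ranking $k$ conditions (ignoring tie corrections) is

$$\chi_F^2 = \frac{12}{nk(k+1)}\sum_{j=1}^{k}R_j^2 - 3n(k+1),$$

where $R_j$ is the sum of the ranks assigned to condition $j$; here $n=24$ participants and $k=4$ variants.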

Table 1: SUS score (c.i.), with Friedman and pairwise p-values (Cohen's d).

| HS1 | HS2 | HS3 | HS4 | Friedman | HS1/HS2 | HS1/HS3 | HS1/HS4 | HS2/HS3 | HS2/HS4 | HS3/HS4 |
|---|---|---|---|---|---|---|---|---|---|---|
| 68.44 (8.12) | 81.88 (5.38) | 89.17 (4.31) | 93.85 (3.02) | <.001 | <.001 (0.81) | <.001 (1.32) | <.001 (1.14) | .001 (0.62) | <.001 (1.14) | .009 (0.52) |
Figure 3: Bold text is for aggregate indicators and normal text is for subscales. Brackets are used to report significant differences with p-value ≤ 0.05.

The four variants were deemed comparable in terms of embodiment, presence, and social presence. No significant differences were found in the embodiment measure ($\bar{x}_{HS1-4}=4.95\pm 0.24$, p-value = .113), including the related subscales: body ownership ($\bar{x}_{HS1-4}=5.29\pm 0.24$, p-value = .112), body location ($\bar{x}_{HS1-4}=3.89\pm 0.43$, p-value = .392), and agency ($\bar{x}_{HS1-4}=5.69\pm 0.23$, p-value = .058). Also in terms of presence (IPQ), no significant differences were identified among the variants ($\bar{x}_{HS1-4}=4.33\pm 0.15$, p-value = .392). Social presence (NMMS) was found not significantly different overall ($\bar{x}_{HS1-4}=6.12\pm 0.09$, p-value = .401), as well as in its subscales: mutual awareness ($\bar{x}_{HS1-4}=6.79\pm 0.09$, p-value = .392), empathy ($\bar{x}_{HS1-4}=5.16\pm 0.21$, p-value = .194), behavioural interdependence ($\bar{x}_{HS1-4}=5.33\pm 0.27$, p-value = .598), and mutual assistance ($\bar{x}_{HS1-4}=6.75\pm 0.13$, p-value = .999). An exception is the dependent action subscale (p-value = .037, $\bar{x}_{HS1}=1.50\pm 0.29$, $\bar{x}_{HS2}=1.52\pm 0.30$, $\bar{x}_{HS3}=1.40\pm 0.27$, $\bar{x}_{HS4}=1.33\pm 0.26$), for which a small but significant difference was found for HS1/HS4 (p-value = .041, $\eta _{p}^{2}=0.28$) and HS2/HS4 (p-value = .041, $\eta _{p}^{2}=0.28$), indicating that HS4 was perceived to allow slightly more freedom compared to HS1 and HS2, for which, instead, a more fundamental role in completing the handshake interaction was attributed to the other avatar.

The results about usability were statistically significant for most of the factors investigated, highlighting an overall trend for scores, i.e., HS4 > HS3 > HS2 > HS1. This is shown by the SUS results in Table 1, which are confirmed by the overall usability measured through the aggregate scale of the VR-USE (Fig. 3a).

The functionality scores follow the same trend, whereas in terms of error correction and robustness significant differences were found only for HS4 against the other three variants, suggesting that the lack of the handshake animation could have influenced the perceived robustness of the HS4 variant. The animation had instead no significant effect on the appropriateness dimension. Ease of use and system performance followed the general trend, as did intuitiveness and learnability, for which, however, no significant differences were found between HS2 and HS3.

A more complete picture can be obtained by looking at the handshake experience (Fig. 3b). Although the general trend was confirmed for the overall score, fidelity, efficiency, and effect on bonding with the other avatar, it is interesting to notice that no significant differences were found in the perception of who was leading the handshake. Linked to this latter result is the score about the assessment of the other avatar's personality through the handshake: only HS1 was judged significantly worse than HS3 and HS4, which seems to indicate that the usage of the GUI could act as a communication barrier. Regarding comfort, as might be expected, no significant differences were observed between HS3 and HS4, which were nonetheless, somewhat surprisingly, rated as the best-performing variants in this regard.

Finally, by analysing the explicit ranks in Table 2 some additional considerations can be made.

Table 2: Subjective ranking of variants. Rank (median) per variant, with Friedman and pairwise p-values (Cohen's d).

| Item | Feature | HS1 | HS2 | HS3 | HS4 | Friedman | HS1/HS2 | HS1/HS3 | HS1/HS4 | HS2/HS3 | HS2/HS4 | HS3/HS4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| #1 | Easier to start | 4 | 3 | 2 | 1 | <.001 | <.001 (1.53) | <.001 (2.11) | <.001 (1.09) | .170 (0.50) | .007 (1.09) | .034 (0.60) |
| #2 | Easier to grant | 4 | 3 | 2 | 1 | <.001 | <.001 (1.87) | <.001 (2.59) | <.001 (1.16) | .024 (0.72) | .008 (1.16) | .098 (0.47) |
| #3 | Easier to cancel (when I started) | 4 | 3 | 1 | 1 | <.001 | <.001 (2.17) | <.001 (3.74) | <.001 (1.08) | .020 (0.62) | .003 (1.08) | .065 (0.56) |
| #4 | Easier to cancel (when other started) | 1 | 1 | 1 | 1 | .233 | - | - | - | - | - | - |
| #5 | Easier to avoid | 4 | 2 | 2 | 1 | .001 | .024 (0.55) | .014 (0.85) | .008 (0.69) | .094 (0.40) | .033 (0.69) | .059 (0.34) |
| #6 | Naturalness | 4 | 3 | 2 | 1 | <.001 | <.001 (2.67) | <.001 (7.72) | <.001 (3.76) | <.001 (2.02) | <.001 (3.76) | .001 (2.57) |
| #7 | Least distracting | 4 | 3 | 2 | 1 | <.001 | <.001 (2.56) | <.001 (4.56) | <.001 (2.06) | .009 (0.89) | <.001 (2.06) | .002 (1.29) |
| #8 | Control over the task | 4 | 3 | 2 | 1 | <.001 | <.001 (2.24) | <.001 (3.54) | <.001 (4.22) | .006 (1.20) | <.001 (4.22) | <.001 (2.37) |
| #9 | Embodiment of hands | 4 | 3 | 2 | 1 | <.001 | <.001 (0.88) | <.001 (1.65) | <.001 (2.24) | .009 (0.71) | <.001 (3.47) | <.001 (1.83) |
| #10 | Overall preference | 4 | 3 | 2 | 1 | <.001 | <.001 (3.01) | <.001 (5.58) | <.001 (3.47) | <.001 (1.71) | <.001 (3.47) | <.001 (1.83) |

For instance, the difference between HS3 and HS4 was not significant in the factors somehow related to the end of the handshake (#2, #3, #4, #5), which seems to indicate that the introduction of the different completion feedback did not play a role in the perception of the task completion. On the contrary, the lack of a predefined animation made HS4 be perceived as more controllable and less distracting, and associated with a higher level of embodiment (w.r.t. the hands) compared to HS3.

3.4 Remarks for Social VR Designers

The above analysis seems to indicate that designers of Social VR platforms should consider limiting the use of GUI-based handshake interaction methods in favour of the other approaches proposed. Thus, interaction based on the controllers' buttons (HS2, HS3, HS4) should be preferred over HS1 when possible. Even though the allocation of functions mapped to a given button would be more complicated in an actual implementation of a Social VR platform than in the testbed used for the current study, difficulties could be mitigated by adopting context-aware practices, e.g., by relying on the users' respective positions, or on interaction history (has a handshake ever been performed before between these two users?). Furthermore, the introduction of a natural gesture appears to contribute in a non-negligible way to the handshake experience. Hence, designers should consider limiting the use of predefined animations in favour of more unconstrained interaction, and possibly support gesture detection.

4 CONCLUSIONS

In this work, handshake interaction in a Social VR-like VE was investigated by proposing and evaluating four possible designs that exploit hand-held controllers as interaction means. In particular, embodiment, presence, social presence, usability, and handshake quality of experience dimensions were studied through an exploratory user study that involved 24 participants.

Experimental results indicated that the gesture-based design was preferred overall. Thus, it appears worth devoting future efforts to improving gesture recognition for this use case. Furthermore, it could be interesting to evaluate this controller-based design against a handshake interaction performed with the support of hand-tracking technology. A possible limitation of the work pertains to restricting the start of the interaction to explicit gestures only, even though it was in principle conceivable to design a variant that additionally supports other verbal or non-verbal cues to initiate the handshake (e.g., gaze interaction). Although such a variant was deemed out of the scope of the current work, it may be of interest to extend the study in this direction in the future.

ACKNOWLEDGMENTS

This research is supported by Internal Funds KU Leuven (HFGD8312-C14/20/078) and the VR@POLITO initiative.

REFERENCES

  • John Brooke. 1996. SUS: A ‘quick and dirty’ usability scale. Usability Evaluation in Industry (1996), 189. https://doi.org/10.1201/9781498710411-35
  • Judee K Burgoon and Joseph B Walther. 1990. Nonverbal expectancies and the evaluative consequences of violations. Human Communication Research 17, 2 (1990), 232–265. https://doi.org/10.1111/j.1468-2958.1990.tb00232.x
  • Alberto Cannavò, Davide Calandra, F Gabriele Pratticò, Valentina Gatteschi, and Fabrizio Lamberti. 2020. An evaluation testbed for locomotion in virtual reality. IEEE Transactions on Visualization and Computer Graphics 27, 3 (2020), 1871–1889. https://doi.org/10.1109/TVCG.2020.3032440
  • Jacob Cohen. 1977. Statistical Power Analysis for the Behavioral Sciences (2 ed.). Academic Press. https://doi.org/10.1016/B978-0-12-179060-8.50006-2
  • Franz Faul, Edgar Erdfelder, Axel Buchner, and Albert-Georg Lang. 2009. Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods 41, 4 (2009), 1149–1160.
  • Antal Haans and Wijnand IJsselsteijn. 2006. Mediated social touch: a review of current research and future directions. Virtual Reality 9 (2006), 149–159. https://doi.org/10.1007/s10055-005-0014-2
  • Chad Harms and Frank Biocca. 2004. Internal consistency and reliability of the networked minds measure of social presence. In 7th Annual Int. Workshop: Presence.
  • Shota Imaizumi, Hayata Saito, Masaki Kitajima, Kei Kanari, and Mie Sato. 2022. Effects of Hand Color Change on User's Psychological State During a Pseudo Handshake. In 2022 Int. Conf. on Cyberworlds (CW). IEEE, 126–129. https://doi.org/10.1109/CW55638.2022.00028
  • Roy S Kalawsky. 1999. VRUSE—a computerised diagnostic tool: for usability evaluation of virtual/synthetic environment systems. Applied Ergonomics 30, 1 (1999), 11–25. https://doi.org/10.1016/S0003-6870(98)00047-7
  • Vlasios Kasapakis, Elena Dzardanova, Vasiliki Nikolakopoulou, Spyros Vosinakis, Ioannis Xenakis, and Damianos Gavalas. 2021. Social Virtual Reality: Implementing non-verbal cues in remote synchronous communication. In Virtual Reality and Mixed Reality: 18th EuroXR Int. Conf., EuroXR 2021, Milan, Italy, November 24–26, 2021, Proceedings 18. Springer, 152–157. https://doi.org/10.1007/978-3-030-90739-6_10
  • Lorraine Lin, Aline Normoyle, Alexandra Adkins, Yu Sun, Andrew Robb, Yuting Ye, Massimiliano Di Luca, and Sophie Jörg. 2019. The effect of hand size and interaction modality on the virtual hand illusion. In 2019 IEEE Conf. on Virtual Reality and 3D User Interfaces (VR). IEEE, 510–518. https://doi.org/10.1109/VR.2019.8797787
  • Divine Maloney, Guo Freeman, and Donghee Yvette Wohn. 2020. "Talking without a Voice": Understanding Non-Verbal Communication in Social Virtual Reality. Proc. ACM Hum.-Comput. Interact. 4, CSCW2, Article 175 (2020), 25 pages. https://doi.org/10.1145/3415246
  • Steve Mann, Yu Yuan, Fabrizio Lamberti, Abdulmotaleb El Saddik, Ruck Thawonmas, and Filippo Gabriele Pratticò. 2023. eXtended meta-uni-omni-Verse (XV): Introduction, Taxonomy, and State-of-the-Art. IEEE Consumer Electronics Magazine (2023), 1–9. https://doi.org/10.1109/MCE.2023.3283728
  • Hideyuki Nakanishi, Kazuaki Tanaka, and Yuya Wada. 2014. Remote handshaking: touch enhances video-mediated social telepresence. In Proc. of the SIGCHI Conf. on human factors in computing systems. 2143–2152. https://doi.org/10.1145/2556288.2557169
  • Tabitha Peck and Mar Gonzalez-Franco. 2021. Avatar Embodiment: A Standardized Questionnaire. Frontiers in Virtual Reality 1 (2021). https://doi.org/10.3389/frvir.2020.575943
  • Thomas W Schubert. 2003. The sense of presence in virtual environments: A three-component scale measuring spatial presence, involvement, and realness. Z. für Medienpsychologie 15, 2 (2003), 69–71. https://doi.org/10.1026/1617-6383.15.2.69
  • Philipp Sykownik and Maic Masuch. 2020. The Experience of Social Touch in Multi-User Virtual Reality. In Proc. of the 26th ACM Symp. on Virtual Reality Software and Technology (VRST). Article 30, 11 pages. https://doi.org/10.1145/3385956.3418944
  • Joshua Tanenbaum, Magy Seif El-Nasr, and Michael Nixon. 2014. Nonverbal Communication in Virtual Worlds. ETC Press Pittsburgh, PA.
  • Theresa Jean Tanenbaum, Nazely Hartoonian, and Jeffrey Bryan. 2020. "How Do I Make This Thing Smile?": An Inventory of Expressive Nonverbal Communication in Commercial Social Virtual Reality Platforms. In Proc. of the 2020 CHI Conf. on Human Factors in Computing Systems. ACM, 1–13. https://doi.org/10.1145/3313831.3376606
  • Zheng Wang, Elias Giannopoulos, Mel Slater, and Angelika Peer. 2011. Handshake: Realistic human-robot interaction in haptic enhanced virtual reality. Presence 20, 4 (2011), 371–392. https://doi.org/10.1162/PRES_a_00061

This work is licensed under a Creative Commons Attribution International 4.0 License.

MIG '23, November 15–17, 2023, Rennes, France

© 2023 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0393-5/23/11.
DOI: https://doi.org/10.1145/3623264.3624464