Research Article (Open Access) · CHI Conference Proceedings · DOI: 10.1145/3613904.3642425

ShareYourReality: Investigating Haptic Feedback and Agency in Virtual Avatar Co-embodiment

Published: 11 May 2024

Abstract

    Virtual co-embodiment enables two users to share a single avatar in Virtual Reality (VR). During such experiences, the illusion of shared motion control can break during joint-action activities, highlighting the need for position-aware feedback mechanisms. Drawing on the perceptual crossing paradigm, we explore how haptics can enable non-verbal coordination between co-embodied participants. In a within-subjects study (20 participant pairs), we examined the effects of vibrotactile haptic feedback (None, Present) and avatar control distribution (25-75%, 50-50%, 75-25%) across two VR reaching tasks (Targeted, Free-choice) on participants’ Sense of Agency (SoA), co-presence, body ownership, and motion synchrony. We found (a) lower SoA in the free-choice task with haptics than without, (b) higher SoA during the shared targeted task, (c) significantly higher co-presence and body ownership in the free-choice task, and (d) greater synchronization of players’ hand motions in the targeted task. We provide cautionary considerations for including haptic feedback mechanisms in avatar co-embodiment experiences.

    1 Introduction

    Virtual Reality (VR) technologies enable people not only to immerse themselves in artificial digital worlds, but also to engage in previously impossible interactions that challenge our assumptions about our virtual bodies and social coordination processes in such virtual spaces. The proliferation of consumer head-mounted displays (HMDs) has made VR systems an increasingly common platform through which virtual social interactions can take place [16, 45]. VR enables meeting in a shared, immersive virtual environment [20, 34], and interacting with virtual representations of human avatars and virtual agents. Social interactions are a key factor in VR – not only to prevent isolation of individuals in the virtual environment [49], but also to enable joint social activities and interactions [48, 58] in and beyond social VR platforms such as VRChat and Rec Room [42].
    Figure 1: Two users co-embodying virtual hands performing two types of joint action tasks in a shared virtual reality environment.
    Despite the myriad ways that social VR platforms can implement multi-user functionality, the most common method today is to give each user their own individual avatar to navigate the virtual environment. Typically, such interactions adopt a first-person perspective, which is an important contributing factor to creating a sense of body ownership (i.e., the experience of having a virtual body [27]) and agency (i.e., a sense of control over a virtual body [4]) towards the user’s virtual avatar. Extending beyond common social VR interactions, researchers have recently explored the concept of “virtual co-embodiment” [11, 15], where two users embody a single, shared avatar. This inherently differs from a shared visual experience, where multiple users would only share the same viewing perspective [66]. Virtual co-embodiment offers a multi-user experience characterized by shared control over the avatar’s movement. Since two or more people share control over the avatar, social coordination increases [11]. Such ‘fusionary’ interactions are a component of what is dubbed the “JIZAI Body” [21]: the concept by which a computer-mediated human body can seamlessly adapt to social structure changes, such that any additions or alterations (virtual or physical) would feel as much one’s own as the original body. This relates to recent efforts toward sensible human-computer integration [7, 40], which extends the notion of cybernetics [62]. Indeed, experiments have shown that participants who co-embody a virtual avatar report high levels of perceived control even with lower levels of actual control [17, 28]. This enhanced perception of control can be useful for rehabilitation specialists and support personnel who use immersive technology to treat the physical and cognitive function of individuals affected by stroke [37] and dementia [69], respectively. The shared control of virtual co-embodiment provides an assistive methodology to improve accessibility for these vulnerable individuals during the setup and navigation of virtual environments. In stroke rehabilitation, co-embodiment can also be incorporated alongside techniques such as Mirror Therapy [36] to enrich motor support in the VR system by adding shared (human) assistance that adapts to each individual’s needs.
    Research has shown that individuals can adapt to different media to achieve their communication goals [43], where the degree of embodiment and the time required for such embodiment may vary depending on the visual and visuo-motor consistency of the artificial (virtual) and biological body [1, 10, 25]. Furthermore, when another entity that is sentient or appears to be sentient is present in the same environment, another dimension called ‘social presence’ comes into play [42, 54]. The degree of experienced social presence depends on the person’s perceived ability to access another individual’s intentions, intelligence, and sensory impressions. Immersive qualities, contextual properties, and individual differences can predict the extent to which social presence is experienced by users in VR [42]. Without a sufficient level of social presence, the other entity is perceived as artificial and not as an intentional social being [42]. Without such perceived intentionality, shared and collaborative tasks become difficult, making a high sense of social presence vital for a smooth collaborative experience during virtual co-embodiment.
    However, a specific challenge in co-embodiment is that visual feedback of the shared avatar’s combined motion alone does not fully convey the partner’s intentions, because the shared avatar’s motion is partially determined by one’s own motion. Virtual co-embodiment leads to a situation where self-presence intermingles with social presence: the identity of the self is intrinsically linked to the avatar and to the presence of another intentional subject. This intentionality is translated through the amount of control available to each person over the shared avatar. While researchers have explored virtual co-embodiment in relation to users’ perception of their embodiment of the shared avatar [11, 15] and its use for motor skill learning [28, 29], the role of social presence and its influence on social coordination within this context has not yet been fully characterized. Extensions of this concept by Hapuarachchi and Kitazaki [18] and Hapuarachchi et al. [17] have highlighted the need for non-verbal communication mechanisms to be implemented in the co-embodiment paradigm to enhance the sense of embodiment towards the co-embodied avatar. Indeed, during such co-embodiment experiences, the illusion of shared motion control can be coupled with user communication techniques that support coordination, highlighting the need for position-aware feedback mechanisms [59]. Haptic feedback may be especially appropriate here because it can positively influence the experience of social presence [42]. For this, we draw on Auvray et al. [2]’s haptic ‘perceptual crossing paradigm’, which was conceived to study social interaction dynamics in real time through tactile sensorimotor interactions [30]. Perceptual crossing refers to situations where two perceptual activities of the same kind meet each other, such as when two people catch each other’s eye (joint gaze and attention [56]), or mutual social touch [50]. This paradigm offers a foundation for building systems that enable the study of factors involved in mutual recognition between people in remote interactions. Given the focus of co-embodiment scenarios on a sense of shared control with a remotely located other person, we believe the perceptual crossing paradigm lends itself well to studying sensorimotor interactions in co-embodied VR.
    In this paper, we draw on the perceptual crossing paradigm [2] to explore how haptic feedback can be integrated into a virtual co-embodiment scenario, where pairs of participants share control over a virtual hand. Building on the idea of haptic perceptual crossing, we implemented haptic feedback (on/off) when users’ hand positions overlap in virtual space. We examine how haptic cues can enable awareness of each other and coordination between co-embodied participants. We ask: (RQ) How do haptic feedback mechanisms and varied avatar control distribution influence users’ sense of agency, co-presence, body ownership, and motion synchrony in targeted and free-choice virtual avatar co-embodiment tasks? In a controlled, within-subjects study with 20 participant pairs, we examined the effects of positional haptic feedback (None, Present) and avatar control distribution (25-75%, 50-50%, 75-25%) across two cube selection tasks (Targeted, Free-choice) on participants’ sense of agency (SoA), co-presence, body ownership, and motion synchrony. Our findings showed (a) a lower sense of agency in the free-choice task with haptics compared to no haptic feedback, (b) higher agency during the shared targeted task, (c) significantly higher co-presence and embodiment in the task with multiple targets, and (d) greater synchronization of players’ hand motions in the targeted task.
    Our exploratory work offers two primary contributions: (1) We integrate the concept of perceptual crossing into the paradigm of virtual co-embodiment to enable position-aware haptic non-verbal communication cues between two users; (2) We provide empirically backed insights showing the influence of haptics on the perceived sense of agency during targeted and free-choice selection tasks under variable control of shared virtual hands.
    Figure 2: Illustration of the weighted average virtual co-embodiment method, where the motion of the shared avatar (center) is generated by taking the weighted average of the motion of User 1 (left) and User 2 (right).

    2 Background and Related Work

    In this section, we describe prior work on virtual co-embodiment, control sharing techniques, haptics integrated into co-embodiment experiences, and the perceptual crossing paradigm.

    2.1 Virtual avatar co-embodiment and the Sense of Agency

    Virtual co-embodiment refers to occurrences where multiple users can simultaneously interact with the virtual environment using a shared avatar. Given that two or more individuals share control over the avatar, social coordination increases [11]. To this end, prior work has shown that participants who co-embody a virtual avatar reported high levels of perceived control even when their weighted percentage of actual control was low [17, 28]. This makes it a promising tool for VR-based rehabilitation [23] and training [14, 19] applications, since a learner with low control can feel a stronger sense of agency (SoA) while performing the activity with a teacher who has high control. One domain in which researchers are trying to leverage the immersive capabilities of VR is the support and treatment of dementia patients [69] – these individuals are considered vulnerable and typically find it challenging to operate basic VR controls [31]. In such cases, co-embodiment can enable assistive accessibility for these individuals, guided by support personnel. This would enable them to not only train, but also maintain a high level of agency during such immersive experiences. Furthermore, VR has shown potential for its ability to stimulate Mirror Neurons (MNs) of the internal sensorimotor system of stroke patients [5]. In such settings, patients are immersed in training scenarios in virtual environments that involve executing motor actions, such as observing and visualizing mirror limb movements with the intent to imitate these actions. These scenarios have shown enhanced MN activation, leading to faster post-stroke recovery [37]. To that end, co-embodiment can be further leveraged within these techniques to improve the effectiveness of such treatments through enhanced agency and nuanced control over movements executed by the patients, actively guided by their trainers or caregivers.
    In the context of avatar co-embodiment, the ‘Sense Of Embodiment’ (SoE) can be manifested through three main components: Sense of Self-Location, Sense of Body Ownership, and the Sense of Agency [27]. Sense of Self-Location refers to the feeling of ‘being inside’ a virtual body, while sense of body ownership and agency refers to the feeling of ‘having’ and ‘controlling’ the virtual body, respectively. Studies have explored various factors and their influence on these components, and have shown that manipulations of the overall SoE are possible through changes in avatar representations, degree of control, and perspective of the users [10, 41]. Similarly, the influence of sharing the virtual body with another user and its effect on SoE was studied in experiments of virtual co-embodiment. Here, the sense of agency and body ownership play a pivotal role that determines the engagement level during the shared perceptual activity [11, 15, 17, 18, 29].

    2.2 Avatar co-embodiment control sharing techniques

    To realize such co-embodiment, the motion of a shared avatar has previously been generated using two techniques: the weighted average co-embodiment method [11, 15, 28] and the body-part-segmented co-embodiment method [17, 18]. The weighted-average method involves assigning a weight between 0 and 100 percent to each user and generating the movement of the shared avatar by interpolating the weighted average of the real-time position and orientation of both users’ controllers (Figure 2) [11, 15, 28]. The body-part-segmented co-embodiment method is a technique where independent limbs of a shared avatar are each controlled by a different user [17, 18]. In this paper we focus on the first method, where shared interactions can be manipulated by both users and their influence is determined by the percentage of control they possess. Since we focus on position-aware feedback mechanisms, we draw on the weighted-average method to enable this. Results from [11, 15, 28] all showed that the sense of agency increased with the participant’s control weight during avatar co-embodiment. In all these studies, participants could coordinate their movements in joint action, leading to the sharing of motor intention and synchronization. In a follow-up study, Kodama et al. [29] evaluated participants’ task performance and motor skill learning ability. They concluded that learning using virtual co-embodiment was more efficient than the perspective-sharing method, in which a translucent teacher avatar is superimposed on the learner’s first-person view. However, contrary to the previous studies, no significant differences were observed between the different control weight conditions. We draw on the design considerations from [11, 15, 28] to design our study protocol.

    2.3 Haptics for virtual co-embodied experiences

    Since the early days of VR, haptic feedback has been a central component in many VR systems [55] and has been used to enable a diversity of touch-based interactions in VR [60], with the most common types of haptic feedback in VR applications being vibrotactile and force feedback [65]. Studies that have explored haptics as a communication medium in the context of shared virtual spaces report enhanced user experiences [24, 68]. Important to our present purposes, the addition of haptic feedback to social VR has been found to consistently enhance perceived social presence [42]. In work on co-embodiment, however, the application of haptic feedback remains largely understudied. Hapuarachchi and Kitazaki [18] explored the manipulation of the sense of agency by providing visual feedback of the partner’s target during co-embodiment, and Hapuarachchi et al. [17] implemented passive haptics by attaching a back brace to both users, allowing them to maintain consistent shoulder posture while controlling the shared avatar using the body-part-segmented co-embodiment method. These explorations highlight the value of identifying what types of feedback modalities can be integrated into the virtual co-embodiment paradigm to provide users with advanced perceptual capabilities. While visual feedback offers more information to the user, it leads to cluttered, chaotic experiences when scaled up. Haptic feedback thus provides an alternative that can overcome this limitation. The challenge in the context of co-embodiment is to design the feedback mechanism in a way that does not increase the cognitive load required to differentiate between interaction with the environment and the presence of the other user. Given the foregoing, we implement haptic feedback as a communication medium to indicate the other user’s position during co-embodiment.

    2.4 Perceptual crossing paradigm for enhancing social coordination

    The perceptual crossing paradigm [2] was conceived to study social interaction dynamics in real time through tactile sensorimotor interactions [30]. The classical paradigm features a minimalist 1D environment – a line that loops around, creating a continuous interaction space not visible to the users. Two users are each represented in the virtual space by an avatar (a dot) that they can control using a standard computer mouse. When their avatar encounters a virtual object in the 1D space, they receive haptic feedback. There are three types of objects in the environment: a static object, the other user’s avatar, and the other user’s ‘shadow,’ an object that moves with its owner’s avatar at a set distance. All objects have exactly the same size and produce the same haptic feedback when encountered. When one user encounters the shadow of the other user, only the interacting user receives haptic feedback, while the other user (to whom the shadow belongs) does not. The only condition in which both users receive haptic feedback simultaneously is when their avatars encounter each other. Users are tasked with clicking the mouse when they think they are interacting with the avatar of the other user. In the original studies [2, 3], users were successfully able to locate each other in the 1D virtual space. However, the probability of clicking when encountering another user’s avatar was not significantly higher than when encountering another user’s shadow. Successful identification of the other could only be explained by the stability of mutual recognition; users would encounter one another, move back, encounter each other again, and repeat, creating an oscillating movement pattern of repeated encounters. In other words, users would only successfully recognize each other during perceptual crossing (i.e., when perceptual activities of the same kind meet each other).
    Extensions of the basic paradigm have shown that, in a team-based version of the paradigm where users were instructed to collaborate, participants successfully identified the other’s avatar and, for those encounters, reported the clearest awareness of the others’ presence [12]. A version of the paradigm that used a following task in a skewed 1D environment highlighted that users were successful in following each other’s movements through haptic perceptual crossing [32]. Though part of the strength of the perceptual crossing paradigm lies in the minimalist approach, 2D extensions have already been successful [33]. To the best of our knowledge, no 3D implementations of the paradigm have ever been attempted. We see an interesting opportunity in the implementation of the paradigm’s basic premise (i.e., haptic feedback upon contact in a virtual space to signify the presence of the other) as an interactive cue that could aid movement coordination as well as enhance perceived social presence in virtual co-embodiment scenarios.
    Figure 3: Diagram illustrating the different phases of the study procedure, along with textual labels explaining each component.

    3 Methods

    In this section, we describe our research methodology, including the study design, experimental protocol, objective and subjective measures, our hardware and software setup, study procedure, and participant sample.

    3.1 Study design

    Our study has two main Independent Variables (IVs), and follows a 3 (IV1: Control Distribution: 25-75% vs. 50-50% vs. 75-25%) x 2 (IV2: Haptic Feedback: None vs. Present) within-subjects design, tested in a controlled virtual environment. The control distribution consisted of three weightings of Player 1’s and Player 2’s control over the shared avatar: 25-75%, 50-50%, and 75-25% (referred to as W25, W50, W75). There was either no haptic feedback (NH) or haptic feedback when participants’ hands overlapped (H) in the virtual space. This interaction was designed using virtual spheres (radius 8 cm), approximately the size of the virtual hand mesh attached to the controllers in the virtual environment. When the two users’ spheres intersected, haptic feedback was triggered for both users (see the sketch below). The study was divided into three phases: Training, Task 1, and Task 2 (Figure 3). There were six distinct conditions (3 control distributions x 2 haptic feedback) for each task performed by a pair of participants. Each condition was repeated twice in each of the two tasks, bringing the total to 24 trials (6 conditions x 2 tasks x 2 repetitions) for the entire study. Participants’ subjective responses on the sense of agency were collected with a questionnaire after each trial, while the co-presence and embodiment questionnaires were administered after each haptic feedback condition / block (after six trials). Task 1 was always performed before Task 2, and the two haptic feedback conditions were counterbalanced according to a Latin square design, such that the starting trials across sessions covered all possible combinations of haptic and control conditions, with the remaining trials randomized to mitigate order effects. The study was designed such that a sample consists of a pair of participants. For example, in a session, Participant 1 would perform the task with 25% control four times (4x), twice with and twice without haptic feedback, while their counterpart had 75% control (also performed 4x); similarly for 50% (4x) and 75% (4x). Therefore, for 25%, 50%, and 75% control, there were four samples for each task (twice with and twice without haptic feedback).
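    To make the trigger concrete, the following minimal Python sketch shows the sphere-overlap test described above (the study itself implemented this logic in Unreal Engine); the controller vibration call is a hypothetical stand-in, not the actual API.

```python
import numpy as np

HAND_RADIUS = 0.08  # 8 cm spheres approximating the virtual hand mesh

def hands_overlap(pos1, pos2, radius=HAND_RADIUS):
    """True when the two users' hand spheres intersect in virtual space."""
    return np.linalg.norm(np.asarray(pos1) - np.asarray(pos2)) <= 2 * radius

def update_haptics(pos1, pos2, haptics_enabled, controllers):
    """Per-frame check: vibrate both controllers while the spheres
    intersect (H condition); never vibrate in the NH condition."""
    on = haptics_enabled and hands_overlap(pos1, pos2)
    for controller in controllers:
        controller.set_vibration(on)  # hypothetical controller interface
```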

    3.2 Protocol

    3.2.1 Joint action reaching tasks.

    The most common method to evaluate virtual co-embodiment is a reaching task, in which participants touch an object such as a cube [15] or sphere [11, 18] using a shared avatar. This task typically focuses only on participants’ motion, as adding further interactions (e.g., button presses) can increase task complexity, which may render the task unsuitable for studying shared control. Fribourg et al. [11] introduced a reaching task for three scenarios: free, target, and trajectory. During the free task, each participant was free to choose any sphere to touch, while in the target task the sphere to be touched was highlighted. The trajectory task involved following a particular path before touching a highlighted sphere, and focused more on precision. To help answer our RQ, we need to better understand the influence of movement freedom and intention on the level of embodiment (sense of agency and body ownership) over the shared avatar using haptics. Therefore, we implemented two reaching tasks: targeted (Task 1) and free-choice (Task 2), which we describe below.

    3.2.2 Training.

    In the training phase, the basic controls of our VR system were explained to each participant, including how to use the controller buttons to interact with widgets in the scene. Afterward, each participant performed an individual training trial, which showed them how to complete Task 1. Since the training session was performed individually, no haptic feedback was provided, as it can only occur in the later, co-embodied part of the study.

    3.2.3 Task 1: Targeted.

    In Task 1, participants used the shared right hand of the avatar to touch a cube that spawned in their field of view (Figure 4(a)). Once the shared hand collided with the cube, the cube was removed. After a one-second delay, another cube spawned at a pseudo-randomized location (Figure 4(b)). The location was pseudo-randomized, rather than pre-generated, to minimize learning effects. The delay provided a small reset time for the participants to avoid physical and cognitive fatigue. A spatial chime sound originated from the cube’s spawn location to indicate where the new cube appeared, since it is difficult for users to notice a cube that spawns outside their field of view. Participants had to touch the cube a total of 17 times in each trial during Task 1.
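    As an illustration, the trial logic can be sketched as a simple loop; all helper objects below (spawn-region sampler, cube, chime) are hypothetical stand-ins for the Unreal Engine implementation, and the one-second delay follows the description above.

```python
import time

NUM_TOUCHES = 17       # cube touches required per Task 1 trial
RESPAWN_DELAY_S = 1.0  # short reset time between cubes

def run_targeted_trial(spawn_region, shared_hand, spawn_cube, play_chime_at):
    """Minimal loop for one targeted-task trial (hypothetical helpers)."""
    for _ in range(NUM_TOUCHES):
        location = spawn_region.sample()  # pseudo-randomized spawn point
        cube = spawn_cube(location)
        play_chime_at(location)           # spatial cue for off-view spawns
        while not shared_hand.collides_with(cube):
            time.sleep(0.01)              # poll for the collision
        cube.remove()
        time.sleep(RESPAWN_DELAY_S)       # reset time to limit fatigue
```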
    Figure 4: First-person perspective of cube interaction in the targeted task.

    3.2.4 Task 2: Free-choice.

    In Task 2, five cubes spawned in front of the participants, who had to move the shared hand to touch any of them to progress (Figure 5). When the shared hand collided with any of the cubes, all the cubes were removed, and after a one-second delay, all the cubes re-spawned in the same positions. In each trial of Task 2, participants had to touch one of the five cubes a total of five times to proceed. This task was designed to simulate a scenario where participants would have to collaboratively choose the co-embodied movement without verbal communication. This provided a suitable scenario to investigate whether the position-aware feedback mechanism modeled on the perceptual crossing paradigm would enable the co-embodied users to work together.
    Figure 5: First-person perspective of cube interaction in the free-choice task.

    3.3 Measures

    3.3.1 Objective measures.

    The orientation (roll, pitch, and yaw) and position (X, Y, Z coordinates) of both participants’ HMDs and controllers were recorded at the application’s default sampling rate of 70 Hz during the entire session. Additionally, the start and end time of each trial and the duration of overlap of the participants’ right hands were recorded.

    3.3.2 Subjective measures.

    Participants filled in the Simulator Sickness Questionnaire (SSQ) [26] before and after the study. Additionally, participants filled in the Igroup Presence Questionnaire (IPQ) [51] at the end of the study.
    During Task 1 and Task 2, participants used the Oculus motion controllers to provide a Likert-scale rating ranging from “not at all” (1) to “fully in control” (7) for the question “How much do you feel in control?” after each trial, measuring their subjective “Sense of Agency” over the shared avatar. These questions were embedded as panels in VR, allowing participants to stay immersed in the VR experience [47]. After each haptic feedback condition, participants answered three questions about their “sense of co-presence” and three questions about their “sense of body ownership”, taken from standard questionnaires of co-presence [46] and avatar embodiment [44]. Given that these questions belong to two different questionnaires, we calculated the reliability scores separately. These six questions were selected based on their relevance to the study design, while reducing participants’ workload and the total session time compared to using the full questionnaires.
    Co-presence (CP) questionnaire (Cronbach’s α =0.87)
    (1)
    I felt that I was in the presence of the other person
    (2)
    I felt that the other person and I were together in the same space
    (3)
    I felt that the other person responded to shifts in my movement (e.g., posture, position)
    Body Ownership (BO) questionnaire (Cronbach’s α =0.58)
    (1)
    I felt as if my (real) hands were drifting toward the virtual hands or as if the virtual hands were drifting toward my (real) hands
    (2)
    I felt as if the movements of the virtual hands were influencing my own movements
    (3)
    At some point, it felt as if my real hands were starting to take on the posture or shape of the virtual hands that I saw

    3.4 Hardware and software setup

    Participants performed the study using Oculus Quest 2 Head-Mounted Displays (HMDs) and Oculus Touch VR motion controllers connected to desktop computers. These computers ran the virtual environment we created using Unreal Engine 5.1, and were connected via Ethernet to ensure minimal latency between the computers. One computer hosted a local server, while the second computer joined this server as a client. Each computer recorded the rotation and position of its respective user. To create a co-embodied avatar, the level spawns a “shared hands” avatar in the virtual world. This virtual representation was chosen to model a gender-neutral representation of hands (cf., [52]). Since the controller was not represented in virtual space, we did not pose the hand as if wrapped around the controller, and instead showed a default open-palm pose. To determine the position of each of the shared hands, the avatar linearly interpolates between the positions of User 1 and User 2, expressed by the following equation:
    \(x_{\mathrm{fusion}} = \alpha \, x_{\mathrm{user1}} + (1-\alpha) \, x_{\mathrm{user2}}, \quad 0 < \alpha < 1\)  (1)
    where α controls the interpolation such that the resulting position is 100% of Player 1’s position when α is 1 and 100% of Player 2’s position when α is 0. This value can be set to vary the control over the shared hands in each level to W25/W50/W75, creating the conditions outlined in the study design.
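    As a minimal illustration of Equation (1), the sketch below computes the fused hand position in Python (the study implemented this in Unreal Engine); positions are treated as 3D vectors.

```python
import numpy as np

def fused_position(x_user1, x_user2, alpha):
    """Weighted-average co-embodiment (Equation 1): alpha = 1 would give
    User 1 full control, alpha = 0 would give User 2 full control."""
    assert 0.0 < alpha < 1.0  # constraint from Equation (1)
    return alpha * np.asarray(x_user1) + (1.0 - alpha) * np.asarray(x_user2)

# Example: the W75 condition, where User 1 holds 75% of the control.
shared = fused_position([0.10, 1.20, 0.40], [0.30, 1.00, 0.50], alpha=0.75)
print(shared)  # [0.15, 1.15, 0.425]
```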
    Figure 6: Waveforms of four vibration patterns used in the pre-study (created in Unreal Engine): Intermittent (top left), Sinusoidal (top right), Heartbeat (bottom left) and Constant (bottom right).

    3.4.1 Haptic feedback design.

    Previous experiments by Wentzel et al. [61] tested techniques to modulate the amplification levels of vibrations and found that amplification impacted user comfort. Since our study is modelled on the perceptual crossing paradigm [2], which uses simple ON/OFF feedback for interactions between participants, we likewise set the haptic feedback to ON when participants’ hands were in the same virtual position and OFF otherwise. No additional hardware was used for sophisticated vibrotactile feedback cues, as the scope of this study was limited to the standard Oculus motion controllers. We therefore conducted a pre-study to make an informed decision on the intensity and pattern of the vibrations. Fifteen participants (Mean age = 25.66, SD = 2.09) tested four common vibration patterns [53]: Intermittent, Sinusoidal, Heartbeat, and Constant (Figure 6), in combination with four intensity levels: 10, 20, 30, and 40. The intermittent pattern consisted of two vibration occurrences every second, the sinusoidal had one occurrence per second, the heartbeat pattern consisted of two short occurrences followed by a pause every second, and the constant pattern had a continuous vibration throughout. These were tested in VR using the same motion controllers that would be used for the main study. While receiving haptic feedback, participants rated their perceived comfort on a Likert scale ranging from very uncomfortable (1) to very comfortable (7), and their perceived intensity ranging from very calm (1) to very intense (7).
    Table 1:

    Pattern type | Pattern intensity | Perceived (mean) intensity | Perceived (mean) comfort
    Intermittent | 10 | 3.2 | 4.4
    Intermittent | 20 | 3.6 | 4.4
    Intermittent | 30 | 4.5 | 3.8
    Intermittent | 40 | 5.2 | 3.4
    Heartbeat | 10 | 1.8 | 5.2
    Heartbeat | 20 | 3.3 | 4.2
    Heartbeat | 30 | 3.4 | 3.9
    Heartbeat | 40 | 4.0 | 3.8
    Sinusoidal | 10 | 3.4 | 5.1
    Sinusoidal | 20 | 4.2 | 4.4
    Sinusoidal | 30 | 5.4 | 3.8
    Sinusoidal | 40 | 4.2 | 3.6
    Constant | 10 | 3.8 | 4.1
    Constant | 20 | 5.3 | 3.2
    Constant | 30 | 6.6 | 2.4
    Constant | 40 | 6.7 | 2.2

    Table 1: Results of our pre-study on perceived mean intensity and comfort for 16 haptic patterns. The mean perceived intensity ratings range between 1.8 and 6.7 (where 1 is very calm and 7 very intense), and the mean perceived comfort ratings range between 2.2 and 5.2 (where 1 is very uncomfortable and 7 very comfortable). The chosen variant, the sinusoidal pattern with intensity 20, received a mean intensity rating of 4.2 and a comfort rating of 4.4.
    As Table 1 shows, high-intensity vibrations (30, 40) had lower comfort ratings overall. The highest comfort rating was given to the heartbeat and sinusoidal patterns at intensity 10. However, this intensity level was hardly noticeable to some participants. The sinusoidal pattern at intensity 20 provided a balanced level of intensity (mean score = 4.266) while still being comfortable (mean score = 4.4). Therefore, we chose this variant of the haptic feedback for our main study.
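    For illustration, a plausible reconstruction of the chosen stimulus as an amplitude envelope is sketched below; the exact waveform shape and the meaning of the intensity units are specific to Unreal Engine's haptic assets, so this is an assumption rather than the study's actual asset.

```python
import numpy as np

def sinusoidal_envelope(duration_s, intensity=20, sample_hz=100):
    """Assumed amplitude envelope for the chosen pattern: one sinusoidal
    vibration occurrence per second at intensity 20 (device-specific units)."""
    t = np.arange(0.0, duration_s, 1.0 / sample_hz)
    return intensity * 0.5 * (1.0 - np.cos(2.0 * np.pi * t))  # one cycle/second
```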

    3.5 Study procedure

    At the study location, a table was placed where participants would fill out all the forms: demographics, informed consent, pre- and post-study SSQ, and IPQ. Two computers were placed side by side on a separate table and were connected to HMDs (Figure 7). One participant was randomly assigned to the computer acting as User 1, and the other to the computer acting as User 2. A video camera was also in place to record both participants’ motions while they performed the tasks. A video showing this interaction is provided in Supplementary Material A. The position in which both participants would stand was marked on the floor. To reduce any possibility of injury and to simplify the interactions, participants conducted the trials while standing, and only used their right hand for the motion task. The spatial chime sound was channeled through the HMD speakers, set at a comfortable 60% volume.
    Figure 7: An image of the study setup showing two users performing a task, with the two computer screens showing each user’s perspective view.
    Upon their arrival, participants were asked to read and sign the informed consent form and fill in a pre-study SSQ. Similar to Fribourg et al. [11], participants were briefed that they would be sharing the avatar during all trials and instructed to avoid communicating with each other verbally. No instructions were provided regarding the haptic feedback, allowing participants to interpret the meaning of the vibrotactile cue when it occurred during the tasks. This was done to evaluate the effectiveness of the chosen vibration pattern in establishing synchronization patterns between participants autonomously, as in the original perceptual crossing experiments [2, 3]. Participants provided subjective ratings on their sense of agency, co-presence, and body ownership after each set of trials using questionnaires presented inside the virtual environment. Each participant performed 25 trials (including repetitions) and answered 32 questions during the study (24 Sense of Agency + 4 Co-presence + 4 Body ownership). At the end of the study, participants filled in the post-study SSQ along with the IPQ. Finally, a semi-structured interview was conducted with both participants, which lasted around 15 minutes. During the interviews, we asked participants about their overall impression of the study, their perceptions of the shared motion and the haptic feedback, their impressions regarding the two tasks, and provocations regarding further use cases of virtual co-embodiment. The complete interview guide is provided in Supplementary Material B. Sessions lasted an average of 60 minutes, of which the within-VR portion took approximately 25 minutes. Each participant was compensated with a €/$10 gift voucher. Our study received approval from our institute’s ethics and data protection committee, and we followed prevailing hygiene guidelines (cf., COVID-19 regulations).

    3.5.1 Participants.

    Twenty pairs of participants (40 people; 23 female, 17 male) were recruited (M = 25.95 years, SD = 2.59), primarily from the first author’s university. All were right-handed. Fifteen participants reported no prior VR experience, 17 reported being novice users (having used VR at least once), and eight reported occasional VR use. Of the 20 pairs, three were couples, 12 were friends, and five did not know each other (i.e., strangers). There were three male-male pairs (two friend pairs, one stranger pair), six female-female pairs (five friend pairs, one stranger pair), and eleven mixed male-female pairs (three couples, six friend pairs, two stranger pairs).

    4 Analysis and Results

    We adopted a mixed-methods approach for analysis, meaning that the results of the quantitative analysis are interpreted alongside the qualitative analysis to explain the phenomena observed.

    4.1 Pre-processing and analysis approach

    4.1.1 Synchrony measure.

    Users’ hand motion data were re-sampled to 100 Hz. We cleaned the data by removing missing values, NAs (Not Available) due to logging errors, and duplicates. This resulted in the removal of 11,679 records, leaving a final dataset of 880,549 records. Several measures of inter-personal synchrony exist, from dyadic synchrony in VR (Sun et al. [57]) to breathing synchrony (El Ali et al. [8]). Given our dataset, we analyze joint motion synchrony by adapting Sun et al.’s [57] approach, performing the following steps to obtain our synchrony measures: the extracted <X, Y, Z> positional movement data was used to calculate the distance moved between consecutive timestamps for each participant. We calculated the Euclidean distance of each participant’s movement by squaring the difference between consecutive positions in each direction (X, Y, and Z), then taking the square root of the sum of these squared differences. The timestamp intervals considered for this calculation are short (on the order of milliseconds); therefore, any repeated movements (left and right) that occur over an interval will be captured, and will differ from a continuous motion in a single direction.
    We then computed the rolling Spearman correlation between each participant’s summed (right-hand) Euclidean movement. Since our Euclidean measures were not normally distributed, we used a rolling Spearman’s Rank Correlation Coefficient with a window size of 450 samples to compare the two movement series. The mean of the rolling Spearman’s Rank Correlation Coefficient was then calculated for all 24 trials across the 20 sessions.
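    The following Python sketch illustrates this pipeline (per-sample Euclidean displacement, then a rolling Spearman correlation over a 450-sample window); the variable names and data layout are assumptions, not the study's actual analysis code.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def step_distances(positions):
    """Euclidean distance moved between consecutive <X, Y, Z> samples."""
    deltas = np.diff(np.asarray(positions), axis=0)
    return np.sqrt((deltas ** 2).sum(axis=1))

def rolling_spearman(x, y, window=450):
    """Rolling Spearman rank correlation between two displacement series."""
    x, y = pd.Series(x), pd.Series(y)
    return x.rolling(window).apply(
        lambda w: spearmanr(w, y.loc[w.index])[0], raw=False)

# Hypothetical usage with both users' resampled (100 Hz) right-hand positions:
# sync = rolling_spearman(step_distances(p1_pos), step_distances(p2_pos))
# trial_score = np.nanmean(sync)  # mean rolling correlation for the trial
```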

    4.1.2 Statistical analysis approach.

    The combined effects of task, control, and haptics on participants’ subjective ratings of perceived Sense of Agency (SoA), co-presence, and body ownership, and on the mean Spearman’s Rank Correlation Coefficient, were analyzed by fitting a full mixed-effects model for each dataset. First, the normality of the data was tested using the Shapiro-Wilk test. Results for all dependent variables showed that the data distribution significantly deviated from normality (p < 0.05). Therefore, aligned rank transforms were applied to the data before model fitting [63]. Holm-Bonferroni corrections were applied, and contrast tests were conducted using ART-C [9]. The results of the analysis of variance for all response variables are provided in Table 2.

    4.2 Quantitative results

    Table 2:

    Response Variable | Factor | Level | Mean | Median | SD | F | df | p | \(\eta_{p}^{2}\)
    Sense of Agency | Task | Task 1 | 4.53 | 5.00 | 1.44 | 172.02 | 1 | <.000*** | 0.16
     | | Task 2 | 3.45 | 3.00 | 1.45 | | | |
     | Haptics | On | 3.86 | 4.00 | 1.57 | 12.93 | 1 | <.000*** | 0.01
     | | Off | 4.12 | 4.00 | 1.5 | | | |
     | Control | W25 | 4.05 | 4.00 | 1.53 | 0.24 | 2 | 0.78 | 0
     | | W50 | 3.97 | 4.00 | 1.54 | | | |
     | | W75 | 3.96 | 4.00 | 1.56 | | | |
     | Task x Haptics | - | - | - | - | 13.61 | 1 | <.000*** | 0.01
     | Task x Control | - | - | - | - | 0.30 | 2 | 0.74 | 0
     | Haptics x Control | - | - | - | - | 0.52 | 2 | 0.59 | 0
     | Task x Haptics x Control | - | - | - | - | 0.05 | 2 | 0.95 | 0
    Co-presence 1 | Task | Task 1 | 4.30 | 5.00 | 2.00 | 26.35 | 1 | <.000*** | 0.18
     | | Task 2 | 5.35 | 6.00 | 1.64 | | | |
     | Haptics | On | 4.84 | 5.00 | 1.88 | 0.24 | 1 | 0.62 | 0
     | | Off | 4.81 | 5.00 | 1.93 | | | |
     | Task x Haptics | - | - | - | - | 0.80 | 1 | 0.37 | 0.01
    Co-presence 2 | Task | Task 1 | 3.54 | 3.00 | 1.92 | 34.38 | 1 | <.000*** | 0.23
     | | Task 2 | 4.71 | 5.00 | 1.81 | | | |
     | Haptics | On | 4.15 | 4.00 | 1.95 | 0.03 | 1 | 0.86 | 0
     | | Off | 4.10 | 4.00 | 1.96 | | | |
     | Task x Haptics | - | - | - | - | 2.31 | 1 | 0.13 | 0.02
    Co-presence 3 | Task | Task 1 | 3.68 | 4.00 | 1.80 | 28.28 | 1 | <.000*** | 0.19
     | | Task 2 | 4.80 | 5.00 | 1.76 | | | |
     | Haptics | On | 4.09 | 4.00 | 1.86 | 1.24 | 1 | 0.27 | 0.01
     | | Off | 4.39 | 5.00 | 1.86 | | | |
     | Task x Haptics | - | - | - | - | 0.51 | 1 | 0.48 | 0
    Body Ownership 1 | Task | Task 1 | 5.29 | 5.00 | 1.08 | 1.72 | 1 | 0.19 | 0.01
     | | Task 2 | 4.93 | 5.00 | 1.45 | | | |
     | Haptics | On | 5.00 | 5.00 | 1.27 | 1.42 | 1 | 0.24 | 0.01
     | | Off | 5.21 | 5.00 | 1.30 | | | |
     | Task x Haptics | - | - | - | - | 0.22 | 1 | 0.64 | 0
    Body Ownership 2 | Task | Task 1 | 5.15 | 5.00 | 1.30 | 0.38 | 1 | <.001** | 0.08
     | | Task 2 | 5.55 | 6.00 | 1.47 | | | |
     | Haptics | On | 5.30 | 6.00 | 1.30 | 1.00 | 1 | 0.32 | 0.01
     | | Off | 5.40 | 6.00 | 1.51 | | | |
     | Task x Haptics | - | - | - | - | 1.31 | 1 | 0.26 | 0.01
    Body Ownership 3 | Task | Task 1 | 4.43 | 4.50 | 1.40 | 0.99 | 1 | 0.32 | 0.01
     | | Task 2 | 4.23 | 4.00 | 1.58 | | | |
     | Haptics | On | 4.31 | 4.00 | 1.45 | 0.01 | 1 | 0.94 | 0.00
     | | Off | 4.34 | 4.50 | 1.54 | | | |
     | Task x Haptics | - | - | - | - | 0.09 | 1 | 0.77 | 0
    Mean Rolling Spearman’s Rank Correlation Coefficient | Task | Task 1 | 0.39 | 0.41 | 0.21 | 57.15 | 1 | <.000*** | 0.11
     | | Task 2 | 0.27 | 0.25 | 0.20 | | | |
     | Haptics | On | 0.34 | 0.36 | 0.20 | 2.02 | 1 | 0.15 | 0
     | | Off | 0.32 | 0.33 | 0.22 | | | |
     | Control | W25 | 0.34 | 0.37 | 0.21 | 2.96 | 1 | 0.08 | 0
     | | W50 | 0.31 | 0.28 | 0.21 | | | |
     | | W75 | 0.34 | 0.36 | 0.22 | | | |
     | Task x Haptics | - | - | - | - | 1.10 | 1 | 0.29 | 0
     | Task x Control | - | - | - | - | 0.04 | 1 | 0.83 | 0
     | Haptics x Control | - | - | - | - | 0.25 | 1 | 0.61 | 0
     | Task x Haptics x Control | - | - | - | - | 2.36 | 1 | 0.12 | 0
    Table 2: Analysis of Deviance on the full mixed-effects model for Sense of Agency (SoA), co-presence, body ownership, and mean rolling Spearman Rank Correlation using Aligned Rank Transformed data. For Sense of Agency, the model shows significance for the factors Task, Haptics, and the interaction of Task and Haptics (***p<0.001). For Co-presence 1, Co-presence 2, and Co-presence 3, the model shows significance only for the Task factor (***p<0.001). For Body Ownership 2, the model shows significance only for the Task factor (**p<0.01), and for the mean rolling Spearman’s rank correlation coefficient, the model shows significance only for the Task factor (***p<0.001).

    4.2.1 Sense of Agency.

    The analysis of the Sense of Agency (SoA) ratings is shown as boxplots in Figure 8(a), where lines with asterisks indicate pairwise, Holm-Bonferroni corrected, significance. A full mixed-effects model showed significance for Task and Haptics (p < 0.001). Significant interaction effects were also found between Task and Haptics (p < 0.001). Contrast tests for the main effect of Task revealed that responses were significantly higher in Task 1 compared to Task 2. Moreover, the contrast test for Haptics revealed that participants’ feelings of control were significantly greater in conditions without haptic feedback than in conditions with haptic feedback. The contrast test on the interaction effects between Task and Haptics showed significant differences across all levels except between the Task 1 no-haptics and Task 1 haptics conditions (p = 0.219).
    The comparison of the reported SoA with the actual control that participants had over the shared avatar is shown as boxplots in Figure 8(b). Participants tended to overestimate, rating higher SoA when they had only 25% control (Md=5, M=4.47, SD=1.4) or 50% control (Md=5, M=4.49, SD=1.45) in Task 1. However, with 75% control (Md=4, M=4.48, SD=1.47) in Task 1, participants felt a lower level of control over the shared avatar. We observe contrasting results for Task 2, where the 25% control condition (Md=3, M=3.39, SD=1.42) was rated lowest, followed by the 50% control condition (Md=4, M=3.45, SD=1.44). Notably, with 75% control (Md=3.5, M=3.43, SD=1.5), participants rated lower SoA in Task 2 compared to Task 1.
    Given that the SoA measure was the only time series measurement we collected, we further conducted temporal analysis to assess whether these ratings changed across trials. While there were some changes in the responses (indicating participants were not randomly assigning ratings), the low correlations and lack of visible patterns did not warrant further statistical analysis. We provide this analysis (SoA rating plots over trial and correlation plot) in Supplementary Material C.
    Figure 8: Sense of agency responses to the question “How much do you feel in control?” asked after every trial during the study.

    4.2.2 Co-presence.

    Participant ratings of the co-presence questionnaire are visualized as boxplots in Figure 9, where lines with asterisks indicate pairwise Holm-Bonferroni-corrected significance. A full mixed-effects model showed significance only for Task across all three responses. No significant interaction effects were found. Contrast tests showed that co-presence ratings were significantly higher in Task 2 than in Task 1 for Co-presence 1, Co-presence 2, and Co-presence 3.
    Figure 9: Co-presence (CP) questionnaire responses.

    4.2.3 Body ownership.

    Analysis of participant ratings for the body ownership questionnaire is visualized as boxplots in Figure 10(a), where lines with asterisks indicate pairwise Holm-Bonferroni-corrected significance. A full mixed-effects model showed significance only for Task for Body Ownership 2 responses. No significant interaction effects were found. Contrast tests showed that Body Ownership 2 ratings were significantly higher in Task 2 than in Task 1.
    Figure 10: Body ownership (BO) questionnaire responses.

    4.2.4 Controller motion synchronization.

    The analysis of participants’ controller motion synchronization is visualized as a time-series plot in Figure 11. A full mixed-effects model showed significance for Task. No significant interaction effects were found. Contrast tests for the main effect of Task revealed that motion synchronization was significantly higher in Task 1 than in Task 2.
    Figure 11: Motion controller synchronization (using rolling Spearman’s Rank Correlation with 450 samples) between users. The plot shows synchrony across trials for each session.

    4.2.5 IPQ Presence ratings.

    The IPQ [51] uses a 7-point Likert scale ranging from -3 to 3, which was transformed to a scale of 1 to 7 during analysis. Results for each presence factor within the IPQ reveal that participants experienced high levels of Involvement (M=5.08, SD=1.49) and Spatial Presence (M=4.29, SD=1.74), but only neutral levels of General Presence (M=3.52, SD=1.78) and Realism (M=3.52, SD=1.92).

    4.2.6 SSQ Motion sickness.

    Participants’ motion sickness was measured before and after the study using the SSQ [26]. A Wilcoxon signed-rank test was conducted since the data were not normally distributed. Results showed significant differences between the pre-study (Md=1.125, IQR=0.31) and post-study (Md=1.28, IQR=0.39) scores (Z=-4.03, p<0.01, r=-0.63), indicating that participants did experience motion sickness during the study, even if only slightly.
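    As a minimal sketch, such a comparison can be run with SciPy's paired Wilcoxon signed-rank test; the scores below are placeholders, not the study's data.

```python
from scipy.stats import wilcoxon

# Placeholder paired SSQ scores (one pre- and post-study value per participant).
pre = [1.00, 1.12, 1.25, 1.06, 1.31, 1.12, 1.19, 1.00]
post = [1.19, 1.31, 1.25, 1.12, 1.50, 1.25, 1.38, 1.19]

stat, p = wilcoxon(pre, post)  # non-parametric test for paired samples
print(f"W = {stat}, p = {p:.4f}")
```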

    4.3 Qualitative results

    We used an inductive thematic analysis [6] approach. First, the lead author extensively reviewed the interview transcripts and recorded videos, generating initial codes and themes. Then, the other authors reviewed the codes and themes for consistency and offered additional themes as needed. Quotes are attributed to participants by indicating which pair (P1-P20) they belonged to, followed by the specific participant (PN-1 or PN-2) where appropriate.

    4.3.1 Frustration with shared decisions.

    Participants found performing Task 1 more comfortable than Task 2, stating that “I didn’t feel much [shared control] in the beginning but in the second task with the choice it felt horrible” [P10-2]. In fact, many participants expressed that, without a shared goal, they often “[...] thought that I’m not controlling and somebody’s here to control the hands, and it made me a bit angry” [P15-2]. Overall, not only did participants feel more at ease with sharing the motion when a common target was presented, but the addition of free choices in Task 2 added confusion and frustration.

    4.3.2 Perception of shared motion.

    Only about half of the participants (21/40) were consciously aware that the motion of the avatar was shared between them and their partner during Task 1, while the rest expressed that it only became evident to them during Task 2, when differences in choices emerged between them and their partners. Participants attributed the differences between their motion and that of the avatar to “glitches” or “delays”, rather than to the input of the other person in the pair. For instance, [P8-1] mentioned, “In the beginning, it felt like the hand was not working well”, and [P16-2] remarked, “I saw this (movement), and I thought it was an algorithm”. Importantly, during Task 2, when participants felt a diminished sense of agency or their partners’ movements were not well coordinated, they exaggerated their movements to compensate for a perceived lack of responsiveness in the shared motion: “When I moved my hand, I noticed the hand didn’t move that much, so to compensate for it, I had to reach out more” [P8-1]. This was particularly evident when there was a substantial height difference between the paired participants, where the taller participant’s extended reach created an imbalance of control, resulting in a frustrating shared-motion experience for the partner with less reach.

    4.3.3 Following and leading.

    Participants spontaneously took on a more follower or leader role during the trials. While some participants focused on actively following their partners’ movements, aiming to coordinate their actions better, others took over the lead by misbehaving and “[...] trying to check control by doing the opposite movement”[P3-2], so that the share of control was more apparent. During Task 2, this difference was more evident, with followers expressing that since, “[...]I had no control, I thought I would follow whatever pattern in movement the other person was doing” [P11-1]. On the other hand, leaders used this ambiguity by moving their hands dramatically to shift the shared hand, regardless of the amount of control they had over the avatar: “I can sort of limit other persons’ actions and actually feel more in control” [P14-1]. Additionally, five participant pairs noted that the relationship with their partners also influenced the degree of co-operation they were inclined to achieve. For example, pairs that knew each other [P7] mentioned that they would be more attentive to the other person’s movement had it been with an unfamiliar person.

    4.3.4 Motion synchrony.

    A high level of motion synchronization was observed during Task 1; participants started the study with distinct motions, which eventually joined when one participant began mimicking the hand motion of the other. Participants made similar observations, referring to these synchronizations as “rhythms” or “flows”. For example, [P1-1] mentioned: “after a few rounds it felt like we were getting into this rhythm,” and [P2-2] stated: “I started with arc motion and [P2-1] was doing a different motion, then [P2-1] started moving with arc motion”.

    4.3.5 Perception of vibration patterns and associations.

    Several participants did not fully grasp during the study that haptic feedback would occur when their hands overlapped with their partner’s, while others inferred negative associations with the haptic feedback, based on their prior experience with vibration feedback patterns. For example, participants expressed that they interpreted the haptics as hostile: “I thought maybe I was wrong that’s why the vibrations are coming to push me in another direction” [P6-2], or that the haptic feedback was “[...] very random, like it was malfunctioning” [P7-2]. Participants who viewed the haptics as positive feedback tended to associate it with video games: “I play the Nintendo Switch, and if you win in the game, it will have vibration” [P18-2].

    5 Discussion

    Below we discuss our study limitations and future work, and thereafter discuss our key findings by interpreting and synthesizing the results of our quantitative and qualitative analysis.

    5.1 Study limitations and future work

    First, we tested only a subset of questions from common co-presence and embodiment questionnaires – while additional measures could shed further light on the experience of virtual co-embodiment, this was a deliberate design choice to ensure users did not experience fatigue and overload during the study. Second, while we closely followed Jeunet et al. [22]’s question regarding the sense of agency, we found that some participants may have misinterpreted what was meant by ‘feeling of control’. They judged the question to be about success in the task rather than actual control over their body movements. Indeed, agency within HCI can have multiple interpretations (see [4] for a review), and we see this as a promising avenue for future work to explore other methods for evaluating the sense of agency in such shared virtual co-embodiment experiences. Third, it may be worthwhile to further extend the basic perceptual crossing paradigm in future work by systematically investigating how varying the presence and type of haptics-related instructions and training beforehand would influence participants’ shared agency during co-embodiment tasks. Fourth, we restricted ourselves to studying hand ownership; we did not investigate realistic full-body avatar representations (cf., [11, 28]). Furthermore, previous studies have shown that the realism of the avatar [10] and users’ choice of avatar [35] impact their sense of embodiment. Given our focus on better understanding the role of haptic feedback and shared control distribution across targeted and free-choice tasks, we kept our study variables to a minimum to avoid inflating the parameter space. However, this provides an interesting area for further research – does the type of avatar body, or a mixed hand representation shared amongst users, similarly influence the sense of agency and co-presence? Fifth, it is worth exploring how height differences between participants and their reach can impact experiences of shared avatar control. To this end, prior work has developed methods that generate avatar body characteristics adapted to participants’ variable heights [67] – this would help ensure that control is distributed precisely between the participants, even if it does not necessarily reflect real-world user characteristics. Finally, given our finding that the type of relationship with another person can influence following and leading behavior (cf., Sec 4.3.3), this opens up opportunities to further examine co-embodiment interactions in different dyad compositions.

    5.2 Elucidating the role of haptic feedback and avatar control distribution for virtual avatar co-embodiment

    Our study explored the impact of including haptic feedback and varying avatar control distribution on users’ sense of agency, co-presence, body ownership, and motion synchrony across reaching tasks in a virtual co-embodiment scenario (RQ). To this end, one key objective was to assess how receiving haptic feedback when participants' hands overlapped would affect these factors in scenarios involving shared goals and free choice. Our findings indicate that the presence of haptic feedback had a significant effect on the sense of agency, though in unexpected ways: participants felt a significantly greater sense of agency in conditions without haptic feedback than in conditions with it. Given that haptic feedback is well-suited for conveying non-verbal cues [39], we expected that haptics would facilitate, not hinder, sensorimotor coordination and guidance [38]. Furthermore, our qualitative findings (Sec 4.3.5) indicated that the vibrotactile patterns within our haptic feedback were perceived as a hindrance and at times intrusive. Although we took care to ensure pleasant vibrotactile patterns through a pre-study, participants reported that the vibrations reminded them of smartphone and smartwatch notifications. To interpret this finding, we first note that, given our focus on translating elements of perceptual crossing into the avatar co-embodiment paradigm, we restricted our study to the context of autonomous interaction processes during shared perceptual activities. This means that, even without conscious awareness of the vibrotactile cues, we expected such position-aware haptic feedback mechanisms to support shared perceptual experiences, in this case, shared motor activity during targeted and free-choice reaching tasks. However, given the salience of the haptic stimuli, we suspect that haptics may have lowered participants' sense of agency because they felt overwhelmed by the other user's guidance. This, along with the interplay of control, coordination, and physical attributes, would then have shaped the strategies participants used to synchronize with the other user during the haptic feedback conditions. Together, the foregoing raises caution about how haptics are integrated, suggesting that vibrotactile haptic feedback as a positional guidance mechanism in 3D virtual space may not be an effective means of improving shared avatar co-embodiment experiences.
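    To make this concrete, the following is a minimal sketch of how a position-aware vibrotactile trigger of this kind could be implemented. It is illustrative only, not our study code: the controller objects, their vibrate() method, and the overlap radius are assumed stand-ins for an HMD runtime's haptics API.

    import math

    OVERLAP_RADIUS = 0.08  # metres; assumed threshold for "hands overlapping"

    def hand_distance(p1, p2):
        """Euclidean distance between the two users' tracked hand positions."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

    def update_haptics(p1, p2, controller_a, controller_b):
        """Pulse both users' controllers while their hands overlap.

        Amplitude ramps from 0 to 1 as the hands approach full overlap,
        so the cue doubles as a coarse proximity signal. Assumed to be
        called once per rendered frame.
        """
        d = hand_distance(p1, p2)
        amplitude = max(0.0, 1.0 - d / OVERLAP_RADIUS)
        controller_a.vibrate(amplitude)  # assumed haptics API, not a real SDK call
        controller_b.vibrate(amplitude)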

    5.3 Shared control, motor synchrony, and perceptual crossing across targeted and free-choice tasks

    Our findings indicate that participants’ reported feelings of control (SoA) did not align with their actual levels of control. We found that participants’ sense of agency increased between the 25% and 50% control conditions, while a decrease was observed between the 50% and 75% conditions. This result echoes the findings of Kodama et al. [29], who likewise did not find a clear differentiation between the tested levels of control. Moreover, we found that participants felt a significantly greater sense of agency in Task 1 (targeted) than in Task 2 (free-choice). Since the conditions were counterbalanced and trials randomized, participants may have had a difficult time judging absolute control levels, as they had no relative comparison against which to gauge their experienced control. We used absolute judgments to measure subjective responses because this is standard practice in prior work investigating the sense of agency [11, 15], and because it allowed us to test whether haptics would lead to a higher (perceived) sense of agency in a given trial without referencing earlier trials (which may not have had haptics activated). As such, overestimation of control was apparent during Task 1. This lends credence to the findings of Fribourg et al. [11], who also found that participants perceived a greater sense of agency when the goal was shared compared to situations where participants pursued different goals. Furthermore, participants felt a significantly greater sense of co-presence during Task 2 than during Task 1, suggesting that strong motion synchronization effects may have diminished awareness of the other. This was further echoed by participants, some of whom reported a lack of awareness that they were sharing an avatar with their partner during Task 1. Qualitative analyses of user responses in the perceptual crossing paradigm likewise highlight that movement synchronization does not always indicate recognition of each other [30]. In our specific implementation, the straightforward task environment of Task 1 (targeted) may have afforded high synchronization but little room for active exploration (i.e., nothing akin to the oscillating movements found upon successful mutual recognition in perceptual crossing studies), limiting opportunities to become aware of the other participant. Participants’ attention to their partner's movements and their need to communicate verbally with their partner during Task 2 likewise indicate that they were more inclined to consciously coordinate, compared to the more autonomous interaction observed during Task 1.
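    For illustration, shared control distributions such as ours are commonly operationalized as a weighted average of the two users' tracked movements (cf., [11]). The sketch below shows such a blending under assumed names and a NumPy-based implementation; the exact scheme used in any given system, including ours, may differ.

    import numpy as np

    def blend_shared_hand(pos_a, pos_b, control_a=0.5):
        """Weighted-average blending of two users' tracked hand positions.

        control_a = 0.25, 0.5, or 0.75 maps onto the 25-75%, 50-50%, and
        75-25% control distributions; this weighted average is an assumed
        operationalization, not necessarily the study's implementation.
        """
        p_a = np.asarray(pos_a, dtype=float)
        p_b = np.asarray(pos_b, dtype=float)
        return control_a * p_a + (1.0 - control_a) * p_b

    # Example: user A holds 75% of the control over the shared virtual hand.
    shared = blend_shared_hand([0.10, 1.20, 0.40], [0.30, 1.00, 0.50], control_a=0.75)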
    We also found that participants felt the movements of the virtual hands influenced their own movements (Body ownership 2; cf., Sec. 4.2.3) significantly more during Task 2 than during Task 1. This was further reflected by participants who stated that they actively strategized to either exert more control over the virtual hand or to follow its movements during Task 2. Interestingly, similar strategizing about movement when encountering other users is found in the perceptual crossing paradigm, where users sometimes spontaneously adopt leader and follower roles [13]. For example, users may choose to remain stationary and passively receive the other’s touch [30]. These parallels are interesting given the stark differences in available sensory information between our tasks and the perceptual crossing paradigm, and they raise interesting questions about how the basic paradigm can be extended and integrated into richer multi-sensory shared virtual environments. The foregoing raises fundamental questions about the nature of shared control and social coordination as we integrate with machines and one another [40]: to what extent should we be consciously aware of bodily feedback mechanisms during shared activities? And given the importance of motion synchrony in varying the levels of conscious awareness of the virtual other, to what extent should shared body control systems, whether with humans or machines, leverage this without impinging on users’ sense of perceived and actual agency?

    6 Conclusion

    We investigated whether integrating haptics into shared avatar co-embodiment can enhance users’ shared VR experiences. Drawing on the perceptual crossing paradigm, we examined whether implementing non-verbal feedback mechanisms (namely, haptic feedback) within embodied interaction between two users can improve such social coordination experiences. Insights from this work provide a deeper understanding of the dynamics between users during co-embodiment and their impact on perceptions of the sense of agency, co-presence, and body ownership towards a virtual hand. We found that haptic feedback given to participants when their hands overlapped led to a diminished sense of agency during co-embodiment. Our findings showed (a) a lower sense of agency in the free-choice task with haptics compared to no feedback, (b) a higher sense of agency during the shared targeted task, (c) significantly higher co-presence and embodiment in the task with multiple targets, and (d) greater synchronization of users’ hand motions in the targeted task. Our work contributes a deeper understanding of, and cautionary considerations for, the role of vibrotactile haptic feedback and shared control distribution in the emerging area of virtual avatar co-embodiment.

    Footnotes

    1
    The random locations were limited to within the space in front of the participants to ensure the cubes were visible and reachable.
    3
    For an effect size f = 0.25 with α = 0.05 and power (1 − β) = 0.95, with 24 repeated measurements within factors, a minimum of 12 participants is needed.
    4
    We additionally tested cosine similarity measures, following Wohltjen et al. [64]’s approach of using Dynamic Time Warping to calculate cosine similarity scores; however, the results were similar to those of the Spearman analysis, and therefore we report only the Spearman rank correlation results.
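    As a minimal sketch of this synchrony analysis, the following computes a Spearman rank correlation between two users' per-frame hand speeds. The 72 Hz frame interval and the absence of filtering or windowing are simplifying assumptions, not the study's exact preprocessing.

    import numpy as np
    from scipy.stats import spearmanr

    def hand_speeds(positions, dt):
        """Per-frame speed magnitudes from an (N, 3) array of hand positions."""
        return np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt

    def motion_synchrony(pos_a, pos_b, dt=1.0 / 72.0):
        """Spearman rank correlation between two users' hand-speed series.

        dt assumes a 72 Hz tracking rate (e.g., a Quest 2 refresh cycle);
        filtering and windowing choices are omitted for brevity.
        """
        rho, p_value = spearmanr(hand_speeds(pos_a, dt), hand_speeds(pos_b, dt))
        return rho, p_value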

    Supplemental Material

    MP4 File - Video Preview
    Video Preview
    MP4 File - Video Presentation
    Video Presentation
    MP4 File - Supplementary Material A
    Video showing how the interaction was set up for the study. Refers to Sec. 3.5
    PDF File - Supplementary Material B
    A document showing the questions that were asked in the semi-structured interview. Refers to Sec. 3.5
    PDF File - Supplementary Material C
    A document showing the temporal analysis for sense of agency for all trials for each participant with three plots

    References

    [1]
    Ferran Argelaguet, Ludovic Hoyet, Michael Trico, and Anatole Lecuyer. 2016. The role of interaction in virtual embodiment: Effects of the virtual hand representation. In 2016 IEEE Virtual Reality (VR). IEEE, Greenville, SC, USA, 3–10. https://doi.org/10.1109/VR.2016.7504682
    [2]
    Malika Auvray, Charles Lenay, and John Stewart. 2009. Perceptual interactions in a minimalist virtual environment. New Ideas in Psychology 27, 1 (April 2009), 32–47. https://doi.org/10.1016/j.newideapsych.2007.12.002
    [3]
    Malika Auvray and Marieke Rohde. 2012. Perceptual crossing: the simplest online paradigm. Frontiers in Human Neuroscience 6 (2012), 181.
    [4]
    Dan Bennett, Oussama Metatla, Anne Roudaut, and Elisa D. Mekler. 2023. How Does HCI Understand Human Agency and Autonomy?. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 375, 18 pages. https://doi.org/10.1145/3544548.3580651
    [5]
    Diana Carvalho, Silmar Teixeira, Marina Lucas, Ti-Fei Yuan, Fernanda Chaves, Caroline Peressutti, Sergio Machado, Juliana Bittencourt, Manuel Menéndez-González, Antonio Egidio Nardi, et al. 2013. The mirror neuron system in post-stroke rehabilitation. International Archives of Medicine 6, 1 (2013), 1–7.
    [6]
    Harris Cooper, Paul M. Camic, Debra L. Long, A. T. Panter, David Rindskopf, and Kenneth J. Sher (Eds.). 2012. Thematic analysis. In APA Handbook of Research Methods in Psychology: Vol. 2. Research Designs. American Psychological Association, Washington, 57–71. https://doi.org/10.1037/13620-000
    [7]
    Patricia Cornelio, Patrick Haggard, Kasper Hornbaek, Orestis Georgiou, Joanna Bergström, Sriram Subramanian, and Marianna Obrist. 2022. The sense of agency in emerging technologies for human-computer integration: A review. Front. Neurosci. 16 (Sept. 2022), 949138.
    [8]
    Abdallah El Ali, Ekaterina R. Stepanova, Shalvi Palande, Angelika Mader, Pablo Cesar, and Kaspar Jansen. 2023. BreatheWithMe: Exploring Visual and Vibrotactile Displays for Social Breath Awareness during Colocated, Collaborative Tasks. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI EA ’23). Association for Computing Machinery, New York, NY, USA, Article 58, 8 pages. https://doi.org/10.1145/3544549.3585589
    [9]
    Lisa A. Elkin, Matthew Kay, James J. Higgins, and Jacob O. Wobbrock. 2021. An Aligned Rank Transform Procedure for Multifactor Contrast Tests. https://doi.org/10.48550/arXiv.2102.11824 arXiv:2102.11824 [cs, stat].
    [10]
    Rebecca Fribourg, Ferran Argelaguet, Anatole Lecuyer, and Ludovic Hoyet. 2020. Avatar and Sense of Embodiment: Studying the Relative Preference Between Appearance, Control and Point of View. IEEE Transactions on Visualization and Computer Graphics 26, 5 (May 2020), 2062–2072. https://doi.org/10.1109/TVCG.2020.2973077
    [11]
    Rebecca Fribourg, Nami Ogawa, Ludovic Hoyet, Ferran Argelaguet, Takuji Narumi, Michitaka Hirose, and Anatole Lecuyer. 2021. Virtual Co-Embodiment: Evaluation of the Sense of Agency While Sharing the Control of a Virtual Body Among Two Individuals. IEEE Transactions on Visualization and Computer Graphics 27, 10 (Oct. 2021), 4023–4038. https://doi.org/10.1109/TVCG.2020.2999197
    [12]
    Tom Froese, Hiroyuki Iizuka, and Takashi Ikegami. 2014. Embodied social interaction constitutes social cognition in pairs of humans: a minimalist virtual reality experiment. Scientific Reports 4, 1 (2014), 3672.
    [13]
    Tom Froese, Hiroyuki Iizuka, and Takashi Ikegami. 2014. Using minimal human-computer interfaces for studying the interactive development of social awareness. Frontiers in Psychology 5 (2014), 1061.
    [14]
    Mar Gonzalez-Franco, Rodrigo Pizarro, Julio Cermeron, Katie Li, Jacob Thorn, Windo Hutabarat, Ashutosh Tiwari, and Pablo Bermell-Garcia. 2017. Immersive Mixed Reality for Manufacturing Training. Frontiers in Robotics and AI 4 (2017). https://www.frontiersin.org/articles/10.3389/frobt.2017.00003
    [15]
    Takayoshi Hagiwara, Maki Sugimoto, Masahiko Inami, and Michiteru Kitazaki. 2019. Shared Body by Action Integration of Two Persons: Body Ownership, Sense of Agency and Task Performance. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, Osaka, Japan, 954–955. https://doi.org/10.1109/VR.2019.8798222
    [16]
    Ayah Hamad and Bochen Jia. 2022. How Virtual Reality Technology Has Changed Our Lives: An Overview of the Current and Potential Applications and Limitations. International Journal of Environmental Research and Public Health 19, 18 (2022). https://doi.org/10.3390/ijerph191811278
    [17]
    Harin Hapuarachchi, Takayoshi Hagiwara, Gowrishankar Ganesh, and Michiteru Kitazaki. 2023. Effect of connection induced upper body movements on embodiment towards a limb controlled by another during virtual co-embodiment. PLOS ONE 18, 1 (Jan. 2023), e0278022. https://doi.org/10.1371/journal.pone.0278022
    [18]
    Harin Hapuarachchi and Michiteru Kitazaki. 2022. Knowing the intention behind limb movements of a partner increases embodiment towards the limb of joint avatar. Scientific Reports 12, 1 (July 2022), 11453. https://doi.org/10.1038/s41598-022-15932-x
    [19]
    D. J. Harris, T. Arthur, J. Kearse, M. Olonilua, E. K. Hassan, T. C. De Burgh, M. R. Wilson, and S. J. Vine. 2023. Exploring the role of virtual reality in military decision training. Frontiers in Virtual Reality 4 (2023). https://www.frontiersin.org/articles/10.3389/frvir.2023.1165030
    [20]
    Paul Heidicker, Eike Langbehn, and Frank Steinicke. 2017. Influence of avatar appearance on presence in social VR. In 3D User Interfaces (3DUI), 2017 IEEE Symposium on. IEEE, Los Angeles, CA, USA, 233–234.
    [21]
    Masahiko Inami, Daisuke Uriu, Zendai Kashino, Shigeo Yoshida, Hiroto Saito, Azumi Maekawa, and Michiteru Kitazaki. 2022. Cyborgs, Human Augmentation, Cybernetics, and JIZAI Body. In Augmented Humans 2022 (Kashiwa, Chiba, Japan) (AHs 2022). Association for Computing Machinery, New York, NY, USA, 230–242. https://doi.org/10.1145/3519391.3519401
    [22]
    Camille Jeunet, Louis Albert, Ferran Argelaguet, and Anatole Lecuyer. 2018. “Do You Feel in Control?”: Towards Novel Approaches to Characterise, Manipulate and Measure the Sense of Agency in Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 24, 4 (April 2018), 1486–1495. https://doi.org/10.1109/TVCG.2018.2794598
    [23]
    M-Carmen Juan, Julen Elexpuru, Paulo Dias, Beatriz Sousa Santos, and Paula Amorim. 2023. Immersive virtual reality for upper limb rehabilitation: comparing hand and controller interaction. Virtual Reality 27, 2 (2023), 1157–1171.
    [24]
    Sungchul Jung, Nawam Karki, Max Slutter, and Robert W. Lindeman. 2021. On the Use of Multi-sensory Cues in Symmetric and Asymmetric Shared Collaborative Virtual Spaces. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1 (April 2021), 1–25. https://doi.org/10.1145/3449146
    [25]
    Samantha Keenaghan, Lucy Bowles, Georgina Crawfurd, Simon Thurlbeck, Robert W. Kentridge, and Dorothy Cowie. 2020. My body until proven otherwise: Exploring the time course of the full body illusion. Consciousness and Cognition 78 (2020), 102882. https://doi.org/10.1016/j.concog.2020.102882
    [26]
    Robert S. Kennedy, Norman E. Lane, Kevin S. Berbaum, and Michael G. Lilienthal. 1993. Simulator Sickness Questionnaire: An enhanced method for quantifying simulator sickness. The International Journal of Aviation Psychology 3, 3 (1993), 203–220. https://doi.org/10.1207/s15327108ijap0303_3
    [27]
    Konstantina Kilteni, Raphaela Groten, and Mel Slater. 2012. The Sense of Embodiment in Virtual Reality. Presence: Teleoperators and Virtual Environments 21, 4 (Nov. 2012), 373–387. https://doi.org/10.1162/PRES_a_00124
    [28]
    Daiki Kodama, Takato Mizuho, Yuji Hatada, Takuji Narumi, and Michitaka Hirose. 2022. Enhancing the Sense of Agency by Transitional Weight Control in Virtual Co-Embodiment. In 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, Singapore, Singapore, 278–286. https://doi.org/10.1109/ISMAR55827.2022.00043
    [29]
    Daiki Kodama, Takato Mizuho, Yuji Hatada, Takuji Narumi, and Michitaka Hirose. 2023. Effects of Collaborative Training Using Virtual Co-embodiment on Motor Skill Learning. IEEE Transactions on Visualization and Computer Graphics 29, 5 (May 2023), 2304–2314. https://doi.org/10.1109/TVCG.2023.3247112
    [30]
    Hiroki Kojima, Tom Froese, Mizuki Oka, Hiroyuki Iizuka, and Takashi Ikegami. 2017. A sensorimotor signature of the transition to conscious social perception: co-regulation of active and passive touch. Frontiers in Psychology 8 (2017), 1778.
    [31]
    Amanda Lazar, Hilaire Thompson, and George Demiris. 2014. A Systematic Review of the Use of Technology for Reminiscence Therapy. Health Education & Behavior 41, 1_suppl (2014), 51S–61S. https://doi.org/10.1177/1090198114537067 PMID: 25274711.
    [32]
    Charles Lenay. 2021. Perceiving at a distance: enaction, exteriority and possibility–a tribute to John Stewart. Adaptive Behavior 29, 5 (2021), 485–503.
    [33]
    Charles Lenay, John Stewart, Marieke Rohde, and Amal Ali Amar. 2011. “You never fail to surprise me”: the hallmark of the Other: Experimental study and simulations of perceptual crossing. Interaction Studies. Social Behaviour and Communication in Biological and Artificial Systems 12, 3 (Nov. 2011), 373–396. https://doi.org/10.1075/is.12.3.01len
    [34]
    Jie Li, Yiping Kong, Thomas Röggla, Francesca De Simone, Swamy Ananthanarayan, Huib de Ridder, Abdallah El Ali, and Pablo Cesar. 2019. Measuring and Understanding Photo Sharing Experiences in Social Virtual Reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI ’19). ACM, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300897
    [35]
    Sohye Lim and Byron Reeves. 2009. Being in the Game: Effects of Avatar Choice and Point of View on Psychophysiological Responses During Play. Media Psychology 12, 4 (2009), 348–370. https://doi.org/10.1080/15213260903287242
    [36]
    Che-Wei Lin, Li-Chieh Kuo, Yu-Ching Lin, Fong-Chin Su, Yu-An Lin, and Hsiu-Yun Hsu. 2021. Development and testing of a virtual reality mirror therapy system for the sensorimotor performance of upper extremity: A pilot randomized controlled trial. IEEE Access 9 (2021), 14725–14734.
    [37]
    Destaw B. Mekbib, Dereje Kebebew Debeli, Li Zhang, Shan Fang, Yuling Shao, Wei Yang, Jiawei Han, Hongjie Jiang, Junming Zhu, Zhiyong Zhao, et al. 2021. A novel fully immersive virtual reality environment for upper extremity rehabilitation in patients with stroke. Annals of the New York Academy of Sciences 1493, 1 (2021), 75–89.
    [38]
    Miguel Melo, Guilherme Gonçalves, Pedro Monteiro, Hugo Coelho, José Vasconcelos-Raposo, and Maximino Bessa. 2020. Do multisensory stimuli benefit the virtual reality experience? A systematic review. IEEE Transactions on Visualization and Computer Graphics 28, 2 (2020), 1428–1442.
    [39]
    Jonas Moll and Eva-Lotta Sallnäs. 2009. Communicative functions of haptic feedback. In International Conference on Haptic and Audio Interaction Design. Springer, Berlin, Heidelberg, 1–10.
    [40]
    Florian Floyd Mueller, Pedro Lopes, Paul Strohmeier, Wendy Ju, Caitlyn Seim, Martin Weigel, Suranga Nanayakkara, Marianna Obrist, Zhuying Li, Joseph Delfa, Jun Nishida, Elizabeth M. Gerber, Dag Svanaes, Jonathan Grudin, Stefan Greuter, Kai Kunze, Thomas Erickson, Steven Greenspan, Masahiko Inami, Joe Marshall, Harald Reiterer, Katrin Wolf, Jochen Meyer, Thecla Schiphorst, Dakuo Wang, and Pattie Maes. 2020. Next Steps for Human-Computer Integration. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3313831.3376242
    [41]
    Nami Ogawa, Takuji Narumi, and Michitaka Hirose. 2019. Virtual Hand Realism Affects Object Size Perception in Body-Based Scaling. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, Osaka, Japan, 519–528. https://doi.org/10.1109/VR.2019.8798040
    [42]
    Catherine S. Oh, Jeremy N. Bailenson, and Gregory F. Welch. 2018. A Systematic Review of Social Presence: Definition, Antecedents, and Implications. Frontiers in Robotics and AI 5 (Oct. 2018), 114. https://doi.org/10.3389/frobt.2018.00114
    [43]
    Zizi Papacharissi. 2005. The Real-Virtual Dichotomy in Online Interaction: New Media Uses and Consequences Revisited. Annals of the International Communication Association 29, 1 (Jan. 2005), 216–238. https://doi.org/10.1080/23808985.2005.11679048
    [44]
    Tabitha C. Peck and Mar Gonzalez-Franco. 2021. Avatar Embodiment. A Standardized Questionnaire. Frontiers in Virtual Reality 1 (Feb. 2021), 575943. https://doi.org/10.3389/frvir.2020.575943
    [45]
    Tekla S. Perry. 2016. Virtual reality goes social. IEEE Spectrum 53, 1 (2016), 56–57. https://doi.org/10.1109/MSPEC.2016.7367470
    [46]
    Daniel Pimentel and Charlotte Vinkers. 2021. Copresence With Virtual Humans in Mixed Reality: The Impact of Contextual Responsiveness on Social Perceptions. Frontiers in Robotics and AI 8 (April 2021), 634520. https://doi.org/10.3389/frobt.2021.634520
    [47]
    Susanne Putze, Dmitry Alexandrovsky, Felix Putze, Sebastian Höffner, Jan David Smeddinck, and Rainer Malaka. 2020. Breaking The Experience: Effects of Questionnaires in VR User Studies. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3313831.3376144
    [48]
    Julian Rasch, Vladislav Dmitrievic Rusakov, Martin Schmitz, and Florian Müller. 2023. Going, Going, Gone: Exploring Intention Communication for Multi-User Locomotion in Virtual Reality. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. ACM, Hamburg Germany, 1–13. https://doi.org/10.1145/3544548.3581259
    [49]
    Selma Rizvic, Gregg Young, Avinash Changa, Bojan Mijatovic, and Ivona Ivkovic-Kihic. 2022. Da Vinci Effect - multiplayer Virtual Reality experience. In Eurographics Workshop on Graphics and Cultural Heritage, Federico Ponchio and Ruggero Pintus (Eds.). The Eurographics Association. https://doi.org/10.2312/gch.20221229
    [50]
    Aino Saarinen, Ville Harjunen, Inga Jasinskaja-Lahti, Iiro P. Jääskeläinen, and Niklas Ravaja. 2021. Social touch experience in different contexts: A review. Neuroscience & Biobehavioral Reviews 131 (2021), 360–372. https://doi.org/10.1016/j.neubiorev.2021.09.027
    [51]
    Thomas Schubert, Frank Friedmann, and Holger Regenbrecht. 2001. The experience of presence: Factor analytic insights. Presence: Teleoperators & Virtual Environments 10, 3 (2001), 266–281.
    [52]
    Valentin Schwind, Pascal Knierim, Cagri Tasci, Patrick Franczak, Nico Haas, and Niels Henze. 2017. "These Are Not My Hands!": Effect of Gender on the Perception of Avatar Hands in Virtual Reality. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 1577–1582. https://doi.org/10.1145/3025453.3025602
    [53]
    Hasti Seifi, Kailun Zhang, and Karon E MacLean. 2015. VibViz: Organizing, visualizing and navigating vibration libraries. In 2015 IEEE World Haptics Conference (WHC). IEEE, 254–259.
    [54]
    Harrison Jesse Smith and Michael Neff. 2018. Communication Behavior in Embodied Virtual Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). ACM, New York, NY, USA, 1–12. https://doi.org/10.1145/3173574.3173863
    [55]
    Mandayam A Srinivasan and Cagatay Basdogan. 1997. Haptics in virtual environments: Taxonomy, research status, and challenges. Computers & Graphics 21, 4 (1997), 393–404.
    [56]
    Lisa J. Stephenson, S. Gareth Edwards, and Andrew P. Bayliss. 2021. From Gaze Perception to Social Cognition: The Shared-Attention System. Perspectives on Psychological Science 16, 3 (2021), 553–576. https://doi.org/10.1177/1745691620953773 PMID: 33567223.
    [57]
    Yilu Sun, Omar Shaikh, and Andrea Stevenson Won. 2019. Nonverbal synchrony in virtual reality. PLOS ONE 14, 9 (09 2019), 1–28. https://doi.org/10.1371/journal.pone.0221803
    [58]
    Anastasios Theodoropoulos, Dimitra Stavropoulou, Panagiotis Papadopoulos, Nikos Platis, and George Lepouras. 2023. Developing an Interactive VR CAVE for Immersive Shared Gaming Experiences. Virtual Worlds 2, 2 (May 2023), 162–181. https://doi.org/10.3390/virtualworlds2020010
    [59]
    Cordula Vesper, Ekaterina Abramova, Judith Bütepage, Francesca Ciardo, Benjamin Crossey, Alfred Effenberg, Dayana Hristova, April Karlinsky, Luke McEllin, Sari R. R. Nijssen, et al. 2017. Joint action: Mental representations, shared information and general mechanisms for coordinating with others. Frontiers in Psychology 7 (2017), 2039.
    [60]
    Chyanna Wee, Kian Meng Yap, and Woan Ning Lim. 2021. Haptic Interfaces for Virtual Reality: Challenges and Research Directions. IEEE Access 9 (2021), 112145–112162. https://doi.org/10.1109/ACCESS.2021.3103598
    [61]
    Johann Wentzel, Greg d’Eon, and Daniel Vogel. 2020. Improving Virtual Reality Ergonomics Through Reach-Bounded Non-Linear Input Amplification. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376687
    [62]
    Norbert Wiener. 1948. Cybernetics: or Control and Communication in the Animal and the Machine (2 ed.). MIT Press, Cambridge, MA.
    [63]
    Jacob O. Wobbrock, Leah Findlater, Darren Gergle, and James J. Higgins. 2011. The aligned rank transform for nonparametric factorial analyses using only anova procedures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). Association for Computing Machinery, New York, NY, USA, 143–146. https://doi.org/10.1145/1978942.1978963
    [64]
    Sophie Wohltjen, Brigitta Toth, Adam Boncz, and Thalia Wheatley. 2023. Synchrony to a beat predicts synchrony with other minds. Scientific Reports 13, 1 (2023), 3591.
    [65]
    Tae-Heon Yang, Jin Ryong Kim, Hanbit Jin, Hyunjae Gil, Jeong-Hoi Koo, and Hye Jin Kim. 2021. Recent Advances and Opportunities of Active Materials for Haptic Technologies in Virtual and Augmented Reality. Advanced Functional Materials 31, 39 (2021), 2008831. https://doi.org/10.1002/adfm.202008831
    [66]
    Ungyeon Yang and Gerard Jounghyun Kim. 2002. Implementation and Evaluation of "Just Follow Me": An Immersive, VR-Based, Motion-Training System. Presence: Teleoper. Virtual Environ. 11, 3 (jun 2002), 304–323. https://doi.org/10.1162/105474602317473240
    [67]
    Yongjing Ye, Libin Liu, Lei Hu, and Shihong Xia. 2022. Neural3Points: Learning to Generate Physically Realistic Full-body Motion for Virtual Reality Users. https://doi.org/10.48550/arXiv.2209.05753 arXiv:2209.05753 [cs].
    [68]
    Yizhong Zhang, Zhiqi Li, Sicheng Xu, Chong Li, Jiaolong Yang, Xin Tong, and Baining Guo. 2023. RemoteTouch: Enhancing Immersive 3D Video Communication with Hand Touch. http://arxiv.org/abs/2302.14365 arXiv:2302.14365 [cs].
    [69]
    Shizhe Zhu, Youxin Sui, Ying Shen, Yi Zhu, Nawab Ali, Chuan Guo, and Tong Wang. 2021. Effects of Virtual Reality Intervention on Cognition and Motor Function in Older Adults With Mild Cognitive Impairment or Dementia: A Systematic Review and Meta-Analysis. Frontiers in Aging Neuroscience 13 (2021). https://doi.org/10.3389/fnagi.2021.586999
