The Normal Natural Troubles of Virtual Reality in Mixed-Reality Performances
DOI: https://doi.org/10.1145/3491102.3502139
CHI '22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, April 2022
Performing with technology is a complex and challenging task. Artists who use novel technologies, such as Virtual Reality, have to develop strategies of monitoring, maintenance, and recovery from errors with as little impact on the ongoing performance as possible. In this paper we draw on two case studies of mixed-reality performances and document strategies of Stage Managing VR Performances, Choreographing for Cables, Consistency & Charging, Improvising Interventions, and Priming Participants. We discuss how these practices expose areas ripe with potential for tool development, and how they can also inform the design of interaction with other technologies, such as the Internet of Things.
ACM Reference Format:
Asreen Rostami and Donald McMillan. 2022. The Normal Natural Troubles of Virtual Reality in Mixed-Reality Performances. In CHI Conference on Human Factors in Computing Systems (CHI '22), April 29-May 5, 2022, New Orleans, LA, USA. ACM, New York, NY, USA, 22 pages. https://doi.org/10.1145/3491102.3502139
1 INTRODUCTION
Virtual Reality (VR) has become – once again – a topic of interest for many: for journalists [52] to create immersive stories alongside their reports, and for performing artists [7, 15, 33] and movie directors [23] to create novel artistic experiences. HCI research has also been influenced by this renewed interest, as we can see in the growing number of related research projects being presented [20, 40, 64]. With respect to interactive performance with VR, mixed-reality performances have been a source of inspiration in HCI [2, 21], and researchers have taken advantage of the artistry and creativity of this discipline to inform their work [46] and point towards different possibilities of interaction within performances [29, 49].
In this paper we take a different approach to understanding VR in mixed-reality performances. By close video analysis of two artistic performances which involve participating audience members wearing HMDs we provide details of the natural, everyday troubles encountered by artists in designing and staging such experiences. We foreground the practical, logistical, and organisational challenges involved in keeping these artworks running and the potential of these challenges to both spark and direct artistic practice and technology design.
We describe the two performances and their related data in detail, then present the challenges and the coping mechanisms employed by the artist groups under the headings Stage Managing VR Performances, Choreographing for Cables, Consistency, & Charging, Improvising Interventions, and Priming Participants.
Our use of the term VR is in keeping with our practitioners’ use of the term, the use by other artists in this space (e.g. [7, 33]), and the use by researchers studying mixed-reality performances (e.g. [2, 50]) for a variety of technologies that make use of Head-Mounted Displays (HMDs). As such we use VR as an umbrella term covering both 360-degree video experiences viewed through HMDs and virtual and physical environments within the virtual reality continuum [42].
In the discussion we suggest possible tools and solutions that would help artists to maintain the fluidity and coherence of their performance, manage and control the audio and visual boundaries in VR, and to manipulate participants’ attention. We also look to how mixed-reality performances with VR can draw on other research in the field of 360 video and VR to provide tools to understand and orchestrate awareness of participants, manage the ongoing attention and direction of the audience, and help in the planning and portability of interactive performances. We continue the discussion with a vision of how Internet of Things (IoT) environments such as the smart home with its interconnected entertainment and domestic technology can learn from the techniques employed by the artists we studied to better support, embed, and motivate the complex monitoring and maintenance tasks that they require.
Exposing the practices and workarounds employed by performance artists is a valuable contribution to HCI for those artists, designers, and researchers whose work touches on ‘the arts’, and for those who take advantage of the outcomes of research beyond its artistic contributions [1, 14, 24, 46, 48, 60]. Positioned within HCI's long history of interest in examining experts’ and practitioners’ everyday practices and exposing them for analysis, discussion, and design (e.g. [12, 16, 41]), we contribute by providing an opportunity to learn from how artists – as experts in their field – appropriate technology for an aesthetic experience [38] and make it serve purposes it was not designed for. These contributions are synthesised into practical methods and opportunities for orchestrating the awareness of those wearing HMDs, and for employing and adapting stage management techniques to both the attention and movement of the participating audience members.
As a further contribution, we take advantage of this opportunity to better understand the challenges that emerging technologies can create, not only for the increasing number of artists presenting performance-based theatrical VR works1 but also for those working with museums and exhibition spaces interested in curating mixed-reality experiences, and for those designing novel technical artefacts to support immersive and ambulatory VR experiences. There is a risk, to their artworks and reputations, when performances in front of paying audience members and arts funding agencies rely on technology enabling experiences it was not designed for. Understanding how artists ‘make it work’ not only shows designers and HCI researchers where technology can be improved, but also sheds light on new opportunities for design that can be drawn on far beyond the fields of interactive performance and immersive experiences.
2 BACKGROUND
In the following sections, we introduce previous research on interactive and mixed-reality performances, as well as virtual reality technologies in relation to different interaction challenges and opportunities they facilitate.
2.1 Interactive performances
Several strands of HCI research have explored the potential of interactive performances in different contexts and from different research perspectives. Collaborations between HCI researchers and performing artists [2, 3, 5, 24] have previously resulted in the development of the performance-led research in the wild framework [3], which puts artists and their design activities at the centre of research. Beyond the methodological contributions of such collaborations, artistic expression employing a variety of interactive technologies has also been studied within HCI. In a recent work, Tholander et al. [60] illustrate how different groups of artists, with their professional skills and expertise, are required not only to design the performative aspects of a performance, but also to make the telepresence technology work as part of that performance.
For example, Fdili Alaoui [14] discusses the result of designing an interactive dance performance with respect to an “implicit negotiation between the artist and the technology” as they appropriate the technology and adjust the performer's interactions and expectations to the technological constraints in a way that delivers an expressive interactive performance. In relation to this, the role of an interactive technology in the design of an artistic practice can go beyond the interactivity and technical functionality that it provides. It can be seen as a provider of “expressive” opportunities that the artist can take advantage of to add to the performance. With this view, the technology does not need to be seen only as a tool to solve a particular problem, rather it can be seen as an aesthetic element within the performance, a “partner” to perform with, or a “character” to interact with during the performance.
HCI's interest in studies of interactive performances is not limited to studies of novel artistic opportunities enabled by interactive technologies. In another study, Eriksson et al. [13] use body trackers and drones as a novel interactive technology assemblage. This was done to explore the aesthetic opportunities of physically interacting with drones, and to let performers craft novel dance expressions for an opera performance. In another example, Taylor et al. [58] report on the challenges of designing and performing an interactive performance, humanaquarium, in a public setting. They discuss how artists experience a gradual sense-making of the technology in context through observing public interactions, understanding the experience of audience members, and reflecting on the process of designing and performing the artwork. This moves the discourse beyond the experience of the performance from an audience perspective, and turns towards the investigation of relationships between performers, engineers, and designers as part of a dialogical design practice.
Interestingly, but perhaps not surprisingly, one general challenge mentioned by all these studies was dealing with technology and audience participation in a live setting. For example, Taylor et al. [58] discuss how experiencing interactive performances can be intimidating, causing discomfort or stage fright for the participant, performer, and audience alike. Similarly, Barkhuus et al. [1] discuss how artists in an interactive performance had to take extra measures when interacting with technology to avoid distraction and disconnection during their distributed performance. This was foregrounded at points in the performance where audience members were able to interact with the live performance, resulting in unexpected reactions from other audience members. Although similar challenges and concerns can be experienced in any live performance, the openness of interactive performance to audience intervention and manipulation, and to technological failures (such as the one documented in [14]), requires extra attention and careful consideration from artists, performers, and designers during the design and performance of the art piece.
2.2 Virtual Reality
With respect to VR, discussions of the challenges and potential of the technology go beyond the scope of mixed-reality performances. For example, Gugenheimer et al. [20] have previously emphasised the opportunities and challenges of designing a shared and collaborative mixed-reality game for both HMD and non-HMD users. They discuss how the physical proximity of non-HMD users to the HMD user, or acoustic and tactile input from the physical environment, could break the HMD user's sense of presence and immersion, particularly if those interactions were not embedded as part of the virtual experience. Dao et al. [6] analysed 233 YouTube videos of ‘VR Fails’ and discussed how these problems could inform design for the interaction between the user in VR, external spectators, and the environment, while Krauß et al. [30] studied how professional teams, such as designers and software developers, make use of different tools to work with VR technologies and what challenges they have to overcome. One outcome of this study concerns the different needs and expectations that each team member may have with respect to the technology. Their simple but effective design suggestion is to create a common ground of knowledge between different members of the team by adopting interactive artefacts – rather than traditional static ones such as flow diagrams – that can enhance interdisciplinary communication. In another study, Williamson et al. [65] examined the challenges of multi-user VR by taking it to the more complex and social environment of a commercial passenger flight. They discuss how a passenger using an in-flight VR entertainment system may have an unsettling experience as they receive interruptions from other passengers or flight attendants.
Knibbe et al. [28] move beyond the ongoing experience of VR to discuss the challenges and opportunities of design in relation to the “moment of exit” from VR. In this study they present a series of suggestions to heighten or lessen the transition from VR to the real world. For instance, they discuss different strategies of ending, such as an abrupt exit or a gradual fade into the real world, that can create or break the sense of immersion at the moment of leaving the VR world. This study also suggests providing participants with an awareness of their social setting outside VR to create a trustworthy experience, and giving them an opportunity for sensory adaptation to their real-world surroundings while still within VR.
In another study, Marques et al. [35] discuss the use of 360-degree VR to create an immersive experience for a short suspense movie. They argue that although the VR format creates a more immersive experience and enhances the sense of presence for the viewer – in comparison to a 2D format – it also creates more distraction. This distraction comes from the immersive surround of VR, which provides things to attend to outwith the traditional “frame” that a director has control over. Suzuki et al. [55] propose a system (Substitutional Reality) for creating a sense of immersion and presence through the manipulation of the participant's reality. This is done by taking advantage of recorded videos of real scenes of the physical space where participants are located, playing them back when needed to fill in the reality gap caused by computer graphics. In line with these studies, Speicher et al. [53] present a set of guidelines on how to draw viewers’ attention in a particular direction while they watch 360-degree videos on an HMD. For example, visual guidance can be implemented to help the viewer notice important actions that are integral to the plot or experience. This helps designers reduce the impact of the viewer's distraction within the immersive surround on the artistic experience.
3 CASE STUDIES
As is common in studies of interactive performances (e.g. [4, 29, 50]), we begin with a description of each case study and the performance settings in which it was investigated. These two case studies constitute the base from which we discuss issues in relation to the use of virtual reality technologies in live performances. The data collection for the following case studies was carried out in both Stockholm and Malmö in Sweden between 2016 and 2019.
3.1 Case One: The Shared Individual
Case One is a 20-minute live mixed-reality performance, designed and performed by the Swedish artist group Bombina Bombast in collaboration with the Danish film studio Makropol. Bombina Bombast is a touring performing arts company based in Malmö, Sweden. The team has created over 30 original works for the stage by bringing together performing arts and emerging technologies such as VR. The Shared Individual is centred around creating an out-of-body experience for a group of audience members in a live performance using “film, performing art, technological trickery [and] virtual reality”2. This performance has been presented at festivals and stages around the world, including the IDFA International Documentary Film Festival in Amsterdam, 2016. The artistic goal behind this performance is to provide the audience with an out-of-body [27] experience, to detach the audience members from their physical bodies, and to challenge the audience's notion of perception by forcing them to give up control over their bodies in VR [47]. In this performance, a live performer shares her point of view (POV) with a group of between four and thirty live audience members by wearing a head-mounted 360-degree camera and streaming live video from her point of view (Fig. 1b) to the audience's smartphone-based Gear VR3 Head-Mounted Displays (HMDs) (Fig. 1c).
During the performance, the second performer invites the audience members to “occupy” the main performer's body and become her by following her through three phases of the performance: visual synchronisation, physical synchronisation, and sensual synchronisation. The visual synchronisation is where the audience see themselves in VR from the performer's point of view (Fig. 1) and listen through the performer's ears (via cameras and microphones embedded in the helmet). The physical synchronisation happens as the performer asks the audience to look at their hands (where what they actually see through the HMD is the performer's hands) and mirror the movements they see with their own (physical) hands (Fig. 2a & 2b). This is done by the artists to connect the audience's bodies to the performer's body and complete the physical and visual telepresence.
For the last step of becoming the main performer, the second performer blocks the audience's view of themselves by placing a black screen in front of the main performer. As a result, the audience can only see the second performer through the eyes of the main performer in VR (Fig. 2d). Aiming for an intimate and emotional connection, the second performer asks, in a face-to-face conversation with the main performer in VR, to hold the audience's hand. The second performer then touches the main performer's hand in order to create the illusion of a sensual connection (Fig. 2c & 2d).
When this part is performed, the performer announces the successful transformation of all audience members into the main performer. The second performer then removes the black screen so that the audience can once again see the full house through the main performer's eyes. However, what the audience sees is an empty theatre, as their physical bodies have now disappeared and they have all become one with the main performer. From a technical perspective, this is done by taking advantage of network latency. There is a 30–60 second latency in the stream, depending on network capacity. Wearing an HMD, audience members can see themselves and follow the performer's live instructions to have an out-of-body experience, with a 30–60 second delay. The artists use this delay as an opportunity not only to create a novel narrative, but to switch the live stream to a pre-recorded video of the same room, empty of the audience. This creates the illusion of the audience's bodies being removed. To better demonstrate this absence, the second performer starts walking on the (now empty) seats to prove that the audience's bodies no longer exist in the physical world. In this void the main performer performs a song, and when it is finished the pre-recorded video is switched back to the live stream and the audience reappear in their seats. The second performer then explains that this experience was a simulation and that the audience members collectively inhabited the main performer's body.
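The latency-masked switch described above can be modelled as a simple delayed pipeline. The sketch below is an illustrative reconstruction, not the artists' actual system: the names (`DelayedFeed`, `switch_source`) are invented for this example, and latency is measured in abstract ticks rather than the 30–60 seconds of the real stream. It shows why the swap is invisible to the audience: frames already "in flight" in the latency buffer keep showing the old source, so the cut to the pre-recorded empty room only reaches the HMDs after the buffer drains.

```python
from collections import deque

class DelayedFeed:
    """Toy model of the performance's video pipeline (names hypothetical).
    The HMDs play the 360 stream behind a fixed latency; the operator can
    swap the source feeding the pipeline from the live camera to a
    pre-recorded clip of the empty room."""

    def __init__(self, latency_ticks):
        self.latency = latency_ticks
        self.buffer = deque()   # frames in flight between stage and HMDs
        self.source = "live"    # source currently feeding the pipeline

    def switch_source(self, source):
        # Takes effect for newly captured frames only; frames already in
        # the latency buffer still show the old source.
        self.source = source

    def capture(self, live_frame, prerecorded_frame):
        frame = live_frame if self.source == "live" else prerecorded_frame
        self.buffer.append(frame)

    def playout(self):
        # HMDs only show frames once the latency buffer has filled.
        if len(self.buffer) > self.latency:
            return self.buffer.popleft()
        return None  # still buffering

# The switch at tick 3 only reaches viewers after the 2-tick buffer drains:
feed = DelayedFeed(latency_ticks=2)
seen = []
for t in range(6):
    if t == 3:
        feed.switch_source("prerecorded")
    feed.capture(f"live{t}", f"empty{t}")
    seen.append(feed.playout())
# seen == [None, None, "live0", "live1", "live2", "empty3"]
```

Viewers continue to see `live` frames for a full latency window after the operator has already switched the source, which is exactly the gap the artists exploit to stage the "disappearance" of the audience.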
The audience continue to watch themselves from the perspective of the artist. As time goes on, they begin to see others remove their headsets and stand up to leave from that perspective. The performance continues in this way until the majority of the audience have left the shared experience4.
3.2 Case Two: Frictional Realities, Experimental & Commercial
Case Two is a 30-minute live co-present performance, designed by Noah Hellwig, a Swedish artist and performer based in Stockholm. The performance was designed as part of a collaboration between Rostami and Hellwig to explore Rostami's concept of friction [50], with the goal of designing an embodied mixed-reality performance. The final artistic work is performed by three performers, two of whom directly interact with the participants taking part in the performance on stage, while one acts as the narrator of the live experience. The underlying artistic concern behind the performance is to play with participants' perception of reality and sense of presence within the experience of an immersive performance. The narrator facilitates this experience by inviting participants to follow her voice, perform actions in the physical or virtual world, and explore the consequences of their actions.
This case combines two different sets of runs of the performance shown in Fig. 3. To better present the data and the related analysis, we label one set Frictional Realities: Experimental (Fig. 3a), referring to the runs at the Swedish National Touring Theatre (Riksteatern) as part of a design residency program. The subsequent commercial performance we label Frictional Realities: Commercial, referring to the runs at Weld, a local arts centre in Stockholm, shown in Fig. 3b. This was an updated version, adapted for both the new space and the issues encountered during the experimental performance.
During the performance, two participants take part by wearing HTC Vive5 HMDs and interacting with objects in both the physical and virtual spaces using HTC Vive trackers and controllers. The performance is led by a team of three or four: two stage performers, a live narrator, and a system controller.
Upon arrival on the performance stage, both participants are asked to wear HMDs, close their eyes and lay down on the floor (Fig. 3). Each participant is placed on one side of the stage (Fig. 3b), and they hear the voice of the narrator in the physical room. Participants are invisible to each other in VR, unless they cross over the virtual wall, and physically move to each other's space. They are invited to interact with different elements of the performance (e.g. physical and virtual objects and environment), with each other, and stage performers. During the performance, the narrator delivers exposition, comments on participants’ actions by recounting (and sometimes judging) their movements or mistakes, and relates those actions to the story of the performance. The performance ends when all the narrator's tasks have been performed by the participants. The participants are then invited to sit back and follow the narrator's countdown before removing their HMDs and leaving VR.
During the experimental runs, the team had access to a system designer who was also responsible for monitoring and controlling the technical and virtual parts of the performance. During the commercial runs, however, the narrator – who was familiar and comfortable with working with technology – took on the role of system controller.
4 RESEARCH METHOD
Empirical data was collected over three years following the performance-led research in the wild approach [3], which focuses HCI studies on artistic practice led by artists, as well as on audiences' experience of the performance in a realistic public setting. Data included field notes, video recordings of The Shared Individual collected from three runs of rehearsal, and video recordings of five runs of rehearsal and the design process of Frictional Realities (Experimental), as well as 22 runs of the performance (Commercial) at a local arts centre. Overall, more than 870 minutes of multi-angle and 360-degree video recordings were collected.
We were not involved in the design of the performance described in Case One: The Shared Individual, nor in its deployment. However, as part of our ongoing collaboration with Bombina Bombast, we were invited to attend and observe the design process and rehearsal sessions of the performance. Additionally, several recordings and related documentation of the performance being staged at different festivals and venues were made available to the research team. The artists did not receive any monetary compensation for their collaboration in this research; however, their production was supported by a grant from KulturBryggan, the Swedish Arts Grants Committee.
In relation to Case Two: Frictional Realities, Noah Hellwig led the artistic exploration, design, and choreography of the work. The first author led and carried out the research aspects of the project, following performance-led research in the wild [3] expanded with the self-situated performance research approach [59]. The research contribution and involvement therefore included introducing the research concept to the artists, establishing dialogue within the creative process of designing the artwork, and collecting data from the artist-led design process during the Experimental phase. For the first author, this also included being part of the orchestrating team on stage and behind the scenes when required. During the Experimental phase, 8 participants (5 women and 3 men) experienced the performance. Our overall data includes 3 weeks of on-site observations; field notes covering participation, the design process, and its challenges; and video recordings of everyday design activities, staged performances, and artists interacting with participants.
In relation to the Commercial version of Frictional Realities, the first author attended, observed, and collected data and video recordings of all 22 runs of the performance, in which 44 audience members took part. The design and deployment of Frictional Realities was supported by a grant provided by Riksteatern. The grant covered the salaries of four artists (excluding the researcher) for three weeks of participation in the project, as well as the technical equipment required. The grant also provided access to a theatre venue at Riksteatern, Stockholm. All images included in this paper are used with permission from participants, artists, and their production team.
4.1 Data analysis
Drawing on ethnomethodology and interaction analysis, and informed broadly by the conversation analysis literature, we examined the recorded performances to look for revealing cases of interaction with, through, and about the technology [8, 17]. Ethnomethodology focuses on members’ practices, offering a distinctive perspective on pragmatic problem solving: it attends to the ‘how and what’, or ‘ethno-methods’, of choosing how to act in order to deal with a particular situation, and of how we move and coordinate with other people in shared space. In this way, ethnomethodological video-based studies provide a view on practices of problem solving and on interactions with and through technology (e.g., [9, 11]).
We used on-site observations and notes of participation to identify specific examples and incidents of interaction as a starting point. We then looked at the video material of these incidents, building an understanding of each situation. Through repeated analysis of the recordings, informed by the first-person experience of the first author, we focused on understanding the artists' approaches to identifying and solving the practical problems of performing with VR.
In this way we extracted a corpus of 168 videos, distributed across the different performances. In focused analysis sessions, we watched all the video clips of each performance to gain an overview. Informed by our developing analysis, we selected a set of 26 video clips for more in-depth analysis. Each of these clips was selected because it illustrated smooth use of VR, one of the problems emerging from our analysis, or another behaviour that we found interesting. These were analysed in more focused data analysis sessions. Our goal in these sessions was to explore individual differences in the use and management of the assemblage of VR system, artists, and audience-participants. We looked at individual users and performances, at how the experience worked or failed overall, and at how different problems emerged in use.
In presenting this video data we employ Laurier's guidelines for graphic transcription [32] and borrow the visual language of comic books. This allows a clearer description of temporal and physical changes than a traditional conversation analysis transcription, which focuses, understandably, on spoken and communicative acts. These graphic transcriptions should be read left to right, top to bottom, in the western comic tradition. As shown in Fig. 4, for consistency across all the graphic transcriptions: the voice of an off-stage narrator is presented in jagged speech bubbles, while on-stage speech is presented in round speech bubbles; descriptions of the action are in square text boxes; red transparent highlights draw attention to relevant action taking place in the image; and yellow ‘tape’ is used to stitch together two sides of the same frame while omitting the space between participants, actors, or actions to save space. All have frame-by-frame descriptions provided in the alternate text for screen readers, and also in Appendix A.
4.2 Challenges and Limitations
Collaborative working, by definition, involves trade-offs to ensure that the wider goals of both partners are met without unduly impacting the outcome or procedure for either. There are a number of further data sources that could have been collected to enhance our understanding of the methods employed and their impacts on the experience.
Implementing a comprehensive logging framework for system and user actions would have provided an opportunity to complement the qualitative analysis presented here with quantitative work. However, this would have required imposing additional development work on the artists, which fell outside the remit of the grants that supported this work. The potential collection and distribution of such data would also have required informed consent to be collected from audience members, something that was not part of the artists' vision of the production.
Extensive interviews with audience members were also considered as a data source; however, on reflection, the focus on the practices of the artists, combined with the practicalities of back-to-back performances and limited researcher manpower, meant that it was deemed more important to ensure observation during the performances. Beyond the practicalities, it should be noted that audience members in most of these observed productions were paying customers of the artists in question, and imposing additional requests for their time and engagement should be carefully justified with benefits for both the researcher and the artists.
5 MANAGING PROBLEMS
The case studies presented above provided a set of problems encountered by the artists, as well as the multitude of mitigation and coping mechanisms they employed. In this section we discuss three categories of mitigation and coping strategies: how artists and performers successfully stage managed their VR performances; how they choreographed the maintenance tasks the technology needed to keep functioning within the performance; and how they adapted their actions and the performance when things went wrong.
5.1 Stage Managing VR Performances
The most prevalent method for heading off and addressing problems was the adaptation of traditional stage management for VR performances. In Frictional Realities, where the only audience members were wearing HMDs, this was in fact seen as easier than in a traditional stage performance. Here the team were able to move freely around the stage without interrupting the ongoing performance, with some caveats. One was that they had to keep silent. This involved not only careful movement, the removal of shoes, and appropriate clothing, but also the ad hoc development of physical signalling to communicate intentions, problems, and proposed solutions. The shared context of the planned narrative was instrumental in the success of this communication. In the example shown in Fig. 5a, the artist on stage is using their vantage point to signal to the person controlling the technology and the narrator that the participant has completed the requested action (touching a balloon string) and that the animation and narration should continue.
Beyond this, the stage management activities also involved being on hand to clear objects from the path of the participants or stop them for safety reasons or because they were reaching the edge of the performance space. This included subtle cable management to avoid snags or twists that could either disconnect the headsets or pull them without relation to the world that the participants were seeing – breaking immersion.
5.1.1 VR on Stage. This was made more difficult in the on-stage performances of The Shared Individual, which involved the HMD being fed a live 360 video of the performance space. Essentially, in these situations the expanded ‘backstage’ provided by the visual occlusion of the HMD was no longer available. Indeed, in The Shared Individual the position of the 360 camera provided a view of what would traditionally be ‘backstage’ in a live theatre performance, providing even less opportunity for such management (Fig. 6a).
In the experimental version of Frictional Realities (Fig. 6b) the performance itself was not developed for the stage with an audience. The addition of large screens to show the point of view of the participants wearing the HMDs was one of the few changes made, at the request of the national theatre funding the work, to provide a showcase performance for selected members of their institution. One issue that this raised was that while the stage management activities were still invisible to the primary participants in the performance, they were not invisible to the audience. This meant that the audience at times reacted to these actions, which the participants could hear but could not reconcile with the experience that was being presented to them. This was somewhat mitigated by the overall design, which had the two participants independent of each other for the majority of the performance, giving both the ability to pass off audience reactions they could not understand as being in relation to something that the other must have done.
For instance, in the vignette transcribed in Fig. 7, the narrator commented on a participant's movements, implying that the participant's “clumsy” action had caused an object to fall to the ground. Right after this was said, a live performer dropped an empty cardboard poster tube with a loud noise, creating the illusion for both participants that one of their actions had resulted in the incident. During this scene, we saw and heard the audience start laughing at this composition, knowing that it was in fact a trick played by the performers on the participants. Following the audience reaction, the other participant decided to manipulate the audience's reaction (or respond to it) by purposefully dropping an object. Although this created another reaction from the audience, such an unexpected and unscripted change left the actor out of position for the scripted action of receiving an object from the other participant. As he recovered the dropped object from under the table where it had rolled, another member of the team was forced to, slightly belatedly, take the object from the first participant.
5.2 Choreographing for Cables, Consistency, and Charging
In an effort to minimise the amount of reordering, and the possibility of trips and snags, the cables were explicitly taken into account during the iterative design of the performance in Frictional Realities. As the artist walked the interaction through, holding the HMD and discussing the practicalities along with the artistic and technical considerations of each element, the cables were noted and their limitations used to reorder, reorient, and replace ideas.
However, even with this planning the audience in VR had the freedom to explore the world and interact in unexpected ways. The vignette in Fig. 8 shows perhaps the clearest example. The participants are manoeuvring real-world objects connected to models in VR, including a large frame and chairs for them to sit on. This requires both physical coordination between the participants around the hybrid objects, and the close attention of the actors as they play roles in both parts of the scenery (i.e. taking the weight of the frame as it is placed in purely virtual space) and ensure that the audience can safely and securely complete their task. In this run of the performance the participants spun around each other once more than was expected, resulting in the cable snagging despite the best efforts of the team. Indeed, the narrator had to rush out to also provide physical support for the placement of the chair, as the out-of-position audience members resulted in an out-of-position cast as well. One interesting point to make here is how the participant in VR interpreted reaching the end of her cable. Even after the snag was resolved she waited for the chair to be brought to her rather than trying again to reach it, even with the instruction from the narrator, and was hesitant to continue to navigate and interact in the space until the actor physically intervened and helped her sit down on the chair.
In The Shared Individual there was no cable management to contend with, but consistency was a problem. In one part of the performance the audience is drawn into the viewpoint of the artist, looking at themselves from the stage. However, the practical limitations of the network capabilities of the technology meant that there was a perceptible delay in the 360 video stream – which grew longer the more audience members were connected. With the technical solutions to this latency both expensive and likely to delay the expected premiere of the performance, this was solved in an artistic manner: the artists adjusted the narrative before the participants entered VR, explaining that this was a view of themselves momentarily in the future.
“... there is a one minutes delay, that means whatever you see now [in real world] will happen again in one minute [in future]... wave and say hello to your future self... hello future me, hello future me...” [participants are then asked to put the headsets and headphones on].
Frictional Realities also had charging issues. The physical objects which were tracked and shown in some manner in the virtual space used powered trackers attached to them, but even with a supply of extra trackers the limited battery life and the reality of back-to-back runs of the performance for the audience meant that keeping them charged was a carefully choreographed activity. Subtle changes in the ordering of interaction and the placement of objects in both the physical and virtual world were enacted to provide the opportunity for trackers to be removed to charge during each run or while the performance was underway. These had to be carefully planned in order to ensure that there was time to both remove the tracker from the object and place it on charge for the next run, but also so that the artist was available when another physical manipulation of the scene or participant was required.
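The scheduling constraint behind this charging choreography can be expressed as a simple feasibility check. The sketch below is our own illustration, not a tool the artists used; the function name and all numbers are hypothetical.

```python
# Hypothetical sketch of the tracker-charging constraint: with back-to-back
# runs, spare trackers must recover charge at least as fast as the trackers
# in use drain it, or the rotation eventually fails.

def rotation_is_feasible(total_trackers: int, trackers_in_use: int,
                         run_minutes: float, battery_minutes: float,
                         charge_minutes: float) -> bool:
    """Check whether a charging rotation can be sustained indefinitely.

    A tracker drains over `battery_minutes` of use and needs
    `charge_minutes` on the charger to recover fully.
    """
    spare = total_trackers - trackers_in_use
    if spare < 0:
        return False
    if battery_minutes < run_minutes:
        return False  # a tracker cannot even survive a single run
    # Long-run balance: aggregate drain rate vs. aggregate recovery rate.
    drain_rate = trackers_in_use / battery_minutes
    recovery_rate = spare / charge_minutes
    return recovery_rate >= drain_rate

# e.g. 6 trackers, 3 on stage, 40-minute runs, 3 h battery, 2 h full charge:
print(rotation_is_feasible(6, 3, 40, 180, 120))  # → True
```

Under these illustrative numbers the rotation holds; with all trackers in use at once (no spares) it does not, which is why the artists kept a supply of extras and wove the swaps into the performance.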
5.3 Improvising Interventions
The reality of working with new technology, in situations for which it was not designed or tested by the original engineers, is that, at times, it will break down. In the case studies presented here we observed a number of categories of coping strategies employed by artists to ameliorate the impact that these problems have on the ongoing performance.
5.3.1 Temporal Manipulation.
One common thread in the problems encountered was that they took time to resolve, yet time was a resource already employed in the narrative and interaction of the ongoing performance. This produced a tension for the artists, where they needed to ‘make time’ within the performance for the resolution to be effected. Methods employed included improvising additional narrative exposition, presenting exposition at a reduced pace to the participants, and simply stopping the narrative while problems were addressed.
Another method of temporal manipulation observed to be employed by artists was reordering aspects of the performance ‘on the fly.’ In this example (Fig. 10), participants were required to choose a virtual hat from the wall in VR by moving their headset close to the hat to make it appear on their head, and then walk to the virtual mirror and perform some physical movements. During a few runs of the performance, the headset did not activate the hat. The first time this happened we see the narrator tell the participants to move back into the space, close their eyes, and eventually take a seat while the team work together to solve the problem. However, when the same problem presented itself in a later run (Fig. 9) the narrator was able to recognise the issue before the participants were led to choose a virtual hat. To solve the problem the narrator asked participants to ‘re-follow’ the previous instruction, in which the actors physically led the participants through a series of physical movements, with the additional instruction to ‘close your eyes’ both signalling to the actors that there was a technical hiccup and potentially hiding from the participants the disappearance and reappearance of the virtual world. The artist continued to improvise movements with the participants for almost a minute and a half, with glances towards the busy narrator, while she restarted the virtual scene. When the virtual world was available again she continued the narration and the participants were led to the hats to continue. This improvisation gave time for the solution without the need to stop the performance and change the experience to the same extent as in the previous example.
5.3.2 Physical Manipulation. Sometimes, however, the problem would manifest itself in such a way that there was no opportunity for the artists to direct the narrative to provide space for the fix. One reason observed for this was that in some cases there simply was no time, and intervention was necessary immediately. For instance, during the design process of Frictional Realities: Experimental, performers had to help a participant by holding her hands to make sure she did not fall from the bench (Fig. 5b). This was not connected to the experience in VR, nor was it included in the overall story-world of the narration; it was simply a safety measure that was felt necessary if the participants were to be expected to climb up onto the bench while seeing only the virtual representation of that bench through the HMD. In the commercial version of this performance this was removed. In another example during the experimental runs, one participant tried to push the boundaries of the VR space, stepping through the blue wire-frame box of the default warning guidance and out of the area covered by the lighthouse sensors, causing a blackout in their headset. This move resulted in a disconnection in the participant's experience of one of the realities and forced the performer to pause the performance and physically guide the participant back to the space mirrored in VR before continuing with the next scene. This was because the virtual world simply disappeared for the participant, with only a blue grid-pattern visible to them. This behaviour is programmed into the HTC Vive as an indication that the user should return to the grid, which represents the space within the sensors – however this is not an obvious conclusion to draw for someone with limited or no experience using VR, and so the participant simply froze.
Another motivator for direct manipulation was that, in some cases, the necessary changes to the narrative would be too long, complex, or distracting from the artistic goals. For example, at times the cable would become twisted or tangled, despite the best stage management efforts of the artistic team. This could have been partially resolved by narrating the participants through an improvised set of movements that would result in a cable situation which could be transparently managed. However, such a narrative interjection was seen to be problematic in terms of a) fitting the specific movements to the narrative, b) managing this set of movements for one participant while the other waited, and c) understanding the exact movements necessary from the narrator's position. For this reason, it was often preferable to interact with the participant directly and accept a short interruption to the immersion.
5.3.3 Isolation as an Advantage. Some problems could not be fixed on the fly or over the network connection between HMD and the controller. For these the artists were resigned to employing more extreme measures. In The Shared Individual, the Gear VR headset of one of the participants stopped working, disconnecting the participant from the live 360-degree stream. This could only be fixed with a full restart of the phone in the Gear VR, which would take almost a minute. In this case, the participant was told that there was a problem and that they could take off their headset and sit back until the problem was fixed, after which they could continue the performance. As we can see from Figure 11, the other participants and the performers continued with the performance. The nature of the performance and the relative isolation of each participant – through over-ear headphones in combination with the HMD each was wearing – meant that this was able to be done without impacting the other participants. As we see in the next section on priming participants, this was incorporated into the instructions given to participants in later performances, giving them the agency and knowledge necessary to restart their own headset if the same issue occurred.
5.4 Priming Participants
The final management technique involved priming the participants before letting them interact with the virtual space. VR and interaction with virtual objects can be overwhelming, especially for those who have not experienced it prior to the performance. This can stem from a lack of strategies for how to interact with the technology (as with any other type of technology), and a lack of knowledge of the tropes and underlying assumptions of action and reaction that are expected of them. This can not only break the fluidity of the experience for the participant, performers, and those who co-experience the performance, but can also result in stress or embarrassment on the part of the participant. This type of problem is well understood in participatory immersive theatre performances that require the audience's direct input. In such a live performance, performers (through meta-commentary and scripted actions as part of the performance [29, 50]) need to prime the audience within the performance well enough that they feel comfortable taking part and providing meaningful input for the work to continue. While there are techniques to help the participant to deliver the expected performance (such as improvisation), this is much more difficult when the participant needs to become familiar with a new technology to interact with during the live performance. The case studies here prime their participants in very different ways, but with the same goal.
In The Shared Individual, the first scene of the performance was all priming work. An actor takes to the stage and one of the first lines is a question to the audience: “How many of you have tried VR before? Raise your hand.”
This was followed by a detailed explanation using an HMD mounted on a dressmaker's dummy, as can be seen in the background of Fig. 1a and the foreground of Fig. 12, covering everything from the technical aspects of the cables, cameras, screens, fastenings, and the network requirements to live stream 360 video, to what to expect to experience. The two actors walk the participants through these aspects using the visual aids shown in Fig. 12, timing the appearance of each icon on the projector to coincide with its appearance in the explanatory rhyme shown in Fig. 13 as well as the real object being highlighted by a torch held by the second actor. After this there is an attempt to put the audience at ease, explaining that while they are unable to see and hear their surroundings the doors will be locked and ushers will be watching. They are also told that they are free at any time to raise the headset for a break, but that the experience will be continuing without them in the headset. In later runs they also explain and demonstrate the process of restarting the GearVR headset if they do not see anything through the headset. This was especially important in larger theatre settings where it was physically difficult for technicians to reach many of the audience members.
In Frictional Realities, the participants were primed within the experience. They were guided by the narrator through the experiential and interactional qualities of using the technology. This started with them lying on the floor, a shortened version is described in the vignette shown in Fig. 14. They are asked to lie down, close their eyes, and follow the instructions of the narrator to be guided through the experience of VR.
The priming itself is embedded in the exposition of the narrative and the artistic emotive intent. Yet there are clearly pedagogical moments where the expectations of action and reaction are defined for the participants. They are explicitly told that ‘Yes, you are allowed to touch’ as the performance includes virtual, physical, and hybrid objects. They are teased through the description of the cardinal directions which will allow them to orient to the narrative instructions, and slowly encouraged to sit on, move, and touch virtual and hybrid objects in the virtual space before the exposition complexity increases and the actors enter the space. This exposition is responsive to the participants’ actions and reactions, with pedagogical intent – the participants are required to understand what they can and cannot do within the performance before it can proceed. In this way, even though the participants’ first encounter with the performance is when they wear their HMDs, this priming provides space and time for the participants to sensitise to the live performance, letting them explore and interact in VR without the fear of missing out, and without their engagement with either the story or the technology becoming a barrier to the experience of immersion.
This range of methods for priming participants, from inside and outside the narrative structure of the performance and from inside and outside the HMD, provides different opportunities and challenges for the artists and participants. From outside the narrative there is more freedom to be explicit, especially about the technology and expectations of its use, but this can be challenging for the participants to understand in the abstract. Working inside the narrative foregrounds a tension between narrative integrity and the quality and detail of the exposition needed for the participants to gain the knowledge they need to fully engage with the performance going forwards. Performing such priming while the participants are wearing the HMDs has the potential to be much more effective in allowing them to understand the experiential and sensory aspects of what is being explained; however, the explanation itself must either be done blind (to the artist) or be scripted and modelled in the virtual environment, limiting the ability to tailor the priming to the individual.
6 DISCUSSION
The problems and amelioration strategies identified above highlight a number of interesting paths that warrant more research from an academic perspective, more design and development of the technology and its accompanying interaction design, and more experimentation in narrative and experiential development.
While some issues can be put at the feet of current generations of technology, the pace of technological advancement cannot be used as an excuse to ignore the current problems and the design and interaction challenges that they present. This has been a common refrain since Suchman studied photocopiers in the 1980s [54], where problems and shortcomings related to technology are downplayed in favour of the ‘next version.’ As Duguid put it, “Those who complained that machines failed to live up to expectation could simply be told ‘they will improve’ and Moore's Law suggested how” [10]. However, practice shows that those using technology continuously deal with things going wrong. These failures, breakdowns, and inconsistencies demand workarounds and result in continuous decision making as part of an iterative and ongoing process of technology management.
What we draw attention to is that problems are expected and unavoidable [39]. Skill is used to mitigate them in various ways, and to draw inspiration from them. We are not presenting the issues that artists face simply as a list of bugs to fix, but that is not to say that some should not be fixed, or that there are no technological solutions to some of the classes of problems presented. Here we discuss three areas where tools could be developed to help artistic teams plan, manage, and maintain interactive performances – and how these methods could be expanded beyond the field of interactive performances and virtual reality experiences.
6.1 Orchestrating Awareness
As presented in the analysis, while HMDs and VR can at times act as a blindfold to make the visible invisible, hiding performers' unexpected interactions or the messiness of a scene, in most cases it was the artists' performative expertise and their improvisational skills that helped them to mitigate any problems with the technology. Such expertise is necessary not only to interweave the technology into the performance to make it work [2, 46], but also to keep the performance moving forward with less interruption, and to keep it coherent and entertaining. To do this the artists need to be aware of how the audience members are individually and collectively experiencing the artistic work.
Both artistic teams made constant use of the monitoring displays provided by the current generation of technology, which allowed them to ‘see through the eyes’ of the participants at all times. Yet this was a live, transient, and sometimes imperfect view into what the participant was aware of. Building on this current technological tool to provide greater insight into the ongoing and cumulative awareness of audience members is the first suggestion for technological intervention we make.
By taking advantage of new HMDs that incorporate eye trackers, tools can be developed to display whether an audience member has performed a requested action, such as closing their eyes, or is currently looking both towards, and at, a particular virtual object. Even without eye-gaze estimation, such tools can be used to track the past awareness of audience members, allowing the artists to set spatial and temporal targets of attention which can be met and listed. In this way, for example, setting the participants the task of looking at themselves in a virtual mirror can then happen in parallel with preparation for future activities or maintenance of the technology, without the need for constant, real-time monitoring of the audience members’ activities in VR. It also expands the number of participants whose experiences a performer could realistically manage and work with at a time.
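To make the suggestion concrete, such an attention-target tool could accumulate gaze dwell time against named targets in the scene. The sketch below is a hypothetical illustration of ours; the `AttentionTarget` class, its interface, and all values are assumptions, not an existing system.

```python
# Hypothetical sketch: a named spatial target that an artist's dashboard
# could mark as "met" once a participant's gaze has dwelt on it long enough.
import math

class AttentionTarget:
    """A region of the virtual scene the artist wants a participant to see."""

    def __init__(self, name, position, radius, dwell_seconds):
        self.name = name
        self.position = position            # (x, y, z) in scene coordinates
        self.radius = radius                # how close a gaze sample must land
        self.dwell_needed = dwell_seconds   # cumulative gaze time required
        self.dwell_seen = 0.0

    def record_gaze(self, gaze_point, dt):
        """Feed one estimated 3D gaze sample and its sample interval."""
        if math.dist(gaze_point, self.position) <= self.radius:
            self.dwell_seen += dt

    @property
    def met(self):
        return self.dwell_seen >= self.dwell_needed

# 90 samples at 30 Hz, all landing near the mirror: 3 s of dwell, target met.
mirror = AttentionTarget("virtual mirror", (0.0, 1.5, 2.0),
                         radius=0.5, dwell_seconds=2.0)
for _ in range(90):
    mirror.record_gaze((0.1, 1.5, 2.1), dt=1 / 30)
print(mirror.met)  # → True
```

A dashboard listing such targets would let the artist glance at which attention tasks have completed, rather than watching each participant's view stream continuously.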
Beyond monitoring the awareness of the participants, the performances presented here would be able to take advantage of methods to control the barriers between the audience's awareness of the physical and virtual worlds. Such a tool could be envisaged as VRJ (Virtual Reality Jockey) mixing decks, allowing individual or group awareness to be dialled up and down through adjusting noise cancelling headsets and allowing the visual representation of the real world to bleed through the virtual in the headset. Targeted use at an individual participant could be used for isolated and individual troubleshooting, where subset or whole audience manipulation could be an inspiration for more interactive methods of priming or whole new methods of artistic expression.
6.2 Stage Managing Attention
Beyond ensuring that what the participant is aware of is understood by the artist, and can be used as a tool for further enhancing the experience and immersion, the artists also have to be able to effectively direct the attention of those wearing head mounted displays to objects and actions when necessary for the ongoing experience.
The issue of managing the attention of participants in VR has been approached in a number of ways, including using audio [64] and visual cues [53]. Vosmeer & Schouten [64] discussed different narrative-enhancing and -preserving methods in which the use of 3D soundscapes could direct where the participant would look, in order to ensure that they would be looking in the right direction when anything of narrative significance happened. We see some parallels with the reactive, improvised narration provided in Frictional Realities to manage participants' direction of attention.
Speicher et al. [53] provided a comprehensive taxonomy of visual methods for guiding participants in 360 degree videos. The three axes – explicit/implicit, diegetic/non-diegetic, and allows/limits interaction – can also be seen in the external management work done by the artists in our case studies. However, these methods still primarily reside in research prototypes and are hand-crafted for individual experiences and experiments. What would benefit the artists in our study would be a set of simple, yet powerful, tools providing levers to manage participant attention. As with most of the suggestions we put forward, there would be a need to be able to target individuals and groups. By drawing on how attention was managed through narrative exposition, physical manipulation, and audio-visual editing in our two case studies, we can suggest that a combination of visual and audio cues that artists could attach to either virtual objects or cardinal directions in space would provide the flexibility and ease of use necessary to manipulate attention in real time. We envisage a system where objects could be set to ping a set group of participants to draw attention to them, either in response to participant action or the direct intervention of the artist. This would allow for scripted use of the attention management tools for narrative purposes, but also in the ongoing maintenance of the experience. The simple cardinal directions would, in this scenario, be mainly used for priming and to re-orientate a disorientated participant.
On the other side of the head mounted display, as it were, the attention of the artist could also be managed with tools. By this we mean that in a complex, ongoing, and multi-party orchestration, of which both our cases were examples, the ability to quickly, and quietly, draw the attention of other staff members to a shared focus would be very useful. This tool for joint attention could also be used to synchronise attention between the tools themselves and the physical environment. With multiple participants viewing very similar scenes, identifying the connection between one monitoring stream and a particular participant was not always a simple task, so being able to synchronise between giving attention to something in the virtual environment and what that object is in the physical would again allow for smoother management of performances. Beyond this, the ability for a tool to draw the attention of the artists to activities that fall outside the heuristically expected actions and reactions seen in earlier runs of the performance would provide the possibility to intervene before problems occur.
6.3 Managing Movement
While the work described in the last sections on awareness and attention does not speak directly to guiding the participants through space, the research conducted on these topics provides a valuable starting point for tools to intervene in the movement of participants. More complex manipulation of the movement of those in VR can be seen in the range of research aimed at allowing users to seamlessly experience a larger environment than they are physically able to move around [37, 63], with a taxonomy of redirection techniques presented by Kunz et al. [31]. However, the work on effective redirection techniques is still ongoing, and has some caveats on the distances and angles that can be effectively manipulated as well as some impacts on the experience of ‘VR sickness’.
The artists in our case studies managed to manipulate the movements of their audience in less subtle, but effective, ways: initially with narrative cues as to where they should go, but relying on physical intervention as a backstop to ensure that the experience was able to progress and that the participants were never in physical danger. As suggested by Dao et al. [6], the opportunity to dynamically adjust the virtual environment to guide or direct users around physical barriers to the ongoing experience would be an improvement. However, the control and results of these changes would have to be able to be woven into the ongoing management of an artistic experience. Artists also had to manage and script their own movements, as they had interwoven responsibilities that included cable management, stage furniture manipulation, and acting as part of the ongoing narrative.
This was planned physically, and walked through multiple times, but still ran into problems when confronted with the reality of performing with participants. Tools to enhance the planning of movements in hybrid virtual and physical environments are the first suggestion we present in this section. Such a tool could take advantage of the virtual world and tracked objects already modelled for the creation of any such experience, with some additional information such as the lengths of cables and the positions of computers and other devices. We envisage a system that would allow the artist to walk through and record a performance as one participant, then manipulate space and position on playback – showing them how different spatial configurations would impact the virtual, physical, and technological objects used.
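At its simplest, the walkthrough playback we envisage could flag recorded positions where a tethered HMD would run out of cable under a proposed spatial configuration. The following is a minimal sketch under our own assumptions; the function name, the flat straight-line cable model, and all values are illustrative.

```python
# Hypothetical sketch: replay a recorded walkthrough path against a cable
# anchor position and flag samples beyond the usable tether length.
import math

def cable_violations(path, anchor, cable_length, margin=0.3):
    """Return indices of path samples that exceed the usable cable length.

    path: list of (x, y) floor positions recorded during a walkthrough
    anchor: (x, y) position of the computer/cable anchor
    margin: slack reserved for drape and twist (same units as positions)
    """
    usable = cable_length - margin
    return [i for i, p in enumerate(path)
            if math.dist(p, anchor) > usable]

# A recorded path in metres; sample 3 strays 5.0 m from a 5.0 m cable
# anchored at the origin, beyond the 4.7 m usable length:
recorded_path = [(0, 0), (1, 0), (2, 1), (4, 3), (1, 1)]
print(cable_violations(recorded_path, anchor=(0, 0), cable_length=5.0))  # → [3]
```

On playback, the artist could drag the anchor or reorder scenes and immediately see which recorded moments would snag, rather than discovering them in rehearsal.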
Not only could such a tool be used to avoid problems, but where problems were possible or probable, visual stimuli could be scripted in much the same way we suggest is possible for audio above. As intervention points are determined, either through repeated runs of a performance and repeatedly observing participants doing unexpected things, or through careful planning, the actions taken to intervene can be planned in layers, with implicit, non-diegetic actions attempting to move participants seamlessly back on track, followed by more explicit and diegetic interventions when subtlety has failed. This, as we saw in the analysis, would of course be at the mercy of the participants actually following this guidance.
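The layered planning described here can be pictured as a ladder of cues, tried in order of increasing explicitness. The sketch below is our own illustration; the specific cues, their classification, and the function name are hypothetical, loosely following the explicit/implicit and diegetic/non-diegetic axes of Speicher et al. [53].

```python
# Hypothetical sketch of layered interventions: each failed attempt moves
# one rung up the ladder, from subtle nudges to physical guidance.

INTERVENTION_LADDER = [
    {"cue": "subtle light shift toward target",        "explicit": False, "diegetic": False},
    {"cue": "spatialised sound from target",           "explicit": False, "diegetic": True},
    {"cue": "narrator line directing the participant", "explicit": True,  "diegetic": True},
    {"cue": "performer physically guides participant", "explicit": True,  "diegetic": False},
]

def next_intervention(attempts_so_far: int) -> dict:
    """Pick the next rung, capping at the most explicit intervention."""
    i = min(attempts_so_far, len(INTERVENTION_LADDER) - 1)
    return INTERVENTION_LADDER[i]

print(next_intervention(0)["cue"])  # the first attempt is the most subtle
print(next_intervention(9)["cue"])  # repeated failure ends in physical guidance
```

This mirrors what we observed in practice: narrative and environmental nudges first, with direct physical intervention reserved as the backstop.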
Another aspect of the management of physical movement was the occasional necessity to physically touch and manhandle the participants into position. The artists took care to announce their intentions and gently initiate the physical connection before applying the manipulative pressure necessary to get the participant into the correct position or orientation. As a companion tool to the VRJ outlined above, which allows for control of the audio-visual barriers between the virtual and physical worlds, the ability for artists to ‘step into’ the virtual scene for a subset of participants, giving visual warning before they physically interacted with them, would again enhance the ongoing immersion of the participants.
6.4 Applying Monitoring and Maintenance Procedures Outside VR
One area of interest and concern for HCI more widely is the maintenance of technology introduced to spaces without a dedicated, or present, technical support team. There has been a lot of work to understand who [19, 61], when [45, 56], and how [44, 62] the maintenance tasks around technology, such as updating software or adding and removing devices, are done.
From what we have presented here, we suggest that there may be another avenue that this research can take by learning from and appropriating work done on understanding diegetic interactions. The maintenance tasks outlined above were able to be woven into the natural, ongoing flow of the technology as it was being used for its intended purpose. While this was unable to account for all the maintenance tasks, it is a marked departure from the ‘blinking light’ calling for attention without a clear direction or purpose. As technology starts to become more embedded in everyday life, scripting the lifeworlds of technical additions to our environment could enable a fluid integration of monitoring and maintenance tasks with our everyday lives. Receiving a monthly email from your internet connected fire alarm informing you that your home has not burnt down [66], or sending notifications to one user's mobile device that the robot vacuum cleaner is 80% along a maintenance cycle – but nothing needs to be done at that time – can be seen as crude attempts at embedding the technology in the users’ consciousness to ensure that maintenance tasks are attended to in a timely manner. Or, more cynically, to ensure brand recognition and promote word-of-mouth advertising. Geerts et al. present a vision of what they call a Hyper-personalized watching experience [18], which in its use of naturalistic dialogue and ascription of agency to the technical system provides an opportunity to interweave not only media and entertainment, but also maintenance and monitoring tasks of the smart environment. Indeed, as many visions of the smart environments of the future are highly interconnected [51], such scripted, opportunistic, and engaging interactions could be used to embed maintenance and monitoring seamlessly in the ongoing experience of use – this could, potentially, provide the users with a sense of control over problematic aspects of IoT such as data management and security monitoring.
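The notification examples above can be sketched in a few lines. This is an illustrative toy, not a proposal from the paper: the device names, threshold, and phrasings are all assumptions. It shows the basic move of wrapping a routine maintenance reading in conversational, diegetic phrasing, only calling for user action once a threshold is crossed.

```python
def diegetic_status(device, cycle_pct, action_threshold=95):
    """Wrap a maintenance reading in narrative phrasing; only ask the
    user to act once the maintenance cycle crosses the threshold."""
    if cycle_pct < action_threshold:
        return (f"{device} here: {cycle_pct}% through my maintenance "
                f"cycle, nothing for you to do yet.")
    return (f"{device} here: I'm due for a check-up. "
            f"Could you empty my bin when you pass by?")

# A reading below the threshold embeds the device in the user's awareness
# without demanding action; one above it requests a concrete task.
print(diegetic_status("Robot vacuum", 80))
print(diegetic_status("Robot vacuum", 97))
```

A scripted smart environment would go further, timing such messages to moments in the ongoing experience of use rather than emitting them on a fixed schedule.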
7 CONCLUSION
In this paper we have provided insight into the normal, natural troubles of performing with VR headsets. We have focused on the problems and solutions that artist groups encountered and developed during two different interactive performances with VR. We explored the complexities of stage managing VR performances, the proactive planning that can be incorporated into the very fabric of the performance to ameliorate some issues, and how unexpected problems were dealt with when ‘the show must go on.’ By documenting these everyday problems, and how practitioners make use of technology, adapt it to their needs, and interweave it into their ongoing work, we contribute to the ongoing research on the studies of technology, specifically the study of artistic production and practice. Such a contribution can be seen as a resource for understanding technology in the wild, practitioners’ long-established skills, and the design of new technologies and experiences.
Through video analysis of these performances, we presented techniques that artists employ to stage manage VR performances, recover from technological problems, choreograph cables, maintain the technology, and finally prime the participants using their performative skills and expertise in designing a coherent narrative.
In the discussion we have outlined four possible tools that would help artists, and others producing and managing content in VR, focus not on the technology and its ongoing maintenance but on the artistic experience they want to impart to their audience. For the management of ongoing performances we proposed tools that aim to expand the artists’ understanding of the audiences’ awareness in VR, manage what they can see and hear of the virtual and physical worlds in real time by way of a VRJ system, and manipulate the attention of the participants towards events and objects of narrative significance. We also proposed a tool to help in the planning of performances, taking into account varying numbers of participants, different configurations of performance spaces, and the limitations of the available technology.
In understanding the management and planning of such interactive performances with VR, we aim to provide a base for the community not only to support the artistic community, but also to learn from the creative manipulations of participants, technology, and narrative for the development of interactions across a wide range of constellations and contexts.
ACKNOWLEDGMENTS
This work was partially supported by grants from Riksteatern and Stockholms Stad to develop and perform Frictional Realities. The Shared Individual was developed and performed with support from KulturBryggan. The authors would like to thank Noah Hellwig, Disa Krosness, Emelie Wahlman, Leo Låby, Gabriel Widing, Emma Bexell, Stefan Stanisic, Johan K. Jensen and Mads Damsbo for providing us the opportunity to study their artistic productions.
A DESCRIPTIVE TEXT FOR IMAGES AND GRAPHIC TRANSCRIPTIONS
A.1 Figure 1a
Description of Figure 1a - Caption: ‘Stage orientation and organisation of The Shared Individual: (a) Priming participants.’
Scene: The secondary performer gestures to the audience in the foreground; behind him to the left is the primary performer without the head-mounted camera system. To the right is a full-size black dressmaker's dummy wearing the head-mounted camera and microphone system.
A.2 Figure 1b
Description of Figure 1b - Caption: ‘Stage orientation and organisation of The Shared Individual: (b) Main performer's technology set-up.’
Scene: Close up of the main performer of The Shared Individual. She is wearing a black top with a high neckline. From above the mouth her face is obscured by the black head mounted cameras (one over each eye), the black helmet to hold them, and the black microphones on either side.
A.3 Figure 1c
Description of Figure 1c - Caption: ‘Stage orientation and organisation of The Shared Individual: (c) Audiences’ set-up (IDFA 2016).’
Scene: Shot from backstage, the secondary performer stands behind the main performer who is seated wearing the head-mounted camera and microphones. Beyond them is a traditional theatre with red seats, mostly occupied by audience members wearing head mounted displays.
A.4 Figure 2a
Description of Figure 2a - Caption: ‘Establishing the physical synchronisation between the audience (a) and performer (b).’
Scene: Close shot of a single audience member wearing an HMD with his hands raised in front of the VR headset. Six other audience members around him are doing the same.
A.5 Figure 2b
Description of Figure 2b - Caption: ‘Establishing the physical synchronisation between the audience (a) and performer (b).’
Scene: The main performer sits in an office chair, arms outstretched and wearing the head mounted camera and microphones. Behind her the second performer stands and adjusts the technology on the main performer's head.
A.6 Figure 2c
Description of Figure 2c - Caption: ‘Establishing the sensual synchronisation between the audience (c) and performer (d).’
Scene: Two rows of four participants wearing HMDs. In the first row three are raising their left hand and looking towards it, while the participant second from the camera looks straight ahead.
A.7 Figure 2d
Description of Figure 2d - Caption: ‘The black screen on the right-side of (d) would block audience's view.’
Scene: Image from the stereoscopic camera showing one performer kneeling in front of the legs of the artist wearing the camera, and a black screen to the left obscuring the audience.
A.8 Figure 3a
Description of Figure 3a - Caption: ‘Performing Frictional Realities on stage in front of a selected audience as part of the design process. Photos courtesy of [anon].’
Scene: Image taken from backstage, looking out towards the audience. In the foreground is a participant lying on the ground with her head towards the left of the frame, wearing an HMD. The other participant can be seen beyond them in the opposite orientation. At their feet there is a desk, and between them there are two chairs and other objects. The artist is standing adjusting an object on the closest desk.
A.9 Figure 3b
Description of Figure 3b - Caption: ‘Performing Frictional Realities on stage in a commercial setting without the audience. Photos courtesy of [anon].’
Scene: On a large red floor mat two participants in HMDs lie on top of light futons. At their feet each has a chair. Between them are a pair of suitcases and rattan baskets. At their head each has a desk. To the left of the frame the glow of the controlling computer screen can be seen.
A.10 Figure 4
Frame by Frame description of Figure 4 - Caption: ‘Examples of the comic visual language used in this paper.’
Frame 1
Description Text: descriptions are in boxes like this, read from left to right top to bottom
Scene: Two participants from Frictional Realities: Commercial wearing VR headsets with their backs to the camera, each interacting with a hanging rail of clothes.
Narrator: Narrator's exposition is in jagged speech bubbles
Frame 2
Description Text: Highlights are in red and scenes that are too wide are cut, and stitched together with yellow tape
Scene: Split Frame. On the left is the main actor from The Shared Individual, facing the camera with her hands clasped in front of her. In the centre is a black dressmaker's dummy wearing a head mounted camera and microphone helmet. Between them are two HDMI cables, highlighted by a red circle. On the right the second actor stands facing the camera with one hand behind his back, pointing at head height with his right hand towards the dressmaker's dummy.
Speech: MAIN ACTOR: Speech by participants or actors is in round bubbles.
A.11 Figure 5a
Description of Figure 5a - Caption: ‘Signalling next action, communicating with technology operator.’
Scene: On the left of the scene the artist signals with his left hand outstretched, index finger raised, towards the camera. In his right hand he holds a piece of string which is attached to an overhead lighting rig. A participant in an HMD is standing on a red office chair reaching up to the top of the string. In the background a cast member takes a picture on a mobile phone.
A.12 Figure 5b
Description of Figure 5b - Caption: ‘Physical manipulation.’
Scene: On the right and standing on a long bench is a participant, wearing a HMD and holding the hands of a cast member. The cast member stands on the floor in front of the participant, arms up to guide them. Behind the cast member the artists stands waiting.
A.13 Figure 6a
Description of Figure 6a - Caption: ‘The Shared Individual, during the rehearsal. The production team and their equipment are visible to the participants in VR as they see themselves from the main performer's POV.’
Scene: In the foreground with their backs to the camera are two technicians in front of two screens showing the views from various VR headsets. To the left is the performer wearing a head mounted camera and microphones, seated on an office chair in front of four audience members in a row at the right of the frame all wearing HMDs.
A.14 Figure 6b
Description of Figure 6b - Caption: ‘Frictional Realities: Experimental, during the showcase in front of a selected audience. Extended backstage and the crew which are positioned on the left side of stage are visible to audience members of the performance.’
Scene: Wide angle shot of a theatre stage. Two rows of audience members, facing the stage and away from the camera, are visible in the foreground. At the front of the stage two large screen TVs, one on either side, display the view from the participant's VR headsets. On stage are two participants and an actor in the centre with two desks as part of the performance. Around them are 3 desks with technology and the crew.
A.15 Figure 7
Frame by Frame description of Figure 7 - Caption: ‘The reaction of the audience breaks the separation between the experience of the two participants.’
Frame 1:
Scene: Two participants wearing VR headsets on a black stage, one on the right of the image and one on the left. Each stands at a table with a tablecloth looking at objects on the table. P1 on the left has the actor standing to her right at the table.
Narrator: The participant clumsily dropped an item on the ground
Frame 2
Description Text: The actor throws the tube. The audience laugh.
Scene: Close up of P1 and the actor. A poster tube is falling towards the floor on the right of the image, its trajectory shown by a thick red line starting from the artist.
Frame 3
Description Text: As the actor retrieves the tube, P2 deliberately pushes her virtual plant off the table.
Scene: On the left the artist bends and reaches for the poster tube. On the right P2 is hunched over the table with her left arm outstretched pushing a VR tracker off the edge of the table. Her arm and the tracker are highlighted by a red triangle.
Frame 4
Description Text: As the actor tries to find the object...
Scene: P2 faces the camera, standing behind the table looking down. The artist is on hands and knees reaching under the table from the right.
Frame 5
Description Text:...P1 gets to the point in the script where they should give him something...but he is not in position.
Scene: Wide image. On the left, highlighted by a red circle, P1 is holding out a yellow object with a VR tracker. On the right P2 is looking at the table and the actor kneels behind her, looking at P1. Behind P2 at the back of the stage another team member is taking photos, the distance between the team member and P1 is highlighted by a thick red line.
Frame 6
Description Text: Another team member rushes over just in time.
Scene: Close up on P1 and the team member who was taking photographs. The team member has their arms outstretched even as they are moving into position to take the object being handed to them.
A.16 Figure 8
Frame by Frame description of Figure 8 - Caption: ‘The participants move out of position, causing the cable to run out before they can reach their goal.’
Frame 1:
Description Text: They Manhandle the Frame.
Scene: Two participants wearing VR headsets on a red mat with a black background. They have their backs to the camera, holding a large thin empty wooden frame between them with a VR tracker attached. An actor looks on holding a cable.
Frame 2
Description Text: They collect the first chair...but the participants end up out of position.
Scene: Two participants wearing VR headsets, helped by an actor, carry a chair. Another actor holds the wooden frame in position.
Frame 3
Description Text: The actor tries to keep the cable from snagging... but P2 reaches the end of the cable and can't reach the second chair.
Scene: In the centre P1 stands between the chair and the wooden frame being held by the second actor. The first actor is to the left, holding a cable highlighted by a red circle. The cable stretches behind the two in the centre to P2 on the right. P2 reaches for the chair at the edge of the screen on the right, but can't reach it. The cable is pulled tight, pulling her green jacket up, as highlighted by a second red circle.
Frame 4
Description Text: The actor pulls the wire over the head of the other participant...
Scene: The actor on the left reaches up to move the cable over the second actor (P1) the chair and the frame in the centre of the scene, her hand is highlighted by a red circle. P2 stands on the right with the cable pulling up her jacket.
Frame 5
Description Text: but even with prompting P2 just waits as the narrator moves the chair for her.
Scene: The first actor is now on the near side of the frame and first chair, with the cable clear. P2 stands where she was in the previous frame. The narrator has entered the scene and is holding the second chair.
Frame 6
Description Text: She has to be physically guided to the chair to re-engage with the space...
Scene: P1 is sitting on the first chair, visible through the empty wooden frame held by the second actor. The first actor reaches out and touches P2 from behind on her shoulder and right hand – both places highlighted by red circles – guiding her towards the chair that is positioned opposite P1.
Frame 7
Description Text:...before the experience can continue.
Scene: Two participants wearing VR headsets, sitting face to face with an empty frame between them held by the second actor. The first actor is passing a tray with various objects on it to both of them at the same time.
A.17 Figure 10
Frame by Frame description of Figure 10 - Caption: ‘The hat doesn't activate, the first time the participants have to sit and wait until the problem is found and solved.’
Frame 1
Description Text: P2 attempts to reach for the hat, as narrated
Scene: P2, wearing a VR headset, reaches out with his left hand for a virtual hat.
Frame 2
Description Text: Leaning more and more
Scene: P2 leans forward, stepping over the edge of the red mat he is standing on.
Frame 3
Scene: Split frame. On the left P1 is standing at the edge of his area waiting. On the right P2 is stepping off the area and leaning forward.
Narrator: You take a step backwards
Frame 4
Description Text: Both move back, away from the edges.
Scene: Split frame. On the left P1 is standing at further back into the mat. On the right P2 has also stepped back and is standing waiting.
Frame 5
Description Text: The narrator tries to fix the problem.
Scene: The narrator's face is lit by the glow of two computer screens, her hands hover over the keyboard.
Frame 6
Description Text: She tells them to close their eyes as the actors gesture back and forth.
Scene: The two participants stand waiting back to back across the scene; a cut-out image of the actor in the centre zooms in on his hand gesture to the narrator.
Narrator: You close your eyes.
Frame 7
Description Text: He runs to help...
Scene: Close up of the actor running across the stage to the narrator.
Frame 8
Scene: The narrator and the actor point at the screens
Frame 9
Description Text: The improvisation surprises the second actor; she only manages to get a chair to P2 before P1 sits on the floor
Scene: Split Frame. On the right the second actor is moving a chair next to P2, on the left P1 is in the process of lowering himself to sit on the floor.
Narrator: You take a seat.
Frame 10
Description Text: Finally Fixed! The actors get back in position
Scene: P2 is sitting on the chair waiting, behind him the actor is getting back into position to continue the performance.
A.18 Figure 9
Frame by Frame description of Figure 9 - Caption: ‘The hat doesn't activate, the team improvises around the reset of the virtual environment.’
Frame 1
Description Text: P2 attempts to reach for the hat, as narrated
Scene: Split frame. The participants wearing VR headsets hold the wrists of the actors as they lead them through physical movements.
Narrator: You re-model and close your eyes.
Frame 2
Description Text: The actors continue to improvise while waiting for updates.
Scene: The re-modeling continues, with the participants holding the actors as they move. The actor with P1 looks towards the narrator.
Frame 3
Description Text: Narrator works to reset the VR world.
Scene: The narrator works at the keyboard, hand reaching for a mouse.
Frame 4
Description Text: The hats start working and the show continues.
Scene: The actors lead the participants towards the position of the next part of the experience.
Narrator: You follow Mimi.
Frame 5
Scene: P2 explores the virtual scene.
A.19 Figure 11
Frame by Frame description of Figure 11 - Caption: ‘One participant has their headset removed and restarted without stopping or changing the experience for the others.’
Frame 1
Description Text: The actor notices a problem with P4
Scene: Four participants sit in a row, facing right, all wearing head mounted displays. Behind them there is a desk with a computer screen. One actor's head peeks over the top of the screen, the other is standing looking at P4, the furthest participant from the camera.
Frame 2
Description Text: Lifts his headset to explain his actions
Scene: The actor has his head close to P4’s left ear and has lifted his headphones so he can explain what is happening quietly.
Frame 3
Description Text: Removes the headset and earphones
Scene: The actor is standing straight, holding the headphones in his right hand and the HMD in his left.
Frame 4
Description Text: Looks into the headset and resets
Scene: The actor is holding the headset to his face with two hands, looking to the right.
Frame 5
Description Text: Then places the devices on P4
Scene: The actor is placing the HMD over the head of the participant.
Frame 6
Description Text: P4 is then able to rejoin the performance, which has continued uninterrupted.
Scene: Split frame. On the left the four participants are holding their hands in front of their face while wearing HMDs and earphones. On the right the main performer sits wearing the 360 camera and audio headset while the secondary performer leans towards her and continues the narrative.
A.20 Figure 12
Description of Figure 12 - Caption: ‘The Shared Individual: Priming visual aids’
Scene: In the foreground is the black dressmaker's dummy with the head-mounted camera and microphone system. Behind is a projection of the whole technical system using simple icons, showing the connection between the input headset, the various computational units, and the wireless transmission to the output headsets.
A.21 Figure 13
Description of Figure 13 - Caption: ‘The rhyme used to prime the audience on the technology in use.’
Scene: Text: We got two GoPros sitting side by side, (.) intraocular distance is six-point-five,
true 3D stereoscopy, (.) oozing out the signal in 1080p,
but that's not all, as you can see, (.) we are also working, binaurally.
One mic per ear, will make you hear, (.) this performance, loud and clear.
So one signal, per camera eye, (.) go through these two HDMI,
and two BlackMagics grab the stream, (.) and shoots it like a laser beam
to the VJ system over there, (.) we've hacked to make the stream appear
as an equirectangular top-bottom feed, (.) that the trashcan of a CPU will need
for the Wowza streaming engine to run fast (.) and make it possible for Wirecast
to bring the broadcast nice and steady (.) through the air, and we're here already
where finally it ends up in our Homepack app, (.) that right now is running in the headset in your lap
and due to the network capacity, (.) there will be a thirty second latency.
A.22 Figure 14
Frame by Frame description of Figure 14 - Caption: ‘The priming on how to behave and interact in VR was performed through narrative exposition.’
Frame 1
Narrator: Welcome. Close your eyes, and relax. Follow the instructions and you know all will be well. You take a deep breath in and out. Keep breathing deeply and I will count down from 10 to 0.
Scene: Two participants lie on light futons wearing VR headsets. Further from the camera, P1 on the left has her hands folded over her stomach, while P2 has her arms on the floor to her sides.
Frame 2
Description Text: The narrator counts down slowly from 10 to 0, with relaxation exercises and reassurance between each number
Scene: The two participants stay in the same positions as frame 1, the lights have lowered.
Frame 3
Narrator: You open your eyes and gaze up into the ceiling. You are not afraid, and yes, you are allowed to touch
Scene: Close up of P1. She is lying on the futon looking up. She has her left hand still resting on her stomach, her right hand reaches out in front of her.
Frame 4
Narrator: You sit up and have a look around you
Scene: Close up of P2. She is sitting cross-legged looking to the left of the frame.
Frame 5
Narrator: You stand up and look to the east. You think you are looking to the east, but are looking northwest.
Scene: Close up of P1. She is standing, facing left.
Frame 6
Narrator: Embarrassed by this, you go to the chair
Scene: Image of P1 standing looking down at the right of the frame, a brown wooden chair is in the centre of the shot.
Frame 7
Narrator: You get a clear sensation of this chair, by touching it.
Scene: P1 is leaning over the brown wooden chair, right hand outstretched towards the top of the back of the chair.
REFERENCES
- Louise Barkhuus and Chiara Rossitto. 2016. Acting with Technology: Rehearsing for Mixed-Media Live Performances. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems(CHI ’16). ACM, New York, NY, USA, 864–875. https://doi.org/10.1145/2858036.2858344
- Steve Benford and Gabriella Giannachi. 2011. Performing Mixed Reality. MIT Press.
- Steve Benford, Chris Greenhalgh, Andy Crabtree, Martin Flintham, Brendan Walker, Joe Marshall, Boriana Koleva, Stefan Rennick Egglestone, Gabriella Giannachi, Matt Adams, Nick Tandavanitj, and Ju Row Farr. 2013. Performance-Led Research in the Wild. ACM Trans. Comput.-Hum. Interact. 20, 3 (July 2013), 14:1–14:22. https://doi.org/10.1145/2491500.2491502
- Steve Benford, Chris Greenhalgh, Gabriella Giannachi, Brendan Walker, Joe Marshall, and Tom Rodden. 2012. Uncomfortable Interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2005–2014.
- Andy Crabtree, Steve Benford, Tom Rodden, Chris Greenhalgh, Martin Flintham, Rob Anastasi, Adam Drozd, Matt Adams, Ju Row-Farr, Nick Tandavanitj, and Anthony Steed. 2004. Orchestrating a Mixed Reality Game ’on the Ground’. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems(CHI ’04). ACM, New York, NY, USA, 391–398. https://doi.org/10.1145/985692.985742
- Emily Dao, Andreea Muresan, Kasper Hornbæk, and Jarrod Knibbe. 2021. Bad Breakdowns, Useful Seams, and Face Slapping: Analysis of VR Fails on YouTube. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems(CHI ’21). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3411764.3445435
- Nima Dehghani. 2016. Decompensation. http://www.nimadehghani.com/.
- Arnulf Deppermann. 2013. Multimodal Interaction from a Conversation Analytic Perspective. Journal of Pragmatics 46, 1 (Jan. 2013), 1–7. https://doi.org/10.1016/j.pragma.2012.11.014
- Paul Dourish and Graham Button. 1998. On ”Technomethodology”: Foundational Relationships Between Ethnomethodology and System Design. Human–Computer Interaction 13, 4 (Dec. 1998), 395–432. https://doi.org/10.1207/s15327051hci1304_2
- Paul Duguid. 2012. On Rereading Suchman and Situated Action. Le Libellio d'Aegis 8, 2 (2012), 3–11.
- Mustafa Emirbayer and Douglas W. Maynard. 2011. Pragmatism and Ethnomethodology. Qualitative Sociology; New York 34, 1 (March 2011), 221–261. https://doi.org/10.1007/s11133-010-9183-8
- Sheena Erete and Jennifer O. Burrell. 2017. Empowered Participation: How Citizens Use Technology in Local Governance. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 2307–2319.
- Sara Eriksson, Åsa Unander-Scharin, Vincent Trichon, Carl Unander-Scharin, Hedvig Kjellström, and Kristina Höök. 2019. Dancing With Drones: Crafting Novel Artistic Expressions Through Intercorporeality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems(CHI ’19). ACM, New York, NY, USA, 617:1–617:12. https://doi.org/10.1145/3290605.3300847
- Sarah Fdili Alaoui. 2019. Making an Interactive Dance Piece: Tensions in Integrating Technology in Art. In Proceedings of the 2019 on Designing Interactive Systems Conference(DIS ’19). ACM, New York, NY, USA, 1195–1208. https://doi.org/10.1145/3322276.3322289
- Blanca Li Company. 2020. Le Bal de Paris de Blanca Li. https://www.blancali.com/en/event/146/Le-Bal-de-Paris.
- Diana Freed, Jackeline Palmer, Diana Minchala, Karen Levy, Thomas Ristenpart, and Nicola Dell. 2018. “A Stalker's Paradise”: How Intimate Partner Abusers Exploit Technology. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–13.
- Harold Garfinkel. 1991. Studies in Ethnomethodology (1st ed.). Polity, Cambridge, UK.
- David Geerts, Evert van Beek, and Fernanda Chocron Miranda. 2019. Viewers’ Visions of the Future. In Proceedings of the 2019 ACM International Conference on Interactive Experiences for TV and Online Video(TVX ’19). Association for Computing Machinery, New York, NY, USA, 59–69. https://doi.org/10.1145/3317697.3323356
- Rebecca E. Grinter, W. Keith Edwards, Mark W. Newman, and Nicolas Ducheneaut. 2005. The Work to Make a Home Network Work. In ECSCW 2005, Hans Gellersen, Kjeld Schmidt, Michel Beaudouin-Lafon, and Wendy Mackay (Eds.). Springer Netherlands, Dordrecht, 469–488. https://doi.org/10.1007/1-4020-4023-7_24
- Jan Gugenheimer, Evgeny Stemasov, Julian Frommel, and Enrico Rukzio. 2017. ShareVR: Enabling Co-Located Experiences for Virtual Reality Between HMD and Non-HMD Users. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems(CHI ’17). ACM, New York, NY, USA, 4021–4033. https://doi.org/10.1145/3025453.3025683
- Giulio Iacucci, Carlo Iacucci, and Kari Kuutti. 2002. Imagining and Experiencing in Design, the Role of Performances. In Proceedings of the Second Nordic Conference on Human-Computer Interaction(NordiCHI ’02). ACM, New York, NY, USA, 167–176. https://doi.org/10.1145/572020.572040
- Etsuko Ichihara. 2014. 最高の求愛体験をあなたに。SRxSIxMS. http://etsuko-ichihara.com/works/最高の求愛体験をあなたに。srxsixms/.
- Alejandro G. Iñárritu. 2017. Carne y Arena. https://carne-y-arena.com.
- Rachel Jacobs, Steve Benford, and Ewa Luger. 2015. Behind The Scenes at HCI's Turn to the Arts. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems(CHI EA ’15). ACM, New York, NY, USA, 567–578. https://doi.org/10.1145/2702613.2732513
- Marie Jourdren. 2018. The Horrifically Real Virtuality. https://www.labiennale.org/en/cinema/2018/lineup/venice-virtual-reality/horrifically-real-virtuality.
- Marie Jourdren and Mathias Chelebourg. 2017. Alice, the Virtual Reality Play. https://futureofstorytelling.org/project/alice-the-virtual-reality-play.
- Shunichi Kasahara and Jun Rekimoto. 2014. JackIn: Integrating First-Person View with out-of-Body Vision Generation for Human-Human Augmentation. In Proceedings of the 5th Augmented Human International Conference (Kobe, Japan) (AH ’14). Association for Computing Machinery, New York, NY, USA, Article 46, 8 pages. https://doi.org/10.1145/2582051.2582097
- Jarrod Knibbe, Jonas Schjerlund, Mathias Petraeus, and Kasper Hornbæk. 2018. The Dream Is Collapsing: The Experience of Exiting VR. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems(CHI ’18). ACM, New York, NY, USA, 483:1–483:13. https://doi.org/10.1145/3173574.3174057
- Boriana Koleva, Ian Taylor, Steve Benford, Mike Fraser, Chris Greenhalgh, Holger Schnädelbach, Dirk vom Lehn, Christian Heath, Ju Row-Farr, and Matt Adams. 2001. Orchestrating a Mixed Reality Performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, Seattle, Washington, USA, 38–45.
- Veronika Krauß, Alexander Boden, Leif Oppermann, and René Reiners. 2021. Current Practices, Challenges, and Design Implications for Collaborative AR/VR Application Development. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems(CHI ’21). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3411764.3445335
- Andreas Kunz, Markus Zank, Morten Fjeld, and Thomas Nescher. 2016. Real Walking in Virtual Environments for Factory Planning and Evaluation. Procedia CIRP 44 (Jan. 2016), 257–262. https://doi.org/10.1016/j.procir.2016.02.086
- Eric Laurier. 2014. The Graphic Transcript: Poaching Comic Book Grammar for Inscribing the Visual, Spatial and Temporal Aspects of Action. Geography Compass 8, 4 (2014), 235–248. https://doi.org/10.1111/gec3.12123
- Corinne Linder. 2018. The Ordinary Circus Girl » Program • Biennale Internationale Des Arts Du Cirque • BIAC. https://www.biennale-cirque.com/en/program/the-ordinary-circus-girl-248.
- Marie-G. Losseau and Yann Deval. 2020. Atlas. https://www.atlas-experience.xyz/.
- Tiffany Marques, Mário Vairinhos, and Pedro Almeida. 2019. How VR 360° Impacts the Immersion of the Viewer of Suspense AV Content. In Proceedings of the 2019 ACM International Conference on Interactive Experiences for TV and Online Video (TVX ’19). Association for Computing Machinery, New York, NY, USA, 239–246. https://doi.org/10.1145/3317697.3325120
- Tupac Martir. 2019. Cosmos Within Us - Memory Is All We Are. https://www.a-bahn.com/projects/cosmos-within-us.
- Sebastian Marwecki and Patrick Baudisch. 2018. Scenograph: Fitting Real-Walking VR Experiences into Various Tracking Volumes. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST ’18). Association for Computing Machinery, New York, NY, USA, 511–520. https://doi.org/10.1145/3242587.3242648
- John McCarthy and Peter Wright. 2004. Technology As Experience. interactions 11, 5 (Sept. 2004), 42–43. https://doi.org/10.1145/1015530.1015549
- Mark McGill, Daniel Boland, Roderick Murray-Smith, and Stephen Brewster. 2015. A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). Association for Computing Machinery, New York, NY, USA, 2143–2152. https://doi.org/10.1145/2702123.2702382
- Joshua McVeigh-Schultz, Max Kreminski, Keshav Prasad, Perry Hoberman, and Scott S. Fisher. 2018. Immersive Design Fiction: Using VR to Prototype Speculative Interfaces and Interaction Rituals Within a Virtual Storyworld. In Proceedings of the 2018 Designing Interactive Systems Conference (DIS ’18). ACM, New York, NY, USA, 817–829. https://doi.org/10.1145/3196709.3196793
- Elisa D. Mekler and Kasper Hornbæk. 2019. A Framework for the Experience of Meaning in Human-Computer Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–15.
- Paul Milgram and Fumio Kishino. 1994. A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information Systems E77-D, 12 (Dec. 1994), 1321–1329.
- Michael Charles Neumann. 2020. The Trip. http://www.michaelcharlesneumann.com/the-trip/.
- Erika Shehan Poole, Marshini Chetty, Rebecca E. Grinter, and W. Keith Edwards. 2008. More than Meets the Eye: Transforming the User Experience of Home Network Management. In Proceedings of the 7th ACM Conference on Designing Interactive Systems (DIS ’08). Association for Computing Machinery, Cape Town, South Africa, 455–464. https://doi.org/10.1145/1394445.1394494
- Erika Shehan Poole, Marshini Chetty, Tom Morgan, Rebecca E. Grinter, and W. Keith Edwards. 2009. Computer Help at Home: Methods and Motivations for Informal Technical Support. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09). Association for Computing Machinery, Boston, MA, USA, 739–748. https://doi.org/10.1145/1518701.1518816
- Asreen Rostami. 2020. Interweaving Technology: Understanding the Design and Experience of Interactive Performances. Ph.D. Dissertation. Department of Computer and Systems Sciences, Stockholm University.
- Asreen Rostami, Emma Bexell, and Stefan Stanisic. 2018. The Shared Individual. In Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction (Stockholm, Sweden) (TEI ’18). Association for Computing Machinery, New York, NY, USA, 511–516. https://doi.org/10.1145/3173225.3173299
- Asreen Rostami, Donald McMillan, Elena Márquez Segura, Chiara Rossitto, and Louise Barkhuus. 2017. Bio-Sensed and Embodied Participation in Interactive Performance. In Proceedings of the Tenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’17). ACM, New York, NY, USA. https://doi.org/10.1145/3024969.3024998
- Asreen Rostami, Chiara Rossitto, Louise Barkhuus, Jonathan Hook, Jarmo Laaksolahti, Robyn Taylor, Donald McMillan, Jocelyn Spence, and Julie Williamson. 2017. Design Fiction for Mixed-Reality Performances. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17). ACM, New York, NY, USA, 498–505. https://doi.org/10.1145/3027063.3027080
- Asreen Rostami, Chiara Rossitto, and Annika Waern. 2018. Frictional Realities: Enabling Immersion in Mixed-Reality Performances. In Proceedings of the 2018 ACM International Conference on Interactive Experiences for TV and Online Video (TVX ’18). ACM, New York, NY, USA, 15–27. https://doi.org/10.1145/3210825.3210827
- Neelima Sailaja, Andy Crabtree, James Colley, Adrian Gradinar, Paul Coulton, Ian Forrester, Lianne Kerlin, and Phil Stenton. 2019. The Living Room of the Future. In Proceedings of the 2019 ACM International Conference on Interactive Experiences for TV and Online Video (TVX ’19). Association for Computing Machinery, New York, NY, USA, 95–107. https://doi.org/10.1145/3317697.3323360
- Sara Pérez Seijo. 2017. Immersive Journalism: From Audience to First-Person Experience of News. In Media and Metamedia Management. Springer, Cham, 113–119. https://doi.org/10.1007/978-3-319-46068-0_14
- Marco Speicher, Christoph Rosenberg, Donald Degraen, Florian Daiber, and Antonio Krüger. 2019. Exploring Visual Guidance in 360-Degree Videos. In Proceedings of the 2019 ACM International Conference on Interactive Experiences for TV and Online Video (TVX ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3317697.3323350
- Lucy A. Suchman. 1987. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press, New York, NY, USA.
- Keisuke Suzuki, Sohei Wakisaka, and Naotaka Fujii. 2012. Substitutional Reality System: A Novel Experimental Platform for Experiencing Alternative Reality. Scientific Reports 2, 1 (June 2012), 459. https://doi.org/10.1038/srep00459
- Leila Takayama, Caroline Pantofaru, David Robson, Bianca Soto, and Michael Barry. 2012. Making Technology Homey: Finding Sources of Satisfaction and Meaning in Home Automation. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (UbiComp ’12). Association for Computing Machinery, Pittsburgh, Pennsylvania, 511–520. https://doi.org/10.1145/2370216.2370292
- Jordan Tannahill. 2017. Draw Me Close. https://www.nationaltheatre.org.uk/immersive/projects/draw-me-close.
- Robyn Taylor, Guy Schofield, John Shearer, Jayne Wallace, Peter Wright, Pierre Boulanger, and Patrick Olivier. 2011. Designing from Within: Humanaquarium. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). ACM, New York, NY, USA, 1855–1864. https://doi.org/10.1145/1978942.1979211
- Robyn Taylor, Jocelyn Spence, Brendan Walker, Bettina Nissen, and Peter Wright. 2017. Performing Research: Four Contributions to HCI. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 4825–4837. https://doi.org/10.1145/3025453.3025751
- Jakob Tholander, Chiara Rossitto, Asreen Rostami, Yoshio Ishiguro, Takashi Miyaki, and Jun Rekimoto. 2021. Design in Action: Unpacking the Artists’ Role in Performance-Led Research. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3411764.3445056
- Peter Tolmie, Andy Crabtree, Tom Rodden, Chris Greenhalgh, and Steve Benford. 2007. Making the Home Network at Home: Digital Housekeeping. In ECSCW 2007, Liam J. Bannon, Ina Wagner, Carl Gutwin, Richard H. R. Harper, and Kjeld Schmidt (Eds.). Springer, London, 331–350. https://doi.org/10.1007/978-1-84800-031-5_18
- Kami Vaniea and Yasmeen Rashidi. 2016. Tales of Software Updates: The Process of Updating Software. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). Association for Computing Machinery, San Jose, California, USA, 3215–3226. https://doi.org/10.1145/2858036.2858303
- Khrystyna Vasylevska and Hannes Kaufmann. 2017. Compressing VR: Fitting Large Virtual Environments within Limited Physical Space. IEEE Computer Graphics and Applications 37, 5 (Jan. 2017), 85–91. https://doi.org/10.1109/MCG.2017.3621226
- Mirjam Vosmeer and Ben Schouten. 2017. Project Orpheus: A Research Study into 360° Cinematic VR. In Proceedings of the 2017 ACM International Conference on Interactive Experiences for TV and Online Video (TVX ’17). ACM, New York, NY, USA, 85–90. https://doi.org/10.1145/3077548.3077559
- Julie R. Williamson, Mark McGill, and Khari Outram. 2019. PlaneVR: Social Acceptability of Virtual Reality for Aeroplane Passengers. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300310
- Rayoung Yang and Mark W. Newman. 2013. Learning from a Learning Thermostat: Lessons for Intelligent Systems for the Home. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’13). Association for Computing Machinery, Zurich, Switzerland, 93–102. https://doi.org/10.1145/2493432.2493489
FOOTNOTES
1 See for example: The Ordinary Circus Girl by Corinne Linder [33], Le Bal de Paris by Blanca Li [15], Atlas by Yann Deval & Marie-G. Losseau [34], Cosmos Within Us by Tupac Martir [36], The Trip by Michael Charles Neumann [43], Etsuko Ichihara's artworks on The Substitutional Reality [22], Draw Me Close by Jordan Tannahill [57], The Horrifically Real Virtuality by Marie Jourdren [25], and Alice, The Virtual Reality Play by Jourdren & Chelebourg [26].
2 https://www.bombinabombast.com/kopia-på-local-hero-2?lang=en
3 https://www.samsung.com/global/galaxy/gear-vr/
4 A recorded video of a version of The Shared Individual staged at The Conference is available at https://videos.theconference.se/bobina-bombast-live-virtual-reality
CHI '22, April 29–May 05, 2022, New Orleans, LA, USA
© 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9157-3/22/04…$15.00.