What Is Mixed Reality?

ABSTRACT
What is Mixed Reality (MR)? To revisit this question given the many recent developments, we conducted interviews with ten AR/VR experts from academia and industry, as well as a literature survey of 68 papers. We find that, while there are prominent examples, there is no universally agreed-upon, one-size-fits-all definition of MR. Rather, we identified six partially competing notions from the literature and experts' responses. We then started to isolate the different aspects of reality relevant for MR experiences, going beyond the primarily visual notions and extending to audio, motion, haptics, taste, and smell. We distill our findings into a conceptual framework with seven dimensions to characterize MR applications in terms of the number of environments, number of users, level of immersion, level of virtuality, degree of interaction, input, and output. Our goal with this paper is to support the classification and discussion of MR applications' design and to provide researchers with a better means to contextualize their work within the increasingly fragmented MR landscape.

CCS CONCEPTS
• Human-centered computing → Mixed / augmented reality; HCI theory, concepts and models;

KEYWORDS
Augmented reality; conceptual framework; expert interviews; literature review; mixed reality; taxonomy; virtual reality.

ACM Reference Format:
Maximilian Speicher, Brian D. Hall, and Michael Nebeling. 2019. What is Mixed Reality?. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3290605.3300767

1 INTRODUCTION
This paper is motivated by many discussions with colleagues, researchers, professionals in industry, and students active in the HCI community, all working on Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) projects. These discussions showed that, while MR is increasingly gaining in popularity and relevance, and despite the relative popularity of Milgram & Kishino's Reality–Virtuality Continuum [44], we are still far from a shared understanding of what MR actually constitutes. Many see MR as a synonym for AR. Some consider MR strictly according to the definition given by Milgram & Kishino [44], i.e., a superset of AR in terms of a "mix of real and virtual objects within a single display." Yet others consider MR distinct from AR in the sense that MR enables walking into, and manipulating, a scene whereas AR does not. Some do not even attempt, or want, to specify what MR is. What adds to the confusion is that key players like Microsoft are pushing MR as a new technology, first with HoloLens, then expanding to a range of Windows Mixed Reality devices, along with the Mixed Reality Toolkit to build applications for these devices.

What does this paper do? The goal of this paper is to work towards a shared understanding of the term MR and the related concepts and technologies. Many researchers base their understanding of MR on the Reality–Virtuality Continuum [44], which they consider the go-to source for a widely accepted definition of MR. Yet, as we will show with the expert interviews and literature review reported in this paper, it is not a universally agreed notion. As the authors noted themselves, the core limitation of the continuum is the fact that it is restricted to visual features. Broadly speaking, MR originated from computer graphics; hence, common notions of MR are mostly restricted to graphical aspects. Yet, technological capabilities, design practices, and perceptions of MR have evolved since the continuum was first proposed in 1994, and discussions about MR have become increasingly difficult. We therefore found it necessary to identify the different working definitions of MR that are used "in the wild", how they differ and relate, and what their limitations are. We hope that our effort will allow the community to work towards a more consistent understanding of MR and apply it in different contexts, e.g., to better characterize MR experiences using such distinguishing factors as single-user vs. multi-user, same or different environments, different degrees of immersion and virtuality, and implicit vs. explicit interactions.
What does this paper not intend to do? The goal of this paper is not to find the definition of MR, or even to develop a new one. First, there are already several definitions in the literature and in use, and another one would only add to the confusion. Second, it is not realistic or constructive to try to impose a definition onto an active community. Finally, MR is a rapidly developing field and it is not clear whether a single definition would be sufficient to cover all its aspects.

This paper offers two core contributions:
(1) We compile six widely used working definitions of MR. These have been derived from interviews with ten experts and a literature review of 68 sources. We provide an overview of the status quo, showing that there is no one-size-fits-all definition for a concept as broad as MR, but that there are indeed different, competing types of MR to be distinguished.
(2) We provide a conceptual framework for organizing different notions of MR along seven dimensions—number of environments, number of users, level of immersion, level of virtuality, degree of interaction, input, and output. This framework enables more precise capture of the different types of MR in order to reduce confusion, helps with the classification of MR applications, and paints a more complete picture of the MR space.

Who is this paper for? First and foremost, this paper is intended for anyone who wants to learn about the current state of MR. Given the proliferation of MR technologies and increased interest among new developers, designers, researchers, and in particular students, our work aims to facilitate their participation in the existing MR community. It is our attempt to enable people with differing understandings to better communicate which notions of MR they are working with, with the goal of improving reasoning and reducing misunderstandings, including in peer review processes. Moreover, this paper provides researchers already working in the field of MR with a way to think about their work, and hopefully one that enables them to better contextualize, evaluate, and compare their work, as well as to identify opportunities for further research. In our interviews, experts noted that, even though notions are fading and people might not distinguish, or even use, the terms AR/MR/VR anymore in the future, it is important to have a common vocabulary.

In the following, as the background for this paper, we will first revisit the Reality–Virtuality Continuum as one of the most popular notions of MR, and from the literature identify aspects of reality beyond the visual that are relevant for MR. Next, we go into the details of our expert interviews and literature review. As part of our findings, we present six notions, or working definitions, of MR and the extent to which they are being used. Finally, based on the aspects of reality and working definitions, we propose a conceptual framework and illustrate its use by classifying two MR applications mentioned in interviews and the literature.

2 FIRST THINGS FIRST: MILGRAM ET AL.'S CONTINUUM
Similar to the goal of this paper, in the early 90s, Milgram et al. noticed that "Although the term 'Augmented Reality' has begun to appear in the literature with increasing frequency, we contend that this is occurring without what could reasonably be considered a consistent definition" [45]. Hence, they developed the Reality–Virtuality Continuum—first described in [44]—as a means to facilitate a better understanding of AR, MR, and VR and how these concepts interconnect.

The continuum has two extrema: a fully real environment, the real world, and a fully virtual environment, i.e., VR. Everything in between—not including the extrema (cf. [44], Fig. 1)—is described as MR. Types of MR can be AR, which is a mostly real environment augmented with some virtual parts, and Augmented Virtuality (AV), which is "either completely immersive, partially immersive, or otherwise, to which some amount of (video or texture mapped) 'reality' has been added" [45]. In particular, according to this definition, VR is not part of MR and AR is only a subset of MR.

Today, this continuum is still probably the most popular source when it comes to definitions of MR, with 3553 [44] and 1887 [45] citations on Google Scholar as of August 2018. Yet, it stems from the beginning of the 90s, and technological capabilities as well as the capabilities of MR have significantly evolved since. One shortcoming of the continuum is that it is mostly focused on visual displays. The authors note that "although we focus [...] exclusively on mixed reality visual displays, many of the concepts proposed here pertain as well to analogous issues associated with other display modalities[, f]or example, for auditory displays". This, however, means that novel developments like multi-user or multi-environment MR experiences cannot be fully covered. Moreover, despite its popularity and being one of the main frameworks guiding MR researchers (as will become evident in our expert interviews and literature review), we will find that the continuum is neither a universal nor the definition of Mixed Reality.

3 ASPECTS OF REALITY
Many experts and researchers the authors have talked to (many of whom are familiar with the continuum) initially only consider the visual—i.e., virtual 3D models added to a real environment—and a single display when describing or discussing MR. However, in the context of this paper, we are also particularly interested in exploring which aspects beyond the purely visual are considered MR, and in which ways these have already been addressed. From the literature, we have identified five other aspects of reality that can be simulated in a virtual environment, or translated from the physical into the digital to align two environments:
Audio. "Auditory displays" are a possible extension to the Reality–Virtuality Continuum mentioned in [44]. An early example is Audio Aura [51], which augments the physical world with auditory cues instead of 3D models. Dobler et al. [18] and Çamcı et al. [13] combine visual and audio elements to enable sound design in VR or MR.
Motion. It is not possible to augment the physical world with motion in a digital way. Yet, motion is an important aspect for aligning physical and virtual realities, e.g., by manipulating 3D models based on motion capture [14, 47].
Haptics. A variety of research has looked into haptics as input, e.g., in the form of tangible user interfaces [81], and output, such as [71], who describe a "device that lets you literally feel virtual objects with your hands". A third variant are passive haptics (e.g., [32]) that can be used to enhance virtual environments.
Taste/Flavor. First steps have been taken into the direction of simulating the experiences of eating and tasting. [52] create a virtual food texture through muscle stimulation while [60] have successfully simulated virtual sweetness.
Smell. Another key human sense is smelling. Previous work [12] has looked into smell in virtual environments as early as 1994, while [59] inquired into authentic (virtual) smell diffusion. Hediger & Schneider [24] discuss smell as an augmentation to movies.

4 EXPERT INTERVIEWS
To get a better understanding of accepted notions of Mixed Reality and in which ways they potentially differ—and therefore as a foundation for our conceptual framework of Mixed Reality—we have interviewed a total of ten AR/MR/VR experts (I1–I10) from academia and industry.

We recruited experts from academia (5) and industry (5) whom we identified based on their experience and leadership in the AR/VR field. All interviewees had at least two years of experience and eight had 8+ years of experience working with AR, MR, and/or VR technologies. Our interviewees were: a full professor, an associate professor, an assistant professor, a post-doctoral researcher, an AR consultant, a UX engineer for a popular AR/VR headset, an R&D executive, the CTO of an AR/VR company, the CEO of an AR company, and the head of an AR lab. Their backgrounds included HCI, computer vision, technology-enhanced learning, wearable computing, media arts, architecture, design, AR training and maintenance, and entertainment. Each expert received a $20 gift card for their participation.

The interviews started with a short briefing about the background of our research and comprised a total of 16 questions. These questions were designed to uncover differences in perceptions of AR/MR/VR and relevant aspects beyond the visual, and to inquire into understandings of current and potential future definitions. First, we asked interviewees how they usually explain AR, VR, and MR to their students or clients and moreover asked for specific examples they typically use—if any—to illustrate what AR/MR/VR are and are not. Next, we inquired into what interviewees see as the relevant aspects of reality that should be considered in the context of MR and furthermore gave three examples, for each of which they should state and explain whether it is MR or not: (1) listening to music; (2) Tilt Brush, where the motion of the user's hands is translated from the physical into the virtual world; and (3) Super Mario Bros.™, where Mario (in the virtual world) jumps when the user pushes a button in the physical world. Here, the idea was to provide examples of "increasing controversy" in order to explore the boundaries of MR and what the experts think constitutes a (minimal) MR experience, e.g., whether a simple augmentation or translated motion is enough. Following this, we asked whether it will still make sense to explicitly distinguish between AR, MR, and VR five or ten years from now. The final questions asked the experts to explain whether it is useful to have a single definition of MR at all and, if so, which would be the most useful in the context of HCI research.

What is AR?
The interviewees named a set of relevant characteristics for AR experiences, not all of which are compatible. The merging of 3D graphics with the real world and spatial registration in the physical environment were mentioned as requirements five times each. I2 explained AR as the combination of the human, the digital, and the physical world, so that AR cannot be considered independent of the user. Another two experts supported this by mentioning the necessity that the user has to be in control. I3 stressed that virtual content must be able to interact with the real world, while I6 stated that AR, unlike VR, always happens in the physical space you are currently in. Two experts provided rather broad explanations by stating that AR is any contextual digital overlay or augmenting your reality in any way (which specifically stand in contrast to spatial registration). I7 and I10 provided less technical explanations by stressing that AR means augmenting or creating experiences by enhancing human perception.
Examples. As for examples they typically use to constitute what AR is and is not, the most prominent was Pokémon GO. It was given as an example for AR three times; yet, the original version also served as a negative example thrice due to the missing spatial registration. Other examples for AR included Terminator (2×), AR training and maintenance (e.g., Steven Feiner's work; 2×), Google Glass, Snapchat, FB AR Studio, and Pepper's ghost. I10's understanding was that AR is not bound to technology and, therefore, books can be AR if they augment your interactions with the world. Besides Pokémon GO, further examples for what does not constitute AR were sports augmentations on TV (3×), "anything that's
just HUD or 2D contextual" (2×), again Google Glass (2×), the Pokémon GO map view (because despite its contextual nature it is fully virtual), (static) paintings, and VR.
Generally, it seems that experts have differing understandings of what constitutes AR. For some, simple overlays already qualify as long as they are contextual (e.g., Google Glass), while others explicitly require spatial registration in space and/or interactions with the physical space—from both users and virtual content.

What is VR?
Unlike with AR, experts were more in agreement about what constitutes VR. Eight mentioned that the defining characteristic is a fully synthetic or fully virtual view, while one described it as a completely constructed reality. Moreover, the necessity for head tracking or a head-worn display and full immersion were mentioned five and four times, respectively. I2 and I6 specifically noted that VR features an isolated user, i.e., there is a lack of social interaction. Two experts described VR as "the far end of the MR spectrum" (I4, I7), while three mentioned the ability to visit remote places as an important characteristic (I6, I7, I10).
Examples. Two experts (I4, I5) referred to watching 360-degree content on a headset as an example for VR. Moreover, 360-degree movies, Tilt Brush, architectural software, flight simulators, virtual museums, movies like The Matrix, CAVEs, and Sutherland's Ultimate Display [78] were mentioned once each. Conversely, watching 360-degree content on a mobile device like a smartphone was given as a non-VR example by I4 and I5 (due to the missing head-worn display). "Simple" desktop 3D on a screen and anything happening in the space you're in (i.e., the real world) were given once and twice, respectively.
Overall, our experts largely agreed that a fully virtual view, full immersion, and head-worn technology are what constitute VR as opposed to AR. Therefore, their characterization of VR is mainly based on hardware and graphical aspects. However, social aspects were also explicitly mentioned.

What is MR?
Experts had more difficulty specifying what constitutes MR, with a number of contradicting statements, which illustrates our motivation for writing this paper. They described eight characteristics, of which everything in the continuum (incl. VR), "strong" AR (i.e., like AR, but with more capabilities)¹, and marketing/buzzword were mentioned three times each. Two experts each referred to AR plus full immersion, i.e., the possibility to do both AR and VR in the same app or on the same device. The remaining explanations were "MR is the continuum" (I2), the combination of real and virtual (I6), that MR is bound to specific hardware (e.g., HoloLens; I6), and "the same as AR" (I9). Two experts explicitly expressed regret over the fact that the term is also used for marketing purposes nowadays (I1: "It's all marketing mumbo-jumbo at this point."). Moreover, I4 pointed out that "only academics understand the MR spectrum". I10 said that they had not thought enough about MR conceptually, but that they usually see it as "realities that are mixed in a state of transition" and sometimes use AR and MR interchangeably.
¹ For instance, I8 described AR as "the poor man's version of MR."
Examples. In comparison to AR and VR, interviewees also struggled with giving specific examples for what is and is not MR. Three experts referred to HoloLens as a specific example for MR, while I8 mentioned diminished reality and projection-based augmentation. I5 chose Pokémon GO as a whole, i.e., the combination of catching a Pokémon in AR plus the VR map view. I10 chose windows in a house as their example, since they mediate a view, but can also alter your experience with noises and smells if open. In terms of what does not constitute MR, I1 and I9 mentioned anything that is not AR (or registered in space) and gave Google Glass as an example. Moreover, I6 referred to just overlays without an understanding of the physical environment, in the sense that in MR, a virtual chair would be occluded when standing behind a physical table. I3 did not consider HoloLens and RoboRaid as MR, because neither is capable of full immersion, but said that these come closest to their idea of MR.
As above, there are major differences in experts' understanding of MR. Generally, four themes become apparent so far: MR according to Milgram et al.'s continuum, MR as a "stronger" version of AR, MR as a combination of AR and VR (potentially bound to specific hardware or devices), and MR as a synonym for AR.

What are relevant aspects of reality?
Since discussions about AR, MR, and VR usually evolve around graphics and visuals—I8 noted that we are "visually dominant creatures"—we also asked interviewees for other aspects of reality that are relevant for MR, or could be in the future. Five experts each said that MR should consider (spatial) audio and haptics, while three said any of the user's senses or any physical stimulus, and two each interactions, and anything sensors can track. Smell was mentioned twice. Aspects that were mentioned once included: other participants (i.e., the 'social aspect', I3), geolocation (I5), motion (I7), temperature (I8), as well as wind and vibrotactile feedback (I9). To provoke thinking more about aspects beyond the visual and the "boundaries" of MR, we furthermore asked the interviewees to reason for each of the following examples why it is or is not MR.
Listening to Music. Seven of the experts stated that listening to music is not MR, the most prominent reason given being the lack of a spatial aspect (5×). Additionally, I3 noted that it is not immersive enough, while I7 stated that music is
not MR when it is just a medium to replace the live experience and does not react to (i.e., mix with) the environment. Yet, three of the experts were undecided. One stated that you "could technically say it's MR", but that the "visuals are still very important". I10 stated that it depends on your "state of mind" and whether you are "carried away by the music".
Tilt Brush. The idea here was to inquire into whether the translation of the motion of the user's hands into the motion of the virtual controllers (i.e., adding a "part" of the real to the virtual world) is enough to constitute MR in the experts' opinions. Almost unanimously, they argued that Tilt Brush is VR rather than MR. The main reasons given were that no part of the physical world is visible (6×), that motion is simply an input to interact with the virtual reality (4×), and the high level of immersion (3×). I2 explicitly stated that "just input is not sufficient to constitute MR". I7 argued that it is MR, because VR is a type of MR according to the continuum and because the interaction is visible even though the controllers are virtual.
Super Mario Bros.™ This was maybe the most provocative of the examples. The experts were unanimously convinced that pushing a button on a video game controller is not MR, even though technically a motion is translated from the physical into a virtual world. Four experts reasoned that it is just input. A missing spatial aspect and "if this is MR, then everything is" were mentioned three times each. I6, I8, and I9 said that it would be MR if Mario were standing in the room, though, while I7 and I8 referred to the gap between real world and GUI.
Generally, this shows that spatial registration seems to be one of the core features of MR. Many experts argued that listening to music becomes MR as soon as the music reacts to the environment. Moreover, it seems that a certain minimum of the physical environment needs to be visible. For instance, I5, I6, and I8 noted that Tilt Brush would be MR if the user's actual hands were visible instead of virtual controllers. Finally, while interactions (both with other users and the virtual parts of the environment) were mentioned as an important aspect of reality for MR, simple input is not sufficient to constitute MR.

Will there still be AR/MR/VR in the future?
Regarding the future of the different concepts, four experts said that five or ten years from now, we will not distinguish between AR, MR, and VR anymore. In their opinion, this will be mainly due to the fact that different hardware/devices will merge and be capable of everything (I4, I5, I6, I10) and that people will internalize the differences with more exposure to the technology (I2). Yet, another four experts said we will still distinguish between the concepts (or at least two of them, e.g., AR/MR vs. VR), while two were undecided. For instance, I7 argued that the gap between devices and therefore also between AR and VR will remain. Yet, they also specifically noted that differences are fluid and that human perception, not devices, should be the deciding factor for distinction. I1 and I9 stated that in the future, we might distinguish based on applications rather than technology.

Is a single definition useful?
Six experts stated that it would be useful to have a single definition of MR, while two said it would not, I8 said it does not matter, and I5 was undecided. Two experts (I1, I2) explicitly noted that context matters and it is important in conversations to make one's understanding of MR clear. I7 stressed the importance of a coherent frame of reference. I2 also pointed out that "definitions are temporary", while I3 and I5 mentioned that the term "Mixed Reality" is at least partly marketing.
Regarding a suitable definition for the specific context of HCI research, I7 proposed the notion of MR encompassing everything according to the continuum, including VR, and stressed that it is time to "fix the broken definitions from the past". Similarly, I9 proposed an extensible version of the continuum. I2 noted that they would like to see more "consistent definitions for everything in the context of MR". Three experts explicitly stated that a single definition would be very useful for the community. I1 compared the situation to that of the different competing definitions of grounded theory. Additionally, I5 stated that a definition of MR for HCI must include interactions since "interaction is a very big part besides the rendering". I10 noted that it might be worthwhile to move away from a technology-based to an experience-based understanding. Per I8, different understandings lead to better research since they help to identify gaps.

Results (so Far)
For a start, we have learned that experts struggle when it comes to defining AR and MR, while the distinction from VR is clearer and mainly based on visual as well as hardware aspects. So far, it seems that spatial registration and the possibility to see at least some part of the physical environment constitute defining features of MR, while "simple" input (e.g., through motion capture) does not, in the experts' opinion. While the majority of interviewees considered a single definition of MR useful—also in the context of HCI research—they generally agreed as well that this is unlikely (I4: "Never going to happen.") and we might not even use the terminology anymore in the future. Furthermore, interactions, geolocation, and temperature were mentioned as relevant aspects of reality for MR that were not in our initial list, but will be incorporated.
From the interviews we can derive a preliminary list of working definitions of MR, which were explicitly or implicitly used by the experts and which we will refine and extend based on the upcoming literature review:
MR according to the Reality–Virtuality Continuum. In this case, the term "MR" is used based on the definition in [44] or [45]. It can either include VR or not. (I1, I2, I7)
MR as a Combination of AR and VR. In this case, MR denotes the capability to combine both technologies—AR and VR—in the same app or on the same device. (I3, I5)
MR as "strong" AR. This understands MR as a more capable version of AR, with, e.g., an advanced understanding of the physical environment, which might be bound to specific hardware. (I4, I6, I8)
MR as a synonym for AR. According to this working definition, MR is simply a different term for AR. (I9, I10)

5 LITERATURE REVIEW
To get a more thorough understanding of existing notions of MR "in the wild", we decided to conduct an additional literature review. From a total of 68 sources we were able to extract six different notions of MR, including the four working definitions identified during the expert interviews.

Method
We focused on four primary sources known for high-quality Mixed Reality research: (CHI) the ACM CHI Conference on Human Factors in Computing Systems; (CHI PLAY) the ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play; (UIST) the ACM Symposium on User Interface Software and Technology; and (ISMAR) the International Symposium on Mixed and Augmented Reality. These were selected since there are already systematic MR reviews focused on ISMAR [16, 91] and we intended to build the bridge to premier HCI venues. Hence, we added UIST and, informed by [73], CHI and CHI PLAY. To find the most relevant papers from these conferences, we based our search on two popular academic databases—dblp² and Scopus³—as well as a two-tier strategy (Figure 1).

Figure 1: Our paper selection strategy for the literature review. We identified 37 relevant papers in round one and 27 in round two, and added four other sources for a total of 68.

In a first round, we selected all papers from the above venues that featured the term "Mixed Reality" in their titles. We restricted the search range to 2014–2018 (inclusive), i.e., the past five years, in order to ensure that we extract only notions with reasonable currency. This corresponded to the dblp search term "mixed reality" venue:X: year:Y: with X ∈ {CHI, CHI_PLAY, UIST, ISMAR} and Y ∈ {2014, ..., 2018}. Papers from companion proceedings were manually excluded from the results.
In a second round, we extended our search to papers from the four venues between 2014 and 2018 that featured the term "Mixed Reality" in their abstracts (but potentially not in their titles). This corresponded to the Scopus search term (TITLE-ABS-KEY("mixed reality") AND CONF(chi OR uist OR ismar)) AND DOCTYPE(cp) AND PUBYEAR > 2013 AND PUBYEAR < 2019. Again, papers from companion proceedings were excluded.
The process of reviewing an individual paper was as follows. We first identified the authors' understanding of MR by finding the part of the paper in which the term was defined. In case no explicit definition (or at least explanation) was given, we derived the authors' understanding implicitly from the described contribution. If the authors cited one or more other sources from which they seemingly derived their understanding of MR, those sources were added to the stack of papers to be reviewed—if they referred to MR at some point themselves (which was not the case for [1, 2, 8, 20, 46]). Also, for each paper, we updated a citation graph (Figure 2) showing which papers rely on which references for their understanding of MR.
Overall, we reviewed 37 papers in round one and an additional 27 papers in round two. Moreover, we added four other sources known to us that deal with the definition of MR [7, 11, 27, 34], which makes a total of 68 reviewed sources. In the following two sections, we will first present existing notions of MR, which we synthesized from the above literature review in combination with the expert interviews. Subsequently, we will describe other findings from the literature review based on the identified notions.

² https://dblp.org/
³ https://www.scopus.com/

6 EXISTING NOTIONS OF MIXED REALITY
Based on the literature review and expert interviews combined, we were able to derive six notions of MR. To synthesize these, we performed thematic coding of all definitions and
Figure 2: The citation graph derived from round one of the literature review, with clusters of CHI (light red), UIST (light yellow), ISMAR (light blue), and CHI PLAY (light pink) papers.
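The bookkeeping behind a citation graph like Figure 2 is simple to script during the review pass described above. The following is a minimal sketch, assuming Python and the networkx library (our own naming, not tooling from the paper); the Benford & Giannachi edge reflects the CHI PLAY citations reported below, while the Müller et al. edge is purely an assumed illustration.

    import networkx as nx

    # Directed graph: an edge (paper -> source) records that the paper
    # relies on that source for its understanding of MR.
    citation_graph = nx.DiGraph()

    def record_review(paper: str, mr_references: list[str]) -> None:
        """Log a reviewed paper and the sources it cites to define MR."""
        citation_graph.add_node(paper)
        for source in mr_references:
            citation_graph.add_edge(paper, source)

    # Illustrative entries (labels as in Figure 2):
    record_review("Sharma et al. CHI PLAY '15", ["Benford & Giannachi"])
    record_review("Müller et al. CHI '16", ["Milgram & Kishino '94"])  # assumed edge

    # Papers transitively connected to Milgram & Kishino [44] can then be
    # read off as reachability in the graph:
    MILGRAM = "Milgram & Kishino '94"
    connected = {
        paper for paper in citation_graph.nodes
        if paper != MILGRAM and nx.has_path(citation_graph, paper, MILGRAM)
    }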
1—Continuum was the most consistently used notion across all venues and was the most-used among UIST (3/7, 42.9%) and "other" sources (10/19, 52.6%). The remaining notions, 3—Collaboration, 4—Combination, and 6—Strong AR, were not among the most-used for individual venues.
Generally, this suggests two things. First, even though the Reality–Virtuality Continuum is considered the go-to definition of MR by many and was indeed the most-used notion overall, it was still only referred to by just over a third of the reviewed papers, which highlights the fragmentation of the MR landscape and the lack of a predominant notion. Second, the use of different notions seems to be not uniformly distributed across venues. For instance, CHI might be more about collaboration and CHI PLAY (i.e., MR games) more about aligning distinct environments. However, the sample size is too small for findings to be conclusive.

Which papers are cited as definitions of MR?
Another goal of our literature review was to investigate which specific sources are used as references to explain or define one's understanding of MR. Overall, 34 of the 68 papers (50.0%) referenced one or more sources for explaining or defining MR, and provided a total of 49 such references. Yet, a majority of papers do so only among the reviewed CHI (12/19, 63.2%) and CHI PLAY (5/6, 83.3%) papers, while the number of papers with respective references lies below 50% for UIST, ISMAR, and "other" (Table 2). This lack of references could have three reasons: authors might use an intuitive understanding of MR, consider it common sense and therefore not see the need to provide a reference, or have an understanding of MR that is not yet covered by existing literature.

Table 2: Overview of the use of references to explain or define a source's understanding of MR.

Venue      Papers total   w/ MR reference(s)   %
CHI        19             12                   63.2
CHI PLAY    6              5                   83.3
UIST        7              3                   42.9
ISMAR      17              6                   35.3
other      19              8                   42.1
total      68             34                   50.0

Overall, 22 sources were referenced⁵ a total of 49 times, with 13 in round one of the literature review and seven in round two (two papers appeared in both). The most popular reference was Milgram & Kishino [44], with 20 citations, followed by Benford & Giannachi [5] with five citations, all of which came from CHI PLAY papers. Transitively, however, [44] would be referenced by an additional 5 (round one, cf. Figure 2) plus 2 (round two) papers. This means that 27 of the 34 papers (79.4%) providing at least one reference are in some way connected to Milgram & Kishino's paper.

⁵ [2, 4–6, 8–10, 17, 20, 25, 29, 30, 33, 37, 43–46, 57, 77, 81], and HoloLens.

Venue-wise, the reviewed CHI papers referenced a total of 13 unique sources; Milgram & Kishino [44] was the most-referenced with six citations. CHI PLAY papers cited four sources a total of 14 times, with the aforementioned Benford & Giannachi [5] being the most popular. Only three UIST papers provided references. Milgram & Kishino [44], Milgram et al. [45], and HoloLens were cited once each. ISMAR papers referenced four different sources a total of six times, again with Milgram & Kishino [44] being the most-cited, as was also the case for "other" sources with six citations.
Two papers provided four references to explain or define their understanding of MR, two provided three references, five provided two references, and 25 provided a single reference. The citation graph for round one of the literature review is shown in Figure 2.
Overall, this suggests that if an academic paper cites an explanation or definition of MR, it is very likely that it is derived from Milgram & Kishino [44]. Still, more than 50% of the reviewed sources do not rely on the Reality–Virtuality Continuum or do not provide a reference at all. Therefore, the continuum is the single most popular notion of MR, but is far from being a universal definition in a fragmented landscape. This highlights the need for a more systematic approach to understand, organize, and classify the different notions.

8 A CONCEPTUAL FRAMEWORK FOR MIXED REALITY
So far, we have found that the MR landscape is highly fragmented. We interviewed ten experts from academia and industry, who made partly contradicting statements. Based on their answers and a literature review with 68 sources, we could identify six existing notions of MR. Even though the majority of experts agreed that a single definition would be useful and important—especially in the context of HCI research—our aim was not to find the one definition of MR. Rather, we acknowledge that different people will always use different notions, depending on their context. The important thing is to make this context clear and provide a coherent framework for better communicating what one's understanding of MR is. This is what we do in the following.

Dimensions
After analyzing the differences between the six notions, we initially derived five dimensions. With this, we aimed at finding a minimal framework that still allows us to classify all notions unambiguously.
Number of Environments. This dimension refers to the number of physical and virtual environments necessary for a certain type of MR. For instance, if an AR and a VR user are in the same room, the VR experience would be treated as a separate environment.
Number of Users. The number of users required for a certain type of MR. More than one user is only strictly required for notion 3—Collaboration, but, of course, is also possible for other kinds of MR.
Level of Immersion. This dimension refers to how immersed the user feels based on the digital content they perceive. This is not a linear relationship with level of virtuality. For instance, a head-worn MR display might show a huge amount of digital content that does not interact with the environment and therefore might not feel immersive.
Level of Virtuality. The level of virtuality refers to how much digital content (whether or not restricted to a specific sense) the user perceives. For instance, visually, VR is fully virtual while the real world without any augmentation is not. In this sense, this dimension is similar to the Reality–Virtuality Continuum, which is, however, specifically concerned with displays [44].
Degree of Interaction. Interaction is a key aspect in MR, which can be divided into implicit and explicit [38]. While all types of MR require implicit interaction, e.g., walking around a virtual object registered in space, explicit interaction means intentionally providing input to, e.g., manipulate the MR scene. The only notion explicitly requiring this is 6—Strong AR, but, of course, it can be realized with other types of MR. What does specifically not fall into this category are GUIs that are separate from the MR scene (as is the case in Pokémon GO).
Two additional, lower-level dimensions should be specified that are independent of particular MR notions. Based on our earlier review of "aspects of reality", these dimensions are input and output (to specific senses).
Input. This dimension refers to input (besides explicit interaction) that is used to inform the MR experience. Such input includes motion (e.g., tracked by Leap Motion [69]), (geo)location, other participants, and in a more general sense anything sensors can track.
Output. This dimension considers output to one or more of the user's senses in order to change their perception. As we have seen, in most cases of MR this is exclusively visual output, but it can also encompass audio, haptics, taste/flavor, smell, as well as any other stimuli and sensory modalities like temperature, balance, etc.
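To make the dimensions concrete before classifying the notions in Table 3, they can be written down as a small data structure. The following Python sketch uses our own naming (it is not tooling from the paper); the enums mirror the value ranges used in Table 3.

    from dataclasses import dataclass, field
    from enum import Enum

    class Level(Enum):
        """Shared scale for level of immersion and level of virtuality."""
        NOT = "not"
        PARTLY = "partly"
        FULLY = "fully"

    class Interaction(Enum):
        IMPLICIT = "implicit"   # e.g., walking around a registered virtual object
        EXPLICIT = "explicit"   # e.g., intentionally manipulating the MR scene

    @dataclass
    class MRClassification:
        """One MR experience (or notion) along the seven dimensions."""
        environments: int               # number of physical and virtual environments
        users: int                      # number of users
        immersion: set[Level]           # level(s) of immersion covered
        virtuality: set[Level]          # level(s) of virtuality covered
        interaction: set[Interaction]   # implicit and/or explicit
        input: set[str] = field(default_factory=set)   # e.g., {"motion", "geolocation"}
        output: set[str] = field(default_factory=set)  # e.g., {"visual", "audio"}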
Table 3: Our conceptual framework for classifying MR experiences along seven dimensions, showing a classification of the six notions of MR that were derived from expert interviews and a literature review.

Dimension         # Environments   # Users      Level of Immersion     Level of Virtuality    Interaction           Input   Output
value             one    many      one   many   not   partly   fully   not   partly   fully    implicit   explicit   any     any
1—Continuum       ✔                ✔            ✔     ✔        ✔             ✔        ✔        ✔                     ✔       ✔
2—Synonym         ✔                ✔            ✔     ✔                      ✔                  ✔                     ✔       ✔
3—Collaboration   ✔      ✔               ✔      ✔     ✔        ✔             ✔        ✔        ✔                     ✔       ✔
4—Combination     ✔                ✔            ✔     ✔                      ✔        ✔        ✔                     ✔       ✔
5—Alignment              ✔         ✔     ✔      ✔     ✔                ✔     ✔        ✔        ✔                     ✔       ✔
6—Strong AR       ✔                ✔                  ✔                      ✔                  ✔          ✔          ✔       ✔
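Assuming the sketch above, the two worked examples discussed below (see "How to use the conceptual framework") could be instantiated as follows, with all field values taken directly from the classifications given in the text.

    # Yannier et al. [87], based on notion 5—Alignment:
    yannier = MRClassification(
        environments=2,                          # earthquake table and projection
        users=1,                                 # "one to many"; minimum shown here
        immersion={Level.NOT, Level.PARTLY},
        virtuality={Level.NOT, Level.FULLY},     # env. 1 not virtual, env. 2 fully virtual
        interaction={Interaction.IMPLICIT, Interaction.EXPLICIT},
        input={"motion"},                        # tracked with a Kinect
        output={"visual"},
    )

    # Pokémon GO as a whole, based on notion 4—Combination (per I5):
    pokemon_go = MRClassification(
        environments=1,
        users=1,
        immersion={Level.NOT, Level.PARTLY},
        virtuality={Level.PARTLY, Level.FULLY},  # AR view and fully virtual map view
        interaction={Interaction.IMPLICIT},      # explicit interaction happens via a HUD
        input={"geolocation"},
        output={"visual", "audio"},
    )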
In addition to the notion of MR, it is important to specify these two dimensions for specific MR experiences, since many consider MR on a purely visual basis. Yet, different types of output and input can imply entirely different requirements, particularly in terms of the necessary hardware.
In Table 3, we have classified the six notions of MR according to these dimensions. For instance, 1—Continuum spans a whole range of MR experiences and has therefore been classified as all possible types of immersion, but does not cover cases that feature no virtual content whatsoever. In contrast, when understanding MR as alignment of environments (5—Alignment), one of the environments can be completely without virtual content. The individual dimensions' values we have chosen are sufficient for this purpose, but can be adjusted for more fine-grained classification. For instance, many MR applications use a mix of implicit and explicit interactions to various degrees. While watching 360-degree photos involves purely implicit interaction, explicit interaction can, e.g., vary from simple clicks on digital objects to changing the environment using gestures.

How to use the conceptual framework
To conclude this section, we want to illustrate the use of our conceptual framework with two examples.
Yannier et al. [87]. The authors present a system in which a Kinect observes real building block towers on an earthquake table (environment 1) and automatically synchronizes their state with virtual towers in a projection (environment 2). They state that "Mixed-reality environments, including tangible interfaces, bring together the physical and virtual worlds by sensing physical interaction and providing interactive feedback". This experience is based on MR as alignment of environments.
According to Table 3, it can be classified as featuring: many environments, one to many users, a level of immersion that is between not immersive and partly immersive, a level of virtuality that is both not virtual (environment 1) and fully virtual (environment 2), and implicit and explicit interaction (since the building blocks can be directly manipulated). Moreover, the MR experience provides visual output and receives motion as input, as tracked with a Kinect.
Pokémon GO according to Interviewee No. 5. According to I5, the whole of Pokémon GO, i.e., the combination of the fully virtual map view and the AR view in which one can catch Pokémon, is an MR experience. Hence, the considered notion is that of MR as a combination of AR and VR.
According to Table 3, it can be classified as featuring: one environment (since everything happens on one device and in one specific real-world location), one user, a level of immersion that is between not immersive and partly immersive, a level of virtuality that is both partly virtual (AR view) and fully virtual (map view), and implicit interaction (since explicit interaction happens via a HUD). Moreover, Pokémon GO provides visual as well as audio output and receives the user's geolocation as input.

9 DISCUSSION & FUTURE WORK
We have identified six existing notions of MR and from these derived a conceptual framework, which is an important step towards being able to more thoroughly classify and discuss MR experiences. While existing taxonomies or conceptual frameworks are well suited for specific use cases or aspects of MR, they do not intend to cover the complete landscape as described in this paper: [44, 45] are essentially included in the dimension "level of virtuality", while [29] only considers visualization techniques and provides a taxonomy specific to image-guided surgery; [65] conceptualizes MR in terms of transforms, which allows for a more detailed classification in terms of explicit interaction.
We also need to acknowledge the limitations of our work. First, it is rather academia-centric. Even though we recruited half of our interviewees from industry and they directly informed several of the notions of MR, there is a chance that we missed other notions that exist beyond academia. Second, while our literature review included 68 sources, there is always more literature to be reviewed, in order to get an even more thorough understanding of the MR landscape. Third, the conceptual framework was derived based on the
[18] Daniel Dobler, Michael Haller, and Philipp Stampfl. 2002. ASR: Augmented Sound Reality. In ACM SIGGRAPH 2002 Conference Abstracts and Applications (SIGGRAPH '02). ACM, New York, NY, USA, 148. https://doi.org/10.1145/1242073.1242161
[19] Clarence A. Ellis, Simon J. Gibbs, and Gail Rein. 1991. Groupware: Some Issues and Experiences. Commun. ACM 34, 1 (1991), 39–58.
[20] Steven K. Feiner. 2002. Augmented Reality: A New Way of Seeing. Scientific American 286, 4 (2002), 48–55. http://www.jstor.org/stable/26059641
[21] Maribeth Gandy and Blair MacIntyre. 2014. Designer's Augmented Reality Toolkit, Ten Years Later: Implications for New Media Authoring Tools. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST '14). ACM, New York, NY, USA, 627–636. https://doi.org/10.1145/2642918.2647369
[22] Çağlar Genç, Shoaib Soomro, Yalçın Duyan, Selim Ölçer, Fuat Balcı, Hakan Ürey, and Oğuzhan Özcan. 2016. Head Mounted Projection Display & Visual Attention: Visual Attentional Processing of Head Referenced Static and Dynamic Displays While in Motion and Standing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 1538–1547. https://doi.org/10.1145/2858036.2858449
[23] Perttu Hämäläinen, Joe Marshall, Raine Kajastila, Richard Byrne, and Florian "Floyd" Mueller. 2015. Utilizing Gravity in Movement-Based Games and Play. In Proceedings of the 2015 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY '15). ACM, New York, NY, USA, 67–77. https://doi.org/10.1145/2793107.2793110
[24] Vinzenz Hediger and Alexandra Schneider. 2005. The Deferral of Smell: Cinema, Modernity and the Reconfiguration of the Olfactory Experience. In I cinque sensi del cinema/The Five Senses of Cinema, eds. Alice Autelitano, Veronica Innocenti, and Valentina Re. Forum, Udine, 243–252.
[25] Otmar Hilliges, David Kim, Shahram Izadi, Malte Weiss, and Andrew Wilson. 2012. HoloDesk: Direct 3D Interactions with a Situated See-through Display. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 2421–2430. https://doi.org/10.1145/2207676.2208405
[26] G. Hough, I. Williams, and C. Athwal. 2014. Measurements of Live Actor Motion in Mixed Reality Interaction. In 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 99–104. https://doi.org/10.1109/ISMAR.2014.6948414
[27] Intel. 2018. Demystifying the Virtual Reality Landscape. https://www.intel.com/content/www/us/en/tech-tips-and-tricks/virtual-reality-vs-augmented-reality.html
[28] Hajime Kajita, Naoya Koizumi, and Takeshi Naemura. 2016. SkyAnchor: Optical Design for Anchoring Mid-air Images Onto Physical Objects. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 415–423. https://doi.org/10.1145/2984511.2984589
[29] M. Kersten-Oertel, P. Jannin, and D. L. Collins. 2012. DVV: A Taxonomy for Mixed Reality Visualization in Image Guided Surgery. IEEE Transactions on Visualization and Computer Graphics 18, 2 (Feb 2012), 332–352. https://doi.org/10.1109/TVCG.2011.50
[30] Boriana Koleva, Steve Benford, and Chris Greenhalgh. 1999. The Properties of Mixed Reality Boundaries. Springer Netherlands, Dordrecht, 119–137. https://doi.org/10.1007/978-94-011-4441-4_7
[31] Martijn J.L. Kors, Gabriele Ferri, Erik D. van der Spek, Cas Ketel, and Ben A.M. Schouten. 2016. A Breathtaking Journey. On the Design of an Empathy-Arousing Mixed-Reality Game. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY '16). ACM, New York, NY, USA, 91–104. https://doi.org/10.1145/2967934.2968110
[32] A. Lassagne, A. Kemeny, J. Posselt, and F. Merienne. 2018. Performance Evaluation of Passive Haptic Feedback for Tactile HMI Design in CAVEs. IEEE Transactions on Haptics 11, 1 (Jan 2018), 119–127. https://doi.org/10.1109/TOH.2017.2755653
[33] C. Lee, G. A. Rincon, G. Meyer, T. Höllerer, and D. A. Bowman. 2013. The Effects of Visual Realism on Search Tasks in Mixed Reality Simulation. IEEE Transactions on Visualization and Computer Graphics 19, 4 (April 2013), 547–556. https://doi.org/10.1109/TVCG.2013.41
[34] @LenaRogl. 2017. Was die Hololens macht ist übrigens weder #AR noch #VR, sondern #MixedReality :) [By the way, what HoloLens does is neither #AR nor #VR, but #MixedReality :)]. Tweet. Retrieved May 28, 2018 from https://twitter.com/LenaRogl/status/869851941966290945
[35] David Lindlbauer and Andy D. Wilson. 2018. Remixed Reality: Manipulating Space and Time in Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 129, 13 pages. https://doi.org/10.1145/3173574.3173703
[36] Pedro Lopes, Sijing You, Alexandra Ion, and Patrick Baudisch. 2018. Adding Force Feedback to Mixed Reality Experiences and Games Using Electrical Muscle Stimulation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 446, 13 pages. https://doi.org/10.1145/3173574.3174020
[37] Laura Lotti. 2013. Through the Augmenting-Glass: The Rhetorics of Augmented Reality Between Code and Interface. Itineration: Cross-Disciplinary Studies in Rhetoric, Media, and Culture (March 2013).
[38] Blair MacIntyre, Maribeth Gandy, Steven Dow, and Jay David Bolter. 2004. DART: A Toolkit for Rapid Design Exploration of Augmented Reality Experiences. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST '04). ACM, New York, NY, USA, 197–206. https://doi.org/10.1145/1029632.1029669
[39] Laura Malinverni, Julian Maya, Marie-Monique Schaper, and Narcis Pares. 2017. The World-as-Support: Embodied Exploration, Understanding and Meaning-Making of the Augmented World. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 5132–5144. https://doi.org/10.1145/3025453.3025955
[40] D. Mandl, K. M. Yi, P. Mohr, P. M. Roth, P. Fua, V. Lepetit, D. Schmalstieg, and D. Kalkofen. 2017. Learning Lightprobes for Mixed Reality Illumination. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 82–89. https://doi.org/10.1109/ISMAR.2017.25
[41] Mark McGill, Alexander Ng, and Stephen Brewster. 2017. I Am The Passenger: How Visual Motion Cues Can Influence Sickness For In-Car VR. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 5655–5668. https://doi.org/10.1145/3025453.3026046
[42] David McGookin, Koray Tahiroğlu, Tuomas Vaittinen, Mikko Kytö, Beatrice Monastero, and Juan Carlos Vasquez. 2017. Exploring Seasonality in Mobile Cultural Heritage. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 6101–6105. https://doi.org/10.1145/3025453.3025803
[43] Paul Milgram and Herman Colquhoun Jr. 1999. A Taxonomy of Real and Virtual World Display Integration.
[44] Paul Milgram and Fumio Kishino. 1994. A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Information and Systems 77, 12 (1994), 1321–1329.
[45] Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino. 1995. Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum. In Telemanipulator and Telepresence Technologies, Vol. 2351. 282–293.
[46] Takashi Miyaki and Jun Rekimoto. 2016. LiDARMAN: Reprogramming Reality with Egocentric Laser Depth Scanning. In ACM SIGGRAPH 2016 Emerging Technologies (SIGGRAPH '16). ACM, New York, NY, USA, Article 15, 2 pages. https://doi.org/10.1145/2929464.2929481
[47] Thomas B. Moeslund and Erik Granum. 2001. A Survey of Computer Vision-Based Human Motion Capture. Computer Vision and Image Understanding 81, 3 (2001), 231–268. https://doi.org/10.1006/cviu.2000.0897
[48] C. Morales, T. Oishi, and K. Ikeuchi. 2014. [Poster] Turbidity-based Aerial Perspective Rendering for Mixed Reality. In 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 283–284. https://doi.org/10.1109/ISMAR.2014.6948451
[49] Jens Müller, Roman Rädle, and Harald Reiterer. 2016. Virtual Objects As Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 1245–1249. https://doi.org/10.1145/2858036.2858043
[50] Jens Müller, Roman Rädle, and Harald Reiterer. 2017. Remote Collaboration With Mixed Reality Displays: How Shared Virtual Landmarks Facilitate Spatial Referencing. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 6481–6486. https://doi.org/10.1145/3025453.3025717
[51] Elizabeth D. Mynatt, Maribeth Back, Roy Want, and Ron Frederick. 1997. Audio Aura: Light-weight Audio Augmented Reality. In Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology (UIST '97). ACM, New York, NY, USA, 211–212. https://doi.org/10.1145/263407.264218
[52] Arinobu Niijima and Takefumi Ogawa. 2016. Study on Control Method of Virtual Food Texture by Electrical Muscle Stimulation. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16 Adjunct). ACM, New York, NY, USA, 199–200. https://doi.org/10.1145/2984751.2984768
[53] M. Ohta, S. Nagano, H. Niwa, and K. Yamashita. 2015. [Poster] Mixed-Reality Store on the Other Side of a Tablet. In 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 192–193. https://doi.org/10.1109/ISMAR.2015.60
[54] Leif Oppermann, Clemens Putschli, Constantin Brosda, Oleksandr Lobunets, and Fabien Prioville. 2015. The Smartphone Project: An Augmented Dance Performance. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 2569–2572. https://doi.org/10.1145/2702123.2702538
[55] Sergio Orts-Escolano, Christoph Rhemann, Sean Fanello, Wayne Chang, Adarsh Kowdle, Yury Degtyarev, David Kim, Philip L. Davidson, Sameh Khamis, Mingsong Dou, Vladimir Tankovich, Charles Loop, Qin Cai, Philip A. Chou, Sarah Mennicken, Julien Valentin, Vivek Pradeep, Shenlong Wang, Sing Bing Kang, Pushmeet Kohli, Yuliya Lutchyn, Cem Keskin, and Shahram Izadi. 2016. Holoportation: Virtual 3D Teleportation in Real-time. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 741–754. https://doi.org/10.1145/2984511.2984517
[56] Clément Pillias, Raphaël Robert-Bouchard, and Guillaume Levieux. 2014. Designing Tangible Video Games: Lessons Learned from the Sifteo Cubes. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, NY, USA, 3163–3166. https://doi.org/10.1145/2556288.2556991
[57] Thammathip Piumsomboon, Arindam Day, Barrett Ens, Youngho Lee, Gun Lee, and Mark Billinghurst. 2017. Exploring Enhancements for Remote Mixed Reality Collaboration. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (SA '17). ACM, New York, NY, USA, Article 16, 5 pages. https://doi.org/10.1145/3132787.3139200
[58] Thammathip Piumsomboon, Gun A. Lee, Jonathon D. Hart, Barrett Ens, Robert W. Lindeman, Bruce H. Thomas, and Mark Billinghurst. 2018. Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 46, 13 pages. https://doi.org/10.1145/3173574.3173620
[59] Belma Ramic-Brkic and Alan Chalmers. 2010. Virtual Smell: Authentic Smell Diffusion in Virtual Environments. In Proceedings of the 7th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa (AFRIGRAPH '10). ACM, New York, NY, USA, 45–52. https://doi.org/10.1145/1811158.1811166
[60] Nimesha Ranasinghe and Ellen Yi-Luen Do. 2016. Virtual Sweet: Simulating Sweet Sensation Using Thermal Stimulation on the Tip of the Tongue. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16 Adjunct). ACM, New York, NY, USA, 127–128. https://doi.org/10.1145/2984751.2985729
[61] Stuart Reeves, Christian Greiffenhagen, Martin Flintham, Steve Benford, Matt Adams, Ju Row Farr, and Nicholas Tandavantij. 2015. I'd Hide You: Performing Live Broadcasting in Public. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 2573–2582. https://doi.org/10.1145/2702123.2702257
[62] H. Regenbrecht, K. Meng, A. Reepen, S. Beck, and T. Langlotz. 2017. Mixed Voxel Reality: Presence and Embodiment in Low Fidelity, Visually Coherent, Mixed Reality Environments. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 90–99. https://doi.org/10.1109/ISMAR.2017.26
[63] Derek Reilly, Andy Echenique, Andy Wu, Anthony Tang, and W. Keith Edwards. 2015. Mapping out Work in a Mixed Reality Project Room. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 887–896. https://doi.org/10.1145/2702123.2702506
[64] T. Richter-Trummer, D. Kalkofen, J. Park, and D. Schmalstieg. 2016. Instant Mixed Reality Lighting from Casual Scanning. In 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 27–36. https://doi.org/10.1109/ISMAR.2016.18
[65] Yvonne Rogers, Mike Scaife, Silvia Gabrielli, Hilary Smith, and Eric Harris. 2002. A Conceptual Framework for Mixed Reality Environments: Designing Novel Learning Activities for Young Children. Presence: Teleoperators and Virtual Environments 11, 6 (2002), 677–686. https://doi.org/10.1162/105474602321050776
[66] K. Rohmer, W. Büschel, R. Dachselt, and T. Grosch. 2014. Interactive Near-field Illumination for Photorealistic Augmented Reality on Mobile Devices. In 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 29–38. https://doi.org/10.1109/ISMAR.2014.6948406
[67] C. Rolim, D. Schmalstieg, D. Kalkofen, and V. Teichrieb. 2015. [Poster] Design Guidelines for Generating Augmented Reality Instructions. In 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 120–123. https://doi.org/10.1109/ISMAR.2015.36
[68] Joan Sol Roo, Jean Basset, Pierre-Antoine Cinquin, and Martin Hachet. 2018. Understanding Users' Capability to Transfer Information Between Mixed and Virtual Reality: Position Estimation Across Modalities and Perspectives. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 363, 12 pages. https://doi.org/10.1145/3173574.3173937
[69] Joan Sol Roo, Renaud Gervais, Jeremy Frey, and Martin Hachet. 2017. Inner Garden: Connecting Inner States to a Mixed Reality Sandbox for Mindfulness. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 1459–1470. https://doi.org/10.1145/3025453.3025743
[70] Joan Sol Roo and Martin Hachet. 2017. One Reality: Augmenting How the Physical World is Experienced by Combining Multiple Mixed Reality Modalities. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST '17). ACM, New York, NY, USA, 787–795. https://doi.org/10.1145/3126594.3126638