
When XR and AI Meet - A Scoping Review on Extended Reality and Artificial Intelligence

Published: 19 April 2023

Abstract

Research on Extended Reality (XR) and Artificial Intelligence (AI) is booming, which has led to an emerging body of literature in their intersection. However, the main topics in this intersection are unclear, as are the benefits of combining XR and AI. This paper presents a scoping review that highlights how XR is applied in AI research and vice versa. We screened 2619 publications from 203 international venues published between 2017 and 2021, followed by an in-depth review of 311 papers. Based on our review, we identify five main topics at the intersection of XR and AI, showing how the two fields can benefit each other. Furthermore, we present a list of commonly used datasets, software, libraries, and models to help researchers interested in this intersection. Finally, we present 13 research opportunities and recommendations for future work in XR and AI research.

1 Introduction

Extended Reality (XR) and Artificial Intelligence (AI) have become prominent research topics in Human-Computer Interaction (HCI) and Computer Science in general. Previously, research on these topics happened primarily within their respective fields. However, tools and technologies such as Unity3D and Keras have made XR and AI more accessible to researchers from different domains and backgrounds. As a consequence, a new research field has emerged at the intersection of XR and AI. On the one hand, XR researchers employ AI methods for problems, such as foveated rendering [357], object tracking [202, 416], or predicting virtual reality (VR) sickness [199, 373]. On the other hand, AI researchers use XR technologies to address issues, such as understandability, say, by visualizing neural networks in VR [243], and explainability, for example, by providing immersive interfaces to train machine learning (ML) models for non-experts [125]. Furthermore, in 2018, ACM and IEEE launched new conferences to specifically address research at the intersection of XR and AI.
Currently, it is difficult to obtain an overview of the research at this intersection. There are some reviews that summarize the literature on XR and AI for certain topics. For example, they analyze intelligent embodied agents [262], production systems [299], or specific use cases, such as surgery simulations [383] or medical education [99]. However, the purpose of these works is to answer a specific question on applying XR and AI to an external use case. In contrast, we aim to provide a comprehensive account of the current landscape and research directions at the intersection of XR and AI. To remedy this situation, we present a scoping review covering 311 papers published between 2017 and 2021. Scoping reviews aim to map the breadth of the available evidence [340]. In doing so, we follow the process suggested by Cooper et al. [68] and Aromataris and Munn [21]. First, we screened 2619 publications from 203 venues to cover the broad spectrum of XR and AI research. For the search, we used an inductively built set of XR and AI terms. The venues include research from XR, AI, Human-Computer Interaction, Computer Graphics, Computer Vision, and others (see section D for a complete list of the venues). After a two-phase screening process, we reviewed and extracted data from 311 full papers based on a code book with 26 codes about the research direction, contribution, and topics of the papers, as well as the algorithms, tools, datasets, models, and data types the researchers used to address research questions on XR and AI. The extracted data for these codes form the basis for our predominantly narrative synthesis. As a result, we found five main topics at the intersection of XR and AI: (1) Using AI to create XR worlds (28.6%), (2) Using AI to understand users (19.3%), (3) Using AI to support interaction (15.4%), (4) Investigating interaction with intelligent virtual agents (IVAs) (8.0%), and (5) Using XR to Support AI Research (2.3%). The remaining 23.8% of the papers apply XR and AI to an external problem, such as for medical training applications (3.5%) or for simulation purposes (3.0%). Finally, we summarize our findings in 13 research opportunities and present ideas and recommendations for how to address them in future work. Some of the most pressing issues are a lack of generative use of AI to create worlds, understand users, and enhance interaction, a lack of generalizability and robustness, and a lack of discussion about ethical and societal implications.
In summary, we make the following contributions: First, we summarize the state-of-the-art XR and AI research by presenting a typology including five main topics. We also provide a dataset of the reviewed papers, including the extracted data for the codes. Second, we present an overview of algorithms, tools, datasets, models, data types, and user study data from the reviewed papers. We also provide a list of commonly used datasets, software, libraries, ML networks, and models in section A. This list can help researchers interested in XR and AI research to find suitable tools. Third, we critically discuss current research gaps, and provide 13 research opportunities, as well as recommendations for future work.

2 Background

In this section, we first discuss existing reviews on particular issues in XR and AI research. Second, we introduce our understanding of the terms XR and AI.

2.1 Literature Reviews on XR and AI

Existing reviews on XR and AI typically focus on a particular aspect of XR and AI research, but do not cover their intersection comprehensively. Lampropoulos et al. [186], for example, reviewed applications of deep learning, semantic web, and knowledge graphs to improve augmented reality (AR). They identified object detection, image processing, and computer vision as three areas where deep learning can enhance user experience and services in AR. Throughout the paper, AI is presented as a technology to enhance the detection of input like gestures or speech in AR. However, the paper is not specific about which techniques should be used for these purposes. Furthermore, there are many reviews of IVAs [262, 263]. Norouzi et al. [262] presented a systematic review on embodied agents in AR head-mounted displays (HMDs). They identified the application of embodied agents in assistive and collaborative roles as one of the emerging trends. One of the major challenges in this area is to enhance agents’ understanding of their physical environment. Two further emerging trends are the use of agents as companions (e.g., as therapy partners) and the modeling of agent personality and empathy. Other articles focused on IVAs in a certain domain, for example, education and training [300], professional skills training [42], or healthcare [225]. Some reviews specifically address empathy [270] or the nonverbal behavior [30] of agents. We also found reviews that synthesize literature about applying both XR and AI to a specific use case. Frequent examples in this category are works from the medical domain, such as clinical simulation for nursing pain education [119], using ML to assess surgical expertise in a VR simulation [383], personalizing doctor-patient surgical risk communication [14], or the application of AI and AR/VR in medical education [99]. These papers report insights about applying AI and XR (mainly VR or AR) to a particular external use case, like in the medical domain. However, they do not address the state of the art of XR and AI research itself. We differ from these reviews not only in methodology (i.e., using a scoping review instead of a systematic review), but also in our aim of giving an overview of the state of the art.
We found three papers that describe the broad range of research at the intersection of XR and AI. Luck and Aylett [224] coined the term intelligent virtual environments in their 2000 article about “applying artificial intelligence to VR”. A key concept that they use to discuss work on intelligent virtual environments is autonomy. Being very much an outlook on future systems, their work provides an interesting preamble to ours. Ribeiro de Oliveira et al. [299] reviewed papers with a focus on how VR and AI are applied to specific problems in “industry, commerce, services, logistics, processes, or systems”. This complements our research, which focuses on basic research at the intersection of the two fields without addressing how both technologies are applied to such external problems. The authors point out that AI methods mostly contribute high precision and high efficiency to VR problems (e.g., in surgery). The main drawbacks of applying AI to VR problems are a lack of training data and high computational costs. The most recent article in this area, by Reiners et al. [296], covers the combination of XR and AI research. The main applications of XR and AI revealed in their review are training (i.e., medical and military), gaming, robots and autonomous cars, and advanced visualization. These existing reviews focus on the fields that XR and AI are applied to; in general, much of this work concerns the medical domain and training. The papers point out that computational costs and limited training data are two major issues that limit current methods. The work by Luck and Aylett [224] is more conceptual, identifying autonomy as an important axis along which to describe intelligent virtual instances. In contrast to these works, our review focuses on the state of basic research at the intersection of XR and AI. More precisely, we are not interested in the application domains XR and AI are applied to, but in how they are used with respect to fundamental research questions, for example, about interaction techniques in XR or user characteristics.

2.2 Our Understanding of XR and AI

In the following, we characterize the understanding of XR and AI used in this work. XR is typically referred to as an umbrella term for VR, AR, and mixed reality (MR) [37, 112, 245, 293]. VR refers to immersing users in a completely virtual environment that blocks out the real world. In contrast, AR refers to virtual objects being superimposed on an existing, three-dimensional real-world environment [23] using projection, optical, or video see-through devices. While researchers generally agree on these notions of VR and AR, the case is more complicated for MR [333]. The reality-virtuality continuum by Milgram and Kishino [246] typically serves as the basis for discussions around the term MR, but it has been criticised for not fully covering modern, more advanced technologies [333]. XR covers all of these notions (VR, AR, MR), and since we aim to cover the breadth of XR research, we include all of the above terms in our definition of XR.
In the case of AI, giving a definition is more challenging. Numerous articles aim to address the problem of defining AI [92]. As highlighted by Wang [375], early definitions of the term “indicate the same scope of intelligence as we see in human action” [261], or more abstractly note that “intelligence usually means the ability to solve hard problems” [249]. However, to date, “there is no widely accepted definition of AI” [375]. The ACM Computing Classification System lists AI, as well as ML, as computing methodologies, while in other cases ML is often considered a sub-category of AI. With our work, we do not aim to add another definition of AI to this collection. Instead, our definition of AI is reflected in the set of keywords chosen for the search, which we derived following an inductive, data-driven approach. We include articles that communicate on a high level that they used AI or that specify a concrete AI method. Since, to our knowledge, no clear list of such methods currently exists, we adopted an iterative approach to obtain AI terms.

3 Method

We aim to identify and examine the state of the art of XR and AI research and, therefore, chose to conduct a scoping review. Scoping reviews aim to “provide a preliminary assessment of the potential size and scope of available research literature” [340]. While systematic reviews typically focus on one precise question [340], scoping reviews aim to explore the “range of evidence” [277] rather than dive deep into one particular question [68, 159, 256]. Since their aim is to assess the full scope of literature on a topic, literature is included regardless of methodological quality [17]. Yet, some authors argue that some form of quality assessment should take place to better identify critical gaps in evidence rather than just a “lack of research” [197, 282]. Consequently, we aimed to cover a broad range of venues and only limited the publication type to full papers as a quality criterion. Furthermore, a formal synthesis is typically not carried out (as opposed to systematic reviews, which require one) [277]. Instead, scoping reviews present and structure the located evidence and give an overview of the studies or research contributions on a topic [277]. We followed the checklists suggested by Cooper et al. [68] and Aromataris and Munn [21] for the procedure and the development of the protocol.

3.1 Research Questions and Rationale

This review is guided by the following research questions (RQs):
RQ1
What are the main topics researched at the intersection of XR and AI?
RQ2
What are the main problem areas that are addressed with XR and AI research?
RQ3
How is the research conducted? In particular, what algorithms, tools, datasets, models, data types, and user study data are employed to conduct the research?

3.2 Search Strategy

3.2.1 Definition of Keywords.

It is a non-trivial task to choose an appropriate set of keywords that covers the full spectrum of XR and AI research. To avoid subjective bias, we chose to define the keywords through a data-driven approach. That means, we defined one XR-related and one AI-related keyword set based on the key terms that are used in the literature. Through this approach we ensure that we find the majority of related keywords and are not limited to our own knowledge or biases towards terms that we think describe XR and AI best.
Method to define XR-related keywords. We started with the keywords “extended reality”, “virtual reality”, “augmented reality”, and “mixed reality”. We then used this list to search the 2021 proceedings of two XR-related venues (ISMAR and VRST). We compared the retrieved set of papers with the full 2021 proceedings of both conferences and noted the papers that were missing from the result list. Author A then read the title, abstract, and author keywords of these missed publications and identified additional XR terms (e.g., “head-mounted display” and “virtual space”). The aim of this process was to retrieve the full proceedings with the selected XR-related keyword list. Table 1 shows the full keyword list.
Method to define AI-related keywords. For the AI keywords we started with the keywords “artificial intelligence” and “machine learning”. We then searched the 2021 proceedings of two machine learning conferences (ICML and NeurIPS) with this set of keywords and compared the result with the full proceedings. Again, author A went through title, abstract, and author keywords to identify additional AI-related keywords. The complete list is shown in Table 1.
Table 1:
XR-related keywords: augmented reality, AR, extended reality, head-mounted display, head-up display, head-worn display, headset, HMD, immersive environment, mixed reality, virtual environment, virtual reality, virtual space, VR, XR
AI-related keywords: agent, artificial intelligence, bandit, classif*, cluster*, computational, computer vision, dataset, deep, estimation, generative, intelligent, learning, machine learning, markov, model*, natural language processing, neural, optimi*, predict*, reasoning, recognition, segmentation, *supervised*, tensor
Table 1: Keyword list for the literature search. The search term was constructed by putting an OR operator between each phrase within a set and an AND operator between the two keyword sets.
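To make the query construction concrete, the following minimal Python sketch (our illustration, not the authors’ actual tooling) assembles the boolean search term from the two keyword sets in Table 1, quoting multi-word phrases and leaving wildcards such as “classif*” for the database to expand:

```python
# Illustrative sketch: OR within each keyword set, AND between the two sets.
XR_KEYWORDS = [
    "augmented reality", "AR", "extended reality", "head-mounted display",
    "head-up display", "head-worn display", "headset", "HMD",
    "immersive environment", "mixed reality", "virtual environment",
    "virtual reality", "virtual space", "VR", "XR",
]

AI_KEYWORDS = [
    "agent", "artificial intelligence", "bandit", "classif*", "cluster*",
    "computational", "computer vision", "dataset", "deep", "estimation",
    "generative", "intelligent", "learning", "machine learning", "markov",
    "model*", "natural language processing", "neural", "optimi*",
    "predict*", "reasoning", "recognition", "segmentation",
    "*supervised*", "tensor",
]

def or_group(terms: list[str]) -> str:
    """Quote multi-word phrases and join all terms with OR."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# AND between the two keyword sets, applied to title/abstract/keywords.
query = f"{or_group(XR_KEYWORDS)} AND {or_group(AI_KEYWORDS)}"
print(query)
```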

3.2.2 Search.

We searched Web of Science and Scopus using an OR operator between keywords within each set and an AND operator between the two keyword sets. We limited our search to the title, abstract, and author keywords of an article. The specific queries for each database are shown in section B.
Furthermore, we applied a number of filters. We limited the search to the five years from 2017 to 2021 and only included articles in main conference proceedings and journal articles. We selected the last five years as the time period because we wanted to capture the current state of the art of XR and AI research, including the most recent developments in the field; moreover, previous years have in part been covered by several narrower reviews on these topics [296, 299]. We realized that Scopus does not index some of the ML conferences that we deemed important for our review (e.g., NeurIPS). Therefore, we decided to use Web of Science only. The initial search yielded 10714 records. By double-checking some of the conferences, we found errors in the database (e.g., the years 2020 and 2021 were missing for VRST, and the years 2019, 2020, and 2021 were missing for IEEE Transactions on Image Processing). For other publication venues, a substantial part of the papers published in certain years was missing (e.g., only 373 out of the 746 papers of the CHI 2021 proceedings were found). Consequently, we decided that Web of Science worked too unreliably and adopted a venue-based approach.
We selected a set of venues based on the search results. We found papers from a total of 1361 publication venues, including conference proceedings and journal publications. Authors A and F then identified which venues should be included in the search. They first individually coded 25% of the venues with include (yes/no) (intercoder reliability: 82%). After resolving conflicts, the remaining venues were coded by author A. The criteria for including a publication venue are shown in section C. The complete list consists of 203 publication venues and is shown in section D. We then conducted a separate search with our search term for each of the venues on the publisher websites: ACM DL, IEEE Xplore, ScienceDirect, PMLR, and NeurIPS Proceedings. We used the same search query and filters as in the initial search.
Figure 1: The PRISMA-ScR flowchart documents the scoping review process from identification of sources to the final sample of articles that are included for data extraction.

3.3 Evidence Screening and Selection

We adopted a two-phase screening process: in the first phase, we screened the papers based on title, abstract, and author keywords, and in the second based on the full text. Figure 1 shows the PRISMA diagram for scoping reviews (PRISMA-ScR) [356]. It details the complete process from initiating the search to identifying the papers included in the analysis. For each screening phase, we first conducted a calibration phase on 10% of the records, in which all coders (authors A-E) screened the same set of papers, followed by single coding of the remaining papers.
Exclusion and inclusion criteria. Based on our research questions, we defined the following exclusion (EC) and inclusion (IC) criteria. We derived them in an iterative process: Authors A and F first generated an initial set of criteria, which was refined with all authors after the screening process. Both screening phases used the same criteria, except EC9, which we added after the first screening phase and which thus only applied in the second one.
EC1
Not in main proceedings: adjunct, poster, extended abstract, companion proceedings, short paper, workshop proposal, position paper, demo, editorial.
EC2
Survey or literature review: We excluded surveys, literature reviews, and opinion pieces.
EC3
Year: not published between 2017 and 2021.
EC4
Missing term: No XR or AI term is mentioned. We found a considerable number of papers in the result list that should not have been returned by the search engine.
EC5
False positive: Words/terms are used in a different sense of the word (e.g., “model” is used in the context of 3D modeling but not to refer to an ML model).
EC6
Example mention: An XR or AI term is only mentioned as an example in the abstract or introduction (e.g., as motivation), but the work itself does not use any type of XR or AI.
EC7
Example application: An XR term is used as one example application or implementation, or the XR term refers to training, testing, or simulating an AI method in a virtual environment, but not for actual deployment.
EC8
Dataset: A dataset is presented, but no XR or AI reference is made.
EC9
Lacking information: The paper does not present enough details to allow for full application of the codes.
IC1
AI method applied for XR: An AI method is applied for an XR problem (e.g., for redirected walking, viewport generation, or sickness prediction).
IC2
XR applied for AI: An XR technology is applied for an AI problem (e.g., to visualize neural networks in VR).
IC3
Interaction with embodied AI: Papers aiming to enhance interaction with intelligent VAs.
IC4
Application focus: XR and AI are applied to a problem, but are not the focus of the presented research (e.g., an AR-based system that helps with tumor recognition and applies deep learning to estimate positions).
IC5
Requires further reading: We included papers for the second screening phase when it was not clear from the abstract whether they met the inclusion criteria.
Before the first round of screening, we implemented a script to identify obvious exclusion cases. First, the script identified whether there was no XR or AI term in title, abstract, or author keywords. Such cases should not have been found by the search engines in the first place, but we still had some cases in our list. Second, it identified survey papers (by looking for the words “survey” and “review” in the respective fields). It also highlighted cases where “learning” referred to an educational context and cases where “AR” was a false positive, such as in “LDAR”. Lastly, it identified duplicates by comparing the DOI of the papers and highlighted whether a paper was not published in the main proceedings (by highlighting the words “extended abstract”, “short paper”, “poster”, “adjunct”, “companion proceedings”). Author A reviewed and excluded these cases when needed.
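As an illustration, a pre-screening script of this kind could look like the following Python sketch; the field names, term lists, and heuristics are simplified placeholders, not the authors’ implementation:

```python
import re

# Illustrative term lists and exclusion-criterion flags (EC1, EC2, EC4).
XR_TERMS = ["virtual reality", "augmented reality", "mixed reality", "vr", "ar", "xr", "hmd"]
AI_TERMS = ["machine learning", "artificial intelligence", "neural", "deep learning", "model"]
NOT_MAIN = ["extended abstract", "short paper", "poster", "adjunct", "companion proceedings"]

def has_term(text: str, term: str) -> bool:
    """Whole-word match so that, e.g., 'ar' does not match inside 'learning'."""
    return re.search(rf"\b{re.escape(term)}\b", text) is not None

def flag_record(record: dict, seen_dois: set) -> list[str]:
    """Return the exclusion/inspection flags for one bibliographic record."""
    text = " ".join(record.get(f, "") for f in ("title", "abstract", "keywords")).lower()
    flags = []
    if not any(has_term(text, t) for t in XR_TERMS) or not any(has_term(text, t) for t in AI_TERMS):
        flags.append("EC4: no XR or AI term found")   # should not have been retrieved
    if has_term(text, "survey") or has_term(text, "review"):
        flags.append("EC2: possible survey or literature review")
    if has_term(text, "learning") and any(has_term(text, w) for w in ("student", "classroom", "course")):
        flags.append("'learning' possibly used in an educational sense")
    if has_term(text, "ldar"):                        # 'AR' inside an unrelated token
        flags.append("possible 'AR' false positive")
    if any(p in text for p in NOT_MAIN):
        flags.append("EC1: possibly not in main proceedings")
    doi = record.get("doi", "")
    if doi in seen_dois:
        flags.append("duplicate DOI")
    seen_dois.add(doi)
    return flags
```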
Table 2:
Research objective + contribution: C1 Category; C2 Research question/objective; C3 Contribution or main findings; C4 Contribution type; C5 AI part of the contribution?; C6 Limitations
User-based evaluation: C7 Type of user study; C8 Purpose of user study; C9 Metric for user-based evaluation; C10 Study details (e.g., sample size)
XR-related: C11 Type of XR; C12 Device type; C13 Interaction/application/task; C14 What XR problem is solved?
AI-related: C15 Custom implementation?; C16 Tool/library/dataset used; C17 Class of algorithm; C18 Details about algorithm; C19 Validation and test; C20 Performance + validation metric; C21 Model technique; C22 Purpose + application; C23 When/how AI is applied; C24 Data acquisition; C25 Publicly available resources; C26 What AI problem is solved?
Table 2: Summary of the codes used for data extraction. See Table 13 and Table 14 in the Appendix for the code book including a description for each code.
First round of screening. In the initial screening, 2619 unique records were screened based on title, abstract, and author keywords by authors A-E. First, the five authors separately screened the same subset of 10% of the papers (258). In a meeting after this calibration phase, the authors discussed discrepancies and refined the definitions of the exclusion and inclusion criteria. Before the discussion, there was full agreement for 56.6% of the records, where all authors coded the respective record with the same decision. For another 23.6% of the records, all but one coder agreed (i.e., four coders made the same decision) and the majority vote was taken. For the remaining 19.8%, the authors disagreed; these papers were discussed in a meeting and discrepancies were resolved. After the calibration, the remaining papers (2361) were distributed equally between authors A-E, resulting in a pool of 688 papers to be included in the second round of screening.
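For concreteness, the agreement statistics from such a calibration phase can be computed as in the following Python sketch; the five coders’ include/exclude votes per record are hypothetical inputs, not our actual coding data:

```python
from collections import Counter

def calibration_agreement(votes_per_record: list[list[str]]) -> dict:
    """Share of records with full agreement, all-but-one agreement
    (resolved by majority vote), and remaining disagreements."""
    full = majority = disagree = 0
    for votes in votes_per_record:            # e.g., ["include", "exclude", ...]
        top_count = Counter(votes).most_common(1)[0][1]
        if top_count == len(votes):
            full += 1                          # all coders agree
        elif top_count == len(votes) - 1:
            majority += 1                      # all but one agree -> majority vote
        else:
            disagree += 1                      # discussed and resolved in a meeting
    n = len(votes_per_record)
    return {"full": full / n, "majority": majority / n, "disagreement": disagree / n}

# Example with three records and five coders:
print(calibration_agreement([
    ["include"] * 5,
    ["include"] * 4 + ["exclude"],
    ["include", "include", "exclude", "exclude", "include"],
]))
```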
Second round of screening. The full-text screening was conducted together with the data extraction phase. The authors first screened the full text for eligibility with the same exclusion and inclusion criteria as in the first round. Only one criterion was added (EC9), which refers to the paper not presenting sufficient details on the methods, implementation, or results to apply the codes. If the paper was included, data extraction was performed.

3.4 Data Extraction and Code Book

A total of 311 papers were included in the data extraction phase. We charted data for four main categories: “research objective and contribution”, “user-based evaluation”, “XR-related codes”, and “AI-related codes”. The data items are presented in Table 2. The complete code book (i.e., the codes with descriptions) is presented in the Appendix in Table 13 and Table 14. Authors A-E coded the papers. The code book was developed in an iterative process that combined an inductive and a deductive strategy. We first started with a set of codes identified by author A. We then defined a random subset of 10% of the papers (69 papers), which we used for calibration to evaluate the suitability of the codes. During this phase, we had three meetings in which we discussed the codes’ suitability. Author A additionally discussed the codes with author F in three separate meetings. After the calibration phase, we had a final set of 26 codes. We performed single extraction for the remaining papers. The remaining 619 papers were distributed among coders A-E (A: 150, B: 150, C: 149, D: 150, E: 20).

3.5 Critical Appraisal, Potential Bias, and Limitations

Critical appraisal. Scoping reviews typically include all available evidence regardless of methodological quality [17]. We followed this approach. However, to obtain a manageable set of papers, we decided to include only full papers from conference proceedings and journal publications. Besides these filters, we did not exclude papers based on their methodological quality.
Limitations and potential bias. We acknowledge that our keyword selection process might have been influenced by the papers in the specific proceedings that we chose as its basis. We chose this process to reduce subjective bias as much as possible. A second point that made the selection of keywords difficult is that both concepts (XR and AI) lack a clear definition. This is especially the case for AI: being an ill-defined concept, it is difficult to find a comprehensive list of keywords, and existing lists are likely biased towards their authors’ understanding of AI. Generating a list through our outlined iterative, data-driven process was therefore the least biased solution that we found feasible. Although we might have missed some keywords, we are confident that we found the majority of relevant papers. Since we aimed to cover the breadth of research at the intersection of XR and AI, we defined broad keyword sets and intentionally included terms that might only loosely be connected to XR and AI (e.g., “virtual space” in addition to “virtual reality”, or “model” and “intelligent”). This approach left us with a high number of false positives, yet we deemed this breadth-first approach necessary to cover the full scope of XR and AI research. Nevertheless, we acknowledge that other researchers, with other understandings of XR and AI, might have selected a more focused set of keywords, which would have led to a slightly different sample. To reduce errors in the screening and coding phases, we conducted calibration phases with all coders on 10% of the records for both the screening and the data extraction phase. Furthermore, we had extensive discussion sessions to resolve conflicts and adapt the exclusion/inclusion criteria and code descriptions (two one-hour sessions for initial screening, three one-hour sessions for full-text screening and data extraction).
Figure 2: (A) Number of records per year. (B) Number of records per research direction.

3.6 Analysis and Informal Synthesis

Our analysis combines categorization, quantification, and narrative synthesis. First, we categorized the papers into topics based on C1. Then, we collected the research question(s) (C2) and contribution statements (C3, C4, C5) of each paper and summarized the main topic of each paper in one sentence. These sentences were grouped into topics using an approach inspired by affinity diagramming [222]. Author A performed this process, and all authors discussed the topics in three meetings. The result of this process is a typology of the state of the art of XR and AI research, which is presented in the next section. Furthermore, we summarized the quantitative codes (C10, C11, C12, C17, C21, C23, and C24) and, based on the summary of all codes, created a narrative synthesis, which is presented in the following.

4 Results

In the following, we present the review results. First, we give an overview of the papers’ research directions, publication venues, distribution of XR technologies, and distribution of keywords. Then, we present a typology of state-of-the-art XR and AI research and give an overview of the main problem areas and methods. Our aim is to reveal and discuss general trends and point out challenges that the intersection as a field has to face. Therefore, while we discuss several reviewed papers in detail, describing all included papers is not within the scope of this work. However, we provide the full list of papers as a resource for future in-depth analyses.

4.1 Overview of Papers

The final corpus consists of 311 unique papers. The papers were published in 50 publication venues, with the most frequent ones being VRST (57), AIVR (42), TVCG (35), ISMAR (28), and CHI (22). Table 15 in the Appendix shows the full list of the number of papers per publication venue. The number of published papers per year is increasing, with a slight drop in 2020 (see Figure 2 A). Notably, the number of publications on XR and AI has more than quadrupled from 2017 to 2021.

4.1.1 Research Directions.

Based on C1, we found four research directions (see Figure 2 B).
AI for XR: Papers that address or investigate an XR problem using an AI method (187/60%). These papers typically present an algorithm or model to address an issue in XR (e.g., VR sickness), often with a focus on prediction, and an empirical evaluation thereof.
XR for AI: Papers that address or investigate an AI problem using XR technologies (7/2.3%). These papers either use XR to visualize an AI technique to improve understandability, or use XR to generate training data.
Intelligent VAs: Papers that address or investigate a problem concerning intelligent VAs (43/13.8%). The papers are either concerned with the design of agents (e.g., physical appearance) or with how users perceive VAs (e.g., regarding trust).
XR and AI Applied: Papers that apply an XR technology and an AI method to an external problem (74/23.8%). These papers typically present applications, such as medical training applications, or use XR for simulation purposes (e.g., driving simulators). The focus in this research direction is not on an XR or AI problem. We grouped these papers into eight topic clusters, the largest being health-related training applications (18), simulation applications (13), and general training and learning applications (11). However, since our paper’s focus is on research that addresses problems within XR and AI, these papers will not be discussed further. Table 18 in the Appendix gives an overview of the clusters and papers.

4.1.2 Publication Venues.

Most of the papers were published in XR venues (36%), followed by Computer Graphics (19.6%), venues at the intersection of XR and AI (14.1%), and HCI (13.5%). Only 5.8% of the papers were published in AI venues. The remaining papers were published in venues on Artificial Agents (3.3%), Computer Vision (2.9%), Affective Computing (2.3%), Eye Tracking and Perception (2.3%), or others (0.3%). Table 16 in the Appendix shows an overview of the published papers per research direction and venue group. Since this paper is written from an HCI perspective, we took a closer look at the papers published in HCI venues (42/13.5%). With respect to research directions, the distribution of the HCI papers is almost identical to the overall distribution: AI for XR 60%, XR for AI 0%, Intelligent VAs 14%, and XR and AI Applied 26%.

4.1.3 Distribution of XR Technologies.

Most of the papers present research on VR (68%) or AR (21%). The remaining 11% present research about a relevant issue for XR, which is not actually tested in XR, but with images [221] or videos [29, 289]. The distribution of VR/AR for the research directions is: AI for XR (74% VR/19% AR); XR for AI (100% VR/0% AR); Intelligent VAs (67% VR/16% AR); and XR and AI applied (56% VR/37% AR).

4.1.4 Distribution of Keywords.

The most common keywords in title, abstract, and author keywords for XR were virtual reality (375) and VR (372). Extended reality was found only four times. For the AI keywords, the most common ones were learning (214) and model (207). Artificial intelligence was found 24 times. In Appendix Table 17, we show an overview of the complete list of keywords per research direction.
Figure 3: AI is used to create XR worlds by (1) creating a realistic replication of the real world, (2) modifying the real world, or (3) generating a synthetic world.

4.2 Typology of the State-Of-The-Art XR and AI Research

We present the state-of-the-art XR and AI research as a typology. To create the topics, we grouped the papers into clusters based on the extracted research questions and contribution statements (C2 and C3) as described in section 3.6. In the following, we present the topics for the first three research directions; the fourth is not of core interest to this review.
(1) Using AI to create XR worlds (89/37.6%);
(2) Using AI to understand users in XR (60/25.3%);
(3) Using AI to support interaction in XR (48/20.3%);
(4) Interaction with IVAs (25/10.5%);
(5) Using XR to support AI research (7/3%).

4.2.1 Using AI to Create XR Worlds.

AI is used to create virtual representations of environments, people (avatars), agents, and objects. These are created by either (1) realistically replicating the real world, (2) modifying the real world, or (3) generating a synthetic world (see Figure 3).
Creating Environments. With 34 (14.3%) papers, the largest cluster of creating XR worlds addresses the problem of creating an XR environment, most of them with a focus on realism. Nine papers present work on improving tracking or reconstructing real-world geometries in XR spaces (e.g., [71, 137, 353]). Besides visual representations, two works present the reproduction of spatial audio or sound effects in XR worlds [56, 162].
Chang et al. [56] use a generative adversarial network (GAN) to create real-time synthetic drum sounds in VR that users perceive as real. Kim et al. [162] present a system to recreate the spatial sound of a room, using a CNN to estimate depth from different images. The spatially synchronised audio is then reproduced by combining the depth estimates with the spatial sound library Resonance Audio. There is also work on realistically presenting virtual content [95, 107, 200], for example, by improving the rendering of motion cues to improve depth perception in VR [255, 303]. Another cluster is about improving image quality [54, 184, 387] and optimizing illumination [215, 231, 307]. Two works aim to improve the efficiency of algorithms [117, 411].
Seven examples modify an XR world by mapping a physical and a virtual environment. In this case, the content of the XR world is mapped to the physical world, creating a mix of real and virtual environments. For example, Taylor et al. [346] present an approach to create virtual representations of real rigid and non-rigid objects. They used a CNN to predict deformation parameters of said objects. Cheng et al. [60] present an optimization-based approach to automatize the process of placing virtual interfaces in the real environment to enhance user performance. Another example is the work by He et al. [122], which maps virtual objects to real objects. Yoon et al. [402] map the virtual environments of two users working in different physical spaces to allow them to interact in the same virtual space, while considering their individual physical constraints. Furthermore, there is work on correctly placing virtual characters according to real-world scene semantics [187].
We found one example that followed a generative approach to create an environment. Sra et al. [334] show how virtual worlds can be generated based on music-induced moods (in particular, happiness and sadness). As the authors highlight, creating an XR world that abstracts from realism but focuses on an aesthetically pleasing appearance is a challenging task, which might explain why current XR worlds mostly focus on realism. Furthermore, challenges remain: interactive elements still have to be added manually.
Table 3:
Main Topic Cluster | Count | Papers
Using AI to Create XR Worlds | 89 |
    Creating XR Environments | 34 |
        Tracking of environments | 9 | [53, 56, 71, 131, 137, 161, 162, 182, 353]
        Presenting realistic virtual content | 6 | [95, 107, 153, 200, 255, 303]
        Measuring and optimising illumination | 5 | [180, 215, 231, 307, 409]
        Optimising image quality | 4 | [54, 184, 209, 387]
        Mapping environments | 4 | [60, 122, 187, 402]
        Augmenting content in AR | 3 | [140, 147, 174]
        Improving efficiency | 2 | [117, 411]
        Generating environments | 1 | [334]
    Creating Avatars | 28 |
        Recognition and animation of facial expressions | 6 | [61, 181, 268, 320, 341, 348]
        Physical appearance: certain aspects of human bodies | 6 | [229, 236, 361, 377, 386, 390]
        Tracking | 4 | [87, 215, 272, 352]
        Physical appearance: full body reconstruction | 3 | [52, 205, 283]
        Influence of avatars on users | 3 | [2, 193, 284]
        Animation of movements | 2 | [24, 176]
        Toolkit for creating avatars | 1 | [120]
        Animation of gaze behaviour | 1 | [323]
        Animation of gestures | 1 | [295]
        Modification | 1 | [241]
    Creating Agents | 18 |
        Realistic modelling of agent behaviour | 12 | [12, 33, 34, 106, 110, 185, 286, 288, 289, 290, 309, 398]
        Investigating non-realistic agents | 5 | [167, 297, 370, 376, 418]
        Blended agents | 1 | [317]
    Creating XR Objects | 9 |
        Tracking of objects | 4 | [202, 346, 347, 416]
        Rendering of objects | 3 | [216, 331, 391]
        Modifying object appearance | 2 | [257, 358]
Using AI to Understand Users | 60 |
    Predicting VR Sickness | 25 | [25, 82, 88, 132, 142, 143, 146, 163, 164, 165, 166, 168, 170, 171, 194, 195, 199, 210, 232, 251, 267, 273, 294, 320, 373]
    Predicting User Characteristics | 13 | [8, 103, 127, 128, 175, 207, 226, 230, 259, 321, 328, 362, 400]
    Predicting Viewport and Head Movement | 11 | [7, 90, 91, 124, 279, 304, 305, 357, 365, 366, 406]
    Eye Tracking and Gaze Analysis | 11 |
        Gaze analysis and visual attention estimation | 4 | [9, 77, 201, 337]
        Gaze prediction | 4 | [134, 135, 136, 393]
        Eye tracking and gaze modelling | 3 | [81, 178, 221]
Using AI to Support Interaction | 48 |
    Gesture-based Interaction | 22 |
        3D mid-air gesture interaction | 11 | [72, 105, 126, 219, 238, 239, 248, 322, 350, 372, 403]
        Gesture recognition and classification | 11 | [16, 57, 98, 148, 234, 250, 325, 380, 392, 395, 410]
    Locomotion Techniques | 13 |
        Redirected walking techniques | 8 | [58, 62, 80, 100, 191, 192, 204, 336]
        General locomotion techniques | 5 | [46, 47, 118, 158, 269]
    Novel Devices | 7 |
        HMDs | 4 | [6, 196, 292, 388]
        Controllers | 3 | [96, 326, 371]
    Novel Interaction Techniques | 3 | [76, 121, 123]
    Haptic Feedback | 3 | [66, 83, 399]
Interaction with Intelligent Virtual Agents | 25 |
    Interacting with Crowds of Agents | 10 | [32, 38, 74, 156, 183, 188, 258, 271, 306, 368]
    Physical Interaction with Agents | 7 |
        Peripersonal space | 4 | [40, 41, 48, 342]
        Touch | 3 | [4, 44, 129]
    Interacting with One Agent | 4 | [43, 113, 382, 421]
    Trust in Agents | 4 | [114, 115, 116, 139]
Using XR to Support AI Research | 7 |
    Visualising AI Methods in XR | 5 | [28, 125, 228, 243, 343]
    Generating Training Data for XR | 2 | [94, 287]
Other | 8 |
    User Authentication and Identification | 7 | [5, 15, 133, 208, 247, 280, 281]
    Software Testing For VR | 1 | [11]
Table 3: Typology of XR and AI research.
Creating Avatars. 27 (11.4%) of the papers focus on realistically replicating human bodies to create avatars in XR. The majority of these papers are concerned with the physical appearance of avatars, either by capturing and reconstructing the complete body of a person [52, 205, 283] or by reconstructing specific parts of the body, like the teeth [361], face [229], or fingers [236]. A particularly challenging problem is creating realistic hair. Xing et al. [390] present an approach that combines expert feedback with deep learning to create realistic models of hair. Hair modeling artists created a set of structures and styles, which served as the basis for the model. Furthermore, there is work on recognizing and generating facial expressions of avatars [61, 268, 341], for example, by tracking the eyes or facial expressions of a person and rendering them on an avatar’s face [181, 320, 348]. All of these works aim to create some part or even the complete body of a realistic virtual avatar. We found three works on the influence of such a realistic avatar on the user. For example, they studied how distortion in avatar movement [284] or walking in place [193] influences body ownership. We found only one example of a modification of an avatar. McIntosh et al. [241] presented an adaptable avatar that, based on a task-integrated optimization approach, changes its arm or finger length depending on target distance. As a result, the adapted avatar created less frustration and less physical demand compared to the non-adapted one. This work shows the potential of modifying virtual representations of humans for specific tasks. We did not find any case of generating a synthetic avatar.
Creating Agents. 18 papers (7.8%) relate to creating IVAs, 12 of which focus on realistically modeling agent behavior and five on investigating non-realistic agents. We found that agents in VR are typically embodied and modelled to imitate human appearance and behavior. To achieve this realistic modeling, researchers have modelled gait [290, 398], gaze [12], and personality [309], among others. However, we found several works questioning whether VAs should be modelled realistically. For example, these works compare realistic, embodied VAs with other forms of agents [167, 370, 418]. Reinhardt et al. [297] compared an invisible agent with a simplified humanoid agent and a fully textured realistic agent. They found that non-verbal behavior, such as eye contact, seems to be the main reason why the realistic agent was preferred over the others. Weber et al. [376] present a design space for edible VAs for human-food interaction by augmenting food with virtual eyes and hands. The edible agents could explain facts about themselves (e.g., ingredients) and made the meal a “fun experience”, while allowing the users to learn something about the food. These works on unrealistic agents can be understood as modifications, since they take the human body as a basis and modify its appearance or behavior [167, 297, 376]. Lastly, we found one paper [317] about mapping realities. They present blended agents that are able to manipulate physical properties of virtual objects, thus bridging the gap between realities. This was perceived as “amazing” and “surprising” by participants of the user study. They also mentioned that the physical consequences of an agent’s movements made it appear more present. This work is the only one of its kind in our corpus; it shows the promise of XR-based interaction by mapping realities with agents. We did not find any work addressing the synthetic generation of agents.
Creating Objects. Nine papers (3.8%) create XR objects, either realistically (7) or by modifying the real world (2). Four of these papers were about object tracking [202, 346, 347, 416] and three about object rendering [216, 331, 391]. Liu et al. [216] used a GAN to create virtual object shadows in AR. The algorithm generates a shadow based on a synthetic AR image and a virtual object mask as input. The authors report that their key insight is that the model is able to map a virtual shadow to an object based only on the depth cues provided by the environment. We found one particular use case where objects were modified: both papers modify the appearance of food [257, 358]. Nakano et al. [257] use StarGAN to overlay the complete image with a newly styled version of the food (i.e., a different style of noodles or rice), while Ueda and Okajima [358] used a version of ResNet to track and recreate the exact shape of the food. As with avatars, we did not find an example of generating synthetic objects for XR worlds.

4.2.2 Using AI to Understand Users.

In total 60 (25.3%) papers presented work about understanding users in XR.
Predicting VR Sickness. The most frequent topic in understanding users is predicting VR sickness (25/10.5%). Despite the high density of this cluster, only a few papers present real-time applications [25, 88, 210, 232], while most analyze sickness symptoms post-hoc (e.g., [165, 251, 373]). There are many different approaches for predicting VR sickness, such as using support vector machines [88, 232], long short-term memory networks [164, 165], or convolutional neural networks [146, 195]. In terms of model technique, ten papers addressed the problem as a classification problem and ten as a regression problem. However, while many papers work on predicting sickness, AI is rarely applied to mitigate it. The work by Lim et al. [210] is an exception: they present a solution that dynamically adapts the field of view to a minimal degree to reduce VR sickness symptoms.
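To illustrate how such papers typically frame sickness prediction as a classification problem, the following sketch trains an SVM on synthetic placeholder features; real studies use, for example, head motion, eye tracking, or physiological signals, with labels derived from sickness questionnaires. All names and data here are illustrative only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))   # placeholder per-session motion/physiology features
# Placeholder binary labels (sick / not sick), e.g., thresholded questionnaire scores.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # scale features, then SVM
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```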
Predicting User Characteristics. The second cluster in understanding users presents work about predicting user characteristics, such as affect and emotion [128, 328, 400], presence [207, 230, 321], or mental workload [226].
Predicting Viewport and Head Movement. The most basic form of interacting with an environment is viewing. We found eleven papers that present a technique for viewport or head movement prediction (e.g., [7, 124, 304]). These works typically address the problem of computational rendering cost and propose to render only the region the user is looking at in high detail.
Eye Tracking and Gaze Analysis. Lastly, there are 11 (4.6%) papers that present approaches for eye tracking and gaze analysis. In particular, there is work on gaze prediction [135, 136, 393], visual attention estimation [9, 77, 201, 337], and gaze modeling [81, 178, 221].

4.2.3 Using AI to Support Interaction.

Gestural Interaction. The majority (22/9.3%) of papers in this area are about 3D mid-air gesture-based interaction. They, for example, present improvements in hand tracking for better gesture recognition (e.g., for AR [248, 372] or for VR [239]). Most of these works focus on hand gestures [57, 250, 410], hand pose estimation [16, 380], and hand trajectory prediction [98]. We identified four papers presenting work on gesture interaction using other modalities, namely foot [325], face [234, 395], and waist gestures [392]. Mo et al. [250] present a tool for designing hand gestures for MR applications with minimal training data. Hirota and Komuro [126] present a classifier that recognizes whether a hand gesture is a grasping gesture. Tian et al. [350] also present a grasping algorithm. Three papers address the problem of freehand mid-air sketching in VR [105, 219, 403]. Yu et al. [403] present a real-time application that allows users to sketch 3D objects based on curve networks. The system is specifically tailored towards idea generation and concept sketches. The algorithm first calculates possible intersections of new strokes with the existing 3D curves created by users. It then selects an intersection based on discrete optimization. Two works focus on users’ perception of gestures rather than on improving the tracking thereof [72, 238]. Dalsgaard et al. [72] built a model that reflects natural pointing in 3D space. In particular, they focus on the features that best describe natural pointing. They compared several ML models (Naive Bayes, RF, SVM) for both a classification and a regression problem and found the best accuracy for SVM-based classification.
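The kind of model comparison reported by Dalsgaard et al. [72] can be sketched as follows; the synthetic features stand in for their 3D pointing data, so this illustrates the workflow rather than reproducing their setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic placeholder features standing in for 3D pointing data.
X, y = make_classification(n_samples=300, n_features=10, n_informative=5, random_state=0)

# Compare Naive Bayes, random forest, and SVM with 5-fold cross-validation.
for name, model in [("Naive Bayes", GaussianNB()),
                    ("Random Forest", RandomForestClassifier(random_state=0)),
                    ("SVM", SVC())]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```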
Locomotion Techniques. The vast majority of the 13 (5.5%) papers about locomotion techniques present improvements to redirected walking (e.g., [80, 191, 204]). They mostly address this as a reinforcement learning [58, 192, 204, 336] or regression [62, 100, 191] problem. There is also work on backwards movement [269], evaluating unintentional positional drifts [47], and walking in place [158].
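To illustrate the reinforcement learning framing of redirected walking, the following toy Q-learning sketch chooses rotation gains based on the user’s discretised distance to the physical boundary; the states, actions, dynamics, and reward are entirely made up for illustration and do not correspond to any reviewed paper:

```python
import random

STATES = range(5)            # 0 = at boundary ... 4 = centre of tracked space
ACTIONS = [0.9, 1.0, 1.1]    # hypothetical discretised rotation gains
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(state: int, gain: float) -> tuple[int, float]:
    """Toy dynamics: redirection (gain != 1.0) steers the user away from boundaries."""
    drift = random.choice([-1, 0]) if gain == 1.0 else random.choice([0, 1])
    next_state = max(0, min(4, state + drift))
    # Penalise boundary resets heavily, noticeable redirection mildly.
    reward = -10.0 if next_state == 0 else -abs(gain - 1.0)
    return next_state, reward

state = 4
for _ in range(10_000):
    a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda g: Q[(state, g)])
    nxt, r = step(state, a)
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, g)] for g in ACTIONS) - Q[(state, a)])
    state = nxt if nxt > 0 else 4   # reset the user after a boundary collision

# Learned policy: preferred gain per distance-to-boundary state.
print({s: max(ACTIONS, key=lambda g: Q[(s, g)]) for s in STATES})
```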
Novel Devices. Seven (3%) papers apply AI to design and implement novel devices, in particular, controllers [96, 326, 371] and HMDs [6, 196, 292, 388]. Shigeyama et al. [326] present a haptic controller that changes its shape dynamically to adapt to different objects by mapping its mass properties to the form of the respective object. A linear regression model was optimized to predict the shape of the controller based on the properties of VR objects.
Novel Interaction Techniques. Only three (1.3%) papers used AI to create non-gesture-based novel interaction techniques. These are virtual keyboard typing [123], a smartphone-based interaction technique for AR [121], and a framework for sword fighting experiences in VR [76].
Haptic Feedback. Lastly, three (1.3%) papers aimed to improve haptic feedback in XR by using drones [83], haptic retargeting [66], or simulating haptics using a robotic prop [399].

4.2.4 Interacting with Intelligent Virtual Agents.

Besides the physical appearance and behavior modeling aspects of VAs, which we discussed in the paragraph about creating agents in section 4.2.1, 25 (10.5%) papers investigated the interaction with intelligent agents. The largest group in this category concerns social aspects of interacting with a crowd of agents (e.g., [38, 306, 368]). They investigate empathy towards groups of VAs [156], algorithms to generate plausible movements for agents interacting with other agents [258], or creating VAs that are able to transition between individual and collaborative behavior [183]. Furthermore, seven papers present work on physical interaction with agents, including how users perceive physical touch by agents [4, 44, 129] and how their relationship to agents influences users’ perception of peripersonal space [40, 41, 48, 342]. Four papers each were about interacting with a single agent [43, 113, 382, 421] and measuring different aspects of trust in VAs [114, 115, 116, 139].

4.2.5 Using XR to Support AI Research.

We identified only seven works (3%) that apply XR technologies to AI problems. Five of these works visualize AI methods in XR, for example, for immersive analytics [343] or to improve the understanding of neural networks for non-expert users by visualizing them in VR [28, 228, 243]. Hilton et al. [125] present a tool for non-experts to configure and train an ML model. With the increasing complexity of neural networks, such methods are promising for facilitating the interaction with neural networks for novices. Another problem of AI methods in general is the limited amount of available data and, consequently, the generation of training data. Typically, this is addressed by synthesising images as variants of one image. Franchi and Ntagiou [94] address this problem in VR by providing an application to create synthetic VR training data. Lastly, Ramirez et al. [287] provide a tool for labeling data in VR.

4.2.6 Topic Distribution for HCI Papers.

Similar to the research directions, we analysed the topic distribution for the HCI papers (31). 39% (12) of the HCI papers used AI to create XR worlds, with two papers creating XR environments and five each creating avatars and agents. We found only two papers (6%) in the understanding users category, both focusing on predicting user characteristics. The majority of HCI papers (15/48%) use AI to support interaction, most of them for gestural interaction (10). Lastly, there is one paper about the interaction with intelligent VAs and one in the “other” category.

4.3 Main Problem Areas Addressed in XR and AI Research

Table 4:
Problem area | Sum | Create Worlds | Understand Users | Support Interaction | Interaction with IVAs | XR to Support AI
1. Perception and neuroscience | 52 | 12 | 31 | 9 | 0 | 0
2. Interacting with IVAs | 47 | 20 | 2 | 0 | 25 | 0
3. Presentation of virtual content | 47 | 35 | 11 | 1 | 0 | 0
4. Tracking technologies | 41 | 22 | 6 | 13 | 0 | 0
5. Health-related impacts | 26 | 1 | 25 | 0 | 0 | 0
6. High fidelity virtual human characters | 24 | 23 | 0 | 0 | 1 | 0
7. Interaction techniques | 23 | 1 | 0 | 22 | 0 | 0
8. Social and ethical issues | 21 | 7 | 0 | 0 | 14 | 0
9. Locomotion techniques | 13 | 2 | 1 | 10 | 0 | 0
10. Collaboration with people | 11 | 5 | 0 | 2 | 4 | 0
11. Novel system and devices | 10 | 0 | 1 | 9 | 0 | 0
12. Rendering | 9 | 8 | 0 | 1 | 0 | 0
13. Explainability of AI methods | 5 | 0 | 0 | 0 | 0 | 5
14. Display technology | 4 | 1 | 0 | 3 | 0 | 0
15. Limited training data | 2 | 0 | 0 | 0 | 0 | 2
Table 4: Main problem areas addressed in XR and AI research.
Table 5:
Contribution type | Sum | Create Worlds | Understand Users | Support Interaction | Interaction with IVAs | XR to Support AI
Empirical | 143 | 46 | 37 | 34 | 23 | 3
ML model | 137 | 51 | 50 | 33 | 3 | 0
System/artifact | 46 | 18 | 4 | 16 | 5 | 3
Technological | 43 | 24 | 6 | 11 | 2 | 0
Dataset | 29 | 7 | 14 | 8 | 0 | 0
Methodological | 9 | 3 | 2 | 2 | 0 | 2
Application | 5 | 2 | 1 | 1 | 0 | 1
Theoretical | 5 | 3 | 0 | 0 | 2 | 0
Table 5: Contribution types presented by the papers.
We found 15 problem areas that are addressed by the papers in our corpus (see Table 4). The list of challenges is based on the articles on challenges in AR and virtual environments by Billinghurst [36], Kim et al. [173], and Slater [329]. Most of the papers address a challenge about perception and neuroscience (21.9%). The main interest in this area is in understanding how users perceive realistic worlds and how interacting with these worlds affects users, for example, in their feeling of presence [230], emotions [127], or visual attention [77]. The second area is interacting with IVAs (19.8%). Research on these challenges is mostly empirical (16.5%); the actual behavior of agents is rarely implemented based on a model, but mostly scripted. The third challenge is the presentation of virtual content (19.8%). Here, many ML models are applied and evaluated with perceptual user studies. These papers are about optimizing image quality or tracking the real world and representing it in XR. This is similar for the problem of tracking technologies (17.3%). We found only one topic about health in XR, in particular simulator sickness (10.5%). The focus of these papers is on building ML models, followed by empirical research, but not all the ML models are evaluated empirically. The next two problems are creating high fidelity human characters (10.1%), which is mostly addressed by a combination of an ML model and an empirical evaluation, and interaction techniques (9.7%), for which the same holds. Surprisingly, there were only 21 (8.9%) papers addressing social and ethical issues. Two thirds of them investigated an issue about interacting with VAs and one third about creating worlds. The vast majority of these papers contain a form of empirical evaluation, but there is not much technical work in this area. There is also little work on more “traditional” computer graphics and computer vision topics like building devices (4.2%), rendering (3.8%), or display technology (1.7%). Lastly, as also demonstrated by the typology, there is not much research about using XR to address an AI problem (3%).
Contribution Types. When looking at the methods used to address problems (see Table 5), we see a trend toward building ML models (57.8%) and evaluating them empirically (60.3%). This is present for many of the problems, as discussed in the previous paragraph. In general, there is very little theoretical and methodological work. Interestingly, there are some dataset contributions, in particular in the problem areas of tracking technologies and health-related impacts. We collected all the datasets presented by the papers and provide a list of them in the Appendix in Table 8.

4.4 Algorithms, Tools, Datasets, Networks, Data Types, and User Study Data

In the following, we summarize which algorithm techniques and classes are used in the reviewed papers. Furthermore, we present a list of commonly used tools, datasets, and networks. We also discuss the data types and summarize the data about users that is used to train and evaluate the ML models.
Table 6:
Technique | Sum | Create Worlds | Understand Users | Support Interaction | Interaction with IVAs | XR to Support AI
Supervised learning | 138 | 45 | 49 | 38 | 3 | 3
Unsupervised learning | 11 | 8 | 2 | 0 | 0 | 1
Reinforcement learning | 9 | 2 | 2 | 5 | 0 | 0
Optimization | 5 | 2 | 2 | 1 | 0 | 0
Semi-supervised learning | 2 | 2 | 0 | 0 | 0 | 0
Unclear/other | 18 | 9 | 2 | 4 | 2 | 1
Table 6: Algorithm techniques.
Table 7:
Class | Sum | Create Worlds | Understand Users | Support Interaction | Interaction with IVAs | XR to Support AI
Classification | 76 | 22 | 24 | 26 | 3 | 1
Regression | 58 | 21 | 25 | 12 | 0 | 0
Generation | 7 | 5 | 0 | 0 | 1 | 1
Optimization | 8 | 5 | 1 | 2 | 0 | 0
Reinforcement learning | 8 | 1 | 2 | 5 | 0 | 0
Clustering | 4 | 3 | 1 | 0 | 0 | 0
Planning | 1 | 0 | 0 | 0 | 1 | 0
Unclear/other | 4 | 3 | 0 | 1 | 0 | 0
Table 7: Algorithm Classes.

4.4.1 Algorithm Techniques and Classes.

Algorithm Techniques. Table 6 and Table 7 give an overview of the algorithm techniques and classes. With 138 papers (58.2%), there is a clear focus on supervised learning. In contrast, only 11 papers (4.6%) use an unsupervised learning technique. Nine papers (3.8%) address a problem with reinforcement learning, the majority of which are in the support interaction topic. Also, we did not find many applications of optimization algorithms (2.1%), with some occasional cases in creating worlds, understanding users, and supporting interaction.
Algorithm Classes. Most often, problems in creating worlds, understanding users, and supporting interaction are considered either classification (32.1%) or regression (24.5%) problems. Only very few papers use a generative technique (3%), primarily for creating worlds.

4.4.2 Tools, Datasets, and Networks.

PyTorch23, Keras24, TensorFlow25, and Scikit-learn26 are the most frequently used tools for the implementation of algorithms and ML models. Furthermore, we found some software toolkits being used, for example, for sensing facial expressions [306] or creating virtual humans [291]. We also collected a list of datasets (e.g., hand model datasets [16], motion capture datasets [289], indoor datasets [117], or datasets for facial expressions [61]), as well as networks and models. We provide the complete list of tools, datasets, networks, and models in the Appendix in section A.
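To give a sense of how these libraries typically enter the reviewed pipelines, the following minimal sketch trains a small supervised classifier in Keras on synthetic feature windows; the data, feature dimensionality, and class count are illustrative assumptions and do not come from any reviewed paper.

```python
# Minimal, illustrative Keras sketch of the kind of small supervised model
# frequently reported in the corpus. All data here is synthetic.
import numpy as np
from tensorflow import keras

X = np.random.rand(100, 20).astype("float32")  # e.g., windowed IMU features (assumed)
y = np.random.randint(0, 3, size=100)          # e.g., three gesture classes (assumed)

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, verbose=0)
```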

4.4.3 Data Types.

We collected the data types used, including sensor data (e.g., eye tracking, acoustic sensors, brain-computer interface data, electroencephalography, positional tracking, inertial tracking, and speech and audio data), subjective self-report data (e.g., questionnaire results), and images and videos. Furthermore, we noted when synthetic data was used. The most common data types for creating worlds are images and videos (42 papers). For understanding users, the most common data type is self-report questionnaire data (21 papers). To support interaction, the most common data types are hand tracking data (11 papers) and positional tracking (13 papers). Interacting with IVAs is typically investigated in perceptual empirical studies in which no ML technique or algorithm is applied; consequently, we could not identify a main data type for this topic. Reflecting the generation issue, synthetic data is rarely used in XR and AI research (19 papers in total).

4.4.4 Assessment and Evaluation.

In 73% of the papers that trained a model based on data from a user study, the evaluation of the model was performed on the data of the same user study. In only 27% was a second (or third) user study performed to test the model or classifier on unseen, new data. Furthermore, the task was typically the same in the training and the evaluation study. The mean sample size for training studies is 28.94 (N=94, SD=36.94, range: 3-212) and for evaluation studies 29.15 (N=179, SD=30.61, range: 3-200). The mean gender distribution of the training studies is 66% male and 34% female participants; for evaluation studies, 64% of the participants were male and 36% were female. In total, there were three training studies in which one participant each identified as non-binary and seven evaluation studies in which, on average, two participants identified as non-binary. The mean age of the participants in the training studies is 25.96 years (N=38, SD=3.62, range: 19.15-37.26) and in the evaluation studies 26.85 years (N=89, SD=6.72, range: 20.9-40.01).

4.4.5 When is AI applied?

Most of the use cases presented to address an XR problem with an AI technique or method aim for real-time processing (77%). Of these, about half (52%) are already deployed in real time, while 48% cannot yet fulfill this goal. In general, only a few papers use an AI method for post-hoc analysis (7%) or for generating virtual content before the interaction takes place (15%). For the remaining 2%, the main focus of the technique was unclear.

5 Discussion

We summarize the results, highlight our paper’s relevance to HCI, and present 13 research opportunities and recommendations for future work.

5.1 Summary of Results

We found five topic clusters on XR and AI research. Most of the reviewed papers address a topic related to using AI to create XR worlds (89), using AI to understand users (61), and using AI to support interaction (48). Papers on these three topics typically address classification (72) or regression (58) problems and often present an ML model (134) together with an empirical (117) contribution. The fourth topic cluster is about interacting with VAs (25). Papers addressing this typically present empirical research (23), investigating user perception of interacting with agents, such as emotions or trust, but rarely present an implementation of agents. Lastly, there is very little work on using XR for AI (7). These seven papers either present a technique to enhance understandability by visualizing AI models in VR (5) or address the problem of limited training data in XR (2).

5.2 Relevance to HCI

We analyzed the distribution of research directions and topics separately for HCI papers and compared them to the complete paper corpus. The distribution of research directions for HCI papers is almost the same as for the corpus in general. This might suggest that the topics at the intersection of XR and AI addressed by HCI research reflect the general distribution of topics as well. This is, however, not the case, as we discuss in the following. Almost half of the HCI papers (48%) are in the category of Using AI to Support Interaction. While this might not come as a surprise, given that this topic is arguably the most relevant to HCI, it is still interesting to note. Thus, the primary interest of HCI research at the intersection of XR and AI is using AI to improve interaction techniques in XR. With 39%, the second largest group of HCI papers is about Creating XR Environments. We conclude that HCI researchers’ second most important interest is investigating how AI methods can be used to enhance and ease content presentation in XR, mostly focusing on user body representations (avatars) and agents. Interestingly, our results show that only a few HCI papers use AI for what we labeled as Understanding Users (6%). This reveals a gap, where the core HCI venues (like CHI) could take inspiration from other venues (in particular XR venues), where AI methods are already applied to understand user characteristics and other properties of users. Lastly, we found only one HCI paper for the topic of Interaction with Intelligent VAs. This is surprising: the core interest of HCI is how users interact with computer systems, and with more and more intelligent systems entering our lives, we argue that research on interaction with virtual agents can be very beneficial for understanding how users perceive and interact with cognitively enhanced computer systems, like agents. In general, we note that research at the intersection of XR and AI is highly relevant for HCI, since three of the five topics in our typology are about core HCI problems (understanding users, supporting interaction, and interaction with IVAs). Viewed through an HCI lens, we understand XR research as inherently connected to HCI, given that XR devices will likely become next-generation personal computing devices that we will interact with on a regular basis. Therefore, we are convinced that it is important for the field of HCI to understand how novel sub-areas (in this case, the intersection of XR and AI) can influence and shape the field of HCI in general.

5.3 Research Opportunities Based on Topic Analysis

Based on our results, we formulate 13 research opportunities and recommend promising research directions. We first summarize five opportunities based on the analysis of the topics and conclude with eight general opportunities.

5.3.1 The Focus when Creating XR Worlds is on Realism.

Challenge. Most of the papers about creating XR worlds focus on realistically replicating the real world in XR. The benefit of creating realistic XR worlds and realistic representations of avatars and agents seems implicit, not only for the representation of content (e.g., the appearance of environments or avatars) but also for the behavior of avatars and agents. In their review on realism in digital games, Rogers et al. [302] also reported that realism is paramount as a goal for VR games. In contrast to the papers in that review, papers in our corpus (e.g., [52, 205, 229, 390]) typically did not give a motivation for why they aim to create realistic worlds. The focus of these papers is often on technical details, addressing how realistic worlds can be implemented.
Opportunities and Recommendations for Future Research. Realism of avatars has frequently been discussed in previous work. Some works indicate that the realistic physical appearance of avatars causes eeriness and an uncanny valley effect [189]. Furthermore, some work suggests that the appearance of an avatar might not actually be the dominant factor in terms of social presence or appeal [401, 419, 420]. Some reviewed papers added to this discussion by comparing realistic, embodied agents with other forms, such as invisible [297] or abstract agents [376]. Furthermore, recent work on XR avatars explores how unrealistic avatars could be used, for example, for target selection at a distance [315] or to see a world from several perspectives [316]. AI methods are currently not used for these types of goals, but almost exclusively for realistic representations. We recommend critically reflecting on the need for realism in the representation of avatars, as well as in agents, objects, and environments.

5.3.2 The Focus when Understanding Users is on a Performance-driven Perspective.

Challenge. In terms of understanding users, most papers focus on performance-driven issues, resulting in a lack of work on usability and user experience as criteria for understanding users. Almost half of the papers in the understanding users category use AI to predict VR sickness. However, AI is mostly used for recognizing VR sickness in users, while techniques for mitigating it are rarely developed. Another large field is viewport prediction, which is most often done to understand where users will look in order to improve the presentation of content accordingly [305, 357]. We found seven papers on gaze prediction, eye tracking, and gaze modeling, but this is not a big focus of research at the intersection. The main focus of these works is on predicting users’ gaze (i.e., on the technical challenge of predicting gaze).
Opportunities and Recommendations for Future Research. ML techniques are typically best applicable to well-defined problems where a clear performance metric can be applied. Yet, we see potential in applying them to subjective user evaluations as well. Some works focus on experience-related aspects, such as presence [230] or mental workload [226, 362]. However, there is no larger research community for experience-related work (as there is for VR sickness prediction), which makes it difficult to accumulate findings into general observations. These works could be combined with research on creating XR worlds to analyze how users perceive specific aspects of these worlds. For example, ML-based presence estimation could be used to automatically evaluate and adapt XR environments (see the sketch below), and affect and emotion models could be used to improve our understanding of the presentation of VAs.
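As a minimal illustration of such experience-related modeling, the following sketch (scikit-learn, fully synthetic data) regresses a subjective presence rating from logged behavioral features; the features, the linear ground truth, and the model choice are assumptions for demonstration only.

```python
# Illustrative sketch: predicting a subjective presence rating from behavioral
# features. The data and the linear relationship are synthetic assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))  # e.g., gaze dispersion, head motion, ... (assumed)
presence = X @ rng.normal(size=5) + rng.normal(scale=0.3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, presence, random_state=1)
model = Ridge().fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.2f}")
```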

5.3.3 Focus of Interaction is on Gestural and Locomotion Techniques.

Challenge. In terms of supporting interaction in XR, AI is currently used most frequently for gesture-based interaction (45.8%) and locomotion techniques (27.1%). In both cases, the focus is again on technical challenges, such as improving midair pointing [239], hand tracking [98], or path prediction [46]. Furthermore, we were surprised to see little work on haptics (3 papers), although it is one of the major problems in current XR research [275].
Opportunities and Recommendations for Future Research. Despite this technical focus, the data types show that ML models can also be applied to subjective self-report data, which seems to be a promising future research direction for improving our understanding of users not only from a technical and performance-driven perspective but also from their subjective self-reports. Dalsgaard et al. [73] show an example of how ML methods can be applied to improve the presentation of haptic stimuli: they present user-driven mappings for mid-air haptic experiences based on keywords extracted by two natural language processing techniques.
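One possible NLP route to such user-driven mappings is sketched below: TF-IDF keyword extraction over free-text descriptions of haptic sensations. This is an illustrative stand-in, not the specific techniques used by Dalsgaard et al. [73], and the descriptions are invented examples.

```python
# Hedged sketch: extracting salient keywords from free-text haptic descriptions
# with TF-IDF. Descriptions are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer

descriptions = [
    "a soft warm pulse on the palm",
    "sharp quick taps moving across the fingertips",
    "a slow heavy pressure near the wrist",
]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(descriptions).toarray()
terms = vectorizer.get_feature_names_out()
for description, row in zip(descriptions, tfidf):
    print(f"{description!r} -> most salient keyword: {terms[row.argmax()]}")
```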

5.3.4 Interaction with VAs is mostly based on Perceptual Experiments.

Challenge. Similar to Norouzi et al. [262], we found a focus on agents’ influence on personality and empathy. We also identified different roles that VAs can take on, such as companions or assistants. In their review, Norouzi et al. [262] note that more research is necessary to understand the spatial relationship between users and AR agents, and we found some works addressing these issues [40, 48]. However, most of the work on IVAs in XR worlds investigates users’ perception of agents in perceptual experiments with the aim to inform the design of IVAs. Typically, the behavior of VAs is not implemented, but simulated or scripted. The validation of these studies from a technical perspective has yet to take place.
Opportunities and Recommendations for Future Research. Our recommendation for future research is to invest in the technical implementation of agent models and to validate the findings of these perceptual experiments in empirical user studies.

5.3.5 Lack of Research on XR Supporting AI.

Challenge. The few works on using XR technologies to support AI research focus on visualizing methods, for example, to support non-experts [125] in working with complex neural network structures. This huge imbalance between using XR for AI research and using AI for XR is to be expected. AI is predominantly used as a method in XR, either applied to technical issues (e.g., tracking, locomotion) or for analysis (e.g., user characteristics). XR, on the other hand, is a technology, so the imbalance between the two (a method and a technology) arises naturally. However, whether some form of XR can be used as an interface to interact with AI or to improve our understanding of AI methods remains unanswered.
Opportunities and Recommendations for Future Research. How can XR technology be used to contribute to the conception, design, and implementation of artificial intelligence and machine learning? Educating people about the opportunities and challenges AI poses to society is important to create value. We are convinced that XR can contribute to fostering a better understanding of these new methods for a diverse set of individuals. Another unanswered research question is how XR can help design safe, reliable, and trustworthy AI.

5.4 General Research Opportunities

5.4.1 Lack of Generative Use of AI in XR Worlds.

Challenge. We found only seven examples of generative models in XR. This is surprising, since GANs have been around for several years [108] and have been applied to automatically generate images [407] or visualizations [264]. Given that content creation is one of the greatest challenges in current VR research, we were surprised to not see more work on the application of GANs to that problem. Yet, we understand that the research on generative VR content is still in its infancy.
Opportunities and Recommendations for Future Research. We found one promising work in the reviewed papers that applied a generative method to build a new XR world based on mood [334]. Such an affective world could contribute to increasing empathy between individuals. Other GANs were applied, for example, to create context-dependent images [174] or virtual object shadows [216]. Given these promising examples, one avenue for future research is to further explore the use of GANs in the creation of virtual worlds (see the sketch below).
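As a toy sketch of what mood-conditioned generation could look like, the following PyTorch snippet defines a minimal conditional generator that maps a mood label and a noise vector to a small texture. The architecture, mood set, and sizes are invented for illustration (this is not the model of Sra et al. [334]), and a discriminator plus an adversarial training loop would be needed for an actual GAN.

```python
# Toy conditional generator (PyTorch): noise + mood label -> small RGB texture.
# Architecture and sizes are illustrative assumptions, not from a reviewed paper.
import torch
import torch.nn as nn

class MoodConditionedGenerator(nn.Module):
    def __init__(self, n_moods=4, z_dim=64, img_size=32):
        super().__init__()
        self.embed = nn.Embedding(n_moods, z_dim)  # mood label -> dense embedding
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * img_size * img_size),
            nn.Tanh(),  # pixel values in [-1, 1]
        )
        self.img_size = img_size

    def forward(self, z, mood):
        h = torch.cat([z, self.embed(mood)], dim=1)
        return self.net(h).view(-1, 3, self.img_size, self.img_size)

generator = MoodConditionedGenerator()
z = torch.randn(8, 64)            # noise batch
mood = torch.randint(0, 4, (8,))  # e.g., calm/happy/tense/sad (assumed labels)
fake_textures = generator(z, mood)  # (8, 3, 32, 32), to be scored by a discriminator
```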

5.4.2 Lack of Optimization.

Challenge. Optimization is widely applied in HCI research [149], for example, to optimize interfaces [85] and to design interaction techniques [55]. Surprisingly, we did not find many examples of optimization for VR or AR interfaces.
Opportunities and Recommendations for Future Research. McIntosh et al. [241] show the potential of optimization, for example, to optimize avatar representations for specific tasks. This seems to be a promising direction. Since virtual representations of users are not bound by the same requirements as real-world bodies, we see potential for optimizing interaction techniques in XR. For example, the limbs of a virtual user representation could be adapted to the depth of a target in VR, say by optimizing arm extension as a function of target depth (see the sketch below). Another example could be to optimize a user’s body for specific tasks; for example, a user’s height could be implemented as a function of distance, thus enlarging or shrinking the user to fit a certain virtual space. In general, we see a lot of potential for optimization to enhance interaction techniques in XR.
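As a toy instance of such an optimization, the following sketch (SciPy) picks an arm-length gain for a given target depth by trading off reach against a penalty for unnatural limb distortion. The cost terms, weights, and arm length are invented for illustration.

```python
# Toy optimization sketch: choose a virtual arm-length gain for a target depth.
# Cost terms and weights are illustrative assumptions.
from scipy.optimize import minimize_scalar

def cost(gain, target_depth, real_arm=0.7, w_distortion=0.5):
    reach = gain * real_arm
    reach_error = max(0.0, target_depth - reach) ** 2  # penalize not reaching the target
    distortion = (gain - 1.0) ** 2                     # penalize unnatural limb scaling
    return reach_error + w_distortion * distortion

result = minimize_scalar(cost, bounds=(1.0, 3.0), method="bounded", args=(1.5,))
print(f"arm gain for a target at 1.5 m: {result.x:.2f}")
```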

5.4.3 Lack of Generalizability.

Challenge. AI is currently applied in XR mainly to narrow, well-defined problems. While we understand that this is because, by definition, ML models work best with a well-defined problem, we are missing the bigger picture around these problems. This can best be demonstrated by the following example. There are some larger groups of research, such as predicting VR sickness, predicting path direction for locomotion in VR, or improving 3D gestural interaction, but the individual papers typically collect their own datasets. For none of these problems did we find a general dataset that would support the generalizability of the developed models and algorithms. Another point is that most of the data used to train the models is based on WEIRD samples [212], indicating that the models are largely biased towards Western, Educated, Industrialized, Rich, and Democratic people. The mean sample size of the studies that generated data for model training was 29, with a mean age of 26 years. Furthermore, there is a bias towards male users (on average, 66% male participants). All of these points (sample size, mean age, gender distribution) are well-known issues for HCI research in general [50]. Our findings show that this also holds for XR and AI research. This could easily reinforce already known biases and severely influence trust in such systems.
Opportunities and Recommendations for Future Research. There is some effort to create larger and more diverse datasets. For example, Li et al. [204] presented an open-source library that provides a benchmark for “developing, deploying, and evaluating” redirected walking techniques. It even provides multi-user techniques, allowing multiple users to move in the same physical space. Furthermore, we provide a collection of datasets and models that are used and/or presented by the reviewed papers. With this list, we aim to guide researchers in investigating one of these issues from a more general perspective. While these collections of datasets are certainly useful, we need bigger datasets that include a larger variety of users. This is still very much an open challenge for XR and AI research.

5.4.4 Lack of Robustness.

Challenge. We found a lack of robustness in the data used to train models and algorithms. Most of the data used for training was generated by a user study. However, in most of these cases (73%), a model was trained and tested on the data from the same user study (typically also on the same task). In only 27% of the papers was a second or third user study performed to validate the model, network, or trained algorithm. This raises serious concerns about data leakage, since the models are typically tested with already known data and rarely tested with data that includes unseen scenarios or influences.
Opportunities and Recommendations for Future Research. To create robust models that generalize beyond one very specific sample and task, we need to develop the models on more diverse datasets representing a broader population. Furthermore, current models are typically developed for one specific task; similarly, we should focus on testing our models in more diverse settings, including a variety of different tasks and environments. A first step is to evaluate models on data from users they have never seen (see the sketch below).
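One concrete safeguard, sketched below with scikit-learn on synthetic data, is participant-wise cross-validation, which guarantees that no user contributes data to both the training and the test folds; the features, labels, and group sizes are illustrative assumptions.

```python
# Sketch: user-independent evaluation via participant-wise cross-validation,
# avoiding the leakage that arises when one user's data appears in both splits.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                # e.g., head/eye-tracking features (assumed)
y = rng.integers(0, 2, size=300)             # e.g., sick vs. not sick (assumed)
participants = np.repeat(np.arange(30), 10)  # 30 users, 10 samples each

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         groups=participants, cv=GroupKFold(n_splits=5))
print(f"user-independent accuracy: {scores.mean():.2f}")
```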

5.4.5 Lack of Theoretical and Methodological Work.

Challenge. Not surprisingly, we did not find much theoretical or methodological work. Due to our search process, we excluded “pure” theoretical work, such as surveys and meta-analyses. However, we expected more discussion of the theoretical implications of the presented works or methodological guidelines, for example, derived from a user study. Most of the theoretical work was related to agents, such as design guidelines for generating VA locomotion [398], a needs model for agents [110], or a classification scheme of users’ interaction with a group of agents [38]. In terms of methodological work, there are some examples that present approaches for building models, for example, for gaze modeling [337], or that detail how reinforcement learning can be used to create a generative model [123].
Opportunities and Recommendations for Future Research. Our results show a need for more guidelines on how AI can be used for XR problems, for example, on how studies should be conducted and how models can be developed. In general, we need more discussion about which types of methods work for which types of specific XR problems. This work provides a first investigation into this topic and a stepping stone for future research in this area.

5.4.6 Lack of Discussion about Ethical and Societal Impacts.

Challenge. The societal discussions about ethical concerns of XR and AI are generally not reflected in the reviewed papers, although they are receiving more attention in their respective fields. This is demonstrated by current CHI workshops about safety, security, and privacy in XR [112] or about challenges of using VR HMDs in social spaces [111]. Furthermore, NeurIPS, one of the largest AI conferences, has recently started to require a statement about the “potential negative societal impacts” of the proposed research27.
Since we excluded surveys and literature reviews from our corpus, we might have missed articles that talk about these issues from a meta perspective. Still, papers in the sample that implement models for XR typically do not elaborate on any ethical or societal implications.
Opportunities and Recommendations for Future Research. Social impacts of human-agent interaction are discussed in some papers [43, 113, 382]. However, this is a very specific issue that applies to the interaction with embodied AI. In general, only 21 papers (6.8% of the corpus) touch upon this issue. Similarly, societal issues are not discussed. We suggest that researchers provide statements about the potential societal and ethical impact of their research, similar to the statement required by NeurIPS.

5.4.7 Lack of AR Research.

Challenge. The vast majority of the reviewed papers addressed VR research (68%). This is likely due to the wider distribution of VR hardware. However, several of the topics researched for VR could be applied to AR as well.
Opportunities and Recommendations for Future Research. In AR, AI techniques are currently mostly applied for tracking. However, VR research shows some promising directions, which could also be applied to AR. The interaction with IVAs in AR is an interesting avenue for future research that will become more relevant as AR devices are increasingly used by consumers. A relevant question to answer is, for example, how users can interact with IVAs in a mix of real and virtual worlds. Schmidt et al. [317] show an example of how to merge such physical and virtual consequences of interactions.

5.4.8 Human-AI Interaction in XR.

Challenge. We found a trend to use AI to create XR content or for analysis purposes. Users were mostly involved in the process to evaluate the techniques, or to provide the data to predict, for example, their movement patterns [248]. However, we did not find examples of human users and AI working together collaboratively on problems.
Opportunities and Recommendations for Future Research. We found some examples where an AI technique was combined with expert knowledge. For example, Xing et al. [390] use hair models created by experts as the basis for their model. Similarly, the model presented by Sra et al. [334] is trained on user-based suggestions, and Yu et al. [403] present a 3D sketching tool that creates 3D objects based on users’ 3D sketches. These works are promising examples of human-AI collaboration. However, all these cases are about asynchronous collaboration. Real-time human-AI interaction in XR worlds is currently missing and would be a promising avenue for future research.

6 Conclusion

We present a scoping review of 311 papers at the intersection of XR and AI research. We reviewed the papers using a code book with 26 codes covering research direction, contribution, and details about technologies and methods. We present a typology of the state of the art covering five main topics. Furthermore, we provide a list of commonly used tools, software, datasets, and models. Lastly, we summarize 13 research opportunities and provide recommendations for future research. Current XR and AI research mainly focuses on using AI to create realistic XR worlds, support technical aspects of interaction techniques, and understand users from a performance-driven perspective. Moreover, interaction with VAs is mostly researched with perceptual experiments while technical implementations are missing, and there is a lack of research exploring how XR can be used to support AI research. In general, there is a lack of generalizability, robustness, and methodological and theoretical work in this area. Finally, ethical and societal impacts of XR and AI research are largely neglected.

Acknowledgments

We would like to thank Tor-Salve Dalsgaard for helping with collecting the review articles and Sarah Bagge Valsborg for helping with extracting the links and text for the list of tools, methods, datasets, and software. This research was supported by the HumanE AI Network from the European Union’s Horizon 2020 research and innovation program under grant agreement No 952026, and the Pioneer Centre for AI, DNRF grant number P1.

A List of Datasets, Software, Libraries, and Models

Table 8:
Name | Short Description | Link | Source
ACE | 50 users exploring 5 different AR scenes | https://cs.gmu.edu/~sqchen/open-access/ACE-Dataset.tgz | [374]
BigHand2.2M | Hand pose dataset | https://sites.google.com/site/qiyeincv/home/bibtex_cvpr2017 | [404]
CASSIE Dataset | VR sketch data | https://gitlab.inria.fr/D3/cassie-data | [403]
Cityscapes | Street scenes from 50 different cities | https://www.cityscapes-dataset.com/ | [69]
CMU Graphics Lab Motion Capture Database | 49 gaits obtained from subjects walking with different styles | http://mocap.cs.cmu.edu/ | -
CMU Panoptic Dataset | 65 sequences and 1.5 million 3D skeletons | http://domedb.perception.cs.cmu.edu/ | [151]
DeepFashion | Large-scale clothes database including annotations of clothing items and cross-pose/cross-domain image pairs | http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html | [220]
DGaze dataset | Gaze data in dynamic virtual indoor and outdoor scenes | http://zhiminghu.net/DGaze | [135]
Director’s Cut | Includes the directional cues and plot points as well as the scan-paths of the test subjects watching films in VR | https://v-sense.scss.tcd.ie/?p=2477 | [177]
DISFA | Spontaneous facial action intensity database | http://www.engr.du.edu/mmahoor/DISFA.htm | [237]
DIV2K | Diverse 2K resolution high-quality images with a large diversity of contents | https://data.vision.ee.ethz.ch/cvl/DIV2K/ | [3]
EgoCap | 100,000 egocentric images of eight people in different clothing | https://vcai.mpi-inf.mpg.de/projects/EgoCap/ | [298]
EgoVIP | Egocentric visual-inertial 3D human pose dataset | https://sites.google.com/site/youngwooncha/egovip | [52]
EHTaskDataset | Eye and head movements of 30 participants performing four tasks (free viewing, visual search, saliency, and track) in 15 360-degree VR videos | http://zhiminghu.net/EHTask | [133]
Enron Mobile Email Dataset | Sentences written by Enron employees on BlackBerry mobile devices | http://www.keithv.com/software/enronmobile/ | [364]
Extended Cohn-Kanade Dataset (CK+) | Dataset for action unit and emotion-specified expression | https://sites.pitt.edu/~emotion/ck-spread.htm | [223]
FERG-DB | 2D images of stylized characters with annotated facial expressions | http://grail.cs.washington.edu/projects/deepexpr/ferg-2d-db.html | [13]
GrabAR | Paired images of hands and objects | link not found | [345]
GTSDB | German traffic sign detection benchmark, including 900 images from three categories | https://benchmark.ini.rub.de/gtsdb_news.html | [130]
GTSRB | German traffic sign multi-category classification benchmark | https://benchmark.ini.rub.de/gtsrb_news.html | [335]
Human 3.6M | 3.6 million human poses and corresponding images of 11 professional actors and 17 scenarios | http://vision.imar.ro/human3.6m/description.php | [141]
IISc Video Discomfort Dataset | Videos and discomfort scores | https://github.com/rajiviisc/Video-Discomfort | [25]
ImageNet | Image database | https://www.image-net.org/challenges/LSVRC/ | [308]
JAFFE | Japanese female facial expression dataset | https://zenodo.org/record/3451524 | [227]
KITTI | Traffic scenarios | https://www.cvlibs.net/datasets/kitti/ | [104]
Laval Indoor HDR Dataset | 2100+ high-resolution indoor panoramas | http://indoor.hdrdb.com/ | [101]
Table 8: List of datasets. Part I.
Table 9:
Name | Short Description | Link | Source
Microsoft COCO: Common Objects in Context | Photos of 91 object types | https://arxiv.org/abs/1405.0312 | [211]
MPI Emotional Body Expressions Database for Narrative Scenarios | Emotional body expressions | http://figshare.com/articles/MPI_EMBM_Database_Mocap_Files/1220428 | [367]
MPI-INF-3DHP | 3D human body pose estimation dataset consisting of both constrained indoor and complex outdoor scenes | https://vcai.mpi-inf.mpg.de/3dhp-dataset/ | [242]
MSRA14 | Hand tracking dataset | https://jimmysuen.github.io/ | [285]
MSRA15 | Hand gesture dataset | https://jimmysuen.github.io/ | [339]
PanoContext | Panorama dataset | https://panocontext.cs.princeton.edu/ | [408]
People Snapshot Database | 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving | https://graphics.tu-bs.de/people-snapshot | [10]
Places2 | Scene photographs of a diverse list of the types of environments | http://places2.csail.mit.edu/ | [415]
Public-AR-Booksearch | Images of book spines in different sizes and various conditions | https://github.com/M-Schrapel/Public-AR-Booksearch | [319]
Stanford 2D-3D Semantics Dataset | Provides a variety of mutually registered modalities from 2D, 2.5D, and 3D domains, with instance-level semantic and geometric annotations | http://buildingparser.stanford.edu/dataset.html | [18]
SUNCG | Synthetic 3D scenes | https://sscnet.cs.princeton.edu | [332]
The Million Song Dataset | Collection of audio features and metadata for a million contemporary popular music tracks | https://github.com/tbertinmahieux/MSongsDB | [31]
UEC FOOD 100 | Food photos | http://foodcam.mobi/dataset100.html | [235]
UIBVFED | Virtual facial expressions | http://ugivia.uib.es/uibvfed/ | [265]
UNOC dataset | Large-scale motion capture dataset with body and finger motions | https://github.com/facebookresearch/UNOC | [272]
VR-EyeTracking | Eye tracking data of videos captured in dynamic scenes; each video is viewed by at least 31 subjects | https://github.com/xuyanyu-shh/VR-EyeTracking | [393]
VRSA | Image and video database | https://ivylabdb.kaist.ac.kr/ | [172]
XR-Ego-Pose | Photorealistic egocentric camera images in a variety of indoor and outdoor spaces | https://github.com/facebookresearch/xR-EgoPose | [351]
- | LDR environment maps | http://www.jflalonde.ca/projects/deepIndoorLight | [101]
- | Colored 3D scans/collection of points with 3D coordinates and RGB color values | http://buildingparser.stanford.edu/dataset.html | [19]
- | Stereoscopic 3D videos and their sickness ratings | - | [267]
- | Speech and corresponding gestures in a 3D human pose format | no link found | [295]
- | Visual-inertial input dataset for SLAM applications | https://doi.org/10.5281/zenodo.5018311 | -
- | Various datasets for viewport prediction | https://gitlab.com/miguelfromeror/head-motion-prediction/tree/master | [305]
- | Dataset for improving humans’ ability to interpret deictic gestures in VR | https://github.com/interactionlab/Deictic-Pointing-in-VR | [238]
- | Human body motion reconstruction using only eyeglasses-mounted cameras and few body-worn inertial sensors | https://sites.google.com/site/youngwooncha/egovip | [52]
- | Exploring user behaviors in spherical video streaming | https://wuchlei-thu.github.io | [385]
Table 9: List of datasets. Part II.
Table 10:
Name | Short Description | Link | Paper
CERT: The Computer Expression Recognition Toolbox | "Fully automated facial expression recognition that operates in real-time" | https://inc.ucsd.edu/mplab/users/marni/Projects/CERT.htm | [213]
COVAREP | Repository for speech processing algorithms | http://covarep.github.io/covarep | [75]
Covert Embodied Choice | Unity code for VR experimental setup | https://github.com/onejgordon/cec_vr | [109]
Daz-3D Studio | Creation of 3D scenes and characters | https://www.daz3d.com/ | -
FAtiMA Toolkit | "Collection of tools/assets designed for the creation of characters and robots with social and emotional intelligence." | https://fatima-toolkit.eu/ | -
Google's ARCore platform | "With ARCore, build new augmented reality experiences that seamlessly blend the digital and physical worlds. Transform the way people play, shop, learn, create, and experience the world together—at Google scale" | https://developers.google.com/ar | -
Google's Dialogflow service for dialogue management | "Lifelike conversational AI with state-of-the-art virtual agents. Available in two editions: Dialogflow CX (advanced), Dialogflow ES (standard)" | https://cloud.google.com/dialogflow | -
HeMoG | Gravitational white-box model for head motion estimation in 360-degree videos | https://gitlab.com/miguelfromeror/hemog | [304]
HRV Python library | "Heart Rate Variability analysis" | https://pypi.org/project/hrv-analysis/ | -
Keras | "Keras is an API designed for human beings, not machines. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear & actionable error messages. It also has extensive documentation and developer guides." | https://keras.io/ | -
Learning Gain Prediction | Contains code and featurized data | https://github.com/LeonDong1993/learning-gain-prediction | [252]
LIPSYNC | Lip-syncing and facial animation tool for Unity | https://lipsync.rogodigital.com/ | -
Mixamo | Animation tool for 3D character animation | https://www.mixamo.com/ | -
OpenPose | "Real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (in total 135 keypoints) on single images" | https://github.com/CMU-Perceptual-Computing-Lab/openpose | [51]
Table 10: List of software toolkits and libraries. Part I.
Table 11:
Name | Short Description | Link | Paper
OpenRDW | Provides APIs to access the attributes of scenes, to customize the RDW controllers, to simulate and visualize the navigation process, to export multiple formats of the results, and to evaluate RDW techniques | https://github.com/yaoling1997/OpenRDW | [204]
Panoptic-DeepLab | Image segmentation library | https://github.com/bowenc0221/panoptic-deeplab | [59]
PhysioNet | "The Research Resource for Complex Physiologic Signals" | https://physionet.org/ | -
Poly Haven | 3D asset library | https://hdrihaven.com/ | -
PyTorch | "An open source machine learning framework that accelerates the path from research prototyping to production deployment." | https://pytorch.org/ | -
ResonanceAudio | "Resonance Audio is a multi-platform spatial audio SDK, delivering high fidelity at scale. This powerful spatial audio technology is critical to realistic experiences for AR, VR, gaming, and video." | https://resonance-audio.github.io/resonance-audio/ | -
Scikit-learn | "Simple and efficient tools for predictive data analysis. Accessible to everybody, and reusable in various contexts. Built on NumPy, SciPy, and matplotlib. Open source, commercially usable - BSD license" | https://scikit-learn.org/stable/ | -
TensorFlow | "Create production-grade machine learning models with TensorFlow" | https://www.tensorflow.org/ | -
Shark library | "Shark is a fast, modular, feature-rich open-source C++ machine learning library. It provides methods for linear and nonlinear optimization, kernel-based learning algorithms, neural networks, and various other machine learning techniques. It serves as a powerful toolbox for real world applications as well as for research. Shark works on Windows, MacOS X, and Linux. It comes with extensive documentation. Shark is licensed under the GNU Lesser General Public License." | https://www.shark-ml.org/ | -
Seurat | System for image-based scene simplification for VR | https://github.com/googlevr/seurat | [184]
SimSensei | Virtual interviewer for healthcare decision support | http://simsensei.ict.usc.edu/ | [78]
VGG Image Annotator | Image annotation tool | https://www.robots.ox.ac.uk/~vgg/software/via/via_demo.html | -
Virtual Human Toolkit | Toolkit for the creation of virtual human conversational characters | https://vhtoolkit.ict.usc.edu/ | -
Table 11: List of software toolkits and libraries. Part II.
Table 12:
Name | Short Description | Link | Paper
ARShadowGAN | Model for creating virtual shadows | https://github.com/ldq9526/ARShadowGAN | [214]
BodyNet | Volumetric inference of 3D human body shapes | http://www.di.ens.fr/willow/research/bodynet/ | [360]
Convolutional-Pose-Machines | Model for articulated pose estimation | https://github.com/CMU-Perceptual-Computing-Lab/convolutional-pose-machines-release | [379]
CUT | Contrastive unpaired translation for image-to-image translation | https://github.com/taesungp/contrastive-unpaired-translation | [274]
CycleGAN | Image-to-image translation without input-output pairs | https://github.com/junyanz/CycleGAN | [417]
EEGModels | A collection of Convolutional Neural Network (CNN) models for EEG signal processing and classification, written in Keras and TensorFlow | https://github.com/vlawhern/arl-eegmodels | [190]
ICNet | Model that creates segmentation masks for every pixel in an image | https://github.com/hellochick/ICNet-tensorflow | [414]
Pix2Pix | Image-to-image translation with conditional adversarial networks | https://github.com/phillipi/pix2pix | [144]
SiCloPe | Silhouette-based representation for modeling clothed human bodies | https://vgl.ict.usc.edu/Research/SiCloPe/ | [260]
StarGAN | Image-to-image translations for multiple domains | https://github.com/yunjey/StarGAN | [63]
StarGAN v2 | Image-to-image translations for multiple domains | https://github.com/clovaai/stargan-v2 | [64]
- | Neural network for predicting avatar movements in VR | https://github.com/david-halbhuber/motionprediction | [321]
Table 12: List of ML models and neural networks.

B Search Queries

B.1 Search for Venue-based Strategy

DATE: June 15, 2022 to June 24, 2022
QUERY: TITLE-ABSTRACT-KEYWORDS("augmented reality" OR "AR" OR "extended reality" OR "head-mounted display" OR "head-up display" OR "head-worn display" OR "headset" OR "HMD" OR "immersive environment" OR "mixed reality" OR "virtual environment" OR "virtual reality" OR "virtual space" OR "VR" OR "XR") AND TITLE-ABSTRACT-KEYWORDS("agent" OR "artificial intelligence" OR "bandit" OR "classif*" OR "cluster*" OR "computational" OR "computer vision" OR "dataset" OR "deep" OR "estimation" OR "generative" OR "intelligent" OR "learning" OR "machine learning" OR "markov" OR "model*" OR "natural language processing" OR "neural" OR "optimi*" OR "predict*" OR "reasoning" OR "recognition" OR "segmentation" OR "*supervised*" OR "tensor").

B.2 First Searches

Scopus.
DATE: May 16, 2022
QUERY: TITLE-ABS-KEY("augmented reality" OR AR OR "extended reality" OR "head-mounted display" OR "head-up display" OR "head-worn display" OR "headset" OR HMD OR "immersive environment" OR "mixed reality" OR "virtual environment" OR "virtual reality" OR "virtual space" OR VR OR XR) AND TITLE-ABS-KEY(agent OR "artificial intelligence" OR bandit OR classif* OR cluster* OR computational OR "computer vision" OR dataset OR deep OR estimation OR generative OR intelligent OR learning OR "machine learning" OR markov OR model* OR "natural language processing" OR neural OR optimi* OR predict* OR reasoning OR recognition OR segmentation OR *supervised* OR tensor), FILTER: years between 2017 and 2021
number of results: 45031
LANGUAGE: English (43552), Chinese (620), Spanish (337), Portuguese (150), German (133), Russian (113), French (75), Korean (51), Turkish (37), Japanese (29), Italian (24), Slovenian (10), Hungarian (6), Czech (4), Ukrainian (4), Bosnian (3), Lithuanian (3), Polish (3), Arabic (2), Croatian (2), Danish (2), Dutch (2), Greek (2), Persian (2), Slovak (2), Afrikaans (1), Estonian (1), Indonesian (1), Malay (1), Undefined (1)
SUBJECT AREA: Computer Science (27587), Engineering (17,566), Mathematics (7,390), Social Sciences (6,190), Medicine (5,149), Physics and Astronomy (3,857), Materials Science (2,388), Decision Sciences (2,358), Biochemistry, Genetics and Molecular Biology (1,579), Neuroscience (1,447), Psychology (1,383), Energy (1,300), Business, Management and Accounting (1,294), Environmental Science (1,170), Arts and Humanities (1,107), Earth and Planetary Sciences (905), Chemistry (826), Chemical Engineering (767), Health Professions (553), Multidisciplinary (435), Pharmacology, Toxicology and Pharmaceutics (378), Agricultural and Biological Sciences (367), Nursing (255), Economics, Econometrics and Finance (227), Immunology and Microbiology (142), Dentistry (125), Veterinary (30), Undefined (2)
DOCUMENT TYPE: Conference Paper (18216), Article (7165), Conference Review (1,260), Book Chapter (533), Review (313), Book (29), Editorial (26), Erratum (15), Note (6), Retracted (5), Short Survey (2), Data Paper (1), Letter (1), Undefined (15)
SOURCE TYPE: Conference Proceedings (14639), Journal (7292), Book Series (3431), Trade Journal (19)
SOURCE TITLE: excluded only: Workshop Proceedings (383), National Venues (255+97+51+45+31+28), Adjunct Proceedings (104+63+59+57+42)
FINAL 8877 without abbreviations, 23979 including abbreviations
KEYWORD: human computer interaction (1356)
Web of Science.
DATE: May 16, 2022
QUERY: (TI=("augmented reality" OR AR OR "extended reality" OR "head-mounted display" OR "head-up display" OR "head-worn display" OR "headset" OR HMD OR "immersive environment" OR "mixed reality" OR "virtual environment" OR "virtual reality" OR "virtual space" OR VR OR XR) OR AB=("augmented reality" OR AR OR "extended reality" OR "head-mounted display" OR "head-up display" OR "head-worn display" OR "headset" OR HMD OR "immersive environment" OR "mixed reality" OR "virtual environment" OR "virtual reality" OR "virtual space" OR VR OR XR) OR AK=("augmented reality" OR AR OR "extended reality" OR "head-mounted display" OR "head-up display" OR "head-worn display" OR "headset" OR HMD OR "immersive environment" OR "mixed reality" OR "virtual environment" OR "virtual reality" OR "virtual space" OR VR OR XR)) AND (TI=(agent OR "artificial intelligence" OR bandit OR classif* OR cluster* OR computational OR "computer vision" OR dataset OR deep OR estimation OR generative OR intelligent OR learning OR "machine learning" OR markov OR model* OR "natural language processing" OR neural OR optimi* OR predict* OR reasoning OR recognition OR segmentation OR *supervised* OR tensor) OR AB=(agent OR "artificial intelligence" OR bandit OR classif* OR cluster* OR computational OR "computer vision" OR dataset OR deep OR estimation OR generative OR intelligent OR learning OR "machine learning" OR markov OR model* OR "natural language processing" OR neural OR optimi* OR predict* OR reasoning OR recognition OR segmentation OR *supervised* OR tensor) OR AK=(agent OR "artificial intelligence" OR bandit OR classif* OR cluster* OR computational OR "computer vision" OR dataset OR deep OR estimation OR generative OR intelligent OR learning OR "machine learning" OR markov OR model* OR "natural language processing" OR neural OR optimi* OR predict* OR reasoning OR recognition OR segmentation OR *supervised* OR tensor))
number of results: 39380
LANGUAGE: English (38,434), Spanish (293), Chinese (141), Portuguese (128), Russian (113), German (77), Turkish (48), French (41), Italian (24), Korean (19), Japanese (15), Ukrainian (11), Polish (7), Hungarian (6), Bulgarian (4), Catalan (3), Afrikaans (2), Arabic (2), Croatian (2), Czech (2), Malay (2), Slovenian (2), Estonian (1), Norwegian (1), Slovak (1), Unspecified (1)
RESEARCH AREA: Computer Science (11379), 5 excluded with most papers: Engineering (9346), Education Educational Research (2869), Physics (2746), Chemistry (2559), Telecommunications (1822)
DOCUMENT TYPE: Proceedings Papers (7512), Articles (3858), Review Articles (127), Early Access (98), Book Chapters (91), Editorial Materials (32), Corrections (5), Books (1), Data Papers (1), Retracted Publications (1)
PUBLICATION TITLES: excluded only: workshop (28+14+17), adjunct (27+25+59+56+50+43+31+31), regional (45+23+13+12), other winter conference (16), lecture notes (23)
FINAL 10713

C Criteria for Including a Venue

Venues that explicitly mention the name of one of the fields of interest (we include HCI here because a lot of XR research is published in general HCI venues). An example of an XR venue is VRST5 and an example of an AI venue is ICML7.
Venues that include one of the following terms: computer vision, computer graphics, image processing.
Venues whose name includes the word “intelligent” combined with “system”, “agent”, “user interface”, “computing”, “automation”, “fuzzy systems”, or “signal processing”.
Venues were excluded when their name contained the word “intelligent” combined with a further specification that is not of interest to us, such as “robots”, “vehicles”, “design”, “transportation system”, “engineering”, or “control”.

D List of Included Venues

ACM CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS
ACM COMPUTING SURVEYS
ACM CONFERENCE ON DESIGNING INTERACTIVE SYSTEMS
ACM INTERNATIONAL CONFERENCE ON INTELLIGENT VIRTUAL AGENTS
ACM ON COMPUTER GRAPHICS AND INTERACTIVE TECHNIQUES
ACM SIGGRAPH
ACM SIGGRAPH INTERNATIONAL CONFERENCE ON VIRTUAL-REALITY CONTINUUM AND ITS APPLICATIONS IN INDUSTRY
ACM SYMPOSIUM ON APPLIED PERCEPTION
ACM SYMPOSIUM ON EYE TRACKING RESEARCH AND APPLICATIONS
ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY
ACM TRANSACTIONS ON APPLIED PERCEPTION
ACM TRANSACTIONS ON COMPUTER-HUMAN INTERACTION
ACM TRANSACTIONS ON GRAPHICS
ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS
ANNUAL ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY
AUGMENTED HUMAN INTERNATIONAL CONFERENCE
COMPUTER GRAPHICS INTERNATIONAL CONFERENCE
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND INFORMATION SYSTEMS
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND VIRTUAL REALITY
INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND ARTIFICIAL INTELLIGENCE (CSAI) / INTERNATIONAL CONFERENCE ON INFORMATION AND MULTIMEDIA TECHNOLOGY (ICIMT)
INTERNATIONAL CONFERENCE ON COMPUTING AND ARTIFICIAL INTELLIGENCE
INTERNATIONAL CONFERENCE ON COMPUTING AND PATTERN RECOGNITION
INTERNATIONAL CONFERENCE ON HCI AND UX
INTERNATIONAL CONFERENCE ON HUMAN-AGENT INTERACTION
INTERNATIONAL CONFERENCE ON IMAGE AND GRAPHICS PROCESSING
INTERNATIONAL CONFERENCE ON INNOVATION IN ARTIFICIAL INTELLIGENCE
INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES
INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND COMPUTING
INTERNATIONAL CONFERENCE ON MATHEMATICS AND ARTIFICIAL INTELLIGENCE
INTERNATIONAL CONFERENCE ON MOBILE HUMAN-COMPUTER INTERACTION
INTERNATIONAL CONFERENCE ON ROBOTICS, INTELLIGENT CONTROL AND ARTIFICIAL INTELLIGENCE
INTERNATIONAL CONFERENCE ON VIRTUAL AND AUGMENTED REALITY SIMULATIONS
INTERNATIONAL CONFERENCE ON VIRTUAL REALITY
INTERNATIONAL CONFERENCE ON VISION, IMAGE AND SIGNAL PROCESSING
PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON TANGIBLE, EMBEDDED, AND EMBODIED INTERACTION
SYMPOSIUM ON SPATIAL USER INTERACTION
VIRTUAL REALITY INTERNATIONAL CONFERENCE
AMITY INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE
CSI INTERNATIONAL SYMPOSIUM ON ARTIFICIAL INTELLIGENCE AND SIGNAL PROCESSING
IEEE COMPUTER GRAPHICS AND APPLICATIONS
IEEE CONFERENCE ON COMPUTATIONAL INTELLIGENCE FOR FINANCIAL ENGINEERING AND ECONOMICS CIFER
IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION
IEEE CONFERENCE ON EVOLVING AND ADAPTIVE INTELLIGENCE SYSTEMS
IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES
IEEE INTELLIGENT SYSTEMS
IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING
IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND VIRTUAL REALITY
IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS
IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC CONTROL AND INTELLIGENT SYSTEMS
IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE & COMMUNICATION TECHNOLOGY
IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND APPLICATIONS
IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND COMPUTING RESEARCH
IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND VIRTUAL ENVIRONMENTS FOR MEASUREMENT SYSTEMS AND APPLICATIONS
IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION
IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING ICIP
IEEE INTERNATIONAL CONFERENCE ON INTERNET OF THINGS AND INTELLIGENCE SYSTEM
IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS
IEEE INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING APPLICATIONS
IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING
IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS
IEEE RECENT ADVANCES IN INTELLIGENT COMPUTATIONAL SYSTEMS
IEEE SYMPOSIUM ON 3D USER INTERFACES
IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE
IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
IEEE TRANSACTIONS ON FUZZY SYSTEMS
IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS
IEEE TRANSACTIONS ON IMAGE PROCESSING
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
IEEE VIRTUAL HUMANS AND CROWDS FOR IMMERSIVE ENVIRONMENTS
IEEE/ACIS INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING, ARTIFICIAL INTELLIGENCE, NETWORKING AND PARALLEL/DISTRIBUTED COMPUTING
IEEE/WIC/ACM INTERNATIONAL JOINT CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY
INNOVATIONS IN INTELLIGENT SYSTEMS AND APPLICATIONS CONFERENCE
INTELLIGENT SYSTEMS CONFERENCE
INTERNATIONAL CONFERENCE INFORMATION INTELLIGENCE SYSTEMS AND APPLICATIONS
INTERNATIONAL CONFERENCE ON 3D IMMERSION
INTERNATIONAL CONFERENCE ON 3D VISION
INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE
INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND COMPUTER ENGINEERING
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND DATA PROCESSING
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE FOR INDUSTRIES
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE IN INFORMATION AND COMMUNICATION
INTERNATIONAL CONFERENCE ON BIG DATA ANALYTICS AND COMPUTATIONAL INTELLIGENCE
INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND APPLICATIONS
INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND SECURITY
INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE IN DATA SCIENCE
INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND COMPUTATIONAL INTELLIGENCE
INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE/INTELLIGENCE AND APPLIED INFORMATICS
INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATION, AND INTELLIGENT SYSTEMS
INTERNATIONAL CONFERENCE ON CYBERNETICS AND INTELLIGENT SYSTEM
INTERNATIONAL CONFERENCE ON CYBERNETICS AND INTELLIGENT SYSTEMS
INTERNATIONAL CONFERENCE ON CYBERNETICS AND INTELLIGENT SYSTEMS (CIS) ROBOTICS, AUTOMATION AND MECHATRONICS (RAM)
INTERNATIONAL CONFERENCE ON ELECTRONICS COMPUTERS AND ARTIFICIAL INTELLIGENCE
INTERNATIONAL CONFERENCE ON GAMES AND VIRTUAL WORLDS FOR SERIOUS APPLICATIONS
INTERNATIONAL CONFERENCE ON IMAGE, VISION AND COMPUTING
INTERNATIONAL CONFERENCE ON INTELLIGENT AND ADVANCED SYSTEM (ICIAS 2018) / WORLD ENGINEERING, SCIENCE & TECHNOLOGY CONGRESS
INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING AND HUMAN-COMPUTER INTERACTION
INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS
INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS
INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS
INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND DATA SCIENCE
INTERNATIONAL CONFERENCE ON MACHINE VISION AND INFORMATION TECHNOLOGY
INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS
INTERNATIONAL CONFERENCE ON MECHATRONICS AND MACHINE VISION IN PRACTICE
INTERNATIONAL CONFERENCE ON PATTERN ANALYSIS AND INTELLIGENT SYSTEMS
INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION
INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION AND IMAGE ANALYSIS
INTERNATIONAL CONFERENCE ON ROBOTS & INTELLIGENT SYSTEM
INTERNATIONAL CONFERENCE ON SECURITY, PATTERN ANALYSIS, AND CYBERNETICS
INTERNATIONAL CONFERENCE ON SOFT COMPUTING & MACHINE INTELLIGENCE ISCMI
INTERNATIONAL CONFERENCE ON SOFT COMPUTING, INTELLIGENT SYSTEM AND INFORMATION TECHNOLOGY
INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE
INTERNATIONAL CONFERENCE ON TRANSDISCIPLINARY AI
INTERNATIONAL CONFERENCE ON VIRTUAL REALITY AND VISUALIZATION
INTERNATIONAL CONFERENCE ON VIRTUAL SYSTEMS & MULTIMEDIA
INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND COMMUNICATION NETWORKS
INTERNATIONAL INFORMATION TECHNOLOGY AND ARTIFICIAL INTELLIGENCE CONFERENCE
INTERNATIONAL SEMINAR ON RESEARCH OF INFORMATION TECHNOLOGY AND INTELLIGENT SYSTEMS
INTERNATIONAL SYMPOSIUM ON APPLIED COMPUTATIONAL INTELLIGENCE AND INFORMATICS
INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN
INTERNATIONAL SYMPOSIUM ON INSTRUMENTATION, CONTROL, ARTIFICIAL INTELLIGENCE, AND ROBOTICS
INTERNATIONAL SYMPOSIUM ON INTELLIGENT SIGNAL PROCESSING AND COMMUNICATION SYSTEMS ISPACS
INTERNATIONAL SYMPOSIUM ON INTELLIGENT SYSTEMS AND INFORMATICS
INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY
JOINT INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS SCIS AND INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS ISIS
SYMPOSIUM ON NEURAL NETWORKS AND APPLICATIONS
SYMPOSIUM ON VIRTUAL AND AUGMENTED REALITY
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURAL)
ADVANCES IN INTELLIGENT SYSTEMS AND COMPUTING (SPRINGER)
AI & SOCIETY (SPRINGER)
APPLIED INTELLIGENCE (SPRINGER)
ARTIFICIAL INTELLIGENCE REVIEW (SPRINGER)
HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES (SPRINGER)
INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE IN EDUCATION (SPRINGER)
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS (SPRINGER)
JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING (SPRINGER)
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS (SPRINGER)
JOURNAL OF INTELLIGENT INFORMATION SYSTEMS (SPRINGER)
JOURNAL OF REAL-TIME IMAGE PROCESSING (SPRINGER)
JOURNAL OF VISUALIZATION (SPRINGER)
LEARNING AND ANALYTICS IN INTELLIGENT SYSTEMS (SPRINGER)
MACHINE LEARNING (SPRINGER)
MACHINE VISION AND APPLICATIONS (SPRINGER)
NEURAL COMPUTING & APPLICATIONS (SPRINGER)
NEURAL PROCESSING LETTERS (SPRINGER)
PATTERN ANALYSIS AND APPLICATIONS (SPRINGER)
STUDIES IN COMPUTATIONAL INTELLIGENCE (SPRINGER)
VIRTUAL REALITY (SPRINGER)
VISUAL COMPUTER (SPRINGER)

E Code Book

Item | Description
General research objective and contribution
C1 Category | ❍ AI applied to solve an XR problem; ❍ XR applied to solve an AI problem; ❍ XR and AI both applied but not focus of the work
C2 Research question/objective | [...]
C3 Contribution or main findings | [...]
C4 Contribution type | ❏ Application; ❏ Empirical; ❏ Dataset; ❏ Methodological; ❏ ML model; ❏ System/artifact; ❏ Technological; ❏ Theoretical; ❏ Other
C5 AI part of the contribution? | ❍ Yes, paper presents the implementation of an algorithm, classifier, model, etc. as part of the key contribution; ❍ Yes (not communicated by the authors), but the focus of the paper is clearly on the algorithm; ❍ No, an AI algorithm is applied to solve a problem but is not the actual focus of the work (e.g., applied for analysis of results); ❍ AI is not actually applied, but the paper discusses/studies/investigates some issue that might become important with AI, e.g., interaction with social agents
C6 Limitations | [...]
User-based evaluation
C7 Type of user study | ❍ Yes, brainstorming/ideation; ❍ Yes, empirical lab study; ❍ Yes, empirical remote study; ❍ Yes, expert evaluation; ❍ Yes, field study; ❍ Yes, pilot testing; ❍ Yes, workshop; ❍ No user study; ❍ Other
C8 Purpose of user study | [...]
C9 Metric for user-based evaluation | [...]
C10 Study details (e.g., age, gender, target user group) | [...]
Table 13: ❍ stands for one selection only; ❏ stands for multiple selections; [...] stands for text copied from the paper.
Item | Description
XR-related
C11 Type of XR | ❍ AR (not further specified); ❍ AR (optical see-through, 3DoF); ❍ AR (optical see-through, 6DoF); ❍ AR (projection); ❍ AR (smartphone); ❍ AR (video see-through, 3DoF); ❍ AR (video see-through, 6DoF); ❍ VR (3DoF); ❍ VR (6DoF); ❍ VR (not further specified); ❍ Other
C12 Device type | ❍ HoloLens 1; ❍ HoloLens 2; ❍ HTC Vive; ❍ HTC Vive Pro; ❍ HTC Vive Pro Eye; ❍ Smartphone-based AR; ❍ Oculus Go; ❍ Oculus Quest; ❍ Oculus Rift; ❍ Samsung Gear VR; ❍ Custom device; ❍ Not specified; ❍ Other
C13 Interaction/application/task | ❍ Interaction/collaboration with artificial agent/embodied AI; ❍ Interaction/collaboration with people; ❍ Locomotion/navigation; ❍ Manipulation; ❍ Pointing; ❍ Selection; ❍ Typing; ❍ Viewing; ❍ Visual search; ❍ Other
C14 What XR problem is solved? | ❏ Collaboration/shared visual environments with artificial agents/embodied AI; ❏ Collaboration/shared visual environments with people; ❏ Display technology; ❏ High-fidelity virtual human characters/virtual representation of humans; ❏ Interaction techniques; ❏ Perception and neuroscience; ❏ Social and ethical issues/impact; ❏ Tracking technologies; ❏ Health-related impacts; ❏ Longitudinal effects; ❏ Novel systems and devices; ❏ Not applicable, focus on AI problem; ❏ Not applicable, XR is used as an application; ❏ Other
AI-related
C15 Custom implementation? | ❍ Yes; ❍ No
C16 Tool/library used | [...]
C17 Class of algorithm | ❍ Supervised learning; ❍ Unsupervised learning; ❍ Semi-supervised learning; ❍ Reinforcement learning; ❍ No algorithm applied; ❍ Not specified; ❍ Unclear; ❍ Other
C18 Details about algorithm | [...]
C19 Validation and test | [...]
C20 Performance and/or validation metric | [...]
C21 Model technique | ❍ Classification; ❍ Regression; ❍ Clustering; ❍ Dimensionality reduction; ❍ Other
C22 Purpose + application | [...]
C23 When/how AI is applied | ❍ "Before" interaction, e.g., for generation of virtual content, generation of a model/classifier, etc.; ❍ Use case "during" interaction: use case meant for online use of AI, but not yet realized in the paper, e.g., interaction with embodied AI; ❍ Deployment "during" interaction: algorithm/model actually applied/deployed online; ❍ "After" interaction: e.g., to analyze the results of a user study or to build a model based on the recorded data; ❍ Other
C24 Data acquisition | ❏ "Human input"/subjective data; ❏ Acoustic sensor; ❏ Brain-computer interface; ❏ Electroencephalography; ❏ Eye tracking; ❏ Hand tracking; ❏ Images/videos; ❏ Inertial sensor; ❏ Mid-air pointing; ❏ Positional tracking; ❏ Publicly available data set; ❏ Synthetic data; ❏ Speech/audio; ❏ Data not recorded but based on a previous paper; ❏ Data not recorded but gathered from a literature survey; ❏ Other
C25 Publicly available resources (e.g., data sets, code, models) | [...]
C26 What AI problem is solved? | ❏ Explainability and understandability; ❏ Human-AI interaction and collaboration; ❏ Learning, reasoning, planning; ❏ Perception, cognitive modeling; ❏ Privacy protection, trust, and security; ❏ Social, ethical, legal, political issues; ❏ Not applicable, focus on XR problem; ❏ Not applicable, both applied; ❏ Other
Table 14: Continuation of Table 13.

F Publication Venues

Venue | # | Venue | # | Venue | # | Venue | # | Venue | # | Venue | # | Venue | #
VRST | 57 | ICIP | 7 | TAFFC | 3 | AIH | 2 | TAP | 2 | TEI | 1 | ICPR | 1
AIVR | 42 | IVA | 6 | VC | 3 | SSCI | 2 | CGA | 2 | ISRITI | 1 | IISA | 1
TVCG | 35 | CVPR | 5 | SVR | 3 | PAMI | 2 | GVWSA | 2 | RTIP | 1 | JIS | 1
ISMAR | 28 | ACII | 4 | TIP | 3 | ETRA | 2 | ICCV | 1 | PAA | 1 | JAI | 1
CHI | 22 | SIGGRAPH | 4 | AH | 3 | CHI PLAY | 2 | IJCNN | 1 | ICMLA | 1 | JV | 1
VR | 14 | HAI | 4 | IC3D | 3 | SUI | 2 | TNNLS | 1 | MobileHCI | 1 | DIS | 1
TOG | 11 | SAP | 3 | AAMAS | 3 | ICASSP | 2 | VRCAI | 1 | PACMCGIT | 1 | VCIP | 1
UIST | 10 |
Table 15: Published papers per publication venue. The full venue names are shown in Table 19 and Table 20.
Venue group | AI applied to XR | XR applied to AI | Intelligent VAs | XR and AI applied | Sum
XR | 70 | 2 | 17 | 23 | 112
Computer Graphics | 48 | 0 | 5 | 8 | 61
AIXR | 22 | 5 | 3 | 14 | 44
HCI | 25 | 0 | 6 | 11 | 42
AI | 7 | 0 | 3 | 8 | 18
Agents | 1 | 0 | 8 | 1 | 10
Computer Vision | 8 | 0 | 0 | 1 | 9
Affective Computing | 1 | 0 | 1 | 5 | 7
Eye Tracking and Perception | 4 | 0 | 2 | 1 | 7
Visualization | 0 | 0 | 0 | 1 | 1
Table 16: Published papers per publication venue/community and category.
Keyword | AI applied to XR | XR applied to AI | Intelligent VAs | XR and AI both applied | Sum
XR keywords:
VR | 273 | 12 | 28 | 59 | 372
virtual reality | 235 | 17 | 43 | 80 | 375
augmented reality | 72 | 0 | 15 | 56 | 143
virtual | 42 | – | 48 | 20 | 109
AR | 57 | 0 | 14 | 35 | 106
virtual environment | 44 | 1 | 14 | 11 | 70
mixed reality | 26 | 0 | 5 | 10 | 41
HMD | 22 | 0 | 2 | 6 | 30
head-mounted display | 21 | 0 | 1 | 5 | 27
headset | 23 | 0 | 2 | 2 | 27
immersive environment | 5 | 0 | 2 | 1 | 8
virtual space | 4 | 0 | 0 | 0 | 4
XR | 3 | 0 | 0 | 0 | 3
extended reality | 1 | 0 | 0 | 3 | 4
head-up display | 0 | 0 | 0 | 0 | 0
head-worn display | 2 | 0 | 0 | 0 | 2
AI keywords:
model | 139 | 2 | 28 | 38 | 207
agent | 15 | 0 | 90 | 30 | 135
learning | 137 | 11 | 8 | 58 | 214
predict | 112 | 0 | 4 | 21 | 137
deep | 85 | 2 | 3 | 26 | 116
neural | 94 | 9 | 2 | 18 | 123
classif | 37 | 0 | 2 | 32 | 71
machine learning | 51 | 7 | 3 | 21 | 82
dataset | 44 | 4 | 3 | 7 | 58
estimation | 37 | 0 | 1 | 5 | 43
recognition | 35 | 0 | 0 | 13 | 48
optimi | 29 | 0 | 2 | 14 | 45
computational | 21 | 0 | 2 | 9 | 32
segmentation | 13 | 2 | 0 | 6 | 21
intelligent | 3 | 0 | 13 | 16 | 32
computer vision | 12 | 3 | 0 | 5 | 20
generative | 13 | 4 | 0 | 2 | 19
artificial intelligence | 9 | 2 | 6 | 7 | 24
supervised | 10 | 0 | 0 | 3 | 13
cluster | 3 | 2 | 0 | 4 | 9
bandit | 3 | 0 | 0 | 0 | 3
markov | 2 | 0 | 0 | 3 | 5
natural language processing | 1 | 0 | 1 | 0 | 2
reasoning | 0 | 0 | 0 | 0 | 0
tensor | 1 | 0 | 0 | 0 | 1
Table 17: Distribution of XR and AI keywords for each paper category.
Main Topic Cluster | Count | Papers
Applying XR and AI to an External Problem | 74 |
     Health-related Training Applications | 18 |
          Medical Training Applications | 11 | [26, 27, 49, 97, 157, 169, 253, 313, 318, 330, 394]
          Sport-related Applications | 4 | [138, 217, 312, 389]
          Psychotherapy in XR | 3 | [240, 291, 384]
     Training/Learning Applications | 18 |
          General Training/Learning Applications | 11 | [35, 65, 84, 155, 160, 203, 252, 266, 278, 301, 311]
          Training Applications for Healthcare Workers | 7 | [93, 102, 310, 314, 359, 396, 412]
     Using XR for Simulation Purposes | 13 |
          General | 9 | [20, 22, 29, 109, 233, 327, 355, 369, 422]
          XR as Driving Simulator | 4 | [45, 67, 152, 363]
     Special Applications | 12 | [39, 70, 79, 86, 89, 150, 254, 276, 381, 397, 405, 413]
     Using XR for Visualization | 8 | [1, 145, 206, 218, 319, 324, 354, 378]
     Testing Ecological Validity in XR | 4 | [154, 198, 244, 349]
     Using XR as Interface | 1 | [338]
Table 18: Papers applying XR and AI to an external problem.
Acronym | Publication Venue | Venue Group
ACII | 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII) | Affective Computing
TAFFC | IEEE Transactions on Affective Computing | Affective Computing
IVA | IVA ’20: Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents | Agents
AAMAS | AAMAS ’18: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems | Agents
TNNLS | IEEE Transactions on Neural Networks and Learning Systems | Agents
HAI | HAI ’19: Proceedings of the 7th International Conference on Human-Agent Interaction | AI
SSCI | 2021 IEEE Symposium Series on Computational Intelligence (SSCI) | AI
PAMI | IEEE Transactions on Pattern Analysis and Machine Intelligence | AI
ICASSP | ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) | AI
IJCNN | 2019 International Joint Conference on Neural Networks (IJCNN) | AI
ISRITI | 2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI) | AI
PAA | Pattern Analysis and Applications | AI
ICMLA | 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA) | AI
ICPR | 2020 25th International Conference on Pattern Recognition (ICPR) | AI
IISA | 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA) | AI
JIS | Journal of Intelligent Information Systems | AI
JAI | International Journal of Artificial Intelligence in Education | AI
AIVR | 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR) | AIXR
AIH | Journal of Ambient Intelligence and Humanized Computing | AIXR
TVCG | IEEE Transactions on Visualization and Computer Graphics | Computer Graphics
TOG | ACM Transactions on Graphics | Computer Graphics
ICIP | 2019 IEEE International Conference on Image Processing (ICIP) | Computer Graphics
TIP | IEEE Transactions on Image Processing | Computer Graphics
CGA | IEEE Computer Graphics and Applications | Computer Graphics
RTIP | Journal of Real-Time Image Processing | Computer Graphics
PACMCGIT | Proceedings of the ACM on Computer Graphics and Interactive Techniques | Computer Graphics
VCIP | 2018 IEEE Visual Communications and Image Processing (VCIP) | Computer Graphics
CVPR | 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | Computer Vision
VC | The Visual Computer | Computer Vision
ICCV | 2019 IEEE/CVF International Conference on Computer Vision (ICCV) | Computer Vision
SAP | SAP ’19: ACM Symposium on Applied Perception 2019 | Eye Tracking and Perception
ETRA | ETRA ’21 Full Papers: ACM Symposium on Eye Tracking Research and Applications | Eye Tracking and Perception
TAP | ACM Transactions on Applied Perception | Eye Tracking and Perception
Table 19: Publication venues and venue groups of included papers. Part I.
Acronym | Publication Venue | Venue Group
CHI | CHI ’17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems | HCI
UIST | UIST ’17: Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology | HCI
AH | AH2019: Proceedings of the 10th Augmented Human International Conference 2019 | HCI
CHI PLAY | CHI PLAY ’17: Proceedings of the Annual Symposium on Computer-Human Interaction in Play | HCI
SUI | SUI ’20: Symposium on Spatial User Interaction | HCI
TEI | TEI ’20: Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction | HCI
MobileHCI | MobileHCI ’20: 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services | HCI
DIS | DIS ’21: Designing Interactive Systems Conference 2021 | HCI
JV | Journal of Visualization | Visualization
VRST | 2021 IEEE Virtual Reality and 3D User Interfaces (VR) | XR
ISMAR | 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | XR
VR | Virtual Reality | XR
SIGGRAPH | SVR ’21: Symposium on Virtual and Augmented Reality | XR
SVR | 2019 21st Symposium on Virtual and Augmented Reality (SVR) | XR
IC3D | 2021 International Conference on 3D Immersion (IC3D) | XR
GVWSA | 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games) | XR
VRCAI | VRCAI ’19: The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry | XR
Table 20: Publication venues and venue groups of included papers. Part II.

Footnotes

1. IEEE International Conference on Artificial Intelligence and Virtual Reality: https://ieeexplore.ieee.org/xpl/conhome/1830004/all-proceedings, last accessed: August 18, 2022
2. ACM International Conference on Artificial Intelligence and Virtual Reality: https://dl.acm.org/conference/aivr, last accessed: August 18, 2022
3. ACM Computing Classification System: https://dl.acm.org/ccs, last accessed: September 10, 2022
4. International Symposium on Mixed and Augmented Reality: https://ieeexplore.ieee.org/xpl/conhome/9583730/proceeding, last accessed: August 28, 2022
5. ACM Symposium on Virtual Reality Software and Technology: https://dl.acm.org/doi/proceedings/10.1145/3489849, last accessed: August 28, 2022
6. Throughout the following sections, we refer to the six authors of this paper as A–F to indicate which authors took part in the search, data extraction, and analysis.
7. International Conference on Machine Learning: https://proceedings.mlr.press/v139/, last accessed: September 10, 2022
8. Advances in Neural Information Processing Systems: https://papers.nips.cc/paper/2021, last accessed: August 28, 2022
9. Web of Science: https://www.webofscience.com/, last accessed: July 15, 2022
10. Scopus: https://www.scopus.com/, last accessed: July 15, 2022
11. IEEE Transactions on Image Processing: https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=83, last accessed: September 10, 2022
12. CHI 2021 Proceedings: https://chi2021.acm.org/proceedings, last accessed: September 10, 2022
13. ACM DL: https://dl.acm.org/, last accessed: July 15, 2022
14. IEEE Xplore: https://ieeexplore.ieee.org/, last accessed: July 15, 2022
15. ScienceDirect: https://www.sciencedirect.com/, last accessed: July 15, 2022
16. Proceedings of Machine Learning Research: https://proceedings.mlr.press/, last accessed: July 15, 2022
17. NeurIPS Proceedings: https://papers.nips.cc/, last accessed: July 15, 2022
18. These cases should have been excluded by the search engines, but some still appeared in the search results.
19. Due to rounding, the percentages add up to 99.9% rather than 100%.
20. From here on, percentages are given in relation to the 237 papers that are part of the typology; 3.3% of these papers are categorized as other.
21. Resonance Audio: https://resonance-audio.github.io/resonance-audio/, last accessed: September 15, 2022
22. Note that the 11 HCI papers about applying XR and AI to an external use case are not discussed here.
23. PyTorch: https://pytorch.org/, last accessed: September 13, 2022
24. Keras: https://keras.io/, last accessed: September 13, 2022
25. TensorFlow: https://www.tensorflow.org/, last accessed: September 13, 2022
26. Scikit-learn: https://scikit-learn.org/, last accessed: September 13, 2022
27. NeurIPS Ethical Guidelines: https://nips.cc/public/EthicsGuidelines, last accessed: September 13, 2022

Supplementary Material

Supplemental Materials (3544548.3581072-supplemental-materials.zip)
MP4 File (3544548.3581072-talk-video.mp4)
Pre-recorded Video Presentation

References

[1]
✱ Lotfi Abdi and Aref Meddeb. 2018. In-vehicle augmented reality system to provide driving safety information. Journal of Visualization 21, 1 (Feb. 2018), 163–184. https://doi.org/10.1007/s12650-017-0442-6
[2]
✱ Philipp Agethen, Viswa Subramanian Sekar, Felix Gaisbauer, Thies Pfeiffer, Michael Otto, and Enrico Rukzio. 2018. Behavior Analysis of Human Locomotion in the Real World and Virtual Reality for the Manufacturing Industry. ACM Trans. Appl. Percept. 15, 3, Article 20 (jul 2018), 19 pages. https://doi.org/10.1145/3230648
[3]
Eirikur Agustsson and Radu Timofte. 2017. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. IEEE, New York, NY, USA.
[4]
✱ Imtiaj Ahmed, Ville J. Harjunen, Giulio Jacucci, Niklas Ravaja, Tuukka Ruotsalo, and Michiel Spape. 2020. Touching virtual humans: Haptic responses reveal the emotional impact of affective agents. IEEE Transactions on Affective Computing (2020), 1–1. https://doi.org/10.1109/TAFFC.2020.3038137
[5]
✱ Ashwin Ajit, Natasha Kholgade Banerjee, and Sean Banerjee. 2019. Combining Pairwise Feature Matches from Device Trajectories for Biometric Authentication in Virtual Reality Environments. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 9–97. https://doi.org/10.1109/AIVR46125.2019.00012
[6]
✱ Kaan Akşit, Praneeth Chakravarthula, Kishore Rathinavel, Youngmo Jeong, Rachel Albert, Henry Fuchs, and David Luebke. 2019. Manufacturing Application-Driven Foveated Near-Eye Displays. IEEE Transactions on Visualization and Computer Graphics 25, 5(2019), 1928–1939. https://doi.org/10.1109/TVCG.2019.2898781
[7]
✱ A. Deniz Aladagli, Erhan Ekmekcioglu, Dmitri Jarnikov, and Ahmet Kondoz. 2017. Predicting head trajectories in 360° virtual reality videos. In 2017 International Conference on 3D Immersion (IC3D), Vol. 1. IEEE, New York, NY, USA, 1–6. https://doi.org/10.1109/IC3D.2017.8251913
[8]
✱ Rawan Alghofaili, Yasuhito Sawahata, Haikun Huang, Hsueh-Cheng Wang, Takaaki Shiratori, and Lap-Fai Yu. 2019. Lost in Style: Gaze-Driven Adaptive Aid for VR Navigation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300578
[9]
✱ Rawan Alghofaili, Michael S Solah, Haikun Huang, Yasuhito Sawahata, Marc Pomplun, and Lap-Fai Yu. 2019. Optimizing Visual Element Placement via Visual Attention Analysis. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 464–473. https://doi.org/10.1109/VR.2019.8797816
[10]
Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, and Gerard Pons-Moll. 2018. Video Based Reconstruction of 3D People Models. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vol. 1. IEEE, New York, NY, USA, 8387–8397. https://doi.org/10.1109/CVPR.2018.00875
[11]
✱ Stevão A. Andrade, Fatima L. S. Nunes, and Marcio E. Delamaro. 2019. Towards the Systematic Testing of Virtual Reality Programs. In 2019 21st Symposium on Virtual and Augmented Reality (SVR), Vol. 1. IEEE, New York, NY, USA, 196–205. https://doi.org/10.1109/SVR.2019.00044
[12]
✱ Sean Andrist, Michael Gleicher, and Bilge Mutlu. 2017. Looking Coordinated: Bidirectional Gaze Mechanisms for Collaborative Interaction with Virtual Characters. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 2571–2582. https://doi.org/10.1145/3025453.3026033
[13]
Deepali Aneja, Alex Colburn, Gary Faigin, Linda Shapiro, and Barbara Mones. 2016. Modeling Stylized Character Expressions via Deep Learning. In Asian Conference on Computer Vision. Springer, 136–153.
[14]
Ryan Antel, Samira Abbasgholizadeh-Rahimi, Elena Guadagno, Jason M. Harley, and Dan Poenaru. 2022. The use of artificial intelligence and virtual reality in doctor-patient risk communication: A scoping review. Patient Education and Counseling (2022). https://doi.org/10.1016/j.pec.2022.06.006
[15]
✱ Abdullah Al Arafat, Zhishan Guo, and Amro Awad. 2021. VR-Spy: A Side-Channel Attack on Virtual Key-Logging in VR Headsets. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 564–572. https://doi.org/10.1109/VR50410.2021.00081
[16]
✱ Kazuyuki Arimatsu and Hideki Mori. 2020. Evaluation of Machine Learning Techniques for Hand Pose Estimation on Handheld Device with Proximity Sensor. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376712
[17]
Hilary Arksey and Lisa O’Malley. 2005. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology 8, 1 (2005), 19–32. https://doi.org/10.1080/1364557032000119616
[18]
I. Armeni, A. Sax, A. R. Zamir, and S. Savarese. 2017. Joint 2D-3D-Semantic Data for Indoor Scene Understanding. arXiv e-prints (Feb. 2017). arXiv:1702.01105 [cs.CV]
[19]
Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 2016. 3D Semantic Parsing of Large-Scale Indoor Spaces. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1. IEEE, New York, NY, USA, 1534–1543. https://doi.org/10.1109/CVPR.2016.170
[20]
✱ Alexander Arntz, Agostino Di Dia, Tim Riebner, and Sabrina C. Eimler. 2021. Machine Learning Concepts for Dual-Arm Robots within Virtual Reality. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 168–172. https://doi.org/10.1109/AIVR52153.2021.00038
[21]
Edoardo Aromataris and Zachary Munn. 2020. JBI Manual for Evidence Synthesis. https://doi.org/10.46658/JBIMES-20-01
[22]
✱ Doris Aschenbrenner, Danielle van Tol, Zoltan Rusak, and Claudia Werker. 2020. Using Virtual Reality for scenario-based Responsible Research and Innovation approach for Human Robot Co-production. In 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 146–150. https://doi.org/10.1109/AIVR50618.2020.00033
[23]
Ronald T. Azuma. 1997. A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments 6, 4 (08 1997), 355–385. https://doi.org/10.1162/pres.1997.6.4.355
[24]
✱ Timur Bagautdinov, Chenglei Wu, Tomas Simon, Fabián Prada, Takaaki Shiratori, Shih-En Wei, Weipeng Xu, Yaser Sheikh, and Jason Saragih. 2021. Driving-Signal Aware Full-Body Avatars. ACM Trans. Graph. 40, 4, Article 143 (jul 2021), 17 pages. https://doi.org/10.1145/3450626.3459850
[25]
✱ Suprith Balasubramanian and Rajiv Soundararajan. 2019. Prediction of Discomfort due to Egomotion in Immersive Videos for Virtual Reality. In 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 169–177. https://doi.org/10.1109/ISMAR.2019.000-7
[26]
✱ Catherine Ball, Eric Novotny, Sun Joo Ahn, Lindsay Hahn, Michael D. Schmidt, Stephen L. Rathbun, and Kyle Johnsen. 2022. Scaling the Virtual Fitness Buddy Ecosystem as a School-Based Physical Activity Intervention for Children. IEEE Computer Graphics and Applications 42 (2022), 105–115. https://doi.org/10.1109/MCG.2021.3130555
[27]
✱ Giuliana Barrios Dell’Olio and Misha Sra. 2021. FaraPy: An Augmented Reality Feedback System for Facial Paralysis Using Action Unit Intensity Estimation. In The 34th Annual ACM Symposium on User Interface Software and Technology (Virtual Event, USA) (UIST ’21). Association for Computing Machinery, New York, NY, USA, 1027–1038. https://doi.org/10.1145/3472749.3474803
[28]
✱ Martin Bellgardt, Christian Scheiderer, and Torsten W. Kuhlen. 2020. An Immersive Node-Link Visualization of Artificial Neural Networks for Machine Learning Experts. In 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 33–36. https://doi.org/10.1109/AIVR50618.2020.00015
[29]
✱ Aniket Bera, Tanmay Randhavane, Emily Kubin, Husam Shaik, Kurt Gray, and Dinesh Manocha. 2018. Data-Driven Modeling of Group Entitativity in Virtual Environments. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology (Tokyo, Japan) (VRST ’18). Association for Computing Machinery, New York, NY, USA, Article 31, 10 pages. https://doi.org/10.1145/3281505.3281524
[30]
Kirsten Bergmann, Friederike Eyssel, and Stefan Kopp. 2012. A Second Chance to Make a First Impression? How Appearance and Nonverbal Behavior Affect Perceived Warmth and Competence of Virtual Agents over Time. In Intelligent Virtual Agents, Yukiko Nakano, Michael Neff, Ana Paiva, and Marilyn Walker (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 126–138.
[31]
Thierry Bertin-Mahieux, Daniel P. W. Ellis, Brian Whitman, and Paul Lamere. 2011. The Million Song Dataset. In Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011).
[32]
✱ Andrew Best, Sahil Narang, and Dinesh Manocha. 2020. SPA: Verbal Interactions between Agents and Avatars in Shared Virtual Environments using Propositional Planning. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 117–126. https://doi.org/10.1109/VR46266.2020.00030
[33]
✱ Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan, Aniket Bera, and Dinesh Manocha. 2021. Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 1–10. https://doi.org/10.1109/VR50410.2021.00037
[34]
✱ Uttaran Bhattacharya, Nicholas Rewkowski, Pooja Guhan, Niall L. Williams, Trisha Mittal, Aniket Bera, and Dinesh Manocha. 2020. Generating Emotive Gaits for Virtual Agents Using Affect-Based Autoregression. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 24–35. https://doi.org/10.1109/ISMAR50242.2020.00020
[35]
✱ Manish Bhattarai, Aura Rose Jensen-Curtis, and Manel Martínez-Ramón. 2020. An embedded deep learning system for augmented reality in firefighting applications. In 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), Vol. 1. IEEE, New York, NY, USA, 1224–1230. https://doi.org/10.1109/ICMLA51294.2020.00193
[36]
Mark Billinghurst. 2021. Grand Challenges for Augmented Reality. Frontiers in Virtual Reality 2 (2021). https://doi.org/10.3389/frvir.2021.578080
[37]
Mark Billinghurst and Michael Nebeling. 2021. Rapid Prototyping of XR Experiences. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI EA ’21). Association for Computing Machinery, New York, NY, USA, Article 132, 3 pages. https://doi.org/10.1145/3411763.3445002
[38]
✱ Andrea Bönsch, Alexander R. Bluhm, Jonathan Ehret, and Torsten W. Kuhlen. 2020. Inferring a User’s Intent on Joining or Passing by Social Groups. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (Virtual Event, Scotland, UK) (IVA ’20). Association for Computing Machinery, New York, NY, USA, Article 10, 8 pages. https://doi.org/10.1145/3383652.3423862
[39]
✱ Andrea Bönsch, David Hashem, Jonathan Ehret, and Torsten W. Kuhlen. 2021. Being Guided or Having Exploratory Freedom: User Preferences of a Virtual Agent’s Behavior in a Museum. In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents (Virtual Event, Japan) (IVA ’21). Association for Computing Machinery, New York, NY, USA, 33–40. https://doi.org/10.1145/3472306.3478339
[40]
✱ Andrea Bönsch, Sina Radke, Jonathan Ehret, Ute Habel, and Torsten W. Kuhlen. 2020. The Impact of a Virtual Agent’s Non-Verbal Emotional Expression on a User’s Personal Space Preferences. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (Virtual Event, Scotland, UK) (IVA ’20). Association for Computing Machinery, New York, NY, USA, Article 12, 8 pages. https://doi.org/10.1145/3383652.3423888
[41]
✱ Andrea Bönsch, Sina Radke, Heiko Overath, Laura M. Asché, Jonathan Wendt, Tom Vierjahn, Ute Habel, and Torsten W. Kuhlen. 2018. Social VR: How Personal Space is Affected by Virtual Agents’ Emotions. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 199–206. https://doi.org/10.1109/VR.2018.8446480
[42]
Kim Bosman, Tibor Bosse, and Daniel Formolo. 2019. Virtual Agents for Professional Social Skills Training: An Overview of the State-of-the-Art. In Intelligent Technologies for Interactive Entertainment, Paulo Cortez, Luís Magalhães, Pedro Branco, Carlos Filipe Portela, and Telmo Adão (Eds.). Springer International Publishing, Cham, 75–84.
[43]
✱ Tibor Bosse, Tilo Hartmann, Romy A.M. Blankendaal, Nienke Dokter, Marco Otte, and Linford Goedschalk. 2018. Virtually Bad: A Study on Virtual Agents That Physically Threaten Human Beings. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (Stockholm, Sweden) (AAMAS ’18). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 1258–1266.
[44]
✱ Fabien Boucaud, Catherine Pelachaud, and Indira Thouvenin. 2021. Decision Model for a Virtual Agent that can Touch and be Touched. In 20th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2021). HAL open science, London (virtual), United Kingdom. https://hal.archives-ouvertes.fr/hal-03428918
[45]
✱ Efe Bozkir, David Geisler, and Enkelejda Kasneci. 2019. Person Independent, Privacy Preserving, and Real Time Assessment of Cognitive Load using Eye Tracking in a Virtual Reality Setup. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 1834–1837. https://doi.org/10.1109/VR.2019.8797758
[46]
✱ Gianni Bremer, Niklas Stein, and Markus Lappe. 2021. Predicting Future Position From Natural Walking and Eye Movements with Machine Learning. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 19–28. https://doi.org/10.1109/AIVR52153.2021.00013
[47]
✱ Hugo Brument, Gerd Bruder, Maud Marchal, Anne Hélène Olivier, and Ferran Argelaguet. 2021. Understanding, Modeling and Simulating Unintended Positional Drift during Repetitive Steering Navigation Tasks in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 27, 11(2021), 4300–4310. https://doi.org/10.1109/TVCG.2021.3106504
[48]
✱ Lauren E. Buck, Sohee Park, and Bobby Bodenheimer. 2020. Determining Peripersonal Space Boundaries and Their Plasticity in Relation to Object and Agent Characteristics in an Immersive Virtual Environment. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 332–342. https://doi.org/10.1109/VR46266.2020.00053
[49]
✱ Domenico Buongiorno, Cristian Camardella, Giacomo Donato Cascarano, Luis Pelaez Murciego, Michele Barsotti, Irio De Feudis, Antonio Frisoli, and Vitoantonio Bevilacqua. 2019. An undercomplete autoencoder to extract muscle synergies for motor intention detection. In 2019 International Joint Conference on Neural Networks (IJCNN), Vol. 1. IEEE, New York, NY, USA, 1–8. https://doi.org/10.1109/IJCNN.2019.8851975
[50]
Kelly Caine. 2016. Local Standards for Sample Size at CHI. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’16). Association for Computing Machinery, New York, NY, USA, 981–992. https://doi.org/10.1145/2858036.2858498
[51]
Z. Cao, G. Hidalgo Martinez, T. Simon, S. Wei, and Y. A. Sheikh. 2019. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence (2019).
[52]
✱ Young-Woon Cha, Husam Shaik, Qian Zhang, Fan Feng, Andrei State, Adrian Ilie, and Henry Fuchs. 2021. Mobile. Egocentric Human Body Motion Reconstruction Using Only Eyeglasses-mounted Cameras and a Few Body-worn Inertial Sensors. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 616–625. https://doi.org/10.1109/VR50410.2021.00087
[53]
✱ Jacob Chakareski. 2019. UAV-IoT for Next Generation Virtual Reality. IEEE Transactions on Image Processing 28, 12 (2019), 5977–5990. https://doi.org/10.1109/TIP.2019.2921869
[54]
✱ Praneeth Chakravarthula, Ethan Tseng, Tarun Srivastava, Henry Fuchs, and Felix Heide. 2020. Learned Hardware-in-the-Loop Phase Retrieval for Holographic near-Eye Displays. ACM Trans. Graph. 39, 6, Article 186 (nov 2020), 18 pages. https://doi.org/10.1145/3414685.3417846
[55]
Liwei Chan, Yi-Chi Liao, George B Mo, John J Dudley, Chun-Lien Cheng, Per Ola Kristensson, and Antti Oulasvirta. 2022. Investigating Positive and Negative Qualities of Human-in-the-Loop Optimization for Designing Interaction Techniques. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 112, 14 pages. https://doi.org/10.1145/3491102.3501850
[56]
✱ Minwook Chang, Youngwon Ryan Kim, and Gerard Jounghyun Kim. 2018. A Perceptual Evaluation of Generative Adversarial Network Real-Time Synthesized Drum Sounds in a Virtual Environment. In 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 144–148. https://doi.org/10.1109/AIVR.2018.00030
[57]
✱ Taizhou Chen, Lantian Xu, Xianshan Xu, and Kening Zhu. 2021. GestOnHMD: Enabling Gesture-based Interaction on Low-cost VR Head-Mounted Display. IEEE Transactions on Visualization and Computer Graphics 27, 5(2021), 2597–2607. https://doi.org/10.1109/TVCG.2021.3067689
[58]
✱ Ze-Yin Chen, Yi-Jun Li, Miao Wang, Frank Steinicke, and Qinping Zhao. 2021. A Reinforcement Learning Approach to Redirected Walking with Passive Haptic Feedback. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 184–192. https://doi.org/10.1109/ISMAR52148.2021.00033
[59]
Bowen Cheng, Maxwell D Collins, Yukun Zhu, Ting Liu, Thomas S Huang, Hartwig Adam, and Liang-Chieh Chen. 2020. Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation. In CVPR. IEEE, New York, NY, USA.
[60]
✱ Yifei Cheng, Yukang Yan, Xin Yi, Yuanchun Shi, and David Lindlbauer. 2021. SemanticAdapt: Optimization-Based Adaptation of Mixed Reality Layouts Leveraging Virtual-Physical Semantic Connections. In The 34th Annual ACM Symposium on User Interface Software and Technology (Virtual Event, USA) (UIST ’21). Association for Computing Machinery, New York, NY, USA, 282–297. https://doi.org/10.1145/3472749.3474750
[61]
✱ Venkata Rami Reddy Chirra, Srinivasulu Reddy Uyyala, and Venkata Krishna Kishore Kolli. 2021. Virtual facial expression recognition using deep CNN with ensemble learning. Journal of Ambient Intelligence and Humanized Computing 12, 12 (Dec. 2021), 10581–10599. https://doi.org/10.1007/s12652-020-02866-3
[62]
✱ Yong-Hun Cho, Dong-Yong Lee, and In-Kwon Lee. 2018. Path Prediction Using LSTM Network for Redirected Walking. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 527–528. https://doi.org/10.1109/VR.2018.8446442
[63]
Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. 2018. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, New York, NY, USA.
[64]
Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. 2020. StarGAN v2: Diverse Image Synthesis for Multiple Domains. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, New York, NY, USA.
[65]
✱ Athanasios Christopoulos, Marc Conrad, and Mitul Shukla. 2019. What Does the Pedagogical Agent Say?. In 2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA), Vol. 1. IEEE, New York, NY, USA, 1–7. https://doi.org/10.1109/IISA.2019.8900767
[66]
✱ Aldrich Clarence, Jarrod Knibbe, Maxime Cordeil, and Michael Wybrow. 2021. Unscripted Retargeting: Reach Prediction for Haptic Retargeting in Virtual Reality. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 150–159. https://doi.org/10.1109/VR50410.2021.00036
[67]
✱ Mark Colley, Benjamin Eder, Jan Ole Rixen, and Enrico Rukzio. 2021. Effects of Semantic Segmentation Visualization on Trust, Situation Awareness, and Cognitive Load in Highly Automated Vehicles. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 155, 11 pages. https://doi.org/10.1145/3411764.3445351
[68]
Simon Cooper, Robyn Cant, Michelle Kelly, Tracy Levett-Jones, Lisa McKenna, Philippa Seaton, and Fiona Bogossian. 2021. An Evidence-Based Checklist for Improving Scoping Review Quality. Clinical Nursing Research 30, 3 (March 2021), 230–240. https://doi.org/10.1177/1054773819846024
[69]
Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. 2016. The Cityscapes Dataset for Semantic Urban Scene Understanding. CoRR abs/1604.01685 (2016). arXiv:1604.01685 http://arxiv.org/abs/1604.01685
[70]
✱ Edmanuel Cruz, Sergio Orts-Escolano, Francisco Gomez-Donoso, Carlos Rizo, Jose Carlos Rangel, Higinio Mora, and Miguel Cazorla. 2019. An augmented reality application for improving shopping experience in large retail stores. Virtual Reality 23, 3 (Sept. 2019), 281–291. https://doi.org/10.1007/s10055-018-0338-3
[71]
✱ János Czentye, Balázs Péter Gerő, and Balázs Sonkoly. 2021. Managing Localization Delay for Cloud-assisted AR Applications Via LSTM-driven Overload Control. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 92–101. https://doi.org/10.1109/AIVR52153.2021.00023
[72]
✱ Tor-Salve Dalsgaard, Jarrod Knibbe, and Joanna Bergström. 2021. Modeling Pointing for 3D Target Selection in VR. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (Osaka, Japan) (VRST ’21). Association for Computing Machinery, New York, NY, USA, Article 42, 10 pages. https://doi.org/10.1145/3489849.3489853
[73]
Tor-Salve Dalsgaard, Joanna Bergström, Marianna Obrist, and Kasper Hornbæk. 2022. A user-derived mapping for mid-air haptic experiences. International Journal of Human-Computer Studies 168 (2022), 102920. https://doi.org/10.1016/j.ijhcs.2022.102920
[74]
✱ Ferdinand de Coninck, Zerrin Yumak, Guntur Sandino, and Remco Veltkamp. 2019. Non-Verbal Behavior Generation for Virtual Characters in Group Conversations. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 41–418. https://doi.org/10.1109/AIVR46125.2019.00016
[75]
Gilles Degottex, John Kane, Thomas Drugman, Tuomo Raitio, and Stefan Scherer. 2014. COVAREP — A collaborative voice analysis repository for speech technologies. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. 1. IEEE, New York, NY, USA, 960–964. https://doi.org/10.1109/ICASSP.2014.6853739
[76]
✱ Javier Dehesa, Andrew Vidler, Christof Lutteroth, and Julian Padget. 2020. Touché: Data-Driven Interactive Sword Fighting in Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376714
[77]
✱ Victor Delvigne, Hazem Wannous, Jean-Philippe Vandeborre, Laurence Ris, and Thierry Dutoit. 2020. Attention Estimation in Virtual Reality with EEG based Image Regression. In 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 10–16. https://doi.org/10.1109/AIVR50618.2020.00012
[78]
David DeVault, Ron Artstein, Grace Benn, Teresa Dey, Ed Fast, Alesia Gainer, Kallirroi Georgila, Jon Gratch, Arno Hartholt, Margaux Lhommet, Gale Lucas, Stacy Marsella, Fabrizio Morbini, Angela Nazarian, Stefan Scherer, Giota Stratou, Apar Suri, David Traum, Rachel Wood, Yuyu Xu, Albert Rizzo, and Louis-Philippe Morency. 2014. SimSensei Kiosk: A Virtual Human Interviewer for Healthcare Decision Support. In Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems(Paris, France) (AAMAS ’14). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 1061–1068.
[79]
✱ Patrick Dickinson, Kathrin Gerling, Kieran Hicks, John Murray, John Shearer, and Jacob Greenwood. 2019. Virtual reality crowd simulation: effects of agent density on user experience and behaviour. Virtual Reality 23, 1 (March 2019), 19–32. https://doi.org/10.1007/s10055-018-0365-0
[80]
✱ Zhi-Chao Dong, Xiao-Ming Fu, Chi Zhang, Kang Wu, and Ligang Liu. 2017. Smooth Assembled Mappings for Large-Scale Real Walking. ACM Trans. Graph. 36, 6, Article 211 (nov 2017), 13 pages. https://doi.org/10.1145/3130800.3130893
[81]
✱ Panagiotis Drakopoulos, George-alex Koulieris, and Katerina Mania. 2021. Eye Tracking Interaction on Unmodified Mobile VR Headsets Using the Selfie Camera. ACM Trans. Appl. Percept. 18, 3, Article 11 (may 2021), 20 pages. https://doi.org/10.1145/3456875
[82]
✱ Minghan Du, Hui Cui, Yuan Wang, and Henry Duh. 2021. Learning from Deep Stereoscopic Attention for Simulator Sickness Prediction. IEEE Transactions on Visualization and Computer Graphics 1 (2021), 1–1. https://doi.org/10.1109/TVCG.2021.3115901
[83]
✱ Tinglin Duan, Parinya Punpongsanon, Daisuke Iwai, and Kosuke Sato. 2018. FlyingHand: Extending the Range of Haptic Feedback on Virtual Hand Using Drone-Based Object Recognition. In SIGGRAPH Asia 2018 Technical Briefs (Tokyo, Japan) (SA ’18). Association for Computing Machinery, New York, NY, USA, Article 28, 4 pages. https://doi.org/10.1145/3283254.3283258
[84]
✱ Tinglin Duan, Parinya Punpongsanon, Sheng Jia, Daisuke Iwai, Kosuke Sato, and Konstantinos N. Plataniotis. 2019. Remote Environment Exploration with Drone Agent and Haptic Force Feedback. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 167–1673. https://doi.org/10.1109/AIVR46125.2019.00034
[85]
John J. Dudley, Jason T. Jacques, and Per Ola Kristensson. 2019. Crowdsourcing Interface Feature Design with Bayesian Optimization. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300482
[86]
✱ Colm O Fearghail, Sebastian Knorr, and Aljosa Smolic. 2019. Analysis of Intended Viewing Area vs Estimated Saliency on Narrative Plot Structures in VR Film. In 2019 International Conference on 3D Immersion (IC3D), Vol. 1. IEEE, New York, NY, USA, 1–8. https://doi.org/10.1109/IC3D48390.2019.8975990
[87]
✱ Tobias Feigl, Christopher Mutschler, and Michael Philippsen. 2018. Head-to-Body-Pose Classification in No-Pose VR Tracking Systems. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 1–2. https://doi.org/10.1109/VR.2018.8446495
[88]
✱ Tobias Feigl, Daniel Roth, Stefan Gradl, Markus Wirth, Marc Erich Latoschik, Bjoern M. Eskofier, Michael Philippsen, and Christopher Mutschler. 2019. Sick Moves! Motion Parameters as Indicators of Simulator Sickness. IEEE Transactions on Visualization and Computer Graphics 25, 11(2019), 3146–3157. https://doi.org/10.1109/TVCG.2019.2932224
[89]
✱ Benjamin Felbrich, Gwyllim Jahn, Cameron Newnham, and Achim Menges. 2018. Self-Organizing Maps for Intuitive Gesture-Based Geometric Modelling in Augmented Reality. In 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 61–67. https://doi.org/10.1109/AIVR.2018.00016
[90]
✱ Xianglong Feng, Zeyang Bao, and Sheng Wei. 2019. Exploring CNN-Based Viewport Prediction for Live Virtual Reality Streaming. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 183–1833. https://doi.org/10.1109/AIVR46125.2019.00038
[91]
✱ Xianglong Feng, Yao Liu, and Sheng Wei. 2020. LiveDeep: Online Viewport Prediction for Live Virtual Reality Streaming Using Lifelong Deep Learning. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 800–808. https://doi.org/10.1109/VR46266.2020.00104
[92]
David B. Fogel. 2022. Defining Artificial Intelligence. John Wiley & Sons, Ltd, Chapter 5, 91–120. https://doi.org/10.1002/9781119815075.ch7
[93]
✱ Rita Francese, Maria Frasca, Michele Risi, and Genoveffa Tortora. 2021. A mobile augmented reality application for supporting real-time skin lesion analysis based on deep learning. Journal of Real-Time Image Processing 18, 4 (Aug. 2021), 1247–1259. https://doi.org/10.1007/s11554-021-01109-8
[94]
✱ Valerio Franchi and Evridiki Ntagiou. 2021. Augmentation of a Virtual Reality Environment Using Generative Adversarial Networks. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 219–223. https://doi.org/10.1109/AIVR52153.2021.00050
[95]
✱ Timothée Fréville, Charles Hamesse, Benoit Pairet, Rihab Lahouli, and Rob Haelterman. 2021. From Floor Plans to Virtual Reality. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 129–133. https://doi.org/10.1109/AIVR52153.2021.00030
[96]
✱ Eisuke Fujinawa, Shigeo Yoshida, Yuki Koyama, Takuji Narumi, Tomohiro Tanikawa, and Michitaka Hirose. 2017. Computational Design of Hand-Held VR Controllers Using Haptic Shape Illusion. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (Gothenburg, Sweden) (VRST ’17). Association for Computing Machinery, New York, NY, USA, Article 28, 10 pages. https://doi.org/10.1145/3139131.3139160
[97]
✱ Thomas L. Fuller and Amir Sadovnik. 2017. Image level color classification for colorblind assistance. In 2017 IEEE International Conference on Image Processing (ICIP), Vol. 1. IEEE, New York, NY, USA, 1985–1989. https://doi.org/10.1109/ICIP.2017.8296629
[98]
✱ Nisal Menuka Gamage, Deepana Ishtaweera, Martin Weigel, and Anusha Withana. 2021. So Predictable! Continuous 3D Hand Trajectory Prediction in Virtual Reality. In The 34th Annual ACM Symposium on User Interface Software and Technology (Virtual Event, USA) (UIST ’21). Association for Computing Machinery, New York, NY, USA, 332–343. https://doi.org/10.1145/3472749.3474753
[99]
Narayan H. Gandedkar, Matthew T. Wong, and M. Ali Darendeliler. 2021. Role of virtual reality (VR), augmented reality (AR) and artificial intelligence (AI) in tertiary education and research of orthodontics: An insight. Seminars in Orthodontics 27, 2 (2021), 69–77. https://doi.org/10.1053/j.sodo.2021.05.003
[100]
✱ Peizhong Gao, Keigo Matsumoto, Takuji Narumi, and Michitaka Hirose. 2020. Visual-Auditory Redirection: Multimodal Integration of Incongruent Visual and Auditory Cues for Redirected Walking. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 639–648. https://doi.org/10.1109/ISMAR50242.2020.00092
[101]
Marc-André Gardner, Kalyan Sunkavalli, Ersin Yumer, Xiaohui Shen, Emiliano Gambaretto, Christian Gagné, and Jean-François Lalonde. 2017. Learning to Predict Indoor Illumination from a Single Image. CoRR abs/1704.00090 (2017). arXiv:1704.00090 http://arxiv.org/abs/1704.00090
[102]
✱ Pu Ge, Junjun Pan, Fanghong Li, Weiyun Shi, and Hong Qin. 2019. Real-Time Tracking of Corneal Contour in Dalk Surgical Navigation Using Deep Neural Networks. In 2019 IEEE International Conference on Image Processing (ICIP), Vol. 1. IEEE, New York, NY, USA, 1356–1360. https://doi.org/10.1109/ICIP.2019.8803779
[103]
✱ Christoph Gebhardt, Brian Hecox, Bas van Opheusden, Daniel Wigdor, James Hillis, Otmar Hilliges, and Hrvoje Benko. 2019. Learning Cooperative Personalized Policies from Gaze Data. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (New Orleans, LA, USA) (UIST ’19). Association for Computing Machinery, New York, NY, USA, 197–208. https://doi.org/10.1145/3332165.3347933
[104]
Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. 2013. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research 32, 11 (2013), 1231–1237.
[105]
✱ Daniele Giunchi, Stuart James, and Anthony Steed. 2018. 3D Sketching for Interactive Model Retrieval in Virtual Reality. In Proceedings of the Joint Symposium on Computational Aesthetics and Sketch-Based Interfaces and Modeling and Non-Photorealistic Animation and Rendering (Victoria, British Columbia, Canada) (Expressive ’18). Association for Computing Machinery, New York, NY, USA, Article 1, 12 pages. https://doi.org/10.1145/3229147.3229166
[106]
✱ Gilzamir Gomes, Creto A. Vidal, Joaquim B. Cavalcante Neto, and Yuri L. B. Nogueira. 2019. An Emotional Virtual Character: A Deep Learning Approach with Reinforcement Learning. In 2019 21st Symposium on Virtual and Augmented Reality (SVR), Vol. 1. IEEE, New York, NY, USA, 223–231. https://doi.org/10.1109/SVR.2019.00047
[107]
✱ Ester González-Sosa, Pablo Perez-Garcia, Diego Gonzalez-Morin, and Alvaro Villegas. 2021. Subjective Evaluation of Egocentric Human Segmentation for Mixed Reality. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 232–236. https://doi.org/10.1109/AIVR52153.2021.00053
[108]
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. Generative Adversarial Networks. Commun. ACM 63, 11 (oct 2020), 139–144. https://doi.org/10.1145/3422622
[109]
✱ Jeremy Raboff Gordon, Max T. Curran, John Chuang, and Coye Cheshire. 2021. Covert Embodied Choice: Decision-Making and the Limits of Privacy Under Biometric Surveillance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 551, 12 pages. https://doi.org/10.1145/3411764.3445309
[110]
✱ Lysa Gramoli, Jérémy Lacoche, Anthony Foulonneau, Valérie Gouranton, and Bruno Arnaldi. 2021. Needs Model for an Autonomous Agent during Long-term Simulations. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 134–138. https://doi.org/10.1109/AIVR52153.2021.00031
[111]
Jan Gugenheimer, Christian Mai, Mark McGill, Julie Williamson, Frank Steinicke, and Ken Perlin. 2019. Challenges Using Head-Mounted Displays in Shared and Social Spaces. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI EA ’19). Association for Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/3290607.3299028
[112]
Jan Gugenheimer, Wen-Jie Tseng, Abraham Hani Mhaidli, Jan Ole Rixen, Mark McGill, Michael Nebeling, Mohamed Khamis, Florian Schaub, and Sanchari Das. 2022. Novel Challenges of Safety, Security and Privacy in Extended Reality. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, Article 108, 5 pages. https://doi.org/10.1145/3491101.3503741
[113]
✱ Manuel Guimarães, Rui Prada, Pedro A. Santos, João Dias, Arnav Jhala, and Samuel Mascarenhas. 2020. The Impact of Virtual Reality in the Social Presence of a Virtual Agent. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (Virtual Event, Scotland, UK) (IVA ’20). Association for Computing Machinery, New York, NY, USA, Article 23, 8 pages. https://doi.org/10.1145/3383652.3423879
[114]
✱ Kunal Gupta, Ryo Hajika, Yun Suen Pai, Andreas Duenser, Martin Lochner, and Mark Billinghurst. 2019. In AI We Trust: Investigating the Relationship between Biosignals, Trust and Cognitive Load in VR. In 25th ACM Symposium on Virtual Reality Software and Technology (Parramatta, NSW, Australia) (VRST ’19). Association for Computing Machinery, New York, NY, USA, Article 33, 10 pages. https://doi.org/10.1145/3359996.3364276
[115]
✱ Kunal Gupta, Ryo Hajika, Yun Suen Pai, Andreas Duenser, Martin Lochner, and Mark Billinghurst. 2020. Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 756–765. https://doi.org/10.1109/VR46266.2020.00099
[116]
✱ Feyza Merve Hafızoğlu and Sandip Sen. 2018. The Effects of Past Experience on Trust in Repeated Human-Agent Teamwork. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems(Stockholm, Sweden) (AAMAS ’18). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 514–522.
[117]
✱ Lei Han, Tian Zheng, Yinheng Zhu, Lan Xu, and Lu Fang. 2020. Live Semantic 3D Perception for Immersive Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 26, 5(2020), 2012–2022. https://doi.org/10.1109/TVCG.2020.2973477
[118]
✱ Sara Hanson, Richard A. Paris, Haley A. Adams, and Bobby Bodenheimer. 2019. Improving Walking in Place Methods with Individualization and Deep Networks. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 367–376. https://doi.org/10.1109/VR.2019.8797751
[119]
Joanne Harmon, Victoria Pitt, Peter Summons, and Kerry J. Inder. 2021. Use of artificial intelligence and virtual reality within clinical simulation for nursing pain education: A scoping review. Nurse Education Today 97(2021), 104700. https://doi.org/10.1016/j.nedt.2020.104700
[120]
✱ Arno Hartholt, Ed Fast, Adam Reilly, Wendy Whitcup, Matt Liewer, and Sharon Mozgai. 2019. Ubiquitous Virtual Humans: A Multi-platform Framework for Embodied AI Agents in XR. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 308–3084. https://doi.org/10.1109/AIVR46125.2019.00072
[121]
✱ Jeremy Hartmann, Aakar Gupta, and Daniel Vogel. 2020. Extend, Push, Pull: Smartphone Mediated Interaction in Spatial Augmented Reality via Intuitive Mode Switching. In Symposium on Spatial User Interaction (Virtual Event, Canada) (SUI ’20). Association for Computing Machinery, New York, NY, USA, Article 2, 10 pages. https://doi.org/10.1145/3385959.3418456
[122]
✱ Yu He, Yingtian Liu, Yihan Jin, Song-Hai Zhang, Yu-Kun Lai, and Shi-Min Hu. 2021. Context-Consistent Generation of Indoor Virtual Environments based on Geometry Constraints. IEEE Transactions on Visualization and Computer Graphics 1 (2021), 1–1. https://doi.org/10.1109/TVCG.2021.3111729
[123]
✱ Lorenz Hetzel, John Dudley, Anna Maria Feit, and Per Ola Kristensson. 2021. Complex Interaction as Emergent Behaviour: Simulating Mid-Air Virtual Keyboard Typing using Reinforcement Learning. IEEE Transactions on Visualization and Computer Graphics 27, 11(2021), 4140–4149. https://doi.org/10.1109/TVCG.2021.3106494
[124]
✱ Joris Heyse, Maria Torres Vega, Femke de Backere, and Filip de Turck. 2019. Contextual Bandit Learning-Based Viewport Prediction for 360 Video. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 972–973. https://doi.org/10.1109/VR.2019.8797830
[125]
✱ Clarice Hilton, Nicola Plant, Carlos González Díaz, Phoenix Perry, Ruth Gibson, Bruno Martelli, Michael Zbyszynski, Rebecca Fiebrink, and Marco Gillies. 2021. InteractML: Making Machine Learning Accessible for Creative Practitioners Working with Movement Interaction in Immersive Media. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (Osaka, Japan) (VRST ’21). Association for Computing Machinery, New York, NY, USA, Article 23, 10 pages. https://doi.org/10.1145/3489849.3489879
[126]
✱ Koki Hirota and Takashi Komuro. 2019. Situation-Adaptive Object Grasping Recognition in VR Environment. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 171–1713. https://doi.org/10.1109/AIVR46125.2019.00035
[127]
✱ Simon M. Hofmann, Felix Klotzsche, Alberto Mariola, Vadim V. Nikulin, Arno Villringer, and Michael Gaebler. 2018. Decoding Subjective Emotional Arousal during a Naturalistic VR Experience from EEG Using LSTMs. In 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 128–131. https://doi.org/10.1109/AIVR.2018.00026
[128]
✱ Valentin Holzwarth, Johannes Schneider, Joshua Handali, Joy Gisler, Christian Hirt, Andreas Kunz, and Jan vom Brocke. 2021. Towards estimating affective states in Virtual Reality based on behavioral data. Virtual Reality 25, 4 (Dec. 2021), 1139–1152. https://doi.org/10.1007/s10055-021-00518-1
[129]
✱ Matthias Hoppe, Beat Rossmy, Daniel Peter Neumann, Stephan Streuber, Albrecht Schmidt, and Tonja-Katrin Machulla. 2020. A Human Touch: Social Touch Increases the Perceived Human-Likeness of Agents in Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3313831.3376719
[130]
Sebastian Houben, Johannes Stallkamp, Jan Salmen, Marc Schlipsing, and Christian Igel. 2013. Detection of Traffic Signs in Real-World Images: The German Traffic Sign Detection Benchmark. In International Joint Conference on Neural Networks (IJCNN). IEEE, New York, NY, USA.
[131]
✱ Kai-Wen Hsiao, Jheng-Wei Su, Yu-Chih Hung, Kuo-Wei Chen, Chih-Yuan Yao, and Hung-Kuo Chu. 2021. A Large-Scale Indoor Layout Reconstruction and Localization System for Spatial-Aware Mobile AR Applications. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 237–241. https://doi.org/10.1109/AIVR52153.2021.00054
[132]
✱ Ping Hu, Qi Sun, Piotr Didyk, Li-Yi Wei, and Arie E. Kaufman. 2019. Reducing Simulator Sickness with Perceptual Camera Control. ACM Trans. Graph. 38, 6, Article 210 (nov 2019), 12 pages. https://doi.org/10.1145/3355089.3356490
[133]
✱ Zhiming Hu, Andreas Bulling, Sheng Li, and Guoping Wang. 2021. EHTask: Recognizing User Tasks from Eye and Head Movements in Immersive Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 1 (2021), 1–1. https://doi.org/10.1109/TVCG.2021.3138902
[134]
✱ Zhiming Hu, Andreas Bulling, Sheng Li, and Guoping Wang. 2021. FixationNet: Forecasting Eye Fixations in Task-Oriented Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 27, 5 (2021), 2681–2690. https://doi.org/10.1109/TVCG.2021.3067779
[135]
✱ Zhiming Hu, Sheng Li, Congyi Zhang, Kangrui Yi, Guoping Wang, and Dinesh Manocha. 2020. DGaze: CNN-Based Gaze Prediction in Dynamic Scenes. IEEE Transactions on Visualization and Computer Graphics 26, 5 (2020), 1902–1911. https://doi.org/10.1109/TVCG.2020.2973473
[136]
✱ Zhiming Hu, Congyi Zhang, Sheng Li, Guoping Wang, and Dinesh Manocha. 2019. SGaze: A Data-Driven Eye-Head Coordination Model for Realtime Gaze Prediction. IEEE Transactions on Visualization and Computer Graphics 25, 5 (2019), 2002–2010. https://doi.org/10.1109/TVCG.2019.2899187
[137]
✱ Bingyao Huang and Haibin Ling. 2021. DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems. IEEE Transactions on Visualization and Computer Graphics 27, 5 (2021), 2725–2735. https://doi.org/10.1109/TVCG.2021.3067771
[138]
✱ Tobias Huber, Silvan Mertes, Stanislava Rangelova, Simon Flutura, and Elisabeth André. 2021. Dynamic Difficulty Adjustment in Virtual Reality Exergames through Experience-driven Procedural Content Generation. In 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Vol. 1. IEEE, New York, NY, USA, 1–8. https://doi.org/10.1109/SSCI50451.2021.9660086
[139]
✱ Brandon Huynh, Adam Ibrahim, Yun Suk Chang, Tobias Höllerer, and John O’Donovan. 2018. A Study of Situated Product Recommendations in Augmented Reality. In 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 35–43. https://doi.org/10.1109/AIVR.2018.00013
[140]
✱ Brandon Huynh, Jason Orlosky, and Tobias Höllerer. 2019. In-Situ Labeling for Augmented Reality Language Learning. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 1606–1611. https://doi.org/10.1109/VR.2019.8798358
[141]
Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. 2014. Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments. IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 7 (jul 2014), 1325–1339.
[142]
✱ Rifatul Islam, Kevin Desai, and John Quarles. 2021. Cybersickness Prediction from Integrated HMD’s Sensors: A Multimodal Deep Fusion Approach using Eye-tracking and Head-tracking Data. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 31–40. https://doi.org/10.1109/ISMAR52148.2021.00017
[143]
✱ Rifatul Islam, Yonggun Lee, Mehrad Jaloli, Imtiaz Muhammad, Dakai Zhu, Paul Rad, Yufei Huang, and John Quarles. 2020. Automatic Detection and Prediction of Cybersickness Severity using Deep Neural Networks from user’s Physiological Signals. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 400–411. https://doi.org/10.1109/ISMAR50242.2020.00066
[144]
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2016. Image-to-Image Translation with Conditional Adversarial Networks. CoRR abs/1611.07004 (2016). arXiv:1611.07004 http://arxiv.org/abs/1611.07004
[145]
✱ Juan Izquierdo-Domenech, Jordi Linares-Pellicer, and Jorge Orta-Lopez. 2020. Supporting interaction in augmented reality assisted industrial processes using a CNN-based semantic layer. In 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 27–32. https://doi.org/10.1109/AIVR50618.2020.00014
[146]
✱ Daekyo Jeong, Sangbong Yoo, and Yun Jang. 2019. Cybersickness Analysis with EEG Using Deep Learning Algorithms. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 827–835. https://doi.org/10.1109/VR.2019.8798334
[147]
✱ Jianqing Jia, Semir Elezovikj, Heng Fan, Shuojin Yang, Jing Liu, Wei Guo, Chiu C. Tan, and Haibin Ling. 2021. Semantic-aware label placement for augmented reality in street view. The Visual Computer 37, 7 (July 2021), 1805–1819. https://doi.org/10.1007/s00371-020-01939-w
[148]
✱ Xianta Jiang, Zhen Gang Xiao, and Carlo Menon. 2018. Virtual grasps recognition using fusion of Leap Motion and force myography. Virtual Reality 22, 4 (Nov. 2018), 297–308. https://doi.org/10.1007/s10055-018-0339-2
[149]
Yue Jiang, Yuwen Lu, Jeffrey Nichols, Wolfgang Stuerzlinger, Chun Yu, Christof Lutteroth, Yang Li, Ranjitha Kumar, and Toby Jia-Jun Li. 2022. Computational Approaches for Understanding, Generating, and Adapting User Interfaces. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, Article 74, 6 pages. https://doi.org/10.1145/3491101.3504030
[150]
✱ Brendan John, Pallavi Raiturkar, Arunava Banerjee, and Eakta Jain. 2018. An Evaluation of Pupillary Light Response Models for 2D Screens and VR HMDs. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology (Tokyo, Japan) (VRST ’18). Association for Computing Machinery, New York, NY, USA, Article 19, 11 pages. https://doi.org/10.1145/3281505.3281538
[151]
Hanbyul Joo, Tomas Simon, Xulong Li, Hao Liu, Lei Tan, Lin Gui, Sean Banerjee, Timothy Scott Godisart, Bart Nabbe, Iain Matthews, Takeo Kanade, Shohei Nobuhara, and Yaser Sheikh. 2017. Panoptic Studio: A Massively Multiview System for Social Interaction Capture. IEEE Transactions on Pattern Analysis and Machine Intelligence (2017).
[152]
✱ Jinki Jung, Hyeopwoo Lee, Jeehye Choi, Abhilasha Nanda, Uwe Gruenefeld, Tim Stratmann, and Wilko Heuten. 2018. Ensuring Safety in Augmented Reality from Trade-off Between Immersion and Situation Awareness. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 70–79. https://doi.org/10.1109/ISMAR.2018.00032
[153]
✱ Raehyuk Jung, Aiden Seung Joon Lee, Amirsaman Ashtari, and Jean-Charles Bazin. 2019. Deep360Up: A Deep Learning-Based Approach for Automatic VR Image Upright Adjustment. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 1–8. https://doi.org/10.1109/VR.2019.8798326
[154]
✱ Apostolos Kalatzis, Laura Stanley, and Vishnunarayan Girishan Prabhu. 2021. Affective State Classification in Virtual Reality Environments Using Electrocardiogram and Respiration Signals. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 160–167. https://doi.org/10.1109/AIVR52153.2021.00037
[155]
✱ Seokbin Kang, Ekta Shokeen, Virginia L. Byrne, Leyla Norooz, Elizabeth Bonsignore, Caro Williams-Pierce, and Jon E. Froehlich. 2020. ARMath: Augmenting Everyday Life with Math Learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3313831.3376252
[156]
✱ Yuna Kano and Junya Morita. 2020. The Effect of Experience and Embodiment on Empathetic Behavior toward Virtual Agents. In Proceedings of the 8th International Conference on Human-Agent Interaction (Virtual Event, USA) (HAI ’20). Association for Computing Machinery, New York, NY, USA, 112–120. https://doi.org/10.1145/3406499.3415074
[157]
✱ Tamás Karácsony, John Paulin Hansen, Helle Klingenberg Iversen, and Sadasivan Puthusserypady. 2019. Brain Computer Interface for Neuro-Rehabilitation With Deep Learning Classification and Virtual Reality Feedback. In Proceedings of the 10th Augmented Human International Conference 2019 (Reims, France) (AH2019). Association for Computing Machinery, New York, NY, USA, Article 22, 8 pages. https://doi.org/10.1145/3311823.3311864
[158]
✱ Pingchuan Ke and Kening Zhu. 2021. Larger Step Faster Speed: Investigating Gesture-Amplitude-based Locomotion in Place with Different Virtual Walking Speed in Virtual Reality. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 438–447. https://doi.org/10.1109/VR50410.2021.00067
[159]
Hanan Khalil, Micah Peters, Christina M. Godfrey, Patricia McInerney, Cassia Baldini Soares, and Deborah Parker. 2016. An Evidence-Based Approach to Scoping Reviews. Worldviews on Evidence-Based Nursing 13, 2 (2016), 118–123. https://doi.org/10.1111/wvn.12144
[160]
✱ Adil Khokhar, Andrew Yoshimura, and Christoph W. Borst. 2019. Pedagogical Agent Responsive to Eye Tracking in Educational VR. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 1018–1019. https://doi.org/10.1109/VR.2019.8797896
[161]
✱ Hansung Kim, Luca Remaggi, Aloisio Dourado, Teofilo de Campos, Philip J. B. Jackson, and Adrian Hilton. 2022. Immersive audio-visual scene reproduction using semantic scene reconstruction from 360 cameras. Virtual Reality 26, 3 (Sept. 2022), 823–838. https://doi.org/10.1007/s10055-021-00594-3
[162]
✱ Hansung Kim, Luca Remaggi, Philip J.B. Jackson, and Adrian Hilton. 2019. Immersive Spatial Audio Reproduction for VR/AR Using Room Acoustic Modelling from 360° Images. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, New York, NY, USA, 120–126. https://doi.org/10.1109/VR.2019.8798247
[163]
✱ Hak Gu Kim, Wissam J. Baddar, Heoun-taek Lim, Hyunwook Jeong, and Yong Man Ro. 2017. Measurement of Exceptional Motion in VR Video Contents for VR Sickness Assessment Using Deep Convolutional Autoencoder. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (Gothenburg, Sweden) (VRST ’17). Association for Computing Machinery, New York, NY, USA, Article 36, 7 pages. https://doi.org/10.1145/3139131.3139137
[164]
✱ Hak Gu Kim, Heoun-Taek Lim, Sangmin Lee, and Yong Man Ro. 2019. VRSA Net: VR Sickness Assessment Considering Exceptional Motion for 360° VR Video. IEEE Transactions on Image Processing 28, 4 (2019), 1646–1660. https://doi.org/10.1109/TIP.2018.2880509
[165]
✱ Jinwoo Kim, Woojae Kim, Heeseok Oh, Seongmin Lee, and Sanghoon Lee. 2019. A Deep Cybersickness Predictor Based on Brain Signal Analysis for Virtual Reality Contents. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Vol. 1. IEEE, New York, NY, USA, 10579–10588. https://doi.org/10.1109/ICCV.2019.01068
[166]
✱ Jinwoo Kim, Heeseok Oh, Woojae Kim, Seonghwa Choi, Wookho Son, and Sanghoon Lee. 2022. A Deep Motion Sickness Predictor Induced by Visual Stimuli in Virtual Reality. IEEE Transactions on Neural Networks and Learning Systems 33, 2 (2022), 554–566. https://doi.org/10.1109/TNNLS.2020.3028080
[167]
✱ Kangsoo Kim, Luke Boelling, Steffen Haesler, Jeremy Bailenson, Gerd Bruder, and Greg F. Welch. 2018. Does a Digital Assistant Need a Body? The Influence of Visual Embodiment and Social Behavior on the Perception of Intelligent Virtual Agents in AR. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 105–114. https://doi.org/10.1109/ISMAR.2018.00039
[168]
✱ Kihyun Kim, Sangmin Lee, Hak Gu Kim, Minho Park, and Yong Man Ro. 2019. Deep Objective Assessment Model Based on Spatio-Temporal Perception of 360-Degree Video for VR Sickness Prediction. In 2019 IEEE International Conference on Image Processing (ICIP), Vol. 1. IEEE, New York, NY, USA, 3192–3196. https://doi.org/10.1109/ICIP.2019.8803257
[169]
✱ Kangsoo Kim, Nahal Norouzi, Tiffany Losekamp, Gerd Bruder, Mindi Anderson, and Gregory Welch. 2019. Effects of Patient Care Assistant Embodiment and Computer Mediation on User Experience. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 17–177. https://doi.org/10.1109/AIVR46125.2019.00013
[170]
✱ Seongyeop Kim, Sangmin Lee, and Yong Man Ro. 2020. Estimating VR Sickness Caused By Camera Shake in VR Videography. In 2020 IEEE International Conference on Image Processing (ICIP), Vol. 1. IEEE, New York, NY, USA, 3433–3437. https://doi.org/10.1109/ICIP40778.2020.9190721
[171]
✱ Woojae Kim, Sanghoon Lee, and Alan Conrad Bovik. 2021. VR Sickness Versus VR Presence: A Statistical Prediction Model. IEEE Transactions on Image Processing 30 (2021), 559–571. https://doi.org/10.1109/TIP.2020.3036782
[172]
Hak Gu Kim, Heoun-Taek Lim, Sangmin Lee, and Yong Man Ro. 2019. VRSA Net: VR Sickness Assessment Considering Exceptional Motion for 360° VR Video. IEEE Transactions on Image Processing 28, 4 (2019), 1646–1660. https://doi.org/10.1109/TIP.2018.2880509
[173]
Kangsoo Kim, Mark Billinghurst, Gerd Bruder, Henry Been-Lirn Duh, and Gregory F. Welch. 2018. Revisiting Trends in Augmented Reality Research: A Review of the 2nd Decade of ISMAR (2008–2017). IEEE Transactions on Visualization and Computer Graphics 24, 11 (2018), 2947–2962. https://doi.org/10.1109/TVCG.2018.2868591
[174]
✱ Naoki Kimura and Jun Rekimoto. 2018. ExtVision: Augmentation of Visual Experiences with Generation of Context Images for a Peripheral Vision Using Deep Neural Network. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/3173574.3174001
[175]
✱ Felix Klotzsche, Alberto Mariola, Simon M. Hofmann, Vadim V. Nikulin, Arno Villringer, and Michael Gaebler. 2018. Using EEG to Decode Subjective Levels of Emotional Arousal During an Immersive VR Roller Coaster Ride. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 605–606. https://doi.org/10.1109/VR.2018.8446275
[176]
✱ Benjamin Knopp, Dmytro Velychko, Johannes Dreibrodt, Alexander C. Schütz, and Dominik Endres. 2020. Evaluating Perceptual Predictions Based on Movement Primitive Models in VR- and Online-Experiments. In ACM Symposium on Applied Perception 2020 (Virtual Event, USA) (SAP ’20). Association for Computing Machinery, New York, NY, USA, Article 1, 9 pages. https://doi.org/10.1145/3385955.3407940
[177]
Sebastian Knorr, Cagri Ozcinar, Colm O Fearghail, and Aljosa Smolic. 2018. Director’s Cut - A Combined Dataset for Visual Attention Analysis in Cinematic VR Content. In The 15th ACM SIGGRAPH European Conference on Visual Media Production. ACM, New York, NY, USA.
[178]
✱ Brooke Krajancich, Petr Kellnhofer, and Gordon Wetzstein. 2021. A Perceptual Model for Eccentricity-Dependent Spatio-Temporal Flicker Fusion and Its Applications to Foveated Graphics. ACM Trans. Graph. 40, 4, Article 47 (jul 2021), 11 pages. https://doi.org/10.1145/3450626.3459784
[179]
✱ Po-Chen Kuo, Li-Chung Chuang, Dong-Yi Lin, and Ming-Sui Lee. 2021. VR Sickness Assessment with Perception Prior and Hybrid Temporal Features. In 2020 25th International Conference on Pattern Recognition (ICPR), Vol. 1. IEEE, New York, NY, USA, 5558–5564. https://doi.org/10.1109/ICPR48806.2021.9412423
[180]
✱ Peter Kán and Hannes Kaufmann. 2019. DeepLight: light source estimation for augmented reality using deep learning. The Visual Computer 35, 6 (June 2019), 873–883. https://doi.org/10.1007/s00371-019-01666-x
[181]
✱ Philipp Ladwig, Alexander Pech, Ralf Dörner, and Christian Geiger. 2020. Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence. In 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 82–90. https://doi.org/10.1109/AIVR50618.2020.00025
[182]
✱ Po Kong Lai, Shuang Xie, Jochen Lang, and Robert Laganière. 2019. Real-Time Panoramic Depth Maps from Omni-directional Stereo Images for 6 DoF Videos in Virtual Reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 405–412. https://doi.org/10.1109/VR.2019.8798016
[183]
✱ Divesh Lala and Toyoaki Nishida. 2015. A data-driven passing interaction model for embodied basketball agents. Journal of Intelligent Information Systems 48 (2015), 27–60.
[184]
✱ Puneet Lall, Silviu Borac, Dave Richardson, Matt Pharr, and Manfred Ernst. 2018. View-Region Optimized Image-Based Scene Simplification. Proc. ACM Comput. Graph. Interact. Tech. 1, 2, Article 26 (aug 2018), 22 pages. https://doi.org/10.1145/3233311
[185]
✱ Maurice Lamb, Tamara Lorenz, Stephen J. Harrison, Rachel Kallen, Ali Minai, and Michael J. Richardson. 2017. PAPAc: A Pick and Place Agent Based on Human Behavioral Dynamics. In Proceedings of the 5th International Conference on Human Agent Interaction (Bielefeld, Germany) (HAI ’17). Association for Computing Machinery, New York, NY, USA, 131–141. https://doi.org/10.1145/3125739.3125771
[186]
Georgios Lampropoulos, Euclid Keramopoulos, and Konstantinos Diamantaras. 2020. Enhancing the functionality of augmented reality using deep learning, semantic web and knowledge graphs: A review. Visual Informatics 4, 1 (2020), 32–42. https://doi.org/10.1016/j.visinf.2020.01.001
[187]
✱ Yining Lang, Wei Liang, and Lap-Fai Yu. 2019. Virtual Agent Positioning Driven by Scene Semantics in Mixed Reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 767–775. https://doi.org/10.1109/VR.2019.8798018
[188]
✱ Marc Erich Latoschik, Florian Kern, Jan-Philipp Stauffert, Andrea Bartl, Mario Botsch, and Jean-Luc Lugrin. 2019. Not Alone Here?! Scalability and User Experience of Embodied Ambient Crowds in Distributed Social Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 25, 5 (2019), 2134–2144. https://doi.org/10.1109/TVCG.2019.2899250
[189]
Marc Erich Latoschik, Daniel Roth, Dominik Gall, Jascha Achenbach, Thomas Waltemate, and Mario Botsch. 2017. The Effect of Avatar Realism in Immersive Social Virtual Realities. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (Gothenburg, Sweden) (VRST ’17). Association for Computing Machinery, New York, NY, USA, Article 39, 10 pages. https://doi.org/10.1145/3139131.3139156
[190]
Vernon J Lawhern, Amelia J Solon, Nicholas R Waytowich, Stephen M Gordon, Chou P Hung, and Brent J Lance. 2018. EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces. Journal of Neural Engineering 15, 5 (2018), 056013. http://stacks.iop.org/1741-2552/15/i=5/a=056013
[191]
✱ Dong-Yong Lee, Yong-Hun Cho, and In-Kwon Lee. 2019. Real-time Optimal Planning for Redirected Walking Using Deep Q-Learning. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 63–71. https://doi.org/10.1109/VR.2019.8798121
[192]
✱ Dong-Yong Lee, Yong-Hun Cho, Dae-Hong Min, and In-Kwon Lee. 2020. Optimal Planning for Redirected Walking Based on Reinforcement Learning in Multi-user Environment with Irregularly Shaped Physical Space. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 155–163. https://doi.org/10.1109/VR46266.2020.00034
[193]
✱ Juyoung Lee, Myungho Lee, Gerard Jounghyun Kim, and Jae-In Hwang. 2020. Effects of Synchronized Leg Motion in Walk-in-Place Utilizing Deep Neural Networks for Enhanced Body Ownership and Sense of Presence in VR. In 26th ACM Symposium on Virtual Reality Software and Technology (Virtual Event, Canada) (VRST ’20). Association for Computing Machinery, New York, NY, USA, Article 12, 10 pages. https://doi.org/10.1145/3385956.3418959
[194]
✱ Sangmin Lee, Seongyeop Kim, Hak Gu Kim, Min Seob Kim, Seokho Yun, Bumseok Jeong, and Yong Man Ro. 2019. Physiological Fusion Net: Quantifying Individual VR Sickness with Content Stimulus and Physiological Response. In 2019 IEEE International Conference on Image Processing (ICIP), Vol. 1. IEEE, New York, NY, USA, 440–444. https://doi.org/10.1109/ICIP.2019.8802983
[195]
✱ Tae Min Lee, Jong-Chul Yoon, and In-Kwon Lee. 2019. Motion Sickness Prediction in Stereoscopic Videos using 3D Convolutional Neural Networks. IEEE Transactions on Visualization and Computer Graphics 25, 5 (2019), 1919–1927. https://doi.org/10.1109/TVCG.2019.2899186
[196]
✱ Teesid Leelasawassuk, Dima Damen, and Walterio Mayol-Cuevas. 2017. Automated Capture and Delivery of Assistive Task Guidance with an Eyewear Computer: The GlaciAR System. In Proceedings of the 8th Augmented Human International Conference (Silicon Valley, California, USA) (AH ’17). Association for Computing Machinery, New York, NY, USA, Article 16, 9 pages. https://doi.org/10.1145/3041164.3041185
[197]
Danielle Levac, Heather Colquhoun, and Kelly K. O’Brien. 2010. Scoping studies: advancing the methodology. Implementation Science 5, 1 (Sept. 2010), 69. https://doi.org/10.1186/1748-5908-5-69
[198]
✱ Gang Li and Muhammad Adeel Khan. 2019. Deep Learning on VR-Induced Attention. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 163–1633. https://doi.org/10.1109/AIVR46125.2019.00033
[199]
✱ Gang Li, Ogechi Onuoha, Mark McGill, Stephen Brewster, Chao Ping Chen, and Frank Pollick. 2021. Comparing Autonomic Physiological and Electroencephalography Features for VR Sickness Detection Using Predictive Models. In 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Vol. 1. IEEE, New York, NY, USA, 01–08. https://doi.org/10.1109/SSCI50451.2021.9660126
[200]
✱ Jisheng Li, Yuze He, Yubin Hu, Yuxing Han, and Jiangtao Wen. 2021. Learning To Compose 6-DOF Omnidirectional Videos Using Multi-Sphere Images. In 2021 IEEE International Conference on Image Processing (ICIP), Vol. 1. IEEE, New York, NY, USA, 3298–3302. https://doi.org/10.1109/ICIP42928.2021.9506127
[201]
✱ Xiangdong Li, Yifei Shan, Wenqian Chen, Yue Wu, Preben Hansen, and Simon Perrault. 2021. Predicting User Visual Attention in Virtual Reality with a Deep Learning Model. Virtual Reality 25, 4 (Dec. 2021), 1123–1136. https://doi.org/10.1007/s10055-021-00512-7
[202]
✱ Xiang Li, Yuan Tian, Fuyao Zhang, Shuxue Quan, and Yi Xu. 2020. Object Detection in the Context of Mobile Augmented Reality. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 156–163. https://doi.org/10.1109/ISMAR50242.2020.00037
[203]
✱ Yuwei Li, Xi Luo, Youyi Zheng, Pengfei Xu, and Hongbo Fu. 2017. SweepCanvas: Sketch-Based 3D Prototyping on an RGB-D Image. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (Québec City, QC, Canada) (UIST ’17). Association for Computing Machinery, New York, NY, USA, 387–399. https://doi.org/10.1145/3126594.3126611
[204]
✱ Yi-Jun Li, Miao Wang, Frank Steinicke, and Qinping Zhao. 2021. OpenRDW: A Redirected Walking Library and Benchmark with Multi-User, Learning-based Functionalities and State-of-the-art Algorithms. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 21–30. https://doi.org/10.1109/ISMAR52148.2021.00016
[205]
✱ Zhong Li, Lele Chen, Celong Liu, Yu Gao, Yuanzhou Ha, Chenliang Xu, Shuxue Quan, and Yi Xu. 2019. 3D Human Avatar Digitization from a Single Image. In The 17th International Conference on Virtual-Reality Continuum and Its Applications in Industry (Brisbane, QLD, Australia) (VRCAI ’19). Association for Computing Machinery, New York, NY, USA, Article 12, 8 pages. https://doi.org/10.1145/3359997.3365707
[206]
✱ Wei Liang, Jingjing Liu, Yining Lang, Bing Ning, and Lap-Fai Yu. 2019. Functional Workspace Optimization via Learning Personal Preferences from Virtual Experiences. IEEE Transactions on Visualization and Computer Graphics 25, 5 (2019), 1836–1845. https://doi.org/10.1109/TVCG.2019.2898721
[207]
✱ Haodong Liao, Ning Xie, Huiyuan Li, Yuhang Li, Jianping Su, Feng Jiang, Weipeng Huang, and Heng Tao Shen. 2020. Data-Driven Spatio-Temporal Analysis via Multi-Modal Zeitgebers and Cognitive Load in VR. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 473–482. https://doi.org/10.1109/VR46266.2020.00068
[208]
✱ Jonathan Liebers, Patrick Horn, Christian Burschik, Uwe Gruenefeld, and Stefan Schneegass. 2021. Using Gaze Behavior and Head Orientation for Implicit Identification in Virtual Reality. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (Osaka, Japan) (VRST ’21). Association for Computing Machinery, New York, NY, USA, Article 22, 9 pages. https://doi.org/10.1145/3489849.3489880
[209]
✱ Heoun-Taek Lim, Hak Gu Kim, and Yong Man Ro. 2018. VR IQA NET: Deep Virtual Reality Image Quality Assessment Using Adversarial Learning. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. 1. IEEE, New York, NY, USA, 6737–6741. https://doi.org/10.1109/ICASSP.2018.8461317
[210]
✱ Kyungmin Lim, Jaesung Lee, Kwanghyun Won, Nupur Kala, and Tammy Lee. 2021. A novel method for VR sickness reduction based on dynamic field of view processing. Virtual Reality 25, 2 (June 2021), 331–340. https://doi.org/10.1007/s10055-020-00457-3
[211]
Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollár. 2014. Microsoft COCO: Common Objects in Context. https://doi.org/10.48550/ARXIV.1405.0312
[212]
Sebastian Linxen, Christian Sturm, Florian Brühlmann, Vincent Cassau, Klaus Opwis, and Katharina Reinecke. 2021. How WEIRD is CHI?. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 143, 14 pages. https://doi.org/10.1145/3411764.3445488
[213]
Gwen Littlewort, Jacob Whitehill, Tingfan Wu, Ian Fasel, Mark Frank, Javier Movellan, and Marian Bartlett. 2011. The computer expression recognition toolbox (CERT). In 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG), Vol. 1. IEEE, New York, NY, USA, 298–305. https://doi.org/10.1109/FG.2011.5771414
[214]
Daquan Liu, Chengjiang Long, Hongpan Zhang, Hanning Yu, Xinzhi Dong, and Chunxia Xiao. 2020. ARShadowGAN: Shadow Generative Adversarial Network for Augmented Reality in Single Light Scenes. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, New York, NY, USA.
[215]
✱ Chang Liu, Alexander Plopski, Kiyoshi Kiyokawa, Photchara Ratsamee, and Jason Orlosky. 2018. IntelliPupil: Pupillometric Light Modulation for Optical See-Through Head-Mounted Displays. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 98–104. https://doi.org/10.1109/ISMAR.2018.00037
[216]
✱ Daquan Liu, Chengjiang Long, Hongpan Zhang, Hanning Yu, Xinzhi Dong, and Chunxia Xiao. 2020. ARShadowGAN: Shadow Generative Adversarial Network for Augmented Reality in Single Light Scenes. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1. IEEE, New York, NY, USA, 8136–8145. https://doi.org/10.1109/CVPR42600.2020.00816
[217]
✱ Huimin Liu, Zhiquan Wang, Christos Mousas, and Dominic Kao. 2020. Virtual Reality Racket Sports: Virtual Drills for Exercise and Training. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 566–576. https://doi.org/10.1109/ISMAR50242.2020.00084
[218]
✱ Jingjing Liu, Wei Liang, Bing Ning, and Ting Mao. 2021. Work Surface Arrangement Optimization Driven by Human Activity. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 270–278. https://doi.org/10.1109/VR50410.2021.00049
[219]
✱ Zhihao Liu, Fanxing Zhang, and Zhanglin Cheng. 2021. BuildingSketch: Freehand Mid-Air Sketching for Building Modeling. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 329–338. https://doi.org/10.1109/ISMAR52148.2021.00049
[220]
Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. 2016. DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, New York, NY, USA.
[221]
✱ Conny Lu, Praneeth Chakravarthula, Yujie Tao, Steven Chen, and Henry Fuchs. 2020. Improved vergence and accommodation via Purkinje Image tracking with multiple cameras for AR glasses. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 320–331. https://doi.org/10.1109/ISMAR50242.2020.00058
[222]
Andrés Lucero. 2015. Using Affinity Diagrams to Evaluate Interactive Prototypes. In Human-Computer Interaction – INTERACT 2015, Julio Abascal, Simone Barbosa, Mirko Fetter, Tom Gross, Philippe Palanque, and Marco Winckler (Eds.). Springer International Publishing, Cham, 231–248.
[223]
Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar, and Iain Matthews. 2010. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, Vol. 1. IEEE, New York, NY, USA, 94–101. https://doi.org/10.1109/CVPRW.2010.5543262
[224]
Michael Luck and Ruth Aylett. 2000. Applying artificial intelligence to virtual reality: Intelligent virtual environments. Applied Artificial Intelligence 14, 1 (2000), 3–32. https://doi.org/10.1080/088395100117142
[225]
Martin H. Luerssen and Tim Hawke. 2018. Virtual Agents as a Service: Applications in Healthcare. In Proceedings of the 18th International Conference on Intelligent Virtual Agents (Sydney, NSW, Australia) (IVA ’18). Association for Computing Machinery, New York, NY, USA, 107–112. https://doi.org/10.1145/3267851.3267858
[226]
✱ Tiffany Luong, Nicolas Martin, Anaïs Raison, Ferran Argelaguet, Jean-Marc Diverrez, and Anatole Lécuyer. 2020. Towards Real-Time Recognition of Users Mental Workload Using Integrated Physiological Sensors Into a VR HMD. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 425–437. https://doi.org/10.1109/ISMAR50242.2020.00068
[227]
Michael J. Lyons. 2021. "Excavating AI" Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset. CoRR abs/2107.13998 (2021). arXiv:2107.13998 https://arxiv.org/abs/2107.13998
[228]
✱ Zhuoyue Lyu, Jiannan Li, and Bryan Wang. 2021. AIive: Interactive Visualization and Sonification of Neural Networks in Virtual Reality. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 251–255. https://doi.org/10.1109/AIVR52153.2021.00057
[229]
✱ Shugao Ma, Tomas Simon, Jason Saragih, Dawei Wang, Yuecheng Li, Fernando De la Torre, and Yaser Sheikh. 2021. Pixel Codec Avatars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, New York, NY, USA, 64–73.
[230]
✱ Magalie Ochs, Sameer Jain, and Philippe Blache. 2018. Toward an Automatic Prediction of the Sense of Presence in Virtual Reality Environment. In Proceedings of the 6th International Conference on Human-Agent Interaction (Southampton, United Kingdom) (HAI ’18). Association for Computing Machinery, New York, NY, USA, 161–166. https://doi.org/10.1145/3284432.3284452
[231]
✱ David Mandl, Kwang Moo Yi, Peter Mohr, Peter M. Roth, Pascal Fua, Vincent Lepetit, Dieter Schmalstieg, and Denis Kalkofen. 2017. Learning Lightprobes for Mixed Reality Illumination. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 82–89. https://doi.org/10.1109/ISMAR.2017.25
[232]
✱ Nicolas Martin, Nicolas Mathieu, Nico Pallamin, Martin Ragot, and Jean-Marc Diverrez. 2020. Virtual reality sickness detection: an approach based on physiological signals and machine learning. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 387–399. https://doi.org/10.1109/ISMAR50242.2020.00065
[233]
✱ Pablo Martinez-Gonzalez, Sergiu Oprea, Alberto Garcia-Garcia, Alvaro Jover-Alvarez, Sergio Orts-Escolano, and Jose Garcia-Rodriguez. 2020. UnrealROX: an extremely photorealistic virtual reality environment for robotics simulations and synthetic data generation. Virtual Reality 24, 2 (June 2020), 271–288. https://doi.org/10.1007/s10055-019-00399-5
[234]
✱ Katsutoshi Masai, Kai Kunze, Daisuke Sakamoto, Yuta Sugiura, and Maki Sugimoto. 2020. Face Commands - User-Defined Facial Gestures for Smart Glasses. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 374–386. https://doi.org/10.1109/ISMAR50242.2020.00064
[235]
Y. Matsuda, H. Hoashi, and K. Yanai. 2012. Recognition of Multiple-Food Images by Detecting Candidate Regions. In Proc. of IEEE International Conference on Multimedia and Expo (ICME). IEEE, New York, NY, USA.
[236]
✱ Fabrice Matulic, Aditya Ganeshan, Hiroshi Fujiwara, and Daniel Vogel. 2021. Phonetroller: Visual Representations of Fingers for Precise Touch Input with Mobile Phones in VR. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 129, 13 pages. https://doi.org/10.1145/3411764.3445583
[237]
S. Mohammad Mavadati, Mohammad H. Mahoor, Kevin Bartlett, Philip Trinh, and Jeffrey F. Cohn. 2013. DISFA: A Spontaneous Facial Action Intensity Database. IEEE Transactions on Affective Computing 4, 2 (2013), 151–160. https://doi.org/10.1109/T-AFFC.2013.4
[238]
✱ Sven Mayer, Jens Reinhardt, Robin Schweigert, Brighten Jelke, Valentin Schwind, Katrin Wolf, and Niels Henze. 2020. Improving Humans’ Ability to Interpret Deictic Gestures in Virtual Reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376340
[239]
✱ Sven Mayer, Valentin Schwind, Robin Schweigert, and Niels Henze. 2018. The Effect of Offset Correction and Cursor on Mid-Air Pointing in Real and Virtual Environments. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3173574.3174227
[240]
✱ Eric J. McDermott, Johanna Metsomaa, Paolo Belardinelli, Moritz Grosse-Wentrup, Ulf Ziemann, and Christoph Zrenner. 2021. Predicting motor behavior: an efficient EEG signal processing pipeline to detect brain states with potential therapeutic relevance for VR-based neurorehabilitation. Virtual Reality 1 (Sept. 2021). https://doi.org/10.1007/s10055-021-00538-x
[241]
✱ Jess McIntosh, Hubert Dariusz Zajac, Andreea Nicoleta Stefan, Joanna Bergström, and Kasper Hornbæk. 2020. Iteratively Adapting Avatars Using Task-Integrated Optimisation. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (Virtual Event, USA) (UIST ’20). Association for Computing Machinery, New York, NY, USA, 709–721. https://doi.org/10.1145/3379337.3415832
[242]
Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, and Christian Theobalt. 2017. Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision. In 2017 Fifth International Conference on 3D Vision (3DV). IEEE, New York, NY, USA. https://doi.org/10.1109/3dv.2017.00064
[243]
✱ Nadine Meissler, Annika Wohlan, Nico Hochgeschwender, and Andreas Schreiber. 2019. Using Visualization of Convolutional Neural Networks in Virtual Reality for Machine Learning Newcomers. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 152–1526. https://doi.org/10.1109/AIVR46125.2019.00031
[244]
✱ Ben Meuleman and David Rudrauf. 2021. Induction and Profiling of Strong Multi-Componential Emotions in Virtual Reality. IEEE Transactions on Affective Computing 12 (2021), 189–202. https://doi.org/10.1109/TAFFC.2018.2864730
[245]
Abraham Hani Mhaidli and Florian Schaub. 2021. Identifying Manipulative Advertising Techniques in XR Through Scenario Construction. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 296, 18 pages. https://doi.org/10.1145/3411764.3445253
[246]
Paul Milgram and Fumio Kishino. 1994. A Taxonomy of Mixed Reality Visual Displays. IEICE TRANSACTIONS on Information and Systems E77-D, 12 (1994), 1321–1329. https://search.ieice.org/bin/summary.php?id=e77-d_12_1321
[247]
✱ Robert Miller, Natasha Kholgade Banerjee, and Sean Banerjee. 2021. Using Siamese Neural Networks to Perform Cross-System Behavioral Authentication in Virtual Reality. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 140–149. https://doi.org/10.1109/VR50410.2021.00035
[248]
✱ Xin Min, Wenqiao Zhang, Shouqian Sun, Nan Zhao, Siliang Tang, and Yueting Zhuang. 2019. VPModel: High-Fidelity Product Simulation in a Virtual-Physical Environment. IEEE Transactions on Visualization and Computer Graphics 25, 11 (2019), 3083–3093. https://doi.org/10.1109/TVCG.2019.2932276
[249]
Marvin Minsky. 1988. The Society of Mind. Simon & Schuster, New York, NY, USA.
[250]
✱ George B. Mo, John J Dudley, and Per Ola Kristensson. 2021. Gesture Knitter: A Hand Gesture Design Tool for Head-Mounted Mixed Reality Applications. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 291, 13 pages. https://doi.org/10.1145/3411764.3445766
[251]
✱ Diego Monteiro, Hai-Ning Liang, Xiaohang Tang, and Pourang Irani. 2021. Using Trajectory Compression Rate to Predict Changes in Cybersickness in Virtual Reality Games. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 138–146. https://doi.org/10.1109/ISMAR52148.2021.00028
[252]
✱ Alec G. Moore, Ryan P. McMahan, Hailiang Dong, and Nicholas Ruozzi. 2020. Extracting Velocity-Based User-Tracking Features to Predict Learning Gains in a Virtual Reality Training Application. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 694–703. https://doi.org/10.1109/ISMAR50242.2020.00099
[253]
✱ Fariba Mostajeran, Frank Steinicke, Oscar Javier Ariza Nunez, Dimitrios Gatsios, and Dimitrios Fotiadis. 2020. Augmented Reality for Older Adults: Exploring Acceptability of Virtual Coaches for Home-Based Balance Training in an Aging Population. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376565
[254]
✱ Christos Mousas. 2018. Performance-Driven Dance Motion Control of a Virtual Partner Character. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 57–64. https://doi.org/10.1109/VR.2018.8446498
[255]
✱ Moritz Mühlhausen, Moritz Kappel, Marc Kassubeck, Paul M. Bittner, Susana Castillo, and Marcus Magnor. 2020. Temporal Consistent Motion Parallax for Omnidirectional Stereo Panorama Video. In 26th ACM Symposium on Virtual Reality Software and Technology (Virtual Event, Canada) (VRST ’20). Association for Computing Machinery, New York, NY, USA, Article 21, 9 pages. https://doi.org/10.1145/3385956.3418965
[256]
Zachary Munn, Micah D. J. Peters, Cindy Stern, Catalin Tufanaru, Alexa McArthur, and Edoardo Aromataris. 2018. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology 18, 1 (Nov. 2018), 143. https://doi.org/10.1186/s12874-018-0611-x
[257]
✱ Kizashi Nakano, Daichi Horita, Nobuchika Sakata, Kiyoshi Kiyokawa, Keiji Yanai, and Takuji Narumi. 2019. DeepTaste: Augmented Reality Gustatory Manipulation with GAN-Based Real-Time Food-to-Food Translation. In 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 212–223. https://doi.org/10.1109/ISMAR.2019.000-1
[258]
✱ Sahil Narang, Andrew Best, and Dinesh Manocha. 2018. Simulating Movement Interactions Between Avatars & Agents in Virtual Worlds Using Human Motion Constraints. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 9–16. https://doi.org/10.1109/VR.2018.8446152
[259]
✱ Sahil Narang, Andrew Best, and Dinesh Manocha. 2019. Inferring User Intent using Bayesian Theory of Mind in Shared Avatar-Agent Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 25, 5 (2019), 2113–2122. https://doi.org/10.1109/TVCG.2019.2898800
[260]
Ryota Natsume, Shunsuke Saito, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, and Shigeo Morishima. 2019. SiCloPe: Silhouette-Based Clothed People. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, New York, NY, USA.
[261]
Allen Newell and Herbert A. Simon. 1976. Computer Science as Empirical Inquiry: Symbols and Search. Commun. ACM 19, 3 (mar 1976), 113–126. https://doi.org/10.1145/360018.360022
[262]
Nahal Norouzi, Kangsoo Kim, Gerd Bruder, Austin Erickson, Zubin Choudhary, Yifan Li, and Greg Welch. 2020. A Systematic Literature Review of Embodied Augmented Reality Agents in Head-Mounted Display Environments. In ICAT-EGVE 2020 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, Ferran Argelaguet, Ryan McMahan, and Maki Sugimoto (Eds.). The Eurographics Association. https://doi.org/10.2312/egve.20201264
[263]
Nahal Norouzi, Kangsoo Kim, Jason Hochreiter, Myungho Lee, Salam Daher, Gerd Bruder, and Greg Welch. 2018. A Systematic Survey of 15 Years of User Studies Published in the Intelligent Virtual Agents Conference. In Proceedings of the 18th International Conference on Intelligent Virtual Agents (Sydney, NSW, Australia) (IVA ’18). Association for Computing Machinery, New York, NY, USA, 17–22. https://doi.org/10.1145/3267851.3267901
[264]
Ariel Noyman and Kent Larson. 2020. DeepScope: HCI Platform for Generative Cityscape Visualization. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–9. https://doi.org/10.1145/3334480.3382809
[265]
Miquel Mascaró Oliver and Esperança Amengual Alcover. 2020. UIBVFED: Virtual facial expression dataset. PLOS ONE 15, 4 (April 2020), 1–10. https://doi.org/10.1371/journal.pone.0231266
[266]
✱ Jason Orlosky, Brandon Huynh, and Tobias Höllerer. 2019. Using Eye Tracked Virtual Reality to Classify Understanding of Vocabulary in Recall Tasks. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 66–667. https://doi.org/10.1109/AIVR46125.2019.00019
[267]
✱ Nitish Padmanaban, Timon Ruban, Vincent Sitzmann, Anthony M. Norcia, and Gordon Wetzstein. 2018. Towards a Machine-Learning Approach for Sickness Prediction in 360 Stereoscopic Videos. IEEE Transactions on Visualization and Computer Graphics 24, 4 (2018), 1594–1603. https://doi.org/10.1109/TVCG.2018.2793560
[268]
✱ Wolfgang Paier, Anna Hilsmann, and Peter Eisert. 2021. Example-Based Facial Animation of Virtual Reality Avatars Using Auto-Regressive Neural Networks. IEEE Computer Graphics and Applications 41, 4 (2021), 52–63. https://doi.org/10.1109/MCG.2021.3068035
[269]
✱ Seungwon Paik, Youngseung Jeon, Patrick C. Shih, and Kyungsik Han. 2021. I Feel More Engaged When I Move!: Deep Learning-based Backward Movement Detection and its Application. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 483–492. https://doi.org/10.1109/VR50410.2021.00072
[270]
Ana Paiva, Iolanda Leite, Hana Boukricha, and Ipke Wachsmuth. 2017. Empathy in Virtual Agents and Robots: A Survey. ACM Trans. Interact. Intell. Syst. 7, 3, Article 11 (sep 2017), 40 pages. https://doi.org/10.1145/2912150
[271]
✱ Robin Palmberg, Christopher Peters, and Adam Qureshi. 2017. When facial expressions dominate emotion perception in groups of virtual characters. In 2017 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), Vol. 1. IEEE, New York, NY, USA, 157–160. https://doi.org/10.1109/VS-GAMES.2017.8056588
[272]
✱ Mathias Parger, Chengcheng Tang, Yuanlu Xu, Christopher David Twigg, Lingling Tao, Yijing Li, Robert Wang, and Markus Steinberger. 2021. UNOC: Understanding Occlusion for Embodied Presence in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 1 (2021), 1–1. https://doi.org/10.1109/TVCG.2021.3085407
[273]
✱ Sangin Park, Laehyun Kim, Jangho Kwon, Soo Ji Choi, and Mincheol Whang. 2022. Evaluation of visual-induced motion sickness from head-mounted display using heartbeat evoked potential: a cognitive load-focused approach. Virtual Reality 26, 3 (Sept. 2022), 979–1000. https://doi.org/10.1007/s10055-021-00600-8
[274]
Taesung Park, Alexei A. Efros, Richard Zhang, and Jun-Yan Zhu. 2020. Contrastive Learning for Unpaired Image-to-Image Translation. In European Conference on Computer Vision. Springer, Cham.
[275]
Mark Paterson. 2006. Feel the presence: technologies of touch and distance. Environment and Planning D: Society and Space 24, 5 (2006), 691–708.
[276]
✱ Tabitha C. Peck, Jessica J. Good, and Katharina Seitz. 2021. Evidence of Racial Bias Using Immersive Virtual Reality: Analysis of Head and Hand Motions During Shooting Decisions. IEEE Transactions on Visualization and Computer Graphics 27, 5 (2021), 2502–2512. https://doi.org/10.1109/TVCG.2021.3067767
[277]
Micah D. J. Peters, Christina M. Godfrey, Hanan Khalil, Patricia McInerney, Deborah Parker, and Cassia Baldini Soares. 2015. Guidance for conducting systematic scoping reviews. JBI Evidence Implementation 13, 3 (2015), 141–146. https://doi.org/10.1097/XEB.0000000000000050
[278]
✱ Gustav Bøg Petersen, Aske Mottelson, and Guido Makransky. 2021. Pedagogical Agents in Educational VR: An in the Wild Study. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 482, 12 pages. https://doi.org/10.1145/3411764.3445760
[279]
✱ Stefano Petrangeli, Gwendal Simon, and Viswanathan Swaminathan. 2018. Trajectory-Based Viewport Prediction for 360-Degree Virtual Reality Videos. In 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 157–160. https://doi.org/10.1109/AIVR.2018.00033
[280]
✱ Ken Pfeuffer, Matthias J. Geiger, Sarah Prange, Lukas Mecke, Daniel Buschek, and Florian Alt. 2019. Behavioural Biometrics in VR: Identifying People from Body Motion and Relations in Virtual Reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300340
[281]
✱ Duc-Minh Pham. 2018. Human Identification Using Neural Network-Based Classification of Periodic Behaviors in Virtual Reality. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 657–658. https://doi.org/10.1109/VR.2018.8446529
[282]
Mai T. Pham, Andrijana Rajić, Judy D. Greig, Jan M. Sargeant, Andrew Papadopoulos, and Scott A. McEwen. 2014. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Research Synthesis Methods 5, 4 (2014), 371–385. https://doi.org/10.1002/jrsm.1123
[283]
✱ Pierre-Olivier Pigny and Lionel Dominjon. 2019. Using CNNs For Users Segmentation In Video See-Through Augmented Virtuality. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 229–2295. https://doi.org/10.1109/AIVR46125.2019.00048
[284]
✱ Thibault Porssut, Yawen Hou, Olaf Blanke, Bruno Herbelin, and Ronan Boulic. 2022. Adapting Virtual Embodiment Through Reinforcement Learning. IEEE Transactions on Visualization and Computer Graphics 28, 9 (2022), 3193–3205. https://doi.org/10.1109/TVCG.2021.3057797
[285]
Chen Qian, Xiao Sun, Yichen Wei, Xiaoou Tang, and Jian Sun. 2014. Realtime and Robust Hand Tracking from Depth. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, New York, NY, USA.
[286]
✱ Muhammad Raees and Sehat Ullah. 2021. RUN: rational ubiquitous navigation, a model for automated navigation and searching in virtual environments. Virtual Reality 25, 2 (June 2021), 511–521. https://doi.org/10.1007/s10055-020-00468-0
[287]
✱ Pierluigi Zama Ramirez, Claudio Paternesi, Luca De Luigi, Luigi Lella, Daniele De Gregorio, and Luigi Di Stefano. 2020. Shooting Labels: 3D Semantic Labeling by Virtual Reality. In 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 99–106. https://doi.org/10.1109/AIVR50618.2020.00027
[288]
✱ Tanmay Randhavane, Aniket Bera, Kyra Kapsaskis, Kurt Gray, and Dinesh Manocha. 2019. FVA: Modeling Perceived Friendliness of Virtual Agents Using Movement Characteristics. IEEE Transactions on Visualization and Computer Graphics 25, 11 (2019), 3135–3145. https://doi.org/10.1109/TVCG.2019.2932235
[289]
✱ Tanmay Randhavane, Aniket Bera, Kyra Kapsaskis, Rahul Sheth, Kurt Gray, and Dinesh Manocha. 2019. EVA: Generating Emotional Behavior of Virtual Agents Using Expressive Features of Gait and Gaze. In ACM Symposium on Applied Perception 2019 (Barcelona, Spain) (SAP ’19). Association for Computing Machinery, New York, NY, USA, Article 6, 10 pages. https://doi.org/10.1145/3343036.3343129
[290]
✱ Tanmay Randhavane, Aniket Bera, Emily Kubin, Kurt Gray, and Dinesh Manocha. 2021. Modeling Data-Driven Dominance Traits for Virtual Characters Using Gait Analysis. IEEE Transactions on Visualization and Computer Graphics 27, 6 (2021), 2967–2979. https://doi.org/10.1109/TVCG.2019.2953063
[291]
✱ Hedieh Ranjbartabar, Deborah Richards, Ayse Aysin Bilgin, and Cat Kutay. 2021. First Impressions Count! The Role of the Human’s Emotional State on Rapport Established with an Empathic versus Neutral Virtual Therapist. IEEE Transactions on Affective Computing 12, 3 (2021), 788–800. https://doi.org/10.1109/TAFFC.2019.2899305
[292]
✱ Joshua Ratcliff, Alexey Supikov, Santiago Alfaro, and Ronald Azuma. 2020. ThinVR: Heterogeneous microlens arrays for compact, 180 degree FOV VR near-eye displays. IEEE Transactions on Visualization and Computer Graphics 26, 5 (2020), 1981–1990. https://doi.org/10.1109/TVCG.2020.2973064
[293]
Jack Ratcliffe, Francesco Soave, Nick Bryan-Kinns, Laurissa Tokarchuk, and Ildar Farkhatdinov. 2021. Extended Reality (XR) Remote Research: A Survey of Drawbacks and Opportunities. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 527, 13 pages. https://doi.org/10.1145/3411764.3445170
[294]
✱ Lisa Rebenitsch and Charles Owen. 2021. Estimating cybersickness from virtual reality applications. Virtual Reality 25, 1 (March 2021), 165–174. https://doi.org/10.1007/s10055-020-00446-6
[295]
✱ Manuel Rebol, Christian Gütl, and Krzysztof Pietroszek. 2021. Passing a Non-verbal Turing Test: Evaluating Gesture Animations Generated from Speech. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 573–581. https://doi.org/10.1109/VR50410.2021.00082
[296]
Dirk Reiners, Mohammad Reza Davahli, Waldemar Karwowski, and Carolina Cruz-Neira. 2021. The Combination of Artificial Intelligence and Extended Reality: A Systematic Review. Frontiers in Virtual Reality 2 (2021). https://www.frontiersin.org/articles/10.3389/frvir.2021.721933
[297]
✱ Jens Reinhardt, Luca Hillen, and Katrin Wolf. 2020. Embedding Conversational Agents into AR: Invisible or with a Realistic Human Body?. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (Sydney NSW, Australia) (TEI ’20). Association for Computing Machinery, New York, NY, USA, 299–310. https://doi.org/10.1145/3374920.3374956
[298]
Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, and Christian Theobalt. 2016. EgoCap: Egocentric Marker-Less Motion Capture with Two Fisheye Cameras. ACM Trans. Graph. 35, 6, Article 162 (nov 2016), 11 pages. https://doi.org/10.1145/2980179.2980235
[299]
Taina Ribeiro de Oliveira, Matheus Moura da Silva, Rafael Antonio Nepomuceno Spinasse, Gabriel Giesen Ludke, Mateus Ruy Soares Gaudio, Guilherme Iglesias Rocha Gomes, Luan Guio Cotini, Daniel Vargens, Marcelo Queiroz Schimidt, Rodrigo Varejao Andreao, and Mario Mestria. 2021. Systematic Review of Virtual Reality Solutions Employing Artificial Intelligence Methods. In Symposium on Virtual and Augmented Reality (SVR ’21). Association for Computing Machinery, New York, NY, USA, 42–55. https://doi.org/10.1145/3488162.3488209
[300]
Jeff Rickel. 2001. Intelligent Virtual Agents for Education and Training: Opportunities and Challenges. In Intelligent Virtual Agents, Angélica de Antonio, Ruth Aylett, and Daniel Ballin (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 15–22.
[301]
✱ Andrés Ovidio Restrepo Rodríguez, Maddyzeth Ariza Riaño, Paulo Alonso Gaona García, Carlos Enrique Montenegro Marín, Rubén González Crespo, and Xing Wu. 2020. Emotional characterization of children through a learning environment using learning analytics and AR-Sandbox. Journal of Ambient Intelligence and Humanized Computing 11, 11 (Nov. 2020), 5353–5367. https://doi.org/10.1007/s12652-020-01887-2
[302]
Katja Rogers, Sukran Karaosmanoglu, Maximilian Altmeyer, Ally Suarez, and Lennart E. Nacke. 2022. Much Realistic, Such Wow! A Systematic Literature Review of Realism in Digital Games. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 190, 21 pages. https://doi.org/10.1145/3491102.3501875
[303]
✱ Robert A. Rolin, Jolande Fooken, Miriam Spering, and Dinesh K. Pai. 2019. Perception of Looming Motion in Virtual Reality Egocentric Interception Tasks. IEEE Transactions on Visualization and Computer Graphics 25, 10 (2019), 3042–3048. https://doi.org/10.1109/TVCG.2018.2859987
[304]
✱ Miguel Fabian Romero Rondon, Dario Zanca, Stefano Melacci, Marco Gori, and Lucile Sassatelli. 2021. HeMoG: A White-Box Model to Unveil the Connection between Saliency Information and Human Head Motion in Virtual Reality. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 10–18. https://doi.org/10.1109/AIVR52153.2021.00012
[305]
✱ Miguel Fabián Romero Rondón, Lucile Sassatelli, Ramón Aparicio-Pardo, and Frédéric Precioso. 2022. TRACK: A New Method From a Re-Examination of Deep Architectures for Head Motion Prediction in 360 Videos. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 9 (2022), 5681–5699. https://doi.org/10.1109/TPAMI.2021.3070520
[306]
✱ Daniel Roth, Gary Bente, Peter Kullmann, David Mal, Chris Felix Purps, Kai Vogeley, and Marc Erich Latoschik. 2019. Technologies for Social Augmentations in User-Embodied Virtual Reality. In 25th ACM Symposium on Virtual Reality Software and Technology (Parramatta, NSW, Australia) (VRST ’19). Association for Computing Machinery, New York, NY, USA, Article 5, 12 pages. https://doi.org/10.1145/3359996.3364269
[307]
✱ Menandro Roxas, Tomoki Hori, Taiki Fukiage, Yasuhide Okamoto, and Takeshi Oishi. 2018. Occlusion Handling Using Semantic Segmentation and Visibility-Based Rendering for Mixed Reality. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology (Tokyo, Japan) (VRST ’18). Association for Computing Machinery, New York, NY, USA, Article 20, 8 pages. https://doi.org/10.1145/3281505.3281546
[308]
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115, 3 (2015), 211–252. https://doi.org/10.1007/s11263-015-0816-y
[309]
✱ Pejman Sajjadi, Laura Hoffmann, Philipp Cimiano, and Stefan Kopp. 2018. On the Effect of a Personality-Driven ECA on Perceived Social Presence and Game Experience in VR. In 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), Vol. 1. IEEE, New York, NY, USA, 1–8. https://doi.org/10.1109/VS-Games.2018.8493436
[310]
✱ Lucas H. Sallaberry, Romero Tori, and Fatima L. S. Nunes. 2021. Comparison of Machine Learning Algorithms for Automatic Assessment of Performance in a Virtual Reality Dental Simulator. In Symposium on Virtual and Augmented Reality (Virtual Event, Brazil) (SVR '21). Association for Computing Machinery, New York, NY, USA, 14–23. https://doi.org/10.1145/3488162.3488207
[311]
✱ Wallas Santos, Isabela Chambers, Emilio Vital Brazil, and Marcio Moreno. 2019. Structuring and Inspecting 3D Anchors for Seismic Volume into Hyperknowledge Base in Virtual Reality. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 271–2713. https://doi.org/10.1109/AIVR46125.2019.00063
[312]
✱ Koya Sato, Yuji Sano, Mai Otsuki, Mizuki Oka, and Kazuhiko Kato. 2019. Augmented Recreational Volleyball Court: Supporting the Beginners’ Landing Position Prediction Skill by Providing Peripheral Visual Feedback. In Proceedings of the 10th Augmented Human International Conference 2019 (Reims, France) (AH2019). Association for Computing Machinery, New York, NY, USA, Article 15, 9 pages. https://doi.org/10.1145/3311823.3311843
[313]
✱ Batuhan Sayis, Ciera Crowell, Juan Benitez, Rafael Ramirez, and Narcis Pares. 2019. Computational Modeling of Psycho-physiological Arousal and Social Initiation of children with Autism in Interventions through Full-Body Interaction. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), Vol. 1. IEEE, New York, NY, USA, 573–579. https://doi.org/10.1109/ACII.2019.8925474
[314]
✱ David Scherfgen and Jonas Schild. 2021. Estimating the Pose of a Medical Manikin for Haptic Augmentation of a Virtual Patient in Mixed Reality Training. In Symposium on Virtual and Augmented Reality (Virtual Event, Brazil) (SVR '21). Association for Computing Machinery, New York, NY, USA, 33–41. https://doi.org/10.1145/3488162.3488166
[315]
Jonas Schjerlund, Kasper Hornbæk, and Joanna Bergström. 2021. Ninja Hands: Using Many Hands to Improve Target Selection in VR. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 130, 14 pages. https://doi.org/10.1145/3411764.3445759
[316]
Jonas Schjerlund, Kasper Hornbæk, and Joanna Bergström. 2022. OVRlap: Perceiving Multiple Locations Simultaneously to Improve Interaction in VR. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 355, 13 pages. https://doi.org/10.1145/3491102.3501873
[317]
✱ Susanne Schmidt, Oscar Javier Ariza Nunez, and Frank Steinicke. 2019. Blended Agents: Manipulation of Physical Objects within Mixed Reality Environments and Beyond. In Symposium on Spatial User Interaction (New Orleans, LA, USA) (SUI ’19). Association for Computing Machinery, New York, NY, USA, Article 6, 10 pages. https://doi.org/10.1145/3357251.3357591
[318]
✱ Anderson Schrader, Isabella Gebhart, Drew Garrison, Andrew Duchowski, Martian Lapadatescu, Weiyu Feng, Mahmoud Thabit, Fang Wang, Krzysztof Krejtz, and Daniel D. Petty. 2021. Toward Eye-Tracked Sideline Concussion Assessment in EXtended Reality. In ACM Symposium on Eye Tracking Research and Applications (Virtual Event, Germany) (ETRA ’21 Full Papers). Association for Computing Machinery, New York, NY, USA, Article 7, 11 pages. https://doi.org/10.1145/3448017.3457378
[319]
✱ Maximilian Schrapel, Thilo Schulz, and Michael Rohs. 2020. Augmenting Public Bookcases to Support Book Sharing. In 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services (Oldenburg, Germany) (MobileHCI ’20). Association for Computing Machinery, New York, NY, USA, Article 11, 11 pages. https://doi.org/10.1145/3379503.3403542
[320]
✱ Gabriel Schwartz, Shih-En Wei, Te-Li Wang, Stephen Lombardi, Tomas Simon, Jason Saragih, and Yaser Sheikh. 2020. The Eyes Have It: An Integrated Eye and Face Model for Photorealistic Facial Animation. ACM Trans. Graph. 39, 4, Article 91 (aug 2020), 15 pages. https://doi.org/10.1145/3386569.3392493
[321]
✱ Valentin Schwind, David Halbhuber, Jakob Fehle, Jonathan Sasse, Andreas Pfaffelhuber, Christoph Tögel, Julian Dietz, and Niels Henze. 2020. The Effects of Full-Body Avatar Movement Predictions in Virtual Reality Using Neural Networks. In 26th ACM Symposium on Virtual Reality Software and Technology (Virtual Event, Canada) (VRST ’20). Association for Computing Machinery, New York, NY, USA, Article 28, 11 pages. https://doi.org/10.1145/3385956.3418941
[322]
✱ Valentin Schwind, Sven Mayer, Alexandre Comeau-Vermeersch, Robin Schweigert, and Niels Henze. 2018. Up to the Finger Tip: The Effect of Avatars on Mid-Air Pointing Accuracy in Virtual Reality. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play (Melbourne, VIC, Australia) (CHI PLAY ’18). Association for Computing Machinery, New York, NY, USA, 477–488. https://doi.org/10.1145/3242671.3242675
[323]
✱ Sven Seele, Sebastian Misztal, Helmut Buhler, Rainer Herpers, and Jonas Schild. 2017. Here’s Looking At You Anyway! How Important is Realistic Gaze Behavior in Co-Located Social Virtual Reality Games?. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (Amsterdam, The Netherlands) (CHI PLAY ’17). Association for Computing Machinery, New York, NY, USA, 531–540. https://doi.org/10.1145/3116595.3116619
[324]
✱ Nathan Semertzidis, Michaela Scary, Josh Andres, Brahmi Dwivedi, Yutika Chandrashekhar Kulwe, Fabio Zambetta, and Florian Floyd Mueller. 2020. Neo-Noumena: Augmenting Emotion Communication. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376599
[325]
✱ Xinyu Shi, Junjun Pan, Zeyong Hu, Juncong Lin, Shihui Guo, Minghong Liao, Ye Pan, and Ligang Liu. 2019. Accurate and Fast Classification of Foot Gestures for Virtual Locomotion. In 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 178–189. https://doi.org/10.1109/ISMAR.2019.000-6
[326]
✱ Jotaro Shigeyama, Takeru Hashimoto, Shigeo Yoshida, Takuji Narumi, Tomohiro Tanikawa, and Michitaka Hirose. 2019. Transcalibur: A Weight Shifting Virtual Reality Controller for 2D Shape Rendering Based on Computational Perception Model. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI '19). Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3290605.3300241
[327]
✱ Deeksha Shravani, Prajwal Y R, Prajwal V Atreyas, and Shobha G. 2021. VR Supermarket: a Virtual Reality Online Shopping Platform with a Dynamic Recommendation System. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 119–123. https://doi.org/10.1109/AIVR52153.2021.00028
[328]
✱ Ilia Shumailov and Hatice Gunes. 2017. Computational analysis of valence and arousal in virtual reality gaming using lower arm electromyograms. In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), Vol. 1. IEEE, New York, NY, USA, 164–169. https://doi.org/10.1109/ACII.2017.8273595
[329]
Mel Slater. 2014. Grand Challenges in Virtual Environments. Frontiers in Robotics and AI 1 (2014). https://doi.org/10.3389/frobt.2014.00003
[330]
✱ Agata Marta Soccini and Federica Cena. 2021. The Ethics of Rehabilitation in Virtual Reality: the role of Self-Avatars and Deep Learning. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 324–328. https://doi.org/10.1109/AIVR52153.2021.00068
[331]
✱ Gowri Somanath and Daniel Kurz. 2021. HDR Environment Map Estimation for Real-Time Augmented Reality. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1. IEEE, New York, NY, USA, 11293–11301. https://doi.org/10.1109/CVPR46437.2021.01114
[332]
Shuran Song, Fisher Yu, Andy Zeng, Angel X. Chang, Manolis Savva, and Thomas Funkhouser. 2017. Semantic Scene Completion from a Single Depth Image. Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition 1 (2017).
[333]
Maximilian Speicher, Brian D. Hall, and Michael Nebeling. 2019. What is Mixed Reality?. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI '19). Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3290605.3300767
[334]
✱ Misha Sra, Pattie Maes, Prashanth Vijayaraghavan, and Deb Roy. 2017. Auris: Creating Affective Virtual Spaces from Music. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (Gothenburg, Sweden) (VRST ’17). Association for Computing Machinery, New York, NY, USA, Article 26, 11 pages. https://doi.org/10.1145/3139131.3139139
[335]
J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. 2012. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks 32 (2012), 323–332. https://doi.org/10.1016/j.neunet.2012.02.016
[336]
✱ Ryan R. Strauss, Raghuram Ramanujan, Andrew Becker, and Tabitha C. Peck. 2020. A Steering Algorithm for Redirected Walking Using Reinforcement Learning. IEEE Transactions on Visualization and Computer Graphics 26, 5 (2020), 1955–1963. https://doi.org/10.1109/TVCG.2020.2973060
[337]
✱ Lena Stubbemann, Dominik Dürrschnabel, and Robert Refflinghaus. 2021. Neural Networks for Semantic Gaze Analysis in XR Settings. In ACM Symposium on Eye Tracking Research and Applications (Virtual Event, Germany) (ETRA ’21 Full Papers). Association for Computing Machinery, New York, NY, USA, Article 5, 11 pages. https://doi.org/10.1145/3448017.3457380
[338]
✱ Yongbin Sun, Alexandre Armengol-Urpi, Sai Nithin Reddy Kantareddy, Joshua Siegel, and Sanjay Sarma. 2019. MagicHand: Interact with IoT Devices in Augmented Reality Environment. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 1738–1743. https://doi.org/10.1109/VR.2019.8798053
[339]
Xiao Sun, Yichen Wei, Shuang Liang, Xiaoou Tang, and Jian Sun. 2015. Cascaded Hand Pose Regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, New York, NY, USA.
[340]
Anthea Sutton, Mark Clowes, Louise Preston, and Andrew Booth. 2019. Meeting the review family: exploring review types and associated information retrieval requirements. Health Information & Libraries Journal 36, 3 (2019), 202–222. https://doi.org/10.1111/hir.12276
[341]
✱ Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto. 2017. Recognition and mapping of facial expressions to avatar by embedded photo reflective sensors in head mounted display. In 2017 IEEE Virtual Reality (VR), Vol. 1. IEEE, New York, NY, USA, 177–185. https://doi.org/10.1109/VR.2017.7892245
[342]
✱ Justyna Swidrak and Grzegorz Pochwatko. 2019. Being Touched by a Virtual Human: Relationships Between Heart Rate, Gender, Social Status, and Compliance. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (Paris, France) (IVA '19). Association for Computing Machinery, New York, NY, USA, 49–55. https://doi.org/10.1145/3308532.3329467
[343]
✱ Sławomir K. Tadeja, Patrick Langdon, and Per Ola Kristensson. 2021. Supporting Iterative Virtual Reality Analytics Design and Evaluation by Systematic Generation of Surrogate Clustered Datasets. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 376–385. https://doi.org/10.1109/ISMAR52148.2021.00054
[344]
✱ Xiao Tang, Xiaowei Hu, Chi-Wing Fu, and Daniel Cohen-Or. 2020. GrabAR: Occlusion-Aware Grabbing Virtual Objects in AR. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (Virtual Event, USA) (UIST ’20). Association for Computing Machinery, New York, NY, USA, 697–708. https://doi.org/10.1145/3379337.3415835
[345]
Xiao Tang, Xiaowei Hu, Chi-Wing Fu, and Daniel Cohen-Or. 2020. GrabAR: Occlusion-Aware Grabbing Virtual Objects in AR. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (Virtual Event, USA) (UIST ’20). Association for Computing Machinery, New York, NY, USA, 697–708. https://doi.org/10.1145/3379337.3415835
[346]
✱ Catherine Taylor, Chris Mullany, Robin McNicholas, and Darren Cosker. 2019. VR Props: An End-to-End Pipeline for Transporting Real Objects Into Virtual and Augmented Environments. In 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 83–92. https://doi.org/10.1109/ISMAR.2019.00-22
[347]
✱ Kevin Kennard Thiel, Florian Naumann, Eduard Jundt, Stephan Guennemann, and Gudrun J. Klinker. 2021. C.DOT - Convolutional Deep Object Tracker for Augmented Reality Based Purely on Synthetic Data. IEEE Transactions on Visualization and Computer Graphics 1 (2021), 1–1. https://doi.org/10.1109/TVCG.2021.3089096
[348]
✱ Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. 2018. FaceVR: Real-Time Gaze-Aware Facial Reenactment in Virtual Reality. ACM Trans. Graph. 37, 2, Article 25 (jun 2018), 15 pages. https://doi.org/10.1145/3182644
[349]
✱ Fuhui Tian, Shogo Okada, and Katsumi Nitta. 2019. Analyzing Eye Movements in Interview Communication with Virtual Reality Agents. In Proceedings of the 7th International Conference on Human-Agent Interaction (Kyoto, Japan) (HAI ’19). Association for Computing Machinery, New York, NY, USA, 3–10. https://doi.org/10.1145/3349537.3351889
[350]
✱ Hao Tian, Changbo Wang, Dinesh Manocha, and Xinyu Zhang. 2019. Realtime Hand-Object Interaction Using Learned Grasp Space for Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 25, 8 (2019), 2623–2635. https://doi.org/10.1109/TVCG.2018.2849381
[351]
Denis Tome, Patrick Peluse, Lourdes Agapito, and Hernan Badino. 2019. xR-EgoPose: Egocentric 3D Human Pose from an HMD Camera. In Proceedings of the IEEE International Conference on Computer Vision. IEEE, New York, NY, USA, 7728–7738.
[352]
✱ Denis Tome, Thiemo Alldieck, Patrick Peluse, Gerard Pons-Moll, Lourdes Agapito, Hernan Badino, and Fernando De la Torre. 2020. SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera. IEEE Transactions on Pattern Analysis and Machine Intelligence 1 (2020), 1–1. https://doi.org/10.1109/TPAMI.2020.3029700
[353]
✱ Daiki Tone, Daisuke Iwai, Shinsaku Hiura, and Kosuke Sato. 2020. FibAR: Embedding Optical Fibers in 3D Printed Objects for Active Markers in Dynamic Projection Mapping. IEEE Transactions on Visualization and Computer Graphics 26, 5 (2020), 2030–2040. https://doi.org/10.1109/TVCG.2020.2973444
[354]
✱ Miguel Torres-Ruiz, Felix Mata, Roberto Zagal, Giovanni Guzmán, Rolando Quintero, and Marco Moreno-Ibarra. 2020. A recommender system to generate museum itineraries applying augmented reality and social-sensor mining techniques. Virtual Reality 24, 1 (March 2020), 175–189. https://doi.org/10.1007/s10055-018-0366-z
[355]
✱ Tomas Trescak and Anton Bogdanovych. 2017. Case-Based Planning for Large Virtual Agent Societies. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (Gothenburg, Sweden) (VRST ’17). Association for Computing Machinery, New York, NY, USA, Article 33, 10 pages. https://doi.org/10.1145/3139131.3139155
[356]
Andrea C. Tricco, Erin Lillie, Wasifa Zarin, Kelly K. O’Brien, Heather Colquhoun, Danielle Levac, David Moher, Micah D.J. Peters, Tanya Horsley, Laura Weeks, Susanne Hempel, Elie A. Akl, Christine Chang, Jessie McGowan, Lesley Stewart, Lisa Hartling, Adrian Aldcroft, Michael G. Wilson, Chantelle Garritty, Simon Lewin, Christina M. Godfrey, Marilyn T. Macdonald, Etienne V. Langlois, Karla Soares-Weiser, Jo Moriarty, Tammy Clifford, Özge Tunçalp, and Sharon E. Straus. 2018. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Annals of Internal Medicine 169, 7 (Oct. 2018), 467–473. https://doi.org/10.7326/M18-0850
[357]
✱ Okan Tarhan Tursun, Elena Arabadzhiyska-Koleva, Marek Wernikowski, Radosław Mantiuk, Hans-Peter Seidel, Karol Myszkowski, and Piotr Didyk. 2019. Luminance-Contrast-Aware Foveated Rendering. ACM Trans. Graph. 38, 4, Article 98 (jul 2019), 14 pages. https://doi.org/10.1145/3306346.3322985
[358]
✱ Junya Ueda and Katsunori Okajima. 2019. AR Food Changer using Deep Learning And Cross-Modal Effects. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 110–1107. https://doi.org/10.1109/AIVR46125.2019.00025
[359]
✱ Narumol Vannaprathip, Peter Haddawy, Holger Schultheis, and Siriwan Suebnukarn. 2022. Intelligent Tutoring for Surgical Decision Making: a Planning-Based Approach. International Journal of Artificial Intelligence in Education 32, 2 (June 2022), 350–381. https://doi.org/10.1007/s40593-021-00261-3
[360]
Gul Varol, Duygu Ceylan, Bryan Russell, Jimei Yang, Ersin Yumer, Ivan Laptev, and Cordelia Schmid. 2018. BodyNet: Volumetric Inference of 3D Human Body Shapes. In Proceedings of the European Conference on Computer Vision (ECCV). Springer, Cham.
[361]
✱ Valentin Vasiliu and Gábor Sörös. 2019. Coherent Rendering of Virtual Smile Previews with Fast Neural Style Transfer. In 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 66–73. https://doi.org/10.1109/ISMAR.2019.00-25
[362]
✱ Harshita Ved and Caglar Yildirim. 2021. Detecting Mental Workload in Virtual Reality Using EEG Spectral Data: A Deep Learning Approach. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 173–178. https://doi.org/10.1109/AIVR52153.2021.00039
[363]
✱ Rohith Venkatakrishnan, Roshan Venkatakrishnan, Reza Ghaiumy Anaraky, Matias Volonte, Bart Knijnenburg, and Sabarish V Babu. 2020. A Structural Equation Modeling Approach to Understand the Relationship between Control, Cybersickness and Presence in Virtual Reality. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 682–691. https://doi.org/10.1109/VR46266.2020.00091
[364]
Keith Vertanen and Per Ola Kristensson. 2011. A Versatile Dataset for Text Entry Evaluations Based on Genuine Mobile Emails. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services (Stockholm, Sweden) (MobileHCI ’11). Association for Computing Machinery, New York, NY, USA, 295–298. https://doi.org/10.1145/2037373.2037418
[365]
✱ Johanna Vielhaben, Hüseyin Camalan, Wojciech Samek, and Markus Wenzel. 2019. Viewport Forecasting in 360° Virtual Reality Videos with Machine Learning. 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR) 1 (2019), 74–747.
[366]
✱ Adam Viola, Sahil Sharma, Pankaj Bishnoi, Matheus Gadelha, Stefano Petrangeli, Haoliang Wang, and Viswanathan Swaminathan. 2021. Trace Match & Merge: Long-Term Field-Of-View Prediction for AR Applications. In 2021 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 1–9. https://doi.org/10.1109/AIVR52153.2021.00011
[367]
Ekaterina Volkova, Stephan de la Rosa, Heinrich H. Bülthoff, and Betty Mohler. 2014. The MPI Emotional Body Expressions Database for Narrative Scenarios. PLOS ONE 9, 12 (Dec. 2014), 1–28. https://doi.org/10.1371/journal.pone.0113647
[368]
✱ Matias Volonte, Yu-Chun Hsu, Kuan-Yu Liu, Joe P. Mazer, Sai-Keung Wong, and Sabarish V. Babu. 2020. Effects of Interacting with a Crowd of Emotional Virtual Humans on Users’ Affective and Non-Verbal Behaviors. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 293–302. https://doi.org/10.1109/VR46266.2020.00049
[369]
✱ Haoshuo Wang, Colm O’Fearghail, Emin Zerman, Karsten Braungart, Aljosa Smolic, and Sebastian Knorr. 2021. Visual Attention Analysis and User Guidance in Cinematic VR Film. In 2021 International Conference on 3D Immersion (IC3D), Vol. 1. IEEE, New York, NY, USA, 1–8. https://doi.org/10.1109/IC3D53758.2021.9687294
[370]
✱ Isaac Wang, Jesse Smith, and Jaime Ruiz. 2019. Exploring Virtual Agents for Augmented Reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI '19). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300511
[371]
✱ Ker-Jiun Wang, Quanbo Liu, Yifan Zhao, Caroline Yan Zheng, Soumya Vhasure, Quanfeng Liu, Prakash Thakur, Mingui Sun, and Zhi-Hong Mao. 2018. Intelligent Wearable Virtual Reality (VR) Gaming Controller for People with Motor Disabilities. In 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 161–164. https://doi.org/10.1109/AIVR.2018.00034
[372]
✱ Tianyi Wang, Xun Qian, Fengming He, Xiyun Hu, Yuanzhi Cao, and Karthik Ramani. 2021. GesturAR: An Authoring System for Creating Freehand Interactive Augmented Reality Applications. In The 34th Annual ACM Symposium on User Interface Software and Technology (Virtual Event, USA) (UIST '21). Association for Computing Machinery, New York, NY, USA, 552–567. https://doi.org/10.1145/3472749.3474769
[373]
✱ Yuyang Wang, Jean-Rémy Chardonnet, and Frédéric Merienne. 2019. VR Sickness Prediction for Navigation in Immersive Virtual Environments using a Deep Long Short Term Memory Model. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 1874–1881. https://doi.org/10.1109/VR.2019.8798213
[374]
Na Wang, Haoliang Wang, Stefano Petrangeli, Viswanathan Swaminathan, Fei Li, and Songqing Chen. 2020. Towards Field-of-View Prediction for Augmented Reality Applications on Mobile Devices. In Proceedings of the 12th ACM International Workshop on Immersive Mixed and Virtual Environment Systems (Istanbul, Turkey) (MMVE ’20). Association for Computing Machinery, New York, NY, USA, 13–18. https://doi.org/10.1145/3386293.3397114
[375]
Pei Wang. 2019. On Defining Artificial Intelligence. Journal of Artificial General Intelligence 10, 2 (2019), 1–37. https://doi.org/10.2478/jagi-2019-0002
[376]
✱ Philip Weber, Kevin Krings, Julia Nießner, Sabrina Brodesser, and Thomas Ludwig. 2021. FoodChattAR: Exploring the Design Space of Edible Virtual Agents for Human-Food Interaction. In Designing Interactive Systems Conference 2021 (Virtual Event, USA) (DIS '21). Association for Computing Machinery, New York, NY, USA, 638–650. https://doi.org/10.1145/3461778.3461998
[377]
✱ Shih-En Wei, Jason Saragih, Tomas Simon, Adam W. Harley, Stephen Lombardi, Michal Perdoch, Alexander Hypes, Dawei Wang, Hernan Badino, and Yaser Sheikh. 2019. VR Facial Animation via Multiview Image Translation. ACM Trans. Graph. 38, 4, Article 67 (jul 2019), 16 pages. https://doi.org/10.1145/3306346.3323030
[378]
✱ Wei Wei, Edmond S. L. Ho, Kevin D. McCay, Robertas Damaševičius, Rytis Maskeliūnas, and Anna Esposito. 2022. Assessing Facial Symmetry and Attractiveness using Augmented Reality. Pattern Analysis and Applications 25, 3 (Aug. 2022), 635–651. https://doi.org/10.1007/s10044-021-00975-z
[379]
Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. 2016. Convolutional Pose Machines. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, New York, NY, USA.
[380]
✱ Yueting Weng, Chun Yu, Yingtian Shi, Yuhang Zhao, Yukang Yan, and Yuanchun Shi. 2021. FaceSight: Enabling Hand-to-Face Gesture Interaction on AR Glasses with a Downward-Facing Camera Vision. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 10, 14 pages. https://doi.org/10.1145/3411764.3445484
[381]
✱ Poorni Wickramarathne, Minoli De Silva, Chathurangi Weerasinghe, Heshani Nanayakkara, Pradeep Abeygunawardhana, and Suranjini Silva. 2019. TrendiTex: An Intelligent Fashion Designer. In 2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI), Vol. 1. IEEE, New York, NY, USA, 505–510. https://doi.org/10.1109/ISRITI48646.2019.9034631
[382]
✱ Carolin Wienrich, Richard Gross, Felix Kretschmer, and Gisela Müller-Plath. 2018. Developing and Proving a Framework for Reaction Time Experiments in VR to Objectively Measure Social Interaction with Virtual Agents. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 191–198. https://doi.org/10.1109/VR.2018.8446352
[383]
Alexander Winkler-Schwartz, Vincent Bissonnette, Nykan Mirchi, Nirros Ponnudurai, Recai Yilmaz, Nicole Ledwos, Samaneh Siyar, Hamed Azarnoush, Bekir Karlik, and Rolando F. Del Maestro. 2019. Artificial Intelligence in Medical Education: Best Practices Using Machine Learning to Assess Surgical Expertise in Virtual Reality Simulation. Journal of Surgical Education 76, 6 (2019), 1681–1690. https://doi.org/10.1016/j.jsurg.2019.05.015
[384]
✱ Torsten Wörtwein and Stefan Scherer. 2017. What really matters — An information gain analysis of questions and reactions in automated PTSD screenings. In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), Vol. 1. IEEE, New York, NY, USA, 15–20. https://doi.org/10.1109/ACII.2017.8273573
[385]
Chenglei Wu, Zhihao Tan, Zhi Wang, and Shiqiang Yang. 2017. A Dataset for Exploring User Behaviors in VR Spherical Video Streaming. In Proceedings of the 8th ACM on Multimedia Systems Conference (Taipei, Taiwan) (MMSys’17). Association for Computing Machinery, New York, NY, USA, 193–198. https://doi.org/10.1145/3083187.3083210
[386]
✱ Nannan Wu, Qianwen Chao, Yanzhen Chen, Weiwei Xu, Chen Liu, Dinesh Manocha, Wenxin Sun, Yi Han, Xinran Yao, and Xiaogang Jin. 2021. AgentDress: Realtime Clothing Synthesis for Virtual Agents using Plausible Deformations. IEEE Transactions on Visualization and Computer Graphics 27, 11 (2021), 4107–4118. https://doi.org/10.1109/TVCG.2021.3106429
[387]
✱ Pei Wu, Wenxin Ding, Zhixiang You, and Ping An. 2019. Virtual Reality Video Quality Assessment Based on 3D Convolutional Neural Networks. In 2019 IEEE International Conference on Image Processing (ICIP), Vol. 1. IEEE, New York, NY, USA, 3187–3191. https://doi.org/10.1109/ICIP.2019.8803023
[388]
✱ Lei Xiao, Anton Kaplanyan, Alexander Fix, Matthew Chapman, and Douglas Lanman. 2018. DeepFocus: Learned Image Synthesis for Computational Displays. ACM Trans. Graph. 37, 6, Article 200 (dec 2018), 13 pages. https://doi.org/10.1145/3272127.3275032
[389]
✱ Biao Xie, Yongqi Zhang, Haikun Huang, Elisa Ogawa, Tongjian You, and Lap-Fai Yu. 2018. Exercise Intensity-Driven Level Design. IEEE Transactions on Visualization and Computer Graphics 24, 4 (2018), 1661–1670. https://doi.org/10.1109/TVCG.2018.2793618
[390]
✱ Jun Xing, Koki Nagano, Weikai Chen, Haotian Xu, Li-yi Wei, Yajie Zhao, Jingwan Lu, Byungmoon Kim, and Hao Li. 2019. HairBrush for Immersive Data-Driven Hair Modeling. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (New Orleans, LA, USA) (UIST ’19). Association for Computing Machinery, New York, NY, USA, 263–279. https://doi.org/10.1145/3332165.3347876
[391]
✱ Di Xu, Zhen Li, and Qi Cao. 2021. Object-based illumination transferring and rendering for applications of mixed reality. The Visual Computer (Oct. 2021). https://doi.org/10.1007/s00371-021-02292-2
[392]
✱ Xuhai Xu, Jiahao Li, Tianyi Yuan, Liang He, Xin Liu, Yukang Yan, Yuntao Wang, Yuanchun Shi, Jennifer Mankoff, and Anind K Dey. 2021. HulaMove: Using Commodity IMU for Waist Interaction. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 503, 16 pages. https://doi.org/10.1145/3411764.3445182
[393]
✱ Yanyu Xu, Yanbing Dong, Junru Wu, Zhengzhong Sun, Zhiru Shi, Jingyi Yu, and Shenghua Gao. 2018. Gaze Prediction in Dynamic 360° Immersive Videos. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vol. 1. IEEE, New York, NY, USA, 5333–5342. https://doi.org/10.1109/CVPR.2018.00559
[394]
✱ Megha Yadav, Md Nazmus Sakib, Kexin Feng, Theodora Chaspari, and Amir Behzadan. 2019. Virtual reality interfaces and population-specific models to mitigate public speaking anxiety. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), Vol. 1. IEEE, New York, NY, USA, 1–7. https://doi.org/10.1109/ACII.2019.8925509
[395]
✱ Koki Yamashita, Takashi Kikuchi, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas, and Yuta Sugiura. 2017. CheekInput: Turning Your Cheek into an Input Surface by Embedded Optical Sensors on a Head-Mounted Display. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (Gothenburg, Sweden) (VRST ’17). Association for Computing Machinery, New York, NY, USA, Article 19, 8 pages. https://doi.org/10.1145/3139131.3139146
[396]
✱ Jacky Yang, Michael Chan, Alvaro Uribe-Quevedo, Bill Kapralos, Norman Jaimes, and Adam Dubrowski. 2020. Prototyping Virtual Reality Interactions in Medical Simulation Employing Speech Recognition. In 2020 22nd Symposium on Virtual and Augmented Reality (SVR), Vol. 1. IEEE, New York, NY, USA, 351–355. https://doi.org/10.1109/SVR51698.2020.00059
[397]
✱ Hui Ye, Kin Chung Kwan, Wanchao Su, and Hongbo Fu. 2020. ARAnimator: In-Situ Character Animation in Mobile AR with User-Defined Motion Gestures. ACM Trans. Graph. 39, 4, Article 83 (jul 2020), 12 pages. https://doi.org/10.1145/3386569.3392404
[398]
✱ Zi-Ming Ye, Jun-Long Chen, Miao Wang, and Yong-Liang Yang. 2021. PAVAL: Position-Aware Virtual Agent Locomotion for Assisted Virtual Reality Navigation. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 239–247. https://doi.org/10.1109/ISMAR52148.2021.00039
[399]
✱ Yan Yixian, Kazuki Takashima, Anthony Tang, Takayuki Tanno, Kazuyuki Fujita, and Yoshifumi Kitamura. 2020. ZoomWalls: Dynamic Walls That Simulate Haptic Infrastructure for Room-Scale VR World. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (Virtual Event, USA) (UIST ’20). Association for Computing Machinery, New York, NY, USA, 223–235. https://doi.org/10.1145/3379337.3415859
[400]
✱ Hwanmoo Yong, Jisuk Lee, and Jongeun Choi. 2019. Emotion Recognition in Gamers Wearing Head-mounted Display. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 1251–1252. https://doi.org/10.1109/VR.2019.8797736
[401]
Boram Yoon, Hyung-il Kim, Gun A. Lee, Mark Billinghurst, and Woontack Woo. 2019. The Effect of Avatar Appearance on Social Presence in an Augmented Reality Remote Collaboration. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Vol. 1. IEEE, New York, NY, USA, 547–556. https://doi.org/10.1109/VR.2019.8797719
[402]
✱ Leonard Yoon, Dongseok Yang, Jaehyun Kim, ChoongHo Chung, and Sung-Hee Lee. 2022. Placement Retargeting of Virtual Avatars to Dissimilar Indoor Environments. IEEE Transactions on Visualization and Computer Graphics 28, 3 (2022), 1619–1633. https://doi.org/10.1109/TVCG.2020.3018458
[403]
✱ Emilie Yu, Rahul Arora, Tibor Stanko, J. Andreas Bærentzen, Karan Singh, and Adrien Bousseau. 2021. CASSIE: Curve and Surface Sketching in Immersive Environments. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 190, 14 pages. https://doi.org/10.1145/3411764.3445158
[404]
Shanxin Yuan, Qi Ye, Bjorn Stenger, Siddhant Jain, and Tae-Kyun Kim. 2017. BigHand2.2M Benchmark: Hand Pose Dataset and State of the Art Analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, New York, NY, USA.
[405]
✱ Hong Zeng, Xingxi He, and Honghu Pan. 2021. Implementation of escape room system based on augmented reality involving deep convolutional neural network. Virtual Reality 25, 3 (Sept. 2021), 585–596. https://doi.org/10.1007/s10055-020-00476-0
[406]
✱ Xue Zhang, Gene Cheung, Patrick Le Callet, and Jack Z. G. Tan. 2020. Sparse Directed Graph Learning for Head Movement Prediction in 360 Video Streaming. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vol. 1. IEEE, New York, NY, USA, 2678–2682. https://doi.org/10.1109/ICASSP40776.2020.9053598
[407]
Enhao Zhang and Nikola Banovic. 2021. Method for Exploring Generative Adversarial Networks (GANs) via Automatically Generated Image Galleries. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 76, 15 pages. https://doi.org/10.1145/3411764.3445714
[408]
Yinda Zhang, Shuran Song, Ping Tan, and Jianxiong Xiao. 2014. PanoContext: A Whole-Room 3D Context Model for Panoramic Scene Understanding. In Computer Vision – ECCV 2014, David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars (Eds.). Springer International Publishing, Cham, 668–686.
[409]
✱ Junhong Zhao, Andrew Chalmers, and Taehyun Rhee. 2021. Adaptive Light Estimation using Dynamic Filtering for Diverse Lighting Conditions. IEEE Transactions on Visualization and Computer Graphics 27, 11 (2021), 4097–4106. https://doi.org/10.1109/TVCG.2021.3106497
[410]
✱ Lizhi Zhao, Xuequan Lu, Min Zhao, and Meili Wang. 2021. Classifying In-Place Gestures with End-to-End Point Cloud Learning. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 229–238. https://doi.org/10.1109/ISMAR52148.2021.00038
[411]
✱ Lili Zhao, Meng Zhang, Wenyi Wang, Rumin Zhang, Liaoyuan Zeng, and Jianwen Chen. 2018. High Efficient VR Video Coding Based on Auto Projection Selection Using Transferable Features. In 2018 IEEE Visual Communications and Image Processing (VCIP), Vol. 1. IEEE, New York, NY, USA, 1–4. https://doi.org/10.1109/VCIP.2018.8698628
[412]
✱ Shang Zhao, Xiao Xiao, Qiyue Wang, Xiaoke Zhang, Wei Li, Lamia Soghier, and James Hahn. 2020. An Intelligent Augmented Reality Training Framework for Neonatal Endotracheal Intubation. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Vol. 1. IEEE, New York, NY, USA, 672–681. https://doi.org/10.1109/ISMAR50242.2020.00097
[413]
✱ Zhenjie Zhao and Xiaojuan Ma. 2018. A Compensation Method of Two-Stage Image Generation for Human-AI Collaborated In-Situ Fashion Design in Augmented Reality Environment. In 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Vol. 1. IEEE, New York, NY, USA, 76–83. https://doi.org/10.1109/AIVR.2018.00018
[414]
Hengshuang Zhao, Xiaojuan Qi, Xiaoyong Shen, Jianping Shi, and Jiaya Jia. 2017. ICNet for Real-Time Semantic Segmentation on High-Resolution Images. CoRR abs/1704.08545 (2017). arXiv:1704.08545 http://arxiv.org/abs/1704.08545
[415]
Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. 2017. Places: A 10 Million Image Database for Scene Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (2017).
[416]
✱ Bing Zhou and Sinem Güven. 2020. Fine-Grained Visual Recognition in Mobile Augmented Reality for Technical Support. IEEE Transactions on Visualization and Computer Graphics 26, 12 (2020), 3514–3523. https://doi.org/10.1109/TVCG.2020.3023635
[417]
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, New York, NY, USA.
[418]
✱ Katja Zibrek, Elena Kokkinara, and Rachel McDonnell. 2017. Don’t Stand so Close to Me: Investigating the Effect of Control on the Appeal of Virtual Humans Using Immersion and a Proximity-Based Behavioral Task. In Proceedings of the ACM Symposium on Applied Perception (Cottbus, Germany) (SAP ’17). Association for Computing Machinery, New York, NY, USA, Article 3, 11 pages. https://doi.org/10.1145/3119881.3119887
[419]
Katja Zibrek, Elena Kokkinara, and Rachel McDonnell. 2018. The Effect of Realistic Appearance of Virtual Characters in Immersive Environments - Does the Character’s Personality Play a Role? IEEE Transactions on Visualization and Computer Graphics 24, 4 (2018), 1681–1690. https://doi.org/10.1109/TVCG.2018.2794638
[420]
Katja Zibrek, Sean Martin, and Rachel McDonnell. 2019. Is Photorealism Important for Perception of Expressive Virtual Humans in Virtual Reality? ACM Trans. Appl. Percept. 16, 3, Article 14 (sep 2019), 19 pages. https://doi.org/10.1145/3349609
[421]
✱ Sahba Zojaji, Christopher Peters, and Catherine Pelachaud. 2020. Influence of Virtual Agent Politeness Behaviors on How Users Join Small Conversational Groups. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (Virtual Event, Scotland, UK) (IVA ’20). Association for Computing Machinery, New York, NY, USA, Article 59, 8 pages. https://doi.org/10.1145/3383652.3423917
[422]
✱ Yiming Zuo, Weichao Qiu, Lingxi Xie, Fangwei Zhong, Yizhou Wang, and Alan L. Yuille. 2019. CRAVES: Controlling Robotic Arm With a Vision-Based Economic System. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1. IEEE, New York, NY, USA, 4209–4218. https://doi.org/10.1109/CVPR.2019.00434
