This PDF file contains the front matter associated with SPIE Proceedings Volume 9392, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
College football recruiting is a competitive process. Athletic administrations attempt to gain an edge by bringing recruits to a home game, highlighting the atmosphere unique to campus. This is, however, not always possible, since most recruiting efforts happen in the off-season. Instead, they convey the football game experience through video recordings and visits to football facilities. While these substitutes provide a general idea of a game, they cannot capture the feeling of playing while cheered on by a crowd of 55,000 people. To address this challenge and improve the recruitment process, the Iowa State University (ISU) athletic department and the Virtual Reality Applications Center (VRAC) teamed up to build an alternative to the game-day experience using the world's highest-resolution six-sided virtual reality (VR) environment, the C6, and a portable low-cost head-mounted display (HMD) system. This paper presents the techniques used in the development of the immersive and portable VR environments, followed by validation of the work through a formal user study quantifying immersion and presence. Results from the user study indicate that both the HMD and the C6 are an improvement over the standard practice of showing videos to convey the atmosphere of an ISU Cyclone football game. In addition, the C6 and HMD scored similarly in the immersion and presence categories. This indicates that the low-cost portable HMD version of the application produces minimal trade-off in experience for a fraction of the cost.
We present an application for Android phones or tablets called “archAR” that uses augmented reality as an alternative,
portable way of viewing archaeological information from UCSD’s Levantine Archaeology Laboratory. archAR provides
a unique experience of flying through an archaeological dig site in the Levantine area and exploring the artifacts
uncovered there. With a Google Nexus tablet and Qualcomm’s Vuforia API, we use an image target as a map and
overlay a three-dimensional model of the dig site onto it, augmenting reality such that we are able to interact with the
plotted artifacts. The user can physically move the Android device around the image target and see the dig site model
from any perspective. The user can also move the device closer to the model in order to “zoom” into the view of a
particular section of the model and its associated artifacts. This is especially useful, as the dig site model and the
collection of artifacts are very detailed. The artifacts are plotted as points, colored by type. The user can touch the
virtual points to trigger a popup information window that contains details of the artifact, such as photographs, material
descriptions, and more.
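The touch-to-popup interaction the abstract describes amounts to picking the plotted artifact point nearest to the ray cast from the user's touch. The sketch below is a hypothetical illustration of that selection step, not archAR's actual code; the `Artifact` type, field names, and the 5° angular tolerance are all assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str
    position: tuple  # (x, y, z) in model space
    kind: str        # artifact type, used for point color

def pick_artifact(ray_origin, ray_dir, artifacts, max_angle_deg=5.0):
    """Return the artifact whose plotted point lies closest (angularly)
    to the touch ray, or None if nothing falls within the tolerance."""
    best, best_angle = None, math.radians(max_angle_deg)
    norm = math.sqrt(sum(c * c for c in ray_dir))
    d = tuple(c / norm for c in ray_dir)
    for a in artifacts:
        v = tuple(p - o for p, o in zip(a.position, ray_origin))
        vlen = math.sqrt(sum(c * c for c in v))
        if vlen == 0:
            continue
        cosang = sum(dc * vc for dc, vc in zip(d, v)) / vlen
        angle = math.acos(max(-1.0, min(1.0, cosang)))
        if angle < best_angle:
            best, best_angle = a, angle
    return best
```

A pick that succeeds would then open the popup window with the artifact's photographs and material descriptions; an angular (rather than distance) threshold keeps far-away points as easy to select as near ones.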
While many areas of VR have made significant advances, visual rendering in VR often does not keep up with the state of the art. There are many reasons for this, but one way to alleviate some of the issues is to use ray tracing instead of rasterization for image generation. Contrary to popular belief, ray tracing is nowadays a realistic, competitive technology. This paper examines the pros and cons of ray tracing and demonstrates its feasibility using the example of a helicopter flight simulator image generator.
This paper frames issues of trans-scalar perception in visualization, reflecting on the limits of the human senses, particularly those related to space, and describes planetarium shows, presentations, and exhibit experiences of spatial immersion and real-time interaction.
Driving simulation (DS) and Virtual Reality (VR) share the same technologies for visualization and 3D vision and may use the same techniques for head-movement tracking. They also experience similar difficulties when rendering the displacements of the observer in virtual environments, especially when these displacements are carried out using driver commands such as steering wheels, joysticks, and nomad devices. High transport delay (the time lag between an action and the corresponding rendering cues) and/or visual-vestibular conflict (the discrepancy perceived by the human visual and vestibular systems when driving or displacing using a control device) induce the so-called simulation sickness.
While the visual transport delay can be efficiently reduced using a high frame rate, the visual-vestibular conflict is inherent to VR when motion platforms are not used. In order to study the impact of displacements on simulation sickness, we tested various driving scenarios in Renault’s 5-sided ultra-high-resolution CAVE. First results indicate that low-speed displacements with longitudinal and lateral accelerations below given perception thresholds are well accepted by a large number of users; relatively high values are accepted only by experienced users and induce VR-induced symptoms and effects (VRISE) in novice users, with the worst-case scenario corresponding to rotational displacements. These results will be used for optimization techniques at Arts et Métiers ParisTech to reduce motion sickness in virtual environments for industrial, research, educational, and gaming applications.
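The threshold-based acceptance the study describes can be sketched as a simple screening of a displacement profile against perception thresholds. The sketch below is a hypothetical illustration only: the threshold values and function names are assumptions, not figures reported by the authors.

```python
# Hypothetical comfort screen: flag samples whose longitudinal or lateral
# acceleration exceeds an assumed perception threshold.  The numeric
# thresholds below are illustrative placeholders, not the study's values.
LONG_THRESHOLD = 0.8  # m/s^2, assumed
LAT_THRESHOLD = 0.5   # m/s^2, assumed

def flag_discomfort(samples):
    """samples: list of (a_long, a_lat) accelerations in m/s^2.
    Returns the fraction of samples exceeding either threshold,
    a crude proxy for VRISE risk in a displacement scenario."""
    if not samples:
        return 0.0
    bad = sum(1 for a_long, a_lat in samples
              if abs(a_long) > LONG_THRESHOLD or abs(a_lat) > LAT_THRESHOLD)
    return bad / len(samples)
```

A scenario designer could use such a screen to keep most of a low-speed trajectory below threshold, reserving above-threshold (and especially rotational) segments for experienced users.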
This paper describes an experimental platform used to evaluate the performance of individuals training with immersive physiological games. The platform is embedded in an immersive CAVE virtual reality environment and consists of a base frame with actuators with three degrees of freedom, a sensor array interface, and physiological sensors. Physiological data on breathing, galvanic skin response (GSR), and pressure exerted by the user's hand, together with a subjective questionnaire, were collected during the experiments. The theoretical background draws on software engineering, biomedical engineering in the field of ergonomics, and creative technologies to present this case study: an evaluation of a vehicular simulator located inside the CAVE. The analysis of the simulator uses physiological data from the drivers obtained during a period of rest and after the experience, with and without movements of the simulator. Images from the screen are also captured over time during the embedded experience, and the collected data are examined through physiological data visualization (average-frequency and RMS graphics), reinforced by the subjective questionnaire as a record of the strong lived experience provided by the technological apparatus. The immersion experience performed inside the CAVE makes it possible to replicate behaviors from physical spaces within a data space enhanced by physiological properties. In this context, the biocybrid condition is expanded beyond art and entertainment, as it is applied to automotive engineering and biomedical engineering. In fact, the kinesthetic sensations, amplified by synesthesia, replicate the sensation of displacement in the interior of an automobile, as well as the vibrations and vertical movements typical of a vehicle, different speeds, collisions, etc. The contribution of this work is the possibility of tracing a stress-analysis protocol for drivers operating a vehicle, capturing affective behaviors from physiological data combined with embedded simulation in Mixed Reality.
Augmented Reality enables people to remain connected with the physical environment they are in and invites them to look at the world from new and alternative perspectives. There has been increasing interest in emergency evacuation applications for mobile devices, and nearly all smartphones today are Wi-Fi and GPS enabled. In this paper, we propose a novel emergency evacuation system that helps people safely evacuate a building in an emergency situation. It further enhances knowledge and understanding of where the exits are in the building and of safe evacuation procedures. We applied mobile augmented reality (mobile AR) to create an application with the Unity 3D game engine. We show how the mobile AR application displays a 3D model of the building and an animation of people evacuating, using markers and a web camera. The system gives a visual representation of the building in 3D space, allowing people to see where its exits are through a smartphone or tablet. Pilot studies conducted with the system showed partial success and demonstrated the effectiveness of the application in emergency evacuation. Our computer vision methods give good results when the markers are close to the camera, but accuracy decreases as the markers move farther away.
This paper describes an environmental feedback device (EFD) control system aimed at simplifying the VR development cycle. The Programmable Immersive Peripheral Environmental System (PIPES) affords VR developers a custom approach to programming and controlling EFD behaviors while relaxing the required knowledge and expertise of electronic systems. PIPES has been implemented for the Unity engine and features EFD control using the Arduino integrated development environment. PIPES was installed and tested on two VR systems: a large-format CAVE system and an Oculus Rift HMD system. A photocell-based end-to-end latency experiment was conducted to measure latency within the system. This work extends previously unpublished prototypes of a similar design. The development and experiments described in this paper are part of the VR community's goal of understanding and applying environmental effects to virtual environments that ultimately add to users' perceived presence.
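A photocell-based end-to-end latency measurement typically pairs each software trigger (e.g. the command that should light an EFD or a screen patch) with the first photocell detection that follows it. The sketch below is a hypothetical illustration of that pairing step under the assumption that both event streams share one monotonic clock; it is not the paper's actual analysis code.

```python
def end_to_end_latency_ms(trigger_times, detect_times):
    """Pair each software trigger with the first photocell detection
    that follows it; return per-trial latencies in milliseconds.
    Both inputs are sorted timestamps in seconds on a shared
    monotonic clock."""
    latencies, di = [], 0
    for t in trigger_times:
        # Skip detections that happened before this trigger.
        while di < len(detect_times) and detect_times[di] < t:
            di += 1
        if di == len(detect_times):
            break  # no detection left for this trigger
        latencies.append((detect_times[di] - t) * 1000.0)
        di += 1  # each detection is consumed by one trigger
    return latencies
```

Averaging these per-trial values (and reporting their spread) is the usual way such an experiment summarizes how much delay the control path adds.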
Pushing the Boundaries in Data, Dimensions, and Cognition
A familiar realm in the world of two-dimensional art is the craft of taking a flat canvas and creating, through color, size, and perspective, the illusion of a three-dimensional space. Using well-explored tricks of logic and sight, impossible landscapes such as those by surrealists de Chirico or Salvador Dalí seem to be windows into new and incredible spaces which appear to be simultaneously feasible and utterly nonsensical. As real-time 3D imaging becomes increasingly prevalent as an artistic medium, this process takes on an additional layer of depth: no longer is two-dimensional space restricted to strategies of light, color, line and geometry to create the impression of a three-dimensional space. A digital interactive environment is a space laid out in three dimensions, allowing the user to explore impossible environments in a way that feels very real. In this project, surrealist two-dimensional art was researched and reimagined: what would stepping into a de Chirico or a Magritte look and feel like, if the depth and distance created by light and geometry were not simply single-perspective illusions, but fully formed and explorable spaces? 3D environment-building software is allowing us to step into these impossible spaces in ways that 2D representations leave us yearning for. This art project explores what we gain, and what gets left behind, when these impossible spaces become doors, rather than windows. Using sketching, Maya 3D rendering software, and the Unity Engine, surrealist art was reimagined as a fully navigable real-time digital environment. The surrealist movement and its key artists were researched for their use of color, geometry, texture, and space and how these elements contributed to their work as a whole, which often conveys feelings of unexpectedness or uneasiness. The end goal was to preserve these feelings while allowing the viewer to actively engage with the space.
Body image/body schema (BIBS) is within the larger realm of embodied cognition. Its interdisciplinary literature can
inspire Virtual Reality (VR) researchers and designers to develop novel ideas and provide them with approaches to
human perception and experience. In this paper, we introduce six fundamental ideas for designing interactions in VR, derived from the BIBS literature, that demonstrate how the mind is embodied. We discuss our own research, ranging from two mature works to a prototype, to support explorations of VR interaction design from a BIBS approach. Based on our
experiences, we argue that incorporating ideas of embodiment into design practices requires a shift in the perspective or
understanding of the human body, perception and experiences, all of which affect interaction design in unique ways. The
dynamic, interactive and distributed understanding of cognition guides our approach to interaction design, where the
interrelatedness and plasticity of BIBS play a crucial role.
A renaissance in the development of virtual (VR), augmented (AR), and mixed reality (MR) technologies with a focus on consumer and industrial applications is underway. As data becomes ubiquitous in our lives, a need arises to revisit the role of our bodies, explicitly in relation to data or information. Our observation is that VR/AR/MR technology development is a vision of the future framed in terms of promissory narratives. These narratives develop alongside the underlying enabling technologies and create new use contexts for virtual experiences. It is a vision rooted in the combination of responsive, interactive, dynamic, sharable data streams, and augmentation of the physical senses for capabilities beyond those normally humanly possible. In parallel to the varied definitions of information and approaches to elucidating information behavior, a myriad of definitions and methods of measuring and understanding presence in virtual experiences exist. These and other ideas will be tested by designers, developers and technology adopters as the broader ecology of head-worn devices for virtual experiences evolves in order to reap the full potential and benefits of these emerging technologies.
Augmented Reality (AR) offers tremendous potential for a wide range of fields, including entertainment, medicine, and engineering. AR allows digital models to be integrated with a real scene (typically viewed through a video camera) to provide useful information in a variety of contexts. The difficulty of authoring and modifying scenes is one of the biggest obstacles to widespread adoption of AR: 3D models must be created, textured, oriented, and positioned to create the complex overlays viewed by a user, which often requires using multiple software packages as well as performing model format conversions. In this paper, a new authoring tool is presented that uses a novel method to capture product assembly steps performed by a user with a depth+RGB camera. Through a combination of computer vision and image processing techniques, each individual step is decomposed into objects and actions. The objects are matched to those in a predetermined geometry library and the actions are turned into animated assembly steps. The resulting instruction set is then generated with minimal user input. A proof of concept is presented to establish the method's viability.
We propose a method of marker-less AR that uses line segment features. Estimating the camera pose is an important part of any AR system. In most conventional marker-less AR systems, feature-point matching between a model and its image is required for camera pose estimation; however, a sufficient number of corresponding points is not always detected to estimate an accurate camera pose. To solve this problem, we propose the use of line segment features, which can often be detected even when only a few feature points can be. In this paper, we present a marker-less AR system that uses line segment features for camera pose estimation, including a novel descriptor of the line segment feature that enables fast computation of the pose. We also construct a database containing a k-d tree of line features and 3D line segment positions, so that 2D-3D line segment correspondences can be found between the input image and the database, allowing us to estimate the camera pose and perform AR. We demonstrate that the proposed method can estimate the camera pose and provide robust marker-less AR in situations where point-matching methods fail.
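The database lookup the abstract describes (a k-d tree over line descriptors, each paired with a 3D segment) can be sketched as a nearest-neighbour search in descriptor space. The code below is an assumed illustration using SciPy's `cKDTree`, with invented 2-D descriptors and a made-up distance threshold; the paper's actual descriptor and matching criteria are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_line_db(descriptors, endpoints_3d):
    """Index line-segment descriptors in a k-d tree; keep each segment's
    3D endpoints alongside, so a descriptor match yields a 2D-3D
    correspondence."""
    return cKDTree(np.asarray(descriptors, dtype=float)), np.asarray(endpoints_3d, dtype=float)

def match_lines(tree, endpoints_3d, query_descriptors, max_dist=0.5):
    """For each line descriptor extracted from the input image, find the
    nearest database descriptor; accept it only if the descriptor
    distance is below max_dist (an assumed threshold)."""
    dists, idx = tree.query(np.asarray(query_descriptors, dtype=float), k=1)
    return [(q, endpoints_3d[i])                 # (query line index, matched 3D segment)
            for q, (d, i) in enumerate(zip(dists, idx))
            if d <= max_dist]
```

Given enough accepted 2D-3D line correspondences, the camera pose would then be estimated (e.g. by a Perspective-n-Line solver), which is the step the abstract's system performs after this lookup.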
Virtual Reality (VR) leads to realistic experimental situations, while enabling researchers to have deterministic control over these situations and to precisely measure participants' behavior. However, because more realistic and complex situations can be implemented, important questions arise concerning the validity and representativeness of the observed behavior with reference to a real situation. One example is the investigation of a critical (virtually dangerous) situation, in which the participant knows that no actual threat is present in the simulated situation and might thus exhibit a behavioral response that is far from reality. This poses serious problems, for instance in training situations, in terms of transfer of learning to a real situation. Facing this difficult question, it seems necessary to study the relationships between three factors: immersion (physical realism), presence (psychological realism), and behavior. We propose a conceptual framework in which presence is a necessary condition for the emergence of behavior that is representative of what is observed in real conditions. Presence itself depends not only on the physical immersive characteristics of the Virtual Reality setup, but also on contextual and psychological factors.
Head-up displays (HUDs) are being applied to automobiles. A HUD presents information as a distant virtual image on the windshield, and existing HUDs usually display planar information. If imagery registered to the road scene, as in Augmented Reality (AR), were displayed on the HUD, the driver could obtain information more efficiently. Achieving this requires a HUD covering a large viewing field, which existing HUDs cannot provide. We have therefore proposed a system consisting of a projector and many small-diameter convex lenses. However, the observed virtual image suffers from blurring and distortion. In this paper, we propose two methods to reduce this blurring and distortion. First, to reduce blurring, the distance between the screen and each lens in the lens array is adjusted: because lenses farther from the center of the array produce more blur, we inferred that the cause of blurring is the field curvature of the lenses in the array. Second, to avoid distortion, the lens array is curved spherically: because lenses farther from the center of the array produce more distortion, we inferred that the cause of distortion is the incident angle of the rays. We confirmed the effectiveness of both methods.
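The reason a HUD's image appears far away follows from thin-lens optics: a screen placed just inside the focal length of a convex lens produces a magnified virtual image well behind the lens. The sketch below is a generic textbook illustration of that relation, not the paper's optical design; the example focal length and screen distance are assumptions.

```python
def virtual_image_distance(f_mm, screen_mm):
    """Thin-lens sketch of why a HUD image appears distant: with the
    screen (object) inside the focal length of a convex lens, the image
    distance comes out negative, i.e. a virtual image on the same side
    as the screen.  Convention used: 1/f = 1/d_o + 1/d_i."""
    if screen_mm == f_mm:
        raise ValueError("screen at the focal point: image at infinity")
    return 1.0 / (1.0 / f_mm - 1.0 / screen_mm)
```

For instance, a screen 40 mm from a 50 mm focal-length lens yields an image distance of -200 mm: a virtual image five times larger, perceived 200 mm behind the lens. Per-lens deviations from this ideal (field curvature, oblique incidence) are exactly what the paper's two corrections target.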