Touch-sensitive screens enable natural interaction without any instrumentation and support tangible feedback on the touch surface. In particular, multi-touch interaction has proven its usability for 2D tasks, but the challenges of exploiting these technologies in virtual reality (VR) setups have rarely been studied. In this paper we address the challenge of allowing users to interact with stereoscopically displayed virtual environments when the input is constrained to a 2D touch surface. During interaction with a large-scale touch display, a user passes through three different states: (1) beyond arm-reach distance from the surface, (2) at arm-reach distance, and (3) interaction. We have analyzed the user's ability to discriminate stereoscopic display parallaxes while moving through these states, i.e., whether objects can be imperceptibly shifted onto the interactive surface and thus become accessible for natural touch interaction. Our results show that the detection thresholds for such manipulations are related to both user motion and stereoscopic parallax, and that users have difficulty discriminating whether they have touched an object when tangible feedback is expected.
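The abstract describes shifting objects onto the touch surface by manipulating their stereoscopic parallax. As a minimal sketch of the underlying geometry (the formula is standard stereo-rendering geometry and the function and variable names are illustrative assumptions, not taken from the paper), the on-screen parallax of a point is determined by the eye separation, the screen distance, and the object distance:

```python
def screen_parallax(eye_sep: float, screen_dist: float, obj_dist: float) -> float:
    """On-screen parallax (same units as eye_sep) of a point at obj_dist
    from the viewer, for a display plane at screen_dist.
    Positive: object appears behind the screen; negative: in front of it."""
    return eye_sep * (obj_dist - screen_dist) / obj_dist

# Example: eyes 6.5 cm apart, screen 2 m away, object floating 0.5 m in front
p = screen_parallax(0.065, 2.0, 1.5)  # negative parallax, object in front
```

Shifting an object onto the surface means driving its parallax toward zero (obj_dist → screen_dist); if this is done gradually while the user moves, the change may stay below the detection thresholds the paper measures.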
IEEE Transactions on Visualization and Computer Graphics, 2015
Redirected walking allows users to walk through a large-scale immersive virtual environment (IVE) while physically remaining in a reasonably small workspace. To this end, manipulations are applied to the virtual camera motions so that the user's self-motion in the virtual world differs from their movements in the real world. Previous work found that the human perceptual system tolerates a certain amount of inconsistency between proprioceptive, vestibular, and visual sensations in IVEs, and even compensates for slight discrepancies with recalibrated motor commands. Experiments showed that users are not able to detect the inconsistency if their physical path is bent with a radius of at least 22 meters during virtual straightforward movements. If redirected walking is applied in a smaller workspace, the manipulations become noticeable, but users are still able to move through a potentially infinitely large virtual world by walking. For this semi-natural form of locomotion, the question arises whether such manipulations impose cognitive demands on the user that may compete with other tasks in IVEs for finite cognitive resources. In this article we present an experiment in which we analyze the mutual influence between redirected walking and verbal as well as spatial working memory tasks using a dual-tasking method. The results show an influence of redirected walking on both verbal and spatial working memory tasks, and we also found an effect of the cognitive tasks on walking behavior. We discuss the implications and provide guidelines for using redirected walking in virtual reality laboratories.
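The 22-meter figure above corresponds to a curvature manipulation: the virtual camera is rotated slightly as the user walks, so that following a straight virtual line bends the physical path onto a circular arc. As a brief sketch (function and parameter names are illustrative assumptions, not from the article), the rotation injected per walked step follows directly from the arc geometry:

```python
import math

def curvature_rotation(step_length: float, radius: float) -> float:
    """Camera yaw (degrees) to inject per step of step_length meters so
    that the user's physical path follows a circle of the given radius.
    Arc geometry: rotation in radians = arc length / radius."""
    return math.degrees(step_length / radius)

# At the reported detection threshold (radius of 22 m), a 0.7 m step
# requires only about 1.8 degrees of injected rotation:
yaw = curvature_rotation(0.7, 22.0)
```

Smaller workspaces force smaller radii and therefore larger per-step rotations, which is why the manipulation becomes noticeable there.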
Recent developments in display technology provide new possibilities for engaging users in the interactive exploration of three-dimensional (3D) virtual environments (VEs). Tracking technologies such as the Microsoft Kinect and emerging multi-touch interfaces enable inexpensive, low-maintenance interactive setups while providing portable solutions for engaging presentations and exhibitions. In this poster we describe an extension of the smARTbox, a responsive, touch-enabled, stereoscopic out-of-the-box technology for interactive setups. We extended the smARTbox by making the entire setup portable, which provides a new interaction experience when exploring 3D data sets. The portable tracked multi-touch interface supports two different interaction paradigms: exploration by multi-touch gestures as well as exploration by lateral movements of the entire setup. Hence, typical rotation and panning gestures can be implemented via multi-touch input, but also via actual movements of the setup.
Animating virtual characters is a complex task that requires professional animators and performers, expensive motion capture systems, or considerable amounts of time to generate convincing results. In this paper we introduce the SmurVEbox, a cost-effective animation system that encompasses many important aspects of animating virtual characters by providing a novel shared user experience. The SmurVEbox is a collaborative environment for generating character animations in real time, which has the potential to enhance the computer animation process. Our setup allows animators and performers to cooperate on the same virtual animation sequence in real time. Performers are able to communicate with the animator in the real space while simultaneously perceiving the effects of their actions on the virtual character in the virtual space. The animator can refine the actions of a performer in real time so that both collaborate on the same animation of a virtual character. We describe the setup and present a simple application.
Ballistic phases in hand movement trajectories in reach-to-grasp tasks are recorded for categorization and analysis in order to create a predictor of target objects. The results suggest that the index of difficulty (ID), according to Fitts' law, has no influence on the speed of reaching movements, but seems to determine the shape of the velocity-versus-time relation in the virtual environment (VE). Closed-loop and open-loop conditions in ballistic aiming movements result in similar effects between maximum speed and distance of objects. The results provide important findings for interaction with 3D objects as well as human-robot collaboration, enabling more robust and efficient interaction techniques in real-time scenarios.
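The index of difficulty referenced above comes from Fitts' law. As a brief sketch (the Shannon formulation shown here is the most common variant; the abstract does not state which formulation the study used), the ID depends only on the distance to the target and the target width:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Fitts' index of difficulty in bits, Shannon formulation:
    ID = log2(D / W + 1)."""
    return math.log2(distance / width + 1.0)

# Doubling the distance to a 5 cm target raises the ID by less than one bit:
id_near = index_of_difficulty(0.30, 0.05)  # ~2.81 bits
id_far = index_of_difficulty(0.60, 0.05)   # ~3.70 bits
```

The finding that ID leaves the peak speed of the ballistic phase unaffected, while shaping the velocity profile, is what makes early trajectory data usable for predicting the target.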
Papers by Gerd Bruder