Researcher and software developer employing psychological experimental designs and cognitive neuroscience methods to test the impacts of embodied interfaces in virtual environments on learning, cooperation, and empathy. Ultimately, my goal is to work towards augmented social cognition: enhancing our ability to understand one another through embodied interfaces built on theoretical models of empathy from neuroscience and psychology.
New sensor and recording devices such as binaural microphones, stereoscopic cameras, eye trackers, and motion capture sensors can track aspects of sensorimotor experience and simulate a model of an individual's embodied reality. Moreover, head-mounted display technologies allow a separate individual to quite literally wear the embodied sense perception of another, superimposed on top of their own. This affords a unique spatial alignment of first-person embodied sensory perception between two individuals. For example, by pairing sensor equipment with head-mounted display technology, two real people can engage in a body swap, seeing and hearing from one another's embodied point of view in real time (The Machine to Be Another, BeAnotherLab, 2014). In First-Person Squared, users wear a virtual-reality head-mounted display that presents a semi-transparent video overlay of their partner's first-person visual perspective, such that the user can see through the video overlay of their partner's body towards his or her own body. Motion capture tracks the two users' hand movements in real time and displays visual effects that guide them to move together, in and out of leader-follower dynamics towards joint improvisation. The system is designed to encourage rhythmic, synchronous movements by comparing the two users' movements in both physical shape and temporal matching. This provides a technological research tool to begin to explore the impacts of capturing and transmitting aspects of phenomenological experience from one person to another. Another tool for transmitting embodied sensorimotor perception between individuals uses eye tracking to capture saccades and fixations during various tasks. Paint With Me (Gerry, 2017) is a system in which users see and hear from the perspective of a painter while painting along with her on their own physical canvas, augmented by real-object tracking of the hand and paintbrush. A foveated imaging technique was added to Paint With Me to indicate where the painter was looking while she paints, and this gaze transfer technique effectively promoted gaze-following behavior in users, as well as greater self-reported learning outcomes. A final technique for multisensory perspective taking is auditory perspective-taking. In the Augmented Design to Embody a Piano Teacher (ADEPT) system, users can lean into a virtual sphere to hear the same piano piece played from their teacher's point of view while embodying her visual perspective as a semi-transparent overlay on top of their own (Gerry, Dahl, & Serafin, 2019). Learning how to design these virtual environments to accurately render and display an embodied, phenomenological experience as lived by one person requires an in-depth analysis of the person's lived experience beyond what the sensing and recording equipment alone can capture. However, this opens possibilities for how these technologies can complement a rigorous first-person scientific methodology by modeling and simulating that reality and using experiential fidelity as the main indicator for an effective evaluation.
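The abstract does not specify how the "physical shape and temporal matching" comparison is computed; the sketch below is a minimal, hypothetical illustration of how two tracked hand trajectories might be scored for spatial proximity and timing alignment over short windows. The function name, window length, and weighting are assumptions for illustration, not the system's actual implementation.

```python
import numpy as np

def synchrony_score(hands_a, hands_b, window=90):
    """Illustrative synchrony measure for two hand trajectories.

    hands_a, hands_b: arrays of shape (frames, 3) holding palm positions
    sampled at the same rate. Returns one score per window, combining
    spatial proximity (shape matching) with velocity-profile correlation
    (temporal matching).
    """
    scores = []
    for start in range(0, len(hands_a) - window, window):
        a = hands_a[start:start + window]
        b = hands_b[start:start + window]
        # Spatial term: mean distance between the trajectories, mapped to
        # (0, 1] so that closer hands score higher.
        spatial = 1.0 / (1.0 + np.mean(np.linalg.norm(a - b, axis=1)))
        # Temporal term: correlation of frame-to-frame speed profiles.
        speed_a = np.linalg.norm(np.diff(a, axis=0), axis=1)
        speed_b = np.linalg.norm(np.diff(b, axis=0), axis=1)
        temporal = np.nan_to_num(np.corrcoef(speed_a, speed_b)[0, 1])
        # Equal weighting of the two terms is an arbitrary choice here.
        scores.append(0.5 * spatial + 0.5 * (temporal + 1) / 2)
    return np.array(scores)
```

A score like this, computed continuously, is one plausible input for the visual effects that guide the pair toward joint improvisation.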
This paper presents the ongoing development of a proof-of-concept, adaptive system that uses a neurocognitive signal to facilitate efficient performance in a Virtual Reality visual search task. The Levity system measures and interactively adjusts the display of a visual array during a visual search task based on the user's level of cognitive load, measured with a 16-channel EEG device. Future developments will validate the system and evaluate its ability to improve search efficiency by detecting and adapting to a user's cognitive demands.
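The adaptation logic is not detailed in the abstract; below is a minimal sketch of one plausible rule, assuming a normalized cognitive-load index has already been derived from the EEG signal. The thresholds, step size, and function name are assumptions, not Levity's actual parameters.

```python
def adapt_array_size(load_index, current_set_size,
                     low=0.3, high=0.7, min_size=4, max_size=32):
    """Illustrative adaptation rule: shrink the search array when the
    estimated cognitive load is high, and grow it when load is low.

    load_index: normalized cognitive-load estimate in [0, 1], e.g. a
    scaled band-power ratio computed from the EEG channels.
    """
    if load_index > high:
        current_set_size = max(min_size, current_set_size - 2)
    elif load_index < low:
        current_set_size = min(max_size, current_set_size + 2)
    return current_set_size
```

A rule of this kind would be called once per trial (or per update interval), feeding the adjusted set size back into the rendering of the visual array.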
Rhythmic, synchronous social interactions increase the sense of affiliation, feelings of togetherness, and empathy. Additionally, music has been shown to synchronize emotional experience between performer and audience, as well as shared group emotions in listeners. Combined, the evocative power of music on shared emotion and the effects of moving together create a compelling case for using rhythmic, musical training tasks to increase empathy. This project allows users to virtually embody an expert pianist, initially being led by the expert and eventually exploring instances of co-confident motion during Joint Improvisation (JI), a task in which the participant is told to move with their partner without any clear leader or follower. Co-confident motion was analyzed using Leap Motion hand tracking with an algorithm that calculates a match score for hand and finger position, velocity, morphology (angle, tilt), and smoothness (low jitter). Instances of co-confident motion were scored in terms of frequency, duration, quality (smoothness), and complexity. The hypothesis is that such musical performance training while virtually embodying an expert can not only increase fine motor skill while playing, but can also serve as a paradigm to explore sensory prediction, attributions of agency, and joint intentionality, and the potential modulating effects these mechanisms may have on empathic outcomes. Specifically, this paradigm endeavors to explore how and when leader-follower dynamics emerge and fade in joint improvisation, and how this relates to self-other distinctions and self-other overlap in empathic exchanges.
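The abstract names the components of the match score but not how instances of co-confident motion are segmented and counted. The sketch below shows one hypothetical way to extract such instances from a frame-by-frame match score; the threshold, frame rate, and minimum duration are assumptions for illustration.

```python
import numpy as np

def co_confident_segments(match_scores, fps=60, threshold=0.8, min_duration=1.0):
    """Identify candidate co-confident motion instances: stretches where the
    frame-by-frame match score stays above a threshold for at least
    min_duration seconds. Returns (start_frame, end_frame) pairs plus
    summary statistics (frequency and mean duration)."""
    above = np.asarray(match_scores) >= threshold
    segments, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                      # a candidate segment begins
        elif not flag and start is not None:
            if (i - start) / fps >= min_duration:
                segments.append((start, i))
            start = None
    if start is not None and (len(above) - start) / fps >= min_duration:
        segments.append((start, len(above)))
    durations = [(e - s) / fps for s, e in segments]
    return segments, {
        "frequency": len(segments),
        "mean_duration": float(np.mean(durations)) if durations else 0.0,
    }
```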
Virtual environments usually study dynamics of social cognition by creating virtual agents or avatars who can simulate modes of social interaction that human users perceive and respond to naturally and intuitively. Embodied simulations offer a new research paradigm by creating first-person perspective virtual environments capturing the bodily, sensory, motoric, perceptual, cognitive, and affective aspects of experience. Thus, embodied simulations shift the research and design focus to the relationship between the perceiving subject, sensorimotor aspects of experience, and task-based tool use within the environment. The Machine to Be Another (BeAnother Lab) is one such embodied simulation. This setup uses a live video stream from a "performer" wearing chest-mounted cameras to a "user" wearing a head-mounted display (HMD) such that the user sees from the performer's point of view. Through an imitation procedure, the user moves synchronously with the performer, offering a new way to learn about another person's movement patterns during task performance or social interactions. Exploring the efficacy of embodied simulations for expert-novice skills transmission and stimulating creativity, Paint With Me was created as a mixed reality virtual environment where users see and hear from the painter's point of view while stepping into the painter's shoes and synchronously following her movements as they paint on their own physical canvas. Users also see an augmented, semi-transparent layer of their real-time tracked hand movements and a 3-dimensional mesh of the paintbrush. The educational efficacy of Paint With Me was enhanced by tracking the movements of the painter's hand (using Leap Motion), brush (using a 3-point tracker for the tip, middle, and top of the brush), and eyes (using an EyeTribe eye tracker) while recording the stereoscopic 360-degree videos used in the training embodied simulations. This allowed for multiple new interactive design elements. First, the user's hand and brush movements are computed and analyzed in comparison to the painter's hand and brush movements. Thus, the interface was designed such that the video interactively responds to user movements and slows down when the user falls out of sync with the painter. Modeled after MIT Media Lab's ALIVE system, the software also allows users to see footage playback of the moments of greatest discrepancy between their movements and those of the painter after the 20-minute painting exercise. The embodied simulation with interaction design significantly improved objective painting performance (movement similarity) and subjective learning outcomes, as compared to a presentation of the same video on a 2-dimensional screen without the augmented interactive elements during the same "paint-along" painting exercise. Secondly, the eye tracking data was coded into the video with a foveated moving circle indicating the eye movements of the painter, while the background of the video is slightly blurred (see video prototype: https://youtu.be/0qXGC8xU4-o). Users reported that they naturally followed the foveated imaging in the video (successful gaze cueing), and that they understood that this indicated where the painter was looking. Users also reported that the eye tracking helped them understand the painter's intentions and where she would be painting.
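The abstract describes the video slowing down when the user falls out of sync, but not the mapping between movement similarity and playback rate. The following is a minimal, hypothetical sketch of such a mapping; the function name and constants are assumptions, not the system's actual values.

```python
def playback_rate(similarity, full_speed=1.0, min_speed=0.25):
    """Illustrative mapping from movement similarity (0 = no match,
    1 = perfect match) to video playback rate: the video slows smoothly
    as the user falls out of sync with the painter and returns to full
    speed as they catch up."""
    similarity = max(0.0, min(1.0, similarity))
    return min_speed + (full_speed - min_speed) * similarity
```

In an engine such as Unity, a rate computed this way could plausibly be written to the video player's playback speed each frame, with the similarity value coming from the hand- and brush-tracking comparison described above.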
Multisensory virtual environments offer factual variation to explore key questions in embodied cognition and enactive perception. This study proposes one such model for testing the action-perception feedback system in linguistic category formation, based on a new haptics interface technology (UltraHaptics). Developmental psychologist, linguist, and embodied cognition researcher Linda B. Smith conducted multiple experiments at Indiana University with young children, ages 3-4, published in a paper titled "Action Alters Shape Categories" (Smith, 2005). In one experiment, she showed children a novel toy called a "wug", a small tube with two bulbs on each end. Crucially, one bulb was larger than the other (the object was asymmetrical). Children were split into two groups: one group was told to play with the wug by holding each end and twisting it between their hands, and the other group was told to play with the wug by holding it at the larger end and waving it around in the air. Afterwards, she placed the children in a new room with several objects and asked them to identify other "wugs." She found that the children who had been taught to interact with the object in a way that accented its asymmetry were more accurate in their classifications of other wug-like objects, whereas the group taught to play with the object by twisting it between their hands formed a broader classification for the object. This study demonstrates that the ways we interact with objects are fundamental to our ability to form linguistic patterns and categories. This inspired me to conduct a similar between-groups exploratory study on adults using UltraHaptics technology, loosely based on the Molyneux problem. UltraHaptics is a virtual reality system based on vibrational haptics that allows users to touch and feel 3-dimensional virtual objects. While the resolution of UltraHaptics is still in development, it is sufficient to give a sense of object shape, size, features, and dimensions. The question in the Molyneux problem is: if a person is born blind and later has vision restored, can he or she recognize an object by sight alone? My research question was how well adults with normal or corrected-to-normal vision could be trained to recognize, categorize, and identify objects by touch alone. Thus, subjects were presented with a novel object as a touch stimulus using the UltraHaptics system and told that this object is a "wug": an abstract three-dimensional sphere with a circular bulb on one side and a spike on the other. Subjects also wore chest-mounted Leap Motion devices to track their hand movements around the three-dimensional object. Subsequently, subjects were presented with novel 3-dimensional objects either in a sensory-consistent modality (touch in 3-dimensional haptic VR) or in a novel sensory modality...
Smith, L.B. (2005). Action alters shape categories. Cognitive Science, 29, 665-679.
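One hypothetical use of the chest-mounted Leap Motion data, not described in the abstract, would be to quantify whether a participant's haptic exploration accents the object's asymmetry (bulb versus spike), paralleling Smith's manipulation. A minimal sketch, assuming known positions for the two features:

```python
import numpy as np

def exploration_bias(palm_positions, bulb_center, spike_center):
    """Hypothetical analysis: given tracked palm positions around the haptic
    'wug', estimate what fraction of exploration time was spent nearer the
    bulb end versus the spike end."""
    palm = np.asarray(palm_positions)              # shape (frames, 3)
    d_bulb = np.linalg.norm(palm - bulb_center, axis=1)
    d_spike = np.linalg.norm(palm - spike_center, axis=1)
    near_bulb = float(np.mean(d_bulb < d_spike))   # fraction of frames closer to bulb
    return {"bulb_fraction": near_bulb, "spike_fraction": 1.0 - near_bulb}
```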
The nature of how we come to access, know, understand, and share experiences with other minded beings has been a topic of debate within phenomenological psychology. New virtual and augmented reality technologies allow users to see, hear, and even haptically feel from another person's embodied point of view. The Machine to Be Another (BeAnother Lab, 2014) is a perspective-sharing art performance installation that allows two people to see a live video stream from the other person's point of view. Through slow and synchronous movements, an initial turn-taking breaks down into a unique agency illusion whereby users cannot tell whether they initiated a movement or are following the movement of another. I argue that this disruption to our normal sense of agency (one aspect of our sense of self) can stimulate a higher mental autonomy that draws users' attention to previously unreflective aspects of their sense of self and, in turn, their sense of other (alterity). Moreover, technologies can augment the senses to induce sensory substitution methods that evoke a variant of a different phenomenological perception. For instance, Sun Joo Ahn and colleagues' (2015) virtual environment showed participants a task space as it appeared to a colorblind subject (confederate) and evaluated subsequent helping behavior. Husserl (1956) distinguishes three different ways to intend an object: signitive, pictorial, and perceptual. These intentional modes can be ranked regarding their ability to give us the object as directly, originally, and optimally as possible (as cited in Zahavi, 2009). While another person's first-person experience cannot be given directly in its felt primacy, these new virtual technologies are playing with the possibility of transmitting sensory and cognitive experiences. Using binaural audio to record an artist's stream of consciousness while in the creative flow state of painting, I created Paint With Me, a software system that allows users to see and hear from an artist's point of view while synchronously following her movements with a tracked rendering of their own hand painting on their own physical canvas. This stereoscopic virtual environment, which included Leap Motion hand tracking, was compared to watching the same video on a 2-dimensional screen; the immersive virtual environment condition yielded higher empathic accuracy, self-other merging, perspective taking, and agency. Thus, I defend that new virtual environments facilitate imaginative projection into another person's shoes and get closer to perceptual sharing and a deeper entanglement with another's subjective experience than other forms of mediated communication, while also adding to the efficacy of interpersonal understanding in face-to-face communication.
While nothing can be more vivid, immediate and real than our own sensorial experiences, emerging virtual reality technologies are playing with the possibility of being able to share someone else's sense reality. The Painter Project is a virtual environment where users see a video from a painter's point of view in tandem with a tracked rendering of their own hand while they paint on a physical canvas. The end result is an experiment in superimposition of one experiential reality on top of another, hopefully opening a new window into an artist's creative process.
This paper presents two versions of embodied simulations: avatar embodiment and virtual alterity. Embodied simulations involve embodying the first-person perspective of either an avatar body (avatar embodiment) or another real person (virtual alterity). Avatar embodiment studies focus on overcoming barriers to automatic empathic processes, such as the neurobiological correlates of body representations that allow us to understand one another's bodily and affective states. These neurological processes show decreased activation when observing an out-group member. Therefore, identifying with an avatar body representing an out-group member can be a tool for increasing self-other overlap of body representations, and these studies have successfully transformed implicit biases towards more positive intergroup attitudes. By contrast, virtual alterity projects are premised on self-other distinctions and posit empathy as grounded in the recognition of another person as another subjectivity like me but with unique structures of experience. Therefore, these interfaces are designed differently to facilitate these features of empathy conceived in an other-focused way. This chapter reviews avatar studies to explain why this re-conceptualization of empathy may be important for effectively facilitating empathy in VR, while honoring that avatar studies provide the foundation for this work.
The main research question motivating this paper is what types of experience are shared in empathy. Simulation theories (ST) of empathy suggest that state-matching between oneself and another allows for an emotional sharing through motor resonance. Theory of Mind (ToM) suggests a move from shared affective states to a cognitive understanding of another's mental states. Thus, theorists like Barresi and Moore (2008) and Lockwood (2016) unite the two theories by dividing empathy along affective and cognitive dimensions processed at different neural and cognitive levels. This paper argues that empathy cannot be divided into such clear and distinct informational processing units based on third-person observational inference and first-personal mentalizing or simulation, but is instead rooted in the space of social interaction. However, interaction theory (IT) only accounts for everyday social cognition and not empathy. This paper argues that such social cognition is the root of empathy as a deeper mode of interactive engagement that involves reciprocity and mutual recognition.
This research proposal explores the media affordances of live-streamed visual and auditory information from one person's point of view (the performer) into a head-mounted display and binaural microphones worn by a separate person (the user). Specifically, this project is designed to explore moving together in the act of drawing with another person, even though the user cannot see their own canvas and is simply following the movements of the performer. The goal is to communicate the creative process from an artist to a novice and to increase interpersonal understanding.
This paper presents a cognitive and phenomenological approach to film theory and analysis to address the use of the point-of-view (POV) shot and first-person perspective (FPP) in film media, defending Julian Schnabel's 2007 film The Diving Bell and the Butterfly as an exemplary case of FPP. Diving Bell presents Bauby's character to the audience through various modes of FPP, representing both perceptual (camera-eye metaphor) and conceptual (subjective lens) ways to metaphorically render Bauby's lived experience of locked-in syndrome. The paper then explores new media uses of FPP in immersive virtual reality cinema through the case of Mads Dambo and Johan Knattrup Jensen's 2014 film Skammekrogen. Ultimately, the paper argues that traditional film techniques are better suited for the task of subjectively aligning the audience with a character and cultivating character engagement and emotional entanglement.
Recent technological advances, coupled with progress in the brain and psychological sciences, allow the controlled induction and regulation of human psychophysiological states. This progress often aims toward the goal of developing human-machine interfaces to improve human factors such as mental health, human relations, well-being, and empathy. In this short article, we present some such devices, with a particular emphasis on technology aiming to foster empathic abilities in humans; that is, our ability to care about, understand, and help our fellow human beings. In order to discuss possible uses for such devices in a clinical setting, we start by outlining definitions for the terms used in the article, and then present three devices designed with the goal of modulating empathy in humans.
Salient features in a visual search task can direct attention and increase competency on these tasks. Simple cues, such as a color change in a salient feature, called the "pop-out effect", can increase task-solving efficiency [6]. Previous work has shown that nonspatial auditory signals temporally synched with a pop-out effect can improve reaction time in a visual search task, called the "pip and pop effect" [14]. This paper describes a within-group study on the effect of audiospatial attention in virtual reality given a 360-degree visual search. Three cue conditions were compared (no sound, stereo, and binaural), with rising degrees of difficulty achieved by increasing the set size. The results (n=10) indicate a statistically significant difference in reaction time between the three conditions. Overall, in spite of the small sample size, our results suggest that binaural audio confers a clear advantage on spatial visual processing.
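The abstract does not name the statistical test used; the sketch below shows one way a repeated-measures comparison of reaction times across the three within-subject cue conditions could be run for a sample of this size. The reaction-time values here are simulated placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Hypothetical reaction-time matrix: rows = participants (n = 10),
# columns = cue conditions (no sound, stereo, binaural), values in seconds.
rt = np.random.default_rng(0).normal(loc=[3.2, 2.8, 2.3], scale=0.4, size=(10, 3))

# Friedman test: a nonparametric repeated-measures comparison across the
# three within-subject conditions, reasonable for a small sample.
stat, p = stats.friedmanchisquare(rt[:, 0], rt[:, 1], rt[:, 2])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Mean reaction time per condition for descriptive comparison.
for name, col in zip(["no sound", "stereo", "binaural"], rt.T):
    print(f"{name}: mean RT = {col.mean():.2f} s")
```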
While nothing can be more vivid, immediate and real than our own sensory experiences, emerging virtual reality technologies are playing with the possibility of being able to share someone else's sensory reality. The Painter Project is a virtual environment where users see a video from a painter's point of view in tandem with a tracked rendering of their own hand while they paint on a physical canvas. The end result is an experiment in superimposition of one experiential reality on top of another, hopefully opening a new window into an artist's creative process. This exploratory study tested the virtual environment's ability to stimulate empathy and creativity. The findings indicate potential for this technology as a new expert-novice mentorship simulation.
Research paradigms for stimulating empathic responses in virtual reality change perceived self-other overlap through illusions that cause users to experience their own body as a virtual avatar with a different type of body. Virtual alterity paradigms involve sharing aspects of another real person's first-person experience in interactive virtual environments. In this thesis, I define empathy as an other-directed emotion motivating concern for another's welfare, and argue that virtual alterity systems are better designed to facilitate empathy when conceived in this way, as compared to avatar illusions in VR.