Who You Are: The Science of Connectedness (2020, MIT Press), by Michael J. Spivey. Why you are more than just a brain, more than just a brain-and-body, and more than all your assumptions about who you are.
https://shepherd.com/best-books/the-mind-as-more-than-a-brain
https://mitpress.mit.edu/books/who-you-are
https://www.psychologytoday.com/us/blog/who-you-are
Recent studies show that visual search often involves a combination of both parallel and serial search strategies. Consequently, computational models and theoretical accounts of visual search processing have evolved from traditional parallel or serial descriptions to a continuum from "efficient" to "inefficient." In our first experiment, we demonstrate with various control conditions that search efficiency does not increase with simultaneous delivery of target features in a conjunction search task. In the second experiment, we explore effects of incremental non-linguistic information delivery and discover improvement of search efficiency. We find a facilitatory effect when visual non-linguistic delivery of target features is concurrent with the visual display onset, but not when the target features are delivered prior to display onset. The results support an interactive account of visual perception that explains linguistic and non-linguistic mediation of visual search as chiefly due to the incrementality of target feature delivery once search has begun.
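The abstract does not include the analysis itself, but search efficiency in this literature is conventionally summarized as the slope of mean reaction time over display set size, with shallow slopes read as "efficient" and steep slopes as "inefficient." The sketch below illustrates that convention; the variable names and RT values are fabricated for illustration, not data from the study.

```python
# Minimal sketch (not the authors' code): summarize search efficiency as the
# ms-per-item slope of a linear fit of mean RT over display set size.
import numpy as np

# Hypothetical per-condition data: mean RT (ms) at each display set size.
set_sizes = np.array([4, 8, 16, 32])
mean_rt_simultaneous = np.array([620, 700, 860, 1180])   # made-up values
mean_rt_incremental = np.array([610, 650, 720, 860])     # made-up values

def search_slope(set_sizes, mean_rt):
    """Return the ms-per-item slope and intercept of a linear fit."""
    slope, intercept = np.polyfit(set_sizes, mean_rt, deg=1)
    return slope, intercept

for label, rts in [("simultaneous cue", mean_rt_simultaneous),
                   ("incremental cue", mean_rt_incremental)]:
    slope, intercept = search_slope(set_sizes, rts)
    print(f"{label}: {slope:.1f} ms/item (intercept {intercept:.0f} ms)")
```

A shallower slope for the incremental-cue condition would correspond to the kind of efficiency gain described in the second experiment.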
Linguistic negation can be comprehended with the inclusion (or absence) of features and categories associated with the senses in a single step. Under this view, there is no need for explicit logical operators, as the negating word or phrase is treated no differently than any other word. Negation provides additional context, whereby visualizing negation as a trajectory in a distributed, grounded perceptual simulation space can easily characterize the comprehension of negated sentences. A mouse-tracking experiment was conducted to explore how this kind of process may be enacted in the brain and to tease apart hypotheses of logical manipulations vs. analogue signals performing this work.
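As a rough illustration of the "analogue signals" alternative to a discrete logical operator, the toy model below runs two mutually inhibiting leaky accumulators whose input shifts partway through a trial, producing a graded, continuously evolving commitment to the two interpretations rather than a single flip. This is an illustrative sketch only; the parameters, input schedule, and function are assumptions, not the paper's model or analysis.

```python
# Illustrative toy (not the authors' model): graded competition between two
# response options under a time-varying input, as an analogue alternative to
# a discrete logical negation operator.
import numpy as np

def leaky_competition(drift_schedule, leak=0.1, inhibition=0.2, dt=0.1):
    """Two leaky, mutually inhibiting accumulators driven by a time-varying drift.

    drift_schedule: array of shape (T, 2) giving momentary input to each option.
    Returns the activation time course, shape (T, 2).
    """
    act = np.zeros(2)
    history = []
    for drift in drift_schedule:
        act = act + dt * (drift - leak * act - inhibition * act[::-1])
        act = np.clip(act, 0, None)
        history.append(act.copy())
    return np.array(history)

# Hypothetical input: early evidence favors the state named inside the negation,
# later evidence favors the interpretation actually asserted.
T = 100
drift = np.zeros((T, 2))
drift[:40, 0] = 1.0
drift[40:, 1] = 1.0
trace = leaky_competition(drift)
print("final activations:", trace[-1])
```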
When humans perform a response task or timing task repeatedly, fluctuations in measures of timing from one action to the next exhibit long-range correlations known as 1/f noise. The origins of 1/f noise in timing have been debated for over 20 years, with one common explanation serving as a default: humans are composed of physiological processes throughout the brain and body that operate over a wide range of timescales, and these processes combine to be expressed as a general source of 1/f noise. To test this explanation, the present study investigated the coupling vs. independence of 1/f noise in timing deviations, key-press durations, pupil dilations, and heartbeat intervals while tapping to an audiovisual metronome. All four dependent measures exhibited clear 1/f noise, regardless of whether tapping was synchronized or syncopated. 1/f spectra for timing deviations were found to match those for key-press durations on an individual basis, and 1/f spectra for pupil dilations matched those in heartbeat intervals. Results indicate a complex, multiscale relationship among 1/f noises arising from common sources, such as those arising from timing functions vs. those arising from autonomic nervous system (ANS) functions. Results also provide further evidence against the default hypothesis that 1/f noise in human timing is just the additive combination of processes throughout the brain and body. Our findings are better accommodated by theories of complexity matching that begin to formalize multiscale coordination as a foundation of human behavior.
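A common way to quantify 1/f noise, which analyses like this one use in some form, is to fit a line to the log-log power spectrum of each measurement series and read off the spectral exponent; an exponent near 1 is consistent with 1/f noise, while an exponent near 0 indicates white noise. The following is a minimal sketch of that computation on synthetic data, not the authors' analysis pipeline.

```python
# Minimal sketch (assumed analysis, not the authors' pipeline): estimate the
# spectral exponent alpha of a series by fitting log(power) ~ -alpha*log(freq).
import numpy as np

def spectral_exponent(series):
    """Return alpha from a straight-line fit to the log-log power spectrum."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0)
    keep = freqs > 0                      # drop the DC component
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), deg=1)
    return -slope

# Quick check on white noise: the estimate should be near 0, not near 1.
rng = np.random.default_rng(0)
print(round(spectral_exponent(rng.normal(size=4096)), 2))
```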
Recent converging evidence suggests that language and vision interact immediately in non-trivial ways, although the exact nature of this interaction is still unclear. Not only does linguistic information influence visual perception in real-time, but visual information also influences language comprehension in real-time. For example, in visual search tasks, incremental spoken delivery of the target features (e.g., "Is there a red vertical?") can increase the efficiency of conjunction search because only one feature is heard at a time. Moreover, in spoken word recognition tasks, the visual presence of an object whose name is similar to the word being spoken (e.g., a candle present when instructed to "pick up the candy") can alter the process of comprehension. Dense sampling methods, such as eye-tracking and reach-tracking, richly illustrate the nature of this interaction, providing a semi-continuous measure of the temporal dynamics of individual behavioral responses. We review a variety of studies that demonstrate how these methods are particularly promising in further elucidating the dynamic competition that takes place between underlying linguistic and visual representations in multimodal contexts, and we conclude with a discussion of the consequences that these findings have for theories of embodied cognition.
What role does grammatical aspect play in the time course of understanding spatial language, in particular motion events? Although processing differences between past progressive (was walking) and simple past (walked) aspect suggest differences in prominence of certain semantic properties, details about the temporal dynamics of aspect processing have been largely ignored. The current work uses mouse-tracking [1] to explore spatial differences in motor output response to contextual descriptions and aspectual ...
Am I a Robot? How Verb Agency and Agent Description Influence Perspective-Taking in Visual Scenes. Michelle D. Greenwood, Teenie Matlock, Michael J. Spivey, and Justin L. Matthews (University of California, Merced). Abstract: People often take an egocentric perspective when describing space. However, they occasionally take an alternative perspective. When and why? In a series of experiments that followed work on perspective, we explored this question. In one experiment, participants were given photographs of two objects on a table. Objectively, the scene could be described from either the perspective of the person viewing the picture or from the opposite perspective (i.e., facing the viewer). To test which viewpoint would be elicited, we asked participants to describe where an object was relative to another. In one experiment, a toy humanoid robot (facing the participant) was included in the scen...
Darkness has profound effects on human behaviour and the ability to perform everyday activities. It can influence our ability to function, our moods, emotions, and cognition. Here we examine the relationship between darkness and supernatural beliefs. This work is informed by cross-cultural cave research, which suggests that cave dark zones have been used as the settings for rituals from the advent of modern humans to the present. How can this phenomenon be explained? The chapter reviews research on the effects of darkness on the human mind and presents results of our own experimentation. We argue that shared human reactions to darkness, including embodied responses, stimulate the imagination in similar ways, leading to what we refer to as transcendental or imaginary thinking that lies at the heart of supernatural beliefs. Our work suggests that the natural environment is not a passive player but a causative agent in this process.
Allocentric perspectives are more common when describing spatial scenes when affordances and language facilitate taking such a point of view (Greenwood et al., 2010). People tend to feel "close" to friends and "distant" from strangers in both a metaphoric and physical sense (Matthews & Matlock, 2010). This work examines the relationship between perspective taking and social distance in a simulated school setting. Participants imagined attending a meeting with two other students: Mary and John. John's friendship with the participant was varied across conditions. Participants viewed a scene of a table and chairs where each member's location was labeled, then described where Mary was sitting. Egocentric frames of reference were more common when participants were familiar/friends with John and less common when participants and John were unfamiliar with one another. Many factors influence an individual's perspective, and these results suggest that information regarding social relationships can also influence perspective-taking.
According to accounts of neural reuse and embodied cognition, higher-level cognitive abilities recycle evolutionarily ancient mechanisms for perception and action. Here, building on these accounts, we investigate whether creativity builds on our capacity to forage in space ("creativity as strategic foraging"). We report systematic connections between specific forms of creative thinking—divergent and convergent—and corresponding strategies for searching in space. U.S. American adults completed two tasks designed to measure creativity. Before each creativity trial, participants completed an unrelated search of a city map. Between subjects, we manipulated the search pattern, with some participants seeking multiple, dispersed spatial locations and others repeatedly converging on the same location. Participants who searched divergently in space were better at divergent thinking but worse at convergent thinking; this pattern reversed for participants who had converged repeatedly on a single location. These results demonstrate a targeted link between foraging and creativity, thus advancing our understanding of the origins and mechanisms of high-level cognition.
Proceedings of the Annual Meeting of the Cognitive Science Society, 2010
A simple audiovisual two-alternative forced-choice task was conducted to examine processing differences between the modal verbs should and must. Unambiguous propositions were either agreed with or disagreed with, and participants' eye movements were monitored as they heard and read the sentence. Reaction times revealed no differences in processing. However, closer time course analyses revealed a divergence in fixations to the target for should. These results suggest two mental models are simultaneously activated, entailing both agreement and disagreement with the statement in question.
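A typical way to make such a divergence visible, sketched below with fabricated trials, is to compute the proportion of trials fixating the target in each time bin for each modal verb and compare the resulting curves. The data format and function names are assumptions, not the study's code.

```python
# Illustrative sketch (hypothetical data format, not the authors' analysis):
# proportion of trials fixating the target in each time bin.
import numpy as np

def fixation_proportions(trials, n_bins):
    """trials: list of boolean arrays (True = fixating target), one per trial,
    each already resampled to n_bins time bins. Returns mean proportion per bin."""
    data = np.vstack([np.asarray(t, dtype=float) for t in trials])
    assert data.shape[1] == n_bins
    return data.mean(axis=0)

# Toy example: two fabricated "should" trials with 10 time bins each.
trials_should = [np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1]),
                 np.array([0, 0, 1, 1, 1, 0, 1, 1, 1, 1])]
print(fixation_proportions(trials_should, n_bins=10))
```

Plotting one curve per modal verb and inspecting where they separate is the sort of time-course comparison the abstract alludes to.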
Proceedings of the Annual Meeting of the Cognitive Science Society, 2010
Similarity is central to human cognition. Its relevance is apparent in nearly all theories of cognitive science. Concept acquisition, metaphor, pattern recognition, priming, predictions, inferences; all these processes rely on similarity. Despite its relevance, relatively little is understood about how similarity is processed. In particular, there is a need to better understand the extent to which our perceptual systems constrain our judgments of similarity. The current study investigates this question in the area of visual cognition. By attempting to control for the influence of categorical knowledge, the goal was to understand how different types of feature-dimensions and category boundaries influence the perception of similarity. A connectionist model was developed to explain these findings.
Proceedings of the Annual Meeting of the Cognitive Science Society, 2010
Recent studies have shown that instead of a dichotomy between parallel and serial search strategies, in many instances a combination of both search strategies is used. Consequently, computational models and theoretical accounts of visual search processing have evolved from traditional parallel or serial descriptions to labels of "efficient" and "inefficient." In the first experiment, we replicate previous findings regarding incremental spoken language comprehension on visual search processing using a between-subjects design. Next, a series of four experiments further explores the subtle timing of the influence of real-time language processing on visual search. The results provide further evidence toward understanding linguistically mediated influences on real-time visual search processing and support an interactive processing account of visual search and language comprehension.
Grammatical aspect is known to shape event understanding. However, little is known about how it interacts with other important temporal information, such as recent and distant past. The current work uses computer-mouse tracking (Spivey et al., 2005) to explore the interaction of aspect and temporal context. Participants in our experiment listened to past motion event descriptions that varied according to aspect (simple past, past progressive) and temporal distance (recent past, distant past) while viewing scenes with paths and implied destinations. Participants used a computer mouse to place characters into the scene to match event descriptions. Our results indicated that aspect and temporal context interact in interesting ways. When aspect placed emphasis on the ongoing details of the event and the temporal context was recent (thus making fine details available in memory), this match between conditions elicited smoother and faster computer mouse movements than when conditions mismatched. Likewise, when aspect placed emphasis on the less-detailed end state of the event and temporal context was in the distant past (thus making fine details less available), this match between conditions also elicited smoother and faster computer mouse movements.
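Smoothness and speed of mouse movements are commonly operationalized with measures such as movement time and the maximum deviation of the cursor path from the straight start-to-end line. The sketch below computes the latter for a single hypothetical trajectory; it is an illustrative, generic implementation rather than the authors' analysis code.

```python
# Minimal sketch (not the authors' code): maximum perpendicular deviation of a
# 2D cursor path from the straight line connecting its start and end points.
import numpy as np

def max_deviation(xs, ys):
    """Largest perpendicular distance of the path from the start-to-end line."""
    p0 = np.array([xs[0], ys[0]])
    p1 = np.array([xs[-1], ys[-1]])
    line = p1 - p0
    norm = np.linalg.norm(line)
    pts = np.column_stack([xs, ys]) - p0
    # |2D cross product| / line length gives the perpendicular distance.
    dists = np.abs(pts[:, 0] * line[1] - pts[:, 1] * line[0]) / norm
    return dists.max()

# Hypothetical trajectory that bows away from the direct path.
xs = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
ys = np.array([0.0, 0.35, 0.5, 0.6, 1.0])
print(round(max_deviation(xs, ys), 3))
```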
Traditional parallel and serial descriptions of the visual search process are often inadequate when describing recent findings. Accordingly, literature and computational models have evolved from a dichotomous parallel and serial explanation to an account of search efficiency that is graded and continuous. In our current experiment, we replicate findings showing that concurrent incremental information processing, via auditory spoken language, mediates visual search and improves search efficiency (Spivey et al., 2001; Reali et al., 2006; Chiu & Spivey, 2012). Novel to this study is the use of eye-tracking to investigate the role of language in mediating and improving strategies for visual search. We find evidence that search is best described as a purely parallel mechanism that immediately and rapidly integrates linguistic and visual information. This finding supports an interactive account of visual attention and spoken language.
In this paper, we propose an auditory search task using a virtual ambisonic environment presented through static head-related transfer functions (HRTFs). Head-tracking using a magnetometer captures the listener's orientation and presents an interactive auditory scene. Reaction times from 15 participants are compared for Simple and Complex auditory search tasks. The results lend support to the hypothesis that similar attentional mechanisms may constrain processing during visual and auditory search tasks.
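As a generic illustration of the kind of comparison the abstract describes (and not the study's actual analysis or data), a paired t-test over per-participant mean reaction times in the Simple and Complex conditions might look like the sketch below; the RT values are fabricated.

```python
# Sketch with fabricated numbers (not the study's data): paired comparison of
# mean reaction times between Simple and Complex auditory search conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
simple_rt = rng.normal(loc=1.8, scale=0.3, size=15)    # seconds, hypothetical
complex_rt = rng.normal(loc=2.6, scale=0.4, size=15)   # seconds, hypothetical

t, p = stats.ttest_rel(complex_rt, simple_rt)
print(f"mean difference = {np.mean(complex_rt - simple_rt):.2f} s, "
      f"t(14) = {t:.2f}, p = {p:.3g}")
```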
Perception of Visual Similarity: Modeling Feature-Based Effects. Michael Romano and Michael Spivey (University of California, Merced). Abstract: Similarity is central to human cognition. Its relevance is apparent in nearly all theories of cognitive science. Concept acquisition, metaphor, pattern recognition, priming, predictions, inferences; all these processes rely on similarity. Despite its relevance, relatively little is understood about how similarity is processed. In particular, there is a need to better understand the extent to which our perceptual systems constrain our judgments of similarity. The current study investigates this question in the area of visual cognition. By attempting to control for the influence of categorical knowledge, the goal was to understand how different types of feature-dimensions and category boundaries influence the perception of similarity. A connectionist model was developed to explain these findings.
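The paper's connectionist model is not reproduced here, but one standard way to formalize feature-based similarity, useful for seeing how attention to feature dimensions can warp similarity judgments, is an exponentially decaying function of weighted distance in a perceptual feature space (a Shepard-style formulation). The sketch below is that generic formalization with made-up stimuli and weights, not the model from the paper.

```python
# Illustrative sketch only (not the paper's connectionist model): similarity as
# exp(-c * weighted city-block distance) between two feature vectors, where the
# weights play the role of attention to each feature dimension.
import numpy as np

def similarity(a, b, weights, sensitivity=1.0):
    """Exponentially decaying similarity over a weighted feature distance."""
    a, b, w = map(np.asarray, (a, b, weights))
    distance = np.sum(w * np.abs(a - b))
    return np.exp(-sensitivity * distance)

# Two hypothetical stimuli differing on a color-like and a shape-like dimension.
item1 = [0.2, 0.8]
item2 = [0.6, 0.8]
print(similarity(item1, item2, weights=[1.0, 1.0]))   # equal attention
print(similarity(item1, item2, weights=[0.2, 1.8]))   # attention shifted away from dim 1
```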
What role does grammatical aspect play in understanding everyday motion events? Narrative understanding tasks have investigated differences between the past progressive (was walking) and the simple past (walked), showing differences in prominence of information, but details about the temporal dynamics of processing have been largely ignored.
What role does grammatical aspect play in the time course of understanding motion events? Although processing differences between past progressive (was walking) and simple past (walked) aspect suggest differences in prominence of certain semantic properties, details about the temporal dynamics of aspect processing have been largely ignored.
Participants performed a categorization task in which basic-level animal names (e.g., cat) were assigned to their superordinate categories (e.g., mammal). Manual motor output was measured by sampling computer-mouse movement while participants clicked on the correct superordinate category label, and not on a simultaneously presented incorrect category.
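Mouse-tracking analyses of this kind usually begin by resampling each raw trajectory onto a fixed number of time-normalized steps (101 is a common choice in this literature) so that trials can be averaged and compared. The sketch below shows that preprocessing step with a fabricated trajectory; it is an assumed, generic implementation rather than the authors' code.

```python
# Minimal sketch (assumed preprocessing, not the authors' code): resample a raw
# mouse trajectory onto a fixed number of equally spaced time steps.
import numpy as np

def time_normalize(xs, ys, n_points=101):
    """Linearly interpolate a trajectory onto n_points time-normalized steps."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    t_old = np.linspace(0.0, 1.0, len(xs))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(t_new, t_old, xs), np.interp(t_new, t_old, ys)

# Hypothetical raw trajectory with 7 samples resampled to 101 points.
x_raw = [0, 5, 12, 30, 55, 80, 100]
y_raw = [0, 2, 10, 40, 70, 90, 100]
x_norm, y_norm = time_normalize(x_raw, y_raw)
print(len(x_norm), x_norm[:3])
```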
Instead of subscribing to the view that people are unable to perform Bayesian probabilistic inference, recent research suggests that the algorithms people naturally use to perform Bayesian inference are better adapted for information presented in a natural frequency format than in the common probability format. We tested this hypothesis on the notoriously difficult three doors problem, inducing subjects to consider the likelihoods involved in terms of natural frequencies or in terms of probabilities.
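To make the contrast concrete: in natural frequency terms, out of 300 imagined plays of the three doors (Monty Hall) problem, the prize sits behind the initially chosen door in about 100 plays and behind one of the other doors in about 200, so switching wins roughly two times out of three. The simulation below verifies that logic; it is a generic illustration of the problem, not the experimental materials used in the study.

```python
# Generic illustration (not the study's materials): simulate the three doors
# problem and compare the win rate for staying vs. switching.
import random

def simulate_monty_hall(n_plays=100_000, switch=True, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_plays):
        prize = rng.randrange(3)
        choice = rng.randrange(3)
        # Host opens a door that is neither the chosen door nor the prize door.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / n_plays

print("stay:", simulate_monty_hall(switch=False))    # ~1/3
print("switch:", simulate_monty_hall(switch=True))   # ~2/3
```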
Recent studies show that visual search often involves a combination of both parallel and serial search strategies. Consequently, computational models and theoretical accounts of visual search processing have evolved from traditional parallel or serial descriptions to a continuum from "efficient" to "inefficient." In our first experiments (1a & 1b), we demonstrate with various conditions that search efficiency does not increase with simultaneous delivery of target features in a conjunction-search task. In the second experiment, we explore effects of incremental non-linguistic information delivery and discover improvement of search efficiency. We find a facilitatory effect when non-linguistic visual delivery of target features is concurrent with the visual display onset, but not when the target features are delivered prior to display onset. The results support an interactive account of visual perception that explains linguistic and non-linguistic mediation of visual search as chiefly due to the incrementality of target feature delivery once search has begun.