Autism spectrum conditions (ASC) are associated with a number of atypicalities in face processing, including difficulties in face memory. However, the neural mechanisms underlying this difficulty are unclear. In neurotypical individuals, repeated presentation of the same face is associated with a reduction in activity, known as repetition suppression (RS), in the fusiform face area (FFA). However, to date, no studies have investigated RS to faces in individuals with ASC, or the relationship between RS and face memory. Here, we measured RS to faces and geometric shapes in individuals with a clinical diagnosis of an ASC and in age- and IQ-matched controls. Relative to controls, the ASC group showed reduced RS to faces in bilateral FFA and reduced performance on a standardized test of face memory. By contrast, RS to shapes in object-selective regions and object memory did not differ between groups. Individual variation in face-memory performance was positively correlated with RS in regions of left parietal and prefrontal cortex. These findings suggest that difficulties in face memory in ASC may be a consequence of differences in the way faces are stored and/or maintained across a network of regions involved in both visual perception and short-term/working memory.
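Repetition suppression in designs like this is commonly quantified as the proportional drop in response from novel to repeated presentations within a region of interest. The sketch below illustrates that calculation with fabricated FFA response estimates; the function name, values, and group labels are hypothetical and not taken from the study.

```python
# Hypothetical sketch: quantifying repetition suppression (RS) from
# per-condition fMRI response estimates (e.g., ROI-averaged betas).
# All values below are made up for illustration.

def rs_index(beta_novel: float, beta_repeated: float) -> float:
    """Proportional response reduction for repeated vs. novel stimuli."""
    return (beta_novel - beta_repeated) / beta_novel

# Illustrative FFA betas (arbitrary units) for two hypothetical groups.
control_rs = rs_index(beta_novel=1.2, beta_repeated=0.8)  # pronounced RS
asc_rs = rs_index(beta_novel=1.2, beta_repeated=1.0)      # reduced RS

print(f"Control RS index: {control_rs:.2f}")
print(f"ASC RS index:     {asc_rs:.2f}")
```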
Traditionally, time perception has been considered the product of a central, generic, cognitive mechanism. However, evidence is emerging for a distributed system with modality-specific sensory components (Morrone et al., 2005; Johnston et al., 2006). Here we show that fast contrast adaptation, which can be observed in the retina, induces a change in apparent duration. The perceived duration of a subsecond interval containing a 50% luminance contrast drifting pattern is compressed when it follows a high (90%) as compared to a low (10%) contrast interval. The duration effect cannot be attributed to changes in latency at onset relative to offset, can be dissociated from the effect of contrast context on apparent speed or apparent contrast per se, and occurs in a retinocentric frame of reference. The temporal compression is limited to high drift temporal frequencies (≥10 Hz) and is not observed for equiluminant chromatic stimuli. This pattern of results indicates a major role for the magnocellular pathway in the neural encoding and representation of visual time.
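Compression effects like this are typically read off as a shift in the point of subjective equality (PSE) of a psychometric function fitted to duration-comparison judgments. Below is a minimal sketch of that analysis, assuming a two-interval task, a cumulative-Gaussian fit, and entirely fabricated response data; it is not the study's analysis pipeline.

```python
# Hedged sketch: estimating perceived duration as the PSE of a cumulative
# Gaussian fitted to two-interval duration judgments. Data are fabricated.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Comparison durations (ms) and proportion of "comparison seemed longer"
# responses for a hypothetical 600 ms standard preceded by high contrast.
durations = np.array([400.0, 450.0, 500.0, 550.0, 600.0, 650.0, 700.0])
p_longer = np.array([0.05, 0.15, 0.40, 0.65, 0.85, 0.95, 0.99])

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian: mu is the PSE, sigma the judgment noise."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, durations, p_longer, p0=[550.0, 50.0])

# A PSE below 600 ms means the adapted standard appears compressed: a
# physically shorter comparison already matches it.
print(f"PSE = {mu:.0f} ms; apparent compression = {600 - mu:.0f} ms")
```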
Traditionally, time perception has been considered the product of a central, generic, cognitive mechanism. Recent evidence, however, has shown that high temporal frequency adaptation induces local reductions in the apparent duration of brief intervals, suggesting a distributed system with modality-specific sensory components. Here, we examine the effect of the luminance signal on these adaptation-based temporal distortions. Our results show that the luminance signal is crucial for generating duration compression, as the effect disappears at isoluminance, and that neither low visibility nor task difficulty at isoluminance can explain the discrepancy. We also demonstrate that the effects of adaptation on perceived duration are dissociable from those on apparent temporal frequency. These results provide further evidence for the involvement of the magnocellular system in the neural encoding and representation of visual time.
Our ability to process numerical and temporal information is an evolutionary skill thought to originate from a common magnitude system. In line with a common magnitude system, we have previously shown that adaptation to duration alters numerosity perception. Here, we investigate two hypotheses on how duration influences numerosity perception. A channel-based hypothesis predicts that numerosity perception is influenced by adaptation of onset/offset duration channels which also encode numerosity or wire together with numerosity channels (duration/numerosity channels). Hence, the onset/offset duration of the adapter drives the effect regardless of the total duration of adaptation. A strength-of-adaptation hypothesis predicts that the effect of duration on numerosity perception is driven by the adaptation of numerosity channels only, with the total duration of adaptation driving the effect regardless of the onset/offset duration of the adapter. We performed two experiments in which we manipulated the onset/offset duration of the adapter, the adapter's total presentation time, and the total duration of the adaptation trial. The first experiment tested the effect of adaptation to duration on numerosity discrimination, whereas the second tested the effect of adaptation to numerosity and duration on numerosity discrimination. We found that the effect of adaptation to duration on numerosity perception is primarily driven by adapting duration/numerosity channels, supporting the channel-based hypothesis. In contrast, the effect of adaptation to numerosity on numerosity perception appears to be driven by the total duration of the adaptation trial, supporting the strength-of-adaptation hypothesis. Thus, we show that adaptation of at least two temporal mechanisms influences numerosity perception.
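The two hypotheses make distinct qualitative predictions for a design crossing the adapter's onset/offset duration with the total duration of adaptation, which a toy model can make concrete. The prediction functions and numbers below are illustrative assumptions, not the model fitted in the experiments.

```python
# Toy contrast of the two hypotheses' predictions. Scalings are arbitrary.

def channel_based(onset_offset_ms: float, total_s: float) -> float:
    """Predicted effect tracks the adapter's onset/offset duration only."""
    return onset_offset_ms / 1000.0

def strength_of_adaptation(onset_offset_ms: float, total_s: float) -> float:
    """Predicted effect tracks the total duration of adaptation only."""
    return total_s / 60.0

# Hypothetical 2x2 design: (onset/offset duration in ms, total duration in s).
for onset, total in [(200, 30), (200, 60), (800, 30), (800, 60)]:
    print(f"onset/offset={onset} ms, total={total} s -> "
          f"channel-based: {channel_based(onset, total):.2f}, "
          f"strength: {strength_of_adaptation(onset, total):.2f}")
```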
Duration distortions have been shown to occur at the time of saccades and following high temporal frequency or contrast adaptation. Under all these conditions, changes in the temporal tuning of M neurons also occur, suggesting that there might be a link between the two phenomena. In order to explore this relationship further, we measured the apparent duration of visual stimuli in the dark, where the temporal impulse response has been reported to lengthen. We first measured a progressive shift and reduction of the occurrence of an apparent motion reversal as we decreased the luminance level, indicating a lengthening of the temporal impulse response. We then measured perceived duration at these luminance levels (0.75, 3, and 50 cd/m²) after matching for apparent contrast and temporal frequency. While perceived temporal frequency did not substantially differ across luminance levels, duration appeared expanded at the lowest luminance level relative to the highest by approximately 60 ms. Thus, we have shown that reduced luminance is associated with both a lengthening of the temporal impulse response and a duration expansion, linking the two and providing further evidence for a relationship between changes in the neuronal tuning in the early stages of the visual system and time perception.
Eye movements present the visual system with the challenge of providing the experience of a stable world. This appears to require the location of objects to be mapped from retinal to head- and body-referenced coordinates. Following D. Burr, A. Tozzi, and M. C. Morrone (2007), here we address the issue of whether adaptation-based duration compression (A. Johnston, D. H. Arnold, & S. Nishida, 2006) takes place in a retinocentric or head-centric frame of reference. Duration compression may be associated with shifts in apparent temporal frequency. However, using an adaptation schedule that minimizes any effect of adaptation on apparent temporal frequency, we still find substantial apparent duration compression. Duration compression remains when the adaptor continuously translates in head-centered coordinates but is fixed on the retina, isolating retinal adaptation. Apparent duration was also measured after a change in gaze direction, a strategy which allows eye-centered and head-centered components of adaptation-induced duration compression to be distinguished. In two different paradigms, we found significant compression was elicited by retinotopic adaptation, with no significant change in apparent duration following spatiotopic adaptation. We also observed no interocular transfer of adaptation. These findings point to an early locus for the adaptation-based duration compression effect.
The two eyes of an individual routinely differ in their optical and neural properties, yet percepts through either eye remain more similar than predicted by these differences. Little is known as to how the brain resolves this conflicting information. Differences in visual inputs from the two eyes have been studied extensively in the context of binocular vision and rivalry [1], but it remains unknown how the visual system calibrates and corrects for normal variability in image quality between the eyes, and whether this correction is applied to each eye separately or after their signals have converged. To test this, we used adaptive optics to control and manipulate the blur projected on each retina, and then compared judgments of image focus through either eye and how these judgments were biased by adapting to different levels of blur. Despite significant interocular differences in the magnitude of optical blur, the blur level that appeared best focused was the same through both eyes, and corresponded to the ocular blur of the less aberrated eye. Moreover, for both eyes, blur aftereffects depended on whether the adapting blur was stronger or weaker than the native blur of the better eye, with no aftereffect when the blur equaled the aberrations of the better eye. Our results indicate that the neural calibration for the perception of image focus reflects a single ‘cyclopean’ site that is set monocularly by the eye with better optical quality. Consequently, what people regard as ‘best-focused’ matches the blur encountered through the eye with better optics, even when judging the world through the eye with poorer optics.
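The reported pattern of aftereffects, null when the adapting blur matches the better eye's native blur and opposite in sign on either side of that point, can be summarized with a simple signed-difference model. The sketch below is a toy formalization under assumed units and gain, not a model from the paper.

```python
# Toy model of the blur-aftereffect pattern: the shift in the best-focus
# point is proportional to the difference between the adapting blur and
# the native blur of the better eye. Gain and blur values are assumptions.

def blur_aftereffect(adapt_blur: float, better_eye_blur: float,
                     gain: float = 0.5) -> float:
    """Predicted best-focus shift (arbitrary units); zero when the
    adapting blur equals the better eye's native blur."""
    return gain * (adapt_blur - better_eye_blur)

better_eye = 0.8  # hypothetical native blur of the less aberrated eye
for adapt in (0.4, 0.8, 1.6):
    shift = blur_aftereffect(adapt, better_eye)
    print(f"adapting blur {adapt}: predicted shift {shift:+.2f}")
```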