
Chris Benton

Multi-sensor information fusion aims at extracting and combining useful information from different sensors. This paper addresses the problem of estimating and visualising motion information from a pair of visible and infrared cameras, using an optical flow technique. Videos from cameras sensitive to visible light are rich in texture and colour information such that a moving target can readily be positioned. On the other hand, videos from infrared cameras provide extra information which cannot be detected in the visible-light spectrum. In this paper we introduce a stochastic rule for combining optical flow computed from two (or more) sources. We also propose a novel motion-contingent selection method for the fusion of the co-registered visible and infrared video sources.
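
The paper's stochastic combination rule is not reproduced above; a minimal sketch of confidence-weighted fusion of two dense flow fields conveys the general idea (the function name, weighting scheme, and array layout are our assumptions, not the paper's):

```python
import numpy as np

def fuse_flows(flow_vis, flow_ir, conf_vis, conf_ir, eps=1e-8):
    """Confidence-weighted average of two dense optical flow fields.

    flow_vis, flow_ir : (H, W, 2) arrays of (u, v) vectors
    conf_vis, conf_ir : (H, W) per-pixel confidence maps
    """
    w_v = conf_vis[..., None]   # broadcast weights over the (u, v) axis
    w_i = conf_ir[..., None]
    return (w_v * flow_vis + w_i * flow_ir) / (w_v + w_i + eps)
```

With equal confidence everywhere, this reduces to a plain average of the two flow fields; where one sensor's estimate is unreliable (low confidence), the other dominates.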
We present a reliable real-time optical flow estimation framework which can be used in surveillance applications and video analysis. In normal imaging environments, reliability can be achieved by combining an extended optical flow constraint with a smoothing procedure and a masking procedure. In noisy environments, total least squares is adopted to ensure accuracy. The proposed system is able to recover up to 31 frames of dense optical flow per second using a Xeon 3.06GHz workstation, which makes it very useful in a range of surveillance systems that are based on standard PC hardware.
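
The extended constraint, smoothing, and masking procedures are not detailed above, but the core of any such gradient-based estimator is a least-squares solution of the brightness-constancy equation over a local window. A sketch under that assumption (the paper's noisy-environment variant replaces this with total least squares):

```python
import numpy as np

def local_flow(Ix, Iy, It):
    """Estimate one window's flow (u, v) from image derivatives by
    solving Ix*u + Iy*v + It = 0 in the least-squares sense.
    A total-least-squares variant would instead take the smallest
    eigenvector of the 3x3 scatter matrix of [Ix, Iy, It]."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # (N, 2)
    b = -It.ravel()                                 # (N,)
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv
```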
The current paper presents a novel adaptive multiscale scheme to estimate optical flow from image sequences. The scheme models estimation uncertainties which are used to reduce the influence of unreliable intermediate estimates on accuracy. The experimental results show that the proposed method provides more accurate estimates for both small and large motions than a standard multiscale scheme in which an increment is added to an intermediate estimate regardless of estimation certainty.
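
The uncertainty model is not spelt out above; the principle of weighting an intermediate increment by its certainty, rather than adding it unconditionally, can be sketched as a scalar Kalman-style update (an illustration of the idea only, not the paper's formulation):

```python
def certainty_weighted_update(est, est_var, incr, incr_var):
    """Blend a coarse-scale flow estimate with a finer-scale increment
    in proportion to their variances: an unreliable increment (large
    incr_var) barely moves the running estimate."""
    gain = est_var / (est_var + incr_var)
    new_est = est + gain * incr
    new_var = (1.0 - gain) * est_var
    return new_est, new_var
```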
We describe our studies on summarising surveillance videos using optical flow information. The proposed method incorporates motion analysis into a video skimming scheme in which the playback speed is determined by the detectability of interesting motion behaviours according to prior information. A psycho-visual experiment was conducted to compare human performance and viewing strategy for summarised videos using standard video skimming techniques and a proposed motion-based adaptive summarisation technique.
Evidence that alcohol leads to increased aggressive behaviour is equivocal and confounded by evidence that such effects may operate indirectly via expectancy. One mechanism by which alcohol consumption may increase aggressive behaviour is via alterations in the processing of emotional facial cues.

We investigated whether acute alcohol consumption or the expectancy of consuming alcohol (or both) induces differences in the categorisation of ambiguous emotional expressions. We also explored differences between male and female participants, using male and female facial cues of emotional expression.

Following consumption of a drink, participants completed a categorisation task in which they had to identify the emotional expression of a facial stimulus. Stimuli were morphed facial images ranging between unambiguously angry and happy expressions (condition 1) or between unambiguously angry and disgusted expressions (condition 2). Participants (N = 96) were randomised to receive an alcoholic or non-alcoholic drink and to be told that they would receive an alcoholic or non-alcoholic drink.

Significant effects of alcohol were obtained in the angry-disgusted task condition, but only when the target facial stimulus was male. Participants tended to categorise male disgusted faces as angry after alcohol, but not after placebo.

Our data indicate that alcohol consumption may increase the likelihood of an ambiguous but negative facial expression being judged as angry. However, these effects were only observed for male faces and therefore may have been influenced by the greater expectation of aggression in males compared to females. Implications for alcohol-associated aggressive behaviour are discussed.
We recently demonstrated that alcohol elicits a difference between men and women in perceptual threshold for facial expressions of sadness. However, this study did not include a manipulation of alcohol expectancy. Therefore, we sought to determine whether these effects may be due to the expectation of having consumed alcohol. Male and female participants (n = 100) were randomised using a balanced-placebo design to receive either an alcoholic or a non-alcoholic drink and to be told that this was alcoholic or non-alcoholic. Participants completed a psychophysical task which presented male and female faces expressing angry, happy, and sad emotions. Analysis of threshold data indicated a significant two-way interaction of drink × target emotion, reflecting a higher threshold for the detection of sad facial expressions of emotion, compared with angry or happy expressions, in the alcohol condition compared with the placebo condition. We did not observe any evidence of sex differences in these effects. Our data indicate that alcohol modifies the perceptual threshold for facial expressions of sadness. Unlike our previous report, we did not observe evidence of sex differences in these effects. Most importantly, we did not observe any evidence that these effects were due to expectancy effects associated with alcohol consumption.
Alcohol consumption has been associated with increases in aggressive behaviour. However, experimental evidence of a direct association is equivocal, and mechanisms that may underlie this relationship are poorly understood. One mechanism by which alcohol consumption may increase aggressive behaviour is via alterations in processing of emotional facial cues. We investigated the effects of acute alcohol consumption on sensitivity to facial expressions of emotion. Participants attended three experimental sessions where they consumed an alcoholic drink (0.0, 0.2 or 0.4 g/kg), and completed a psychophysical task to distinguish expressive from neutral faces. The level of emotion in the expressive face varied across trials, and the threshold at which the expressive face could be reliably identified was measured. We observed a significant three-way interaction involving emotion, participant sex and alcohol dose. Male participants showed significantly higher perceptual thresholds for sad facial expressions compared with female participants following consumption of the highest dose of alcohol. Our data indicate sex differences in the processing of facial cues of emotional expression following alcohol consumption. There was no evidence that alcohol altered the processing of angry facial expressions. Future studies should examine effects of alcohol expectancy and investigate the effects of alcohol on the miscategorisation of emotional expressions.
Nonlinear processing can be used to recover the motion of contrast modulations of binary noise patterns. A nonlinear stage has also been proposed to explain the perception of forward motion in motion sequences which typically elicit reversed-phi. We examined perceived direction of motion for stimuli in which these reversed motion sequences were used to modulate the contrast of binary noise patterns. A percept of forward motion could be elicited by both luminance-defined and contrast-defined stimuli. The perceived direction of motion seen in the contrast-defined stimuli showed a profound carrier dependency. The replacement of a static carrier by a dynamic carrier can reverse the perceived direction of motion. Forward motion was never seen with dynamic carriers. For luminance- and contrast-defined patterns the reversed motion percept increasingly dominated with increases in the spatial frequency and temporal frequency of the modulation. Differences in the patterns of responses to the two stimuli over spatial and temporal frequency were abolished by the addition of noise to the luminance-defined stimulus. These data suggest the possibility that a single mechanism may mediate the perception of luminance- and contrast-defined motion.
The class of microbalanced motion stimuli is thought to contain no systematic directional biases in motion energy. The fact that we can see motion in such stimuli implies that models of human motion perception based on Fourier decomposition need to be revised. The validity of one widely studied class of microbalanced stimuli, contrast modulated noise, has recently been questioned. It has been proposed that stochastic local biases in the noise carrier give rise to luminance artifacts detectable by a Fourier energy mechanism. However, in this study we show that the response of a motion energy system to contrast modulated noise shows no directional bias over a number of carrier configurations. We conclude that this class of stimuli remains an important tool for researchers wishing to study non-Fourier motion.
Speed discrimination thresholds were measured for first- and second-order Gaussian bars and edges as a function of speed and the spatial scale of the modulation signal. Discrimination thresholds were generally higher for the second-order patterns when compared with modulations of luminance. There were no systematic effects of variations in the width of the bars and edges. The results are discussed in relation to mechanisms for the explicit recovery of contrast modulations and the influence of the form of the carrier signal on visual performance in second-order motion tasks.
Current computational models of motion processing in the primate motion pathway do not cope well with image sequences in which a moving pattern is superimposed upon a static texture. The use of non-linear operations and the need for contrast normalization in motion models mean that the separation of the influences of moving and static patterns on the motion computation is not trivial. Therefore, the response to the superposition of static and moving patterns provides an important means of testing various computational strategies. Here we describe a computational model of motion processing in the visual cortex, one of the advantages of which is that it is highly resistant to interference from static patterns.
In this study, we show that negative polarity noise patterns appear to have a higher contrast than positive polarity noise patterns with identical expected Fourier amplitude spectra. This demonstrates a failure of contrast constancy over changes in pattern polarity. An examination of local contrast measures shows that negative polarity noise has a wider distribution of local contrast values than positive polarity noise. We propose that the difference in apparent contrast between the two patterns may be based upon spatial non-linearities in the combination of local contrast measures.
Velocity matching using the method of Constant Stimuli shows that perceived velocity varies with contrast [Thompson, P. (1982). Perceived rate of movement depends upon contrast. Vision Research, 22, 377–380]. Random contrast jitter would therefore be expected to increase the slopes of psychometric functions, and thus the velocity discrimination threshold. However, McKee, S., Silverman, G., and Nakayama, K. [(1986) Precise velocity discrimination despite random variation in temporal frequency. Vision Research, 26, 609–620] found no effect of contrast jitter on thresholds, using the method of single stimuli. To determine whether this apparent discrepancy is due to the difference in methodology, or to the different ranges of temporal frequencies used in the two studies, we used the method of single stimuli to measure psychometric functions at three different velocities (0.5, 2.0 and 4.0°/s). We found that contrast jitter increased thresholds at low but not at high velocities. Separate analysis of the psychometric functions at each contrast level showed that increases in contrast increased perceived velocity at low standard speeds (0.5°/s) but not at high. We conclude that the effect of contrast on perceived speed is real, and not a methodological artefact, but that it is found only at low temporal frequencies.
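
The threshold measures discussed above come from the slopes of psychometric functions. A minimal probit-regression fit of a cumulative Gaussian illustrates how a threshold is extracted (a sketch; real analyses typically use maximum-likelihood fitting, and the function name is ours):

```python
import numpy as np
from statistics import NormalDist

def fit_cumulative_gaussian(levels, p_response):
    """Fit P(x) = Phi((x - mu) / sigma) by linear regression on the
    probit (inverse-normal) transform of the response proportions.
    Returns (mu, sigma): mu is the point of subjective equality and
    sigma indexes the discrimination threshold (steeper slope means
    a lower threshold). Proportions are clipped away from 0 and 1."""
    nd = NormalDist()
    z = [nd.inv_cdf(min(max(p, 1e-3), 1.0 - 1e-3)) for p in p_response]
    slope, intercept = np.polyfit(levels, z, 1)
    sigma = 1.0 / slope
    mu = -intercept * sigma
    return mu, sigma
```

Contrast jitter steepening or flattening these functions would show up directly as a change in the fitted sigma.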
When a static textured background is covered and uncovered by a moving bar of the same mean luminance we can clearly see the motion of the bar. Texture-defined motion provides an example of a naturally occurring second-order motion. Second-order motion sequences defeat standard spatio-temporal energy models of motion perception. It has been proposed that second-order stimuli are analysed by separate systems, operating in parallel with luminance-defined motion processing, which incorporate identifiable pre-processing stages that make second-order patterns visible to standard techniques. However, the proposal of multiple paths to motion analysis remains controversial. Here we describe the behaviour of a model that recovers both luminance-defined and an important class of texture-defined motion. The model also accounts for the induced motion that is seen in some texture-defined motion sequences. We measured the perceived direction and speed of both the contrast envelope and induced motion in the case of a contrast modulation of static noise textures. Significantly, the model predicts the perceived speed of the induced motion seen at second-order texture boundaries. The induced motion investigated here appears distinct from classical induced effects resulting from motion contrast or the movement of a reference frame.
Despite detailed psychophysical, neurophysiological and electrophysiological investigation, the number and nature of independent and parallel motion processing mechanisms in the visual cortex remains controversial. Here we use computational modelling to evaluate evidence from two psychophysical studies collectively thought to demonstrate the existence of three separate and independent motion processing channels. We show that the pattern of psychophysical results can largely be accounted for by a single mechanism. The results demonstrate that a low-level luminance based approach can potentially provide a wider account of human motion processing than generally thought possible.
It is generally assumed that the perception of non-Fourier motion requires the operation of some nonlinearity before motion analysis. We apply a computational model of biological motion processing to a class of non-Fourier motion stimuli designed to investigate nonlinearity in human visual processing. The model correctly detects direction of motion in these non-Fourier stimuli without recourse to any preprocessing nonlinearity. This demonstrates that the non-Fourier motion in some non-Fourier stimuli is directly available to luminance-based motion mechanisms operating on measurements of local spatial and temporal gradients.
It has been widely accepted that standard low-level computational approaches to motion processing cannot extract texture-defined motion without applying some pre-processing nonlinearity. This has motivated accounts of motion perception in which luminance- and texture-defined motion are processed by separate mechanisms. Here, we introduce a novel method of image description where motion sequences may be described in terms of their local spatial and temporal gradients. This allows us to assess the local velocity information available to standard low-level motion mechanisms. Our analysis of several texture-motion stimuli shows that the information indicating correct texture-motion velocity and/or direction is present in the raw luminance measures. This raises the possibility that luminance-motion and texture-motion may be processed by the same cortical mechanisms. Our analysis offers a way of looking at texture-motion processing that is, to our knowledge, new and original.
A gradient-based image analysis technique is applied to a class of non-Fourier stimuli. To create the stimuli, n translating sine waves with identical spatial and temporal frequencies, but separated by 2π/n radians, are spatially randomly sampled to produce a Pn stimulus. For n⩾2, the stimuli are non-Fourier. Local image gradients are represented in the form of a gradient plot, a histogram which shows the frequency of ranges of temporal gradient/spatial gradient pairs occurring. It is shown that the gradient plots contain features, oriented in gradient space, which indicate correct non-Fourier velocity. As n increases, so too does the complexity of the gradient plots, a finding which may account for the concomitant decrease in perceived coherent motion [Vision Res 37 (1997) 1459]. This paper demonstrates that the gradient plot and associated velocity plots are a useful way of assessing gradient-based motion information. Compared to the traditional Fourier based approach, gradient-based analysis can lead to different judgement of the motion information available to standard models of low-level motion processing.
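
For a simple translating one-dimensional sequence, the gradient plot described above can be sketched as follows (array layout and bin count are our assumptions). Brightness constancy, It = -v * Ix, makes the samples for a single velocity fall on a line through the origin of gradient space whose slope indicates -v:

```python
import numpy as np

def gradient_plot(seq, bins=64):
    """2D histogram of (spatial gradient, temporal gradient) pairs
    for an image sequence `seq` of shape (frames, pixels)."""
    Ix = np.gradient(seq, axis=1)   # spatial derivative
    It = np.gradient(seq, axis=0)   # temporal derivative
    hist, x_edges, t_edges = np.histogram2d(Ix.ravel(), It.ravel(),
                                            bins=bins)
    return hist, x_edges, t_edges
```

For the Pn stimuli above, the plot contains several such oriented features, and their number grows with n.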
A theory of early motion processing in the human and primate visual system is presented which is based on the idea that spatio-temporal retinal image data is represented in primary visual cortex by a truncated 3D Taylor expansion that we refer to as a jet vector. This representation allows all the concepts of differential geometry to be applied to the analysis of visual information processing. We show in particular how the generalised Stokes theorem can be used to move from the calculation of derivatives of image brightness at a point to the calculation of image brightness differences on the boundary of a volume in space-time and how this can be generalised to apply to integrals of products of derivatives. We also provide novel interpretations of the roles of direction selective, bi-directional and pan-directional cells and of type I and type II cells in V5/MT.
When viewing two superimposed, translating sets of dots moving in different directions, one overestimates the direction difference. This phenomenon of direction repulsion is thought to be driven by inhibitory interactions between directionally tuned motion detectors. However, there is disagreement on where this occurs: at early stages of motion processing, when local motions are extracted, or at the later, global motion-processing stage following "pooling" of these local measures. These two stages of motion processing have been identified as occurring in area V1 and the human homolog of macaque MT/V5, respectively. We designed experiments in which local and global predictions of repulsion are pitted against one another. Our stimuli contained a target set of dots, moving at a uniform speed, superimposed on a "mixed-speed" distractor set. Because the perceived speed of a mixed-speed stimulus is equal to the dots' average speed, a global-processing account of direction repulsion predicts that repulsion magnitude induced by a mixed-speed distractor will be indistinguishable from that induced by a single-speed distractor moving at the same mean speed. This is exactly what we found. These results provide compelling evidence that global-motion interactions play a major role in driving direction repulsion.
Direction repulsion describes the phenomenon in which observers typically overestimate the direction difference between two superimposed motions moving in different directions (Marshak & Sekuler, Science 205 (1979) 1399). Previous research has found that, when a relatively narrow range of distractor speeds is considered, direction repulsion of a target motion increases monotonically with increasing speed of the distractor motion. We sought to obtain a more complete measurement of this speed-tuning function by considering a wider range of distractor speeds than has previously been used. Our results show that, contrary to previous reports, direction repulsion as a function of distractor speed describes an inverted U-function. For a target of 2.5 deg/s, we demonstrate that the attenuation of repulsion magnitude with high-speed distractors can be largely explained in terms of the reduced apparent contrast of the distractor. However, when we reduce target motion speed, this no longer holds. When considered from the perspective of Edwards et al.'s (Edwards, Badcock, & Smith, Vision Research 38 (1998) 1573) two global-motion channels, our results suggest that direction repulsion is speed dependent when the distractor and target motions are processed by different global-motion channels, but is not speed dependent when both motions are processed by the same, high-speed channel. The implications of these results for models of direction repulsion are discussed.
Two low-level motion models are applied to a second-order stimulus, a translating contrast modulation of static binary noise. Both models have been used to demonstrate equivalence between energy and gradient algorithms and can be split into a motion-opponent stage followed by a contrast-normalised stage. Analysis of results shows no directional bias at the motion-opponent stage but a strong bias, indicating the correct direction of second-order motion, at the contrast-normalised stage. This demonstrates that the intrinsically non-linear process of contrast-normalisation may play a part in the detection of second-order motion.
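
The two stages named above can be sketched at the level of their defining operations (a schematic of the stage structure only, not either model's filter front end; the function name and scalar inputs are our assumptions):

```python
def opponent_then_normalised(e_pref, e_anti, eps=1e-8):
    """Motion opponency subtracts the energies computed for the two
    opposed directions; contrast normalisation then divides by their
    sum. The division is the intrinsically non-linear step at which
    a second-order directional bias can emerge."""
    opponent = e_pref - e_anti
    normalised = opponent / (e_pref + e_anti + eps)
    return opponent, normalised
```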
Neural adaptation and inhibition are pervasive characteristics of the primate brain and are probably understood better within the context of visual processing than with any other sensory modality. These processes are thought to underlie illusions in which one motion affects the perceived direction of another, such as the direction aftereffect (DAE) and direction repulsion. The DAE describes how, following prolonged viewing of motion in one direction, the direction of a subsequently viewed test pattern is misperceived. In the case of direction repulsion, the direction difference between two transparently moving surfaces is overestimated. Explanations of the DAE appeal to neural adaptation, whereas direction repulsion is accounted for through lateral inhibition. Here, we report on a new illusion, the binary DAE (bDAE), in which superimposed slow and fast dots moving in the same direction are perceived to move in different directions following adaptation to a mixed-speed stimulus. This new phenomenon is essentially a combination of the DAE and direction repulsion. Interestingly, the magnitude of the bDAE is greater than would be expected simply through a linear combination of the DAE and direction repulsion, suggesting that the mechanisms underlying these two phenomena interact in a nonlinear fashion.
We produced morph sequences between identities at a variety of viewpoints, ranging from the three-quarter leftward-facing view to the three-quarter rightward-facing view. We measured the strength of identity adaptation as a function of changing test viewpoint whilst keeping the adaptation viewpoint constant, and as a function of adaptation viewpoint whilst keeping test viewpoint constant. Our results show a substantial decrease in adaptation as the angle between adaptation and test viewpoint increases. These findings persisted when we introduced controls for low-level retinotopic adaptation, leading us to conclude that our results show strong evidence for viewpoint dependence in the high-level encoding of facial identity. Our findings support models in which identity is encoded, to a large degree, by viewpoint-dependent non-retinotopic neural mechanisms. Functional imaging studies suggest the fusiform gyrus as the most likely location for this mechanism.
Using a speed-matching task, we measured the speed tuning of the dynamic motion aftereffect (MAE). The results of our first experiment, in which we co-varied dot speed in the adaptation and test stimuli, revealed a speed tuning function. We sought to tease apart what contribution, if any, the test stimulus makes towards the observed speed tuning. This was examined by independently manipulating dot speed in the adaptation and test stimuli, and measuring the effect this had on the perceived speed of the dynamic MAE. The results revealed that the speed tuning of the dynamic MAE is determined, not by the speed of the adaptation stimulus, but by the local motion characteristics of the dynamic test stimulus. The role of the test stimulus in determining the perceived speed of the dynamic MAE was confirmed by showing that, if one uses a test stimulus containing two sources of local speed information, observers report seeing a transparent MAE; this is despite the fact that adaptation is induced using a single-speed stimulus. Thus while the adaptation stimulus necessarily determines perceived direction of the dynamic MAE, its perceived speed is determined by the test stimulus. This dissociation of speed and direction supports the notion that the processing of these two visual attributes may be partially independent.
The processing of motion information by the visual system can be decomposed into two general stages; point-by-point local motion extraction, followed by global motion extraction through the pooling of the local motion signals. The direction aftereffect (DAE) is a well known phenomenon in which prior adaptation to a unidirectional moving pattern results in an exaggerated perceived direction difference between the adapted direction and a subsequently viewed stimulus moving in a different direction. The experiments in this paper sought to identify where the adaptation underlying the DAE occurs within the motion processing hierarchy. We found that the DAE exhibits interocular transfer, thus demonstrating that the underlying adapted neural mechanisms are binocularly driven and must, therefore, reside in the visual cortex. The remaining experiments measured the speed tuning of the DAE, and used the derived function to test a number of local and global models of the phenomenon. Our data provide compelling evidence that the DAE is driven by the adaptation of motion-sensitive neurons at the local-processing stage of motion encoding. This is in contrast to earlier research showing that direction repulsion, which can be viewed as a simultaneous presentation counterpart to the DAE, is a global motion process. This leads us to conclude that the DAE and direction repulsion reflect interactions between motion-sensitive neural mechanisms at different levels of the motion-processing hierarchy.
We tested the hypothesis that the right cerebral hemisphere contributes to the enhanced body image distortions seen in women when compared to men. Using classical psychophysics, 60 right-handed healthy participants (30 women) were briefly presented with size-distorted pictures of themselves, another person (an experimenter), and a non-corporeal, familiar object (a Coke bottle) to the central, right, and left visual field. Participants had to decide whether the presented stimulus was fatter or thinner than the real body/object, and thus compare the presented picture with the stored representation of the stimulus from memory. From these data we extracted the amount of image distortion at which participants judged the various stimuli to be veridical. We found that right visual field presentations (initial left hemisphere processing) revealed a general "fatter" bias, which was more evident for bodies than for objects. Crucially, a "fatter" bias with own body presentations in the left visual field (initial right hemisphere processing) was only found for women. Our findings suggest that right visual field presentation results in a general size overestimation, and that this overestimation is more pronounced for bodies than for objects. Moreover, the particular "fatter" bias after own body presentations to the left visual field in women supports the notion of a specific role of the right hemisphere in sex-specific body image distortion.
Prolonged viewing of a face can result in a change of our perception of subsequent faces. This process of adaptation is believed to be functional and to reflect optimization-driven changes in the neural encoding. Because it is believed to target the neural systems underlying face processing, the measurement of face aftereffects is seen as a powerful behavioral technique that can provide deep insights into our facial encoding. Face identity aftereffects have typically been measured by assessing the way in which adaptation changes the perception of images from a test sequence, the latter commonly derived from morphing between two base images. The current study asks to what extent such face aftereffects are driven by the test sequence used to measure them. Using subjects trained to respond either to identity or to expression, we examined the effects of identity and expression adaptation on test stimuli that varied in both identity and expression. We found that face adaptation produced measured aftereffects that were congruent with the adaptation stimulus; the composition of the test sequences did not affect the measured direction of the face aftereffects. Our results support the view that face adaptation studies can meaningfully tap into the intrinsically multidimensional nature of our representation of facial identity.
It is well known that context influences our perception of visual motion direction. For example, spatial and temporal context manipulations can be used to induce two well-known motion illusions: direction repulsion and the direction after-effect (DAE). Both result in inaccurate perception of direction when a moving pattern is either superimposed on (direction repulsion), or presented following adaptation to (DAE), another pattern moving in a different direction. Remarkable similarities in tuning characteristics suggest that common processes underlie the two illusions. What is not clear, however, is whether the processes driving the two illusions are expressions of the same or different neural substrates. Here we report two experiments demonstrating that direction repulsion and the DAE are, in fact, expressions of different neural substrates. Our strategy was to use each of the illusions to create a distorted perceptual representation upon which the mechanisms generating the other illusion could potentially operate. We found that the processes mediating direction repulsion did indeed access the distorted perceptual representation induced by the DAE. Conversely, the DAE was unaffected by direction repulsion. Thus parallels in perceptual phenomenology do not necessarily imply common neural substrates. Our results also demonstrate that the neural processes driving the DAE occur at an earlier stage of motion processing than those underlying direction repulsion.
How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues that may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects.
The ability of human observers to detect ‘biological motion’ of humans and animals has been taken as evidence of specialized perceptual mechanisms. This ability remains unimpaired when the stimulus is reduced to a moving array of dots representing only the joints of the agent: the point light walker (PLW) (G. Johansson, 1973). Such stimuli arguably contain underlying form, and recent debate has centered on the contributions of form and motion to their processing (J. O. Garcia & E. D. Grossman, 2008; E. Hiris, 2007). Human actions contain periodic variations in form; we exploit this by using brief presentations to reveal how these natural variations affect perceptual processing. Comparing performance with static and dynamic presentations reveals the influence of integrative motion signals. Form information appears to play a critical role in biological motion processing and our results show that this information is supported, not replaced, by the integrative motion signals conveyed by the relationships between the dots of the PLW. However, our data also suggest strong task effects on the relevance of the information presented by the PLW. We discuss the relationship between task performance and stimulus in terms of form and motion information, and the implications for conclusions drawn from PLW based studies.
Recent studies have shown that reaction times to expressions of anger with averted gaze and fear with direct gaze appear slower than those to direct anger and averted fear. Such findings have been explained by appealing to the notion of gaze/expression congruence with aversion (avoidance) associated with fear, whereas directness (approach) is associated with anger. The current study examined reactions to briefly presented direct and averted faces displaying expressions of fear and anger. Participants were shown four blocked series of faces; each block contained an equal mix of two facial expressions (neutral plus either fear or anger) presented at one viewpoint (either full face or three quarter leftward facing). Participants were instructed to make rapid responses classifying the expressions as either neutral or expressive. Initial analysis of reaction time distributions showed differences in distribution shape with reactions to averted anger and direct fear showing greater skew than those to direct anger and averted fear. Computational modelling, using a diffusion model of decision making and reaction time, showed a difference in the rate of information accrual with more rapid rates of accrual when viewpoint and expression were congruent. This analysis supports the notion of signal congruence as a mechanism through which gaze and viewpoint affect our responses to facial expressions.
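The diffusion-model analysis described in this abstract can be illustrated with a minimal simulation. This is a sketch only: the boundary, noise, and non-decision-time values below are illustrative assumptions, not parameters fitted in the study. The point it demonstrates is simply that a higher rate of information accrual (drift) produces faster decisions, which is the mechanism proposed for the congruent gaze/expression conditions.

```python
import random

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, non_decision=0.3, rng=None):
    """Simulate one trial of a simple drift-diffusion model.

    Evidence accumulates from 0 toward +boundary (e.g. 'expressive')
    or -boundary ('neutral'); drift is the mean rate of accrual.
    Returns (choice, reaction_time_seconds).
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # discrete-time noise scales with sqrt(dt)
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    choice = 1 if x > 0 else 0
    return choice, t + non_decision

rng = random.Random(1)
# Higher drift (congruent viewpoint and expression) should give faster mean RTs.
fast = [simulate_ddm(2.0, rng=rng)[1] for _ in range(200)]
slow = [simulate_ddm(0.8, rng=rng)[1] for _ in range(200)]
print(sum(fast) / len(fast) < sum(slow) / len(slow))  # True
```

Lower drift rates also yield more skewed reaction-time distributions, matching the greater skew reported for the incongruent conditions.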
Interacting with a dynamic environment calls for close coordination between the timing and direction of motor behaviors. Accurate motor behavior requires the system to predict where the target for action will be, both when action planning is complete and when the action is executed. In the current study, we investigate the time course of velocity information accrual in the period leading up to a saccade toward a moving object. In two experiments, observers were asked to generate saccades to one of two moving targets. Experiment 1 looks at the accuracy of saccades to targets that have trial-by-trial variations in velocity. We show that the pattern of errors in saccade landing position is best explained by proposing that trial-by-trial target velocity is taken into account in saccade planning. In Experiment 2, target velocity stepped up or down after a variable interval after the movement cue. The extent to which the movement endpoint reflects pre- or post-step velocity can be used to identify the temporal velocity integration window; we show that the system takes a temporally blurred snapshot of target velocity centered ∼200 ms before saccade onset. This estimate is used to generate a dynamically updated prediction of the target's likely future location.
The point light walker (PLW) has been taken to demonstrate the existence of mechanisms specialised in the processing of biological motion, but the roles of form and motion information in such processing remain unclear. While processing is robust to distortion and exclusion of the local motion signals of the individual elements of the PLW, the motion relationships between the elements – referred to as opponent motion – have been suggested to be crucial. By using Gabor patches oriented in relation to the opponent motion paths as the elements of the PLW, the influence of form and opponent motion information on biological motion processing can be compared. In both a detection-in-noise task and a novel form-distortion task, performance was improved by orienting the elements orthogonally to the opponent motion paths – strengthening the opponent motion signal – compared to orienting them collinearly. However, similar benefits were found with static presentations. Orienting the Gabor patches orthogonally to their opponent motion also benefits contour integration mechanisms by aligning neighbouring elements along the limbs of the PLW. During static presentations this enhanced form cue could account for all the changes in performance, and the lack of additional improvement in moving presentations suggests that the strengthened opponent motion signal may not be affecting performance. We suggest the results demonstrate the primacy of form information over that of opponent motion in the processing of biological motion from PLW stimuli.
In a recent paper, Edwards and Grainger (2006) manipulated the coherence of random dot patterns and found that a reduction in coherence led to an increase in perceived speed; they took this to indicate that vector averaging is not employed in global speed calculations. We would like to take this opportunity to comment on the generality of their findings.
Here, we describe a motion stimulus in which the quality of rotation is fractal. This makes its motion unavailable to the translation-based motion analysis known to underlie much of our motion perception. In contrast, normal rotation can be extracted through the aggregation of the outputs of translational mechanisms. Neural adaptation of these translation-based motion mechanisms is thought to drive the motion after-effect, a phenomenon in which prolonged viewing of motion in one direction leads to a percept of motion in the opposite direction. We measured the motion after-effects induced in static and moving stimuli by fractal rotation. The after-effects found were an order of magnitude smaller than those elicited by normal rotation. Our findings suggest that the analysis of fractal rotation involves different neural processes than those for standard translational motion. Given that the percept of motion elicited by fractal rotation is a clear example of motion derived from form analysis, we propose that the extraction of fractal rotation may reflect the operation of a general mechanism for inferring motion from changes in form.
A growing number of studies in vision research employ analyses of how perturbations in visual stimuli influence behavior on single trials. Recently, we have developed a method along such lines to assess the time course over which object velocity information is extracted on a trial-by-trial basis in order to produce an accurate intercepting saccade to a moving target. Here, we present a simplified version of this methodology, and use it to investigate how changes in stimulus contrast affect the temporal velocity integration window used when generating saccades to moving targets. Observers generated saccades to one of two moving targets which were presented at high (80%) or low (7.5%) contrast. In 50% of trials, target velocity stepped up or down after a variable interval after the saccadic go signal. The extent to which the saccade endpoint can be accounted for as a weighted combination of the pre- or post-step velocities allows for identification of the temporal velocity integration window. Our results show that the temporal integration window takes longer to peak in the low-contrast condition than in the high-contrast condition. By enabling the assessment of how information such as changes in velocity can be used in the programming of a saccadic eye movement on single trials, this study describes and tests a novel methodology with which to look at the internal processing mechanisms that transform sensory visual inputs into oculomotor outputs.
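The logic of recovering an integration window from saccade endpoints can be sketched as follows. Purely for illustration, a Gaussian temporal window is assumed, with a centre and width chosen to echo the roughly 200 ms pre-saccadic centre reported in the companion work; these values and function names are hypothetical, not taken from the study.

```python
import math

def velocity_weight(step_time, saccade_onset, center=0.2, width=0.05):
    """Weight given to the post-step velocity, assuming the system averages
    target velocity under a Gaussian window centred `center` seconds before
    saccade onset (`width` is the Gaussian sigma; both are illustrative).

    Returns the fraction of the window's mass falling after the step,
    i.e. how much the post-step velocity influences the endpoint.
    """
    mu = saccade_onset - center          # window centre in absolute time
    z = (step_time - mu) / (width * math.sqrt(2))
    return 0.5 * math.erfc(z)            # Gaussian mass above step_time

def predicted_endpoint_velocity(v_pre, v_post, step_time, saccade_onset):
    """Endpoint reflects a weighted mix of pre- and post-step velocities."""
    w = velocity_weight(step_time, saccade_onset)
    return (1 - w) * v_pre + w * v_post

# A step long before the window is fully integrated (weight ~1)...
print(round(velocity_weight(0.0, saccade_onset=0.5), 3))   # 1.0
# ...a step at the window centre is weighted ~0.5...
print(round(velocity_weight(0.3, saccade_onset=0.5), 3))   # 0.5
# ...and a step just before saccade onset is mostly ignored.
print(round(velocity_weight(0.5, saccade_onset=0.5), 3))   # 0.0
```

Fitting the centre and width of such a window to observed endpoints, separately for high- and low-contrast trials, is one way the contrast-dependent shift in the window's peak could be quantified.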
Increased vigilance to threat-related stimuli is thought to be a core cognitive feature of anxiety. We sought to investigate the cognitive impact of experimentally induced anxiety, by means of a 7.5% CO2 challenge, which acts as an unconditioned anxiogenic stimulus, on attentional bias for positive and negative facial cues of emotional expression in the dot-probe task. In two experiments we found robust physiological and subjective effects of the CO2 inhalation consistent with the claim that the procedure reliably induces anxiety. Data from the dot-probe task demonstrated an attentional bias to emotional facial expressions compared with neutral faces regardless of valence (happy, angry, and fearful). These attentional effects, however, were entirely inconsistent in terms of their relationship with induced anxiety. We conclude that the previously reported poor reliability of this task is the most parsimonious explanation for our conflicting findings and that future research should develop a more reliable paradigm for measuring attentional bias in this field.
Evidence suggests that underlying the human system processing facial expressions are two types of representation of expression: one dependent on identity and the other independent of identity. We recently presented findings indicating that identity-dependent representations are encoded using a prototype-referenced scheme, in a manner notably similar to that proposed for facial identity. Could it be that identity-independent representations are encoded this way too? We investigated this by adapting participants to anti-expressions and asking them to categorize the expression aftereffect in a prototype probe that was either the same (congruent) or different (incongruent) identity to that of the adapter. To distinguish between encoding schemes, we measured how aftereffect magnitude changed in response to variations in the strength of adapters. The increase in aftereffect magnitude with adapter strength characteristic of prototype-referenced encoding was observed in both congruent and, crucially, incongruent conditions. We conclude that identity-independent representations of expression are indeed encoded using a prototype-referenced scheme. The striking similarity between the encoding of facial identity and both representations of expression raises the possibility that prototype-referenced encoding might be a common scheme for encoding the many types of information in faces needed to enable our complex social interactions.
We used visual search to explore whether the preattentive mechanisms that enable rapid detection of facial expressions are driven by visual information from the displacement of features in expressions, or other factors such as affect. We measured search slopes for luminance and contrast equated images of facial expressions and anti-expressions of six emotions (anger, fear, disgust, surprise, happiness, and sadness). Anti-expressions have an equivalent magnitude of facial feature displacements to their corresponding expressions, but different affective content. There was a strong correlation between these search slopes and the magnitude of feature displacements in expressions and anti-expressions, indicating feature displacement had an effect on search performance. There were significant differences between search slopes for expressions and anti-expressions of happiness, sadness, anger, and surprise, which could not be explained in terms of feature differences, suggesting preattentive mechanisms were sensitive to other factors. A categorization task confirmed that the affective content of expressions and anti-expressions of each of these emotions were different, suggesting signals of affect might well have been influencing attention and search performance. Our results support a picture in which preattentive mechanisms may be driven by factors at a number of levels, including affect and the magnitude of feature displacement. We note that indirect effects of feature displacement, such as changes in local contrast, may well affect preattentive processing. These are most likely to be nonlinearly related to feature displacement and are, we argue, an important consideration for any study using images of expression to explore how affect guides attention.
Event duration perception is fundamental to cognitive functioning. Recent research has shown that localized sensory adaptation compresses perceived duration of brief visual events in the adapted location; however, there is disagreement on whether the source of these temporal distortions is cortical or pre-cortical. The current study reveals that spatially localized duration compression can also be direction contingent, in that duration compression is induced when adapting and test stimuli move in the same direction but not when they move in opposite directions. Because of its direction-contingent nature, the induced duration compression reported here is likely to be cortical in origin. A second experiment shows that the adaptation processes driving duration compression can occur at or beyond human cortical area MT+, a specialized motion center located upstream from primary visual cortex. The direction-specificity of these temporal mechanisms, in conjunction with earlier reports of pre-cortical temporal mechanisms driving duration perception, suggests that our encoding of subsecond event duration is driven by activity at multiple levels of processing.
How do we visually encode facial expressions? Is this done by viewpoint-dependent mechanisms representing facial expressions as two-dimensional templates or do we build more complex viewpoint-independent three-dimensional representations? Recent facial adaptation techniques offer a powerful way to address these questions. Prolonged viewing of a stimulus (adaptation) changes the perception of subsequently viewed stimuli (an after-effect). Adaptation to a particular attribute is believed to target those neural mechanisms encoding that attribute. We gathered images of facial expressions taken simultaneously from five different viewpoints evenly spread from the three-quarter leftward to the three-quarter rightward facing view. We measured the strength of expression after-effects as a function of the difference between adaptation and test viewpoints. Our data show that, although there is a decrease in after-effect over test viewpoint, there remains a substantial after-effect when adapt and test are at differing three-quarter views. We take these results to indicate that neural systems encoding facial expressions contain a mixture of viewpoint-dependent and viewpoint-independent elements. This accords with evidence from single cell recording studies in macaque and is consonant with a view in which viewpoint-independent expression encoding arises from a combination of view-dependent expression-sensitive responses.
Our visual representation of facial expression is examined in this study: is this representation built from edge information, or does it incorporate surface-based information? To answer this question, photographic negation of grey-scale images is used. Negation preserves edge information whilst disrupting the surface-based information. In two experiments visual aftereffects produced by prolonged viewing of images of facial expressions were measured. This adaptation-based technique allows a behavioural assessment of the characteristics encoded by the neural systems underlying our representation of facial expression. The experiments show that photographic negation of the adapting images results in a profound decrease of expression aftereffect. Our visual representation of facial expression therefore appears to not just be built from edge information, but to also incorporate surface information. The latter allows an appreciation of the 3-D structure of the expressing face that, it is argued, may underpin the subtlety and range of our non-verbal facial communication.
Adaptation is a powerful experimental technique that has recently provided insights into how people encode representations of facial identity. Here, we used this approach to explore the visual representation of facial expressions of emotion. Participants were adapted to anti-expressions of six facial expressions. The participants were then shown an average face and asked to classify the face’s expression using one of six basic emotion descriptors. Participants chose the emotion matching the anti-expression they were adapted to significantly more often than they chose any other emotion (e.g., if they were adapted to antifear, they classified the emotion on the average face as fear). The strength of this aftereffect of adaptation decreased as the strength of the anti-expression adapter decreased. These findings provide evidence that visual representations of facial expressions of emotion are coded with reference to a prototype within a multidimensional framework.
Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency spectrum bandwidth on the discrimination of anger and fear facial expressions. The general paradigm was a classification of the expression of faces morphed at varying proportions between anger and fear images, in which SF adaptation and SF subtraction are expected to shift classification of facial emotion. A series of three experiments was conducted. In Experiment 1 subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12–28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2 subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3 subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation to unfiltered and low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high levels of processing. They also suggest that the effect at the high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum.
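The band-limited filtering described in this abstract can be sketched with a radial Fourier-domain mask. This is a minimal illustration, assuming the face spans the full image so that cycles per image equal cycles per face; the function name and the test grating are illustrative, and the band edges mirror the 8, 12–28, and 32 cycles/face cutoffs above.

```python
import numpy as np

def bandpass_filter(image, low_cpf=None, high_cpf=None):
    """Keep only spatial frequencies between low_cpf and high_cpf
    (cycles per face, assuming the face spans the whole image).
    Pass low_cpf=None to keep everything below high_cpf, and vice versa.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h           # vertical frequency, cycles per image
    fx = np.fft.fftfreq(w) * w           # horizontal frequency, cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = np.ones_like(radius, dtype=bool)
    if low_cpf is not None:
        mask &= radius >= low_cpf
    if high_cpf is not None:
        mask &= radius <= high_cpf
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# A pure 16-cycle grating survives the 12-28 cycles/face band intact,
# but is removed entirely by a low-pass (<8 cycles/face) filter.
x = np.arange(64)
grating = np.sin(2 * np.pi * 16 * x / 64)[None, :] * np.ones((64, 1))
mid = bandpass_filter(grating, 12, 28)
low = bandpass_filter(grating, None, 8)
print(np.allclose(mid, grating, atol=1e-8), np.allclose(low, 0, atol=1e-8))
```

Applying the same mask to face photographs (rather than a grating) would produce the low-, middle-, and high-pass stimuli of the kind used across the three experiments.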
The increasing ubiquity of haptic displays (e.g., smart phones and tablets) necessitates a better understanding of the perceptual capabilities of the human haptic system. Haptic displays will soon be capable of locally deforming to create simple 3D shapes. This study investigated the sensitivity of our haptic system to a fundamental component of shapes: edges. A novel set of eight high quality shape stimuli with test edges that varied in sharpness were fabricated in a 3D printer. In a two-alternative forced-choice task, blindfolded participants were presented with two of these shapes side by side (one the reference, the other selected randomly from the remaining set of seven) and, after actively exploring the test edge of each shape with the tip of their index finger, reported which shape had the sharper edge. We used a model selection approach to fit optimal psychometric functions to performance data, and from these obtained just noticeable differences (JNDs) and Weber fractions. In Experiment 1, participants performed the task with four different references. With sharpness defined as the angle at which one surface meets the horizontal plane, the four JNDs closely followed Weber’s Law, giving a Weber fraction of 0.11. Comparisons to previously reported Weber fractions from other haptic manipulations (e.g. amplitude of vibration) suggest we are sufficiently sensitive to changes in edge sharpness for this to be of potential utility in the design of future haptic displays. In Experiment 2, two groups of participants performed the task with a single reference but different exploration strategies; one was limited to a single touch, the other unconstrained and free to explore as they wished. As predicted, the JND in the free exploration condition was lower than that in the single touch condition, indicating exploration strategy affects sensitivity to edge sharpness.
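The Weber's Law relationship invoked in this abstract (JND proportional to the reference intensity, with constant of proportionality k, the Weber fraction) can be made concrete with a short sketch. The JND and reference values below are hypothetical numbers chosen only to be consistent with the 0.11 fraction reported; they are not the study's data.

```python
def weber_fraction(jnds, references):
    """Estimate the Weber fraction k from JNDs measured at several
    reference intensities, assuming Weber's Law: JND = k * reference.
    Computed as the least-squares slope of a line through the origin."""
    num = sum(j * r for j, r in zip(jnds, references))
    den = sum(r * r for r in references)
    return num / den

# Hypothetical edge-sharpness JNDs (degrees) at four reference angles,
# illustrating a Weber fraction of ~0.11 like the one reported above.
references = [20.0, 35.0, 50.0, 65.0]
jnds = [2.2, 3.9, 5.5, 7.1]
print(round(weber_fraction(jnds, references), 2))  # 0.11
```

Under Weber's Law a constant fraction means the JND at a 50-degree reference edge is simply 0.11 * 50 = 5.5 degrees, which is how a single number summarises sensitivity across all four references.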
Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm, or average, face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average test face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based, but not exemplar-based, coding, aftereffects were larger for strong than for weak adaptors in both age groups. These results indicate that, like adults', children's coding of facial expressions is norm-based.
Edges are fundamental properties of our environment and the objects we interact with. There is a lack of research on the haptic perception of edges, especially the sharpness of an edge. Skinner et al. [2013 PLoS ONE, 8(9): e73283] found that haptic discriminability of sharpness was clearly superior when using a relatively unrestrained, free exploration strategy compared with a static single touch strategy. In the free exploration condition two distinct movement patterns were frequently used by participants: a proximal-distal movement of the fingerpad across the test edge and a medial-lateral movement of the fingerpad along the test edge. Here, using the same stimuli and two-alternative forced-choice method of constant stimuli as Skinner et al. (2013), we demonstrate that a proximal-distal movement results in substantially lower sharpness discrimination thresholds than a medial-lateral movement. The underlying neurophysiology and implications for the design of haptic displays are considered.
Previous research has shown that prior adaptation to a spatially circumscribed, oscillating grating results in the duration of a subsequent stimulus briefly presented within the adapted region being underestimated. There is an ongoing debate about where in the motion processing pathway the adaptation underlying this distortion of sub-second duration perception occurs. One position is that the LGN and, perhaps, early cortical processing areas are likely sites for the adaptation; an alternative suggestion is that visual area MT+ contains the neural mechanisms for sub-second timing; and a third position proposes that the effect is driven by adaptation at multiple levels of the motion processing pathway. A related issue is the frame of reference, retinotopic or spatiotopic, in which adaptation-induced duration distortion occurs. We addressed these questions by having participants adapt to a unidirectional random dot kinematogram (RDK), and then measuring the perceived duration of a 600 ms test RDK positioned in either the same retinotopic or the same spatiotopic location as the adaptor. We found that, when it did occur, duration distortion of the test stimulus was direction contingent; that is, it occurred when the adaptor and test stimuli drifted in the same direction, but not when they drifted in opposite directions. Furthermore, the duration compression was evident primarily under retinotopic viewing conditions, with little evidence of duration distortion under spatiotopic viewing conditions. Our results support previous research implicating cortical mechanisms in the duration encoding of sub-second visual events, and reveal that these mechanisms encode duration within a retinotopic frame of reference.