Maarten van der Smagt

    Two gratings moving in opposite directions, presented to different eyes, will result in binocular rivalry. We have shown previously (Paffen et al, 2003, VSS 03, Abstract FR 117) that when a surrounding annulus moving in the same direction as one of the gratings is presented, the grating moving in the opposite direction dominates the percept. Here, we investigate to what extent monocular and binocular mechanisms contribute to this phenomenon. A disc containing two oppositely moving vertical sine-wave gratings was ...
    Pupillometry has received increased interest for its usefulness in measuring various sensory processes as an alternative to behavioural assessments. This is also apparent for multisensory investigations. Studies of the multisensory pupil response, however, have produced conflicting results. Some studies observed super-additive multisensory pupil responses, indicative of multisensory integration (MSI). Others observed additive multisensory pupil responses even though reaction time (RT) measures were indicative of MSI. Therefore, in the present study, we investigated the nature of the multisensory pupil response by combining methodological approaches of previous studies while using supra-threshold stimuli only. In two experiments, we presented observers with auditory and visual stimuli that evoked an (onset) response (be it constriction or dilation) in a simple detection task and a change detection task. In both experiments, the RT data indicated MSI as shown by race model inequality v...
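    The race model inequality referred to above is Miller's bound: under a race account, the cumulative RT distribution for audiovisual trials can never exceed the sum of the two unimodal cumulative distributions at any time point. Below is a minimal sketch of such a test in Python; the function names, RT values and time grid are illustrative assumptions, not the analysis code used in the study.

```python
import numpy as np

def empirical_cdf(rts, t_grid):
    """Proportion of reaction times at or below each time point in t_grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violations(rt_av, rt_a, rt_v, t_grid):
    """Time points at which the audiovisual CDF exceeds Miller's bound
    P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t)."""
    cdf_av = empirical_cdf(rt_av, t_grid)
    bound = np.minimum(empirical_cdf(rt_a, t_grid) + empirical_cdf(rt_v, t_grid), 1.0)
    return t_grid[cdf_av > bound]

# Hypothetical example: audiovisual RTs faster than both unimodal conditions.
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)    # auditory-only RTs (ms)
rt_v = rng.normal(350, 40, 200)    # visual-only RTs (ms)
rt_av = rng.normal(260, 35, 200)   # audiovisual RTs (ms)
t_grid = np.arange(150, 601, 5)
print(race_model_violations(rt_av, rt_a, rt_v, t_grid))  # times (ms) with violations
```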
    Emotional facial expressions are important visual communication signals that indicate a sender’s intent and emotional state to an observer. As such, it is not surprising that reactions to different expressions are thought to be automatic and independent of awareness. What is surprising is that studies show inconsistent results concerning such automatic reactions, particularly when using different face stimuli. We argue that automatic reactions to facial expressions can be better explained, and better understood, in terms of quantitative descriptions of their visual features rather than in terms of the semantic labels (e.g. angry) of the expressions. Here, we focused on overall spatial frequency (SF) and localized Histograms of Oriented Gradients (HOG) features. We used machine learning classification to reveal the SF and HOG features that are sufficient for classification of the first selected face out of two simultaneously presented faces. In other words, we show which visual feat...
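    As a rough illustration of the kind of pipeline implied above (localized HOG features fed into a linear classifier), here is a minimal sketch assuming scikit-image and scikit-learn; the face images, labels and HOG parameters are hypothetical placeholders, not the study's actual features or classifier.

```python
import numpy as np
from skimage.feature import hog
from sklearn.linear_model import LogisticRegression

def hog_features(image, pixels_per_cell=(8, 8)):
    """Histogram-of-Oriented-Gradients descriptor for one grayscale face image."""
    return hog(image, orientations=9, pixels_per_cell=pixels_per_cell,
               cells_per_block=(2, 2))

# Hypothetical data: 40 grayscale "faces" (64x64) and binary "selected first" labels.
rng = np.random.default_rng(2)
faces = rng.random((40, 64, 64))
selected_first = rng.integers(0, 2, 40)

X = np.array([hog_features(face) for face in faces])
clf = LogisticRegression(max_iter=1000).fit(X, selected_first)
# The weights over HOG dimensions indicate which localized features drive selection.
print(clf.coef_.shape)
```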
    During binocular rivalry, perception alternates between two dissimilar images, presented dichoptically. Although binocular rivalry is thought to result from competition at a local level, neighboring image parts with similar features tend to be perceived together for longer durations than image parts with dissimilar features. This simultaneous dominance of two image parts is called grouping during rivalry. Previous studies have shown that this grouping depends on a shared eye-of-origin to a much larger extent than on image content, irrespective of the complexity of a static image. In the current study, we examine whether grouping of dynamic optic flow patterns is also primarily driven by monocular (eye-of-origin) information. In addition, we examine whether image parameters, such as optic flow direction, and partial versus full visibility of the optic flow pattern, affect grouping durations during rivalry. The results show that grouping of optic flow is, as is known for static images...
    Binocular rivalry occurs when the images presented to the two eyes do not match. Instead of fusing into a stable percept, perception during rivalry alternates between images over time. However, during rivalry, perception can also resemble a patchwork of parts of both eyes' images. Such integration of image parts across eyes is relatively rare compared to integration of image parts presented to the same eye, suggesting that integration across space during rivalry is primarily rooted at the early monocular level of processing. However, recent evidence suggests that rivalry, and potentially also integration across space during rivalry, has its basis at multiple stages of processing, including stages at which monocular signals are minimal. As such, integration and competition at these later stages would be driven more by image-based factors, such as continuity and color, than by eye of origin. Because "higher" visual areas also have increasingly larger receptive fields, ima...
    In general, moving sensory stimuli (visual and auditory) can induce illusory sensations of self-motion (i.e. vection) in the direction opposite to the sensory stimulation. The aim of the current study was to examine whether tactile stimulation encircling the waist could induce circular vection (around the body's yaw axis) and whether this type of stimulation would influence participants' walking trajectory and balance. We assessed the strength and direction of perceived self-motion while vision was blocked and while receiving either clockwise or counterclockwise tactile stimulation encircling the waist, or no tactile stimulation. Additionally, we assessed participants' walking trajectory and balance while receiving these different stimulations. Tactile stimulation encircling the waist was found to lead to self-reported circular vection in a subset of participants. In this subset of participants, circular vection was on average experienced in the same direction ...
    Processing quantities such as the number of objects in a set, size, spatial arrangement and time is an essential means of structuring the external world and preparing for action. The theory of magnitude suggests that number and time, among other continuous magnitudes, are linked by a common cortical metric, and their specialization develops from a single magnitude system. In order to investigate potentially shared neural mechanisms underlying numerosity and time processing, we used visual adaptation, a method which can reveal the existence of a dedicated processing system. We reasoned that cross-adaptation between numerosity and duration would concur with the existence of a common processing mechanism, whereas the absence of cross-adaptation would provide evidence against it. We conducted four experiments using a rapid adaptation protocol where participants adapted to either visual numerosity or visual duration and subsequently performed a numerosity or duration discrimination task....
    Human psychophysical and electrophysiological evidence suggests at least two separate visual motion pathways, one tuned to a lower range of speeds and one tuned to a broader and partly overlapping range of higher speeds. It remains unclear whether these two different channels are represented by different cortical areas or by sub‐populations within a single area. We recorded evoked potentials at 59 scalp locations to the onset of a slow (3.5°/s) and fast (32°/s) moving test pattern, preceded by either a slow or fast adapting pattern that moved either in the same direction as, or opposite to, the test motion. Baseline potentials were recorded for slow and fast moving test patterns after adaptation to a static pattern. Comparison of adapted responses with baseline responses revealed that the N2 peak around 180 ms after test stimulus onset was modulated by the preceding adaptation. This modulation depended on both direction and speed. Source localization of baseline potentials as well as direction‐independent motion adaptation revealed cortical areas activated by fast motion to be more dorsal, medial and posterior compared with neural structures underlying slow motion processing. For both speeds, the direction‐dependent component of this adaptation modulation occurred in the same area, located significantly more dorsally compared with neural structures that were adapted in a direction‐independent manner. These results demonstrate for the first time the cortical separation of more ventral areas selectively activated by visual motion at low speeds (and not high speeds) and dorsal motion‐sensitive cortical areas that are activated by both high and low speeds.
    Binocular rivalry occurs when two dissimilar images are dichoptically presented, each to a different eye. Neighboring image-parts with similar features, such as motion or orientation, tend to be perceived together for longer durations than image-parts with dissimilar features, i.e. grouping occurs. Previous studies have shown that this grouping depends on a shared eye-of-origin to a much larger extent than on image content, irrespective of the complexity of a static image. Here, we address the question of whether grouping of optic flow patterns is also primarily driven by monocular information. In addition, we examine whether parameters, such as optic flow direction, and partial versus full visibility of the optic flow structure, affect grouping durations during rivalry. For each eye, two apertures (diameter 1.0°) were presented above and below the fixation dot (diameter ~0.22°). Each aperture contained either optic flow (expansion or contraction) or incoherent motion. The speed of the dots of the optic flow pattern increased from center (0.086°/s) to periphery (1.49°/s). The dots were colored either white or black and observers had to track the color of the dots they perceived in the two apertures during 1-minute trials. The results show that, as for static images, grouping of motion information is primarily affected by its eye-of-origin. The motion direction of the optic flow pattern (i.e. the 'image cue') only affected grouping durations when full optic flow patterns were presented within each aperture. This effect was absent for partial optic flow parts that could be perceived holistically as a single optic flow pattern. These results suggest that grouping during rivalry is primarily driven by monocular information even for complex motion stimuli thought to rely on higher-level motion areas. Meeting abstract presented at VSS 2015.
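    A minimal sketch of a dot field with the radial speed gradient described above (0.086°/s at the centre to 1.49°/s at the aperture edge); the quadratic gradient shape (mentioned in a related abstract below), the frame rate and the dot count are assumptions for illustration, not the actual stimulus code.

```python
import numpy as np

APERTURE_RADIUS = 0.5            # deg; the aperture diameter was 1.0 deg
V_CENTER, V_EDGE = 0.086, 1.49   # dot speed (deg/s) at the centre and the edge
FRAME_RATE = 60.0                # Hz (assumed)

def radial_speed(r):
    """Dot speed as a quadratic function of eccentricity within the aperture."""
    return V_CENTER + (V_EDGE - V_CENTER) * (r / APERTURE_RADIUS) ** 2

def update_dots(xy, direction=+1):
    """Advance dots one frame radially outward (+1, expansion) or inward
    (-1, contraction); dots leaving the aperture or passing the centre are
    redrawn at a random position inside the aperture."""
    r = np.maximum(np.linalg.norm(xy, axis=1), 1e-6)   # avoid division by zero
    r_new = r + direction * radial_speed(r) / FRAME_RATE
    xy = xy * (r_new / r)[:, None]
    out = (r_new > APERTURE_RADIUS) | (r_new <= 0)
    if out.any():
        theta = np.random.uniform(0, 2 * np.pi, out.sum())
        rho = APERTURE_RADIUS * np.sqrt(np.random.uniform(0, 1, out.sum()))
        xy[out] = np.c_[rho * np.cos(theta), rho * np.sin(theta)]
    return xy

# 100 dots placed uniformly inside the aperture, then one frame of expansion.
theta0 = np.random.uniform(0, 2 * np.pi, 100)
rho0 = APERTURE_RADIUS * np.sqrt(np.random.uniform(0, 1, 100))
dots = update_dots(np.c_[rho0 * np.cos(theta0), rho0 * np.sin(theta0)], direction=+1)
```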
    Research on multisensory integration often makes use of stochastic ('race-') models to distinguish performance enhancement in Reaction Times (RTs) due to multisensory integration from enhancement due to statistical facilitation. Only when performance on a multisensory task supersedes that of the race model is it attributed to multisensory 'integration'. Previously (VSS 2014), we have shown that the subjective cross-modal correspondence (i.e. a subjective intensity match) influences the degree to which race-model violations occur. Here we investigate how the resulting inter-individual RT-difference to unimodal auditory and visual stimuli affects multisensory integration. Observers first matched the loudness of a 100ms white noise burst to the brightness of a 0.86° light disc (6.25 cd/m2) presented for 100ms on a darker (4.95 cd/m2) background, using a staircase procedure. In a subsequent speeded detection experiment, the observers were instructed to press a key as soon as an audiovisual, auditory only or visual only target was presented to the left or right of fixation. The subjectively matched loudness as well as +5 dB and -5 dB loudness values were used as auditory stimuli. Catch trials without stimulation were also included. We calculated, for each subject and each auditory condition, the unimodal RT-difference between detecting a visual or auditory stimulus. In addition, we calculated the Multisensory Response Enhancement (MRE), and whether the race-model predictions were violated (RMV). We correlated MRE and average RMV with unimodal RT-differences across observers. Interestingly, the results show a significant negative correlation between MRE magnitude and unimodal RT-difference, but no correlation between MRE and individual RTs, nor any correlation between RMV and the unimodal RT-difference. These results are in line with the model proposed by Otto, Dassy and Mamassian (Journal of Neuroscience, 33, 7463-7474, 2013) and indicate that unimodal stimuli that yield similar RTs in an individual lead to the largest Multisensory Response Enhancement. Meeting abstract presented at VSS 2015.
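    A minimal sketch of the per-observer quantities correlated here, assuming MRE is defined as the relative RT gain of the audiovisual condition over the faster unimodal condition (one common definition; the abstract does not spell out the exact formula). The function names and synthetic data are illustrative only.

```python
import numpy as np

def multisensory_response_enhancement(rt_av, rt_a, rt_v):
    """Percentage RT gain of the audiovisual condition over the faster of the
    two unimodal conditions (based on each observer's mean RTs)."""
    fastest_unimodal = min(np.mean(rt_a), np.mean(rt_v))
    return 100.0 * (fastest_unimodal - np.mean(rt_av)) / fastest_unimodal

def unimodal_rt_difference(rt_a, rt_v):
    """Absolute difference between an observer's mean auditory and visual RTs."""
    return abs(np.mean(rt_a) - np.mean(rt_v))

def correlate_mre_with_rt_difference(av_rts, a_rts, v_rts):
    """Pearson correlation, across observers, between MRE and the unimodal
    RT difference (each argument: one RT array per observer)."""
    mre = [multisensory_response_enhancement(av, a, v)
           for av, a, v in zip(av_rts, a_rts, v_rts)]
    diff = [unimodal_rt_difference(a, v) for a, v in zip(a_rts, v_rts)]
    return np.corrcoef(mre, diff)[0, 1]

# Hypothetical data: 10 observers, 100 trials per condition.
rng = np.random.default_rng(1)
a_rts = [rng.normal(320, 40, 100) for _ in range(10)]
v_rts = [rng.normal(340, 40, 100) for _ in range(10)]
av_rts = [rng.normal(280, 35, 100) for _ in range(10)]
print(correlate_mre_with_rt_difference(av_rts, a_rts, v_rts))
```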
    ABSTRACT Binocular rivalry is thought to occur at multiple stages along the visual processing hierarchy. Although monocular channels in primary visual cortex have been suggested to play an important role, recent studies have hinted at monocular information being available to higher-level visual (motion) areas. These areas, such as the medial superior temporal area (MST), are involved in the processing of radial optic flow. It is known that cells in these areas are selectively tuned to either expansion or contraction, while areas earlier in the visual hierarchy cannot distinguish between these two directions. Previous studies have shown that MST cells tuned to expansion outnumber those tuned to contraction. If monocular information reaches higher-level visual areas, one might expect that these tuning differences play a role in binocular rivalry. Here we question whether the time it takes to reach awareness differs between expanding and contracting optic flow. We used breaking continuous flash suppression to measure the duration until expanding or contracting optic flow broke suppression. Observers viewed the stimuli (3.6° radius) through a mirror stereoscope mounted on a chin rest. A white frame and a noise pattern (subtending 4.9° x 4.9°) surrounded the stimuli to facilitate binocular fusion. During the experiment, one eye viewed a mask (refresh rate 10Hz), which was created by filtering pink (1/f) noise using a low-pass filter (σ = 1.5), while the other eye viewed an either expanding or contracting radial optic flow pattern with a quadratic speed gradient (speed 2.7 deg/s). Observers pressed one of the two response keys to discriminate the optic flow direction as soon as possible within 6-second trials. The results show that expanding optic flow breaks suppression faster than contracting optic flow. These results may, for instance, reflect the larger prevalence of cells tuned to expansion in MST, suggesting monocular contributions to higher-level motion processing. Meeting abstract presented at VSS 2014
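    A minimal sketch of how a low-pass filtered pink-noise mask like the one described above might be generated; treating σ = 1.5 as the cycles-per-degree width of a Gaussian low-pass in the frequency domain, and the image size, are assumptions for illustration only.

```python
import numpy as np

def pink_noise_mask(size=256, deg_per_image=4.9, sigma_cpd=1.5, rng=None):
    """One noise frame: 1/f amplitude spectrum with random phases, Gaussian
    low-pass in the frequency domain, normalised to the 0..1 range."""
    rng = np.random.default_rng() if rng is None else rng
    f = np.fft.fftfreq(size, d=deg_per_image / size)    # cycles per degree
    fx, fy = np.meshgrid(f, f)
    freq = np.hypot(fx, fy)
    freq[0, 0] = 1.0                                    # avoid division by zero at DC
    amplitude = (1.0 / freq) * np.exp(-(freq ** 2) / (2 * sigma_cpd ** 2))
    phase = rng.uniform(0, 2 * np.pi, (size, size))
    image = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return (image - image.min()) / (image.max() - image.min())

# For a 10 Hz CFS mask, a new frame would be generated every 100 ms.
mask = pink_noise_mask()
```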
    Grapheme-color synesthetes perceive achromatic graphemes to be inherently colored. In this study grapheme-color synesthetes and non-synesthetes discriminated (1) the color of visual targets presented along with aurally presented digit primes, and (2) the identity of aurally presented digit targets presented with visual color primes. Reaction times to visual color targets were longer when the color of the target was incongruent with the synesthetic percept reported for the prime. Likewise, discriminating aurally presented digit targets took longer when the color of the prime was incongruent with the synesthetic percept for the target. These priming effects were absent in non-synesthetes. We conclude that binding between digits and colors in grapheme-color synesthetes can occur bidirectionally across senses. The results are in line with the idea that synesthesia is the result of linking inducing stimuli (e.g. digits) to synesthetic percepts (colors) at an abstract - supra-modal - conc...
    Introduction: Rehabilitation techniques for improving cognitive outcomes after stroke can be divided into interventions aimed at compensation and interventions aimed at direct restoration of function (Levin et al., 2009). When activities are carried out in a different, adapted way, we speak of compensation. Compensation is associated with activation in brain areas other than those used before the stroke. When activities are carried out in the same way as before the stroke, we speak of restoration of function. Restoration is associated with neurobiological improvement in the affected brain areas and/or a return to the original activation patterns (Levin et al., 2009). Current cognitive rehabilitation techniques focus mainly on teaching compensation strategies. However, a patient may potentially gain the most if rehabilitation could focus on restoration of function. Teaching compensation strategies would then only be deployed if restoration of function cannot (further) be achieved. The conclusion of the most recent guideline for stroke patients from the Kwaliteitsinstituut voor de Gezondheidszorg CBO (2008), however, was that there is still insufficient evidence for the effectiveness of rehabilitation techniques aimed at restoration of function. This conclusion was drawn because too little research of sufficient quality had been carried out on these techniques. As of 2014, six years on, more studies have been published and a number of interesting techni...
    ABSTRACT Background / Purpose: The unity assumption (that two signals originate from the same event), required for maximal integration, depends on the two signals being most similar (e.g. in time and space). Here we investigate the effects of other, subjective, similarities: loudness of sound matched to brightness of visual stimulus. Main conclusion: Subjectively matched similar stimuli showed stronger multisensory response enhancements than non-matched stimuli (i.e. ±5 dB sounds). This indicates the importance of the subjective unity assumption, and why multisensory response enhancements are not always found. Abstract: Research on multisensory integration often makes use of stochastic (‘race’) models to distinguish a response performance enhancement in Reaction Times (RTs) due to multisensory integration from an enhancement due to probability summation. Only when performance on a multisensory task supersedes that of the race model is it attributed to multisensory ‘integration’. An important factor affecting multisensory integration is the ‘unity assumption’, i.e. the degree to which an observer infers that two sensory inputs are of the same source or event. Apart from the obvious spatial and temporal correspondence, other, often more subjective, similarity estimates might play a role as well. Here we investigate how subjective crossmodal correspondence influences multisensory integration. Observers first matched the loudness of a 100ms white noise burst to the brightness of a 0.86° light disc (6.25 cd/m2) presented for 100ms on a darker (4.95 cd/m2) background, using a staircase procedure. In a subsequent speeded detection experiment, the observers indicated as quickly and accurately as possible whether an audiovisual, auditory only or visual only target was located to the right or left of fixation. The subjectively matched loudness as well as +5 dB and -5 dB loudness values were used as auditory stimuli. Auditory detection was generally faster than visual detection and audio-visual detection was generally fastest. However, only when the subjectively matched loudness was used as auditory stimulus did audio-visual detection supersede the predictions made by the race model. This result demonstrates the importance of subjective correspondence in multisensory integration, and may explain earlier results that found a surprising lack of integration.
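    A minimal sketch of an adaptive staircase for the loudness-brightness matching step described above; the step sizes, starting level, stopping rule and simulated observer are illustrative assumptions, not the published procedure.

```python
def loudness_matching_staircase(respond_louder, start_db=60.0, step_db=4.0,
                                min_step_db=0.5, max_reversals=10):
    """Adjust the sound level until the observer judges loudness and brightness
    to be matched. `respond_louder(level_db)` should return True when the sound
    is judged louder than the disc is bright (e.g. from a key press)."""
    level, last_direction, reversals = start_db, 0, []
    while len(reversals) < max_reversals:
        direction = -1 if respond_louder(level) else +1   # too loud -> step down
        if last_direction and direction != last_direction:
            reversals.append(level)
            step_db = max(step_db / 2.0, min_step_db)     # halve the step at reversals
        level += direction * step_db
        last_direction = direction
    return sum(reversals[-6:]) / len(reversals[-6:])      # mean of the final reversals

# Hypothetical simulated observer whose subjective match sits at 57 dB.
matched_level = loudness_matching_staircase(lambda level_db: level_db > 57.0)
print(round(matched_level, 1))
```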
    Although the neural location of the plaid motion coherence process is not precisely known, the middle temporal (MT) cortical area has been proposed as a likely candidate. This claim rests largely on the neurophysiological findings showing that in response to plaid stimuli, a subgroup of cells in area MT responds to the pattern direction, whereas cells in area V1 respond only to the directions of the component gratings. In Experiment 1, we report that the coherent motion of a plaid pattern can be completely abolished following adaptation to a grating which moves in the plaid direction and has the same spatial period as the plaid features (the so-called “blobs”). Interestingly, we find this phenomenon is monocular: monocular adaptation destroys plaid coherence in the exposed eye but leaves it unaffected in the other eye. Experiment 2 demonstrates that adaptation to a purely binocular (dichoptic) grating does not affect perceived plaid coherence. These data suggest several conclusions:...
    In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory–visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory–visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that addi...
    A horizontally moving vertical grating viewed through a diamond-shaped aperture can be made to appear to move either upwards or downwards by introduction of appropriate depth-ordering cues at the boundaries of the aperture (Duncan et al, 2000, Journal of Neuroscience, 20, 5885–5897). The grating is perceived to move towards (and slide under) occluding ‘near’ surfaces, and parallel to ‘far’ surfaces. Here we show that these depth-ordering cues affect the perceptual interpretation of the motion aftereffect (MAE) as well. After adaptation to unambiguous horizontal motion, the MAE direction deviates from horizontal towards near surfaces. However, the influence of depth-ordering cues on the illusory motion of the MAE is generally less than that seen for ‘real’ motion. Implications for theories of depth-motion and depth–MAE interactions are discussed.
