
    Guido Biele

    Both normative and many descriptive theories of decision making under risk are based on the notion that outcomes are weighted by their probability, with subsequent maximization of the (subjective) expected outcome. Numerous investigations from psychology, economics, and neuroscience have produced evidence consistent with this notion. However, this research has typically investigated choices involving relatively affect-poor, monetary outcomes. We compared choice in relatively affect-poor, monetary lottery problems with choice in relatively affect-rich medical decision problems. Computational modeling of behavioral data and model-based neuroimaging analyses provide converging evidence for substantial differences in the respective decision mechanisms. Relative to affect-poor choices, affect-rich choices yielded a more strongly curved probability weighting function of cumulative prospect theory, thus signaling that the psychological impact of probabilities is strongly diminished for aff...
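    The curvature the abstract refers to can be made concrete with the one-parameter probability weighting function of Tversky and Kahneman (1992), a common instantiation of cumulative prospect theory's weighting function (the paper's exact parameterization may differ). Smaller values of the curvature parameter gamma produce a more strongly curved inverse-S shape: small probabilities are overweighted more and moderate-to-large probabilities underweighted more, which is the pattern reported here for affect-rich choices.

```python
# Tversky-Kahneman (1992) one-parameter probability weighting function.
# gamma = 1 reproduces objective probabilities; smaller gamma -> stronger
# inverse-S curvature (overweighting of small p, underweighting of large p).
def weight(p, gamma):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

print(weight(0.5, 1.0))   # linear weighting: 0.5
# A strongly curved function (gamma = 0.4, as for affect-rich choices)
# inflates a 1% probability far more than a mildly curved one (gamma = 0.9).
print(weight(0.01, 0.4) > weight(0.01, 0.9))  # True
# ...and deflates a 90% probability more strongly as well.
print(weight(0.9, 0.4) < weight(0.9, 0.9))    # True
```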
    Perceptual decision making in monkeys relies on decision neurons, which accumulate evidence and maintain choices until a response is given. In humans, several brain regions have been proposed to accumulate evidence, but it is unknown whether these regions also maintain choices. To test whether accumulator regions in humans also maintain decisions, we compared delayed and self-paced responses in a face/house discrimination decision-making task. Computational modeling and fMRI results revealed dissociated processes of evidence accumulation and decision maintenance, with potential accumulator activations found in the dorsomedial prefrontal cortex, right inferior frontal gyrus, and bilateral insula. Potential maintenance activation spanned the frontal pole, temporal gyri, precuneus, and the lateral occipital and frontal orbital cortices. A quantitative reverse-inference meta-analysis performed to differentiate the functions associated with the identified regions did not narrow down the potential accumulation regions, but suggested that response maintenance might rely on a verbalization of the response.
    Deficient reward processing has gained attention as an important aspect of ADHD, but little is known about reward-based decision-making (DM) in adults with ADHD. This article summarizes research on DM in adult ADHD and contextualizes DM deficits by comparing them to attention deficits. Meta-analytic methods were used to calculate average effect sizes for different DM domains and continuous performance task (CPT) measures. None of the 59 included studies (DM: 12 studies; CPT: 43; both: 4) had indications of publication bias. DM and CPT measures showed robust, small to medium effects. Large effect sizes were found for a drift diffusion model analysis of the CPT. The results support the existence of DM deficits in adults with ADHD, which are of similar magnitude as attention deficits. These findings warrant further examination of DM in adults with ADHD to improve the understanding of underlying neurocognitive mechanisms.
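    The meta-analytic averaging mentioned above typically pools study-level effect sizes by inverse-variance weighting, so that more precise studies contribute more to the average. A minimal fixed-effect sketch (the numbers below are hypothetical, not taken from the review):

```python
# Fixed-effect meta-analytic average: each study's effect size is weighted
# by the inverse of its sampling variance, so precise studies count more.
def pooled_effect(effects, variances):
    weights = [1 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Hypothetical effect sizes (Cohen's d) and sampling variances for three
# studies of one decision-making domain.
print(round(pooled_effect([0.30, 0.55, 0.40], [0.04, 0.02, 0.08]), 3))
```

    Random-effects models add a between-study variance component to each weight, but the weighting logic is the same.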
    The mesocorticolimbic dopamine (DA) system linking the dopaminergic midbrain to the prefrontal cortex and subcortical striatum has been shown to be sensitive to reinforcement in animals and humans. Within this system, coexistent segregated striato-frontal circuits have been linked to different functions. In the present study, we tested patients with Parkinson's disease (PD), a neurodegenerative disorder characterized by dopaminergic cell loss, on two reward-based learning tasks assumed to differentially involve dorsal and ventral striato-frontal circuits. Fifteen non-depressed and non-demented PD patients on levodopa monotherapy were tested both on and off medication. Levodopa had beneficial effects on performance on an instrumental learning task with constant stimulus-reward associations, hypothesized to rely on dorsal striato-frontal circuits. In contrast, performance on a reversal learning task with changing reward contingencies, relying on ventral striato-frontal structures, was better in the unmedicated state. These results are in line with the “overdose hypothesis,” which assumes detrimental effects of dopaminergic medication on functions relying upon less affected regions in PD. This study demonstrates, in a within-subject design, a double dissociation of dopaminergic medication and performance on two reward-based learning tasks differing in whether reward contingencies are constant or dynamic. There was no evidence for a dose effect of levodopa on reward-based behavior, with the patients' actual levodopa dose being uncorrelated with their performance on the reward-based learning tasks.
    Using neuroimaging in combination with computational modeling, this study shows that decision threshold modulation for reward maximization is accompanied by a change in effective connectivity within corticostriatal and cerebellar-striatal brain systems. Research on perceptual decision making suggests that people make decisions by accumulating sensory evidence until a decision threshold is crossed. This threshold can be adjusted to changing circumstances, to maximize rewards. Decision making thus requires effectively managing the amount of accumulated evidence versus the amount of available time. Importantly, the neural substrate of this decision threshold modulation is unknown. Participants performed a perceptual decision-making task in blocks with identical duration but different reward schedules. Behavioral and modeling results indicate that human subjects modulated their decision threshold to maximize net reward. Neuroimaging results indicate that decision threshold modulation was achieved by adjusting effective connectivity within corticostriatal and cerebellar-striatal brain systems, the former being responsible for processing of accumulated sensory evidence and the latter being responsible for automatic, subsecond temporal processing. Participants who adjusted their threshold to a greater extent (and gained more net reward) also showed a greater modulation of effective connectivity. These results reveal a neural mechanism that underlies decision makers' abilities to adjust to changing circumstances to maximize reward.
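    The speed-accuracy trade-off that makes threshold modulation worthwhile can be illustrated with a minimal accumulation-to-bound simulation, a discrete random-walk sketch of the sequential sampling idea (illustrative only; the paper's fitted model and parameters are not reproduced here). Lowering the threshold shortens decision times at the cost of accuracy, which is what lets a decision maker fit more trials into a block of fixed duration.

```python
import random

# Evidence accumulates until it crosses +threshold (correct response) or
# -threshold (error). Drift reflects stimulus quality; noise is momentary
# fluctuation. All parameter values here are hypothetical.
def simulate(threshold, drift=0.1, noise=1.0, n_trials=2000, seed=0):
    rng = random.Random(seed)
    correct, steps = 0, 0
    for _ in range(n_trials):
        x, t = 0.0, 0
        while abs(x) < threshold:
            x += drift + rng.gauss(0, noise)
            t += 1
        correct += x >= threshold
        steps += t
    return correct / n_trials, steps / n_trials  # (accuracy, mean RT in steps)

acc_hi, rt_hi = simulate(threshold=8.0)  # cautious: slow but accurate
acc_lo, rt_lo = simulate(threshold=3.0)  # liberal: fast but error-prone
```

    Net reward per block is then accuracy times reward per correct trial times the number of trials completed, so the reward-maximizing threshold depends on the block's reward schedule.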
    Perceptual decision making is the process by which information from sensory systems is combined and used to influence our behavior. In addition to the sensory input, this process can be affected by other factors, such as reward and punishment for correct and incorrect responses. To investigate the temporal dynamics of how monetary punishment influences perceptual decision making in humans, we collected electroencephalography (EEG) data during a perceptual categorization task whereby the punishment level for incorrect responses was parametrically manipulated across blocks of trials. Behaviorally, we observed improved accuracy for high relative to low punishment levels. Using multivariate linear discriminant analysis of the EEG, we identified multiple punishment-induced discriminating components with spatially distinct scalp topographies. Compared with components related to sensory evidence, components discriminating punishment levels appeared later in the trial, suggesting that punishment affects primarily late postsensory, decision-related processing. Crucially, the amplitude of these punishment components across participants was predictive of the size of the behavioral improvements induced by punishment. Finally, trial-by-trial changes in prestimulus oscillatory activity in the alpha and gamma bands were good predictors of the amplitude of these components. We discuss these findings in the context of increased motivation/attention, resulting from increases in punishment, which in turn yields improved decision-related processing.
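    The discriminant analysis named above finds, at each time window, the linear combination of channels that best separates the two conditions. A two-channel Fisher discriminant conveys the core idea (the paper's analysis is a regularized multichannel version of this; the data below are hypothetical):

```python
# Fisher's linear discriminant for two conditions (e.g., high vs. low
# punishment) measured on two channels: w = Sw^-1 (m1 - m0), the projection
# that best separates class means relative to within-class scatter.
def fisher_lda_2d(class0, class1):
    def mean(rows):
        n = len(rows)
        return [sum(r[i] for r in rows) / n for i in (0, 1)]
    m0, m1 = mean(class0), mean(class1)
    s = [[0.0, 0.0], [0.0, 0.0]]  # pooled within-class scatter (2x2)
    for rows, m in ((class0, m0), (class1, m1)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in (0, 1):
                for j in (0, 1):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    # Multiply dm by the explicit 2x2 inverse of the scatter matrix.
    return [(s[1][1] * dm[0] - s[0][1] * dm[1]) / det,
            (-s[1][0] * dm[0] + s[0][0] * dm[1]) / det]

# Hypothetical single-trial amplitudes on two channels per condition.
w = fisher_lda_2d([[0.0, 0.1], [0.2, 0.0], [0.1, -0.1]],
                  [[1.0, 1.1], [1.2, 1.0], [1.1, 0.9]])
# Projections w[0]*x0 + w[1]*x1 separate trials of the two conditions.
```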
    The ability to rapidly and flexibly adapt decisions to available rewards is crucial for survival in dynamic environments. Reward-based decisions are guided by reward expectations that are updated based on prediction errors, and processing of these errors involves dopaminergic neuromodulation in the striatum. To test the hypothesis that the COMT gene Val(158)Met polymorphism leads to interindividual differences in reward-based learning, we used the neuromodulatory role of dopamine in signaling prediction errors. We show a behavioral advantage for the phylogenetically ancestral Val/Val genotype in an instrumental reversal learning task that requires rapid and flexible adaptation of decisions to changing reward contingencies in a dynamic environment. Implementing a reinforcement learning model with a dynamic learning rate to estimate prediction error and learning rate for each trial, we discovered that a higher and more flexible learning rate underlies the advantage of the Val/Val genotype. Model-based fMRI analysis revealed that greater and more differentiated striatal fMRI responses to prediction errors reflect this advantage on the neurobiological level. Learning rate-dependent changes in effective connectivity between the striatum and prefrontal cortex were greater in the Val/Val than Met/Met genotype, suggesting that the advantage results from a downstream effect of the prefrontal cortex that is presumably mediated by differences in dopamine metabolism. These results show a critical role of dopamine in processing the weight a particular prediction error has on the expectation updating for the next decision, thereby providing important insights into neurobiological mechanisms underlying the ability to rapidly and flexibly adapt decisions to changing reward contingencies.
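    A dynamic learning rate of the kind described can be sketched with a delta rule whose learning rate tracks recent unsigned prediction errors (a Pearce-Hall-style scheme; this is one common implementation of the idea, not necessarily the paper's exact model, and the parameter values are hypothetical). The learning rate decays while contingencies are stable and rebounds after a reversal, which is what permits rapid, flexible adaptation.

```python
# Delta-rule learning with a dynamic learning rate: alpha rises with large
# unsigned prediction errors and decays when predictions are accurate.
def run(rewards, eta=0.5, alpha0=0.3):
    v, alpha = 0.0, alpha0
    for r in rewards:
        delta = r - v                                 # prediction error
        v += alpha * delta                            # expectation update
        alpha = eta * abs(delta) + (1 - eta) * alpha  # adapt learning rate
    return v, alpha

# Stable block: expectation approaches 1 and the learning rate decays.
v, alpha = run([1] * 20)
# Reversal (reward -> 0): large errors drive the learning rate back up.
v2, alpha2 = run([1] * 20 + [0] * 3)
```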
    Many decisions people make can be described as decisions under risk. Understanding the mechanisms that drive these decisions is an important goal in decision neuroscience. Two competing classes of risky decision making models have been proposed to describe human behavior, namely utility-based models and risk-return models. Here we used a novel investment decision task that uses streams of (past) returns as stimuli to investigate how consistent the two classes of models are with the neurobiological processes underlying investment decisions (where outcomes usually follow continuous distributions). By showing (a) that risk-return models can explain choices behaviorally and (b) that the components of risk-return models (value, risk, and risk attitude) are represented in the brain during choices, we provide evidence that risk-return models describe the neural processes underlying investment decisions well. Most importantly, the observed correlation between risk and brain activity in the anterior insula during choices supports risk-return models more than utility-based models because risk is an explicit component of risk-return models but not of the utility-based models.
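    A standard mean-variance instantiation of a risk-return model makes the separation of value, risk, and risk attitude explicit (illustrative only; the task's actual return streams and the fitted risk measure may differ): subjective value is the expected return minus a risk penalty whose weight captures the individual's risk attitude.

```python
# Risk-return valuation: value = mean return - b * risk, with the return
# variance serving as the risk measure and b as the risk attitude
# (b > 0: risk averse; b < 0: risk seeking).
def risk_return_value(returns, b):
    n = len(returns)
    mean = sum(returns) / n
    risk = sum((r - mean) ** 2 for r in returns) / n
    return mean - b * risk

# Two hypothetical return streams with equal expected return.
safe = [5, 5, 5, 5]
risky = [0, 10, 0, 10]
# A risk-averse decision maker prefers the safe stream; a risk-seeking
# one prefers the risky stream, despite identical means.
```

    In a utility-based model, by contrast, risk has no explicit representation; it enters only implicitly through the curvature of the utility function, which is why an explicit risk signal in the anterior insula favors the risk-return account.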
    This research examines decisions from experience in restless bandit problems. Two experiments revealed four main effects. (1) Risk neutrality: the typical participant did not learn to become risk averse, a contradiction of the hot stove effect. (2) Sensitivity to the transition probabilities ...
    To make decisions based on the value of different options, we often have to combine different sources of probabilistic evidence. For example, when shopping for strawberries on a fruit stand, one uses their color and size to infer, with some uncertainty, which strawberries taste best. Despite much progress in understanding the neural underpinnings of value-based decision making in humans, it remains unclear how the brain represents different sources of probabilistic evidence and how they are used to compute value signals needed to drive the decision. Here, we use a visual probabilistic categorization task to show that regions in ventral temporal cortex encode probabilistic evidence for different decision alternatives, while ventromedial prefrontal cortex integrates information from these regions into a value signal using a difference-based comparator operation.
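    The integration scheme described can be sketched as Bayesian combination of conditionally independent cues followed by a difference-based comparison of the two alternatives (the cue likelihoods below are hypothetical, and the paper's model need not assume cue independence):

```python
# Combine two probabilistic cues (e.g., color and size) for alternative A
# versus alternative B via Bayes' rule with independent cues (naive Bayes).
def posterior(likelihoods_a, likelihoods_b, prior_a=0.5):
    pa, pb = prior_a, 1 - prior_a
    for la, lb in zip(likelihoods_a, likelihoods_b):
        pa, pb = pa * la, pb * lb
    return pa / (pa + pb)  # P(A | cues)

# Hypothetical likelihoods of the observed color and size cues under
# "tasty" (A) versus "not tasty" (B).
p_tasty = posterior([0.8, 0.6], [0.3, 0.5])
# Difference-based comparator: the value signal is the difference in
# evidence for the two alternatives.
value_signal = p_tasty - (1 - p_tasty)
```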