Abstract
The brain's ability to prioritize behaviorally relevant sensory inputs (i.e., targets) while ignoring irrelevant distractors is crucial for efficient information processing. However, the role of emotional valence in modulating selective attention remains underexplored. This study examined how positive and negative emotions alter the spatial scope of visual selective attention using a modified Eriksen Flanker task. Participants viewed an emotional face cue (happy, angry, or neutral) randomly positioned on the screen and then identified the shape of a subsequent neutral target (bowtie or diamond) at the cued location. Adjacent stimuli either matched the target shape (congruent) or differed (incongruent). Results showed that happy faces increased susceptibility to distractors (i.e., a larger incongruency effect), suggesting a broadening of attentional scope, while angry faces reduced susceptibility (i.e., a smaller incongruency effect), indicating a narrowing of focus. Importantly, the magnitude of this emotion-driven attention modulation was negatively correlated with participants' self-reported levels of psychological distress. Participants with higher stress and depression exhibited weaker attention broadening in response to positive cues. Together, the findings provide behavioral evidence of how emotional valence influences attention scope, offering potential insights into the dynamic interplay between psychological distress, emotional processing, and attention modulation.
Introduction
In a world overflowing with sensory stimuli, the brain's ability to prioritize relevant information while filtering out distractions is essential for efficient processing and decision-making. One critical factor influencing selective attention is the emotional content of sensory stimuli. Research consistently shows that stimuli evoking strong negative emotions, such as snakes, spiders, or angry faces, capture attention more effectively than neutral or positive stimuli1,2,3. This heightened attentional capture reflects an adaptive mechanism that directs resources toward potential threats, enabling rapid responses crucial for survival2,4.
More recent studies have explored the possibility that both positive and negative emotions influence the spatial scope of attention5,6,7,8,9. Positive emotions are hypothesized to broaden attentional focus and enhance the ability to detect peripheral details. For instance, when walking down a busy street, positive emotions might increase awareness of surrounding factors, such as a barking dog or an approaching cyclist. In contrast, negative emotions are thought to narrow attentional focus, drawing attention to immediate, salient details. The so-called "weapon-focus effect" exemplifies how attention can become concentrated on a perceived threat at the expense of peripheral information (e.g., focusing exclusively on the barking dog). These emotion-driven shifts in attention can have significant cognitive and practical implications. By broadening attention, positive emotions may enhance learning and improve the ability to monitor or detect changes in the environment. Conversely, negative emotions, by narrowing attention, could limit situational awareness and impair performance in dynamic or unpredictable settings.
While emotional saliency has consistently been shown to influence attentional capture across various paradigms (e.g., visual search, attentional blink, spatial cueing) and stimulus categories (e.g., faces, words, sounds)5,6,7,8, the specific role of emotional valence in modulating the spatial scope of visual attention remains less understood. Neuroimaging and psychophysics research has revealed that the impacts of emotion on attentional scope can be detected at early perceptual stages9,10,11. For example, an fMRI study found that the strength of V1 responses to unattended peripheral stimuli was modulated by the emotional expression of a central target face. Happy faces elicited stronger neural responses to nearby unattended stimuli compared to angry faces, suggesting that emotions influence early perceptual encoding through top-down feedback9. Similarly, psychophysics experiments have demonstrated that exposure to valenced faces alters visual contrast perception, which, under the Normalization Model of Attention12,13, could be interpreted as indicating a broadening or narrowing of attentional field size9,10. Nevertheless, to date, there is little direct evidence linking emotional valence to attention scope modulation.
This study aims to achieve two objectives. First, we investigate whether emotional face cues modulate attentional focus using a modified flanker task. The Eriksen Flanker paradigm is ideal for measuring selective attention as it directly assesses participants' ability to selectively attend to the target while ignoring competing distractors. Participants viewed an emotional face cue (happy, angry, or neutral) randomly positioned on the screen and were subsequently asked to identify the shape of a neutral target (bowtie or diamond) appearing at the same location as the face cue (Fig. 1). We hypothesized that happy faces would broaden the spatial scope of visual attention, weakening the ability to suppress the flanking stimuli and producing a larger incongruency effect. In contrast, negative faces were hypothesized to restrict attentional focus, increasing the ability to ignore the distractors and thereby producing a smaller incongruency effect. The second objective is to explore whether emotional modulation of attention varies with participants' psychological states. Research indicates that individuals with high psychological distress, such as anxiety or depression, show attentional biases toward negative stimuli14,15,16. We examine whether emotional cues modulate attention differently in individuals with self-reported high distress. Our hypothesis is that higher distress is associated with weaker emotion-driven modulation.
Materials and methods
Participants
Participants were 30 native Thai undergraduates (mean age = 20.97; 16 females; 1 left-handed) from King Mongkut's University of Technology Thonburi (KMUTT). Each participant had normal or corrected-to-normal vision and received financial compensation for participation. Written informed consent was obtained from all participants prior to the experiment. The study was approved by the Institutional Review Board (IRB) at King Mongkut's University of Technology Thonburi, and all methods were performed in accordance with the relevant guidelines and regulations.
The number of participants was estimated based on a previous study that used a behavioral Eriksen Flanker task to assess the impact of emotion on attention17. Since the effect size was not directly reported in that study, ω² was estimated from the reported ANOVA outcome and converted to Cohen's f to facilitate the power analysis. The estimated effect size (Cohen's f) for Experiment 1A in that study was 0.81. Consequently, a minimum sample size of 20 participants was required to achieve a statistical power of 0.95 at a significance level of 0.05 ("pwr" package; R software18). To ensure robustness and account for potential variability in participants' responses, we opted to recruit 30 participants. Data collection was not influenced by interim analyses or any post-hoc adjustments.
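For reference, the conversion from the estimated ω² to Cohen's f presumably followed the standard relation (shown here as an assumption, since the exact formula used was not reported):

```latex
f = \sqrt{\frac{\omega^2}{1 - \omega^2}}
```

An ω² of roughly 0.40 under this relation yields f ≈ 0.81, consistent with the effect size reported above.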
Stimuli
Emotional cues consisted of seventy-two face images selected from the Nimstim Face Database19. To ensure that these images elicited the desired emotional reactions from our Thai participants, a separate mini-experiment was conducted. In this experiment, an independent group of 30 Thai observers (mean age: 20.20; 14 females; 1 left-handed) who did not take part in the main experiment rated the emotional content of each image in the Nimstim database. Three types of ratings were acquired: (1) emotional valence, or the perceived level of positivity or negativity of each facial expression, on a 0–9 Likert scale; (2) emotional intensity, or the degree of emotion expressed in the image, on a 0–9 Likert scale; and (3) emotional category, or the interpreted emotional content of each image (i.e., happy, sad, fear, angry, surprised, disgusted, calm, neutral). Data from this mini-experiment were used to further screen face images for our study. Specifically, the image selection ensured that three criteria were met: (1) each selected image had a mean categorization score above 70%; (2) the mean categorization accuracies did not statistically differ across selected categories (i.e., 90.83%, 87.78%, and 90.00% for happy, angry, and neutral emotions, respectively); and (3) the emotional intensity ratings were statistically comparable across the happy and angry images (6.72 and 6.93, respectively). Given the cultural differences between our participants and those in the original Nimstim validation study19, we did not use the original ratings for image selection.
From the total of 673 images in the original database, 72 face images were selected from 24 actors, each expressing happy, angry, and neutral emotions (i.e., 3 images per actor). These images were taken from 11 female and 13 male actors, representing the following ethnic backgrounds: 16 Europeans, 4 Asians, 3 Africans, and 1 Latino-American. To maximize the efficacy of the selected images in evoking the desired emotional response, happy and angry expressions were chosen to represent positive and negative emotions, respectively, due to the highest accuracy scores observed for these emotional categories compared to others. Only images of faces with closed mouths were chosen because, as noted by the original authors19, these images were noticeably perceptually dissimilar from the open-mouth face images due to the visibility of teeth or the shape of the mouth. Each image was converted to grayscale and manually cropped to remove hair, ears, and other non-essential details prior to the experiment.
Experimental Procedure
Each participant was seated in a comfortable chair approximately 57 cm away from the computer screen. In each trial, participants observed a 2000-ms central fixation, followed by a 75-ms face cue (width: 4.4°, height: 6.4°), which appeared at one of twelve possible locations around the central fixation (eccentricity: 9.8°; see Fig. 1). We presented the cue and stimuli at slightly different eccentricities to prevent potential masking effects. After a brief period with a randomly varied cue-stimulus interval (CSI; 125, 250, or 500 ms), 12 basic shape stimuli appeared on the screen for 1,550 ms (eccentricity: 10.6°; diameter: 3.6°). Participants were instructed to discriminate the shape of the target (bowtie vs. diamond) while ignoring the surrounding distractors. Stimulus congruency was manipulated by altering the shapes of the two flanking distractors on either side of the target. Participants were given three seconds to respond by pressing either "j" or "k" with their right index and middle fingers, respectively. Visual feedback was then provided, indicating whether the response was correct, incorrect, or too slow (i.e., if the response took longer than 3,000 ms). The next trial began after an inter-trial interval of 1,000 ± 50 ms.
Experimental design. Participants were required to discriminate the shape of the target stimulus (depicted in orange) while ignoring the flanking distractors (depicted in grey). Critical experimental manipulations included altering the emotional categories of face cues (angry, happy, and neutral) and stimulus congruency (congruent and incongruent). Time delays between cue and stimulus onset (cue-stimulus interval, or CSI) were randomized across trials (125, 250, 500 ms). The colors orange and grey are used for illustrative purposes only.
The trial structure was designed to achieve an even distribution of stimulus conditions. Images depicting three facial emotions (happy, angry, neutral) were presented in both upright and inverted orientations at one of twelve possible screen locations. These stimulus conditions were paired with three cue-stimulus intervals (125 ms, 250 ms, and 500 ms) and two congruency conditions (congruent, incongruent), creating a total of 432 stimulus combinations (3 emotions × 2 orientations × 12 locations × 3 CSIs × 2 congruency conditions). For each participant, these combinations were duplicated to create a total of 864 trials. The trial order was randomly shuffled and distributed across 16 blocks of 54 trials each. The entire experiment lasted approximately 2.5 h.
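The factorial crossing described above can be sketched in a few lines (a schematic in Python for illustration; the actual experiment was programmed in MATLAB, and the condition labels here are ours):

```python
import itertools
import random

emotions = ["happy", "angry", "neutral"]
orientations = ["upright", "inverted"]
locations = list(range(12))        # twelve positions around fixation
csis = [125, 250, 500]             # cue-stimulus intervals in ms
congruency = ["congruent", "incongruent"]

# Full factorial crossing: 3 x 2 x 12 x 3 x 2 = 432 unique conditions
conditions = list(itertools.product(emotions, orientations, locations,
                                    csis, congruency))
assert len(conditions) == 432

# Each combination appears twice, giving 864 trials, which are
# shuffled and split into 16 blocks of 54 trials each
trials = conditions * 2
random.shuffle(trials)
blocks = [trials[i * 54:(i + 1) * 54] for i in range(16)]
assert len(trials) == 864 and all(len(b) == 54 for b in blocks)
```

Duplicating the full crossing before shuffling guarantees that every condition occurs equally often for every participant, which is what "an even distribution of stimulus conditions" requires.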
Each participant received a brief practice session prior to the experiment. Additionally, the short version of the Depression, Anxiety, and Stress Scale (DASS-21)20 was administered to assess current and recent levels of psychological distress. Participants were required to rate their experiences of negative affect over the previous week on a 4-point scale. The questionnaire consisted of 21 items, evenly divided into three subscales (depression, anxiety, and stress; 7 items per scale). The Thai translation of the questionnaire, which has been extensively validated in previous studies to ensure translation accuracy and validity21,22, was used. Instructions were given in Thai throughout the experiment.
Stimulus presentation was carried out in a dark room on a 24-inch LG monitor with a 144 Hz refresh rate. MATLAB (R2020b) with the Psychophysics Toolbox-3 package23 was used to execute the display sequences and collect response data.
Data analysis
Data from participants with average accuracy scores more than three standard deviations below the mean were excluded, resulting in the removal of one participant. The final dataset included twenty-nine participants (mean age = 21.03 years; 15 females; 1 left-handed; total trials = 25,056). Planned analyses used repeated-measures analysis of variance (ANOVA) implemented in R18. An omnibus four-way ANOVA was first performed to analyze the effects of facial emotion (happy, angry, neutral), facial orientation (upright, inverted), stimulus congruency (congruent, incongruent), and cue-stimulus interval (CSI; 125, 250, and 500 ms) on trial accuracy. Following the identification of a significant three-way interaction (emotion × congruency × orientation), separate two-way ANOVAs were conducted to probe the nature of the interaction between facial emotion and stimulus congruency. Simple main effects tests with Bonferroni-corrected p-values were then performed to examine the direction of the attention modulation effect for each emotional category. Effect sizes were calculated using generalized eta squared (ges), which quantifies the proportion of total variance in the dependent variable attributable to each factor.
Attention modulation score
To explore how depression, anxiety, and stress impact attention modulation, we conducted a correlation analysis using the DASS scores and a metric called the "Attention Modulation Score." This score quantifies emotion-driven changes in attentional focus by measuring the difference in the incongruency effect (i.e., susceptibility to flanker interference) between emotional pairs (e.g., happy vs. angry).
For each participant, we first calculated the difference in mean accuracy between congruent and incongruent trials (congruent − incongruent) for each emotional category and facial orientation. This produced six incongruency effects corresponding to the six possible combinations (i.e., happy_upright, happy_inverted, angry_upright, angry_inverted, neutral_upright, and neutral_inverted). To isolate effects specific to upright faces, we subtracted incongruency effects across orientations (upright − inverted), yielding normalized incongruency effects for happy, angry, and neutral emotions.
The Attention Modulation Score was then computed by subtracting normalized incongruency effects between specific emotion pairs: happy − neutral, angry − neutral, and happy − angry. Each emotion pair reflects a different aspect of attentional modulation. Specifically, the happy − neutral and angry − neutral comparisons indicate the degree to which happy and angry emotions increased susceptibility to flanker interference (i.e., attention broadening) relative to the neutral baseline. Positive values reflect a broadening of attention compared to baseline, while negative values indicate a narrowing of attention. The happy − angry comparison reflects the extent to which happy faces induced a broader attentional scope compared to angry faces. Finally, correlation analyses were conducted to explore the relationship between the Attention Modulation Score and the depression, anxiety, and stress subscales of the DASS questionnaire.
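The two-step computation described above can be written out explicitly (a minimal sketch; the accuracy values below are invented placeholders, not data from this study):

```python
# Mean accuracies (proportion correct) per emotion x orientation x congruency;
# the numbers are made-up placeholders for illustration only
acc = {
    ("happy",   "upright"):  {"congruent": 0.97, "incongruent": 0.93},
    ("happy",   "inverted"): {"congruent": 0.97, "incongruent": 0.95},
    ("angry",   "upright"):  {"congruent": 0.96, "incongruent": 0.95},
    ("angry",   "inverted"): {"congruent": 0.97, "incongruent": 0.95},
    ("neutral", "upright"):  {"congruent": 0.97, "incongruent": 0.94},
    ("neutral", "inverted"): {"congruent": 0.97, "incongruent": 0.95},
}

# Step 1: incongruency effect = congruent accuracy - incongruent accuracy
inc = {k: v["congruent"] - v["incongruent"] for k, v in acc.items()}

# Step 2: normalize by subtracting the inverted-face baseline (upright - inverted)
norm = {e: inc[(e, "upright")] - inc[(e, "inverted")]
        for e in ("happy", "angry", "neutral")}

# Step 3: Attention Modulation Scores for the three emotion pairs
ams = {
    "happy-neutral": norm["happy"] - norm["neutral"],
    "angry-neutral": norm["angry"] - norm["neutral"],
    "happy-angry":   norm["happy"] - norm["angry"],
}
```

With these placeholder values the happy − angry score comes out positive, the pattern that would indicate broader attention after happy than after angry cues.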
Results
The mean accuracy score (and standard deviation) in the Flanker task was 96.28% (0.19%). The average scores (SDs) from the DASS-21 questionnaire were 9.10 (6.90) on the depression scale, 12.14 (8.93) on the anxiety scale, and 13.10 (7.63) on the stress scale. According to the standard DASS scoring guidelines20, these scores suggest a normal level of depression, a moderate level of anxiety, and a normal level of stress among our subject population.
Valence-Induced attention modulation
A four-way repeated-measures ANOVA was performed to investigate the impact of facial emotion (happy, angry, neutral), facial orientation (upright, inverted), stimulus congruency (congruent, incongruent), and cue-stimulus interval (CSI; 125, 250, and 500 ms) on trial accuracy. As expected, the analysis yielded a significant main effect of stimulus congruency (F(1, 28) = 35.81, p < 0.001, ges = 0.07), with incongruent trials eliciting lower average accuracy scores than congruent trials. Importantly, a significant three-way interaction was also observed among stimulus congruency, facial emotion, and facial orientation (F(2, 56) = 5.36, p < 0.001, ges = 0.007). This interaction suggests that the ability to selectively attend to the target stimulus varied depending on the emotional valence and the orientation of the preceding face cue. The analysis revealed no significant main effects of emotion, orientation, or CSI on trial accuracy (p > 0.1). In addition, no significant interactions between CSI and the other experimental variables were observed (p > 0.1). Consequently, subsequent analyses combined data across the different cue-stimulus intervals.
To interpret the three-way interaction, data from upright and inverted face trials were submitted to separate repeated-measures ANOVAs that included facial emotion and stimulus congruency as within-subjects variables. For upright faces, the analysis revealed a significant interaction between facial emotion and stimulus congruency (F(2, 56) = 6.45, p < 0.001, ges = 0.02; see Fig. 2), suggesting that the presentation of upright emotional face cues indeed alters the spatial scope of visual selective attention. For inverted faces, no significant interaction between emotion and congruency was observed (F(2, 56) = 0.30, p > 0.1, ges = 0.0008). This suggests that emotion-dependent attention modulation was evident only when the face cues were presented in their canonical orientation, where their semantic content and identities were preserved.
Experiment Results. For upright faces, a significant interaction between facial emotion (angry, happy, neutral) and stimulus congruency (congruent, incongruent) was observed, suggesting that emotional valence modulates the spatial scope of visual selective attention. No interaction was observed for inverted faces, indicating that the effect cannot be attributed to perceptual dissimilarities across image categories. Error bars represent the standard error of the mean (SEM). Asterisks (*, **, ***) indicate statistically significant results from simple main effects tests, with p-values < 0.05, < 0.01, and < 0.001, respectively. Due to restrictions on the public display of original images from the NimStim dataset, the face images shown in this figure were provided by a laboratory member serving as a model and were not used in the actual experiment.
Post-hoc analyses were conducted to examine the direction of the incongruency effect across facial emotions. For positive faces displayed in the upright orientation, simple main effects tests revealed higher accuracy on congruent trials than on incongruent trials (mean difference = 4.0%, p_Bonf < 0.001, ges = 0.22). However, this accuracy difference disappeared for trials with upright negative faces (mean difference = 0.9%, p_Bonf > 0.1, ges = 0.03). As expected, a significant incongruency effect was observed for upright neutral faces (mean difference = 2.5%, p_Bonf < 0.01, ges = 0.14). For faces presented in the inverted orientation, a significant incongruency effect was observed regardless of emotion type: positive (mean difference = 1.9%, p_Bonf = 0.02, ges = 0.09), negative (mean difference = 2.4%, p_Bonf = 0.01, ges = 0.13), and neutral (mean difference = 2.4%, p_Bonf < 0.01, ges = 0.12). Together, these results suggest that the observed valence-induced attention modulation occurred in the expected direction, with a broadening of attentional scope in response to positive faces and a narrowing in response to negative faces.
For the RT data, a four-way repeated-measures ANOVA revealed a significant main effect of stimulus congruency on response time in correct trials (F(1, 28) = 59.3, p < 0.001, ges = 0.02), indicating that performance was faster on congruent than incongruent trials (mean RT = 568.9 ms and 591.1 ms, respectively). The analysis also revealed a significant main effect of CSI (F(1.5, 43.2) = 211.1, p < 0.001, ges = 0.1). However, no significant congruency × emotion × facial orientation interaction was observed (F(1.56, 43.6) = 0.5, p > 0.1). Further two-way ANOVAs revealed no significant emotion × congruency interaction for upright (F(2, 56) = 0.7, p > 0.1) or inverted faces (F(2, 56) = 3.1, p > 0.05; see Supplementary Materials S1).
Psychological distress and attention modulation
We conducted additional correlation analyses to examine the relationships between participants' psychological states and their attentional modulation. An "Attention Modulation Score" was calculated for each participant, which quantified the degree to which one emotion induced greater attention broadening relative to another. Specifically, we computed the difference in the incongruency effect across emotional pairs (e.g., happy vs. angry), which reflected differential susceptibility to flanker interference for each pair. Higher Attention Modulation Scores indicate greater attention broadening, while lower scores suggest attention narrowing. Detailed descriptions of this computation are provided in the Materials and Methods section.
Figure 3 displays scatterplots of DASS scores (x-axis) against Attention Modulation Scores (y-axis) for three emotion pairs: happy–neutral (3a), angry–neutral (3b), and happy–angry (3c). An outlier analysis based on scores averaged across the three emotion pairs revealed no significant outliers. For the happy–neutral pair, a significant negative correlation between stress and attention modulation was observed (r = −0.41, p = 0.03), suggesting that participants with higher stress exhibited reduced attention broadening when cued by happy faces. No significant correlations were found for the angry–neutral pair.
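The correlations reported here are standard Pearson coefficients; a minimal stdlib-only sketch (the score vectors below are hypothetical, not the study's data) is:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical vectors: DASS stress scores vs. Attention Modulation Scores
stress = [4, 8, 12, 16, 20]
ams = [0.05, 0.04, 0.02, 0.01, -0.01]
r = pearson_r(stress, ams)  # negative r: less broadening at higher stress
```

A negative r, as in the reported results, means that participants with higher stress scores tended to show smaller (less broadened) Attention Modulation Scores.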
For the happy–angry pair, we also identified a significant negative correlation between attention modulation and stress scores (r = −0.41, p = 0.03). The scatterplot for this pair (Fig. 3c) suggested potential outliers in the Attention Modulation Scores. A further outlier analysis for the specific emotion pairs identified a significant outlier (data point at 0.22) for the happy–angry pair but not for the other pairs. We therefore conducted a sensitivity analysis, excluding this data point, to assess the robustness of the findings (see Supplementary Figure S2). The reanalysis yielded significant correlations for both depression (r = −0.47) and stress (r = −0.41, p = 0.03). Together, these results suggest that elevated psychological distress, particularly high stress, is associated with weaker emotion-induced broadening of attention.
Discussion
The present study examined whether the affective valence of facial cues alters the spatial scope of selective attention in a way that impacts the subsequent processing of neutral target stimuli. Using a modified Eriksen Flanker task, we found that emotional facial cues modulate attentional focus by influencing the ability to suppress flanking stimuli. Specifically, positive facial cues broadened the spatial scope of attention, leading to greater susceptibility to flanker interference (i.e., a larger incongruency effect). In contrast, negative facial cues narrowed attentional focus, reducing susceptibility to interference (i.e., minimal or no incongruency effect). Notably, these valence-driven changes in attention disappeared when the emotional faces were inverted, suggesting that the observed effects were not simply due to perceptual differences in the images. Additionally, correlation analyses revealed that the magnitude of attentional modulation was negatively associated with participants' psychological distress. Participants who reported higher levels of stress or depression exhibited weaker attention broadening in response to emotional facial cues. This suggests that the typical emotion-driven changes in attentional scope may be disrupted in individuals experiencing high psychological distress.
The current findings address a gap in the literature by providing the first direct evidence linking emotional valence to attention scope modulation. Previous neuroimaging and psychophysics studies have indicated that emotion-induced changes in attentional breadth could be inferred from visual perception patterns or neural activity in V19,10,11. However, direct evidence of emotion's influence on selective attention has remained elusive. While some earlier studies have explored the influences of positive and negative emotions on attention using similar behavioral approaches, such as the visual search task or the Flanker paradigm, these studies often employed non-orthogonal stimulus designs in which the emotional cues and target stimuli were the same (e.g., schematic drawings of happy or sad faces)17,24. In such studies, it is difficult to determine whether the observed effects were due to emotional valence or the attentional demands involved in target selection.
There are several possible mechanisms underlying the attention scope modulation observed in our study. One useful framework is the Dual-Competition Model25, which posits that attentional resource allocation reflects an interaction between bottom-up perceptual competition and top-down cognitive control. In our modified Flanker task, different components may engage these two levels of competition. Positive and negative face cues may have influenced bottom-up perceptual competition by reducing (for positive cues) or increasing (for negative cues) competition for the central target stimulus, resulting in a broader or narrower attentional scope, respectively. Simultaneously, the cognitive demands of identifying the target shape while suppressing distracting flankers likely engaged top-down cognitive control, redirecting attention from the emotional stimuli to task-relevant goals. The patterns we observed may reflect the dynamic interaction between these bottom-up and top-down processes. Alternatively, our findings could be explained by a general attentional mechanism unrelated to emotional processing. For instance, exposure to negative cues might have caused longer disengagement or slower saccadic reaction times, making it harder to shift attention away from the cues. However, our analysis of reaction time (RT) data found no evidence of significant differences in engagement times across emotional categories (mean RTs: happy = 593.5 ms, neutral = 591.6 ms, angry = 590.6 ms; F(2, 58) = 0.65, p > 0.1; see Supplementary Figure S2). Moreover, if longer disengagement were occurring, we would expect a greater incongruency effect for negative faces than for positive or neutral faces, as prolonged engagement would likely lead to increased encoding of the peripheral flanking stimuli along with the target. This is the opposite of the observed pattern (a larger incongruency effect for positive than for negative emotions), suggesting that these alternative explanations are unlikely.
The present study also found that emotion-driven attention modulation varied with participants' psychological states. Specifically, participants with higher levels of stress and depression exhibited reduced attention modulation in response to emotional face cues. Notably, significant negative correlations were observed only for the happy–neutral emotion pair, not for the angry–neutral pair (Fig. 3). This suggests that the diminished attention modulation is primarily characterized by a weaker expansion of attentional focus in response to positive cues, possibly due to disrupted processing of positive emotional stimuli, such as that seen in anhedonia. Our findings are also consistent with a few other studies linking psychological distress to difficulties in expanding attentional scope26,27. For instance, in a task where participants were required to detect visual stimuli within one or two of four rectangles positioned around a central fixation point (left, right, up, down), highly anxious individuals took longer to complete the task when the rectangles were farther from the central fixation26. Given the limited number of studies on this topic, future research is necessary to better understand the relationships between psychological distress, emotional processing, and attention scope modulation.
We also examined the duration of the observed attention modulation effect by analyzing its impact on next-trial performance (n + 1). The analysis showed no significant emotion × congruency interaction for either upright or inverted faces (p-values > 0.05; see Supplementary Figure S3), suggesting that the emotion-induced attentional modulation did not carry over to the next trial (approximately 5.5–6 s later). Additionally, given the length of our experimental sessions (1.5–2 h; 16 blocks), we investigated whether the effects persisted across blocks or diminished due to fatigue. While accuracy did not significantly decline, a progressive improvement in response speed was observed, stabilizing around blocks 5–6 (F(3, 84) = 0.08, p > 0.1; Supplementary Figure S4), indicating an early learning or adaptation period. However, when the trials were divided into the first and second halves of the experiment, significant attention modulation effects were present in both halves (first half: F(2, 56) = 3.4, p = 0.02; second half: F(2, 56) = 56.0, p < 0.001). Together, these results suggest that the observed effects were robust over time and were not influenced by learning or fatigue.
Finally, the present study revealed that the attention modulation effect was consistent across all cue-stimulus intervals (125, 250, 500 ms). Previous studies exploring attention scope modulation in perceptual processing9,10 used only short intervals (250 ms), leaving it unclear whether the observed effects reflected solely bottom-up processes or also involved top-down mechanisms. We speculate that the observed effects arose from the interaction of bottom-up perceptual competition (driven by the emotional face cue) and top-down cognitive control (required for target identification amidst distractors). This interplay likely explains why modulation occurred across all time delays. Alternatively, the present results may be influenced by inter-subject variability in sensitivity to timing, with some participants experiencing faster or slower impacts of emotional valence on attention. Data from the current study are insufficient to determine why the effects appeared generalized across time intervals. Future research would benefit from an in-depth exploration of the complex temporal dynamics of emotion-induced attention scope modulation.
Conclusion
In conclusion, the present study provides valuable insights into how emotional valence modulates the spatial scope of visual selective attention. Using a modified Eriksen Flanker task, we found that positive emotions, such as happiness, broaden attentional focus and increase susceptibility to distractors, while negative emotions, like anger, narrow attentional focus and reduce interference. Importantly, these emotion-driven changes were moderated by participants' levels of psychological distress, with those experiencing higher stress or depression exhibiting weaker attention modulation in response to emotional cues. These findings underscore the complex interaction between emotion, attention, and psychological well-being, highlighting the potential utility of attentional scope modulation as a behavioral marker for psychological distress.
Data availability
The data and source codes will be available on the Open Data Framework once the paper is accepted and published.
References
Frischen, A., Eastwood, J. D. & Smilek, D. Visual search for faces with emotional expressions. Psychol. Bull. 134 (5), 662 (2008).
Öhman, A., Flykt, A. & Esteves, F. Emotion drives attention: detecting the snake in the grass. J. Exp. Psychol. Gen. 130 (3), 466 (2001).
Hansen, C. H. & Hansen, R. D. Finding the face in the crowd: an anger superiority effect. J. Personal. Soc. Psychol. 54 (6), 917 (1988).
Vuilleumier, P. How brains beware: neural mechanisms of emotional attention. Trends Cogn. Sci. 9 (12), 585–594 (2005).
Anderson, A. K. & Phelps, E. A. Lesions of the human amygdala impair enhanced perception of emotionally salient events. Nature 411 (6835), 305–309 (2001).
Eastwood, J. D., Smilek, D. & Merikle, P. M. Negative facial expression captures attention and disrupts performance. Percept. Psychophys. 65 (3), 352–358 (2003).
Fox, E., Russo, R., Bowles, R. & Dutton, K. Do threatening stimuli draw or hold visual attention in subclinical anxiety? J. Exp. Psychol. Gen. 130 (4), 681 (2001).
Raymond, J. E., Shapiro, K. L. & Arnell, K. M. Temporary suppression of visual processing in an RSVP task: an attentional blink? J. Exp. Psychol. Hum. Percept. Perform. 18 (3), 849 (1992).
Zhang, X., Japee, S., Safiullah, Z., Mlynaryk, N. & Ungerleider, L. G. A normalization framework for emotional attention. PLoS Biol. 14 (11), e1002578 (2016).
Phelps, E. A., Ling, S. & Carrasco, M. Emotion facilitates perception and potentiates the perceptual benefits of attention. Psychol. Sci. 17 (4), 292–299 (2006).
Schmitz, T. W., De Rosa, E. & Anderson, A. K. Opposing influences of affective state valence on visual cortical encoding. J. Neurosci. 29 (22), 7199–7207 (2009).
Reynolds, J. H. & Heeger, D. J. The normalization model of attention. Neuron 61 (2), 168–185 (2009).
Itthipuripat, S., Garcia, J. O., Rungratsameetaweemana, N., Sprague, T. C. & Serences, J. T. Changing the spatial scope of attention alters patterns of neural gain in human cortex. J. Neurosci. 34 (1), 112–123 (2014).
MacLeod, C., Mathews, A. & Tata, P. Attentional bias in emotional disorders. J. Abnorm. Psychol. 95 (1), 15 (1986).
Bar-Haim, Y., Lamy, D., Pergamin, L., Bakermans-Kranenburg, M. J. & Van Ijzendoorn, M. H. Threat-related attentional bias in anxious and nonanxious individuals: a meta-analytic study. Psychol. Bull. 133 (1), 1 (2007).
Cisler, J. M. & Olatunji, B. O. Components of attentional biases in contamination fear: evidence for difficulty in disengagement. Behav. Res. Ther. 48 (1), 74–78 (2010).
Fenske, M. J. & Eastwood, J. D. Modulation of focused attention by faces expressing emotion: evidence from flanker tasks. Emotion 3 (4), 327 (2003).
R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing (2013).
Tottenham, N. et al. The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Res. 168 (3), 242–249 (2009).
Lovibond, P. F. & Lovibond, S. H. The structure of negative emotional states: comparison of the Depression Anxiety Stress Scales (DASS) with the Beck Depression and Anxiety Inventories. Behav. Res. Ther. 33 (3), 335–343 (1995).
Oei, T. P., Sawang, S., Goh, Y. W. & Mukhtar, F. Using the Depression Anxiety Stress Scale 21 (DASS-21) across cultures. Int. J. Psychol. 48 (6), 1018–1029 (2013).
Wittayapun, Y. et al. Validation of Depression, Anxiety, and Stress Scales (DASS-21) among Thai nursing students in an online learning environment during the COVID-19 outbreak: a multi-center study. PLoS One 18 (6), e0288041 (2023).
Brainard, D. H. The psychophysics toolbox. Spat. Vis. 10 (4), 433–436 (1997).
Rowe, G., Hirsh, J. B. & Anderson, A. K. Positive affect increases the breadth of attentional selection. Proc. Natl. Acad. Sci. 104 (1), 383–388 (2007).
Pessoa, L. How do emotion and motivation direct executive control? Trends Cogn. Sci. 13 (4), 160–166 (2009).
Najmi, S., Kuckertz, J. M. & Amir, N. Attentional impairment in anxiety: inefficiency in expanding the scope of attention. Depress. Anxiety 29 (3), 243–249. https://doi.org/10.1002/da.20900 (2012).
Yoon, K. L., Vidaurri, D. N., Joormann, J. & De Raedt, R. Social anxiety and narrowed attentional breadth toward faces. Emotion 15 (6), 682 (2015).
Acknowledgements
We thank the members of the Neuroscience Center for Research and Innovation (NX) at KMUTT for their helpful feedback and assistance on this work. Special thanks to Kitnipat Boonyadhammakul for allowing us to use his photograph as an example in our manuscript.
Funding
This work was funded by the IBRO Rising Stars Awards to TC, the Young Researcher Grant at King Mongkut's University of Technology Thonburi (KMUTT) to TC, the research grant from the Research & Innovation for Sustainability Center, Magnolia Quality Development Corporation Limited, Thailand, as well as the KMUTT's Frontier Research Unit Grant for Neuroscience Center for Research and Innovation to TC and SI. This project was also supported by the National Research Council of Thailand grant (fiscal year 2021–2024) to SI as well as the Thailand Science Research and Innovation (TSRI) Basic Research Fund: fiscal year 2023 under project number FRB660073/0164, fiscal year 2022 under project number FRB650048/0164, fiscal year 2021 under project number FRB640008 and fiscal year 2020 under project number 62W1501 to SI.
Author information
Authors and Affiliations
Contributions
T.C., S.P., S.In., and S.It. conceived and designed the experiments. P.K. performed the data collection and analysis under the supervision of T.C. and S.It. The manuscript was written by T.C. and S.It., and revised with input from all authors. S.P., S.In., and S.It. reviewed, edited, and approved the final version for submission.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Chaisilprungraung, T., Kaewbuapan, P., Intrachooto, S. et al. The impact of emotional valence on the spatial scope of visual selective attention. Sci Rep 14, 30231 (2024). https://doi.org/10.1038/s41598-024-80666-x
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41598-024-80666-x