Introduction

In a world overflowing with sensory stimuli, the brain’s ability to prioritize relevant information while filtering out distractions is essential for efficient processing and decision-making. One critical factor influencing selective attention is the emotional content of sensory stimuli. Research consistently shows that stimuli evoking strong negative emotions—such as snakes, spiders, or angry faces—capture attention more effectively than neutral or positive stimuli1,2,3. This heightened attentional capture reflects an adaptive mechanism that directs resources toward potential threats, enabling rapid responses crucial for survival2,4.

More recent studies have explored the possibility that both positive and negative emotions influence the spatial scope of attention5,6,7,8,9. Positive emotions are hypothesized to broaden attentional focus and enhance the ability to detect peripheral details. For instance, when walking down a busy street, positive emotions might increase awareness of surrounding events, such as a barking dog or an approaching cyclist. In contrast, negative emotions are thought to narrow attentional focus, drawing attention to immediate, salient details. The so-called “weapon-focus effect” exemplifies how attention can become concentrated on a perceived threat at the expense of peripheral information (e.g., focusing exclusively on the barking dog). These emotion-driven shifts in attention can have significant cognitive and practical implications. By broadening attention, positive emotions may enhance learning and improve the ability to monitor or detect changes in the environment. Conversely, negative emotions, by narrowing attention, could limit situational awareness and impair performance in dynamic or unpredictable settings.

While emotional saliency has consistently been shown to influence attentional capture across various paradigms (e.g., visual search, attentional blink, spatial cueing) and stimulus categories (e.g., faces, words, sounds)5,6,7,8, the specific role of emotional valence in modulating the spatial scope of visual attention remains less understood. Neuroimaging and psychophysics research has revealed that the impacts of emotion on attentional scope can be detected at early perceptual stages9,10,11. For example, an fMRI study found that the strength of V1 responses to unattended peripheral stimuli was modulated by the emotional expression of a central target face. Happy faces elicited stronger neural responses to nearby unattended stimuli compared to angry faces, suggesting that emotions influence early perceptual encoding through top-down feedback9. Similarly, psychophysics experiments have demonstrated that exposure to valenced faces alters visual contrast perception, which, under the Normalization Model of Attention12,13, could be interpreted as indicating a broadening or narrowing of attentional field size9,10. Nevertheless, to date, there is little direct evidence linking emotional valence to attention scope modulation.

This study aims to achieve two objectives. First, we investigate whether emotional face cues modulate attentional focus using a modified flanker task. The Eriksen flanker paradigm is well suited to measuring selective attention because it directly assesses participants’ ability to attend to a target while ignoring competing distractors. Participants viewed an emotional face cue (happy, angry, or neutral) randomly positioned on the screen and were subsequently asked to identify the shape of a neutral target (bowtie or diamond) appearing at the same location as the face cue (Fig. 1). We hypothesized that happy faces would broaden the spatial scope of visual attention, weakening the ability to suppress the flanking stimuli and producing a larger incongruency effect. In contrast, we hypothesized that angry faces would restrict attentional focus, increasing the ability to ignore the distractors and thereby producing a smaller incongruency effect. The second objective is to explore whether emotional modulation of attention varies with participants’ psychological states. Research indicates that individuals with high psychological distress, such as anxiety or depression, show attentional biases toward negative stimuli14,15,16. We examined whether emotional cues modulate attention differently in individuals with self-reported high distress, hypothesizing that higher distress is associated with weaker emotion-driven modulation.

Materials and methods

Participants

Participants were 30 native Thai undergraduates (mean age = 20.97 years; 16 females; 1 left-handed) from King Mongkut’s University of Technology Thonburi (KMUTT). All participants had normal or corrected-to-normal vision and received financial compensation for participation. Written informed consent was obtained from all participants prior to the experiment. The study was approved by the Institutional Review Board (IRB) of King Mongkut’s University of Technology Thonburi, and all methods were performed in accordance with the relevant guidelines and regulations.

The number of participants was estimated based on a previous study that used a behavioral Eriksen flanker task to assess the impact of emotion on attention17. Since the effect size was not directly reported in that study, ω² was estimated from the reported ANOVA outcome and converted to Cohen’s f to facilitate the power analysis. The estimated effect size (Cohen’s f) for Experiment 1A in their study was 0.81. Consequently, a minimum sample size of 20 participants was required to achieve a statistical power of 0.95 at a significance level of 0.05 (‘pwr’ package; R software18). To ensure robustness and account for potential variability in participants’ responses, we opted to recruit 30 participants. Data collection was not influenced by interim analyses or any post-hoc adjustments.
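For concreteness, the estimation can be sketched in R as follows. This is a minimal reconstruction, not the original script: the ω² value is back-computed from the reported Cohen’s f, and the one-way ANOVA approximation in `pwr.anova.test` is an assumption, so the resulting sample size may differ slightly from the figure quoted above.

```r
# Minimal sketch of the power analysis (assumed reconstruction).
library(pwr)

omega_sq <- 0.396                     # implied by f = 0.81 via f^2 = w2 / (1 - w2)
f <- sqrt(omega_sq / (1 - omega_sq))  # Cohen's f, approximately 0.81

# One-way ANOVA approximation with k = 3 emotion conditions (assumed design);
# the output reports the required sample size per group.
pwr.anova.test(k = 3, f = f, sig.level = 0.05, power = 0.95)
```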

Stimuli

Emotional cues consisted of seventy-two face images selected from the NimStim Face Database19. To ensure that these images elicited the desired emotional reactions from our Thai participants, a separate mini-experiment was conducted. In this experiment, an independent group of 30 Thai observers (mean age: 20.20; 14 females; 1 left-handed) who did not take part in the main experiment rated the emotional content of each image in the NimStim database. Three types of ratings were acquired: (1) emotional valence, the perceived positivity or negativity of each facial expression, on a 0–9 Likert scale; (2) emotional intensity, the degree of emotion expressed in the image, on a 0–9 Likert scale; and (3) emotional category, the interpreted emotional content of each image (i.e., happy, sad, fearful, angry, surprised, disgusted, calm, or neutral). Data from this mini-experiment were used to screen face images for our study. Specifically, images were selected to meet three criteria: (1) each selected image had a mean categorization score above 70%; (2) the mean categorization accuracies did not statistically differ across the selected categories (90.83%, 87.78%, and 90.00% for happy, angry, and neutral, respectively); and (3) the emotional intensity ratings were statistically comparable across the happy and angry images (6.72 and 6.93, respectively). Given the cultural differences between our participants and those in the original NimStim validation study19, we did not use the original ratings for image selection.
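A minimal sketch of this screening in R is given below. The data frame `ratings` and its column names are hypothetical stand-ins for the mini-experiment data (one row per observer × image), and the comparability checks shown here (a one-way ANOVA and a Welch t-test) are plausible choices rather than the exact tests used.

```r
# Hypothetical screening of the rating data; all names are illustrative.
library(dplyr)

candidates <- ratings %>%
  group_by(image, category) %>%
  summarise(cat_acc   = mean(correct_category),   # proportion of correct labels
            intensity = mean(intensity_rating),
            .groups   = "drop") %>%
  filter(cat_acc > 0.70)                          # criterion 1: > 70% agreement

# Criterion 2: categorization accuracies comparable across categories
summary(aov(cat_acc ~ category, data = candidates))

# Criterion 3: intensity comparable between happy and angry images
ha <- subset(candidates, category %in% c("happy", "angry"))
t.test(intensity ~ category, data = droplevels(ha))
```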

From the total of 673 images in the original database, 72 face images were selected from 24 actors, each expressing happy, angry, and neutral emotions (i.e., 3 images per actor). These images were taken from 11 female and 13 male actors, representing the following ethnic backgrounds: 16 Europeans, 4 Asians, 3 Africans, and 1 Latino-American. To maximize the efficacy of the cues in evoking the desired emotional responses, happy and angry expressions were chosen to represent positive and negative emotions, respectively, because these categories yielded the highest categorization accuracies in the mini-experiment. Only closed-mouth face images were chosen because, as noted by the original authors19, open-mouth images are perceptually distinct owing to the visibility of teeth and the shape of the mouth. Each image was converted to grayscale and manually cropped to remove hair, ears, and other non-essential details prior to the experiment.

Experimental Procedure

Each participant was seated in a comfortable chair approximately 57 cm from the computer screen. In each trial, participants viewed a 2000-ms central fixation, followed by a 75-ms face cue (width: 4.4˚; height: 6.4˚) that appeared at one of twelve possible locations around the central fixation (eccentricity: 9.8˚; see Fig. 1). The cue and stimuli were presented at slightly different eccentricities to prevent potential masking effects. After a randomly varied cue-stimulus interval (CSI; 125, 250, or 500 ms), 12 basic shape stimuli appeared on the screen for 1,550 ms (eccentricity: 10.6˚; diameter: 3.6˚). Participants were instructed to discriminate the shape of the target (bowtie vs. diamond) while ignoring the surrounding distractors. Stimulus congruency was manipulated by altering the shapes of the two flanking distractors on either side of the target. Participants were given three seconds to respond by pressing ‘j’ or ‘k’ with their right index and middle fingers, respectively. Visual feedback then indicated whether the response was correct, incorrect, or too slow (i.e., longer than 3,000 ms). The next trial began after an inter-trial interval of 1,000 ± 50 ms.

Fig. 1

Experimental design. Participants were required to discriminate the shape of the target stimulus (depicted in orange) while ignoring the flanking distractors (depicted in grey). Critical experimental manipulations included altering the emotional category of the face cue (angry, happy, or neutral) and stimulus congruency (congruent or incongruent). Time delays between cue and stimulus onset (cue-stimulus interval, or CSI) were randomized across trials (125, 250, or 500 ms). The colors orange and grey are used for illustrative purposes only.

The trial structure was designed to achieve an even distribution of stimulus conditions. Images depicting three facial emotions (happy, angry, neutral) were presented in both upright and inverted orientations at one of twelve possible screen locations. These stimulus conditions were paired with three cue-stimulus intervals (125, 250, and 500 ms) and two congruency conditions (congruent, incongruent), creating a total of 432 stimulus combinations (3 emotions × 2 orientations × 12 locations × 3 CSIs × 2 congruency conditions). For each participant, these combinations were duplicated to create a total of 864 trials. The trial order was randomly shuffled and distributed across 16 blocks of 54 trials each. The entire experiment lasted approximately 2.5 h.
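The factorial structure can be verified with a short R sketch; the condition labels below are illustrative, but the counts follow directly from the design described above.

```r
# Full factorial trial structure: 3 x 2 x 12 x 3 x 2 = 432 combinations.
design <- expand.grid(
  emotion     = c("happy", "angry", "neutral"),
  orientation = c("upright", "inverted"),
  location    = 1:12,
  csi_ms      = c(125, 250, 500),
  congruency  = c("congruent", "incongruent")
)
nrow(design)                                        # 432

trials <- design[rep(seq_len(nrow(design)), 2), ]   # duplicate -> 864 trials
trials <- trials[sample(nrow(trials)), ]            # shuffle trial order
trials$block <- rep(1:16, each = 54)                # 16 blocks x 54 trials
```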

Each participant completed a brief practice session prior to the experiment. Additionally, the short version of the Depression, Anxiety, and Stress Scale (DASS-21)20 was administered to assess current and recent levels of psychological distress. Participants rated their experiences of negative affect over the previous week on a 4-point scale. The questionnaire consisted of 21 items, evenly divided into three subscales (depression, anxiety, and stress; 7 items per subscale). We used the Thai translation of the questionnaire, which has been extensively validated in previous studies21,22. Instructions were given in Thai throughout the experiment.

Stimulus presentation was carried out in a dark room on a 24-inch LG monitor with a 144 Hz refresh rate. MATLAB (R2020b) with the Psychophysics Toolbox-3 package23 was used to execute the display sequences and collect response data.

Data analysis

Data from participants with average accuracy scores more than three standard deviations below the mean were excluded, resulting in the removal of one participant. The final dataset included twenty-nine participants (mean age = 21.03 years; 15 females; 1 left-handed; total trials = 25,056). Planned analyses used repeated-measures analysis of variance (ANOVA) implemented in R18. An omnibus four-way ANOVA was first performed to analyze the effects of facial emotion (happy, angry, neutral), facial orientation (upright, inverted), stimulus congruency (congruent, incongruent), and cue-stimulus interval (CSI; 125, 250, and 500 ms) on trial accuracy. Following the identification of a significant three-way interaction (emotion × congruency × orientation), separate two-way ANOVAs were conducted for upright and inverted faces to probe the nature of the interaction between facial emotion and stimulus congruency. Simple main effects tests with Bonferroni-corrected p-values were then performed to examine the direction of the attention modulation effect for each emotional category. Effect sizes were calculated using generalized eta squared (ges), which quantifies the proportion of total variance in the dependent variable attributable to each factor.
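As an illustration, the omnibus ANOVA can be run with the afex package, which reports generalized eta squared by default. This is a sketch under assumed names: `trial_data` and its columns are hypothetical stand-ins for the long-format dataset, and afex is one of several R packages that could have been used for this analysis.

```r
# Minimal sketch of the omnibus repeated-measures ANOVA (names illustrative).
library(afex)

fit <- aov_ez(
  id     = "subject",                                       # participant identifier
  dv     = "accuracy",                                      # trial accuracy (0/1)
  data   = trial_data,                                      # hypothetical long-format data
  within = c("emotion", "orientation", "congruency", "csi")
)
fit   # ANOVA table with F, p, and generalized eta squared ("ges")
```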

Attention modulation score

To explore how depression, anxiety, and stress impact attention modulation, we conducted a correlation analysis using the DASS scores and a metric called the “Attention Modulation Score.” This score quantifies emotion-driven changes in attentional focus by measuring the difference in the incongruency effect (i.e., susceptibility to flanker interference) between emotional pairs (e.g., happy vs. angry).

For each participant, we first calculated the difference in mean accuracy between congruent and incongruent trials (congruent − incongruent) for each emotional category and facial orientation. This produced six incongruency effects corresponding to the six possible combinations (i.e., happy_upright, happy_inverted, angry_upright, angry_inverted, neutral_upright, and neutral_inverted). To isolate effects specific to upright faces, we subtracted incongruency effects across orientations (upright − inverted), yielding normalized incongruency effects for happy, angry, and neutral emotions.

The Attention Modulation Score was then computed by subtracting normalized incongruency effects between specific emotion pairs: happy – neutral, angry – neutral, and happy – angry. Each emotion pair reflects a different aspect of attentional modulation. Specifically, the happy – neutral and angry – neutral comparisons indicate the degree to which happy and angry emotions increased susceptibility to flanker interference (i.e., attention broadening) relative to the neutral baseline. Positive values reflect a broadening of attention compared to baseline, while negative values indicate a narrowing of attention. The happy – angry comparison reflects the extent to which happy faces induced a broader attentional scope compared to angry faces. Finally, correlation analyses were conducted to explore the relationship between the Attention Modulation Score and the depression, anxiety, and stress subscales of the DASS questionnaire.
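The full computation, together with the subsequent correlation analysis, can be sketched in R as follows. The data frames `trial_data` and `dass` and their column names are hypothetical; the arithmetic mirrors the steps described above.

```r
# Sketch of the Attention Modulation Score computation (names illustrative).
library(dplyr)
library(tidyr)

ams <- trial_data %>%
  group_by(subject, emotion, orientation, congruency) %>%
  summarise(acc = mean(correct), .groups = "drop") %>%
  pivot_wider(names_from = congruency, values_from = acc) %>%
  mutate(incongruency = congruent - incongruent) %>%        # flanker interference
  select(subject, emotion, orientation, incongruency) %>%
  pivot_wider(names_from = orientation, values_from = incongruency) %>%
  mutate(normalized = upright - inverted) %>%               # isolate upright-specific effects
  select(subject, emotion, normalized) %>%
  pivot_wider(names_from = emotion, values_from = normalized) %>%
  mutate(happy_vs_neutral = happy - neutral,                # Attention Modulation Scores
         angry_vs_neutral = angry - neutral,
         happy_vs_angry   = happy - angry)

# Correlate each score with the DASS-21 subscales (`dass` is hypothetical)
scores <- merge(ams, dass, by = "subject")
cor.test(scores$happy_vs_angry, scores$stress)
```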

Results

The mean accuracy score (and standard deviation) in the Flanker task was 96.28% (0.19%). The average scores (SDs) from the DASS-21 questionnaire were 9.10 (6.90) on the depression scale, 12.14 (8.93) on the anxiety scale, and 13.10 (7.63) on the stress scale. According to the standard DASS scoring guidelines20, these scores suggest a normal level of depression, a moderate level of anxiety, and a normal level of stress among our subject population.

Valence-Induced attention modulation

A four-way repeated-measures ANOVA was performed to investigate the impact of facial emotion (happy, angry, neutral), facial orientation (upright, inverted), stimulus congruency (congruent, incongruent), and cue-stimulus interval (CSI; 125, 250, and 500 ms) on trial accuracy. As expected, the analysis yielded a significant main effect of stimulus congruency (F(1, 28) = 35.81, p < 0.001, ges = 0.07), with incongruent trials eliciting lower average accuracy than congruent trials. Importantly, a significant three-way interaction was also observed among stimulus congruency, facial emotion, and facial orientation (F(2, 56) = 5.36, p < 0.001, ges = 0.007). This interaction suggests that the ability to selectively attend to the target stimulus varied depending on the emotional valence and orientation of the preceding face cue. The analysis revealed no significant main effects of emotion, orientation, or CSI on trial accuracy (p > 0.1), and no significant interactions between CSI and the other experimental variables (p > 0.1). Consequently, subsequent analyses combined data across cue-stimulus intervals.

To interpret the three-way interaction, data from upright and inverted face trials were submitted to separate repeated-measures ANOVAs with facial emotion and stimulus congruency as within-subjects variables. For upright faces, the analysis revealed a significant interaction between facial emotion and stimulus congruency (F(2, 56) = 6.45, p < 0.001, ges = 0.02; see Fig. 2), suggesting that upright emotional face cues indeed alter the spatial scope of visual selective attention. For inverted faces, no significant interaction between emotion and congruency was observed (F(2, 56) = 0.30, p > 0.1, ges = 0.0008). Thus, emotion-dependent attention modulation was evident only when the face cues were presented in their canonical orientation, in which their semantic content and identities were preserved.

Fig. 2

Experiment Results. For upright faces, a significant interaction between facial emotions (angry, happy, neutral) and stimulus congruency (congruent, incongruent) was observed, suggesting that emotional valence modulates the spatial scope of visual selective attention. No interaction was observed for inverted faces, indicating that the effect cannot be attributed to perceptual dissimilarities across image categories. Error bars represent the standard error of the mean (SEM). Asterisks (*, **, ***) indicate statistically significant results from simple main effect tests, with p-values < 0.05, < 0.01, and < 0.001, respectively. Due to restrictions on the public display of original images from the NimStim dataset, the face images shown in this figure were provided by a laboratory member serving as a model and were not used in the actual experiment.

Post-hoc analyses were conducted to examine the direction of the incongruency effect across facial emotions. For positive faces displayed in the upright orientation, simple main effects tests revealed higher accuracy on congruent trials than on incongruent trials (mean difference = 4.0%, p_Bonf < 0.001, ges = 0.22). However, this accuracy difference disappeared for trials with upright negative faces (mean difference = 0.9%, p_Bonf > 0.1, ges = 0.03). As expected, a significant incongruency effect was observed for upright neutral faces (mean difference = 2.5%, p_Bonf < 0.01, ges = 0.14). For faces presented in the inverted orientation, a significant incongruency effect was observed regardless of emotion type: positive (mean difference = 1.9%, p_Bonf = 0.02, ges = 0.09), negative (mean difference = 2.4%, p_Bonf = 0.01, ges = 0.13), and neutral (mean difference = 2.4%, p_Bonf < 0.01, ges = 0.12). Together, these results suggest that the observed valence-induced attention modulation occurred in the expected direction, with a broadening of attention scope in response to positive faces and a narrowing in response to negative faces.
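For reference, these simple main effects tests can be sketched as paired comparisons of congruent versus incongruent accuracy within each emotion × orientation cell (with only two levels, a paired t-test is equivalent to the corresponding F test). The data frame `subj_means` is a hypothetical table of per-subject cell means, sorted by subject within each condition.

```r
# Sketch of the Bonferroni-corrected simple main effects tests (names illustrative).
pvals <- by(subj_means,
            list(subj_means$emotion, subj_means$orientation),
            function(d) {
              t.test(d$acc[d$congruency == "congruent"],
                     d$acc[d$congruency == "incongruent"],
                     paired = TRUE)$p.value
            })
p.adjust(unlist(pvals), method = "bonferroni")   # six corrected p-values
```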

For the RT data, a four-way repeated-measures ANOVA revealed a significant main effect of stimulus congruency on response time in correct trials (F(1, 28) = 59.3, p < 0.001, ges = 0.02), indicating faster performance on congruent than incongruent trials (mean RT = 568.9 ms and 591.1 ms, respectively). The analysis also revealed a significant main effect of CSI (F(1.5, 43.2) = 211.1, p < 0.001, ges = 0.1). However, no significant congruency × emotion × facial orientation interaction was observed (F(1.56, 43.6) = 0.5, p > 0.1). Follow-up ANOVAs conducted separately for each orientation revealed no significant emotion × congruency interaction for upright (F(2, 56) = 0.7, p > 0.1) or inverted faces (F(2, 56) = 3.1, p > 0.05; see Supplementary Materials S1).

Psychological distress and attention modulation

We conducted additional correlation analyses to examine the relationships between participants’ psychological states and their attentional modulation. An “Attention Modulation Score” was calculated for each participant, which quantified the degree to which one emotion induced greater attention broadening relative to another. Specifically, we computed the difference in the incongruency effect across emotional pairs (e.g., happy vs. angry), which reflected differential susceptibility to flanker interference for each pair. Higher Attention Modulation Scores indicate greater attention broadening, while lower scores suggest attention narrowing. Detailed descriptions of this computation are provided in the Materials and Methods section.

Figure 3 displays scatterplots of DASS scores (x-axis) against Attention Modulation Scores (y-axis) for three emotion pairs: happy–neutral (3a), angry–neutral (3b), and happy–angry (3c). An outlier analysis based on scores averaged across the three emotion pairs revealed no significant outliers. For the happy–neutral pair, a significant negative correlation between stress and attention modulation was observed (r = -0.41, p = 0.03), suggesting that participants with higher stress exhibited reduced attention broadening when cued by happy faces. No significant correlations were found for the angry–neutral pair.

For the happy–angry pair, we identified a significant negative correlation between attention modulation and stress scores (r = -0.41, p = 0.03). The scatterplot for this pair (Fig. 3c) suggested potential outliers in the Attention Modulation Scores. A further outlier analysis for specific emotion pairs identified a significant outlier (data point at 0.22) for the happy–angry pair but not for the other pairs. We therefore conducted a sensitivity analysis, excluding this data point, to assess the robustness of the findings (see Supplementary Figure S2). The reanalysis yielded significant correlations for both depression (r = -0.47) and stress (r = -0.41, p = 0.03). Together, these results suggest that elevated psychological distress, particularly high stress, is associated with weaker emotion-induced broadening of attention.
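A sketch of this re-analysis is shown below, reusing the `scores` data frame from the sketch in the Materials and Methods section. The exact outlier criterion is not specified here, so a simple |z| > 3 rule stands in for it.

```r
# Hypothetical sensitivity re-analysis; the outlier rule is an assumption.
z <- scale(scores$happy_vs_angry)                # standardize the modulation scores
keep <- abs(z) < 3                               # drop extreme values
cor.test(scores$happy_vs_angry[keep], scores$stress[keep])
```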

Fig. 3

Scatterplots of Depression, Anxiety, and Stress Scale (DASS) scores against Attention Modulation Scores for each emotion pair: (a) happy–neutral, (b) angry–neutral, and (c) happy–angry.

Discussion

The present study examined whether the affective valence of facial cues alters the spatial scope of selective attention in a way that affects the subsequent processing of neutral target stimuli. Using a modified Eriksen flanker task, we found that emotional facial cues modulate attentional focus by influencing the ability to suppress flanking stimuli. Specifically, positive facial cues broadened the spatial scope of attention, leading to greater susceptibility to flanker interference (i.e., a larger incongruency effect). In contrast, negative facial cues narrowed attentional focus, reducing susceptibility to interference (i.e., minimal or no incongruency effect). Notably, these valence-driven changes in attention disappeared when the emotional faces were inverted, suggesting that the observed effects were not simply due to perceptual differences between the images. Additionally, correlation analyses revealed that the magnitude of attentional modulation was negatively associated with participants’ psychological distress: participants who reported higher levels of stress or depression exhibited weaker attention broadening in response to emotional facial cues. This suggests that the typical emotion-driven changes in attentional scope may be disrupted in individuals experiencing high psychological distress.

The current findings address a gap in the literature by providing the first direct evidence linking emotional valence to attention scope modulation. Previous neuroimaging and psychophysics studies have indicated that emotion-induced changes in attentional breadth could be inferred from visual perception patterns or neural activity in V19,10,11. However, direct evidence of emotion’s influence on selective attention has remained elusive. While some earlier studies have explored the influences of positive and negative emotions on attention using similar behavioral approaches, such as the visual search task or the Flanker paradigm, these studies often employed non-orthogonal stimulus designs, where emotional cues and target stimuli were the same (e.g., schematic drawings of happy or sad faces)17,24. In such studies, it is difficult to determine whether the observed effects were due to emotional valence or the attention demands involved in target selection.

There are several possible mechanisms underlying the occurrence of attention scope modulation in our study. One useful framework is provided by the Dual-Competition Model25, which posits that attentional resource allocation reflects an interaction between bottom-up perceptual competition and top-down cognitive control processes. In our modified Flanker task, different components may engage these two levels of competition. Positive and negative face cues may have influenced bottom-up perceptual competition by reducing (for positive cues) or increasing (for negative cues) competition for the central target stimulus, resulting in broader or narrower attentional scope, respectively. Simultaneously, the cognitive demands of identifying the target shape while suppressing distracting flankers likely engaged top-down cognitive control, redirecting attention from emotional stimuli to task-relevant goals. The patterns we observed may reflect the dynamic interaction between these bottom-up and top-down processes. Alternatively, our findings could be explained by a general attentional mechanism unrelated to emotional processing. For instance, exposure to negative cues might have caused longer disengagement or slower saccadic reaction times, making it harder to shift attention away from the cues. However, our analysis of reaction time (RT) data found no evidence of significant differences in engagement times across emotional categories (mean RTs: happy = 593.5 ms, neutral = 591.6 ms, angry = 590.6 ms; F(2,58) = 0.65, p > 0.1; see Supplementary Figure S2). Moreover, if longer disengagement were occurring, we would expect a greater incongruency effect for negative faces compared to positive or neutral faces, as prolonged engagement would likely lead to increased encoding of peripheral flanking stimuli along with the target. This is in contrast to the observed pattern (a larger incongruency effect for positive compared to negative emotions), suggesting that these alternative explanations are unlikely.

The present study also found that emotion-driven attention modulation varied with participants’ psychological states. Specifically, participants with higher levels of stress and depression exhibited reduced attention modulation in response to emotional face cues. Notably, significant negative correlations were observed only for the happy–neutral emotion pair, not for the angry–neutral pair (Fig. 3). This suggests that the diminished attention modulation is primarily characterized by a weaker expansion of attentional focus in response to positive cues, possibly due to disrupted processing of positive emotional stimuli, as seen in anhedonia. Our findings are also consistent with other studies linking psychological distress to difficulties in expanding attention scope26,27. For instance, in a task where participants were required to detect visual stimuli within one or two of four rectangles positioned around a central fixation point (left, right, up, down), highly anxious individuals took longer to complete the task when the rectangles were farther from the central fixation26. Given the limited number of studies on this topic, future research is necessary to better understand the relationships between psychological distress, emotional processing, and attention scope modulation.

We also examined the duration of the observed attention modulation effect by analyzing its impact on next-trial performance (n + 1). The analysis showed no significant emotion × congruency interaction for either upright or inverted faces (p-values > 0.05; see Supplementary Figure S3), suggesting that the emotion-induced attentional modulation did not carry over to the next trial (approximately 5.5–6 s later). Additionally, given the length of the experimental sessions (16 blocks), we investigated whether the effects persisted across blocks or diminished due to fatigue. Accuracy did not significantly decline across blocks (F(3, 84) = 0.08, p > 0.1; Supplementary Figure S4), while a progressive improvement in response speed was observed that stabilized around blocks 5–6, indicating an early learning or adaptation period. Moreover, when the trials were divided into the first and second halves of the experiment, significant attention modulation effects were present in both halves (first half: F(2, 56) = 3.4, p = 0.02; second half: F(2, 56) = 56.0, p < 0.001). Together, these results suggest that the observed effects were robust over time and were not driven by learning or fatigue.

Finally, the present study revealed that the attention modulation effect was consistent across all cue-stimulus intervals (125, 250, 500 ms). Previous studies exploring attention scope modulation in perceptual processing9,10 used only short intervals (250 ms), leaving it unclear whether the observed effects reflected solely bottom-up processes or also involved top-down mechanisms. We speculate that the observed effects arose from the interaction of bottom-up perceptual competition (driven by the emotional face cue) and top-down cognitive control (required for target identification amidst distractors). This interplay likely explains why modulation occurred across all time delays. Alternatively, the present results may be influenced by inter-subject variability in sensitivity to timing, with some participants experiencing faster or slower impacts of emotional valence on attention. Data from the current study are insufficient to determine why the effects appeared generalized across time intervals. Future research would benefit from an in-depth exploration of the temporal dynamics of emotion-induced attention scope modulation.

Conclusion

In conclusion, the present study provides valuable insights into how emotional valence modulates the spatial scope of visual selective attention. Using a modified Eriksen Flanker task, we found that positive emotions, such as happiness, broaden attentional focus and increase susceptibility to distractors, while negative emotions, like anger, narrow attentional focus and reduce interference. Importantly, these emotion-driven changes were moderated by participants’ levels of psychological distress, with those experiencing higher stress or depression exhibiting weaker attention modulation in response to emotional cues. These findings underscore the complex interaction between emotion, attention, and psychological well-being, highlighting the potential utility of attentional scope modulation as a behavioral marker for psychological distress.