Can viewing others experiencing stress create a “contagious” physiological stress response in the observer? To investigate second-hand stress, we first created a stimulus set of videos featuring participants speaking under minimal stress, under high stress, or while recovering from stress. We then recruited a second set of participants to watch these videos. All participants (speakers and observers) were monitored via electrocardiogram. Cardiac activity of the observers while watching the videos was then analyzed and compared to that of the speakers. Furthermore, we assessed dispositional levels of empathy in observers to determine how empathy might be related to the degree of stress contagion. Results revealed that, depending on the video being viewed, observers experienced differential changes in cardiac activity based on the speaker’s stress level. Additionally, this is the first demonstration that individuals high in dispositional empathy experience these physiological changes more quickly.
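The abstract leaves the cardiac measures unspecified; purely as an illustration of how heart rate can be derived from an ECG trace, here is a minimal Python sketch using R-peak detection (the sampling rate and detection thresholds are assumptions, not values from the study):

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_from_ecg(ecg, fs=250.0):
    """Estimate mean heart rate (BPM) from a raw ECG trace.

    ecg: 1-D array of ECG samples; fs: sampling rate in Hz.
    Threshold and refractory period are illustrative, not from the study.
    """
    # Detect R-peaks: tall, well-separated maxima of the QRS complex.
    # The 0.4 s minimum spacing caps detection at 150 beats per minute.
    peaks, _ = find_peaks(
        ecg,
        height=np.mean(ecg) + 2 * np.std(ecg),
        distance=int(0.4 * fs),
    )
    rr = np.diff(peaks) / fs        # inter-beat (R-R) intervals, seconds
    return 60.0 / np.mean(rr)       # mean heart rate in BPM
```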
How eye movements reflect underlying cognitive processes during scene viewing has been a topic of considerable theoretical interest. In this study, we used eye-movement features and their distributions over time to successfully classify mental states as indexed by the behavioral task performed by participants. We recorded eye movements from 72 participants performing 3 scene-viewing tasks: visual search, scene memorization, and aesthetic preference. To classify these tasks, we used statistical features (mean, standard deviation, and skewness) of fixation durations and saccade amplitudes, as well as the total number of fixations. The same set of visual stimuli was used in all tasks to exclude the possibility that different salient scene features influenced eye movements across tasks. All of the tested classification algorithms were successful in predicting the task within a single participant. The linear discriminant algorithm was also successful in predicting the task for each participant when the training data came from other participants, suggesting some generalizability across participants. The number of fixations contributed most to task classification; however, the remaining features and, in particular, their covariance provided important task-specific information. These results provide evidence on how participants perform different visual tasks. In the visual search task, for example, participants exhibited more variance and skewness in fixation durations and saccade amplitudes, but also showed heightened correlation between fixation durations and the variance in fixation durations. In summary, these results point to the possibility that eye-movement features and their distributional properties can be used to classify mental states both within and across individuals.
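As a hedged sketch of the kind of pipeline this abstract describes, the following Python code computes the listed summary statistics per trial and estimates across-participant task-classification accuracy with a linear discriminant classifier; the data layout and helper names are assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.stats import skew
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def trial_features(fix_durations, sacc_amplitudes):
    """The seven features named in the abstract: mean, SD, and skewness
    of fixation durations and saccade amplitudes, plus fixation count."""
    return [
        np.mean(fix_durations), np.std(fix_durations), skew(fix_durations),
        np.mean(sacc_amplitudes), np.std(sacc_amplitudes),
        skew(sacc_amplitudes), len(fix_durations),
    ]

def across_participant_accuracy(X, y, groups):
    """X: trials x 7 features; y: task label per trial; groups:
    participant ID per trial (all hypothetical layouts). Training on
    all-but-one participant and testing on the held-out participant
    probes the across-participant generalization reported above."""
    clf = LinearDiscriminantAnalysis()
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    return scores.mean()
```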
Real-world scenes contain low-level visual features (e.g., edges, colors) and high-level semantic features (e.g., objects and places). Traditional visual perception models assume that integration of low-level visual features and segmentation of the scene must occur before high-level semantics are perceived. This view implies that low-level visual features of a scene alone do not carry semantic information related to that scene. Here we present evidence that suggests otherwise. We show that high-level semantics can be preserved in low-level visual features, and that different high-level semantics can be preserved in different types of low-level visual features. Specifically, some semantic information is preserved better in edge features than in color features, whereas the converse is true for other semantic information. These findings suggest that semantic processing may start earlier than previously thought, and that integration of low-level visual features and segmentation of the scene may occur after semantic processing has begun, or in parallel with it.
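To make the edge-versus-color contrast concrete, here is a minimal sketch of extracting one edge-based and one color-based descriptor from an image; the parameter choices (Canny defaults, 16 hue bins) are illustrative assumptions rather than the paper's exact feature definitions:

```python
import numpy as np
from skimage import color, feature

def edge_and_color_features(rgb_image, hue_bins=16):
    """Split an image into an edge-based and a color-based descriptor."""
    gray = color.rgb2gray(rgb_image)
    edges = feature.canny(gray)          # boolean edge map
    edge_density = edges.mean()          # fraction of edge pixels
    hsv = color.rgb2hsv(rgb_image)
    # Normalized hue histogram as a simple color descriptor.
    hue_hist, _ = np.histogram(hsv[..., 0], bins=hue_bins,
                               range=(0.0, 1.0), density=True)
    return edge_density, hue_hist
```

A classifier trained on either descriptor alone could then test how well scene semantics are decoded from edge information versus color information.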
Previous research has shown that eye movements change depending on both the visual features of our environment and the viewer's top-down knowledge. An important open question is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes (search, memorization, and aesthetic judgment) while their eye movements were tracked. Canonical correlation analyses showed that eye movements were reliably more related to low-level visual features at fixations during the visual search task than during the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye movements across tasks. This modulation of the relationship between visual features and eye movements by task was also demonstrated with classification analyses, in which classifiers were trained to predict the viewing task from eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of the temporal and spatial properties of eye movements. When classifying across participants, edge density and saliency at fixations were as important as eye movements in the successful prediction of task, with entropy and hue also being significant but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye movements.
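A minimal sketch of the canonical correlation step, assuming per-fixation eye-movement measures in one matrix and visual features at the fixated locations in another (both layouts are assumptions, not the paper's exact design):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def eye_feature_coupling(X, Y, n_components=2):
    """X: fixations x eye-movement measures (e.g., fixation duration,
    incoming saccade amplitude); Y: fixations x visual features at the
    fixated locations (e.g., edge density, saliency, hue)."""
    cca = CCA(n_components=n_components)
    Xc, Yc = cca.fit_transform(X, Y)
    # One canonical correlation per component; higher values indicate
    # tighter coupling between eye movements and visual features, which
    # the abstract reports is strongest during visual search.
    return [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1]
            for i in range(n_components)]
```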
Studies have shown that natural environments can enhance health, and here we build upon that work by examining the associations between comprehensive greenspace metrics and health. We focused on a large urban population center (Toronto, Canada) and related the two domains by combining high-resolution satellite imagery and individual tree data from Toronto with questionnaire-based self-reports of general health perception, cardio-metabolic conditions, and mental illnesses from the Ontario Health Study. Results from multiple regressions and multivariate canonical correlation analyses suggest that people who live in neighborhoods with a higher density of trees on their streets report significantly higher health perception and significantly fewer cardio-metabolic conditions (controlling for socio-economic and demographic factors). We find that having 10 more trees in a city block, on average, improves health perception in ways comparable to an increase in annual personal income of $10,000 combined with moving to a neighborhood with $10,000 higher median income, or to being 7 years younger. We also find that having 11 more trees in a city block, on average, decreases cardio-metabolic conditions in ways comparable to an increase in annual personal income of $20,000 combined with moving to a neighborhood with $20,000 higher median income, or to being 1.4 years younger.
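The income-equivalence comparisons can be read as ratios of regression coefficients: the income change whose predicted effect on health matches that of a given number of extra trees. A hedged sketch with statsmodels, using hypothetical column names and far fewer covariates than the paper's models:

```python
import statsmodels.formula.api as smf

def tree_income_equivalence(df, n_trees=10):
    """df: one respondent per row with hypothetical column names; the
    paper's models control for many more socio-economic and demographic
    factors than this sketch does."""
    model = smf.ols(
        "health_perception ~ street_tree_density + income"
        " + median_income + age",
        data=df,
    ).fit()
    # Income change whose predicted effect on health perception matches
    # that of n_trees extra trees per block: a ratio of raw coefficients.
    b = model.params
    return n_trees * b["street_tree_density"] / b["income"]
```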
Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: what is it about natural environments that produces these benefits? Problematically, natural and urban environments differ in many qualities, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly, using a multidimensional scaling analysis and, second, explicitly, with direct naturalness ratings. Features that seemed most related to perceptions of naturalness included the density of contrast changes in the scene and the density of straight lines in the scene.
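A minimal sketch of the implicit quantification step, assuming pairwise dissimilarity judgments among images as input (a hypothetical data format; the paper's exact MDS procedure may differ):

```python
from sklearn.manifold import MDS

def implicit_scene_space(dissimilarity_matrix, n_dims=2):
    """dissimilarity_matrix: images x images pairwise dissimilarities,
    e.g., aggregated from participants' similarity judgments."""
    mds = MDS(n_components=n_dims, dissimilarity="precomputed",
              random_state=0)
    coords = mds.fit_transform(dissimilarity_matrix)
    # A recovered dimension can be interpreted as implicit "naturalness"
    # by correlating image coordinates with explicit naturalness ratings.
    return coords
```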
Previous research has shown that viewing images of nature scenes can have a beneficial effect on memory, attention, and mood. In this study, we aimed to determine whether the preference for natural versus man-made scenes is driven by bottom–up processing of the low-level visual features of nature. We used participants’ ratings of perceived naturalness as well as esthetic preference for 307 images with varied natural and urban content. We then quantified 10 low-level image features for each image (a combination of spatial and color properties). These features were used to predict esthetic preference in the images, as well as to decompose perceived naturalness into its predictable (modeled by the low-level visual features) and non-modeled aspects. Interactions of these separate aspects of naturalness with the time it took to make a preference judgment showed that naturalness based on low-level features related more to preference when the judgment was faster (bottom–up). On the other hand, perceived naturalness that was not modeled by low-level features related more to preference when the judgment was slower. A quadratic discriminant classification analysis showed how relevant each aspect of naturalness (modeled and non-modeled) was to predicting preference ratings, as well as the image features on their own. Finally, we compared the effects of color-related and structure-related modeled naturalness, and of the remaining unmodeled naturalness, in predicting esthetic preference. In summary, bottom–up (color and spatial) properties of natural images captured by our features, as well as the non-modeled naturalness, are important to esthetic judgments of natural and man-made scenes, with each predicting unique variance.
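A hedged sketch of the decomposition and classification steps described here: perceived naturalness is split into a feature-predictable part and a residual, and a quadratic discriminant classifier relates both parts to preference. Variable layouts and the binary preference labels are assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def decompose_and_classify(F, naturalness, prefers):
    """F: images x 10 low-level features; naturalness: perceived-
    naturalness ratings; prefers: binary preference labels."""
    # Modeled naturalness: the part predictable from low-level features.
    reg = LinearRegression().fit(F, naturalness)
    modeled = reg.predict(F)
    non_modeled = naturalness - modeled   # residual, "non-modeled" part
    # Quadratic discriminant classification of preference from the two
    # naturalness components, echoing the analysis in the abstract.
    X = np.column_stack([modeled, non_modeled])
    qda = QuadraticDiscriminantAnalysis().fit(X, prefers)
    return qda.score(X, prefers)          # in-sample accuracy, illustrative
```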