How Does the Purpose of Inspection Influence the Potency of Visual Salience in Scene Perception?

Perception ◽  
10.1068/p5659 ◽  
2007 ◽  
Vol 36 (8) ◽  
pp. 1123-1138 ◽  
Author(s):  
Tom Foulsham ◽  
Geoffrey Underwood

Salience-map models have been taken to suggest that the locations of eye fixations are determined by the extent of the low-level discontinuities in an image. While such models have found some support, an increasing emphasis on the task viewers are performing implies that these models must combine with cognitive demands to describe how the eyes are guided efficiently. An experiment is reported in which eye movements to objects in photographs were examined while viewers performed a memory-encoding task or one of two search tasks. The objects depicted in the scenes had known salience ranks according to a popular model. Participants fixated higher-salience objects sooner and more often than lower-salience objects, but only when memorising scenes. This difference shows that salience-map models provide useful predictions even in complex scenes and late in viewing. However, salience had no effects when searching for a target defined by category or exemplar. The results suggest that salience maps are not used to guide the eyes in these tasks, that cognitive override by task demands can be total, and that modelling top-down search is important but may not be easily accomplished within a salience-map framework.
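The "popular model" referenced here is a salience-map architecture in the Itti-Koch tradition, which ranks image locations by low-level contrast. As an illustration only, the core center-surround contrast idea can be sketched in a few lines of numpy; this is a toy stand-in under simplifying assumptions (box filters instead of Gaussian pyramids, a single intensity channel), not the model actually used in the study:

```python
import numpy as np

def box_blur(img, radius):
    """Separable mean filter with edge padding; output has the input's shape."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def salience_map(img, center=1, surround=4):
    """Toy center-surround contrast: fine-scale local mean minus coarse-scale
    local mean. Full salience models pool such maps over colour, intensity,
    and orientation channels at multiple scales."""
    return np.abs(box_blur(img, center) - box_blur(img, surround))
```

On a uniform image the map is zero everywhere; an isolated bright patch produces a peak inside the patch, which is exactly the kind of low-level discontinuity the abstract describes salience models responding to.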

Author(s):  
Benjamin de Haas ◽  
Alexios L. Iakovidis ◽  
D. Samuel Schwarzkopf ◽  
Karl R. Gegenfurtner

What determines where we look? Theories of attentional guidance hold that image features and task demands govern fixation behavior, while differences between observers are interpreted as a “noise-ceiling” that strictly limits predictability of fixations. However, recent twin studies suggest a genetic basis of gaze-trace similarity for a given stimulus. This leads to the question of how individuals differ in their gaze behavior and what may explain these differences. Here, we investigated the fixations of >100 human adults freely viewing a large set of complex scenes containing thousands of semantically annotated objects. We found systematic individual differences in fixation frequencies along six semantic stimulus dimensions. These differences were large (>twofold) and highly stable across images and time. Surprisingly, they also held for first fixations directed toward each image, commonly interpreted as “bottom-up” visual salience. Their perceptual relevance was documented by a correlation between individual face salience and face recognition skills. The set of reliable individual salience dimensions and their covariance pattern replicated across samples from three different countries, suggesting they reflect fundamental biological mechanisms of attention. Our findings show stable individual differences in salience along a set of fundamental semantic dimensions and that these differences have meaningful perceptual implications. Visual salience reflects features of the observer as well as the image.


2006 ◽  
Vol 12 (2) ◽  
pp. 261-271 ◽  
Author(s):  
DONALD T. STUSS

Are the frontal lobes (FL) a general adaptive global-capacity processor, or a series of fractionated processes? Our lesion studies focusing on attention have demonstrated impairments in distinct processes due to pathology in different frontal regions, implying fractionation of the “supervisory system.” However, when task demands are manipulated, it becomes evident that the frontal lobes are not just a series of independent processes. Increased complexity of task demands elicits greater involvement of frontal regions along a fixed network related to a general activation process. For some task demands, one or more anatomically distinct frontal processes may be recruited. In other conditions, there is a bottom-up nonfrontal/frontal network, with impairment noted maximally for the lesser task demands in the nonfrontal automatic processing regions, and then, as task demands change, increased involvement of different frontal (more “strategic”) regions, until it appears all frontal regions are involved. With other measures, the network is top-down, with impairment in the measure first noted in the frontal region and then, with changing task demands, involving a posterior region. Adaptability is not just a property of the FL; it is the fluid recruitment of different processes anywhere in the brain as required by the current task. (JINS, 2006, 12, 261–271.)


2021 ◽  
Author(s):  
Einat Rashal ◽  
Mehdi Senoussi ◽  
Elisa Santandrea ◽  
Suliann Ben Hamed ◽  
Emiliano Macaluso ◽  
...  

This work investigates the combined effect of top-down and bottom-up sources of attentional control, using known attention-related EEG components thought to reflect target selection (N2pc) and distractor suppression (PD), in easy and difficult visual search tasks.


2012 ◽  
Vol 25 (0) ◽  
pp. 158
Author(s):  
Pawel J. Matusz ◽  
Martin Eimer

We investigated whether top-down attentional control settings can specify task-relevant features in different sensory modalities (vision and audition). Two audiovisual search tasks were used in which a spatially uninformative visual singleton cue preceded a target search array. In different blocks, participants searched for a visual target (defined by colour or shape in Experiments 1 and 2, respectively), or for a target defined by a combination of visual and auditory features (e.g., a red target accompanied by a high-pitch tone). Spatial cueing effects indicative of attentional capture by target-matching visual singleton cues in the unimodal visual search task were reduced or completely eliminated when targets were audiovisually defined. The N2pc component (an index of attentional target selection in vision) triggered by these cues was reduced and delayed during search for audiovisual as compared to unimodal visual targets. These results provide novel evidence that the top-down control settings which guide attentional selectivity can include perceptual features from different sensory modalities.


2018 ◽  
Vol 44 (suppl_1) ◽  
pp. S250-S250
Author(s):  
Catherine Barnes ◽  
Lara Rösler ◽  
Michael Schaum ◽  
Deliah Macht ◽  
Benjamin Peters ◽  
...  

2020 ◽  
Author(s):  
Julia W Y Kam ◽  
Randolph F Helfrich ◽  
Anne-Kristin Solbakk ◽  
Tor Endestad ◽  
Pål G Larsson ◽  
...  

Decades of electrophysiological research on top-down control converge on the role of the lateral frontal cortex in facilitating attention to behaviorally relevant external inputs. However, the involvement of frontal cortex in the top-down control of attention directed to the external versus internal environment remains poorly understood. To address this, we recorded intracranial electrocorticography while subjects directed their attention externally to tones and responded to infrequent target tones, or internally to their own thoughts while ignoring the tones. Our analyses focused on frontal and temporal cortices. We first computed the target effect, as indexed by the difference in high frequency activity (70–150 Hz) between target and standard tones. Importantly, we then compared the target effect between external and internal attention, reflecting a top-down attentional effect elicited by task demands, in each region of interest. Both frontal and temporal cortices showed target effects during external and internal attention, suggesting this effect is present irrespective of attention states. However, only the frontal cortex showed an enhanced target effect during external relative to internal attention. These findings provide electrophysiological evidence for top-down attentional modulation in the lateral frontal cortex, revealing preferential engagement with external attention.
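The analysis described above, a target-minus-standard difference in high-frequency activity compared across attention conditions, reduces to two nested contrasts. A minimal sketch with synthetic numbers (illustrative only, not the authors' pipeline; extraction of band-limited 70–150 Hz power is assumed to have happened upstream):

```python
import numpy as np

def target_effect(hfa_target, hfa_standard):
    """Target effect: mean high-frequency activity (HFA) on target
    trials minus mean HFA on standard trials."""
    return np.mean(hfa_target) - np.mean(hfa_standard)

def attentional_modulation(ext_tgt, ext_std, int_tgt, int_std):
    """Top-down effect: how much larger the target effect is under
    external attention than under internal attention."""
    return target_effect(ext_tgt, ext_std) - target_effect(int_tgt, int_std)
```

In the study's terms, a region like lateral frontal cortex would show a positive `attentional_modulation`, while a region with equivalent target effects in both attention states would show a value near zero.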


2016 ◽  
Author(s):  
Emily B.J. Coffey ◽  
Alexander M.P. Chepesiuk ◽  
Sibylle C. Herholz ◽  
Sylvain Baillet ◽  
Robert J. Zatorre

Speech-in-noise (SIN) perception is a complex cognitive skill that affects social, vocational, and educational activities. Poor SIN ability particularly affects young and elderly populations, yet varies considerably even among healthy young adults with normal hearing. Although SIN skills are known to be influenced by top-down processes that can selectively enhance lower-level sound representations, the complementary role of feed-forward mechanisms and their relationship to musical training is poorly understood. Using a paradigm that eliminates the main top-down factors that have been implicated in SIN performance, we aimed to better understand how robust encoding of periodicity in the auditory system (as measured by the frequency-following response) contributes to SIN perception. Using magnetoencephalography, we found that the strength of encoding at the fundamental frequency in the brainstem, thalamus, and cortex is correlated with SIN accuracy, as was the amplitude of the slower cortical P2 wave, and these enhancements were related to the extent and timing of musicianship. These results are consistent with the hypothesis that basic feed-forward sound encoding affects SIN perception by providing better information to later processing stages, and that modifying this process may be one mechanism through which musical training might enhance the auditory networks that subserve both musical and language functions.

Highlights
- Enhancements in periodic sound encoding are correlated with speech-in-noise ability
- This effect is observed in the absence of contextual cues and task demands
- Better encoding is observed throughout the auditory system and is right-lateralized
- Stronger encoding is related to stronger subsequent secondary auditory cortex activity
- Musicianship is related to both speech-in-noise perception and enhanced MEG signals
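The "strength of encoding at the fundamental frequency" in frequency-following-response analyses is typically quantified as spectral amplitude at f0. A hedged numpy sketch of that idea (a crude single-signal proxy, not the authors' MEG source-space pipeline; the function name and the 2 Hz band width are illustrative assumptions):

```python
import numpy as np

def f0_encoding_strength(signal, fs, f0, bw=2.0):
    """Peak spectral amplitude within a narrow band around the fundamental
    frequency f0 (Hz) -- a crude proxy for how strongly the recorded
    response encodes the stimulus periodicity."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) / n   # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, 1.0 / fs)     # bin frequencies in Hz
    band = (freqs >= f0 - bw) & (freqs <= f0 + bw)
    return spec[band].max()
```

A unit-amplitude sine at f0 yields about 0.5 here, since the one-sided spectrum is not doubled; what matters for an FFR-style analysis is the relative strength at f0 versus off-frequency bands, not the absolute scale.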

