Visual attention mediated by biased competition in extrastriate visual cortex

1998 ◽  
Vol 353 (1373) ◽  
pp. 1245-1255 ◽  
Author(s):  
Robert Desimone

According to conventional neurobiological accounts of visual attention, attention serves to enhance extrastriate neuronal responses to a stimulus at one spatial location in the visual field. However, recent results from recordings in extrastriate cortex of monkeys suggest that any enhancing effect of attention is best understood in the context of competitive interactions among neurons representing all of the stimuli present in the visual field. These interactions can be biased in favour of behaviourally relevant stimuli as a result of many different processes, both spatial and non-spatial, and both bottom-up and top-down. The resolution of this competition results in the suppression of the neuronal representations of behaviourally irrelevant stimuli in extrastriate cortex. A main source of top-down influence may derive from neuronal systems underlying working memory.

2019 ◽  
Vol 31 (5) ◽  
pp. 768-779 ◽  
Author(s):  
Justin Riddle ◽  
Kai Hwang ◽  
Dillan Cellier ◽  
Sofia Dhanani ◽  
Mark D'Esposito

Beta and gamma frequency neuronal oscillations have been implicated in top-down and bottom-up attention. In this study, we used rhythmic TMS to modulate ongoing beta and gamma frequency neuronal oscillations in frontal and parietal cortex while human participants performed a visual search task that manipulates bottom-up and top-down attention (single feature and conjunction search). Both task conditions engage bottom-up attention processes, although the conjunction search condition requires more top-down attention. Gamma frequency TMS to the superior precentral sulcus (sPCS) slowed saccadic RTs during both task conditions and induced a response bias toward the contralateral visual field. In contrast, beta frequency TMS to sPCS and the intraparietal sulcus decreased search accuracy only during the conjunction search condition, which engaged more top-down attention. Furthermore, beta frequency TMS increased trial errors specifically when the target was in the ipsilateral visual field during conjunction search. These results indicate that beta frequency TMS to sPCS and the intraparietal sulcus disrupted top-down attention, whereas gamma frequency TMS to sPCS disrupted bottom-up, stimulus-driven attention processes. These findings provide causal evidence that beta and gamma oscillations have distinct functional roles in cognition.


2012 ◽  
Vol 29 ◽  
pp. 3520-3524
Author(s):  
Hui Wang ◽  
Gang Liu ◽  
Yuanyuan Dang
Keyword(s):  
Top Down

1992 ◽  
Vol 44 (3) ◽  
pp. 529-555 ◽  
Author(s):  
T. A. Mondor ◽  
M. P. Bryden

In the typical visual laterality experiment, words and letters are more rapidly and accurately identified in the right visual field than in the left. However, while such studies usually control fixation, the deployment of visual attention is rarely restricted. The present studies investigated the influence of visual attention on the visual field asymmetries normally observed in single-letter identification and lexical decision tasks. Attention was controlled using a peripheral cue that provided advance knowledge of the location of the forthcoming stimulus. The time period between the onset of the cue and the onset of the stimulus (stimulus onset asynchrony, SOA) was varied, so that the time available for attention to focus upon the location was controlled. At short SOAs, a right visual field advantage for identifying single letters and for making lexical decisions was apparent. However, at longer SOAs, letters and words presented in the two visual fields were identified equally well. It is concluded that visual field advantages arise from an interaction of attentional and structural factors, and that the attentional component in visual field asymmetries must be controlled in order to approximate more closely a true assessment of the relative functional capabilities of the right and left cerebral hemispheres.


2014 ◽  
Vol 112 (6) ◽  
pp. 1307-1316 ◽  
Author(s):  
Isabel Dombrowe ◽  
Claus C. Hilgetag

The voluntary, top-down allocation of visual spatial attention has been linked to changes in the alpha band of the electroencephalogram (EEG) signal measured over occipital and parietal lobes. In the present study, we investigated how occipitoparietal alpha-band activity changes when people allocate their attentional resources in a graded fashion across the visual field. We asked participants to either completely shift their attention into one hemifield, to balance their attention equally across the entire visual field, or to attribute more attention to one half of the visual field than to the other. As expected, we found that alpha-band amplitudes decreased more strongly contralaterally than ipsilaterally to the attended side when attention was shifted completely. Alpha-band amplitudes decreased bilaterally when attention was balanced equally across the visual field. However, when participants allocated more attentional resources to one half of the visual field, this was not reflected in the alpha-band amplitudes, which simply decreased bilaterally. We found that the performance of the participants was more strongly reflected in the coherence between frontal and occipitoparietal brain regions. We conclude that low alpha-band amplitudes seem to be necessary for stimulus detection. Furthermore, complete shifts of attention are directly reflected in the lateralization of alpha-band amplitudes, whereas in the present study a gradual allocation of visual attention across the visual field was only indirectly reflected in the alpha-band activity over occipital and parietal cortices.
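
The hemispheric comparison described in this abstract is commonly quantified as a normalized lateralization index over band amplitudes. A minimal sketch of that convention (the function and the example amplitude values are illustrative, not the study's data or analysis pipeline):

```python
def lateralization_index(contra, ipsi):
    """Normalized difference of band amplitudes across hemispheres.

    Negative values indicate a stronger contralateral amplitude
    decrease, as reported for complete attention shifts.
    """
    return (contra - ipsi) / (contra + ipsi)

# Illustrative amplitudes (arbitrary units): a complete shift of
# attention lowers the contralateral alpha amplitude more.
li_shift = lateralization_index(contra=0.6, ipsi=0.9)      # < 0
li_balanced = lateralization_index(contra=0.8, ipsi=0.8)   # == 0
```

The graded-attention result above corresponds to this index staying near zero even when more resources go to one hemifield.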


2018 ◽  
Vol 115 (41) ◽  
pp. 10499-10504 ◽  
Author(s):  
Yin Yan ◽  
Li Zhaoping ◽  
Wu Li

Early sensory cortex is better known for representing sensory inputs than for how its responses affect behavior. Here we explore the behavioral correlates of neuronal responses in primary visual cortex (V1) in a task to detect a uniquely oriented bar (the orientation singleton) in a background of uniformly oriented bars. This singleton is salient when the orientation contrast between the singleton and background bars is sufficiently large, and inconspicuous when it is small. Using implanted microelectrodes, we measured V1 activity while monkeys were trained to quickly saccade to the singleton. A neuron's responses to the singleton within its receptive field had an early and a late component, both of which increased with orientation contrast. The early component started at the outset of the neuronal response and remained unchanged before and after training on singleton detection. The late component started ∼40 ms after the early one; it emerged and evolved with practice on the detection task. Training increased the behavioral accuracy and speed of singleton detection and increased the amount of information in the late response component about a singleton's presence or absence. Furthermore, for a given singleton, faster detection performance was associated with higher V1 responses; training increased this behavioral–neural correlate in the early V1 responses but decreased it in the late V1 responses. Therefore, V1's early responses are directly linked with behavior and represent bottom-up saliency signals. Learning strengthens this link, likely serving as the basis for making the detection task more reflexive and less top-down driven.
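
The bottom-up saliency of such an orientation singleton is commonly modeled as local orientation contrast. A minimal sketch on a toy display of bar orientations in degrees (the display and the 4-neighbour contrast rule are illustrative assumptions, not the authors' stimuli or model):

```python
import numpy as np

def orientation_contrast_map(orientations):
    """Per-location orientation contrast: the mean absolute angular
    difference (folded to <= 90 deg) between each bar and its
    4-connected neighbours. The singleton gets the largest value."""
    h, w = orientations.shape
    sal = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            diffs = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    d = abs(orientations[i, j] - orientations[ni, nj]) % 180
                    diffs.append(min(d, 180 - d))
            sal[i, j] = sum(diffs) / len(diffs)
    return sal

# Background bars at 0 deg, one 50-deg singleton at (2, 2).
bars = np.zeros((5, 5))
bars[2, 2] = 50.0
sal = orientation_contrast_map(bars)
assert np.unravel_index(sal.argmax(), sal.shape) == (2, 2)
```

Increasing the singleton's angular offset raises its contrast value, mirroring the monotonic dependence on orientation contrast reported for both V1 response components.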


1997 ◽  
Vol 77 (2) ◽  
pp. 554-561 ◽  
Author(s):  
Jong-Nam Kim ◽  
Kathleen Mulligan ◽  
Helen Sherk

Kim, Jong-Nam, Kathleen Mulligan, and Helen Sherk. Simulated optic flow and extrastriate cortex. I. Optic flow versus texture. J. Neurophysiol. 77: 554–561, 1997. A locomoting observer sees a very different visual scene than an observer at rest: images throughout the visual field accelerate and expand, and they follow approximately radial outward paths from a single origin. This so-called optic flow field is presumably used for visual guidance, and it has been suggested that particular areas of visual cortex are specialized for the analysis of optic flow. In the cat, the lateral suprasylvian visual area (LS) is a likely candidate. To test the hypothesis that LS is specialized for analysis of optic flow fields, we recorded cell responses to optic flow displays. Stimulus movies simulated the experience of a cat trotting slowly across an endless plain covered with small balls. In different simulations we varied the size of balls, their organization (randomly or regularly dispersed), and their color (all one gray level, or multiple shades of gray). For each optic flow movie, a “texture” movie composed of the same elements but lacking optic flow cues was tested. In anesthetized cats, >500 neurons in LS were studied with a variety of movies. Most (70%) of 454 visually responsive cells responded to optic flow movies. Visually responsive cells generally preferred optic flow to texture movies (69% of those responsive to any movie). The direction in which a movie was shown (forward or reverse) was also an important factor. Most cells (68%) strongly preferred forward motion, which corresponded to visual experience during locomotion.
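
The radial, expanding flow pattern described above follows from the standard pinhole-projection equations for pure forward translation: a point at image position (x, y) with scene depth Z moves with image velocity (u, v) = (x*Tz/Z, y*Tz/Z), radially outward from the focus of expansion at the origin. A minimal sketch (all parameter values are arbitrary, not taken from the stimulus movies):

```python
def flow_vector(x, y, depth, tz):
    """Image velocity (u, v) for pure forward translation at speed tz,
    for a point at image coords (x, y) and scene depth `depth`
    (pinhole model). Vectors point radially away from the focus of
    expansion at (0, 0) and scale inversely with depth."""
    u = x * tz / depth
    v = y * tz / depth
    return u, v

# Points farther from the focus of expansion move faster,
# producing the accelerating, expanding image motion of locomotion.
u1, v1 = flow_vector(1.0, 0.0, depth=10.0, tz=2.0)   # (0.2, 0.0)
u2, v2 = flow_vector(3.0, 0.0, depth=10.0, tz=2.0)   # (0.6, 0.0)
```

Reversing the sign of tz reverses every vector, which corresponds to the forward versus reverse movie directions the cells discriminated.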


2021 ◽  
Author(s):  
Ibrahim Mohammad Hussain Rahman

The human visual attention system (HVA) encompasses a set of interconnected neurological modules that analyze visual stimuli by attending to those regions that are salient. Two contrasting biological mechanisms exist in the HVA: bottom-up, data-driven attention and top-down, task-driven attention. The former is mostly responsible for low-level instinctive behaviors, while the latter is responsible for performing complex visual tasks such as target object detection. Very few computational models have been proposed to model top-down attention, mainly for three reasons. First, the functionality of the top-down process involves many influential factors. Second, top-down responses differ from task to task. Finally, many biological aspects of the top-down process are not yet well understood. For these reasons, it is difficult to devise a generalized top-down model that could be applied to all high-level visual tasks. Instead, this thesis addresses some outstanding issues in modelling top-down attention for one particular task, target object detection. Target object detection is an essential step in analyzing images for more complex visual tasks, and it has not been investigated thoroughly in top-down saliency modelling; it therefore constitutes the main application domain for this thesis. The thesis investigates methods to model top-down attention through various high-level data acquired from images, as well as strategies to dynamically combine bottom-up and top-down processes to improve the detection accuracy and computational efficiency of existing and new visual attention models. The following techniques and approaches are proposed to address the outstanding issues in modelling top-down saliency:

1. A top-down saliency model that weights low-level attentional features through contextual knowledge of a scene. The proposed model assigns weights to the features of a novel image by extracting a contextual descriptor of the image. The contextual descriptor tunes the weighting of low-level features to maximize detection accuracy; incorporating context into the feature weighting mechanism improves the quality of the assigned weights.

2. Two modules of target features combined with contextual weighting to improve detection accuracy of the target object. In this model, two sets of attentional feature weights are learned, one through context and the other through target features. When both sources of knowledge are used to model top-down attention, detection accuracy increases sharply in images with complex backgrounds and a variety of target objects.

3. A top-down and bottom-up attention combination model based on feature interaction. This model combines both processes dynamically by formulating the problem as feature selection. The feature selection exploits the interaction between features, yielding a robust set of features that maximizes both the detection accuracy and the overall efficiency of the system.

4. A feature map quality score estimation model that accurately predicts the detection accuracy score of a previously unseen feature map without ground-truth data. The model extracts various local, global, geometrical and statistical characteristics from a feature map, which guide a regression model to estimate the map's quality.

5. A dynamic feature integration framework for combining bottom-up and top-down saliencies at runtime. If the estimation model can predict the quality score of any novel feature map accurately, then feature map integration can be performed dynamically based on the estimated value. We propose two frameworks for feature map integration using the estimation model. The proposed integration framework achieves higher human fixation prediction accuracy with fewer feature maps than combining all feature maps.

The work in this thesis provides new directions in modelling top-down saliency for target object detection. In addition, the dynamic approaches to top-down and bottom-up combination show considerable improvements over existing approaches in both efficiency and accuracy.
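
The quality-weighted integration idea in points 4 and 5 can be sketched as weighting each feature map by an estimated quality score before summing. The score values below are stand-ins for the thesis's learned regression estimates, and the maps are toy arrays:

```python
import numpy as np

def combine_maps(feature_maps, quality_scores):
    """Weighted sum of saliency feature maps, with weights given by
    (stand-in) quality scores normalized to sum to 1."""
    w = np.asarray(quality_scores, dtype=float)
    w = w / w.sum()
    stacked = np.stack(feature_maps)         # (n_maps, H, W)
    return np.tensordot(w, stacked, axes=1)  # (H, W)

# Two toy 2x2 feature maps; the second is judged twice as reliable.
m1 = np.array([[1.0, 0.0], [0.0, 0.0]])
m2 = np.array([[0.0, 1.0], [0.0, 0.0]])
sal = combine_maps([m1, m2], quality_scores=[1.0, 2.0])
# The location backed by the higher-quality map dominates the result.
```

A dynamic framework in this spirit could also drop maps whose estimated score falls below a threshold, which is how fewer maps can outperform combining all of them.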


2019 ◽  
Author(s):  
Chloé Stoll ◽  
Matthew William Geoffrey Dye

While a substantial body of work has suggested that deafness brings about an increased allocation of visual attention to the periphery, there has been much less work on how using a signed language may also influence this attentional allocation. Signed languages are visual-gestural: they are produced using the body and perceived via the human visual system. Signers fixate upon the face of interlocutors and do not look directly at the hands moving in the inferior visual field. It is therefore reasonable to predict that signed languages require a redistribution of covert visual attention to the inferior visual field. Here we report a prospective and statistically powered assessment of the spatial distribution of attention to inferior and superior visual fields in signers, both deaf and hearing, in a visual search task. Using a Bayesian hierarchical drift diffusion model, we estimated decision-making parameters for the superior and inferior visual fields in deaf signers, hearing signers, and hearing non-signers. Results indicated a greater attentional redistribution toward the inferior visual field in adult signers (both deaf and hearing) than in hearing sign-naïve adults. The effect was smaller for hearing signers than for deaf signers, suggestive of either a role for extent of exposure or greater plasticity of the visual system in the deaf. The data support a process by which the demands of linguistic processing can influence the human attentional system.
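
The decision-making parameters mentioned above are those of the drift diffusion model, in which noisy evidence accumulates toward one of two response boundaries. A minimal single-trial simulation using Euler discretization (parameter values are arbitrary, not the paper's estimates, and the hierarchical Bayesian fitting step is not shown):

```python
import random

def simulate_ddm_trial(drift, boundary, ndt, dt=0.001, noise=1.0, rng=None):
    """Simulate one drift-diffusion trial: evidence starts at 0 and
    diffuses until it crosses +boundary (correct) or -boundary (error).
    `ndt` is non-decision time added to the accumulation time."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return ndt + t, x >= boundary

# A positive drift rate biases trials toward the correct boundary;
# the returned RT is non-decision time plus accumulation time.
rt, correct = simulate_ddm_trial(drift=2.0, boundary=1.0, ndt=0.3)
```

Comparing fitted drift rates between visual fields, as the study does, amounts to asking where evidence accumulates faster, which is one way attentional redistribution can express itself.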

