Endogenous and Exogenous Control of Visual Selection

Perception ◽  
1994 ◽  
Vol 23 (4) ◽  
pp. 429-440 ◽  
Author(s):  
Jan Theeuwes

Among the most fundamental issues of visual attention research is the extent to which visual selection is controlled by properties of the stimulus or by the intentions, goals, and beliefs of the observer. Before selective attention operates, preattentive processes perform some basic analyses segmenting the visual field into functional perceptual units. The crucial question is whether the allocation of attention to these perceptual units is under the endogenous control of the observer (intentions, goals, beliefs) or under the exogenous control of stimulation. In this article evidence is discussed regarding the endogenous and exogenous control of attention in tasks in which subjects search for a particular ‘basic’ feature (eg search for a unique colour, shape, or brightness). In the present review it is suggested that selectivity in these types of search tasks is dependent on the relative saliency of the stimulus attributes. It is concluded that the visual system automatically calculates differences in basic features (eg difference in shape, colour, or brightness) and that visual information occupying the position of the highest saliency across stimulus dimensions is exogenously passed on to the ‘central representation’ that is responsible for further stimulus analysis. Alternative explanations of the present findings and tentative speculations resulting from the present approach are discussed.
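The saliency account above (local feature differences computed within each basic dimension, combined across dimensions, with the most salient location winning) can be illustrated with a toy sketch in Python. This is a hypothetical illustration, not Theeuwes's actual model; the function name and feature coding are invented.

```python
import numpy as np

def saliency_winner(features):
    """Toy saliency computation over display items.

    features: (n_items, n_dims) array of basic feature values
    (e.g. colour, shape, brightness codes). For each dimension, an
    item's local contrast is its mean absolute difference from the
    other items; contrasts are summed across dimensions and the
    most salient item is selected.
    """
    features = np.asarray(features, dtype=float)
    n = features.shape[0]
    # pairwise absolute differences per dimension
    diffs = np.abs(features[:, None, :] - features[None, :, :])
    # mean contrast against the other n-1 items, per dimension
    contrast = diffs.sum(axis=1) / (n - 1)
    saliency = contrast.sum(axis=1)  # combine across dimensions
    return int(np.argmax(saliency)), saliency

# A unique item among homogeneous distractors: item 3 differs on both
# dimension 0 (colour) and dimension 1 (shape), so it wins.
items = [[0, 0], [0, 0], [0, 0], [1, 1], [0, 0]]
winner, s = saliency_winner(items)
```

In this sketch the winner is selected purely by stimulus-driven contrast, with no term for the observer's intentions, which is the exogenous claim at issue.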

2002 ◽  
Vol 14 (5) ◽  
pp. 687-701 ◽  
Author(s):  
Jason Proksch ◽  
Daphne Bavelier

There is much anecdotal suggestion of improved visual skills in congenitally deaf individuals. However, this claim has only been met by mixed results from careful investigations of visual skills in deaf individuals. Psychophysical assessments of visual functions have failed, for the most part, to validate the view of enhanced visual skills after deafness. Only a few studies have shown an advantage for deaf individuals in visual tasks. Interestingly, all of these studies share the requirement that participants process visual information in their peripheral visual field under demanding conditions of attention. This work has led us to propose that congenital auditory deprivation alters the gradient of visual attention from central to peripheral field by enhancing peripheral processing. This hypothesis was tested by adapting a search task from Lavie and colleagues in which the interference from distracting information on the search task provides a measure of attentional resources. These authors have established that during an easy central search for a target, any surplus attention remaining will involuntarily process a peripheral distractor that the subject has been instructed to ignore. Attentional resources can be measured by adjusting the difficulty of the search task to the point at which no surplus resources are available for the distractor. Through modification of this paradigm, central and peripheral attentional resources were compared in deaf and hearing individuals. Deaf individuals possessed greater attentional resources in the periphery but less in the center when compared to hearing individuals. Furthermore, based on results from native hearing signers, it was shown that sign language alone could not be responsible for these changes. We conclude that auditory deprivation from birth leads to compensatory changes within the visual system that enhance attentional processing of the peripheral visual field.
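The load logic of the adapted Lavie paradigm, in which distractor interference indexes spare capacity and capacity is read off as the search load at which interference vanishes, can be sketched as follows. This is a toy operationalization with made-up numbers; the function names are invented, not part of the original paradigm.

```python
def interference(rt_incompatible, rt_compatible):
    """Distractor interference (ms): the extra response time when the
    to-be-ignored peripheral distractor is response-incompatible.
    Zero interference means no surplus attention reached it."""
    return rt_incompatible - rt_compatible

def capacity_set_size(rts_by_load):
    """Smallest search set size at which interference drops to (or
    below) zero: a toy estimate of attentional capacity.
    rts_by_load: {set_size: (rt_incompatible_ms, rt_compatible_ms)}.
    Returns None if interference never vanishes."""
    for size in sorted(rts_by_load):
        inc, comp = rts_by_load[size]
        if interference(inc, comp) <= 0:
            return size
    return None

# Illustrative (made-up) numbers: interference shrinks as load grows,
# reaching zero when the search exhausts spare capacity.
data = {1: (640, 600), 3: (625, 605), 5: (610, 610)}
```

On this logic, a group with greater peripheral capacity would show a larger capacity estimate for peripheral distractor positions, which is the comparison the study runs between deaf and hearing participants.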


Perception ◽  
10.1068/p3414 ◽  
2003 ◽  
Vol 32 (6) ◽  
pp. 645-656 ◽  
Author(s):  
Jeremy M Wolfe ◽  
Jennifer S DiMase

The status of ‘intersection’ as a basic feature in visual search tasks has been controversial. Under some circumstances, a target possessing this attribute (eg a plus) ‘pops out’ of a display of distractors that lack the attribute (eg Ls). However, those cases may be artifacts of other features such as relative size or number of line terminators. We report two sets of experiments with stimuli intended to control for these factors. Search for the presence or absence of intersections is very inefficient with these stimuli. The results suggest that intersection should not be included among the list of salient features that support efficient search through visual displays.
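"Efficient" versus "inefficient" search is conventionally quantified as the slope of reaction time against display set size, with pop-out searches yielding slopes near zero. A minimal sketch, using illustrative made-up data rather than the authors' results:

```python
import numpy as np

def search_slope(set_sizes, rts_ms):
    """Search efficiency as the least-squares slope (ms/item) of mean
    reaction time against set size; near-zero slopes indicate efficient
    'pop-out' search, steep slopes inefficient item-by-item search."""
    slope, intercept = np.polyfit(set_sizes, rts_ms, 1)
    return slope, intercept

# Hypothetical numbers: a flat function for an efficient feature
# search, versus a steep one like the controlled intersection displays.
sizes = [4, 8, 12, 16]
efficient = [520, 523, 525, 528]     # roughly 0.7 ms/item
inefficient = [560, 760, 960, 1160]  # roughly 50 ms/item
s_eff, _ = search_slope(sizes, efficient)
s_ineff, _ = search_slope(sizes, inefficient)
```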


1992 ◽  
Vol 44 (3) ◽  
pp. 529-555 ◽  
Author(s):  
T. A. Mondor ◽  
M.P. Bryden

In the typical visual laterality experiment, words and letters are more rapidly and accurately identified in the right visual field than in the left. However, while such studies usually control fixation, the deployment of visual attention is rarely restricted. The present studies investigated the influence of visual attention on the visual field asymmetries normally observed in single-letter identification and lexical decision tasks. Attention was controlled using a peripheral cue that provided advance knowledge of the location of the forthcoming stimulus. The time period between the onset of the cue and the onset of the stimulus (stimulus onset asynchrony, SOA) was varied, such that the time available for attention to focus upon the location was controlled. At short SOAs a right visual field advantage for identifying single letters and for making lexical decisions was apparent. However, at longer SOAs letters and words presented in the two visual fields were identified equally well. It is concluded that visual field advantages arise from an interaction of attentional and structural factors and that the attentional component in visual field asymmetries must be controlled in order to approximate more closely a true assessment of the relative functional capabilities of the right and left cerebral hemispheres.


Perception ◽  
1992 ◽  
Vol 21 (4) ◽  
pp. 465-480 ◽  
Author(s):  
Jeremy M Wolfe ◽  
Alice Yee ◽  
Stacia R Friedman-Hill

2019 ◽  
Author(s):  
Chloé Stoll ◽  
Matthew William Geoffrey Dye

While a substantial body of work has suggested that deafness brings about an increased allocation of visual attention to the periphery there has been much less work on how using a signed language may also influence this attentional allocation. Signed languages are visual-gestural and produced using the body and perceived via the human visual system. Signers fixate upon the face of interlocutors and do not directly look at the hands moving in the inferior visual field. It is therefore reasonable to predict that signed languages require a redistribution of covert visual attention to the inferior visual field. Here we report a prospective and statistically powered assessment of the spatial distribution of attention to inferior and superior visual fields in signers – both deaf and hearing – in a visual search task. Using a Bayesian Hierarchical Drift Diffusion Model, we estimated decision making parameters for the superior and inferior visual field in deaf signers, hearing signers and hearing non-signers. Results indicated a greater attentional redistribution toward the inferior visual field in adult signers (both deaf and hearing) than in hearing sign-naïve adults. The effect was smaller for hearing signers than for deaf signers, suggestive of either a role for extent of exposure or greater plasticity of the visual system in the deaf. The data provide support for a process by which the demands of linguistic processing can influence the human attentional system.
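The drift diffusion model at the core of the analysis treats each decision as noisy evidence accumulation toward a boundary, with drift rate and boundary separation as the estimated parameters. Below is a basic single-trial simulation: a plain DDM sketch, not the hierarchical Bayesian model fitted in the study, and the parameter values are purely illustrative.

```python
import random

def simulate_ddm(drift, boundary, noise=1.0, dt=0.001, seed=0):
    """Simulate one drift-diffusion trial: evidence starts at zero and
    accumulates with rate `drift` plus Gaussian noise until it crosses
    +boundary (coded as a correct response, 1) or -boundary (an error,
    0). Returns (choice, rt_seconds)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        # Euler step of the diffusion: drift term plus scaled noise
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return (1 if x > 0 else 0), t

# With a strongly positive drift rate, most trials terminate at the
# upper (correct) boundary.
choices = [simulate_ddm(drift=3.0, boundary=1.0, seed=k)[0]
           for k in range(50)]
accuracy = sum(choices) / len(choices)
```

In the study's framing, a higher estimated drift rate for inferior-field targets in signers would correspond to faster, more reliable evidence accumulation from that region of the visual field.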


Author(s):  
Thomas Z. Strybel ◽  
Jan M. Boucher ◽  
Greg E. Fujawa ◽  
Craig S. Volp

The effectiveness of auditory spatial cues in visual search was examined in three experiments. Auditory spatial cues were more effective than abrupt visual onsets when the target appeared in the peripheral visual field or when the contrast of the target was degraded. The duration of the auditory spatial cue did not affect search performance.


2021 ◽  
pp. 1-55 ◽  
Author(s):  
Jeffrey Frederic Queisser ◽  
Minju Jung ◽  
Takazumi Matsumoto ◽  
Jun Tani

Abstract Generalization by learning is an essential cognitive competency for humans. For example, we can manipulate even unfamiliar objects and can generate mental images of actions before enacting them. How is this possible? Our study investigated this problem by revisiting our previous study (Jung, Matsumoto, & Tani, 2019), which examined the problem of vision-based, goal-directed planning by robots performing a block-stacking task. Extending that study, our work introduces a large network comprising dynamically interacting submodules, including visual working memory modules (VWMs), a visual attention module, and an executive network. The executive network predicts motor signals, visual images, and various controls for attention, as well as masking of visual information. The most significant difference from the previous study is that our current model contains an additional VWM. The entire network is trained using predictive coding, and an optimal visuomotor plan to achieve a given goal state is inferred using active inference. Results indicate that our current model performs significantly better than that used in Jung et al. (2019), especially when manipulating blocks with unlearned colors and textures. Simulation results revealed that the observed generalization was achieved because content-agnostic information processing developed through synergistic interaction between the second VWM and other modules during the course of learning, in which memorizing image contents and transforming them are dissociated. This letter verifies this claim by conducting both qualitative and quantitative analyses of the simulation results.
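Inferring a plan by active inference amounts to adjusting latent variables so that the network's prediction of the outcome matches the goal state. The following is a toy sketch with a small linear forward model, invented here for illustration; the paper's network is far larger and nonlinear.

```python
import numpy as np

def infer_plan(forward, goal, latent_dim, steps=200, lr=0.1, seed=0):
    """Active-inference-style plan inference (toy sketch): given a
    forward model W mapping a latent plan z to a predicted outcome
    W @ z, adjust z by gradient descent on the squared prediction
    error so the prediction matches the goal state."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=latent_dim)
    W = forward
    for _ in range(steps):
        error = W @ z - goal   # prediction error against the goal
        grad = W.T @ error     # gradient of 0.5 * ||error||**2
        z -= lr * grad
    return z, np.linalg.norm(W @ z - goal)

rng = np.random.default_rng(1)
# Forward model with orthonormal rows (via QR) so the gradient
# steps are well conditioned in this toy setting.
W = np.linalg.qr(rng.normal(size=(5, 3)))[0].T
goal = rng.normal(size=3)
z, residual = infer_plan(W, goal, latent_dim=5)
```

The inferred latent `z` plays the role of the visuomotor plan: it is never trained directly, only optimized at inference time against the goal image, which is the sense in which planning here is "active inference".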


Development ◽  
1981 ◽  
Vol 65 (1) ◽  
pp. 199-217
Author(s):  
C. Kennard

The extent, and the development, of the ipsilateral retinothalamic projection in the frog Xenopus laevis have been studied using terminal degeneration and autoradiographic techniques. This ipsilateral projection derives only from those retinal areas receiving visual information from the binocular portion of the visual field. In Xenopus, the ipsilateral retinothalamic projection arises from a larger area of the retina than was found to be the case in earlier studies on Rana. This correlates with the fact that Xenopus has a larger binocular visual field than does Rana. The ipsilateral retinothalamic projection is just detectable at about stage 56 of larval life, considerably later than its contralateral counterpart. Experimental manipulation of the developing eye vesicle at early larval stages followed by histological studies of the ipsilateral retinothalamic projections showed, however, that the retinal areas which give rise to this projection are determined by stage 32 of larval life. Further studies, in which monocular enucleation was performed at different larval stages with subsequent examination of the retinothalamic projections from the remaining eye, indicated that the selective pattern of decussation and non-decussation of retinothalamic fibres at the optic chiasma does not require interactions, at the chiasma, between optic fibres from the two eyes.

