A model of top-down control of attention during visual search in real-world scenes

2010 ◽  
Vol 8 (6) ◽  
pp. 681-681 ◽  
Author(s):  
A. Hwang ◽  
M. Pomplun
2006 ◽  
Vol 46 (24) ◽  
pp. 4118-4133 ◽  
Author(s):  
Xin Chen ◽  
Gregory J. Zelinsky

Author(s):  
Tobit Kollenberg ◽  
Alexander Neumann ◽  
Dorothe Schneider ◽  
Tessa-Karina Tews ◽  
Thomas Hermann ◽  
...  

Author(s):  
Gwendolyn Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

Abstract According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers often violate the Maxim of Quantity, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance with and without perceptually relevant but non-contrastive modifiers in the search instruction. Participants (N = 48 in each experiment) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left, Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis for scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision, and engage in rational reference comprehension.
Significance statement This study investigated whether providing more information than someone needs to find an object in a photograph helps them to find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were either given information about where the object would be located in the scene, what color the object was, or were only told what object to search for. The results showed that providing additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions, etc.) can benefit from the inclusion of what appears to be redundant information.


Vision ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 13
Author(s):  
Christian Valuch

Color can enhance the perception of relevant stimuli by increasing their salience and guiding visual search towards stimuli that match a task-relevant color. Using Continuous Flash Suppression (CFS), the current study investigated whether color facilitates the discrimination of targets that are difficult to perceive due to interocular suppression. Gabor patterns of two or four cycles per degree (cpd) were shown as targets to the non-dominant eye of human participants. CFS masks were presented at a rate of 10 Hz to the dominant eye, and participants were asked to report the target’s orientation as soon as they could discriminate it. The 2-cpd targets were robustly suppressed and resulted in much longer response times compared to 4-cpd targets. Moreover, two color-related effects were evident for 2-cpd targets only. First, in trials where targets and CFS masks had different colors, targets were reported faster than in trials where targets and CFS masks had the same color. Second, targets with a known color, either cyan or yellow, were reported earlier than targets whose color was randomly cyan or yellow. The results suggest that the targets’ entry to consciousness may have been speeded by color-mediated effects relating to increased (bottom-up) salience and (top-down) task relevance.
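The spatial-frequency manipulation (2 vs. 4 cpd Gabor targets) is easy to reproduce. The sketch below is illustrative only, not the study's stimulus code; the pixels-per-degree value stands in for a display calibration that the abstract does not report. It renders a Gabor pattern, i.e., a sinusoidal grating of a given spatial frequency under a Gaussian envelope:

```python
import numpy as np

def gabor_patch(size_px=256, px_per_deg=64, cpd=2.0,
                orientation_deg=45.0, sigma_deg=0.5, phase=0.0):
    """Gabor luminance profile: a grating at `cpd` cycles per degree,
    oriented at `orientation_deg`, under a Gaussian envelope of width
    `sigma_deg` (all visual-angle units assume `px_per_deg` calibration)."""
    half = size_px // 2
    # Pixel coordinates converted to degrees of visual angle
    y, x = np.mgrid[-half:half, -half:half] / px_per_deg
    theta = np.deg2rad(orientation_deg)
    # Coordinate along the grating's modulation axis
    xr = x * np.cos(theta) + y * np.sin(theta)
    grating = np.cos(2 * np.pi * cpd * xr + phase)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_deg**2))
    return grating * envelope

patch = gabor_patch(cpd=4.0)
print(patch.shape)  # → (256, 256)
```

Doubling `cpd` from 2 to 4 doubles the number of grating cycles per degree of visual angle, which is the only difference between the two target types described above.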


10.2741/a503 ◽  
2000 ◽  
Vol 5 (3) ◽  
pp. d169-193 ◽  
Author(s):  
K. Sathian
Keyword(s):  
Top Down

2020 ◽  
Author(s):  
Anna Kosovicheva ◽  
Abla Alaoui-Soce ◽  
Jeremy Wolfe

Many real-world visual tasks involve searching for multiple instances of a target (e.g., picking ripe berries). What strategies do observers use when collecting items in this type of search? Do they wait to finish collecting the current item before starting to look for the next target, or do they search ahead for future targets? We utilized behavioral and eye tracking measures to distinguish between these two possibilities in foraging search. Experiment 1 used a color wheel technique in which observers searched for T shapes among L shapes while all items independently cycled through a set of colors. Trials were abruptly terminated, and observers reported both the color and location of the next target that they intended to click. Using observers’ color reports to infer target-finding times, we demonstrate that observers found the next item before the time of the click on the current target. We validated these results in Experiment 2 by recording fixation locations around the time of each click. Experiment 3 utilized a different procedure, in which all items were intermittently occluded during the trial. We then calculated a distribution of when targets were visible around the time of each click, allowing us to infer when they were most likely found. In a fourth and final experiment, observers indicated the locations of multiple future targets after the search was abruptly terminated. Together, our results provide converging evidence to demonstrate that observers can find the next target before collecting the current target and can typically forage 1-2 items ahead.
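The color-wheel inference in Experiment 1 can be sketched concretely. The code below is my reconstruction of the logic, not the authors' analysis: assuming every item steps through a known color sequence at a fixed rate, the color an observer reports for the next target narrows down the time windows during which that item could have been found:

```python
def times_for_color(reported, cycle, dwell_ms, trial_ms):
    """All intervals [start, end) within a trial of length `trial_ms`
    during which an item showed the `reported` color, assuming the item
    repeats the color `cycle` with `dwell_ms` milliseconds per color."""
    idx = cycle.index(reported)
    period = len(cycle) * dwell_ms
    intervals = []
    start = idx * dwell_ms
    while start < trial_ms:
        intervals.append((start, min(start + dwell_ms, trial_ms)))
        start += period
    return intervals

# E.g., a hypothetical 4-color cycle at 250 ms per color in a 2000 ms trial:
times_for_color("green", ["red", "green", "blue", "yellow"], 250, 2000)
# → [(250, 500), (1250, 1500)]
```

In the actual study, comparing these candidate windows against the time of the click on the current target is what lets color reports reveal whether the next item was found before that click.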


2004 ◽  
Vol 01 (04) ◽  
pp. 345-356
Author(s):  
HYUNG-MIN PARK ◽  
JONG-HWAN LEE ◽  
TAESU KIM ◽  
UN-MIN BAE ◽  
BYUNG TAEK KIM ◽  
...  

An auditory model has been developed for an intelligent speech information acquisition system in real-world noisy environments. The developed mathematical model of the human auditory pathway consists of three components, i.e., nonlinear feature extraction from the cochlea to the auditory cortex, binaural processing at the superior olivary complex, and top-down attention from the higher brain to the cochlea. The feature extraction is based on information-theoretic sparse coding throughout the auditory pathway. Also, time-frequency masking is incorporated as a model of lateral inhibition in both the time and frequency domains. The binaural processing is modeled as blind signal separation and adaptive noise canceling based on independent component analysis with hundreds of time delays for noisy reverberated signals. The Top-Down (TD) attention comes from the familiarity and/or importance of the sensory information, i.e., the sound, and a simple but efficient TD attention model was developed based on the error backpropagation algorithm. The binaural processing and top-down attention are also combined for speech signals with heavy noise. This auditory model requires extensive computing, and special hardware was developed for real-time applications. Experimental results demonstrate much better recognition performance in real-world noisy environments.
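As one concrete illustration of the adaptive noise canceling component, the sketch below implements a generic textbook LMS (least mean squares) canceler. This is my minimal stand-in, not the authors' ICA-based system with hundreds of time delays: a short adaptive filter learns to predict the noise in a primary channel from a correlated reference channel, and the prediction error is the cleaned signal:

```python
import numpy as np

def lms_noise_cancel(primary, reference, n_taps=32, mu=0.005):
    """LMS adaptive noise canceling: predict the noise component of
    `primary` from a delay line over `reference`; the prediction error
    `e` is the estimate of the underlying clean signal."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # delay line, most recent sample first
        noise_est = w @ x                   # filter's prediction of the noise
        e = primary[n] - noise_est          # residual = cleaned-signal estimate
        w += 2 * mu * e * x                 # Widrow-Hoff LMS weight update
        out[n] = e
    return out
```

With a reference that is correlated with the interference but not with the speech, the residual converges toward the clean signal; the full model described above replaces this single-reference LMS with ICA-based blind separation over binaural inputs.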

