Saliency Map Models
Recently Published Documents


TOTAL DOCUMENTS: 4 (FIVE YEARS: 0)

H-INDEX: 2 (FIVE YEARS: 0)

2016 · Vol 9 (5)
Author(s): John Tsotsos, Iuliia Kotseruba, Calden Wloka

A computational explanation of how visual attention, interpretation of visual stimuli, and eye movements combine to produce visual behavior seems elusive. Here, we focus on one component: how selection is accomplished for the next fixation. The popularity of saliency map models drives the inference that this problem is solved, but we argue otherwise. We provide arguments that a cluster of complementary conspicuity representations drives selection, modulated by task goals and fixation history, leading to a hybrid process that encompasses early and late attentional selection. This design is further constrained by the architectural characteristics of the visual processing pathways. These elements combine into a new strategy for computing fixation targets, and a first simulation of its performance is presented. A sample video of this performance can be found by clicking on the "Supplementary Files" link under the "Article Tools" heading.
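The core selection step the abstract describes — a conspicuity (saliency) representation from which the next fixation is chosen, with history preventing immediate return to prior targets — can be illustrated with a minimal winner-take-all sketch. This is not the authors' model; all function names, parameters, and the suppression scheme below are illustrative assumptions.

```python
import numpy as np

def select_fixations(saliency, n_fixations=5, ior_radius=2, ior_decay=0.5):
    """Toy winner-take-all fixation selection with inhibition of return.

    `saliency` is a 2-D array of conspicuity values. After each winner
    is chosen, activity in its neighborhood is attenuated so the next
    fixation is drawn elsewhere (a crude stand-in for fixation history).
    """
    s = saliency.astype(float).copy()
    h, w = s.shape
    fixations = []
    for _ in range(n_fixations):
        # Winner-take-all: the most conspicuous location wins the fixation.
        y, x = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((y, x))
        # Inhibition of return: suppress a window around the winner.
        y0, y1 = max(0, y - ior_radius), min(h, y + ior_radius + 1)
        x0, x1 = max(0, x - ior_radius), min(w, x + ior_radius + 1)
        s[y0:y1, x0:x1] *= ior_decay
    return fixations
```

Task modulation, in this framing, would amount to reweighting the map before selection; the sketch omits that step and the multiple complementary representations the paper proposes.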


Author(s): Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, ...

2009 · Vol 102 (6) · pp. 3481-3491
Author(s): Koorosh Mirpour, Fabrice Arcizet, Wei Song Ong, James W. Bisley

In everyday life, we efficiently find objects in the world by moving our gaze from one location to another. The efficiency of this process is brought about by ignoring items that are dissimilar to the target and remembering which target-like items have already been examined. We trained two animals on a visual foraging task in which they had to find a reward-loaded target among five task-irrelevant distractors and five potential targets. We found that both animals performed the task efficiently, ignoring the distractors and rarely examining a particular target twice. We recorded the single-unit activity of 54 neurons in the lateral intraparietal area (LIP) while the animals performed the task. The responses of the neurons differentiated between targets and distractors throughout the trial. Further, the responses marked targets that had already been fixated with a reduction in activity. This reduction acted like inhibition of return in saliency map models: items that had been fixated were no longer represented by high enough activity to draw an eye movement. The reduction could also be seen as a correlate of reward expectancy; after a target had been identified as not containing the reward, the activity was reduced. Within a trial, responses to the remaining targets did not increase as they became more likely to yield a reward, suggesting that only activity related to an event is updated on a moment-by-moment basis. Together, our data show that all the neural activity required to guide efficient search is present in LIP. Because LIP activity is known to correlate with saccade goal selection, we propose that LIP plays a significant role in the guidance of efficient visual search.
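The activity-reduction mechanism described above — a priority-map-like signal in which distractors sit low, unexamined targets sit high, and examined targets are suppressed so they are not revisited — can be sketched as a toy simulation. The activity values, function name, and parameters here are illustrative assumptions, not measurements from the study.

```python
import random

def forage(n_targets=5, n_distractors=5, seed=0):
    """Toy foraging search guided by a priority-map analogue.

    Illustrative activities: unexamined targets 1.0, distractors 0.2.
    Fixating a non-rewarded target drops its activity to 0.1, an
    inhibition-of-return-like reduction that keeps it from winning
    selection again.
    """
    rng = random.Random(seed)
    activity = [1.0] * n_targets + [0.2] * n_distractors
    reward_idx = rng.randrange(n_targets)  # one target holds the reward
    fixations = []
    while True:
        # Fixate whichever item currently has the highest activity.
        i = max(range(len(activity)), key=lambda j: activity[j])
        fixations.append(i)
        if i == reward_idx:
            return fixations  # reward found; the trial ends
        activity[i] = 0.1  # reduced response marks the item as examined
```

Because distractor activity never exceeds that of an unexamined target, the simulated search reproduces the two behavioral signatures reported above: distractors are never fixated, and no target is examined twice.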

