Is there a shape to the attention spotlight? Computing saliency over proto-objects predicts fixations during scene viewing.

2019, Vol 45 (1), pp. 139-154
Author(s): Yupei Chen, Gregory J. Zelinsky
2017, Vol 114 (10), pp. 2771-2776
Author(s): Hildward Vandormael, Santiago Herce Castañón, Jan Balaguer, Vickie Li, Christopher Summerfield

Humans move their eyes to gather information about the visual world. However, saccadic sampling has largely been explored in paradigms that involve searching for a lone target in a cluttered array or natural scene. Here, we investigated the policy that humans use to overtly sample information in a perceptual decision task that required information from across multiple spatial locations to be combined. Participants viewed a spatial array of numbers and judged whether the average was greater or smaller than a reference value. Participants preferentially sampled items that were less diagnostic of the correct answer (“inlying” elements; that is, elements closer to the reference value). This preference to sample inlying items was linked to decisions, enhancing the tendency to give more weight to inlying elements in the final choice (“robust averaging”). These findings contrast with a large body of evidence indicating that gaze is directed preferentially to deviant information during natural scene viewing and visual search, and suggest that humans may sample information “robustly” with their eyes during perceptual decision-making.


2011, Vol 366 (1564), pp. 596-610
Author(s): Benjamin W. Tatler, Michael F. Land

One of the paradoxes of vision is that the world as it appears to us and the image on the retina at any moment are not much like each other. The visual world seems to be extensive and continuous across time, yet the manner in which we sample the visual environment is neither extensive nor continuous. How does the brain reconcile these differences? Here, we consider existing evidence from both static and dynamic viewing paradigms, together with the logical requirements of any representational scheme that could support active behaviour. While static scene viewing paradigms favour extensive, but perhaps abstracted, memory representations, dynamic settings suggest sparser, task-selective representations. We suggest that in dynamic settings, where movement within extended environments is required to complete a task, visual input works together with egocentric and allocentric representations to allow efficient behaviour. The egocentric model serves as a coding scheme in which actions can be planned, but it also offers a potential means of providing the perceptual stability that we experience.


2010, Vol 10 (8), pp. 20-20
Author(s): A. Nuthmann, J. M. Henderson

2015, Vol 1608, pp. 138-146
Author(s): Qiang Xu, Yaping Yang, Entao Zhang, Fuqiang Qiao, Wenyi Lin, ...
