The relationship between working memory and the dual-target cost in visual search guidance.

2019 ◽  
Vol 45 (7) ◽  
pp. 911-935 ◽  
Author(s):  
Tamaryn Menneer ◽  
Kyle R. Cave ◽  
Elina Kaplan ◽  
Michael J. Stroud ◽  
Junha Chang ◽  
...  
Author(s):  
Stanislas Huynh Cong ◽  
Dirk Kerzel

Abstract Recently, working memory (WM) has been conceptualized as a limited resource, distributed flexibly and strategically between an unlimited number of representations. In addition to improving the precision of representations in WM, the allocation of resources may also shape how these representations act as attentional templates to guide visual search. Here, we reviewed recent evidence in favor of this assumption and proposed three main principles that govern the relationship between WM resources and template-guided visual search. First, the allocation of resources to an attentional template has an effect on visual search, as it may improve the guidance of visual attention, facilitate target recognition, and/or protect the attentional template against interference. Second, the allocation of the largest amount of resources to a representation in WM is not sufficient to give this representation the status of attentional template and thus, the ability to guide visual search. Third, the representation obtaining the status of attentional template, whether at encoding or during maintenance, receives an amount of WM resources proportional to its relevance for visual search. Thus defined, the resource hypothesis of visual search constitutes a parsimonious and powerful framework, which provides new perspectives on previous debates and complements existing models of template-guided visual search.


2014 ◽  
Author(s):  
Hayward J. Godwin ◽  
Tamaryn Menneer ◽  
Simon Liversedge ◽  
Kyle Cave ◽  
Nick S. Holliman ◽  
...  

2010 ◽  
Vol 16 (2) ◽  
pp. 133-144 ◽  
Author(s):  
Tamaryn Menneer ◽  
Nick Donnelly ◽  
Hayward J. Godwin ◽  
Kyle R. Cave

2019 ◽  
Vol 82 (3) ◽  
pp. 966-984 ◽  
Author(s):  
Doug J. K. Barrett ◽  
Oliver Zobay

Abstract Simultaneous search for one of two targets is slower and less accurate than search for a single target. Within the Signal Detection Theoretic (SDT) framework, this can be attributed to the division of resources during the comparison of visual input against independently cued targets. The current study used one or two cues to elicit single- and dual-target searches for orientation targets among similar and dissimilar distractors. In Experiment 1, the accuracy of target discrimination in brief displays was compared at set sizes of 1, 2, and 4. Results revealed a reduction in accuracy that scaled with the product of set size and the number of cued targets. In Experiment 2, the accuracy and latency of observers’ saccadic targeting were compared. Fixations on single-target searches were highly selective towards the target. On dual-target searches, the requirement to detect one of two targets produced a significant reduction in target fixations and equivalent rates of fixations to distractors with opposite orientations. For most observers, the dual-target cost was predicted by an SDT model that simulated increases in decision noise and the distribution of capacity-limited resources during the comparison of selected input against independently cued targets. For others, search accuracy was consistent with a single-item limit on perceptual decisions and saccadic targeting during search. These findings support a flexible account of the dual-target cost based on different strategies to resolve competition between independently cued targets.
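The kind of SDT account described in this abstract can be illustrated with a minimal max-rule simulation. This is not the authors' model, only a generic sketch under common SDT assumptions: each display item is compared against every cued template, target-present displays contain exactly one matching comparison with sensitivity d', and dividing resources between two templates scales d' by 1/√2 (a sample-size assumption). Accuracy then falls as the number of noise comparisons, i.e. the product of set size and the number of cued targets, grows, mirroring the pattern reported in Experiment 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def sim_accuracy(d_prime, set_size, n_templates, n_trials=40000):
    """Max-rule SDT simulation of detection in a brief display.

    Every item is compared against each cued template, giving
    set_size * n_templates familiarity samples per trial. On
    target-present trials exactly one comparison matches the target
    (mean d'); all other comparisons are unit-variance noise.
    """
    n_samples = set_size * n_templates
    present = rng.normal(0.0, 1.0, (n_samples, n_trials))
    present[0] += d_prime                  # the one matching comparison
    absent = rng.normal(0.0, 1.0, (n_samples, n_trials))
    criterion = d_prime / 2.0              # unbiased single-sample criterion
    hits = (present.max(axis=0) > criterion).mean()
    correct_rejections = (absent.max(axis=0) <= criterion).mean()
    return 0.5 * (hits + correct_rejections)

# Dual-target search: resources split between two templates, so each
# comparison runs at d'/sqrt(2) and the number of noise comparisons doubles.
d = 2.5  # illustrative sensitivity, not an estimate from the study
for set_size in (1, 2, 4):
    single = sim_accuracy(d, set_size, n_templates=1)
    dual = sim_accuracy(d / np.sqrt(2), set_size, n_templates=2)
    print(f"set size {set_size}: single {single:.3f}, dual {dual:.3f}")
```

Running the loop shows accuracy declining with set size in both conditions, with the dual-target condition consistently below the single-target condition, as the combined effect of reduced per-comparison d' and doubled noise comparisons.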


2017 ◽  
Vol 43 (8) ◽  
pp. 1504-1519 ◽  
Author(s):  
Natalie Mestry ◽  
Tamaryn Menneer ◽  
Kyle R. Cave ◽  
Hayward J. Godwin ◽  
Nick Donnelly

2015 ◽  
Vol 15 (12) ◽  
pp. 58 ◽  
Author(s):  
Natalie Mestry ◽  
Tamaryn Menneer ◽  
Hayward Godwin ◽  
Kyle Cave ◽  
Nick Donnelly

2019 ◽  
Vol 10 ◽  
Author(s):  
Elena S. Gorbunova ◽  
Kirill S. Kozlov ◽  
Sofia Tkhan Tin Le ◽  
Ivan M. Makarov

Author(s):  
Angela A. Manginelli ◽  
Franziska Geringswald ◽  
Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.

