Current and future goals are represented in opposite patterns in object-selective cortex


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Anouk Mariette van Loon ◽  
Katya Olmos-Solis ◽  
Johannes Jacobus Fahrenfort ◽  
Christian NL Olivers

Adaptive behavior requires the separation of current from future goals in working memory. We used fMRI of object-selective cortex to determine the representational (dis)similarities of memory representations serving current and prospective perceptual tasks. Participants remembered an object drawn from three possible categories as the target for one of two consecutive visual search tasks. A cue indicated whether the target object should be looked for first (currently relevant), second (prospectively relevant), or if it could be forgotten (irrelevant). Prior to the first search, representations of current, prospective and irrelevant objects were similar, with strongest decoding for current representations compared to prospective (Experiment 1) and irrelevant (Experiment 2). Remarkably, during the first search, prospective representations could also be decoded, but revealed anti-correlated voxel patterns compared to currently relevant representations of the same category. We propose that the brain separates current from prospective memories within the same neuronal ensembles through opposite representational patterns.
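The "anti-correlated voxel patterns" finding rests on correlation-based pattern decoding: a trial's voxel pattern is compared against a category template, and an inverted (negative) correlation signals the opposite representational format. Below is a toy sketch of that logic with synthetic data; the voxel count, noise level, and template construction are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical category "template" over 50 voxels; in a real analysis
# this would be estimated from independent training data.
template = rng.standard_normal(50)

def pattern_correlation(trial_pattern, template):
    """Pearson correlation between a single-trial voxel pattern and a template."""
    return float(np.corrcoef(trial_pattern, template)[0, 1])

# Simulated trials: a "current" target evokes the template plus noise,
# while a "prospective" target evokes the *inverted* template plus noise,
# mimicking the anti-correlated patterns described in the abstract.
current_trial = template + 0.5 * rng.standard_normal(50)
prospective_trial = -template + 0.5 * rng.standard_normal(50)

r_current = pattern_correlation(current_trial, template)
r_prospective = pattern_correlation(prospective_trial, template)

print(r_current > 0, r_prospective < 0)  # opposite-signed correlations
```

Under this toy model, the same neuronal ensemble can carry both memories: the sign of the correlation, not the set of active voxels, distinguishes current from prospective goals.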


2019 ◽  
Vol 19 (10) ◽  
pp. 311b ◽  
Author(s):  
Zachary A Lively ◽  
Gavin JP Ng ◽  
Simona Buetti ◽  
Alejandro Lleras

2009 ◽  
Vol 62 (7) ◽  
pp. 1430-1454 ◽  
Author(s):  
Bradley J. Poole ◽  
Michael J. Kane

Variation in working-memory capacity (WMC) predicts individual differences in only some attention-control capabilities. Whereas higher WMC subjects outperform lower WMC subjects in tasks requiring the restraint of prepotent but inappropriate responses, and the constraint of attentional focus to target stimuli against distractors, they do not differ in prototypical visual-search tasks, even those that yield steep search slopes and engender top-down control. The present three experiments tested whether WMC, as measured by complex memory span tasks, would predict search latencies when the 1–8 target locations to be searched appeared alone, versus appearing among distractor locations to be ignored, with the latter requiring selective attentional focus. Subjects viewed target-location cues and then fixated on those locations over either long (1,500–1,550 ms) or short (300 ms) delays. Higher WMC subjects identified targets faster than did lower WMC subjects only in the presence of distractors and only over long fixation delays. WMC thus appears to affect subjects’ ability to maintain a constrained attentional focus over time.


2009 ◽  
Vol 21 (6) ◽  
pp. 1081-1091 ◽  
Author(s):  
Judith C. Peters ◽  
Rainer Goebel ◽  
Pieter R. Roelfsema

If we search for an item, a representation of this item in our working memory guides attention to matching items in the visual scene. We can hold multiple items in working memory. Do all these items guide attention in parallel? We asked participants to detect a target object in a stream of objects while they maintained a second item in memory for a subsequent task. On some trials, we presented this memory item as a distractor in the stream. Subjects did not confuse these memory items with the search target, as the false alarm rate on trials where the memory item was presented in the stream was comparable to that on trials with only regular distractors. However, a comparable performance does not exclude that the memory items are processed differently from normal distractors. We therefore recorded event-related potentials (ERPs) evoked by search targets, memory items, and regular distractors. As expected, ERPs evoked by search targets differed from those evoked by distractors. Search targets elicited an occipital selection negativity and a frontal selection positivity indexing selective attention, whereas the P3b component, which reflects the matching of sensory events to memory representations, was enhanced for targets compared to distractors. Remarkably, the ERPs evoked by memory items were indistinguishable from the ERPs evoked by normal distractors. This implies that the search target has a special status in working memory that is not shared by the other items. These other, “accessory” items do not guide attention and are excluded from the matching process.


2017 ◽  
Vol 37 (6) ◽  
pp. 1591-1603 ◽  
Author(s):  
Ingmar E.J. de Vries ◽  
Joram van Driel ◽  
Christian N.L. Olivers

2020 ◽  
Vol 20 (11) ◽  
pp. 1474 ◽  
Author(s):  
Milad Khaki ◽  
Megan Roussy ◽  
Nasim Mortazavi ◽  
Rogelio Luna ◽  
Adam Sachs ◽  
...  

2020 ◽  
Author(s):  
Amir H. Meghdadi ◽  
Barry Giesbrecht ◽  
Miguel P Eckstein

The use of scene context is a powerful way by which biological organisms guide and facilitate visual search. Although many studies have shown enhancements of target-related electroencephalographic (EEG) activity with synthetic cues, fewer studies have demonstrated such enhancements during search with scene context and objects in real-world scenes. Here, observers covertly searched for a target in images of real scenes while we used EEG to measure the steady-state visual evoked response to objects flickering at different frequencies. The target appeared in its typical contextual location or out of context while we controlled for low-level properties of the image, including target saliency against the background and retinal eccentricity. A pattern classifier using EEG activity at the relevant modulated frequencies showed that target detection accuracy increased when the target was in a contextually appropriate location. A control condition, in which observers searched the same images for a different target orthogonal to the contextual manipulation, resulted in no effects of scene context on classifier performance, confirming that image properties cannot explain the contextual modulations of neural activity. Pattern classifier decisions for individual images were also related to the aggregated observer behavioral decisions for the same images. Together, these findings demonstrate that target-related neural responses are modulated by scene context during visual search with real-world scenes and can be related to behavioral search decisions.
Significance Statement: Contextual relationships among objects are fundamental to how humans find objects in real-world scenes. Although a large literature addresses the brain mechanisms engaged when a target appears at a location indicated by a synthetic cue such as an arrow or box, less is known about how scene context modulates target-related neural activity. Here we show that neural activity predictive of the presence of a searched object in cluttered real scenes increases when the target object appears at a contextual location and diminishes when it appears at a place that is out of context. The results increase our understanding of how the brain processes real scenes and how context modulates object processing.
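The frequency-tagging approach described above works because each flickering object drives a steady-state response at its own tag frequency, so power at that frequency indexes processing of that object. Here is a minimal sketch of the core measurement on a simulated EEG epoch; the sampling rate, epoch length, tag frequency, and signal-to-noise level are illustrative assumptions, not parameters from this study.

```python
import numpy as np

fs = 250.0          # sampling rate in Hz (assumed)
dur = 4.0           # epoch length in seconds (assumed)
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(1)

tag_freq = 12.0     # hypothetical flicker frequency of the tagged object

# Simulated EEG epoch: a steady-state response at the tagged frequency
# buried in broadband noise.
eeg = np.sin(2 * np.pi * tag_freq * t) + rng.standard_normal(t.size)

def power_at(signal, freq, fs):
    """Power of `signal` at `freq`, read from the FFT power spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    return float(spectrum[np.argmin(np.abs(freqs - freq))])

p_tag = power_at(eeg, tag_freq, fs)     # power at the tagged frequency
p_ctrl = power_at(eeg, 17.0, fs)        # power at an untagged control frequency

print(p_tag > p_ctrl)  # the tagged frequency carries more power
```

A classifier like the one in the abstract would take such per-frequency power estimates as features; the contextual effect would then appear as higher tag-frequency power, and better classification, when the target sits in a contextually appropriate location.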


2017 ◽  
Vol 114 (43) ◽  
pp. E9115-E9124 ◽  
Author(s):  
Stephanie Ding ◽  
Christopher J. Cueva ◽  
Misha Tsodyks ◽  
Ning Qian

When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive without quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding.


Author(s):  
David Alonso ◽  
Mark Lavelle ◽  
Trafton Drew

Prior research has shown that interruptions lead to a variety of performance costs. However, these costs are heterogeneous and poorly understood. Under some circumstances, interruptions lead to large decreases in accuracy on the primary task, whereas in others task duration increases but task accuracy is unaffected. At present, the underlying cause of these costs is unclear. The Memory for Goals model suggests that interruptions interfere with the ability to represent the current goal of the primary task. Here, we test the idea that working memory (WM) may play a critical role in representing the current goal and thus may underlie the observed costs associated with interruption. In two experiments, we used laboratory-based visual search tasks, which differed in their WM demands, to assess how this difference influenced the observed interruption costs. Interruptions led to more severe performance costs when the target of the search changed on each trial. When the search target was consistent across trials, the cost of interruption was greatly reduced. This suggests that the WM demands associated with the primary task play an important role in determining the performance costs of interruption. Our findings suggest that it is important for research to consider the cognitive processes a task engages in order to predict the nature of the adverse effects of interruption in applied settings such as radiology.

