Contextual cueing effect in three-dimensional layouts

2010 ◽  
Vol 2 (7) ◽  
pp. 520-520
Author(s):  
J.-i. Kawahara
2020 ◽  
Vol 11 ◽  
Author(s):  
Xiaowei Xie ◽  
Siyi Chen ◽  
Xuelian Zang

In contextual cueing, a previously encountered context facilitates detection of a target embedded in it, relative to when the target appears in a novel context. In this study, we investigated whether contextual cueing can develop early, when the search display is presented only briefly. In four experiments, participants searched for a target T in an array of distractor Ls. The results showed that even with a rather short presentation of the search display, participants were able to learn the spatial context and respond faster overall, with the learning effect lasting for a long period. Specifically, the contextual cueing effect was observed both with and without a mask after a 300-ms presentation of the search display. Such context learning under rapid presentation did not operate when only the local context was repeated, suggesting that a global context is required to guide spatial attention when the viewing time of the search display is limited. Overall, these findings indicate that contextual cueing might arise at an "early," target-selection stage and that the global context is necessary for context learning under rapid presentation to function.
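The paradigm described above — search for a T among Ls, with a fixed set of spatial layouts repeated across blocks and mixed with freshly generated ones — can be sketched in a few lines. This is a minimal illustration only; the grid size, item counts, and function names are assumptions, not the authors' actual stimulus parameters.

```python
import random

GRID = 8       # assumed 8x8 grid of possible item locations (illustrative)
N_ITEMS = 12   # one target T plus 11 distractor Ls (illustrative)

def make_display(rng):
    """Sample one search display: a target location plus distractor locations."""
    cells = rng.sample([(x, y) for x in range(GRID) for y in range(GRID)], N_ITEMS)
    return {"target": cells[0], "distractors": frozenset(cells[1:])}

def make_block(rng, old_displays, n_new):
    """One block mixes repeated ('old') displays with freshly sampled ('new') ones."""
    trials = [("old", d) for d in old_displays]
    trials += [("new", make_display(rng)) for _ in range(n_new)]
    rng.shuffle(trials)   # old and new trials are randomly interleaved
    return trials

rng = random.Random(0)
old = [make_display(rng) for _ in range(8)]   # layouts repeated in every block
block = make_block(rng, old, n_new=8)
```

The cueing effect is then the response-time advantage for "old" over "new" trials, which the studies above measure under brief (e.g., 300-ms) display presentation.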


2014 ◽  
Vol 14 (10) ◽  
pp. 1075-1075
Author(s):  
Y. Higuchi ◽  
J. Saiki

2021 ◽  
Vol 12 ◽  
Author(s):  
Xuelian Zang ◽  
Leonardo Assumpção ◽  
Jiao Wu ◽  
Xiaowei Xie ◽  
Artyom Zinchenko

In the contextual cueing task, visual search is faster for targets embedded in invariant displays than for targets found in variant displays. However, it has been repeatedly shown that participants do not learn repeated contexts when these are irrelevant to the task. One potential explanation lies in the idea of associative blocking, where salient cues (task-relevant old items) block the learning of invariant associations in the task-irrelevant subset of items. An alternative explanation is that associative blocking hinders the allocation of attention to task-irrelevant subsets, but not the learning per se. The current work examined these two explanations. In two experiments, participants performed a visual search task under a rapid presentation condition (300 ms) in Experiment 1, or under a longer presentation condition (2,500 ms) in Experiment 2. In both experiments, the search items within both old and new displays were presented in two colors, which defined the task-relevant and task-irrelevant items within each display. Participants were asked to search for the target in the relevant subset in the learning phase. In the transfer phase, the instructions were reversed and task-irrelevant items became task-relevant (and vice versa). In line with previous studies, search of the task-irrelevant subsets produced no cueing effect post-transfer in the longer presentation condition; however, a reliable cueing effect was generated by task-irrelevant subsets learned under rapid presentation. These results demonstrate that under rapid display presentation, global attentional selection leads to global context learning. Under longer display presentation, by contrast, global attention is blocked, leading to exclusive learning of the invariant relevant items in the learning session.
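The two-color design above — a relevant and an irrelevant subset within each display, with the roles swapped at transfer — can be sketched as follows. The colors, grid size, and item counts here are hypothetical placeholders for illustration, not the authors' stimulus parameters.

```python
import random

def make_colored_display(rng, grid=10, n_per_color=6):
    """Sample one display whose items fall into two color-defined subsets."""
    cells = rng.sample([(x, y) for x in range(grid) for y in range(grid)],
                       2 * n_per_color)
    # Hypothetical colors; the split defines the two subsets of search items.
    return {"red": frozenset(cells[:n_per_color]),
            "green": frozenset(cells[n_per_color:])}

def relevant_subset(display, phase, learned_color="red"):
    """Learning phase: search among items of `learned_color`.
    Transfer phase: instructions reverse, so the other color becomes relevant."""
    if phase == "learning":
        return display[learned_color]
    return display["green" if learned_color == "red" else "red"]
```

The key comparison is then whether displays whose formerly irrelevant (here, "green") subset was invariant during learning still yield a cueing effect once that subset becomes relevant at transfer.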


2013 ◽  
Vol 21 (7) ◽  
pp. 1173-1185
Author(s):  
Feifei ZHAO ◽  
Yanju REN

2012 ◽  
Vol 12 (6) ◽  
pp. 11-11 ◽  
Author(s):  
G. Zhao ◽  
Q. Liu ◽  
J. Jiao ◽  
P. Zhou ◽  
H. Li ◽  
...  

2020 ◽  
Vol 10 (7) ◽  
pp. 446
Author(s):  
Nico Marek ◽  
Stefan Pollmann

In visual search, participants can incidentally learn spatial target–distractor configurations, leading to shorter search times for repeated compared to novel configurations. Usually, this is tested within the limited visual field provided by a two-dimensional computer monitor. Here, we present for the first time an implementation of a classic contextual cueing task (search for a T-shape among L-shapes) in a three-dimensional virtual environment. This enabled us to test whether the typical finding of incidental learning of repeated search configurations, manifested in shorter search times, would hold in a three-dimensional virtual reality (VR) environment. One specific question that could be addressed by combining virtual reality and contextual cueing was whether cueing would hold for targets outside the initial field of view (FOV), which require head movements to be found. In keeping with two-dimensional search studies, search times were reduced after the first epoch and remained stable for the remainder of the experiment. Importantly, comparable search time reductions were observed for targets both within and outside the initial FOV. The results show that a repeated distractors-only configuration in the initial FOV can guide search for target locations that require a head movement to be seen.


2004 ◽  
Vol 4 (8) ◽  
pp. 399-399
Author(s):  
F. Ono ◽  
Y. Jiang ◽  
J.-i. Kawahara

Perception ◽  
10.1068/p5135 ◽  
2003 ◽  
Vol 32 (11) ◽  
pp. 1351-1358 ◽  
Author(s):  
Tomohiro Nabeta ◽  
Fuminori Ono ◽  
Jun-Ichiro Kawahara

Under incidental learning conditions, spatial layouts can be acquired implicitly and facilitate visual search (the contextual-cueing effect). We examined whether the contextual-cueing effect is specific to the visual modality or transfers to the haptic modality. Participants performed 320 (Experiment 1) or 192 (Experiment 2) visual search trials based on a typical contextual-cueing paradigm, followed by haptic search trials in which half of the trials had layouts used in the previous visual search trials. The visual contextual-cueing effect was obtained in the learning phase. More importantly, the effect transferred from visual to haptic search: haptic search was facilitated more when the spatial layout was the same as in the previous visual search trials than when it differed. This suggests that a common spatial memory guides the allocation of focused attention in both the visual and haptic modalities.


2017 ◽  
Vol 8 ◽  
Author(s):  
Guang Zhao ◽  
Qian Zhuang ◽  
Jie Ma ◽  
Shen Tu ◽  
Qiang Liu ◽  
...  
