No exploitation of temporal sequence context during visual search

2021 · Vol 8 (3)
Author(s):
Floortje G. Bouwkamp,
Floris P. de Lange,
Eelke Spaak

The human visual system can rapidly extract regularities from our visual environment, generating predictive context. It has been shown that spatial predictive context can be used during visual search. We set out to see whether observers can additionally exploit temporal predictive context based on sequence order, using an extended version of a contextual cueing paradigm. Though we replicated the contextual cueing effect, repeating search scenes in a structured order versus a random order yielded no additional behavioural benefit. This was also true when we looked specifically at participants who revealed a sensitivity to spatial predictive context. We argue that spatial predictive context during visual search is more readily learned and subsequently exploited than temporal predictive context, potentially rendering the latter redundant. In conclusion, unlike spatial context, temporal context is not automatically extracted and used during visual search.
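The sequence-order manipulation described above can be sketched as follows. This is an illustrative reconstruction only, not the authors' experiment code: the scene labels, block structure, and function names are assumptions. The key contrast is that structured blocks repeat the scenes in one fixed order, while random blocks reshuffle them on every block.

```python
import random

def make_block(scenes, structured, rng):
    """Return one block's presentation order of the repeated search scenes."""
    order = list(scenes)
    if not structured:
        rng.shuffle(order)  # random condition: fresh permutation each block
    return order  # structured condition: identical fixed sequence each block

rng = random.Random(0)
scenes = ["scene_%d" % i for i in range(8)]
structured_blocks = [make_block(scenes, True, rng) for _ in range(3)]
random_blocks = [make_block(scenes, False, rng) for _ in range(3)]
```

Under this scheme both conditions expose observers to the same repeated scenes equally often; only the predictability of the sequence order differs.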

2020
Author(s):
Floortje G. Bouwkamp,
Floris P. de Lange,
Eelke Spaak

Abstract: The human visual system can rapidly extract regularities from our visual environment, generating predictive context. It has been shown that spatial predictive context can be used during visual search. We set out to see whether observers can additionally exploit temporal predictive context, using an extended version of a contextual cueing paradigm. Though we replicated the contextual cueing effect, repeating search scenes in a structured order versus a random order yielded no additional behavioural benefit. This was true both for participants who were sensitive to spatial predictive context, and for those who were not. We argue that spatial predictive context during visual search is more readily learned and subsequently exploited than temporal predictive context, potentially rendering the latter redundant. In conclusion, unlike spatial context, temporal context is not automatically extracted and used during visual search.


2013 · Vol 461 · pp. 792-800
Author(s):
Bo Zhao,
Hong Wei Zhao,
Ping Ping Liu,
Gui He Qin

We describe a novel mobile visual search system based on the saliency mechanism and sparse coding principle of the human visual system (HVS). In the feature extraction step, we first divide an image into different regions using the saliency extraction algorithm. Then scale-invariant feature transform (SIFT) descriptors are extracted in all regions, while regional identities are preserved based on their various saliency levels. According to the sparse coding principle in the HVS, we adopt a local neighbor-preserving hash function to establish the binary sparse expression of the SIFT features. In the searching step, the nearest neighbors matched to the hashing codes are processed according to different saliency levels. Matching scores of images in the database are derived from the matching of hashing codes. Subsequently, the matching scores of all levels are weighted by degrees of saliency to obtain the initial set of results. To further ensure matching accuracy, we propose an optimized retrieval scheme based on global texture information. We conduct extensive experiments on an actual mobile platform using the large-scale Corel-1000 dataset. The results show that the proposed method outperforms state-of-the-art algorithms in accuracy rate, with no significant increase in the running time of feature extraction and retrieval.
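The saliency-weighted matching step described above can be sketched in a minimal, stdlib-only form. This is a hedged illustration, not the authors' implementation: the binary codes, saliency levels, and weights below are assumed inputs, and the SIFT extraction and hashing stages are omitted. Each query code is matched to its nearest neighbor (by Hamming distance) within the same saliency level, and the per-level similarity scores are then weighted by degree of saliency.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary codes stored as ints."""
    return bin(a ^ b).count("1")

def match_score(query, candidate, weights, code_bits=16):
    """Saliency-weighted similarity between two images (higher = better match).

    query, candidate: dict mapping saliency level -> list of binary hash codes.
    weights: dict mapping saliency level -> weight (degree of saliency).
    """
    score = 0.0
    for level, q_codes in query.items():
        c_codes = candidate.get(level, [])
        if not c_codes:
            continue
        sims = []
        for q in q_codes:
            d = min(hamming(q, c) for c in c_codes)  # nearest neighbor in level
            sims.append(1.0 - d / code_bits)         # normalise to [0, 1]
        score += weights.get(level, 1.0) * sum(sims) / len(sims)
    return score

# Illustrative 16-bit codes for two saliency levels (2 = most salient).
query = {2: [0b1010101010101010], 1: [0b1111000011110000]}
```

In a full system, the database images would be ranked by this score and the top results re-checked with the global texture scheme the abstract mentions.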


2019
Author(s):
Eelke Spaak,
Floris P. de Lange

Abstract: Observers rapidly and seemingly automatically learn to predict where to expect relevant items when those items are repeatedly presented in the same spatial context. This form of statistical learning in visual search has been studied extensively using a paradigm known as contextual cueing. The neural mechanisms underlying the learning and exploiting of such regularities remain unclear. We sought to elucidate these by examining behaviour and recording neural activity using magneto-encephalography (MEG) while observers were implicitly acquiring and exploiting statistical regularities. Computational modelling of behavioural data suggested that after repeated exposures to a spatial context, participants’ behaviour was marked by an abrupt switch to an exploitation strategy of the learnt regularities. MEG recordings showed that the initial learning phase was associated with larger hippocampal theta band activity for repeated scenes, while the subsequent exploitation phase showed larger prefrontal theta band activity for these repeated scenes. Strikingly, the behavioural benefit of repeated exposures to certain scenes was inversely related to explicit awareness of such repeats, demonstrating the implicit nature of the expectations acquired. This elucidates how theta activity in the hippocampus and prefrontal cortex underpins the implicit learning and exploitation of spatial statistical regularities to optimize visual search behaviour.


Author(s):
Tao He,
David Richter,
Zhiguo Wang,
Floris P. de Lange

Abstract: Both spatial and temporal context play an important role in visual perception and behavior. Humans can extract statistical regularities from both forms of context to help process the present and to construct expectations about the future. Numerous studies have found reduced neural responses to expected stimuli compared to unexpected stimuli, for both spatial and temporal regularities. However, it is largely unclear whether and how these forms of context interact. In the current fMRI study, thirty-three human volunteers were exposed to object stimuli that could be expected or surprising in terms of their spatial and temporal context. We found a reliable independent contribution of both spatial and temporal context in modulating the neural response. Specifically, neural responses to stimuli in expected compared to unexpected contexts were suppressed throughout the ventral visual stream. Interestingly, the modulation by spatial context was stronger in magnitude and more reliable than the modulation by temporal context. These results suggest that while both spatial and temporal context serve as priors that can modulate sensory processing in a similar fashion, predictions of spatial context may be a more powerful modulator in the visual system.

Significance Statement: Both temporal and spatial context can affect visual perception; however, it is largely unclear if and how these different forms of context interact in modulating sensory processing. When manipulating both temporal and spatial context expectations, we found that they jointly affected sensory processing, evident as a suppression of neural responses for expected compared to unexpected stimuli. Interestingly, the modulation by spatial context was stronger than that by temporal context. Together, our results suggest that spatial context may be a stronger modulator of neural responses than temporal context within the visual system. Thereby, the present study provides new evidence on how different types of predictions jointly modulate perceptual processing.


Author(s):
Angela A. Manginelli,
Franziska Geringswald,
Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.


2020 · Vol 2020 (1) · pp. 60-64
Author(s):
Altynay Kadyrova,
Majid Ansari-Asl,
Eva Maria Valero Benito

Colour is one of the most important appearance attributes in a variety of fields, spanning both science and industry. The focus of this work is on the field of cosmetics, and specifically on the performance of the human visual system in selecting the foundation makeup colour that best matches human skin colour. In many cases, colour evaluations tend to be subjective and vary from person to person, producing challenging problems in quantifying colour for objective evaluation and measurement. Although much research has been done on colour quantification in the last few decades, to the best of our knowledge this is the first study to objectively evaluate a consumer's visual system in skin colour matching through a psychophysical experiment under different illuminations, exploiting spectral measurements. In this paper, the experimental setup is discussed and the results from the experiment are presented. The correlation between observers' skin colour evaluations using PANTONE Skin Tone Guide samples and a spectroradiometer is assessed. Moreover, inter- and intra-observer variability are considered and commented on. The results reveal differences between nine ethnic groups, between two genders, and between the measurements under two illuminants (i.e. D65 and F (fluorescent)). The results further show that skin colour assessment was performed better under D65 than under the F illuminant. The human visual system was three times worse than the instrument at colour matching, in terms of the colour difference between skin and PANTONE Skin Tone Guide samples. Observers tended to choose lighter, less reddish, and consequently paler colours as the best match to their skin colour. These results have practical applications: they could be used, for example, to design an application for foundation colour selection based on the correlation between colour measurements and subjective evaluations by the human visual system.
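The "colour difference between skin and PANTONE Skin Tone Guide samples" quoted above is conventionally expressed as a distance in CIELAB space. A minimal sketch of the simplest such measure, the CIE 1976 colour difference (ΔE*ab, the Euclidean distance between two L*a*b* triples), is given below; the abstract does not state which ΔE formula was used, and the Lab values here are illustrative, not measured skin or PANTONE data.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 colour difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

skin = (62.0, 14.0, 18.0)    # hypothetical measured skin colour
swatch = (64.0, 12.0, 17.0)  # hypothetical chosen foundation swatch
diff = delta_e_ab(skin, swatch)  # -> 3.0
```

A ΔE*ab around 1 is roughly a just-noticeable difference for average observers, which is why instrument-versus-observer comparisons like the one reported here are naturally stated in ΔE units.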

