The neural basis of visual object learning

2010 ◽  
Vol 14 (1) ◽  
pp. 22-30 ◽  
Author(s):  
Hans P. Op de Beeck ◽  
Chris I. Baker

2017 ◽  
Vol 117 (1) ◽  
pp. 336-347 ◽  
Author(s):  
Behrad Noudoost ◽  
Neda Nategh ◽  
Kelsey Clark ◽  
Hossein Esteky

One goal of our nervous system is to form predictions about the world around us to facilitate our responses to upcoming events. One basis for such predictions could be the recently encountered visual stimuli, or the recent statistics of the visual environment. We examined the effect of recently experienced stimulus statistics on the visual representation of face stimuli by recording the responses of face-responsive neurons in the final stage of visual object recognition, the inferotemporal (IT) cortex, during blocks in which the probability of seeing a particular face was either 100% or only 12%. During the block with only face images, ∼30% of IT neurons exhibit enhanced anticipatory activity before the evoked visual response. This anticipatory modulation is followed by greater activity, broader view tuning, more distributed processing, and more reliable responses of IT neurons to the face stimuli. These changes in the visual response were sufficient to improve the ability of IT neurons to represent a variable property of the predictable face images (viewing angle), as measured by the performance of a simple linear classifier. These results demonstrate that the recent statistics of the visual environment can facilitate processing of stimulus information in the population neuronal representation. NEW & NOTEWORTHY Neurons in inferotemporal (IT) cortex anticipate the arrival of a predictable stimulus, and visual responses to an expected stimulus are more distributed throughout the population of IT neurons, providing an enhanced representation of second-order stimulus information (in this case, viewing angle). The findings reveal a potential neural basis for the behavioral benefits of contextual expectation.
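The abstract's decoding result can be illustrated with a minimal sketch: a linear classifier reading out viewing angle from simulated IT population responses, where more reliable (less noisy) responses, as in the predictable block, yield better decoding. All numbers here (neuron count, tuning width, noise levels) are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials_per_angle = 50, 40
angles = np.array([0, 45, 90, 135])          # viewing angles (degrees), assumed

# Each simulated neuron has a Gaussian tuning curve over viewing angle.
preferred = rng.uniform(0, 180, n_neurons)

def population_response(angle, noise):
    tuning = np.exp(-((preferred - angle) ** 2) / (2 * 30.0 ** 2))
    return tuning + rng.normal(0, noise, n_neurons)

def make_dataset(noise):
    X = np.vstack([population_response(a, noise)
                   for a in angles for _ in range(n_trials_per_angle)])
    y = np.repeat(np.arange(len(angles)), n_trials_per_angle)
    return X, y

def linear_decoder_accuracy(noise):
    X_train, y_train = make_dataset(noise)
    X_test, y_test = make_dataset(noise)
    onehot = np.eye(len(angles))[y_train]
    # Least-squares linear readout of viewing angle from the population.
    W, *_ = np.linalg.lstsq(X_train, onehot, rcond=None)
    pred = (X_test @ W).argmax(axis=1)
    return (pred == y_test).mean()

# Less response variability (the "predictable block") typically improves decoding.
acc_unpredictable = linear_decoder_accuracy(noise=1.5)
acc_predictable = linear_decoder_accuracy(noise=0.5)
print(f"predictable: {acc_predictable:.2f}, unpredictable: {acc_unpredictable:.2f}")
```

The least-squares readout stands in for whatever linear classifier the authors used; the point of the sketch is only that reduced trial-to-trial variability in a population code improves a linear read-out of a second-order stimulus property.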


2020 ◽  
Author(s):  
Song Zhao ◽  
Chengzhi Feng ◽  
Xinyin Huang ◽  
Yijun Wang ◽  
Wenfeng Feng

Abstract The present study recorded event-related potentials (ERPs) in a visual object-recognition task under the attentional blink paradigm to explore the temporal dynamics of the cross-modal boost on attentional blink and whether this auditory benefit would be modulated by semantic congruency between T2 and the simultaneous sound. Behaviorally, the present study showed that not only a semantically congruent but also a semantically incongruent sound improved T2 discrimination during the attentional blink interval, although the enhancement was larger for the congruent sound. The ERP results revealed that the behavioral improvements induced by both the semantically congruent and incongruent sounds were closely associated with an early cross-modal interaction on the occipital N195 (192–228 ms). In contrast, the lower T2 accuracy for the incongruent than the congruent condition was accompanied by a larger late-occurring centro-parietal N440 (424–448 ms). These findings suggest that the cross-modal boost on attentional blink is hierarchical: the task-irrelevant but simultaneous sound, irrespective of its semantic relevance, first enables T2 to escape the attentional blink by cross-modally strengthening the early stage of visual object-recognition processing, whereas the semantic conflict of the sound begins to interfere with visual awareness only at a later stage, when the representation of the visual object is extracted.


1997 ◽  
Author(s):  
David C. Glahn ◽  
Ruben C. Gur ◽  
J. Daniel Ragland ◽  
David M. Censits ◽  
Raquel E. Gur

All tests described below have a visual representation in Fig. A1, where tests are indicated in the figure caption: Fig. A1. Visualization of the 10 Cognition Battery Tests. 1 = Motor Praxis; 2 = Visual Object Learning Test; 3 = Fractal N-Back; 4 = Abstract Matching; 5 = Line Orientation Test; 6 = Emotion Recognition Test; 7 = Matrix Reasoning Test; 8 = Digit Symbol Substitution Test; 9 = Balloon Analog Risk Task; 10 = Psychomotor Vigilance Test. [Reprinted with permission from Moore TM, et al. Validation of the Cognition Test Battery for Spaceflight in a sample of highly educated adults. Aerosp. Med. Hum. Perform. 2017; 88(10):937–946 (Appendix A, Fig. A1; https://doi.org/10.3357/AMHP.4801sd.2017).]


2007 ◽  
Vol 412 (2) ◽  
pp. 123-128 ◽  
Author(s):  
Julia Reinholz ◽  
Stefan Pollmann

2021 ◽  
Author(s):  
Tijl Grootswagers ◽  
Ivy Zhou ◽  
Amanda K Robinson ◽  
Martin N Hebart ◽  
Thomas A Carlson

The neural basis of object recognition and semantic knowledge has been the focus of a large body of research, but given the high dimensionality of object space, it is challenging to develop an overarching theory of how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. Traditional image databases are based on manually selected object concepts, often with a single image per concept. In contrast, 'big data' stimulus sets typically consist of images that can vary significantly in quality and may be biased in content. To address this issue, recent work developed THINGS: a large stimulus set of 1,854 object concepts and 26,107 associated images. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to all concepts and 22,248 images in the THINGS stimulus set. The THINGS-EEG dataset provides neuroimaging recordings for a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.
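A typical analysis such a dataset supports is time-resolved decoding: classifying stimulus condition from the EEG channel pattern at each time point. The sketch below runs this on simulated epochs; the channel, time, and trial counts, the condition effect, and the onset latency are all assumptions for illustration, not properties of THINGS-EEG itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_times, n_trials = 32, 100, 60   # per condition, assumed sizes

def simulate_epochs(offset):
    """EEG epochs (trials x channels x time) with a condition-specific
    signal emerging after the (assumed) stimulus onset at time index 25."""
    epochs = rng.normal(0, 1, (n_trials, n_channels, n_times))
    epochs[:, :, 25:] += offset                # condition effect after onset
    return epochs

cond_a = simulate_epochs(0.0)
cond_b = simulate_epochs(0.4)

# Nearest-centroid decoding at each time point, split-half cross-validated.
half = n_trials // 2
accuracy = np.empty(n_times)
for t in range(n_times):
    mu_a = cond_a[:half, :, t].mean(axis=0)    # training centroids
    mu_b = cond_b[:half, :, t].mean(axis=0)
    test = np.vstack([cond_a[half:, :, t], cond_b[half:, :, t]])
    labels = np.repeat([0, 1], n_trials - half)
    d_a = np.linalg.norm(test - mu_a, axis=1)
    d_b = np.linalg.norm(test - mu_b, axis=1)
    accuracy[t] = ((d_b < d_a).astype(int) == labels).mean()

print(f"pre-onset mean accuracy:  {accuracy[:25].mean():.2f}")
print(f"post-onset mean accuracy: {accuracy[25:].mean():.2f}")
```

Decoding hovers at chance before the simulated onset and rises afterwards; with real THINGS-EEG epochs the same loop would run over actual trials, and the onset of above-chance accuracy traces when object information becomes available in the neural signal.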


NeuroImage ◽  
2011 ◽  
Vol 55 (1) ◽  
pp. 304-311 ◽  
Author(s):  
Carmen Schmid ◽  
Christian Büchel ◽  
Michael Rose

2001 ◽  
Vol 13 (6) ◽  
pp. 793-799 ◽  
Author(s):  
Moshe Bar

The nature of visual object representation in the brain is the subject of a prolonged debate. One set of theories asserts that objects are represented by their structural description and the representation is “object-centered.” Theories from the other side of the debate suggest that humans store multiple “snapshots” for each object, depicting it as seen under various conditions, and the representation is therefore “viewer-centered.” The principal tool that has been used to support and criticize each of these hypotheses is subjects' performance in recognizing objects under novel viewing conditions. For example, if subjects take more time in recognizing an object from an unfamiliar viewpoint, it is common to claim that the representation of that object is viewpoint-dependent and therefore viewer-centered. It is suggested here, however, that performance cost in recognition of objects under novel conditions may be misleading when studying the nature of object representation. Specifically, it is argued that viewpoint-dependent performance is not necessarily an indication of viewer-centered representation. An account of the neural basis of perceptual priming is first provided. In light of this account, it is conceivable that viewpoint dependency reflects the utilization of neural paths with different levels of sensitivity en route to the same representation, rather than the existence of viewpoint-specific representations. New experimental paradigms are required to study the validity of the viewer-centered approach.

