Stimulus context alters neural representations of faces in inferotemporal cortex

2017 ◽  
Vol 117 (1) ◽  
pp. 336-347 ◽  
Author(s):  
Behrad Noudoost ◽  
Neda Nategh ◽  
Kelsey Clark ◽  
Hossein Esteky

One goal of our nervous system is to form predictions about the world around us to facilitate our responses to upcoming events. One basis for such predictions could be the recently encountered visual stimuli, or the recent statistics of the visual environment. We examined the effect of recently experienced stimulus statistics on the visual representation of face stimuli by recording the responses of face-responsive neurons in the final stage of visual object recognition, the inferotemporal (IT) cortex, during blocks in which the probability of seeing a particular face was either 100% or only 12%. During the block with only face images, ∼30% of IT neurons exhibited enhanced anticipatory activity before the evoked visual response. This anticipatory modulation was followed by greater activity, broader view tuning, more distributed processing, and more reliable responses of IT neurons to the face stimuli. These changes in the visual response were sufficient to improve the ability of IT neurons to represent a variable property of the predictable face images (viewing angle), as measured by the performance of a simple linear classifier. These results demonstrate that the recent statistics of the visual environment can facilitate processing of stimulus information in the population neuronal representation.

NEW & NOTEWORTHY Neurons in inferotemporal (IT) cortex anticipate the arrival of a predictable stimulus, and visual responses to an expected stimulus are more distributed throughout the population of IT neurons, providing an enhanced representation of second-order stimulus information (in this case, viewing angle). The findings reveal a potential neural basis for the behavioral benefits of contextual expectation.
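The abstract describes the decoding analysis only as "a simple linear classifier" applied to population responses. A minimal sketch of that idea, using simulated view-tuned neurons and a nearest-centroid (linear) decoder: the simulation, tuning parameters, and noise levels are all illustrative assumptions, not the authors' actual analysis. It shows how more reliable (lower-noise) population responses yield better viewing-angle classification.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_population(n_neurons, n_trials, views, gain, noise_sd, prefs):
    """Simulate trial-by-trial responses of view-tuned neurons.
    Each neuron has a Gaussian tuning curve over viewing angle (deg)."""
    X, y = [], []
    for label, view in enumerate(views):
        mean_rate = gain * np.exp(-((prefs - view) ** 2) / (2 * 40.0 ** 2))
        for _ in range(n_trials):
            X.append(mean_rate + rng.normal(0.0, noise_sd, n_neurons))
            y.append(label)
    return np.array(X), np.array(y)

def nearest_centroid_accuracy(X, y, n_classes):
    """Hold out half the trials and decode viewing angle with a
    nearest-centroid classifier (a simple linear decision rule)."""
    idx = rng.permutation(len(y))
    train, test = idx[: len(y) // 2], idx[len(y) // 2 :]
    centroids = np.stack(
        [X[train][y[train] == c].mean(axis=0) for c in range(n_classes)]
    )
    # distance from each held-out trial to each class centroid
    dists = np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2)
    return float(np.mean(dists.argmin(axis=1) == y[test]))

views = np.array([0.0, 45.0, 90.0, 135.0])  # hypothetical viewing angles
prefs = rng.uniform(0.0, 180.0, 60)         # preferred views, one per neuron

# "Expected-stimulus-like" (reliable) vs "unexpected-like" (noisy) responses
X_rel, y_rel = simulate_population(60, 40, views, 10.0, 2.0, prefs)
X_noisy, y_noisy = simulate_population(60, 40, views, 10.0, 10.0, prefs)

acc_rel = nearest_centroid_accuracy(X_rel, y_rel, len(views))
acc_noisy = nearest_centroid_accuracy(X_noisy, y_noisy, len(views))
```

With these settings, the reliable-response population supports markedly better view decoding than the noisy one, mirroring the abstract's point that more reliable responses improve the representation of viewing angle.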

2020 ◽  
Author(s):  
Song Zhao ◽  
Chengzhi Feng ◽  
Xinyin Huang ◽  
Yijun Wang ◽  
Wenfeng Feng

Abstract

The present study recorded event-related potentials (ERPs) in a visual object-recognition task under the attentional blink paradigm to explore the temporal dynamics of the cross-modal boost on attentional blink and whether this auditory benefit would be modulated by semantic congruency between T2 and the simultaneous sound. Behaviorally, the present study showed that not only a semantically congruent but also a semantically incongruent sound improved T2 discrimination during the attentional blink interval, although the enhancement was larger for the congruent sound. The ERP results revealed that the behavioral improvements induced by both the semantically congruent and incongruent sounds were closely associated with an early cross-modal interaction on the occipital N195 (192–228 ms). In contrast, the lower T2 accuracy for the incongruent than the congruent condition was accompanied by a larger late-occurring centro-parietal N440 (424–448 ms). These findings suggest that the cross-modal boost on attentional blink is hierarchical: the task-irrelevant but simultaneous sound, irrespective of its semantic relevance, first enables T2 to escape the attentional blink by cross-modally strengthening the early stage of visual object-recognition processing, whereas the semantic conflict of the sound begins to interfere with visual awareness only at a later stage, when the representation of the visual object is extracted.


2001 ◽  
Vol 13 (6) ◽  
pp. 793-799 ◽  
Author(s):  
Moshe Bar

The nature of visual object representation in the brain is the subject of a prolonged debate. One set of theories asserts that objects are represented by their structural description and the representation is "object-centered." Theories from the other side of the debate suggest that humans store multiple "snapshots" for each object, depicting it as seen under various conditions, and the representation is therefore "viewer-centered." The principal tool that has been used to support and criticize each of these hypotheses is subjects' performance in recognizing objects under novel viewing conditions. For example, if subjects take more time in recognizing an object from an unfamiliar viewpoint, it is common to claim that the representation of that object is viewpoint-dependent and therefore viewer-centered. It is suggested here, however, that performance cost in recognition of objects under novel conditions may be misleading when studying the nature of object representation. Specifically, it is argued that viewpoint-dependent performance is not necessarily an indication of viewer-centered representation. An account of the neural basis of perceptual priming is first provided. In light of this account, it is conceivable that viewpoint dependency reflects the utilization of neural paths with different levels of sensitivity en route to the same representation, rather than the existence of viewpoint-specific representations. New experimental paradigms are required to study the validity of the viewer-centered approach.


2007 ◽  
Author(s):  
K. Suzanne Scherf ◽  
Marlene Behrmann ◽  
Kate Humphreys ◽  
Beatriz Luna

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Stefano Rozzi ◽  
Marco Bimbi ◽  
Alfonso Gravante ◽  
Luciano Simone ◽  
Leonardo Fogassi

Abstract

The ventral part of the lateral prefrontal cortex (VLPF) of the monkey receives strong visual input, mainly from inferotemporal cortex. It has been shown that VLPF neurons can show visual responses during paradigms requiring the association of arbitrary visual cues with behavioral reactions. Further studies showed that there are also VLPF neurons responding to the presentation of specific visual stimuli, such as objects and faces. However, it is largely unknown whether VLPF neurons respond to and differentiate between stimuli belonging to different categories, even in the absence of a specific requirement to actively categorize or to exploit these stimuli for choosing a given behavior. The first aim of the present study is to evaluate and map the responses of neurons in a large sector of VLPF to a wide set of visual stimuli when monkeys simply observe them. Recent studies showed that visual responses to objects are also present in VLPF neurons coding action execution, when the objects are the target of the action. Thus, the second aim of the present study is to compare the visual responses of VLPF neurons when the same objects are simply observed or when they become the target of a grasping action. Our results indicate that: (1) part of the visually responsive VLPF neurons respond specifically to one stimulus or to a small set of stimuli, but there is no indication of a "passive" categorical coding; (2) VLPF neuronal visual responses to objects are often modulated by the task conditions in which the object is observed, with the strongest response when the object is the target of an action. These data indicate that VLPF performs an early passive description of several types of visual stimuli, which can then be used for organizing and planning behavior. This could explain the modulation of visual responses both in associative learning and in natural behavior.

