Impact of stimulus size and orientation on individual face decoding in monkey face-selective cortex

2018
Author(s):
Jessica Taubert
Goedele Van Belle
Rufin Vogels
Bruno Rossion

Face-selective neurons in the monkey temporal cortex discharge at different rates in response to pictures of different faces. Here we tested whether the population response of neurons in the face-selective area ML (located in the middle Superior Temporal Sulcus) tolerates two affine transformations: picture-plane inversion, known to have a deleterious impact on the average response of face-selective neurons, and stimulus size, thought to have little or no impact on face-selective neurons. We recorded the responses of 57 ML neurons in two monkeys. Face stimuli were presented at two sizes (10 and 5 degrees of visual angle) and two orientations (upright and inverted). The results indicate that different faces elicited distinct patterns of activity across ML neurons that were tolerant of changes in size. The results of the orientation manipulation were mixed, however: despite a reduced response to inverted faces, classifier performance was above chance for both upright and inverted faces, and the classification scores did not differ significantly between the two orientations. We conclude that population responses in area ML to different faces are dependent on stimulus orientation but are more tolerant to changes in stimulus size.
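The population-decoding logic described in this abstract (reading out face identity from the joint activity of many neurons) can be sketched with a toy nearest-centroid classifier. This is a minimal illustration only: the tuning values, noise level, and number of identities below are hypothetical, not taken from the recordings; only the neuron count echoes the study.

```python
import random

random.seed(0)

# Hypothetical population: 57 neurons (as in the study), 4 face identities.
N_NEURONS, N_FACES, N_TRAIN, N_TEST = 57, 4, 20, 20

# Each identity evokes a characteristic mean firing rate per neuron
# (illustrative tuning, drawn at random).
tuning = [[random.uniform(5.0, 30.0) for _ in range(N_NEURONS)]
          for _ in range(N_FACES)]

def population_response(face):
    """One noisy single-trial response vector to a given face identity."""
    return [m + random.gauss(0.0, 2.0) for m in tuning[face]]

# "Train" the decoder: one centroid (mean response vector) per identity.
centroids = []
for face in range(N_FACES):
    trials = [population_response(face) for _ in range(N_TRAIN)]
    centroids.append([sum(t[i] for t in trials) / N_TRAIN
                      for i in range(N_NEURONS)])

def decode(response):
    """Classify a response as the identity with the nearest centroid."""
    def dist(centroid):
        return sum((r - m) ** 2 for r, m in zip(response, centroid))
    return min(range(N_FACES), key=lambda f: dist(centroids[f]))

# Decoding accuracy on fresh test trials; chance level is 1 / N_FACES.
correct = sum(decode(population_response(f)) == f
              for f in range(N_FACES) for _ in range(N_TEST))
accuracy = correct / (N_FACES * N_TEST)
```

In this toy setup the identity-specific tuning differences are large relative to the trial-to-trial noise, so decoding lands well above the 25% chance level, mirroring the above-chance classifier performance the abstract reports.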

2010
Vol 22 (10)
pp. 2276-2288
Author(s):
Lisa R. Betts
Hugh R. Wilson

It is well established that the human visual system contains a distributed network of regions that are involved in processing faces, but our understanding of how faces are represented within these face-sensitive brain areas is incomplete. We used fMRI to investigate whether face-sensitive brain areas are solely tuned for whole faces, or whether they contain heterogeneous populations of neurons tuned to individual components of the face as well as whole faces, as suggested by physiological investigations in nonhuman primates. The middle fusiform gyrus (fusiform face area, or FFA) and the inferior occipital gyrus (occipital face area, or OFA) produced robust BOLD activation to synthetic whole face stimuli, but also to the internal facial features and head outlines. BOLD responses to whole face stimuli in FFA were significantly reduced after adaptation to whole faces, but not after adaptation to features or head outlines, whereas activation to head outlines was reduced after adaptation to both whole faces and head outlines. OFA showed no significant adaptation effects for matching adaptation and test conditions, but did exhibit cross-adaptation between whole faces and head outlines. The internal face features did not produce any significant adaptation within either FFA or OFA. Our results are consistent with a model in which independent populations of whole face-, feature-, and head outline-tuned neurons exist within face-sensitive regions of human occipito-temporal cortex, which in turn would support tasks such as viewpoint processing, emotion classification, and identity discrimination.


2007
Vol 19 (3)
pp. 543-555
Author(s):
Bruno Rossion
Daniel Collins
Valérie Goffaux
Tim Curran

The degree of commonality between the perceptual mechanisms involved in processing faces and objects of expertise is intensely debated. To clarify this issue, we recorded occipito-temporal event-related potentials in response to faces while participants concurrently processed visual objects of expertise. In car experts fixating pictures of cars, we observed a large decrease in the N170, an evoked potential elicited by face stimuli between 130 and 200 msec. This sensory suppression was much weaker when the car and face stimuli were separated by a 200-msec blank interval. With and without this delay, there was a strong correlation between the decrease in face-evoked N170 amplitude and the subject's level of car expertise as measured in an independent behavioral task. Together, these results show that neural representations of faces and of nonface objects of expertise compete for visual processing in the occipito-temporal cortex as early as 130–200 msec following stimulus onset.


2021
pp. 003329412110184
Author(s):
Paola Surcinelli
Federica Andrei
Ornella Montebarocci
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the stimulus presented. Affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and those that used profile views employed between-subjects designs or children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and profile views using a within-subjects experimental design.

Method: The sample comprised 132 Italian university students (88 female; Mage = 24.27 years, SD = 5.89). Face stimuli displayed frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, frontal and profile, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded.

Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than the same expressions in profile, while no differences were found for the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.


i-Perception
2021
Vol 12 (6)
pp. 204166952110563
Author(s):
Ronja Mueller
Sandra Utz
Claus-Christian Carbon
Tilo Strobach

Recognizing familiar faces requires a comparison of the incoming perceptual information with mental face representations stored in memory. Mounting evidence indicates that these representations adapt quickly to recently perceived facial changes. This becomes apparent in face adaptation studies, where exposure to a strongly manipulated face alters the perception of subsequent face stimuli: original, non-manipulated face images then appear to be manipulated, while images similar to the adaptor are perceived as "normal." The face adaptation paradigm thus serves as a good tool for investigating the information stored in facial memory. So far, most face adaptation studies have focused on configural (second-order relationship) face information, largely neglecting non-configural face information (i.e., information that does not affect spatial face relations), such as color, even though several (non-adaptation) studies have demonstrated the importance of color information in face perception and identification. The present study therefore focuses on adaptation effects for saturation, a form of color information, and compares the results with previous findings on brightness. The study reveals differences in the effect pattern and robustness, indicating that adaptation effects vary considerably even within the same class of non-configural face information.


2007
Vol 97 (2)
pp. 1671-1683
Author(s):
K. M. Gothard
F. P. Battaglia
C. A. Erickson
K. M. Spitler
D. G. Amaral

The amygdala is purported to play an important role in face processing, yet the specificity of its activation to face stimuli and the relative contributions of identity and expression to its activation are unknown. In the current study, neural activity in the amygdala was recorded as monkeys passively viewed images of monkey faces, human faces, and objects on a computer monitor. Comparable proportions of neurons responded selectively to images from each category. Neural responses to monkey faces were further examined to determine whether face identity or facial expression drove the face-selective responses. The majority of these neurons (64%) responded to both identity and facial expression, suggesting that these parameters are processed jointly in the amygdala. Substantial fractions of neurons, however, showed purely identity-selective or expression-selective responses. Neurons were selective for a particular facial expression by either increasing or decreasing their firing rates compared with the firing rates elicited by the other expressions. Responses to appeasing faces were often marked by significant decreases in firing rate, whereas responses to threatening faces were strongly associated with increased firing rates. Thus, global activation in the amygdala might be greater for threatening faces than for neutral or appeasing faces.


Perception
10.1068/p6291
2009
Vol 38 (5)
pp. 702-707
Author(s):
Robert A Johnston
Eleanor Tomlinson
Chris Jones
Alan Weaden

The face-processing skills of people with schizophrenia were compared with those of a group of unimpaired individuals. Participants were asked to make speeded face-classification decisions to faces previously rated as being typical or distinctive. The schizophrenic group responded more slowly than the unimpaired group; however, both groups demonstrated the customary sensitivity to the distinctiveness of the face stimuli. Face-classification latencies made to typical faces were shorter than those made to distinctive faces. The implication of this finding with the schizophrenic group is discussed with reference to accounts of face-processing deficits attributed to these individuals.


2013
Vol 113 (1)
pp. 199-216
Author(s):
Marcella L. Woud
Eni S. Becker
Wolf-Gero Lange
Mike Rinck

A growing body of evidence shows that the prolonged execution of approach movements toward stimuli, and of avoidance movements away from them, affects how those stimuli are evaluated. However, there has been no systematic investigation of such training effects. The present study therefore compared approach-avoidance training effects on differently valenced facial expressions: neutral (Experiment 1, N = 85), angry (Experiment 2, N = 87), and smiling (Experiment 3, N = 89). The face stimuli were shown on a computer screen, and, by means of a joystick, participants pulled half of the faces closer (a positive approach movement) and pushed the other half away (a negative avoidance movement). Only implicit evaluations of neutral expressions were affected by the training procedure. The boundary conditions of such approach-avoidance training effects are discussed.


2005
Vol 272 (1566)
pp. 897-904
Author(s):
David A Leopold
Gillian Rhodes
Kai-Markus Müller
Linda Jeffery

Several recent demonstrations using visual adaptation have revealed high-level aftereffects for complex patterns including faces. While traditional aftereffects involve perceptual distortion of simple attributes such as orientation or colour that are processed early in the visual cortical hierarchy, face adaptation affects perceived identity and expression, which are thought to be products of higher-order processing. And, unlike most simple aftereffects, those involving faces are robust to changes in scale, position and orientation between the adapting and test stimuli. These differences raise the question of how closely related face aftereffects are to traditional ones. Little is known about the build-up and decay of the face aftereffect, and the similarity of these dynamic processes to traditional aftereffects might provide insight into this relationship. We examined the effect of varying the duration of both the adapting and test stimuli on the magnitude of perceived distortions in face identity. We found that, just as with traditional aftereffects, the identity aftereffect grew logarithmically stronger as a function of adaptation time and exponentially weaker as a function of test duration. Even the subtle aspects of these dynamics, such as the power-law relationship between the adapting and test durations, closely resembled that of other aftereffects. These results were obtained with two different sets of face stimuli that differed greatly in their low-level properties. We postulate that the mechanisms governing these shared dynamics may be dissociable from the responses of feature-selective neurons in the early visual cortex.
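The dynamics this abstract reports (aftereffect strength growing logarithmically with adaptation duration and decaying exponentially with test duration) can be expressed as a schematic model. The functional form follows the abstract's description, but the gain and time-constant parameters below are hypothetical, not fitted values from the study.

```python
import math

def aftereffect_strength(adapt_s, test_s, gain=1.0, tau_s=1.0):
    """Schematic aftereffect magnitude: logarithmic growth with adaptation
    duration (adapt_s, seconds) and exponential decay with test duration
    (test_s, seconds). gain and tau_s are illustrative parameters only."""
    return gain * math.log(1.0 + adapt_s) * math.exp(-test_s / tau_s)

# Longer adaptation strengthens the aftereffect; longer test exposure weakens it.
strong = aftereffect_strength(adapt_s=8.0, test_s=0.2)
weaker_adapt = aftereffect_strength(adapt_s=1.0, test_s=0.2)
weaker_test = aftereffect_strength(adapt_s=8.0, test_s=2.0)
```

The multiplicative form is one simple way to combine the two dependencies; under it, the adaptation and test durations trade off in the way a power-law relationship between them would suggest, consistent with the dynamics described for traditional aftereffects.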

