Face Identification Performance Using Facial Expressions as Perturbation

Author(s):  
Minoru Nakayama ◽  
Takashi Kumakura


2011 ◽  
Vol 22 (12) ◽  
pp. 1518-1526 ◽  
Author(s):  
Sébastien Miellet ◽  
Roberto Caldara ◽  
Philippe G. Schyns

The main concern in face-processing research is to understand the processes underlying the identification of faces. In the study reported here, we addressed this issue by examining whether local or global information supports face identification. We developed a new methodology called “iHybrid.” This technique combines two famous identities in a gaze-contingent paradigm, which simultaneously provides local, foveated information from one face and global, complementary information from a second face. Behavioral face-identification performance and eye-tracking data showed that the visual system identified faces on the basis of either local or global information depending on the location of the observer’s first fixation. In some cases, a given observer even identified the same face using local information on one trial and global information on another trial. A validation in natural viewing conditions confirmed our findings. These results clearly demonstrate that face identification is not rooted in a single, or even preferred, information-gathering strategy.


2016 ◽  
Vol 16 (12) ◽  
pp. 161
Author(s):  
Daniel Fiset ◽  
Josiane Leclerc ◽  
Jessica Royer ◽  
Valérie Plouffe ◽  
Caroline Blais

2016 ◽  
Vol 6 (1) ◽  
Author(s):  
Valerie Goffaux ◽  
John A. Greenwood

Abstract
Recent work demonstrates that human face identification is most efficient when based on horizontal, rather than vertical, image structure. Because it is unclear how this specialization for upright (compared to inverted) face processing emerges in the visual system, the present study aimed to systematically characterize the orientation sensitivity profile for face identification. With upright faces, identification performance in a delayed match-to-sample task was highest for horizontally filtered images and declined sharply with oblique and vertically filtered images. Performance was well described by a Gaussian function with a standard deviation around 25°. Face inversion reshaped this sensitivity profile dramatically, with a downward shift of the entire tuning curve as well as a reduction in the amplitude of the horizontal peak and a doubling in bandwidth. The use of naturalistic outer contours (vs. a common outline mask) was also found to reshape this sensitivity profile by increasing sensitivity to oblique information in the near-horizontal range. Altogether, although face identification is sharply tuned to horizontal angles, both inversion and outline masking can profoundly reshape this orientation sensitivity profile. This combination of image- and observer-driven effects provides an insight into the functional relationship between orientation-selective processes within primary and high-level stages of the human brain.
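The Gaussian tuning analysis described above can be sketched as a simple curve fit of identification accuracy against filter orientation. This is a minimal illustration, not the study's analysis: the data points below are synthetic placeholders generated from a known curve, and the parameter names are assumptions.

```python
# Sketch: fitting a Gaussian orientation-sensitivity profile to
# face-identification accuracy. Synthetic data, not the study's results.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(theta, amplitude, mu, sigma, baseline):
    """Gaussian tuning curve over filter orientation (degrees; 0 = horizontal)."""
    return baseline + amplitude * np.exp(-((theta - mu) ** 2) / (2 * sigma ** 2))

# Filter orientations from vertical (-90) through horizontal (0) to vertical (90).
orientations = np.array([-90.0, -67.5, -45.0, -22.5, 0.0, 22.5, 45.0, 67.5, 90.0])
# Placeholder accuracies drawn from a curve peaking at horizontal with SD = 25 deg.
accuracy = gaussian(orientations, amplitude=0.4, mu=0.0, sigma=25.0, baseline=0.5)

# Recover the tuning parameters; the abstract reports an SD around 25 degrees
# for upright faces, with inversion lowering and broadening the curve.
params, _ = curve_fit(gaussian, orientations, accuracy, p0=[0.3, 0.0, 20.0, 0.5])
amplitude, mu, sigma, baseline = params
```

Comparing fitted `sigma` and `amplitude` between upright and inverted conditions would capture the broadening and peak reduction the abstract reports.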


1994 ◽  
Vol 47 (1) ◽  
pp. 5-28 ◽  
Author(s):  
Vicki Bruce

A theme running through M.D. Vernon's discussions of visual perception was the key question of how we perceive a stable world despite continuous variation. The central problem in face identification is how we build stable representations from exemplars that vary, both rigidly and non-rigidly, from instant to instant and from encounter to encounter. Experiments reveal that people are rather poor at generalizing from one exemplar of a face to another (e.g. from one photograph to another showing a different view or expression) yet highly accurate at encoding precise details of faces within the range shown by several slightly different exemplars. Moreover, provided instructions do not explicitly encourage subjects to attend to the way that different exemplars vary, faces are retained in a way that enhances familiarity of the prototype of the set, even if this was not presented for study. It is suggested that our usual encounters with continuous variations in facial expression, angle, and lighting provide the conditions necessary to establish stable representations of individuals within an overall category (the face) where all members share the same overall structure. These observations about face recognition would probably not have come as any great surprise to Maggie Vernon, many of whose more general observations about visual perception anticipated such conclusions.


Author(s):  
Kimberly B. Schauder ◽  
Woon Ju Park ◽  
Yuliy Tsank ◽  
Miguel P. Eckstein ◽  
Duje Tadin ◽  
...  

Abstract
Background
Autism spectrum disorder (ASD) is a neurodevelopmental disorder defined and diagnosed by core deficits in social communication and the presence of restricted and repetitive behaviors. Research on face processing suggests deficits in this domain in ASD but includes many mixed findings regarding the nature and extent of these differences. The first eye movement to a face has been shown to be highly informative and sufficient to achieve high performance in face identification in neurotypical adults. The current study focused on this critical moment shown to be essential in the process of face identification.
Methods
We applied an established eye-tracking and face identification paradigm to comprehensively characterize the initial eye movement to a face and test its functional consequence on face identification performance in adolescents with and without ASD (n = 21 per group), and in neurotypical adults. Specifically, we presented a series of faces and measured the landing location of the first saccade to each face, while simultaneously measuring their face identification abilities. Then, individuals were guided to look at specific locations on the face, and we measured how face identification performance varied as a function of that location. Adolescent participants also completed a more traditional measure of face identification which allowed us to more fully characterize face identification abilities in ASD.
Results
Our results indicate that the location of the initial look to faces and face identification performance for briefly presented faces are intact in ASD, ruling out the possibility that deficits in face perception, at least in adolescents with ASD, begin with the initial eye movement to the face. However, individuals with ASD showed impairments on the more traditional measure of face identification.
Conclusion
Together, the observed dissociation between initial, rapid face perception processes and other measures of face perception offers new insights and hypotheses related to the timing and perceptual complexity of face processing and how these specific aspects of face identification may be disrupted in ASD.


2020 ◽  
Author(s):  
Y. Ivette Colón ◽  
Carlos D. Castillo ◽  
Alice O'Toole

Facial expressions distort visual cues for identification in two-dimensional images. Face processing systems in the brain must decouple image-based information from multiple sources to operate in the social world. Deep convolutional neural networks (DCNN) trained for face identification retain identity-irrelevant, image-based information (e.g., viewpoint). We asked whether a DCNN trained for identity also retains expression information that generalizes over viewpoint change. DCNN representations were generated for a controlled dataset containing images of 70 actors posing 7 facial expressions (happy, sad, angry, surprised, fearful, disgusted, neutral), from 5 viewpoints (frontal, 90-degree and 45-degree left and right profiles). Two-dimensional visualizations of the DCNN representations revealed hierarchical groupings by identity, followed by viewpoint, and then by facial expression. Linear discriminant analysis of full-dimensional representations predicted expressions accurately (72% correct for happiness, followed by surprise, disgust, anger, neutral, sad, and fearful at 39%; chance = 14.29%). Expression classification was stable across viewpoints. Representational similarity heatmaps indicated that image similarities within identities varied more by viewpoint than by expression. We conclude that an identity-trained, deep network retains shape-deformable information about expression and viewpoint, along with identity, in a unified form—consistent with a recent hypothesis for ventral visual stream processing.
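The core analysis pattern above, reading expression out of an identity-trained network's embeddings with linear discriminant analysis, can be sketched as follows. This is a hedged illustration only: the embeddings are synthetic stand-ins built from random identity, expression, and viewpoint components, not outputs of the actual DCNN, and the scale factors are assumptions.

```python
# Sketch: LDA decoding of expression from face-embedding vectors, testing
# generalization across a held-out viewpoint. Synthetic embeddings only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_ids, n_expr, n_views, dim = 20, 7, 5, 64

# Identity dominates the representation (as in the abstract's hierarchy),
# with smaller viewpoint and expression components layered on top.
id_vecs = rng.normal(size=(n_ids, dim))
expr_vecs = 0.5 * rng.normal(size=(n_expr, dim))
view_vecs = 0.3 * rng.normal(size=(n_views, dim))

X, y_expr, y_view = [], [], []
for i in range(n_ids):
    for e in range(n_expr):
        for v in range(n_views):
            X.append(id_vecs[i] + expr_vecs[e] + view_vecs[v]
                     + 0.1 * rng.normal(size=dim))
            y_expr.append(e)
            y_view.append(v)
X = np.array(X)
y_expr, y_view = np.array(y_expr), np.array(y_view)

# Train on four viewpoints, test on the held-out one: if expression decoding
# survives the viewpoint change, the expression information is view-general.
train, test = y_view != 0, y_view == 0
lda = LinearDiscriminantAnalysis().fit(X[train], y_expr[train])
acc = lda.score(X[test], y_expr[test])  # chance would be 1/7 ~ 0.143
```

Accuracy well above the 1/7 chance level on the held-out viewpoint mirrors the abstract's finding that expression classification was stable across viewpoints.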


2019 ◽  
Vol 17 (1) ◽  
pp. 118-127
Author(s):  
Sanaa Ghouzali ◽  
Souad Larabi

Most biometric identification applications suffer from the curse of dimensionality as the database size becomes very large, which can negatively affect both identification performance and speed. In this paper, we use Projection Pursuit (PP) methods to determine clusters of individuals. Support Vector Machine (SVM) classifiers are then applied to each cluster of users separately. PP clustering is conducted using Friedman and Kurtosis projection indices optimized by Genetic Algorithm and Particle Swarm Optimization methods. Experimental results obtained using the YALE face database showed improvements in both the performance and speed of the face identification system.
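The two-stage pipeline described above can be sketched in outline: partition enrolled users into clusters, train one SVM per cluster, and route each probe to its cluster's classifier so identification searches only a subset of users. This is a minimal sketch under stated assumptions: KMeans stands in for the paper's Projection Pursuit clustering (Friedman/Kurtosis indices with GA/PSO optimization), and the feature vectors are synthetic, not YALE face images.

```python
# Sketch: cluster-then-classify identification. KMeans substitutes for the
# paper's Projection Pursuit clustering; data are synthetic feature vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_users, samples_per_user, dim, n_clusters = 15, 10, 32, 3

# One well-separated center per enrolled user, plus noisy samples around it.
centers = 5.0 * rng.normal(size=(n_users, dim))
X = np.repeat(centers, samples_per_user, axis=0) \
    + rng.normal(size=(n_users * samples_per_user, dim))
y = np.repeat(np.arange(n_users), samples_per_user)

# Stage 1: cluster the users (via their enrollment centers).
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(centers)
user_cluster = kmeans.labels_

# Stage 2: one SVM per cluster, trained only on that cluster's users,
# so each classifier faces a smaller, easier identification problem.
svms = {}
for c in range(n_clusters):
    mask = np.isin(y, np.where(user_cluster == c)[0])
    svms[c] = SVC(kernel="linear").fit(X[mask], y[mask])

def identify(sample):
    """Route a probe to its cluster's SVM, then predict the user ID."""
    c = int(kmeans.predict(sample.reshape(1, -1))[0])
    return int(svms[c].predict(sample.reshape(1, -1))[0])
```

The speed benefit comes from stage 1: each probe is compared against one cluster's users rather than the full enrolled population.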

