Data Augmentation-Assisted Makeup-Invariant Face Recognition

2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Sajid ◽  
Nouman Ali ◽  
Saadat Hanif Dar ◽  
Naeem Iqbal Ratyal ◽  
Asif Raza Butt ◽  
...  

Recently, face datasets containing celebrity photos with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem is further confounded because makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities, owing to the context of interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing synthetic makeup variations. The augmented dataset, containing both original face images and versions with synthetic makeup variations, allows the dCNN to learn face features under a variety of facial makeup conditions. We also evaluate the role of partial and full makeup in face images in improving recognition performance. Experimental results on two challenging face datasets show that the proposed approach can compete with the state of the art.
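The augmentation idea can be sketched independently of any particular network: blend flat makeup-like color patches into fixed face regions and append the variants to the training set. This is a minimal numpy illustration, not the authors' actual augmentation pipeline; the region boxes, colors, and blending weight are all hypothetical stand-ins.

```python
import numpy as np

def add_synthetic_makeup(face, region, color, alpha=0.4):
    """Blend a flat makeup color into one rectangular face region.

    face   : H x W x 3 float array in [0, 1]
    region : (row0, row1, col0, col1) bounding box of e.g. lips or eyelids
    color  : RGB triple in [0, 1]
    alpha  : blending strength (0 = no makeup, 1 = solid color)
    """
    out = face.copy()
    r0, r1, c0, c1 = region
    patch = out[r0:r1, c0:c1]
    out[r0:r1, c0:c1] = (1 - alpha) * patch + alpha * np.asarray(color)
    return out

def augment_dataset(faces, regions, colors, alpha=0.4):
    """Return the original images plus one synthetic-makeup variant per color."""
    augmented = list(faces)
    for face in faces:
        for color in colors:
            out = face
            for region in regions:
                out = add_synthetic_makeup(out, region, color, alpha)
            augmented.append(out)
    return augmented
```

Training on the union of originals and variants is what lets the network see the same identity under several makeup styles.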

2018 ◽  
Vol 9 (1) ◽  
pp. 60-77 ◽  
Author(s):  
Souhir Sghaier ◽  
Wajdi Farhat ◽  
Chokri Souani

This manuscript presents an improved system that can detect and recognize a person in 3D space automatically, without requiring interaction from the person. The system is based not only on quantum computation and measurements to extract the feature vector in the characterization phase but also on a learning algorithm (SVM) to classify and recognize the person. The research presents an improved technique for automatic 3D face recognition that uses anthropometric proportions and measurements to detect and extract the region of interest, which is unaffected by facial expression. The approach can handle incomplete and noisy images and reject non-facial areas automatically. Moreover, it can deal with the presence of holes in the meshed and textured 3D image, and it is stable under small translations and rotations of the face. All experimental tests were conducted on two 3D face datasets, FRAV 3D and GAVAB. The results of the proposed approach are promising, showing that it is competitive with similar approaches in terms of accuracy, robustness, and flexibility. It achieves a high recognition rate of 95.35% for faces with neutral and non-neutral expressions in identification, 98.36% for authentication on GAVAB, and 100% on some galleries of the FRAV 3D dataset.
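The anthropometric-measurement idea can be illustrated with a small sketch: compute scale-invariant distance ratios from 3D landmarks and match against a gallery. The landmark names and the particular ratios below are hypothetical, and a nearest-neighbor match stands in for the paper's SVM classifier.

```python
import numpy as np

def anthropometric_features(landmarks):
    """Scale-invariant distance ratios from 3D landmarks.

    landmarks : dict of name -> (x, y, z); the keys used here
    ('nose_tip', 'eye_l', 'eye_r', 'chin') are illustrative, not
    the paper's actual landmark set.
    """
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    dist = lambda a, b: np.linalg.norm(p[a] - p[b])
    inter_eye = dist('eye_l', 'eye_r')  # normalizing baseline
    return np.array([
        dist('nose_tip', 'chin') / inter_eye,
        dist('nose_tip', 'eye_l') / inter_eye,
        dist('nose_tip', 'eye_r') / inter_eye,
    ])

def nearest_gallery_id(query_feat, gallery):
    """Return the gallery identity whose feature vector is closest."""
    ids, feats = zip(*gallery.items())
    dists = [np.linalg.norm(query_feat - f) for f in feats]
    return ids[int(np.argmin(dists))]
```

Normalizing by the inter-eye distance is what makes the features stable under changes of scale, and working with ratios of rigid-region distances is what keeps them stable under expression.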


2017 ◽  
Vol 23 (1) ◽  
pp. 69-86 ◽  
Author(s):  
Steffen A. Herff ◽  
Daniela Czernochowski

When attention is divided during memory encoding, performance tends to suffer. The nature of this performance decrement, however, is domain-dependent and often governed by domain-specific expertise. In this study, 111 participants with differing levels of musical expertise (professional musicians, amateur musicians, and non-musicians) were presented with novel melodies under full- or divided-attention conditions in a continuous melody-recognition task. As hypothesized, melody recognition was modulated by musical expertise, with greater expertise associated with better performance. Recognition performance increased with every additional presentation of a target melody. The divided-attention condition required performing a non-music-related digit-monitoring task while simultaneously listening to the melodies. Memory performance decreased in all groups in the divided-attention condition; intriguingly, however, musicians also performed significantly better than non-musicians in the concurrent digit-monitoring task. The results provide insight into the role of expertise, attention, and memory in the musical domain, and are discussed in terms of attentional resource models. In light of these models, an asymmetrical non-linear trade-off between two simultaneous tasks is proposed to explain the present findings.


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tongxin Wei ◽  
Qingbao Li ◽  
Jinjin Liu ◽  
Ping Zhang ◽  
Zhifeng Chen

In the process of face recognition, acquired face data is often seriously distorted: many collected face images are blurred or even have missing regions. Traditional image inpainting was structure-based, while currently popular inpainting methods are based on deep convolutional neural networks and generative adversarial nets. In this paper, we propose a 3D face image inpainting method based on generative adversarial nets. We identify two parallels of the vector to locate the planar positions. Compared with previous methods, the edge information of the missing region is detected, and the fuzzy edge inpainting achieves a better visual match. This dramatically boosts face recognition performance.
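For contrast with the GAN-based approach, the classical structure-based inpainting mentioned above can be sketched as a simple diffusion fill: missing pixels are repeatedly replaced by the average of their neighbors until values smooth in from the hole's boundary. This is an illustrative baseline only, not the proposed method.

```python
import numpy as np

def diffusion_inpaint(img, mask, iters=200):
    """Fill masked pixels by repeatedly averaging their 4-neighbours.

    img  : 2-D float array (grayscale face image)
    mask : boolean array, True where pixels are missing
    """
    out = img.copy()
    out[mask] = out[~mask].mean()          # coarse initial guess
    for _ in range(iters):
        # mean of the four axis-aligned neighbours (edge-padded)
        padded = np.pad(out, 1, mode='edge')
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = neigh[mask]            # only missing pixels change
    return out
```

Because each step only averages, the fill can never hallucinate texture or edges inside the hole; that limitation is exactly what learned (CNN/GAN) inpainting addresses.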


2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Zhifei Wang ◽  
Zhenjiang Miao ◽  
Yanli Wan ◽  
Zhen Tang

Low resolution (LR) in face recognition (FR) surveillance applications causes a dimensional mismatch between an LR image and its high-resolution (HR) template. In this paper, a novel method called kernel coupled cross-regression (KCCR) is proposed to deal with this problem. Instead of processing in the original observation space directly, KCCR projects LR and HR face images into a unified nonlinear embedding feature space using kernel coupled mappings and graph embedding. Spectral regression is further employed to improve generalization performance and reduce time complexity. Meanwhile, cross-regression is developed to fully utilize the HR embedding to enrich the LR space, thereby improving recognition performance. Experiments on the FERET and CMU PIE face databases show that KCCR outperforms existing structure-based methods in terms of recognition rate as well as time complexity.
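The coupled-mapping idea can be sketched in a simplified linear form: define the shared space from the top principal directions of the HR features, then fit the LR projection by least squares so that coupled LR/HR pairs land close together. This is a linear stand-in for KCCR; it does not implement the kernel mappings, graph embedding, spectral regression, or cross-regression of the actual method.

```python
import numpy as np

def fit_coupled_mapping(lr_feats, hr_feats, dim):
    """Learn linear maps sending LR and HR features to one shared space.

    lr_feats, hr_feats : row-aligned (n, d_lr) and (n, d_hr) arrays of
    coupled LR/HR feature pairs for the same faces.
    """
    # HR map: top `dim` principal directions of the HR features
    hr_centered = hr_feats - hr_feats.mean(axis=0)
    _, _, vt = np.linalg.svd(hr_centered, full_matrices=False)
    w_hr = vt[:dim].T                      # (d_hr, dim)
    target = hr_feats @ w_hr               # shared-space coordinates
    # LR map: least squares onto the same coordinates
    w_lr, *_ = np.linalg.lstsq(lr_feats, target, rcond=None)
    return w_lr, w_hr

def match_lr_to_hr(lr_query, hr_gallery, w_lr, w_hr):
    """Index of the HR gallery face closest to an LR query in the shared space."""
    q = lr_query @ w_lr
    g = hr_gallery @ w_hr
    return int(np.argmin(np.linalg.norm(g - q, axis=1)))
```

Matching in the shared space sidesteps the dimensional mismatch: the LR query and HR gallery never need to be compared in their original, differently sized feature spaces.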


Perception ◽  
10.1068/p5637 ◽  
2007 ◽  
Vol 36 (9) ◽  
pp. 1334-1352 ◽  
Author(s):  
Simone K Favelle ◽  
Stephen Palmisano ◽  
Ryan T Maloney

Previous research into the effects of viewpoint change on face recognition has typically dealt with rotations around the head's vertical axis (yaw). Another common, although less studied, source of viewpoint variation in faces is rotation around the head's horizontal pitch axis (pitch). In the current study we used both a sequential matching task and an old/new recognition task to examine the effect of viewpoint change following rotation about both pitch and yaw axes on human face recognition. The results of both tasks showed that recognition performance was better for faces rotated about yaw compared to pitch. Further, recognition performance for faces rotated upwards on the pitch axis was better than for faces rotated downwards. Thus, equivalent angular rotations about pitch and yaw do not produce equivalent viewpoint-dependent declines in recognition performance.


2021 ◽  
Vol 37 (5) ◽  
pp. 879-890
Author(s):  
Rong Wang ◽  
ZaiFeng Shi ◽  
Qifeng Li ◽  
Ronghua Gao ◽  
Chunjiang Zhao ◽  
...  

Highlights:
- A pig face recognition model that cascades a pig face detection network and a pig face recognition network is proposed.
- The pig face detection network automatically extracts pig face images to reduce the influence of the background.
- The proposed cascaded model reaches accuracies of 99.38%, 98.96%, and 97.66% on the three datasets.
- An application is developed to automatically recognize individual pigs.

Abstract. The identification and tracking of livestock using artificial intelligence technology have been a research hotspot in recent years. Automatic individual recognition is the key to realizing intelligent feeding. Although RFID can achieve identification tasks, it is expensive and fails easily. In this article, a pig face recognition model that cascades a pig face detection network and a pig face recognition network is proposed. First, the pig face detection network is used to crop pig face images from videos and eliminate the complex background of the pig shed. Second, batch normalization, dropout, skip connections, and residual modules are exploited to design a pig face recognition network for individual identification. Finally, the cascaded model based on the pig face detection and recognition networks is deployed on a GPU server, and an application is developed to automatically recognize individual pigs. Additionally, class activation maps generated by Grad-CAM are used to analyze the features of pig faces learned by the model. Under free and unconstrained conditions, 46 pigs were selected to build a positive pig face dataset, an original multiangle pig face dataset, and an enhanced multiangle pig face dataset to verify the cascaded model. The proposed cascaded model reaches accuracies of 99.38%, 98.96%, and 97.66% on the three datasets, higher than those of other pig face recognition models. These results improve the recognition performance of pig faces under multiangle and multi-environment conditions.

Keywords: CNN, Deep learning, Pig face detection, Pig face recognition.
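The cascade structure (detection network, then crop, then recognition network) can be sketched with stand-in stubs. The real system uses trained CNNs for both stages; the toy brightness-based detector and mean-intensity recognizer below merely imitate that two-stage flow so the pipeline is runnable.

```python
import numpy as np

def detect_pig_face(frame):
    """Stand-in detector: bounding box of the above-average-intensity region.

    A real system would run a trained detection network here; this stub
    just locates a bright blob so the cascade can execute end to end.
    """
    ys, xs = np.nonzero(frame > frame.mean())
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def recognize(crop, gallery):
    """Stand-in recognizer: nearest gallery template by mean intensity."""
    score = crop.mean()
    return min(gallery, key=lambda pig_id: abs(gallery[pig_id] - score))

def cascade(frame, gallery):
    """Detection -> crop (background removed) -> recognition."""
    y0, y1, x0, x1 = detect_pig_face(frame)
    return recognize(frame[y0:y1, x0:x1], gallery)
```

The point of the cascade is visible even in the stub: the recognizer only ever sees the cropped face region, so the cluttered pig-shed background cannot influence its decision.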


Perception ◽  
10.1068/p5027 ◽  
2003 ◽  
Vol 32 (3) ◽  
pp. 285-293 ◽  
Author(s):  
Javid Sadr ◽  
Izzat Jarudi ◽  
Pawan Sinha

A fundamental challenge in face recognition lies in determining which facial characteristics are important in the identification of faces. Several studies have indicated the significance of certain facial features in this regard, particularly internal ones such as the eyes and mouth. Surprisingly, however, one rather prominent facial feature has received little attention in this domain: the eyebrows. Past work has examined the role of eyebrows in emotional expression and nonverbal communication, as well as in facial aesthetics and sexual dimorphism. However, it has not been made clear whether the eyebrows play an important role in the identification of faces. Here, we report experimental results which suggest that for face recognition the eyebrows may be at least as influential as the eyes. Specifically, we find that the absence of eyebrows in familiar faces leads to a very large and significant disruption in recognition performance. In fact, a significantly greater decrement in face recognition is observed in the absence of eyebrows than in the absence of eyes. These results may have important implications for our understanding of the mechanisms of face recognition in humans as well as for the development of artificial face-recognition systems.


2013 ◽  
Vol 22 (01) ◽  
pp. 1250029 ◽  
Author(s):  
Shicai Yang ◽  
George Bebis ◽  
Muhammad Hussain ◽  
Ghulam Muhammad ◽  
Anwar M. Mirza

Human faces can be arranged into different face categories using information from common visual cues such as gender, ethnicity, and age. It has been demonstrated that using face categorization as a precursor step to face recognition improves recognition rates and leads to more graceful errors. Although face categorization using common visual cues yields meaningful face categories, developing accurate and robust gender, ethnicity, and age categorizers is a challenging issue. Moreover, it limits the overall number of possible face categories and, in practice, yields unbalanced face categories that can compromise recognition performance. This paper investigates ways to automatically discover a categorization of human faces from a collection of unlabeled face images without relying on predefined visual cues. Specifically, given a set of face images from a group of known individuals (i.e., a gallery set), our goal is to robustly partition the gallery set into face categories. The objective is to assign novel images of the same individuals (i.e., a query set) to the correct face category with high accuracy and robustness. To address the issue of face category discovery, we represent faces using local features and apply unsupervised learning (i.e., clustering). To categorize faces in novel images, we employ nearest-neighbor algorithms or learn the separating boundaries between face categories using supervised learning (i.e., classification). To improve face categorization robustness, we allow face categories to share local features as well as to overlap. We demonstrate the performance of the proposed approach through extensive experiments and comparisons using the FERET database.
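The discovery-then-assignment pipeline can be sketched with plain k-means over unlabeled feature vectors, followed by nearest-centroid assignment of novel query faces. This minimal version ignores the paper's local-feature representation, shared/overlapping categories, and supervised boundary learning.

```python
import numpy as np

def kmeans(feats, k, iters=50, seed=0):
    """Plain k-means: discover face categories from unlabeled feature vectors."""
    rng = np.random.default_rng(seed)
    centroids = feats[rng.choice(len(feats), size=k, replace=False)]
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        # assign each gallery face to its nearest category centroid
        d = np.linalg.norm(feats[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned faces
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = feats[labels == j].mean(axis=0)
    return centroids, labels

def assign_category(query_feat, centroids):
    """Nearest-centroid assignment of a novel (query-set) face."""
    return int(np.argmin(np.linalg.norm(centroids - query_feat, axis=1)))
```

Because the categories are discovered rather than predefined, their number is free and their sizes tend to follow the data, which is the motivation the abstract gives for moving away from fixed gender/ethnicity/age categorizers.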


2008 ◽  
Vol 19 (10) ◽  
pp. 998-1006 ◽  
Author(s):  
Janet Hui-wen Hsiao ◽  
Garrison Cottrell

It is well known that there exist preferred landing positions for eye fixations in visual word recognition. However, the existence of preferred landing positions in face recognition is less well established. It is also unknown how many fixations are required to recognize a face. To investigate these questions, we recorded eye movements during face recognition. During an otherwise standard face-recognition task, subjects were allowed a variable number of fixations before the stimulus was masked. We found that optimal recognition performance is achieved with two fixations; performance does not improve with additional fixations. The distribution of the first fixation is just to the left of the center of the nose, and that of the second fixation is around the center of the nose. Thus, these appear to be the preferred landing positions for face recognition. Furthermore, the fixations made during face learning differ in location from those made during face recognition and are also more variable in duration; this suggests that different strategies are used for face learning and face recognition.

