Improving Deep-Learning-based Face Recognition to Increase Robustness against Morphing Attacks

2020 ◽  
Author(s):  
Una M. Kelly ◽  
Luuk Spreeuwers ◽  
Raymond Veldhuis

State-of-the-art face recognition systems (FRS) are vulnerable to morphing attacks, in which two photos of different people are merged in such a way that the resulting photo resembles both people. Such a photo could be used to apply for a passport, allowing both people to travel with the same identity document. Research has so far focused on developing morphing detection methods. We suggest that it might instead be worthwhile to make face recognition systems themselves more robust to morphing attacks. We show that deep-learning-based face recognition can be improved simply by treating morphed images like real images during training, but also that more work is needed to achieve significant improvements. Furthermore, we test the performance of our FRS on morphs of a type not seen during training. This addresses the problem of overfitting to the type of morphs used during training, which is often overlooked in current research.
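Morphs of the kind described above are typically produced by warping two aligned face photos onto shared landmarks and then blending them. As a minimal sketch of the blending step only (landmark warping omitted), with the images represented as nested lists of grayscale intensities:

```python
def morph(img_a, img_b, alpha=0.5):
    """Pixel-wise alpha blend of two aligned, equal-size grayscale images,
    given as nested lists of intensities. A full morphing pipeline would
    also warp facial landmarks before blending; this sketch shows only
    the blending that makes the result resemble both subjects."""
    return [[round(alpha * pa + (1.0 - alpha) * pb)
             for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# During training, such a morph could be labelled as a non-match to both
# contributing identities, so the network learns to reject it.
```

The training strategy the abstract describes amounts to adding such morphs to the training set with appropriate match/non-match labels.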

An attendance management system that applies face recognition to unconstrained video marks a significant departure from the traditional method of recording attendance. This system has been developed in the deep-learning domain using face recognition. It automatically marks attendance by detecting faces end to end in frames obtained from the live video stream of a surveillance camera placed at the centre of the classroom. The recognised faces are compared with images stored in a database, after which an attendance report is generated; the system also sends attendance reports to the parents of absent students.
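The comparison step of such a system can be sketched as matching face embeddings from the video frames against an enrolled database; a common choice (an assumption here, not stated in the abstract) is cosine similarity against a threshold:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def mark_attendance(detected_embeddings, enrolled, threshold=0.8):
    """Compare each face embedding detected in the video frames against
    the stored database and split the roster into present and absent.
    The threshold value is illustrative, not from the paper."""
    present = set()
    for emb in detected_embeddings:
        for name, ref in enrolled.items():
            if cosine_similarity(emb, ref) >= threshold:
                present.add(name)
    absent = set(enrolled) - present
    return present, absent
```

The `absent` set is what would drive the report sent to parents of absent students.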


2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Ahmed Jawad A. AlBdairi ◽  
Zhu Xiao ◽  
Mohammed Alghaili

Interest in face recognition studies has grown rapidly in the last decade. One of the most important problems in face recognition is identifying people's ethnicity. In this study, a new deep-learning convolutional neural network is designed to create a model that can recognise people's ethnicity from their facial features. The new ethnicity dataset consists of 3141 images collected from three different nationalities. To the best of our knowledge, this is the first image dataset collected for ethnicity recognition, and it will be made available to the research community. The new model was compared with two state-of-the-art models, VGG and Inception V3, and the validation accuracy was calculated for each convolutional neural network. The generated models were tested on several images of people, and the results show that the best performance was achieved by our model, with a verification accuracy of 96.9%.
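The model comparison described above reduces to computing each network's validation accuracy on a held-out split and ranking the results; a minimal sketch (model names and predictions hypothetical):

```python
def validation_accuracy(predictions, labels):
    """Fraction of validation samples whose predicted class matches the label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def best_model(model_predictions, labels):
    """Rank models by validation accuracy. model_predictions maps a model
    name to its list of predicted class labels on the validation set."""
    scores = {name: validation_accuracy(preds, labels)
              for name, preds in model_predictions.items()}
    return max(scores, key=scores.get), scores
```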


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Rui Min ◽  
Abdenour Hadid ◽  
Jean-Luc Dugelay

While there has been an enormous amount of research on face recognition under pose/illumination/expression changes and image degradations, problems caused by occlusions have attracted relatively little attention. Facial occlusions, due, for example, to sunglasses, hat/cap, scarf, and beard, can significantly deteriorate the performance of face recognition systems in uncontrolled environments such as video surveillance. The goal of this paper is to explore face recognition in the presence of partial occlusions, with emphasis on real-world scenarios (e.g., sunglasses and scarf). In this paper, we propose an efficient approach which consists of first analysing the presence of potential occlusion on a face and then conducting face recognition on the nonoccluded facial regions based on selective local Gabor binary patterns. Experiments demonstrate that the proposed method outperforms state-of-the-art works including KLD-LGBPHS, S-LNMF, OA-LBP, and RSC. Furthermore, evaluations of the proposed approach under illumination and extreme facial expression changes also yield significant results.
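The core idea of matching only on non-occluded regions can be sketched as block-wise histogram comparison with an occlusion mask. This is a simplified illustration, not the paper's exact selective local Gabor binary pattern descriptor:

```python
def chi_square(h1, h2, eps=1e-9):
    """Chi-square distance between two local texture histograms."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def occlusion_aware_distance(probe_blocks, gallery_blocks, occluded):
    """Average block-wise histogram distance between probe and gallery
    faces, skipping blocks that the occlusion-analysis step flagged as
    covered (e.g. by sunglasses or a scarf)."""
    distances = [chi_square(p, g)
                 for i, (p, g) in enumerate(zip(probe_blocks, gallery_blocks))
                 if not occluded[i]]
    return sum(distances) / len(distances)
```

Excluding occluded blocks keeps corrupted texture from dominating the match score, which is the intuition behind the selective-matching design.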


Author(s):  
Amal Seralkhatem Osman Ali ◽  
Vijanth Sagayan Asirvadam ◽  
Aamir Saeed Malik ◽  
Mohamed Meselhy Eltoukhy ◽  
Azrina Aziz

Whilst facial recognition systems are known to be vulnerable to different acquisition conditions, most notably lighting effects and pose variations, their level of sensitivity to facial aging effects is yet to be fully researched. The face recognition vendor test (FRVT) 2012 annual statement estimated the deterioration in the performance of face recognition systems due to facial aging: accuracies degraded by about 5% for each year of age difference between the compared images. Consequently, developing an age-invariant platform remains a significant requirement for building an effective facial recognition system. The main objective of this work is to address the challenge of facial aging, which affects the performance of facial recognition systems. Accordingly, this work presents a geometrical model based on extracting a number of triangular facial features. The proposed model comprises a total of six triangular areas connecting and surrounding the main facial features (i.e. eyes, nose and mouth). Furthermore, a set of thirty mathematical relationships is developed and used to build a feature vector for each sample image. The areas and perimeters of the extracted triangles are calculated and used as inputs to the developed mathematical relationships. The performance of the system is evaluated on the publicly available face and gesture recognition research network (FG-NET) face aging database and compared with that of state-of-the-art face recognition methods and age-invariant face recognition systems. Our proposed system yielded good performance in terms of classification accuracy, exceeding 94%.
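The per-triangle measurements described above (area and perimeter from landmark coordinates) can be computed directly; a minimal sketch using Heron's formula:

```python
import math

def triangle_features(p1, p2, p3):
    """Area (via Heron's formula) and perimeter of one facial triangle,
    given three (x, y) landmark coordinates, e.g. eye-eye-nose."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    perimeter = a + b + c
    s = perimeter / 2.0
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return area, perimeter
```

The six triangles would yield six (area, perimeter) pairs, which the thirty relationships then combine into the feature vector; the exact relationships are defined in the paper and are not reproduced here.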


Author(s):  
Bo Chen ◽  
Hua Zhang ◽  
Yonglong Li ◽  
Shuang Wang ◽  
Huaifang Zhou ◽  
...  

An increasing number of detection methods based on computer vision are applied to detect cracks in water conservancy infrastructure. However, most studies directly use existing feature-extraction networks, which were designed for open-source datasets, to extract crack information. Because the distribution and pixel features of dam cracks differ from those data, the extracted crack information is incomplete. In this paper, a deep-learning-based network for dam surface crack detection is proposed, which mainly addresses the semantic segmentation of cracks on the dam surface. In particular, we design a shallow encoding network to extract features of crack images based on a statistical analysis of cracks. Further, to enhance the relevance of contextual information, we introduce an attention module into the decoding network. During training, we use the sum of Cross-Entropy and Dice Loss as the loss function to overcome data imbalance. Quantitative information about the cracks is extracted by the imaging principle after morphological algorithms are used to extract the morphological features of the predicted result. We built a manually annotated dataset containing 1577 images to verify the effectiveness of the proposed method, which achieves state-of-the-art performance on our dataset. Specifically, the precision, recall, IoU, F1_measure, and accuracy reach 90.81%, 81.54%, 75.23%, 85.93%, and 99.76%, respectively, and the quantization error of the cracks is less than 4%.
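The combined loss described above (Cross-Entropy plus Dice Loss) can be sketched for a flattened binary mask; this is a generic formulation, not the paper's exact implementation:

```python
import math

def binary_cross_entropy(pred, target, eps=1e-7):
    """Mean pixel-wise cross-entropy over a flattened binary mask;
    pred holds probabilities in [0, 1], target holds 0/1 labels."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice coefficient. Because it is computed from the overlap of
    the (small) crack class, it is far less dominated by the background
    than cross-entropy, which is why it helps with data imbalance."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def combined_loss(pred, target):
    """Sum of Cross-Entropy and Dice Loss, as used during training."""
    return binary_cross_entropy(pred, target) + dice_loss(pred, target)
```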


2021 ◽  
Vol 6 (1) ◽  
pp. 1-5
Author(s):  
Zobeir Raisi ◽  
Mohamed A. Naiel ◽  
Paul Fieguth ◽  
Steven Wardell ◽  
John Zelek

The reported accuracy of recent state-of-the-art text detection methods, mostly deep-learning approaches, is in the order of 80% to 90% on standard benchmark datasets. These methods have relaxed some of the restrictions on structured text and environment (i.e., "in the wild") that are usually required for classical OCR to function properly. Even with this relaxation, there are still circumstances where these state-of-the-art methods fail. Several remaining challenges in wild images, such as in-plane rotation, illumination reflection, partial occlusion, complex font styles, and perspective distortion, cause existing methods to perform poorly. In order to evaluate current approaches in a formal way, we standardize the datasets and metrics for comparison, the lack of which has made comparison between these methods difficult in the past. We use three benchmark datasets for our evaluations: ICDAR13, ICDAR15, and COCO-Text V2.0. The objective of the paper is to quantify the current shortcomings and to identify the challenges for future text detection research.
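Text-detection benchmarks of this kind typically score a predicted box against ground truth by intersection over union (IoU), commonly counting a detection as correct at IoU ≥ 0.5; a minimal sketch of the metric the standardized comparison rests on:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    In detection benchmarks a prediction is commonly counted as a true
    positive when its IoU with a ground-truth box reaches a threshold
    such as 0.5."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Precision and recall on a dataset then follow from counting IoU-matched pairs, which is what standardized metrics make comparable across methods.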

