Improvement of Identity Recognition with Occlusion Detection-Based Feature Selection

Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 167
Author(s):  
Jaeyoon Jang ◽  
Ho-Sub Yoon ◽  
Jaehong Kim

Image-based facial identity recognition is now used in many applications, since it requires only a camera and no other device. Its contactless nature has also made it one of the most popular means of authentication. However, a conventional recognition system fails when part of the face information is lost because of the user's posture or the wearing of masks, as during the recent pandemic. On some platforms performance is improved through incremental updates, but this remains inconvenient and inaccurate. In this paper, we propose a method that responds more actively to these situations. First, we determine whether occlusion has occurred, and when it has, we improve stability by computing the feature vector from only the significant, non-occluded area. By recycling the existing recognition model, at almost no additional cost, we confirmed a reduced drop in recognition performance in these situations: an improvement of about 1–3% when some facial information is lost. Although the gain is not dramatic, the approach has the major advantage of improving recognition performance while reusing existing systems.
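The region-selective matching described above can be sketched as follows; the region names, the per-region feature vectors, and the occlusion set are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of occlusion-aware feature selection: compare per-region
# feature vectors and skip any region flagged as occluded.
import numpy as np

REGIONS = ["eyes", "nose", "mouth"]  # assumed face partitioning

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_score(probe, gallery, occluded):
    """Average cosine similarity over the non-occluded regions only."""
    sims = [cosine(probe[r], gallery[r]) for r in REGIONS if r not in occluded]
    return sum(sims) / len(sims)
```

When a mask covers the mouth, for example, `occluded={"mouth"}` and only the eye and nose features contribute to the score, so the occluded area cannot corrupt the match.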

Nowadays, one of the critical factors affecting the recognition performance of any face recognition system is partial occlusion. This paper addresses face recognition in the presence of sunglasses and scarf occlusion. The proposed approach detects the face region that is not occluded and uses only this region for recognition. Adaptive Fuzzy C-Means clustering is used to segment the occluded and non-occluded parts, and a Minimum Cost Sub-Block Matching Distance (MCSBMD) is used for recognition. The input face image is divided into a number of sub-blocks; each block is checked for occlusion, and MWLBP features are extracted only from the non-occluded blocks and used for classification. Experimental results show that our method gives promising results compared with other conventional techniques.
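A minimal sketch of the sub-block matching idea, assuming plain intensity histograms in place of the paper's MWLBP features and a precomputed set of occluded block indices:

```python
# Illustrative sketch: split the face into sub-blocks, skip occluded blocks,
# and accumulate a block-wise histogram distance over the rest.
import numpy as np

def blocks(img, n=4):
    """Split a square image into n x n sub-blocks (row-major order)."""
    h, w = img.shape
    bh, bw = h // n, w // n
    return [img[i*bh:(i+1)*bh, j*bw:(j+1)*bw] for i in range(n) for j in range(n)]

def block_hist(b, bins=16):
    h, _ = np.histogram(b, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def sub_block_distance(probe, gallery, occluded_idx):
    """Sum block-wise histogram distances over non-occluded blocks only."""
    cost = 0.0
    for k, (p, g) in enumerate(zip(blocks(probe), blocks(gallery))):
        if k in occluded_idx:
            continue
        cost += float(np.abs(block_hist(p) - block_hist(g)).sum())
    return cost
```

Classification then assigns the probe to the gallery identity with the minimum accumulated cost.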


Author(s):  
Kalyan Chakravarthi. M

Abstract: Face recognition has become a popular and significant technology in recent years, but face alterations and the presence of masks make it much more challenging. In the real world, when a person is uncooperative with the system, as in video surveillance, masking is an even more common scenario, and current face recognition performance degrades on such masked faces. Still, the difficulties created by masks are usually disregarded. Face recognition is a promising area of applied computer vision, used to recognize a face or identify a person automatically from given images. In daily life it is widely used to authenticate a person correctly and automatically, for example in passport checking, smart doors, access control, voter verification, and criminal investigation. Face recognition has gained much attention as a unique and reliable biometric technology, making it more popular than other biometric techniques such as passwords, PINs, and fingerprints. Many governments across the world are also interested in face recognition systems to secure public places such as parks, airports, and bus and railway stations. Face recognition is a well-studied real-life problem, and excellent progress has been made on it over the last years. The primary concern of this work is facial masks, and especially enhancing the recognition accuracy of differently masked faces. A feasible approach is proposed that first detects the facial regions; the occluded face detection problem is approached using a cascaded convolutional neural network (CNN). Its performance has also been evaluated on heavily masked faces, with attractive outcomes. Finally, a comparative study is made for better understanding.


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Zhijun Guo ◽  
Shuai Liu

During wireless image transmission there are a large number of interference signals, but traditional interference-signal recognition systems are limited by the variety of modulation modes: they have difficulty accurately identifying the target signal, and their reliability needs further improvement. To solve this problem, this paper designs a wireless image transmission interference-signal recognition system based on deep learning. In the hardware part, an STM32F107VT and an SI4463 form a wireless controller that controls the execution of each instruction. In the software part, a feature vector is extracted from the time-domain characteristics of the interference signal; with the support of a GAP-CNN model, the interference signal is recognized through training and learning on these feature vectors. The experimental results show that the packet loss rate of the designed system is less than 0.5%, the recognition performance is good, and the reliability of the system is improved.
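A small sketch of the time-domain feature extraction step; the specific statistics chosen here (RMS, crest factor, zero-crossing rate, kurtosis) are common time-domain features and an assumption, not the paper's exact set.

```python
# Sketch: reduce a sampled interference signal to a short time-domain
# feature vector, which would then feed a GAP-CNN classifier.
import numpy as np

def time_domain_features(x):
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x**2))                      # signal energy
    peak = np.max(np.abs(x))
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)    # zero-crossing rate
    kurt = np.mean((x - x.mean())**4) / (np.var(x)**2 + 1e-12)
    return np.array([rms, peak / (rms + 1e-12), zcr, kurt])
```

Each candidate signal yields one such vector; stacking vectors over a training set produces the input matrix for the network's training and learning phase.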


Author(s):  
Manoj Prabhakaran Kumar ◽  
Manoj Kumar Rajagopal

This chapter proposes a facial expression system that combines the full set of facial features from a geometric deformable model with a classifier, in order to analyze the set of prototype expressions from frontal macro facial expressions. In the training phase, face detection and tracking are carried out by a constrained local model (CLM) on a standardized database. Using the CLM grid nodes, facial feature extraction obtains the displacement of the entire feature vector over 66 feature points. The feature-vector displacement is fed to a bi-linear support vector machine (SVM) classifier to evaluate the facial expression and build the trained model. The testing phase proceeds similarly, and its outcome is compared with the trained model for human emotion identification. Two normalization techniques and hold-out validation are applied in both phases. With this model, the overall validation performance is higher than in existing models.
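The feature-vector displacement over the 66 CLM points can be sketched as below; the landmark coordinates here are synthetic, and a flat (dx, dy) layout is an assumed encoding for the SVM input.

```python
# Sketch of the displacement feature: subtract the neutral-frame landmark
# grid from the expressive frame's grid and flatten the result.
import numpy as np

def displacement_features(neutral, expressive):
    """Per-landmark (dx, dy) displacements, flattened for an SVM classifier."""
    neutral = np.asarray(neutral, dtype=float)       # shape (66, 2)
    expressive = np.asarray(expressive, dtype=float)
    assert neutral.shape == expressive.shape == (66, 2)
    return (expressive - neutral).ravel()            # shape (132,)
```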


2021 ◽  
Vol 14 (1) ◽  
pp. 541-551
Author(s):  
Cahya Rahmad ◽  
Kohei Arai ◽  
Rosa Asmara ◽  
Ekojono Ekojono ◽  
...  

Face recognition plays an important role in identity recognition systems, and both color and geometric features have been claimed usable as parameters for face recognition. This study analyzes the performance of geometric features, color features, and their combination on human faces using Gaussian Naïve Bayes (GNB) and other machine learning methods. The geometric features are the Euclidean distances between the eyes, nose, and mouth, classified using GNB, K-Nearest Neighbour (KNN), and Support Vector Machine (SVM). The results are compared with color features: normalized RGB values, the mean of normalized RGB, and RGB variance. The obtained feature values are assembled and processed using GNB and the other ML methods to classify and recognize the faces. The data come from the Aberdeen face dataset, which contains 687 color face images from Ian Craw at Aberdeen: between 1 and 18 images of each of 90 individuals, with some variation in lighting and viewpoint, and resolutions varying from 336x480 to 624x544. The experimental results show that the system successfully recognized faces with all three models: SVM reached nearly 74.83%, GNB nearly 74.67%, and KNN with K = 5 nearly 72.17%.
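The two feature families can be sketched as follows; the landmark names and the specific distance pairs are assumptions for illustration, not the study's exact configuration.

```python
# Sketch: geometric features as Euclidean distances between landmarks,
# and color features as the mean of per-pixel normalized RGB.
import numpy as np

def geometric_features(lm):
    """Euclidean distances between eye, nose, and mouth landmarks."""
    pairs = [("left_eye", "right_eye"), ("left_eye", "nose"),
             ("right_eye", "nose"), ("nose", "mouth")]
    return np.array([np.linalg.norm(np.subtract(lm[a], lm[b])) for a, b in pairs])

def color_features(rgb_img):
    """Mean of per-pixel normalized RGB channels (chromaticity)."""
    img = np.asarray(rgb_img, dtype=float)
    s = img.sum(axis=-1, keepdims=True) + 1e-12
    return (img / s).reshape(-1, 3).mean(axis=0)
```

Concatenating the two vectors gives the combined feature set that is then passed to the GNB, KNN, or SVM classifier.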


Author(s):  
Chabib Arifin ◽  
Hartanto Junaedi

Speech is one of the biometric characteristics owned by human beings, like fingerprints, DNA, and the retina of the eye: no two human beings have the same voice. Human emotion is usually predicted from a person's face, or from changes in facial expression, but it turns out that emotions can also be detected from the spoken voice: emotions such as happiness, anger, neutrality, sadness, and surprise can be detected from the speech signal. The development of voice recognition systems is still ongoing, so this research analyzes a person's emotion from the speech signal. Related research on voice includes identity recognition, gender recognition, and emotion recognition from conversation. In this research the authors classify emotional speech into the classes happy, angry, neutral, sad, and surprise. The algorithm used is SVM (Support Vector Machine), with MFCC (Mel-frequency cepstral coefficients) for feature extraction, whose filtering process is adapted to human hearing. The implementation of both algorithms gives accuracies of happy = 68.54%, angry = 75.24%, neutral = 78.50%, sad = 74.22%, and surprise = 68.23%.
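A compact sketch of the MFCC front end (pre-emphasis, framing, power spectrum, mel filterbank, log, DCT); the frame sizes and filter counts are typical defaults, not necessarily the settings used in this research.

```python
# Sketch of MFCC extraction; the mel filterbank models human hearing by
# spacing filters linearly in mel (log-like in Hz).
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    inv = lambda m: 700 * (10**(m / 2595.0) - 1)
    pts = inv(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                 # rising slope of triangle i
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                 # falling slope of triangle i
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, frame=400, step=160, n_fft=512, n_filters=26, n_ceps=13):
    x = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])   # pre-emphasis
    frames = [x[i:i + frame] * np.hamming(frame)
              for i in range(0, len(x) - frame + 1, step)]
    power = np.abs(np.fft.rfft(frames, n_fft))**2 / n_fft
    energies = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-12)
    # DCT-II over the filterbank axis, keeping the first n_ceps coefficients
    n = n_filters
    dct = np.cos(np.pi / n * (np.arange(n) + 0.5)[None, :] * np.arange(n_ceps)[:, None])
    return energies @ dct.T                   # shape: (num_frames, n_ceps)
```

The per-frame coefficient rows (often averaged or stacked per utterance) are the vectors the SVM classifies into the five emotion classes.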


2021 ◽  
Author(s):  
Wei-Jong Yang ◽  
Cheng-Yu Lo ◽  
Pau-Choo Chung ◽  
Jar Ferr Yang

Face images with partially occluded areas create serious problems for face recognition systems. Linear regression classification (LRC) is a simple and powerful approach for face recognition, but it too cannot perform well under occlusion. By segmenting the face image into small subfaces, called modules, the LRC system can achieve some improvement by selecting the best non-occluded module for face classification. However, recognition performance still deteriorates because only a single module, a small portion of the face image, is used. Performance could be further enhanced by properly identifying the occluded modules and utilizing as many of the non-occluded modules as possible. In this chapter, we first analyze the texture histogram (TH) of each module and then use the TH difference to measure its occlusion tendency. Based on the TH difference, we suggest a general concept of weighted module face recognition to solve the occlusion problem, and propose the weighted module linear regression classification method, called WMLRC-TH, for partially occluded face recognition. To evaluate its performance, the proposed WMLRC-TH method is tested on the AR and FRGC2.0 face databases with several synthesized occlusions and compared with well-known face recognition methods and other robust face recognition methods. Experimental results show that the proposed method achieves the best performance in recognizing occluded faces. Owing to its simplicity in both training and testing phases, a face recognition system based on the WMLRC-TH method has been realized on Android phones for fast recognition of occluded faces.
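The weighted module LRC idea can be sketched as below; the per-module weight (in the chapter, derived from the TH difference) scales each module's linear-regression residual, and the module geometry is left abstract here.

```python
# Sketch of weighted module LRC: each probe module is regressed onto the
# span of the corresponding class modules; occluded modules get low weight.
import numpy as np

def lrc_residual(y, X):
    """Residual of projecting probe module y onto the span of class modules X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.linalg.norm(y - X @ beta))

def weighted_module_lrc(probe_modules, class_modules, weights):
    """Weighted sum of per-module LRC residuals; lower means a better match."""
    return sum(w * lrc_residual(y, X)
               for w, y, X in zip(weights, probe_modules, class_modules))
```

Identities are ranked by this weighted residual: a weight near 0 for a module flagged as occluded removes its contribution, while clean modules contribute with weight near 1.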


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Radhey Shyam ◽  
Yogendra Narain Singh

This paper presents a critical evaluation of multialgorithmic face recognition systems for human authentication in unconstrained environments. We propose different frameworks of a multialgorithmic face recognition system combining holistic and texture methods. Our aim is to combine uncorrelated face recognition methods that supplement each other, producing a comprehensive representation of the biometric cue to achieve optimum recognition performance. The multialgorithmic frameworks combine face recognition methods as follows: (i) Eigenfaces and local binary pattern (LBP), (ii) Fisherfaces and LBP, (iii) Eigenfaces and augmented local binary pattern (A-LBP), and (iv) Fisherfaces and A-LBP. The matching scores of these frameworks are processed using different normalization techniques, and their performance is evaluated using different fusion strategies. The robustness of the proposed frameworks is tested on publicly available databases, for example, AT&T (ORL) and Labeled Faces in the Wild (LFW). The experimental results show a significant improvement in the recognition accuracies of the proposed frameworks compared with their individual methods. In particular, the frameworks that combine face recognition methods with the devised A-LBP method improve performance significantly.
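The normalization and fusion steps can be sketched as follows; min-max and z-score normalization with a sum rule are shown as representative choices, not the paper's full set of strategies.

```python
# Sketch of score-level fusion: normalize each matcher's scores to a common
# scale, then fuse them per candidate with a sum rule.
import numpy as np

def min_max(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def z_score(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-12)

def sum_rule(*normalized_score_sets):
    """Fuse matchers by summing their normalized scores per candidate."""
    return np.sum(normalized_score_sets, axis=0)
```

For example, fusing an Eigenfaces score vector with an LBP score vector over the same candidate list is one `sum_rule(min_max(eig_scores), min_max(lbp_scores))` call; the candidate with the highest fused score is the match.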


Author(s):  
Almabrok Essa ◽  
Vijayan K. Asari

This paper presents an illumination-invariant face recognition system that uses directional features and modular histograms. The proposed Histogram of Oriented Directional Features (HODF) produces multi-region histograms for each face image, then concatenates these histograms to form the final feature vector. This feature vector is used to recognize the face image with the help of a k-nearest neighbors (KNN) classifier. The edge responses and the relationships among pixels are very important and play the main role in improving face recognition accuracy. Therefore, this work examines the effect on face recognition accuracy of different directional masks for detecting edge responses, such as Prewitt kernels, Kirsch masks, Sobel kernels, and Gaussian derivative masks. The performance of the proposed HODF algorithm is evaluated on several publicly available databases, and promising recognition rates are observed.
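A sketch of the multi-region directional-histogram idea, using Prewitt kernels and a 2x2 region grid for illustration; the full HODF descriptor is more elaborate than this.

```python
# Sketch: compute edge orientations with directional masks, histogram them
# per region, and concatenate the regional histograms into one feature vector.
import numpy as np

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T

def conv2(img, k):
    """Valid-mode 2D filtering with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

def directional_histogram(img, bins=8, grid=2):
    gx, gy = conv2(img, PREWITT_X), conv2(img, PREWITT_Y)
    ang = np.arctan2(gy, gx)                       # edge orientation per pixel
    h, w = ang.shape
    feats = []
    for i in range(grid):                          # per-region histograms...
        for j in range(grid):
            region = ang[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist, _ = np.histogram(region, bins=bins, range=(-np.pi, np.pi))
            feats.append(hist / max(region.size, 1))
    return np.concatenate(feats)                   # ...concatenated

```

Swapping `PREWITT_X`/`PREWITT_Y` for Sobel or Kirsch masks reproduces the mask comparison the paper describes, with everything downstream unchanged.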


2020 ◽  
Author(s):  
Ziaul Haque Choudhury

Biometrics is a rapidly developing technology that has been broadly applied in forensics, for example in criminal identification, secured access, and prison security. Biometric technology is essentially a pattern recognition system that acknowledges a person by verifying the legitimacy of a specific behavioral or physiological characteristic owned by that person. The face is one of the most commonly accepted biometrics, used by humans in visual interaction and for authentication. The challenges for face recognition systems arise from cosmetics applied to faces and from low-quality images. In this thesis, we propose two novel techniques for extracting facial features and recognizing faces when thick cosmetics are applied and when images are of low quality. Within face recognition technology, facial-mark identification is a distinctive task using soft biometrics, and facial-mark information can raise the face matching score and improve recognition performance. When thick cosmetics are applied, some facial marks become invisible or hidden. In the literature, the AAM (Active Appearance Model) and LoG (Laplacian of Gaussian) techniques are used to detect facial marks; however, to the best of our knowledge, existing facial-mark detection methods perform poorly, especially when thick cosmetics are applied to the face. A robust method is proposed to detect facial marks such as tattoos, scars, freckles, and moles. Initially, the active appearance model (AAM) is applied for facial feature detection. In addition to this prior model, the Canny edge detector is applied to detect facial-mark edges. Finally, SURF is used to detect the hidden facial marks covered by cosmetic items.
This method is shown to give high accuracy in detecting facial marks on cosmetic-applied faces. Another aspect studied is face recognition from low-quality images. Face recognition plays a major role in the biometric security environment, and secure authentication requires a robust methodology for recognizing and authenticating the human face. However, there are a number of difficulties in recognizing a face and authenticating a person perfectly, including low image quality due to sparse dark or light disturbances. To overcome such problems, powerful algorithms are required to filter the images and detect the face and facial marks. The technique consists largely of detecting the various facial marks in low-quality images corrupted by salt-and-pepper noise. Initially, an Adaptive Median Filter (AMF) is applied to filter the images. The filtered images are then processed to detect the primary facial features using the Active Shape Model (ASM) within the Active Appearance Model (AAM). Finally, features are extracted using the Gradient Location Orientation Histogram (GLOH) feature extractor. Experimental results on the CVL database (1000 images of 1000 subjects) and the CMU PIE database (2000 images of 2000 subjects) show that the use of soft biometrics improves face recognition performance, achieving 93 percent accuracy. A second experiment on an Indian face database with 1000 images achieved 95 percent accuracy.
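The salt-and-pepper filtering step can be sketched with a standard adaptive median filter; the window sizes here are typical defaults, not necessarily the thesis's settings.

```python
# Sketch of an adaptive median filter: grow the window until its median is
# not an impulse extreme, then replace the centre pixel only if it is one.
import numpy as np

def adaptive_median(img, max_win=7):
    img = np.asarray(img, dtype=float)
    out = img.copy()
    pad = max_win // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            med = img[i, j]
            for win in range(3, max_win + 1, 2):
                r = win // 2
                patch = padded[i+pad-r:i+pad+r+1, j+pad-r:j+pad+r+1]
                med, lo, hi = np.median(patch), patch.min(), patch.max()
                if lo < med < hi:                 # median is not an impulse
                    if not (lo < img[i, j] < hi): # centre pixel is an impulse
                        out[i, j] = med
                    break
            else:
                out[i, j] = med                   # window maxed out: use its median
    return out
```

Unlike a fixed-size median filter, this preserves fine detail in clean areas while still removing the sparse dark or light disturbances described above; the filtered image then feeds the ASM/AAM feature detection stage.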

