Multi-Descriptor Random Sampling for Patch-Based Face Recognition

2021 ◽  
Vol 11 (14) ◽  
pp. 6303
Author(s):  
Ismahane Cheheb ◽  
Noor Al-Maadeed ◽  
Ahmed Bouridane ◽  
Azeddine Beghdadi ◽  
Richard Jiang

While research into face recognition has increased massively, it remains a challenging problem due to conditions present in real life. This paper focuses on partial occlusion, a distortion inherently present in real face recognition applications, and proposes an approach to tackle it. First, face images are divided into multiple patches, and the local descriptors Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) are applied to each patch. Next, the resulting histograms are concatenated, and their dimensionality is reduced using Kernel Principal Component Analysis (KPCA). Finally, patches are randomly selected using the concept of random sampling to construct several sub-Support-Vector-Machine classifiers, and the results from these sub-classifiers are combined to generate the final recognition outcome. Experimental results on the AR face database and the Extended Yale B database show the effectiveness of the proposed technique.
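A minimal NumPy sketch of the patch-based pipeline described above. It is illustrative only, not the authors' code: nearest-centroid sub-classifiers stand in for the sub-SVMs, the HOG and KPCA steps are omitted, and all function names are my own.

```python
import numpy as np

def lbp_histogram(patch):
    """Basic 8-neighbour LBP: threshold each pixel's ring against its centre,
    then histogram the resulting 8-bit codes."""
    c = patch[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = patch[1 + dy:patch.shape[0] - 1 + dy, 1 + dx:patch.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

def patch_features(img, grid=4):
    """Divide the image into grid x grid patches; one LBP histogram per patch."""
    ph, pw = img.shape[0] // grid, img.shape[1] // grid
    return [lbp_histogram(img[i*ph:(i+1)*ph, j*pw:(j+1)*pw])
            for i in range(grid) for j in range(grid)]

def ensemble_predict(train_feats, train_labels, test_feats, n_classifiers=5, k=6, seed=0):
    """Each sub-classifier sees a random subset of patches (random sampling);
    nearest-centroid stands in for the paper's sub-SVMs. Majority vote decides."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_classifiers):
        idx = rng.choice(len(test_feats), size=k, replace=False)
        x = np.concatenate([test_feats[i] for i in idx])
        centroids = {}
        for lab in set(train_labels):
            rows = [np.concatenate([f[i] for i in idx])
                    for f, l in zip(train_feats, train_labels) if l == lab]
            centroids[lab] = np.mean(rows, axis=0)
        votes.append(min(centroids, key=lambda lab: np.linalg.norm(x - centroids[lab])))
    return max(set(votes), key=votes.count)
```

Because each sub-classifier uses only a random subset of patches, an occluded region corrupts some votes but rarely the majority, which is the intuition behind the random-sampling step.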

Author(s):  
Widodo Budiharto

Variation in illumination is one of the main challenges for face recognition: it has been shown that differences caused by illumination variation can be more significant than differences between individuals. Recognizing faces reliably across changes in pose and illumination with PCA has proved to be a much harder problem because the eigenfaces method compares raw pixel intensities. To solve this problem, this research proposes an online face recognition system using an improved PCA for a service robot in an indoor environment based on stereo vision. Test images are enhanced by generating random values that vary the intensity of the face images, and a program for online training is developed in which the test images are captured in real time from the camera. Varying the illumination of test images increases accuracy on the ITS face database to 95.5%, higher than the 95.4% obtained on the AT&T face database and the 72% obtained on the Indian face database. The results of this experiment will be further evaluated and improved in future work.
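A dependency-free sketch of the two ingredients above: random intensity perturbation to simulate illumination variation, and classic eigenfaces via SVD. The perturbation model (random gain and bias) is my own simple stand-in for the paper's random-value generation, not its exact scheme.

```python
import numpy as np

def augment_illumination(images, n_copies=3, scale=0.3, seed=0):
    """Simulate illumination variation by randomly rescaling and shifting
    pixel intensities (assumed to lie in [0, 1])."""
    rng = np.random.default_rng(seed)
    out = []
    for img in images:
        out.append(img)
        for _ in range(n_copies):
            gain = 1.0 + scale * (rng.random() - 0.5)
            bias = scale * (rng.random() - 0.5)
            out.append(np.clip(gain * img + bias, 0.0, 1.0))
    return out

def eigenfaces(images, n_components):
    """Classic eigenfaces: mean-centre flattened images, keep the top
    right singular vectors as the projection basis."""
    X = np.stack([im.ravel() for im in images])
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(img, mean, basis):
    return basis @ (img.ravel() - mean)
```

Training on the augmented set makes the eigenspace less sensitive to intensity changes, so a probe seen under different lighting still projects near its subject's gallery images.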


2021 ◽  
Vol 39 (4) ◽  
pp. 1190-1197
Author(s):  
Y. Ibrahim ◽  
E. Okafor ◽  
B. Yahaya

Manual grid-search tuning of machine learning hyperparameters is very time-consuming. To curb this problem, we propose using a genetic algorithm (GA) to select optimal hyperparameters for a radial-basis-function support vector machine (RBF-SVM): the regularization parameter C and the kernel parameter γ. The resulting optimal parameters were used to train face recognition models. To train the models, we independently extracted features from the ORL face image dataset using local binary patterns (handcrafted) and deep learning architectures (pretrained variants of VGGNet). The resulting features were passed as input to either a linear SVM or the optimized RBF-SVM. The results show that models combining the optimized RBF-SVM with deep learning or handcrafted features outperform models combining the linear SVM with the same features in most of the data splits. The study demonstrates that it is profitable to optimize the hyperparameters of an SVM to obtain the best classification performance.
Keywords: Face Recognition, Feature Extraction, Local Binary Patterns, Transfer Learning, Genetic Algorithm, Support Vector Machines.
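A sketch of the GA loop for searching over (C, γ). To keep it self-contained, a synthetic surrogate function stands in for the cross-validated RBF-SVM accuracy the paper would evaluate; the operators (tournament selection, uniform crossover, Gaussian mutation, elitism) are one common GA recipe, not necessarily the authors' exact configuration.

```python
import numpy as np

def genetic_search(fitness, bounds, pop_size=30, generations=40, seed=0):
    """Minimal real-valued GA with elitism. bounds = [(lo, hi), ...] per gene;
    here the genes would be (log10 C, log10 gamma)."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        new = [pop[fit.argmax()].copy()]               # elitism: keep the best
        while len(new) < pop_size:
            i, j = rng.integers(pop_size, size=2)      # binary tournament, parent 1
            p1 = pop[i] if fit[i] >= fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)      # binary tournament, parent 2
            p2 = pop[i] if fit[i] >= fit[j] else pop[j]
            mask = rng.random(len(bounds)) < 0.5       # uniform crossover
            child = np.where(mask, p1, p2)
            child = child + rng.normal(0.0, 0.05 * (hi - lo))  # Gaussian mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()]

# Surrogate standing in for cross-validated RBF-SVM accuracy over
# (log10 C, log10 gamma); its optimum sits at C = 10**1, gamma = 10**-2.
def surrogate_fitness(genes):
    return -((genes[0] - 1.0) ** 2 + (genes[1] + 2.0) ** 2)
```

In the paper's setting, `fitness` would train and cross-validate an RBF-SVM with the decoded hyperparameters, which is exactly why GA search is cheaper than an exhaustive grid.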


2008 ◽  
Vol 2008 ◽  
pp. 1-5 ◽  
Author(s):  
P. S. Hiremath ◽  
C. J. Prabhakar

We present symbolic kernel discriminant analysis (symbolic KDA) for face recognition in the framework of symbolic data analysis. Classical KDA extracts single-valued features to represent face images; such single-valued variables may fail to capture the variation of each feature across all images of the same subject, leading to loss of information. The symbolic KDA algorithm instead extracts the most discriminating nonlinear interval-type features, which optimally discriminate among the classes represented in the training set. The proposed method has been successfully tested for face recognition on two databases, the ORL database and the Yale face database. Its effectiveness is shown through comparative performance against popular face recognition methods such as the kernel Eigenface and kernel Fisherface methods. Experimental results show that symbolic KDA yields an improved recognition rate.
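To make "interval-type features" concrete, here is a small sketch of the symbolic representation itself: each subject is summarized per feature by the [min, max] interval observed across that subject's images, and a probe is matched by its distance to those intervals. This illustrates only the interval representation, not the full nonlinear symbolic KDA the paper derives.

```python
import numpy as np

def interval_features(samples):
    """Symbolic interval-type representation: per feature, the [min, max]
    interval observed across all images of one subject. Shape (d, 2)."""
    X = np.asarray(samples)
    return np.stack([X.min(axis=0), X.max(axis=0)], axis=1)

def interval_distance(x, interval):
    """Distance from a single-valued feature vector to an interval vector:
    zero inside the interval, otherwise distance to the nearest endpoint."""
    lo, hi = interval[:, 0], interval[:, 1]
    gap = np.maximum(lo - x, 0.0) + np.maximum(x - hi, 0.0)
    return np.linalg.norm(gap)

def classify(x, subject_intervals):
    """Assign the probe to the subject whose intervals it fits best."""
    return min(subject_intervals,
               key=lambda s: interval_distance(x, subject_intervals[s]))
```

A probe whose features fall anywhere inside a subject's observed range incurs zero cost, which is how the interval representation absorbs within-subject variation that a single-valued feature would penalize.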


Author(s):  
V. KABEER ◽  
N. K. NARAYANAN

This paper presents a novel biologically inspired, wavelet-based model for extracting facial features from face images. The design of the model is based on biological knowledge about the distribution of the light receptors, cones and rods, over the surface of the retina, and the way they are connected to nerve ends for pattern vision. A combination of classical wavelet decomposition and wavelet packet decomposition is used to simulate the functional model of cones and rods in pattern vision. The paper also describes face recognition experiments using the extracted features on the AT&T face database (formerly the ORL face database), which contains 400 face images of 40 different individuals. In the recognition stage, we use an artificial neural network (ANN) classifier: a feature vector of size 40 is formed for the face images of each person, and recognition accuracy is computed using the ANN classifier. The overall recognition accuracy obtained on the AT&T face database is 95.5%.
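A sketch of the classical-wavelet branch of such a feature extractor, using Haar filters and sub-band energies. This is a simplified illustration under my own assumptions: the paper additionally uses wavelet packet decomposition and its own feature layout, and the resulting vectors would feed the ANN classifier.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar wavelet transform: returns the approximation (LL)
    and detail (LH, HL, HH) sub-bands, each half the size of the input."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row-wise average
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row-wise difference
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def wavelet_features(img, levels=2):
    """Classical wavelet decomposition: recursively decompose the approximation
    band and keep the energy of every sub-band as a compact feature vector."""
    feats = []
    ll = img
    for _ in range(levels):
        ll, lh, hl, hh = haar_dwt2(ll)
        feats += [np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)]
    feats.append(np.mean(ll**2))
    return np.array(feats)
```

The low-frequency approximation band plays the role of coarse (rod-like) vision while the detail bands capture finer (cone-like) structure, which is the analogy the paper builds on.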


2019 ◽  
Vol 16 (10) ◽  
pp. 4309-4312
Author(s):  
Rajeshwar Moghekar ◽  
Sachin Ahuja

Face recognition from videos is a challenging problem, as the captured face images vary in pose, occlusion, blur, and resolution. It has many applications, including security monitoring and authentication. In our work we use a subset of the Indian Movie Face Database (IMFDB), a collection of actors' face images retrieved from movies/videos that vary in blur, pose, noise, and illumination. Our work focuses on pre-trained deep learning models: we apply transfer learning to the features extracted from the CNN layers and then compare this with a fine-tuned model. The results show an accuracy of 99.89% when using the CNN as a feature extractor and 96.3% when fine-tuning VGG-Face. The fine-tuned VGG-Face network learned more generic features than its transfer-learning counterpart. Transfer learning applied to VGG16 achieved 93.9%.
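The transfer-learning recipe above (frozen pretrained CNN as feature extractor, small trainable head on top) can be sketched without any deep learning framework. In this stand-in, a fixed random ReLU projection plays the role of the frozen VGG-Face/VGG16 layers and a nearest-centroid rule plays the role of the trainable head; every class and function name here is my own.

```python
import numpy as np

class FrozenExtractor:
    """Stand-in for a pretrained CNN used as a frozen feature extractor
    (the paper uses VGG-Face / VGG16 up to a chosen layer)."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)
    def __call__(self, x):
        return np.maximum(x @ self.W, 0.0)   # ReLU features; weights never updated

def train_head(extractor, images, labels):
    """Transfer learning: only the lightweight head (here a nearest-centroid
    classifier) is fitted, on features from the frozen extractor."""
    feats = extractor(np.stack([im.ravel() for im in images]))
    labels = np.array(labels)
    return {lab: feats[labels == lab].mean(axis=0) for lab in set(labels.tolist())}

def predict(extractor, head, img):
    f = extractor(img.ravel()[None, :])[0]
    return min(head, key=lambda lab: np.linalg.norm(f - head[lab]))
```

Fine-tuning, by contrast, would also update the extractor's weights on IMFDB; the paper's comparison is precisely between these two regimes.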


2019 ◽  
Vol 15 (5) ◽  
pp. 155014771984527
Author(s):  
Jianhu Zheng ◽  
Jinshuan Peng

In order to facilitate effective crime prevention and to issue timely warnings for public security, it is important to pinpoint the accurate position of particular pedestrians in crowded areas. Face recognition is the most popular method to detect and track pedestrian movement. During the face recognition process, feature classification ability and reliability are determined by the feature extraction method. The primary challenge is to obtain a stable result while the targeted face is subject to varying conditions, particularly of illumination. To address this issue, we propose a novel pedestrian detection algorithm with multisource face images, built around a face recognition algorithm based on conjugate orthonormalized partial least-squares regression analysis under a complex lighting environment. Statistical learning theory is a branch of machine learning that is especially applicable to small samples. Building on the theoretical principles used to solve small-sample statistical problems, we develop a new hypothesis and integrate conjugate orthonormalized partial least-squares regression with a revised support vector machine algorithm to solve the facial recognition problem. Experimental results show that our algorithm outperforms other state-of-the-art methods, both numerically and visually.
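For readers unfamiliar with the PLS family the method builds on, here is plain PLS1 via the NIPALS iteration: it extracts components maximizing covariance between the predictors and the response, then deflates. This is the standard algorithm only, not the authors' conjugate orthonormalized variant or their SVM coupling.

```python
import numpy as np

def pls1(X, y, n_components):
    """Plain PLS1 via NIPALS. Returns regression coefficients in the original
    X space plus the centring constants."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                      # direction of max covariance with y
        w /= np.linalg.norm(w)
        t = Xk @ w                         # scores
        tt = t @ t
        p = Xk.T @ t / tt                  # X loadings
        q = (yk @ t) / tt                  # y loading
        Xk = Xk - np.outer(t, p)           # deflate X
        yk = yk - q * t                    # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.solve(P.T @ W, np.array(Q))
    return B, x_mean, y_mean

def pls_predict(B, x_mean, y_mean, X):
    return (X - x_mean) @ B + y_mean
```

In the face recognition setting, X would hold high-dimensional face features and y an identity encoding, so the extracted directions are the ones most predictive of identity rather than of illumination.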


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Chunnian Fan ◽  
Shuiping Wang ◽  
Hao Zhang

This paper presents a novel Gabor-phase-based illumination invariant extraction method that aims to eliminate the effect of varying illumination on face recognition. First, varying illumination on face images is normalized, which reduces its effect to some extent. Second, a set of 2D real Gabor wavelets with different orientations is used for image transformation, and the multiple Gabor coefficients are combined into one whole, considering both spectrum and phase. Finally, the illumination invariant is obtained by extracting the phase feature from the combined coefficients. Experimental results on the Yale B and CMU PIE face databases show that our method obtains a significant improvement over other related methods for face recognition under large illumination variation.
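A simplified sketch of Gabor phase extraction and why it resists illumination change: filtering is linear, so multiplying the image by a positive illumination gain scales the complex responses without altering their phase. This uses a complex Gabor kernel and FFT convolution for brevity; the paper's exact real-wavelet construction and combination rule differ.

```python
import numpy as np

def gabor_kernel(size=15, wavelength=4.0, theta=0.0, sigma=3.0):
    """Complex 2D Gabor kernel: Gaussian envelope times a complex sinusoid
    oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * xr / wavelength)

def gabor_phase(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Filter at several orientations (circular convolution via FFT), combine
    the complex responses, and keep only the phase."""
    H, W = img.shape
    total = np.zeros((H, W), dtype=complex)
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        kp = np.zeros((H, W), dtype=complex)
        kp[:k.shape[0], :k.shape[1]] = k      # zero-pad kernel to image size
        total += np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp))
    return np.angle(total)
```

Because a global gain cancels in `np.angle`, the extracted phase map is unchanged under uniform brightness scaling, which is the core of the illumination-invariance argument.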


Author(s):  
Yu. V. Vizilter ◽  
V. S. Gorbatsevich ◽  
A. S. Moiseenko

The paper proposes an architecture and training method for a deep convolutional neural network that performs simultaneous face detection and recognition. The implemented approach combines ideas from the SSD (Single Shot Detector) and Faster R-CNN (Region Proposal Convolutional Neural Network) algorithms. Face detection is performed similarly to single-stage detection algorithms, and then a biometric template is built by employing RoI (Region of Interest) pooling layers in a separate branch of the neural network. The training process includes three stages: pretraining the basic CNN for face recognition on face images, fine-tuning with RoI pooling on inpainted face images, and adding the SSD layers and fine-tuning on face detection. At the last stage, training is performed using shared-layers technology on two databases simultaneously. The main feature of the algorithm is its high processing speed, which does not depend on the number of faces in the input image: for example, with ResNet-34 as the core architecture, detecting faces and building biometric templates on an image with 100 faces takes less than 13 ms. For training we use CASIA-WebFace for the face recognition task and WIDER FACE for the face detection task. Testing is performed on FDDB (Face Detection Data Set and Benchmark), since this database is closer to practical applications than WIDER FACE. As the main practical task the developed method is intended for is face re-identification, we use the FEI Face Database for testing face recognition quality. We obtain TPR (True Positive Rate) = 0.928@1000 on FDDB and FAR (False Acceptance Rate) = 0.03309 at FRR (False Rejection Rate) = 10⁻⁴. Therefore, the proposed algorithm can solve face detection and re-identification tasks in real time with any number of faces in the input image.
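The RoI pooling step that links detection to template building can be sketched in a few lines: each detected box is cropped from the shared feature map and max-pooled onto a fixed grid, so every face yields a descriptor of the same size regardless of box size. This is a generic illustration (single-channel map, integer boxes at least as large as the output grid), not the paper's layer.

```python
import numpy as np

def roi_pool(feature_map, roi, output_size=(2, 2)):
    """Max-pool the region roi = (y0, x0, y1, x1) of a 2D feature map onto an
    output_size grid, producing a fixed-size descriptor per detection."""
    y0, x0, y1, x1 = roi
    region = feature_map[y0:y1, x0:x1]
    gh, gw = output_size
    ys = np.linspace(0, region.shape[0], gh + 1).astype(int)  # row cell edges
    xs = np.linspace(0, region.shape[1], gw + 1).astype(int)  # column cell edges
    out = np.empty(output_size)
    for i in range(gh):
        for j in range(gw):
            out[i, j] = region[ys[i]:ys[i+1], xs[j]:xs[j+1]].max()
    return out
```

Because the backbone runs once and only this cheap pooling repeats per box, the per-image cost stays nearly independent of the number of faces, which is the speed property the paper highlights.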

