Application of Multiscale Facial Feature Manifold Learning Based on VGG-16

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Huilin Ge ◽  
Zhiyu Zhu ◽  
Runbang Liu ◽  
Xuedong Wu

Purpose. To address the problems of small face image sample sets, high dimensionality, weak structure, missing labels, and the difficulty of tracking and recapturing faces in security videos, we propose a multiscale facial feature manifold (MSFFM) algorithm based on VGG16. Method. We first build the VGG16 architecture to obtain face features at different scales and construct a multiscale face feature manifold whose dimensions are the face features at those scales. Recognition rate, accuracy, and running time are then used to compare the performance of VGG16, LeNet-5, and DenseNet on the same database. Results. The comparative experiments show that VGG16 achieves the highest recognition rate and accuracy of the three networks: a recognition rate of 97.588% and an accuracy of 95.889%. Its running time is only 3.5 seconds, which is 72.727% faster than LeNet-5 and 66.666% faster than DenseNet. Conclusion. The proposed model addresses a key problem in face detection and tracking for the public security field, predicts the position of the face target image in the time-dimension manifold space, and improves the efficiency of face detection.
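A minimal sketch of the multiscale-feature idea behind MSFFM: features pooled at several spatial scales are concatenated into one vector, a point in the multiscale feature space. The single-channel feature map and the scale set here are hypothetical stand-ins for actual VGG16 layer activations, not the authors' implementation.

```python
import numpy as np

def pool(feat, k):
    """Average-pool a square feature map by factor k (size must divide by k)."""
    h, w = feat.shape
    return feat.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multiscale_descriptor(feat, scales=(1, 2, 4)):
    """Concatenate pooled versions of one feature map at several scales,
    a stand-in for stacking VGG16 features taken from different layers."""
    return np.concatenate([pool(feat, s).ravel() for s in scales])

feat = np.random.rand(8, 8)          # hypothetical single-channel feature map
desc = multiscale_descriptor(feat)   # length 64 + 16 + 4 = 84
```

Each face then becomes one such vector, and nearby vectors on the manifold correspond to similar faces across scales.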

2013 ◽  
Vol 278-280 ◽  
pp. 1211-1214
Author(s):  
Jun Ying Zeng ◽  
Jun Ying Gan ◽  
Yi Kui Zhai

A fast sparse-representation face recognition algorithm based on a Gabor dictionary and the SL0 norm is proposed in this paper. Gabor filters, which effectively extract local directional features of an image at multiple scales, are less sensitive to variations in illumination, expression, and camouflage. The SL0 algorithm is fast and requires fewer measurements: it approximates the L0 norm with a continuously differentiable function and reconstructs the sparse signal by minimizing this approximation. The proposed algorithm extracts Gabor features to obtain a local feature face, reduces the dimensionality by principal component analysis, and performs fast sparse classification with the SL0 norm. Under camouflage conditions, the algorithm operates on blocks of the Gabor facial feature, which speeds up the formation of the Gabor dictionary. Experimental results on the AR face database show that the proposed algorithm improves recognition speed and recognition rate to some extent and generalizes well to face recognition, even with only a few training images per class.
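The SL0 idea described above can be sketched directly: replace the L0 norm with a smooth Gaussian surrogate, anneal its width sigma, and project back onto the constraint set after each gradient step. The step size, decay rate, and test dictionary below are illustrative choices, not the paper's settings.

```python
import numpy as np

def sl0(A, x, sigma_min=0.01, decay=0.5, mu=2.0, inner=3):
    """Smoothed-L0 sparse recovery: maximize sum(exp(-s^2 / 2 sigma^2))
    for decreasing sigma, projecting onto {s : A s = x} each step."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                      # minimum-L2 feasible starting point
    sigma = 2.0 * np.abs(s).max()
    while sigma > sigma_min:
        for _ in range(inner):
            delta = s * np.exp(-s**2 / (2 * sigma**2))  # surrogate gradient
            s = s - mu * delta
            s = s - A_pinv @ (A @ s - x)                # feasibility projection
        sigma *= decay
    return s

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 20))       # hypothetical Gabor/PCA dictionary
s0 = np.zeros(20)
s0[[3, 11]] = [1.0, -2.0]               # 2-sparse ground truth
x = A @ s0
s_hat = sl0(A, x)
```

In the paper's setting the columns of `A` would be Gabor features of training faces after PCA, and the largest coefficients of `s_hat` indicate the matching class.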


2020 ◽  
Vol 17 (5) ◽  
pp. 2342-2348
Author(s):  
Ashutosh Upadhyay ◽  
S. Vijayalakshmi

In the field of computer vision, face detection algorithms have achieved high accuracy, but for real-time applications it remains a challenge to balance accuracy and efficiency: gaining accuracy increases the computational cost of handling large data sets. This paper proposes a half-face detection algorithm to address the efficiency of face detection. A full-face detection algorithm trains on the complete face data set, which incurs more computational cost. To reduce this cost, the proposed model captures the features of half of the face, assuming that the human face is symmetric about the vertical axis passing through the nose, and trains the system on the reduced half-face features. The proposed algorithm extracts Local Binary Pattern (LBP) features and trains the model with an AdaBoost classifier. Performance is presented in terms of accuracy, i.e., True Positive Rate (TPR) and False Positive Rate (FPR), and face recognition time complexity.
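The half-face LBP step can be illustrated in a few lines: slice the left half of the face image (halving the feature cost, per the symmetry assumption) and compute a standard 3x3 LBP code histogram. The image size and random content here are placeholders; the paper's actual preprocessing is not specified.

```python
import numpy as np

def lbp_histogram(img):
    """3x3 LBP codes over an image, returned as a 256-bin histogram."""
    c = img[1:-1, 1:-1]                        # center pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist

face = np.random.randint(0, 256, (32, 32))   # hypothetical grayscale face crop
half = face[:, :16]                          # left half, exploiting symmetry
h = lbp_histogram(half)                      # feature vector for AdaBoost
```

The histogram (or block-wise histograms) would then be the weak-learner input to the AdaBoost classifier the paper mentions.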


Face detection and recognition are becoming challenging due to the wide variety of faces and the complexity of image noise and backgrounds. In this paper we use C# and the Haar cascade algorithm to detect faces. First, an image is captured with a web camera and stored in a database; when the person later appears in the frame, that person's name is displayed. The work combines both face detection and face recognition in C#, which was somewhat difficult for us, and the proposed method gives good output and a good recognition rate. A limitation of the paper is that the name is not displayed above the face; future work will address this. While developing the code we found some sample codes in Python, but those were only basic programs, so this paper develops a full solution in C#. Finding the XML file for the haarcascade frontal-face detector posed some problems and required a bit of research, and the codes for face detection and face recognition, found in different places, had to be combined with some difficulty. These difficulties were resolved: the solution has been successfully implemented, the code runs fully, and the output has been successfully achieved.
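Haar cascades of the kind loaded from that XML file evaluate rectangle-difference features in constant time via an integral image. The sketch below shows that core mechanism (not the paper's C# code): a summed-area table and one two-rectangle feature.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row and column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of pixels in the h-by-w rectangle with top-left corner (y, x)."""
    return ii[y+h, x+w] - ii[y, x+w] - ii[y+h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar feature: left half minus right half."""
    return rect_sum(ii, y, x, h, w // 2) - rect_sum(ii, y, x + w // 2, h, w // 2)

img = np.ones((8, 8), dtype=np.int64)
img[:, 4:] = 3                       # brighter right half
ii = integral_image(img)
val = haar_two_rect(ii, 0, 0, 8, 8)  # 8*4*1 - 8*4*3 = -64
```

A cascade chains thousands of such features with learned thresholds, rejecting non-face windows early; the XML file stores those learned parameters.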


Author(s):  
Manoj Prabhakaran Kumar ◽  
Manoj Kumar Rajagopal

This chapter proposes a facial expression system using the entire facial feature set of a geometric deformable model and a classifier, in order to analyze the set of prototype expressions from frontal macro facial expressions. In the training phase, face detection and tracking are carried out by a constrained local model (CLM) on a standardized database. Using the CLM grid nodes, facial feature extraction yields the displacement of the entire feature vector over 66 feature points. The feature vector displacement is fed to a bi-linear support vector machine (SVM) classifier to evaluate the facial expression and build the trained model. The testing phase proceeds similarly, and its outcome is compared with the trained model for human emotion identification. Two normalization techniques and hold-out validation are applied in both phases. With this model, the overall validation performance is higher than that of existing models.
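The displacement feature described above is simple to sketch: subtract the 66 tracked landmark positions of a neutral frame from those of the expression frame and flatten the result into one vector for the SVM. The landmark values below are synthetic; the mouth-region index range is a hypothetical illustration, not the chapter's mapping.

```python
import numpy as np

def displacement_features(neutral, apex):
    """Per-landmark (dx, dy) displacement between a neutral frame and the
    expression apex, flattened into one feature vector for a linear SVM."""
    return (apex - neutral).ravel()

neutral = np.zeros((66, 2))            # 66 CLM grid nodes as (x, y)
apex = neutral.copy()
apex[48:, 1] += 3.0                    # hypothetical mouth landmarks moving down
feat = displacement_features(neutral, apex)   # length 66 * 2 = 132
```

Normalizing `feat` (e.g., by inter-ocular distance) before classification corresponds to the normalization techniques the chapter applies in both phases.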


2013 ◽  
Vol 433-435 ◽  
pp. 405-411
Author(s):  
Rong Bing Huang ◽  
Xiao Qun Liu

To alleviate the effect of illumination variations and improve the face recognition rate, this paper proposes a novel non-statistical face representation method called Center-Symmetric Local Nonsubsampled Contourlet Transform Binary Pattern Histogram Sequence (CS-LNBPHS). The method first applies the nonsubsampled contourlet transform (NSCT) to decompose a face image, obtaining NSCT coefficients at different scales and orientations. The CS-LBP operator is then used to derive CS-LBP feature maps from the NSCT coefficients. Each feature map is divided into several blocks, and the histograms calculated over the blocks are concatenated to form the face features. Experimental results on the YaleB and ORL face databases show the validity of the proposed approach, especially under variations in illumination, facial expression, and position.
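The CS-LBP operator itself is compact enough to sketch: instead of comparing each of the 8 neighbors to the center pixel (as plain LBP does), it compares the 4 center-symmetric neighbor pairs, yielding a 4-bit code. This sketch applies it to a raw image for simplicity; in the paper it is applied to NSCT coefficient maps.

```python
import numpy as np

def cs_lbp(img, T=0):
    """Center-symmetric LBP: compare the 4 opposite neighbor pairs on each
    pixel's 3x3 ring, giving a 4-bit code in [0, 16)."""
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    H, W = img.shape
    n = [img[1+dy:H-1+dy, 1+dx:W-1+dx].astype(np.int32) for dy, dx in offsets]
    codes = np.zeros((H - 2, W - 2), dtype=np.uint8)
    for i in range(4):
        codes |= ((n[i] - n[i + 4] > T).astype(np.uint8) << i)
    return codes

img = np.random.randint(0, 256, (16, 16))   # stand-in for one NSCT subband
codes = cs_lbp(img)
```

With only 16 possible codes (versus 256 for LBP), block histograms over the feature maps stay short, which keeps the concatenated histogram sequence compact.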


Author(s):  
Cahyo Darujati ◽  
Supeno Mardi Susiki Nugroho ◽  
Deny Kurniawan ◽  
Mochamad Hariadi

Facial recognition is one of the most important advancements in image processing. An important task is to build an automated framework with the same capacity as humans for recognizing faces. The face is a complex 3D graphical model, and constructing a computational model of it is challenging. This paper presents a facial detection technique focused on coding and decoding the data via the facial-feature-object approach; one of the most natural and common methods for this is principal component analysis (PCA). This approach transforms the face features into a minimal set of basic attributes, which are the principal components of the original training image collection (the training set). The proposed technique combines the PCA system with the identification of components using a feed-forward neural network (NN). The experiment proves that recognition of a deformed 3D face is feasible. Taking into account almost all forms of feature extraction and engineering, the NN yields a recognition score of 95%.
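The PCA front end described above reduces each face vector to a handful of coefficients before the feed-forward network sees it. A minimal sketch with synthetic data (the dimensions and `k` are arbitrary, and the NN stage is omitted):

```python
import numpy as np

def pca_basis(X, k):
    """Mean and top-k principal components of row-stacked face vectors X.
    SVD of the centered data gives the principal directions directly."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 100))       # 20 hypothetical face vectors
mean, basis = pca_basis(X, k=5)
proj = (X - mean) @ basis.T              # 5-D codes fed to a feed-forward NN
```

The low-dimensional `proj` rows are what the NN would be trained on, which is why the combination stays cheap even for larger images.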


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Tianping Li ◽  
Hongxin Xu ◽  
Hua Zhang ◽  
Honglin Wan

How to accurately reconstruct a 3D model of the human face is a challenging problem in computer vision. Due to the complexity of face reconstruction and the diversity of face features, most existing methods aim to reconstruct a smooth face model while ignoring face details. In this paper, a novel deep learning-based face reconstruction method is proposed. It contains two modules: initial face reconstruction and face detail synthesis. In the initial face reconstruction module, a neural network detects the facial feature points and the angle of the face pose, and a 3D Morphable Model (3DMM) reconstructs the rough shape of the face model. In the face detail synthesis module, a Conditional Generative Adversarial Network (CGAN) synthesizes a displacement map, which provides texture features rendered onto the reconstructed face surface to reflect the face details. The proposal is evaluated on the FaceScape dataset in experiments and achieves better performance than other current methods.
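The 3DMM stage is a linear model: a face shape is the mean shape plus a weighted sum of basis shapes, with the weights fitted from the detected landmarks. The toy dimensions and orthonormal basis below are illustrative only; real 3DMMs have tens of thousands of vertices and learned bases.

```python
import numpy as np

# A 3D Morphable Model expresses a face as mean shape plus a linear
# combination of shape bases; toy dimensions here (5 vertices, 3 bases).
n_verts, n_id = 5, 3
mean_shape = np.zeros(3 * n_verts)            # flattened (x, y, z) per vertex
id_basis = np.eye(3 * n_verts)[:, :n_id]      # hypothetical orthonormal basis
alpha = np.array([0.5, -1.0, 2.0])            # identity coefficients to fit

shape = mean_shape + id_basis @ alpha         # reconstructed rough face shape
```

The CGAN's displacement map then perturbs this smooth `shape` along vertex normals to add the fine detail the linear model cannot express.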


Author(s):  
Priyanka Agrawal

The face is seen as a key component of the human body, and humans utilise it to identify one another. Face detection in video refers to detecting a person's face in a video sequence, while face tracking refers to following that face throughout the video. Face detection and tracking have become widely researched problems due to applications such as video surveillance systems and identifying criminal activity. However, working with videos is difficult due to problems such as bad illumination, low resolution, and atypical posture, among others. It is critical to produce a fair analysis of various tracking and detection strategies in order to fulfil the goal of video tracking and detection. Closed-circuit television (CCTV) technology has had a significant impact on how crimes are investigated and solved, with CCTV footage used to review crime scenes. CCTV systems, however, only offer footage and cannot analyse it. In this research, we propose a system that can be integrated with CCTV footage or any other video input, such as a webcam, to detect, recognise, and track a person of interest. Our system follows people as they move through a space and can detect and recognise human faces. It enables video analytics, allowing existing cameras to be combined with a system that recognises individuals and tracks their activities over time. It may be used for remote surveillance and can be integrated as a component into video analytics software and CCTV security solutions, for example on college campuses, in offices, and in shopping malls.
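The track-across-frames step of such a pipeline is often a simple data-association loop: match each new face detection to the nearest existing track, or open a new track ID. This nearest-centroid sketch is a generic illustration, not the paper's tracker, and ignores occlusion and track deletion.

```python
import math

class CentroidTracker:
    """Minimal nearest-centroid tracker: assign each new face detection to
    the closest existing track within max_dist, else open a new track ID."""
    def __init__(self, max_dist=50.0):
        self.tracks = {}        # track id -> last known (x, y)
        self.next_id = 0
        self.max_dist = max_dist

    def update(self, detections):
        assigned = []
        for (x, y) in detections:
            best, best_d = None, self.max_dist
            for tid, (tx, ty) in self.tracks.items():
                d = math.hypot(x - tx, y - ty)
                if d < best_d:
                    best, best_d = tid, d
            if best is None:                   # no track close enough
                best = self.next_id
                self.next_id += 1
            self.tracks[best] = (x, y)
            assigned.append(best)
        return assigned

t = CentroidTracker()
ids1 = t.update([(10, 10), (200, 50)])   # two new faces -> IDs 0 and 1
ids2 = t.update([(12, 11), (205, 52)])   # same faces, slightly moved
```

A production system would add recognition embeddings to the association cost and prune stale tracks, but the ID-persistence logic is the same.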


2014 ◽  
pp. 42-49
Author(s):  
Agata Manolova ◽  
Krasimir Tonchev

In this paper we present a comparative analysis of two algorithms for image representation, applied to the recognition of 3D face scans in the presence of facial expressions. We begin with processing of the input point cloud based on curvature analysis and range image representation to achieve a unique representation of the face features. Then, subspace projection using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) is performed. Finally, classification with different classifiers is performed over a 3D face scan dataset of 61 subjects with 7 scans per subject (427 scans): two "frontal", one "look-up", one "look-down", one "smile", one "laugh", and one "random expression". The experimental results show a high recognition rate for the chosen database and demonstrate the effectiveness of the proposed 3D image representations and subspace projection for 3D face recognition.
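The range-image step converts the unordered point cloud into a regular 2.5D grid that PCA/LDA can consume. A minimal sketch under simplifying assumptions (uniform synthetic points, nearest-cell binning, largest-z-wins depth; the paper's resampling is more careful):

```python
import numpy as np

def range_image(points, grid=8):
    """Project a 3D point cloud onto an x-y grid, keeping the largest z
    per cell: a simple 2.5D range-image face representation."""
    img = np.full((grid, grid), -np.inf)
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    idx = ((xy - lo) / (hi - lo + 1e-9) * (grid - 1)).astype(int)
    for (i, j), z in zip(idx, points[:, 2]):
        img[j, i] = max(img[j, i], z)
    return img

rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, (500, 3))   # hypothetical face scan point cloud
img = range_image(pts)
```

Flattening such range images row-stacked over the 427 scans gives the data matrix on which the PCA and LDA projections are compared.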


Author(s):  
Jungong Han ◽  
Lykele Hazelhoff ◽  
Peter H.N. de With

Prematurely born infants are observed in a Neonatal Intensive Care Unit (NICU) for medical treatment. These infants are nursed in an incubator, where their vital body functions such as heart rate, respiration, blood pressure, oxygen saturation, and temperature are continuously monitored. However, the existing monitoring system lacks any measurement of the neonate's visual expression, so valuable information about the well-being of the patient (e.g., pain and discomfort) may pass unnoticed. This chapter aims at designing a prototype of an automated video monitoring system for detecting discomfort in newborns by analyzing their facial expression. The system consists of several algorithmic components, ranging from face detection, ROI determination, and facial feature extraction to behavior stage classification. To further adapt the system to the real hospital environment, the authors also address the problem of locating face regions under varying lighting conditions; to this end, an adaptive face detection technique based on gamut mapping is presented. The authors have evaluated the prototype system on recordings of a healthy newborn under different conditions and show that the algorithm operates with approximately 88% accuracy.
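Lighting-robust face localization of this kind typically starts from chromaticity rather than raw RGB, since normalized color discards much of the intensity variation. The thresholds and rule below are a generic normalized-rgb skin classifier offered as a simple stand-in for the chapter's gamut-mapping technique, not its actual method.

```python
import numpy as np

def skin_mask(rgb):
    """Classify pixels as skin in normalized-rgb space, which discards
    overall intensity and so tolerates some lighting variation."""
    s = rgb.sum(axis=-1, keepdims=True).astype(float) + 1e-9
    r = rgb[..., 0] / s[..., 0]
    g = rgb[..., 1] / s[..., 0]
    return (r > 0.35) & (r < 0.55) & (g > 0.25) & (g < 0.4)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (180, 120, 90)          # skin-like chromaticity
img[1, 1] = (40, 90, 200)           # background blue
mask = skin_mask(img)               # True only at the skin-like pixel
```

The detected skin region would then seed the face ROI from which the facial features for discomfort classification are extracted.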

