Novel Descriptors for Effective Recognition of Face and Facial Expressions

2020 ◽  
Vol 34 (5) ◽  
pp. 521-530
Author(s):  
Farid Ayeche ◽  
Adel Alti

In this paper, we present a face recognition approach based on extended Histogram of Oriented Gradients (HOG) descriptors that extract facial-expression features for classifying both faces and facial expressions. The approach determines directional codes on the face image from edge response values to build the feature vector, whose size is then reduced to improve the performance of the Support Vector Machine (SVM) classifier. Experiments are conducted on two public datasets: JAFFE for facial expression recognition and YALE for face recognition. Experimental results show that the proposed descriptor achieves a recognition rate of 92.12%, with execution times ranging from 0.4 s to 0.7 s across all evaluated databases, compared with existing works. These experiments demonstrate both the effectiveness and the efficiency of the proposed descriptor.
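The descriptor pipeline above can be sketched in miniature: compute edge responses in several directions, keep the code of the strongest response at each pixel, and histogram those codes into a compact feature vector for the SVM. The kernels, window size, and toy image below are illustrative assumptions, not the paper's exact extended-HOG masks.

```python
import numpy as np

# Four simple directional kernels (gradients along x, 45 degrees, y, 135
# degrees); an illustrative stand-in for the paper's directional masks.
KERNELS = [
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),   # gradient along x
    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),   # 45 degrees
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),   # gradient along y
    np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], float),   # 135 degrees
]

def conv2_valid(img, k):
    """Naive valid-mode 2-D correlation, enough for the sketch."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def directional_code_histogram(img):
    """Feature vector: histogram of the dominant directional code per pixel."""
    responses = np.stack([conv2_valid(img, k) for k in KERNELS])
    codes = np.argmax(np.abs(responses), axis=0)   # code in 0..3 per pixel
    hist = np.bincount(codes.ravel(), minlength=len(KERNELS)).astype(float)
    return hist / hist.sum()                       # normalized histogram

# Toy "face": intensity rises left to right, so the x-gradient code dominates.
face = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
feat = directional_code_histogram(face)
```

The normalized histogram (here 4 bins) would then be the feature vector fed to the SVM; the paper's descriptor uses more directions and a subsequent size reduction.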

Face recognition is one of the hot topics in the current world and a popular subject of computer studies. In today's networked society, with its pervasive access to digital data, face recognition is gaining ever more attention. A facial recognition system is a biometric assessment of a human face. Many facial recognition techniques depend on the extraction of facial expressions; one of these is 3D facial recognition, and fusing such techniques is difficult. Preprocessing steps for image recognition aim to extract only expression-specific characteristics from the face and to avoid downstream problems in a convolutional neural network. Tools such as LBP and Taylor's theorem can also be used to model face recognition. Facial recognition can likewise be deployed on robots, in particular cloud robots, where the robot performs its functions while sharing data between servers and devices. Seven fundamental expressions are identified and classified: happiness, shock, fear, disgust, sadness, rage, and a neutral state. The recognition rate so far is close to the expected level, but efforts to improve it continue. To raise the recognition rate of facial image recognition, emotions are selected by a dynamic Bayesian network technique to depict the development of facial awareness alongside the various emotional operations of facial expressions. ICCA techniques involve multivariate sets of distinct facial features, which may include the eyes, nose, and mouth.
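Of the tools mentioned above, LBP is the most concrete; a minimal sketch of the plain 3x3 Local Binary Pattern operator (the textbook version, not any specific paper's variant) looks like this:

```python
import numpy as np

# Clockwise neighbour offsets starting from the top-left of the 3x3 ring.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_codes(img):
    """8-bit LBP code for every interior pixel: each neighbour contributes
    one bit, set when the neighbour is at least as bright as the centre."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(OFFSETS):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbour >= centre) << bit).astype(np.uint8)
    return codes

# Uniform patch: every neighbour equals the centre, so all bits are set.
flat = np.full((5, 5), 7.0)
codes = lbp_codes(flat)
```

Histograms of these codes over image blocks are the usual texture feature for face modelling.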


2014 ◽  
Vol 543-547 ◽  
pp. 2350-2353
Author(s):  
Xiao Yan Wan

In order to extract the expression features of critically ill patients and realize computer-based intelligent nursing, an improved facial expression recognition method based on the active appearance model (AAM) is proposed. A support vector machine (SVM) is used for facial expression recognition, the AAM-based face recognition model structure is designed, and an attribute reduction algorithm from rough set theory with affine transformation is introduced to remove invalid and redundant feature points. The expressions of critically ill patients are then classified and recognized with the SVM. Face image attitudes are adjusted, improving the self-adaptive performance of facial expression recognition across patient poses. The new method overcomes, to a certain extent, the effect of patient pose on the recognition rate, raising the highest average recognition rate by about 7%. Intelligent monitoring and nursing of critically ill patients are thus realized through computer vision, enhancing nursing quality and ensuring timely treatment.
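The goal of the reduction step, discarding feature points that carry no new information, can be illustrated with a much simpler correlation-based filter. This is an illustrative stand-in, not the paper's rough-set attribute reduction:

```python
import numpy as np

def drop_redundant(features, threshold=0.98):
    """Greedily keep feature columns; drop any column whose absolute
    correlation with an already-kept column exceeds the threshold."""
    corr = np.abs(np.corrcoef(features, rowvar=False))
    keep = []
    for j in range(features.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
# Column 1 is an affine copy of column 0, hence redundant.
data = np.column_stack([x[:, 0], x[:, 0] * 2.0 + 1e-6, x[:, 1], x[:, 2]])
kept = drop_redundant(data)
```

A rough-set reduct additionally guarantees that the kept attributes preserve the classification, which a plain correlation filter does not.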


2012 ◽  
Vol 224 ◽  
pp. 485-488
Author(s):  
Fei Li ◽  
Yuan Yuan Wang

Abstract: In order to solve the problem that images in face recognition software are easily copied, an algorithm combining image features with a digital watermark is presented in this paper. Image features of the adjacent blocks are embedded into the face image as the watermark information, and the original face image is not needed when recovering the watermark. The integrity of a face image can thus be confirmed, and the algorithm can detect whether a face image is the original and identify whether it has been attacked with malicious aims such as tampering, replacement, or illegal additions. Experimental results show that the algorithm, with good invisibility and excellent robustness, does not interfere with the face recognition rate, and it can locate the specific tampered region of a face image.
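The blind, block-wise scheme can be sketched as follows: derive a feature from each block's neighbour and hide it in the block's least significant bits, so tampering with one block breaks the check without needing the original image. Block size, the 1-bit feature, and the neighbour layout here are hypothetical simplifications of the paper's algorithm:

```python
import numpy as np

BLOCK = 4  # toy block size

def block_feature(block):
    """1-bit feature of a block: the high bit of its mean intensity."""
    return int(block.mean()) >> 7 & 1

def embed(img):
    """Write each block's right-neighbour feature into the block's LSBs."""
    out = img.copy()
    h, w = img.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            nx = (x + BLOCK) % w                 # wrap to stay in-image
            bit = block_feature(img[y:y + BLOCK, nx:nx + BLOCK])
            out[y:y + BLOCK, x:x + BLOCK] &= 0xFE    # clear LSBs
            out[y:y + BLOCK, x:x + BLOCK] |= bit     # store the feature bit
    return out

def verify(img):
    """Return positions of blocks whose stored bit no longer matches."""
    bad = []
    h, w = img.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            nx = (x + BLOCK) % w
            stored = img[y, x] & 1
            if stored != block_feature(img[y:y + BLOCK, nx:nx + BLOCK]):
                bad.append((y, x))
    return bad
```

Because `verify` needs only the watermarked image, recovery is blind, matching the abstract's claim that the original image is not required; a mismatch localizes the tampered region to block granularity.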


Author(s):  
KWANG IN KIM ◽  
JIN HYUNG KIM ◽  
KEECHUL JUNG

This paper presents a real-time face recognition system. For the system to run in real time, no external, time-consuming feature extraction method is used; rather, the gray-level values of the raw pixels that make up the face pattern are fed directly to the recognizer. To absorb the resulting high dimensionality of the input space, support vector machines (SVMs), which are known to work well even in high-dimensional spaces, are used as the face recognizer. Furthermore, a modified form of polynomial kernel (the local correlation kernel) is utilized to take account of prior knowledge about facial structures and serves as an alternative feature extractor. Since SVMs were originally developed for two-class classification, their basic scheme is extended to multiface recognition by adopting a one-per-class decomposition. To make a final classification from the several one-per-class SVM outputs, a neural network (NN) is used as the arbitrator. Experiments with the ORL database show a recognition rate of 97.9% and a speed of 0.22 seconds per face with 40 classes.
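The "local correlation" idea can be sketched like so: instead of one global polynomial kernel over the whole raw-pixel vector, correlate the two images window by window and sum the polynomial responses, so spatially local facial structure dominates the similarity. The window size and degree below are illustrative, not the paper's exact values:

```python
import numpy as np

def local_correlation_kernel(a, b, win=4, degree=2):
    """Sum of per-window polynomial kernels between two grayscale images."""
    h, w = a.shape
    total = 0.0
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            pa = a[y:y + win, x:x + win].ravel()
            pb = b[y:y + win, x:x + win].ravel()
            total += (pa @ pb + 1.0) ** degree   # polynomial kernel per window
    return total

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
same = local_correlation_kernel(img, img)
other = local_correlation_kernel(img, img[::-1])
```

In the full system this kernel replaces the standard polynomial kernel inside each one-per-class SVM; the NN arbitrator then combines the per-class outputs.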


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zhixue Liang

In the contactless delivery scenario, the self-pickup cabinet is an important terminal delivery device, and face recognition is one of the most efficient ways to achieve contactless access to express delivery. In order to effectively recognize face images in unrestricted environments, an unrestricted face recognition algorithm based on transfer learning is proposed in this study. First, the region extraction network of the Faster R-CNN algorithm is improved to increase the recognition speed of the algorithm. Then, a first transfer learning step is applied between the large ImageNet dataset and a face image dataset captured under restricted conditions, and a second transfer learning step is applied between the restricted-condition face images and unrestricted face image datasets. Finally, the unrestricted face images are processed with an image enhancement algorithm to increase their similarity to the restricted face images, so that the second transfer learning step can be carried out effectively. Experimental results show that the proposed algorithm achieves a better recognition rate and recognition speed on the CASIA-WebFace, LFW, and MegaFace datasets.
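One common way to make two image domains "look more alike" before the second transfer step is histogram matching, mapping each unrestricted image's intensity distribution onto a restricted-condition reference. This is an illustrative stand-in, not necessarily the paper's exact enhancement algorithm:

```python
import numpy as np

def match_histogram(src, ref):
    """Map src's intensity distribution onto ref's via quantile mapping."""
    src_vals, src_idx, src_counts = np.unique(src.ravel(),
                                              return_inverse=True,
                                              return_counts=True)
    ref_vals, ref_counts = np.unique(ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size   # empirical CDF of source
    ref_cdf = np.cumsum(ref_counts) / ref.size   # empirical CDF of reference
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(src.shape)

rng = np.random.default_rng(1)
dark = rng.integers(0, 100, size=(16, 16)).astype(float)      # "unrestricted"
bright = rng.integers(100, 256, size=(16, 16)).astype(float)  # reference
enhanced = match_histogram(dark, bright)
```

After matching, the enhanced images occupy the reference's intensity range, which is exactly the kind of domain-gap reduction the second transfer-learning stage benefits from.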


Author(s):  
A. BELÉN MORENO ◽  
ÁNGEL SÁNCHEZ ◽  
ENRIQUE FRÍAS-MARTÍNEZ

Automatic face recognition is becoming increasingly important due to the security applications derived from it. Although the facial recognition problem has mostly focused on 2D images, the proliferation of 3D scanning hardware has recently made 3D face recognition a feasible application. This 3D approach does not need any color information and thus has two main advantages over more traditional 2D approaches: (1) robustness under lighting variations and (2) more relevant information. In this paper we present a new 3D facial model based on the curvature properties of the surface. Our system is able to detect, from a large set, the subset of face characteristics with the highest discrimination power. The robustness of the model is tested by comparing recognition rates in both controlled and noncontrolled environments with respect to facial expressions and rotations. The difference of only 5% between the recognition rates of the two environments shows that the model has a high degree of robustness against pose and facial expressions, which we consider sufficient for facial recognition applications; the system achieves up to a 91% correct recognition rate. A public 3D face database containing face rotations and expressions has been created for the recognition experiments.
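The surface properties such a model is built on can be computed directly from a depth map z(x, y). The Gaussian and mean curvature formulas below are the standard Monge-patch expressions, not the paper's full descriptor, and the paraboloid is a toy surface:

```python
import numpy as np

def curvatures(z):
    """Gaussian and mean curvature of a depth map from its derivatives."""
    zy, zx = np.gradient(z)          # first derivatives (rows = y, cols = x)
    zyy, zyx = np.gradient(zy)       # second derivatives
    _, zxx = np.gradient(zx)
    e = 1.0 + zx**2 + zy**2
    gaussian = (zxx * zyy - zyx**2) / e**2
    mean = ((1 + zx**2) * zyy - 2 * zx * zy * zyx
            + (1 + zy**2) * zxx) / (2.0 * e**1.5)
    return gaussian, mean

# Toy surface: a downward paraboloid cap, elliptic (K > 0) at the apex.
y, x = np.mgrid[-1:1:21j, -1:1:21j]
z = -(x**2 + y**2)
K, H = curvatures(z)
```

Signs of K and H classify each surface point (elliptic, hyperbolic, parabolic), which is the kind of per-point label a curvature-based face descriptor aggregates.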


Algorithms ◽  
2019 ◽  
Vol 12 (11) ◽  
pp. 227 ◽  
Author(s):  
Yingying Wang ◽  
Yibin Li ◽  
Yong Song ◽  
Xuewen Rong

In recent years, with the development of artificial intelligence and human–computer interaction, more attention has been paid to the recognition and analysis of facial expressions. Despite great success, many unsatisfying problems remain, because facial expressions are subtle and complex. Hence, facial expression recognition is still a challenging problem. In most papers, the entire face image is chosen as the input. In daily life, however, people can perceive others' current emotions from only a few facial components (such as the eyes, mouth, and nose), while other areas of the face (such as the hair, skin tone, ears, etc.) play a smaller role in determining emotion. If the entire face image is used as the only input, the system will produce some unnecessary information and miss some important information during feature extraction. To solve this problem, this paper proposes a method that combines multiple sub-regions with the entire face image by weighting, which captures more of the important feature information that is conducive to improving recognition accuracy. Our proposed method was evaluated on four well-known publicly available facial expression databases: JAFFE, CK+, FER2013 and SFEW. The new method showed better performance than most state-of-the-art methods.
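The weighting idea can be sketched as follows: extract a feature vector from each informative sub-region and from the whole face, scale each by its weight, and concatenate. The regions, weights, and toy histogram feature below are hypothetical, not the paper's learned values:

```python
import numpy as np

def region_histogram(patch, bins=8):
    """Toy per-region feature: normalized intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def fused_feature(face, regions, weights):
    """Concatenate weighted sub-region features with the whole-face feature;
    the last weight belongs to the whole face."""
    parts = [w * region_histogram(face[ys, xs])
             for (ys, xs), w in zip(regions, weights[:-1])]
    parts.append(weights[-1] * region_histogram(face))
    return np.concatenate(parts)

rng = np.random.default_rng(2)
face = rng.random((24, 24))
regions = [(slice(4, 10), slice(3, 21)),    # "eyes" band (coordinates toy)
           (slice(10, 16), slice(8, 16)),   # "nose"
           (slice(16, 22), slice(6, 18))]   # "mouth"
weights = [0.3, 0.2, 0.3, 0.2]              # last entry: whole-face weight
feat = fused_feature(face, regions, weights)
```

Raising a region's weight makes its features dominate the fused vector, which is how the method emphasizes the eyes and mouth over hair or ears.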


2013 ◽  
Vol 380-384 ◽  
pp. 3623-3628 ◽  
Author(s):  
Nan Deng ◽  
Ya Bo Pei ◽  
Zheng Guang Xu

In this study, we present a method for generating virtual images based on the Candide-3 model to increase the number of training samples for face recognition with a single sample, where Principal Component Analysis is used for feature extraction and the test samples are classified with a Support Vector Machine (SVM). Experimental results on the YaleB and ORL databases show that the recognition rate of face recognition with a single sample can be improved by the proposed method.
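The PCA feature-extraction step of such a pipeline is standard eigen-decomposition of the training set; a minimal sketch (dimensions illustrative) is:

```python
import numpy as np

def pca_fit(X, n_components):
    """Return the mean and top principal axes of row-vector samples X."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]          # rows of vt are orthonormal axes

def pca_transform(X, mean, components):
    """Project samples onto the principal axes (the eigenface coefficients)."""
    return (X - mean) @ components.T

rng = np.random.default_rng(3)
train = rng.random((20, 64))                # 20 "images" of 8x8, flattened
mean, comps = pca_fit(train, n_components=5)
feats = pca_transform(train, mean, comps)
```

In the paper's setting, the virtual images generated from Candide-3 would be appended to `train` before fitting, and `feats` would feed the SVM classifier.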


Author(s):  
G. A. KHUWAJA ◽  
M. S. LAGHARI

The integration of multiple classifiers promises higher classification accuracy and robustness than can be obtained with a single classifier. We address two problems: (a) automatic recognition of human faces using a novel fusion approach based on an adaptive LVQ network architecture, and (b) improving face recognition toward 100% while keeping the learning time per face image constant, which is a scalability issue. The learning time per face image of the recognition system remains constant irrespective of the data size. The integration incorporates "divide and conquer" modularity principles: divide the learning data into small modules, train the individual modules separately using a compact LVQ model structure that still encompasses all the variance, and fuse the trained modules to achieve a recognition rate of nearly 100%. The concept of Merged Classes (MCs) is introduced to enhance the accuracy rate. The proposed integrated architecture has shown its feasibility on a collection of 1130 face images of 158 subjects from three standard databases: ORL, PICS and KU. Empirical results yield an accuracy rate of 100% on the face recognition task for 40 subjects in 0.056 seconds per image. Thus, the system has shown potential for adoption in real-time application domains.
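At the heart of each LVQ module is the classic LVQ1 update: pull the winning prototype toward a sample with the right label, push it away otherwise. The learning rate and data below are toy values, not the paper's adaptive configuration:

```python
import numpy as np

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update; moves the closest prototype in place and returns
    its index. Attracts on a label match, repels on a mismatch."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    win = int(np.argmin(dists))
    sign = 1.0 if labels[win] == y else -1.0
    prototypes[win] += sign * lr * (x - prototypes[win])
    return win

protos = np.array([[0.0, 0.0], [1.0, 1.0]])   # one prototype per class
proto_labels = [0, 1]
w = lvq1_step(protos, proto_labels, np.array([0.2, 0.0]), y=0)
```

Because each module holds only a small prototype set, one update costs the same regardless of the total database size, which is the constant-learning-time property the abstract emphasizes.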


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Kun Sun ◽  
Xin Yin ◽  
Mingxin Yang ◽  
Yang Wang ◽  
Jianying Fan

At present, face recognition methods based on deep belief networks (DBNs) have the advantages of automatically learning the abstract information of face images and being only slightly affected by external factors, so the DBN has become a main method in the face recognition area. Because the DBN ignores the local information of face images, however, the recognition rate based on the DBN alone suffers. To solve this problem, a face recognition method based on the center-symmetric local binary pattern (CS-LBP) and the DBN (FRMCD) is proposed in this paper. Firstly, the face image is divided into several subblocks. Secondly, CS-LBP is used to extract texture features from each image subblock. Thirdly, texture feature histograms are formed and input into the DBN visible layer. Finally, face classification and face recognition are completed through deep learning in the DBN. Through experiments on the ORL, Extended Yale B, and CMU-PIE face databases, the best partitioning of the face image and the number of hidden units in the DBN hidden layer are obtained. Comparative experiments between FRMCD and traditional methods show that the recognition rate of FRMCD is superior, with a highest recognition rate of 98.82%. When the number of training samples is smaller, FRMCD has a more significant advantage, and compared with the method based on the plain local binary pattern (LBP) and DBN, FRMCD is less time-consuming.
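CS-LBP itself is compact: each pixel is coded by comparing the four center-symmetric neighbour pairs of its 3x3 ring, giving a 4-bit code (16 histogram bins) instead of plain LBP's 8-bit code. A minimal sketch (the threshold value is illustrative):

```python
import numpy as np

# The four center-symmetric neighbour pairs of the 3x3 ring.
PAIRS = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
         ((-1, 1), (1, -1)), ((0, 1), (0, -1))]

def cs_lbp(img, t=0.01):
    """4-bit CS-LBP code for every interior pixel: bit i is set when the
    i-th opposing-neighbour difference exceeds the threshold t."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(PAIRS):
        n1 = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        n2 = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes |= ((n1 - n2 > t) << bit).astype(np.uint8)
    return codes

flat = np.ones((6, 6))                              # no structure: code 0
ramp = np.tile(np.arange(6, dtype=float), (6, 1))   # rises left to right
```

Per-subblock histograms of these 16 codes, concatenated, form the input vector to the DBN visible layer in the FRMCD pipeline.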

