Implementing CCTV-Based Attendance Taking Support System Using Deep Face Recognition: A Case Study at FPT Polytechnic College

Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 307 ◽  
Author(s):  
Ngo Tung Son ◽  
Bui Ngoc Anh ◽  
Tran Quy Ban ◽  
Le Phuong Chi ◽  
Bui Dinh Chien ◽  
...  

Face recognition (FR) has received considerable attention in the field of security, especially in the use of closed-circuit television (CCTV) cameras for security monitoring. Although significant advances have been made in computer vision, advanced face recognition systems provide satisfactory performance only under controlled conditions. They deteriorate significantly in real-world scenarios involving difficult lighting conditions, motion blur, low camera resolution, etc. This article shows how we design, implement, and conduct empirical comparisons of open machine learning libraries in building an attendance taking (AT) support system, called ATSS, that uses indoor security cameras. Our trial system was deployed to record the appearances of 120 students in five classes who study on the third floor of the FPT Polytechnic College building. Our design allows for flexible system scaling, and it is usable not only for a school but as a generic attendance system with CCTV. The measurement results show that the accuracy is suitable for many different environments.

2021 ◽  
Author(s):  
Susith Hemathilaka ◽  
Achala Aponso

The face mask became an essential piece of sanitary wear in daily life during the pandemic, and it poses a serious threat to current face recognition systems. Masks destroy many details across a large area of the face, making masked faces difficult to recognize even for humans. Evaluation reports illustrate this difficulty well. The rapid development and breakthroughs of deep learning in the recent past have produced highly promising results from face recognition algorithms, but these algorithms still perform far from satisfactorily in unconstrained environments under challenges such as varying lighting conditions, low resolution, facial expressions, pose variation, and occlusion. Facial occlusion is considered one of the most intractable problems, especially when the occlusion covers a large region of the face, because it destroys many facial features.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Xin Cheng ◽  
Hongfei Wang ◽  
Jingmei Zhou ◽  
Hui Chang ◽  
Xiangmo Zhao ◽  
...  

For face recognition systems, liveness detection can effectively prevent fraud and improve safety. Common face attacks include photo printing and video replay attacks. This paper studied the differences between photos, videos, and real faces in static texture and motion information and proposed a liveness detection architecture based on feature fusion and an attention mechanism, the Dynamic and Texture Fusion Attention Network (DTFA-Net). We proposed a dynamic information fusion structure with an inter-channel attention block that fuses the magnitude and direction of optical flow to extract facial motion features. In addition, to address the HOG algorithm's face detection failures under complex illumination, we proposed an improved gamma image preprocessing algorithm, which effectively improved face detection. We conducted experiments on the CASIA-MFSD and Replay-Attack databases. According to the experiments, the proposed DTFA-Net achieved 6.9% EER on CASIA and 2.2% HTER on Replay-Attack, comparable to other methods.
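The gamma preprocessing step mentioned above can be illustrated with a short sketch. The adaptive rule below, which picks the exponent so that the mean brightness maps toward mid-grey, is a common heuristic and an assumption here, not the paper's exact "improved Gamma" algorithm:

```python
import numpy as np

def gamma_correct(image, gamma=None):
    """Apply gamma correction to a grayscale image with values in [0, 255].

    If gamma is None, choose it adaptively from the mean brightness so that
    dark images are brightened and bright images are darkened (a common
    heuristic; the paper's exact adaptation rule is not given).
    """
    img = image.astype(np.float64) / 255.0
    if gamma is None:
        mean = img.mean()
        # Solve mean**gamma = 0.5: dark scenes get gamma < 1 (brighten).
        gamma = np.log(0.5) / np.log(max(mean, 1e-6))
    corrected = np.power(img, gamma)
    return (corrected * 255.0).clip(0, 255).astype(np.uint8)
```

Such a correction flattens extreme illumination before a HOG-based detector runs, which is the role the abstract assigns to the preprocessing stage.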


Author(s):  
Taha H. Rassem ◽  
Nasrin M. Makbol ◽  
Sam Yin Yee

Nowadays, face recognition has become one of the important topics in computer vision and image processing, owing to its use in many applications. The key to face recognition is extracting distinguishable features from the image to achieve high recognition accuracy. The local binary pattern (LBP) and many of its variants are used as texture features in many face recognition systems. Although LBP has performed well in many fields, it is sensitive to noise, and different patterns may be classified into the same class, which reduces its discriminating property. The completed local ternary pattern (CLTP) is one of the newly proposed texture features designed to overcome the drawbacks of LBP. CLTP has outperformed LBP and some of its variants in fields such as texture, scene, and event image classification. In this study, we investigate the performance of the CLTP operator for the face recognition task. The Japanese Female Facial Expression (JAFFE) and FEI face databases are used in the experiments. In the experimental results, CLTP outperformed some previous texture descriptors and achieved higher classification rates for face recognition, reaching 99.38% and 85.22% on JAFFE and FEI, respectively.
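To make the texture-feature discussion concrete, here is a minimal sketch of the classic 3x3 LBP operator that CLTP builds on. The ternary thresholding and magnitude components that distinguish CLTP are deliberately not shown:

```python
import numpy as np

def lbp_codes(gray):
    """Compute basic 3x3 local binary pattern codes for the interior pixels
    of a grayscale image. Each neighbour >= centre contributes one bit,
    giving an 8-bit code per pixel.
    """
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                       # centre pixels
    # Neighbour offsets in clockwise order, starting top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.int32) << bit
    return codes
```

A histogram of these codes over image regions is what typically serves as the texture feature vector; CLTP replaces the binary comparison with a ternary one plus a magnitude channel to gain noise robustness.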


Author(s):  
ZHENXUE CHEN ◽  
CHENGYUN LIU ◽  
FALIANG CHANG ◽  
XUZHEN HAN ◽  
KAIFANG WANG

Changes in light intensity and angle present a major challenge to building reliable face recognition systems. The existence of bright and dark regions has been shown to have a serious negative impact on the performance of face recognition systems. This paper proposes a solution to this problem based on the self-quotient image (SQI) processing method, in which the bright and the dark areas are processed separately by SQI without changing the essential characteristics of the face image. Experimental results indicate that this Single-Light-Region and Single-Dark-Region SQI method removes the adverse effects of multiple bright and dark areas better than competing methods.
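The core self-quotient idea can be sketched in a few lines: divide each pixel by a local smoothing of the image so that slowly varying illumination cancels out. The box filter below stands in for the weighted Gaussian smoothing used in the SQI literature, and the paper's separate handling of bright and dark regions is not reproduced:

```python
import numpy as np

def self_quotient(image, ksize=3):
    """Self-quotient image Q = I / smooth(I).

    A constant illumination factor multiplies both numerator and local
    mean, so it largely cancels in the quotient.
    """
    img = image.astype(np.float64) + 1.0      # avoid division by zero
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    smoothed = np.zeros_like(img)
    for dy in range(ksize):
        for dx in range(ksize):
            smoothed += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smoothed /= ksize * ksize
    return img / smoothed
```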


Author(s):  
Sangamesh Hosgurmath ◽  
Viswanatha Vanjre Mallappa ◽  
Nagaraj B. Patil ◽  
Vishwanath Petli

Face recognition is an important biometric authentication research area for security purposes in fields such as pattern recognition and image processing. However, human face recognition poses a major challenge for machine learning and deep learning techniques, since input images vary with pose, lighting conditions, expression, age, and illumination, which degrades recognition accuracy. In the present research, the resolution of the image patches is reduced by the max pooling layer in a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique called local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Owing to the CNN-based optimization in LCDRC, the distance between classes is maximized while the distance between features inside a class is reduced. The results show that CNN-LCDRC achieved 93.10% and 87.60% mean recognition accuracy on the ORL and YALE databases respectively, whereas traditional LCDRC achieved 83.35% and 77.70%, for training number 8 (i.e. 80% training and 20% testing data).
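The max pooling operation credited above with reducing patch resolution (and adding robustness to small shifts) can be sketched as follows; this is the standard operation, not the paper's full CNN:

```python
import numpy as np

def max_pool2d(feature_map, pool=2):
    """Max pooling with stride equal to the pool size.

    Keeps the strongest response in each pool x pool window, halving
    (for pool=2) the spatial resolution of the feature map.
    """
    h, w = feature_map.shape
    h_out, w_out = h // pool, w // pool
    trimmed = feature_map[:h_out * pool, :w_out * pool]
    # Reshape so each pooling window becomes its own pair of axes,
    # then reduce over those axes.
    windows = trimmed.reshape(h_out, pool, w_out, pool)
    return windows.max(axis=(1, 3))
```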


Author(s):  
Amal Seralkhatem Osman Ali ◽  
Vijanth Sagayan Asirvadam ◽  
Aamir Saeed Malik ◽  
Mohamed Meselhy Eltoukhy ◽  
Azrina Aziz

Whilst facial recognition systems are vulnerable to different acquisition conditions, most notably lighting effects and pose variations, their sensitivity to facial aging effects is yet to be researched. The face recognition vendor test (FRVT) 2012 annual report estimated the deterioration in the performance of face recognition systems due to facial aging: accuracy degraded by about 5% for each single year of age difference between a test image and a probe image. Consequently, developing an age-invariant platform continues to be a significant requirement for building an effective facial recognition system. The main objective of this work is to address the challenge of facial aging, which affects the performance of facial recognition systems. Accordingly, this work presents a geometrical model based on extracting a number of triangular facial features. The proposed model comprises a total of six triangular areas connecting and surrounding the main facial features (i.e. eyes, nose and mouth). Furthermore, a set of thirty mathematical relationships is developed and used to build a feature vector for each sample image. The areas and perimeters of the extracted triangles are calculated and used as inputs to the developed mathematical relationships. The performance of the system is evaluated on the publicly available face and gesture recognition research network (FG-NET) face aging database and compared with that of some state-of-the-art face recognition methods and age-invariant face recognition systems. Our proposed system yielded a good performance, with a classification accuracy of more than 94%.
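The per-triangle measurements the model relies on, area and perimeter from landmark coordinates, reduce to elementary geometry. A minimal sketch for one triangle is shown below; which landmarks form each of the six triangles, and the thirty derived relationships, are not specified here:

```python
import math

def triangle_features(p1, p2, p3):
    """Area and perimeter of a triangle given three (x, y) landmarks."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    a, b, c = dist(p1, p2), dist(p2, p3), dist(p3, p1)
    perimeter = a + b + c
    # Shoelace formula: area directly from the vertex coordinates.
    area = abs(p1[0] * (p2[1] - p3[1]) + p2[0] * (p3[1] - p1[1])
               + p3[0] * (p1[1] - p2[1])) / 2.0
    return area, perimeter
```

Repeating this for six landmark triangles yields twelve raw measurements, from which ratio-style relationships can be built into a feature vector.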


Author(s):  
Daniel J. Carragher ◽  
Peter J. B. Hancock

In response to the COVID-19 pandemic, many governments around the world now recommend, or require, that their citizens cover the lower half of their face in public. Consequently, many people now wear surgical face masks in public. We investigated whether surgical face masks affected the performance of human observers, and a state-of-the-art face recognition system, on tasks of perceptual face matching. Participants judged whether two simultaneously presented face photographs showed the same person or two different people. We superimposed images of surgical masks over the faces, creating three different mask conditions: control (no masks), mixed (one face wearing a mask), and masked (both faces wearing masks). We found that surgical face masks have a large detrimental effect on human face matching performance, and that the degree of impairment is the same regardless of whether one or both faces in each pair are masked. Surprisingly, this impairment is similar in size for both familiar and unfamiliar faces. When matching masked faces, human observers are biased to reject unfamiliar faces as “mismatches” and to accept familiar faces as “matches”. Finally, the face recognition system showed very high classification accuracy for control and masked stimuli, even though it had not been trained to recognise masked faces. However, accuracy fell markedly when one face was masked and the other was not. Our findings demonstrate that surgical face masks impair the ability of humans, and naïve face recognition systems, to perform perceptual face matching tasks. Identification decisions for masked faces should be treated with caution.


Author(s):  
Evgeniy Vasil'ev ◽  
Valentina Kustikova ◽  
Ivan Vikhrev ◽  
...  

We present a case study of using a deep learning and computer vision library, the Intel Distribution of OpenVINO toolkit. We develop an automated “smart library” using DL and computer vision methods implemented in the OpenVINO toolkit. The application involves registering a reader (adding information and photos of the new user); updating the machine learning model that describes the facial features of the library's users; authorizing a reader through face recognition; and receiving and returning books by comparing the cover image with the library's database of flat cover images. The source code of the application is freely available on GitHub: https://github.com/itlab-vision/openvino-smart-library. The developed application is planned to be published as a sample of the OpenVINO toolkit.


Author(s):  
Yallamandaiah S. ◽  
Purnachand N.

In the area of computer vision, face recognition is a challenging task because of pose, facial expression, and illumination variations, and the performance of face recognition systems drops in unconstrained environments. In this work, a new face recognition approach is proposed using a guided image filter and a convolutional neural network (CNN). The guided image filter is a smoothing operator that performs well near edges. Initially, the Viola-Jones algorithm is used to detect the face region, which is then smoothed by a guided image filter. The proposed CNN is then used to extract features and recognize faces. The experiments were performed on face databases such as ORL, JAFFE, and YALE and attained recognition rates of 98.33%, 99.53%, and 98.65% respectively. The experimental results show that the suggested face recognition method attains better results than some state-of-the-art techniques.
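A minimal self-guided variant of the guided image filter (He et al.'s formulation with the image as its own guide) can be sketched as follows. The radius and regularization values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def box_filter(img, r):
    """Mean filter with edge-replicating padding (window size 2r+1)."""
    padded = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def guided_filter(image, r=2, eps=0.01):
    """Edge-preserving smoothing with the image as its own guide.

    In flat regions the local variance is small, so the coefficient a
    approaches 0 and the output approaches the local mean; near strong
    edges a approaches 1, so the edge is preserved.
    """
    I = image.astype(np.float64)
    mean_I = box_filter(I, r)
    var_I = box_filter(I * I, r) - mean_I ** 2
    a = var_I / (var_I + eps)          # ~1 at edges, ~0 in flat areas
    b = mean_I * (1.0 - a)
    # Average the per-window linear coefficients before forming the output.
    return box_filter(a, r) * I + box_filter(b, r)
```

This is the smoothing stage only; the detected face region would be passed through such a filter before feature extraction by the CNN.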

