Face Spoof Attack Recognition Using Discriminative Image Patches

2016 ◽  
Vol 2016 ◽  
pp. 1-14 ◽  
Author(s):  
Zahid Akhtar ◽  
Gian Luca Foresti

Face recognition systems are now being used in many applications such as border crossings, banks, and mobile payments. The wide-scale deployment of facial recognition systems has drawn intensive attention to the reliability of face biometrics against spoof attacks, where a photo, a video, or a 3D mask of a genuine user’s face can be used to gain illegitimate access to facilities or services. Though several face antispoofing or liveness detection methods (which determine at the time of capture whether a face is live or spoof) have been proposed, the issue remains unsolved due to the difficulty of finding discriminative and computationally inexpensive features and methods for spoof attacks. In addition, existing techniques use the whole face image or the complete video for liveness detection. However, certain face regions (video frames) are often redundant or correspond to clutter in the image (video), generally leading to low performance. Therefore, we propose seven novel methods to find discriminative image patches, which we define as regions that are salient, instrumental, and class-specific. Four well-known classifiers, namely, support vector machine (SVM), Naive-Bayes, Quadratic Discriminant Analysis (QDA), and Ensemble, are then used to distinguish between genuine and spoof faces using a voting-based scheme. Experimental analysis on two publicly available databases (Idiap REPLAY-ATTACK and CASIA-FASD) shows promising results compared to existing works.
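A minimal sketch of the voting-based scheme described in this abstract: each of the four classifier families casts one vote per image patch, and the face-level decision is the majority vote. The patch features, data shapes, and the random-forest stand-in for "Ensemble" are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic patch descriptors: 200 faces x 9 patches x 16-d features,
# label 0 = spoof, 1 = live (class-dependent mean shift for separability).
n_faces, n_patches, d = 200, 9, 16
labels = rng.integers(0, 2, n_faces)
X = rng.normal(size=(n_faces, n_patches, d)) + labels[:, None, None]

flat_X = X.reshape(-1, d)                 # train classifiers on individual patches
flat_y = np.repeat(labels, n_patches)

classifiers = [SVC(), GaussianNB(), QuadraticDiscriminantAnalysis(),
               RandomForestClassifier(random_state=0)]
for clf in classifiers:
    clf.fit(flat_X, flat_y)

def predict_face(patches):
    # Each classifier votes once per patch; the face label is the majority vote.
    votes = np.concatenate([clf.predict(patches) for clf in classifiers])
    return int(votes.mean() > 0.5)

preds = np.array([predict_face(X[i]) for i in range(n_faces)])
accuracy = (preds == labels).mean()
```

Pooling votes over patches means no single cluttered or redundant region can dominate the live/spoof decision, which is the motivation the abstract gives for patch-level analysis.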

2017 ◽  
Vol 10 (1) ◽  
pp. 199-208 ◽  
Author(s):  
Hsu-Yung Cheng ◽  
Chih-Lung Lin

Abstract. Cloud detection is important for providing necessary information such as cloud cover in many applications. Existing cloud detection methods include red-to-blue ratio thresholding and other classification-based techniques. In this paper, we propose to perform cloud detection using supervised learning techniques with multi-resolution features. One of the major contributions of this work is that the features are extracted from local image patches of different sizes to capture local structure and multi-resolution information. The cloud models are learned through the training process. We consider classifiers including random forest, support vector machine, and Bayesian classifier. To take advantage of the clues provided by multiple classifiers and various patch sizes, we employ a voting scheme to combine the results and further increase detection accuracy. The experiments show that the proposed method distinguishes cloud and non-cloud pixels more accurately than existing works.
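The multi-resolution patch features could be sketched as follows: for each pixel, simple statistics (including the red-to-blue ratio the abstract mentions as an existing cue) are computed over patches of several sizes centred on that pixel and concatenated. The specific patch sizes and statistics are illustrative assumptions.

```python
import numpy as np

def patch_features(img, y, x, sizes=(3, 7, 15)):
    """Concatenate simple statistics of patches of several sizes centred on (y, x)."""
    feats = []
    for s in sizes:
        h = s // 2
        patch = img[max(y - h, 0):y + h + 1, max(x - h, 0):x + h + 1]
        r = patch[..., 0].astype(float)
        b = patch[..., 2].astype(float)
        ratio = r / (b + 1e-6)            # red-to-blue ratio, a standard cloud cue
        feats += [ratio.mean(), ratio.std(), patch.mean(), patch.std()]
    return np.array(feats)

# Toy RGB sky image; a real pipeline would feed these vectors to the classifiers.
img = np.random.default_rng(1).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
f = patch_features(img, 32, 32)           # 3 sizes x 4 statistics = 12-d vector
```

Each patch size contributes the same four statistics, so the classifier sees both fine local texture (small patches) and broader context (large patches) for every pixel.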


Author(s):  
Shweta Policepatil ◽  
Sanjeevakumar M. Hatture

As the world becomes more and more digitized, the threat to security grows at an alarming rate. The mass usage of technology has garnered the attention and curiosity of people with foul intentions, whose aim is to exploit this technology to commit theft and other crimes. One such technology used for security purposes is facial recognition, a popular biometric technique. Face recognition technology has advanced rapidly in recent years and, compared to other methods, is more direct, user-friendly, and convenient. Face recognition systems, however, are vulnerable to spoofing attacks using non-real faces. To protect against spoofing, a secure system requires liveness detection. This study examines researchers' attempts to address the problem of spoofing and liveness detection, including mapping the research overview from the literature survey into a suitable taxonomy, exploring the fundamental properties of the field, the motivation for using liveness detection methods in face recognition, and problems that may limit the benefits.


2021 ◽  
Vol 3 (6) ◽  
Author(s):  
R. Sekhar ◽  
K. Sasirekha ◽  
P. S. Raja ◽  
K. Thangavel

Abstract Intrusion Detection Systems (IDSs) have received increasing attention for safeguarding vital information in an organization's network. Hackers often enter secured networks through loopholes and sophisticated attacks. In such situations, distinguishing attacks from normal packets is tedious, challenging, time-consuming, and highly technical. As a result, different algorithms with varying learning and training capacity have been explored in the literature. However, existing intrusion detection methods have not met the desired performance requirements. Hence, this work proposes a new intrusion detection technique using a Deep Autoencoder with Fruitfly Optimization. Initially, missing values in the dataset are imputed with the Fuzzy C-Means Rough Parameter (FCMRP) algorithm, which handles imprecision in datasets by exploiting fuzzy and rough sets while preserving crucial information. Then, robust features are extracted from an autoencoder with multiple hidden layers. Finally, the obtained features are fed to a Back Propagation Neural Network (BPN) to classify the attacks. Furthermore, the neurons in the hidden layers of the Deep Autoencoder are optimized with the population-based Fruitfly Optimization algorithm. Experiments have been conducted on the NSL_KDD and UNSW-NB15 datasets. The computational results of the proposed intrusion detection system using a deep autoencoder with BPN are compared with Naive Bayes, Support Vector Machine (SVM), Radial Basis Function Network (RBFN), BPN, and Autoencoder with Softmax. Article Highlights: A hybridized model using a Deep Autoencoder with Fruitfly Optimization is introduced to classify the attacks. Missing values are imputed with the Fuzzy C-Means Rough Parameter method. Discriminative features are extracted using a Deep Autoencoder with multiple hidden layers.
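The autoencoder-plus-classifier pipeline could be sketched as below: the bottleneck width is tuned with a simple fruit-fly-style random search (random steps around the current best, keeping improvements), the hidden activations serve as features, and an MLP plays the role of the BPN classifier. The toy data, single hidden layer, and search budget are illustrative assumptions, not the authors' FCMRP/BPN code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)        # toy "attack" label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def fit_autoencoder(width):
    # Autoencoder: train the network to reconstruct its own input.
    ae = MLPRegressor(hidden_layer_sizes=(width,), max_iter=300, random_state=0)
    ae.fit(X_tr, X_tr)
    return np.mean((ae.predict(X_tr) - X_tr) ** 2), ae

best_w = 8
best_err, best_ae = fit_autoencoder(best_w)
for _ in range(5):                                 # fruit-fly-style step near the best
    w = int(np.clip(best_w + rng.integers(-3, 4), 2, 16))
    err, ae = fit_autoencoder(w)
    if err < best_err:
        best_w, best_err, best_ae = w, err, ae

def encode(ae, data):
    # Hidden-layer activations: relu(data @ W1 + b1) from the trained autoencoder.
    return np.maximum(data @ ae.coefs_[0] + ae.intercepts_[0], 0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(encode(best_ae, X_tr), y_tr)
acc = clf.score(encode(best_ae, X_te), y_te)
```

Reconstruction error stands in for the "smell concentration" that guides the fruit-fly search; the paper's version optimizes the neuron counts of a deeper autoencoder the same way.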


Data ◽  
2021 ◽  
Vol 6 (8) ◽  
pp. 87
Author(s):  
Sara Ferreira ◽  
Mário Antunes ◽  
Manuel E. Correia

Deepfake and manipulated digital photos and videos are being increasingly used in a myriad of cybercrimes. Ransomware, the dissemination of fake news, and digital kidnapping-related crimes are the most recurrent, in which tampered multimedia content has been the primordial disseminating vehicle. Digital forensic analysis tools are widely used in criminal investigations to automate the identification of digital evidence in seized electronic equipment. The number of files to be processed and the complexity of the crimes under analysis have highlighted the need for efficient digital forensics techniques grounded on state-of-the-art technologies. Machine Learning (ML) researchers have been challenged to apply techniques and methods to improve the automatic detection of manipulated multimedia content. However, the implementation of such methods has not yet been widely incorporated into digital forensic tools, mostly due to the lack of realistic and well-structured datasets of photos and videos. The diversity and richness of the datasets are crucial to benchmark ML models and to evaluate their appropriateness for real-world digital forensics applications. An example is the development of third-party modules for the widely used Autopsy digital forensic application. This paper presents a dataset obtained by extracting a set of simple features from genuine and manipulated photos and videos, which are part of state-of-the-art existing datasets. The resulting dataset is balanced, and each entry comprises a label and a vector of numeric values corresponding to the features extracted through a Discrete Fourier Transform (DFT). The dataset is available in a GitHub repository, and the total number of photos and video frames is 40,588 and 12,400, respectively.
The dataset was validated and benchmarked with Convolutional Neural Network (CNN) and Support Vector Machine (SVM) methods; however, many other methods can be applied. Overall, the results show a better F1-score for CNN compared with SVM, for both photo and video processing. CNN achieved an F1-score of 0.9968 and 0.8415 for photos and videos, respectively. Regarding SVM, the results obtained with 5-fold cross-validation are 0.9953 and 0.7955, respectively, for photo and video processing. A set of methods written in Python is available for researchers, namely to preprocess the original photo and video files, extract their features, and build the training and testing sets. Additional methods are also available to convert the original PKL files into CSV and TXT, which gives ML researchers more flexibility to use the dataset with existing ML frameworks and tools.
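One common way to turn a 2-D DFT into the kind of fixed-length numeric feature vector this abstract describes is an azimuthally averaged power spectrum (average spectral magnitude per radial frequency bin). The sketch below illustrates that idea; the exact feature definition, bin count, and scaling here are assumptions, not necessarily the dataset's recipe.

```python
import numpy as np

def dft_features(gray, n_bins=50):
    """1-D power-spectrum profile: mean log-magnitude per radial frequency bin."""
    f = np.fft.fftshift(np.fft.fft2(gray))      # centre the zero frequency
    mag = 20 * np.log(np.abs(f) + 1e-8)         # log power spectrum
    h, w = gray.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=mag.ravel()) / np.maximum(counts, 1)
    # Resample the radial profile to a fixed-length feature vector.
    return np.interp(np.linspace(0, len(radial) - 1, n_bins),
                     np.arange(len(radial)), radial)

img = np.random.default_rng(2).random((128, 128))   # stand-in grayscale frame
feat = dft_features(img)
```

Such spectral profiles are attractive for forensic tooling because they are cheap to compute per frame and feed directly into the CNN and SVM classifiers the paper benchmarks.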


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 997
Author(s):  
Jun Zhong ◽  
Xin Gou ◽  
Qin Shu ◽  
Xing Liu ◽  
Qi Zeng

Foreign object debris (FOD) on airport runways can cause serious accidents and huge economic losses. FOD detection systems based on millimeter-wave (MMW) radar sensors have the advantages of higher range resolution and lower power consumption. However, it is difficult for traditional FOD detection methods to detect and distinguish weak target signals from strong ground clutter. To solve this problem, this paper proposes a new FOD detection approach based on optimized variational mode decomposition (VMD) and support vector data description (SVDD). The approach uses SVDD as a classifier to distinguish FOD signals from clutter signals. More importantly, the VMD is optimized by the whale optimization algorithm (WOA) to improve the accuracy and stability of the classifier. Results from both simulation and a field case show the excellent FOD detection performance of the proposed VMD-SVDD method.
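The SVDD step could be sketched with scikit-learn's `OneClassSVM`, which with an RBF kernel is equivalent to SVDD: a boundary is learned around clutter-only features, and echoes falling outside it are flagged as candidate FOD. The synthetic features below stand in for the paper's VMD/WOA-processed radar signals, which are omitted here.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
clutter = rng.normal(0.0, 1.0, size=(500, 4))   # "normal" ground-clutter features
fod = rng.normal(6.0, 1.0, size=(20, 4))        # target echoes far from clutter

# Learn a closed boundary around the clutter distribution only.
svdd = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
svdd.fit(clutter)

fod_pred = svdd.predict(fod)        # -1 marks points outside the boundary (candidate FOD)
clutter_pred = svdd.predict(clutter)
```

Training only on clutter is the key property here: genuine FOD examples are rare, so a one-class boundary avoids needing labeled debris signatures at all.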


Entropy ◽  
2019 ◽  
Vol 21 (4) ◽  
pp. 329 ◽  
Author(s):  
Yunqi Tang ◽  
Zhuorong Li ◽  
Huawei Tian ◽  
Jianwei Ding ◽  
Bingxian Lin

Accurately detecting gait events from video data is a challenging problem. Most current gait-event detection methods rely on wearable sensors, which require substantial cooperation from users and are constrained by power consumption. This study presents a novel algorithm for accurate detection of toe-off events using a single 2D vision camera without the cooperation of participants. First, a novel set of features, consecutive silhouettes difference maps (CSD-maps), is proposed to represent gait patterns. A CSD-map encodes several consecutive pedestrian silhouettes extracted from video frames into a single map. Different numbers of consecutive silhouettes yield different types of CSD-maps, providing discriminative features for toe-off event detection. A convolutional neural network is then employed to reduce feature dimensions and classify toe-off events. Experiments on a public database demonstrate that the proposed method achieves good detection accuracy.
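One plausible reading of a CSD-map, sketched below, is the accumulated absolute difference between adjacent binary silhouettes in a short window, collapsing a stack of frames into a single motion map; the exact construction in the paper may differ, so treat this as an illustrative assumption.

```python
import numpy as np

def csd_map(silhouettes):
    """Encode a stack of consecutive binary silhouettes (T, H, W) into one map."""
    s = np.asarray(silhouettes, dtype=float)
    diffs = np.abs(np.diff(s, axis=0))   # frame-to-frame silhouette changes
    return diffs.sum(axis=0)             # accumulate motion into a single map

# Toy stack of 5 binary silhouettes; a real pipeline would feed the map to a CNN.
frames = np.random.default_rng(3).integers(0, 2, size=(5, 32, 32))
m = csd_map(frames)
```

Varying the window length T produces the "different types of CSD-maps" the abstract mentions, each emphasizing motion over a different time scale.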

