An Entropy-Histogram Approach for Image Similarity and Face Recognition

2018, Vol. 2018, pp. 1-18
Author(s): Mohammed Abdulameer Aljanabi, Zahir M. Hussain, Song Feng Lu

Image similarity and image recognition are modern and rapidly growing technologies because of their wide use in digital image processing. The face of a specific person can be recognized by measuring the similarity between images of that person's face, and this is what we address in detail in this paper. We design two new measures for image similarity and image recognition simultaneously, based mainly on a combination of information theory and the joint histogram. Information theory has a high capability to predict the relationship between image intensity values, while the joint histogram selects a set of local pixel features to construct a multidimensional histogram. The proposed approach incorporates the concepts of entropy and a modified 1D version of the 2D joint histogram of the two images under test. Two entropy measures were considered, Shannon and Rényi, giving rise to two joint-histogram-based, information-theoretic similarity measures: SHS and RSM. The proposed methods have been tested against the powerful Zernike-moments approach with Euclidean and Minkowski distance metrics for image recognition, and against well-known statistical approaches for image similarity such as the structural similarity index measure (SSIM), the feature similarity index measure (FSIM), and the feature-based structural measure (FSM). A comparison with a recent information-theoretic measure (ISSIM) has also been considered. A measure of recognition confidence is introduced in this work, based on the similarity distance between the best match and the second-best match in the face database during the face recognition process. Simulation results using the AT&T and FEI face databases show that the proposed approaches outperform existing image recognition methods in terms of recognition confidence, while results on the TID2008 and IVC image databases show that SHS and RSM outperform existing similarity methods in terms of similarity confidence.
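The abstract does not give the closed form of SHS or RSM, but the core idea of joint-histogram entropy can be sketched briefly. The bin count, normalization, and the use of raw joint entropy as the similarity score below are my assumptions, not details from the paper: identical images concentrate the joint histogram on its diagonal, so their joint entropy is lower than that of dissimilar images.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete distribution; empty bins ignored."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def renyi_entropy(p, alpha=2.0):
    """Renyi entropy of order alpha (alpha != 1) in bits."""
    p = p[p > 0]
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

def joint_histogram_entropy(img_a, img_b, bins=32, entropy=shannon_entropy):
    """Entropy of the normalized 2D joint histogram of two equal-size
    gray-scale images (intensity range assumed 0..255)."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                             bins=bins, range=[[0, 256], [0, 256]])
    return entropy((h / h.sum()).ravel())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = np.clip(img + rng.normal(0, 40, img.shape), 0, 255)

same = joint_histogram_entropy(img, img)    # low: mass sits on the diagonal
diff = joint_histogram_entropy(img, noisy)  # higher: mass spreads off-diagonal
```

Swapping `entropy=renyi_entropy` gives the Rényi variant of the same sketch.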

Sensors, 2021, Vol. 21 (15), pp. 5068
Author(s): Rita Goel, Irfan Mehmood, Hassan Ugail

Accurate identification of siblings through face recognition is a challenging task, predominantly because of the high degree of similarity among the faces of siblings. In this study, we investigate state-of-the-art deep learning face recognition models, namely FaceNet, VGGFace, VGG16, and VGG19, and evaluate their capacity to discriminate between sibling faces using various similarity indices. For each pair of images, embeddings are computed using the chosen deep learning model. Five standard similarity measures (cosine similarity, Euclidean distance, structured similarity, Manhattan distance, and Minkowski distance) are then used to classify a pair as the same or a different identity against a threshold defined for each measure. The accuracy, precision, and misclassification rate of each model are calculated using standard confusion matrices. Four experimental datasets, covering the full frontal face, eyes, nose, and forehead of sibling pairs, are constructed from the publicly available HQf subset of the SiblingDB database. The experimental results show that the accuracy of the chosen deep learning models in distinguishing siblings varies with the face area compared. VGGFace performs best when comparing the full frontal face and the eyes, with classification accuracy above 95% in these cases. However, its accuracy degrades significantly when noses are compared, where FaceNet provides the best result. Similarly, VGG16 and VGG19 are not the best models for classification using the eyes, but they provide favorable results when foreheads are compared.
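The classification step described above reduces to comparing embedding vectors against a per-measure threshold. A minimal sketch of that step, using cosine similarity only; the 128-dimensional embeddings, the 0.5 threshold, and the synthetic vectors are placeholders, not values from the study:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_identity(emb_a, emb_b, threshold=0.5):
    """Declare two face images the same identity when the cosine
    similarity of their embeddings clears the threshold."""
    return cosine_similarity(emb_a, emb_b) >= threshold

rng = np.random.default_rng(1)
anchor = rng.normal(size=128)
look_alike = anchor + rng.normal(scale=0.1, size=128)  # near-duplicate embedding
stranger = rng.normal(size=128)                        # unrelated embedding
```

In practice `anchor` and `look_alike` would come from a model such as FaceNet or VGGFace, and the threshold would be tuned per measure on a validation split, as the confusion-matrix evaluation in the study implies.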


Sensors, 2021, Vol. 22 (1), pp. 304
Author(s): Xianglong Chen, Haipeng Wang, Yaohui Liang, Ying Meng, Shifeng Wang

The presence of fake pictures affects the reliability of visible face images under specific circumstances. This paper presents a novel adversarial neural network, named FTSGAN, for infrared and visible image fusion; the FTSGAN model fuses the face-image features of infrared and visible images to improve face recognition. The FTSGAN design employs the Frobenius norm (F), the total variation norm (TV), and the structural similarity index measure (SSIM): F and TV constrain the gray level and the gradient of the image, while SSIM constrains the image structure. The FTSGAN fuses infrared and visible face images that contain bio-information for heterogeneous face recognition tasks. Experiments based on the FTSGAN using hundreds of face images demonstrate its excellent performance. Principal component analysis (PCA) and linear discriminant analysis (LDA) are used for face recognition. The face recognition rate after fusion improved by 1.9% compared to that before fusion, reaching a final recognition rate of 94.4%. The proposed method has better quality, a faster rate, and is more robust than methods that use only visible images for face recognition.
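The abstract names the three loss terms but not their exact pairing or weights, so the following is only a toy content loss showing how such terms combine. Which term is tied to which source image, the weights, and the single-window SSIM simplification are all my assumptions:

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM over the whole image (a simplification of the
    usual sliding-window SSIM); images assumed in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def total_variation(x):
    """Anisotropic total variation: sum of absolute horizontal and
    vertical gradients."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def fusion_loss(fused, ir, vis, w_f=1.0, w_tv=1e-3, w_ssim=1.0):
    """Toy content loss: the Frobenius term ties the fused gray levels to
    the infrared image, the TV term ties its gradients to the visible
    image, and the SSIM term preserves structure. Weights illustrative."""
    f_term = np.linalg.norm(fused - ir)        # Frobenius norm on gray level
    tv_term = total_variation(fused - vis)     # gradient consistency
    ssim_term = 1.0 - global_ssim(fused, vis)  # structural consistency
    return w_f * f_term + w_tv * tv_term + w_ssim * ssim_term
```

In a GAN this content loss would be added to the adversarial loss of the generator; the sketch covers only the content part.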


2020, Vol. 2 (4), pp. 12-16
Author(s): Tasaddi Maalak Hanoun, Kadhim M. Hashim

A new measure is proposed for assessing the similarity between gray-scale images. The well-known structural similarity index measure (SSIM) was designed using a statistical approach that fails under significant noise (low PSNR). The proposed measure, based on the Manhattan distance and the standard deviation (STD), combines two parts: a geometric part, in which the Manhattan distance is used, and a part based on a statistical feature. The new measure thus inherits the advantages of both statistical and geometric approaches. The proposed similarity method is applied to human face images. The novel measure outperforms the classical SSIM in detecting image similarity at low PSNR, with a significant difference in performance.
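The abstract does not spell out how the two parts are combined. One plausible sketch, entirely my own construction, multiplies a normalized Manhattan (mean absolute difference) term by an SSIM-style agreement of the two standard deviations, so both parts lie in [0, 1] and the product is 1 only for identical images:

```python
import numpy as np

def manhattan_std_similarity(x, y, eps=1e-12):
    """Geometric part: 1 minus the Manhattan (mean absolute) distance
    normalized by the 255 gray-level range. Statistical part: agreement
    of the two standard deviations, in the style of SSIM's contrast term."""
    x, y = x.astype(float), y.astype(float)
    geo = 1.0 - np.abs(x - y).mean() / 255.0
    sx, sy = x.std(), y.std()
    stat = (2.0 * sx * sy + eps) / (sx ** 2 + sy ** 2 + eps)
    return geo * stat

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(32, 32))
noisy = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)
```

The geometric term degrades gracefully with additive noise, which is consistent with the claimed robustness at low PSNR, but the exact formulation in the paper may differ.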


Author(s): Affan Alim, Imran Naseem, Roberto Togneri, Mohammed Bennamoun

In this paper, we propose a consolidated framework for the automatic selection of the most discriminant subbands for the problem of face recognition. Essentially, the face images are transformed into textures using the local binary pattern (LBP) approach; these texturized faces then undergo wavelet packet decomposition, resulting in several subband images. We propose to use energy features to effectively represent these subband images. The underlying statistical patterns of the data are harnessed in the form of information-theoretic metrics to select the most discriminant subbands. The proposed algorithms are extensively evaluated on several standard databases and are shown to consistently pick the most significant subbands, resulting in better performance. The proposed algorithms are entirely generic and do not depend on the choice of features and/or classifiers.
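A minimal sketch of the texturize-then-decompose pipeline, using a basic 3x3 LBP and a one-level Haar split in place of a full wavelet packet decomposition; both simplifications are mine, and the subband energy here is simply the sum of squared coefficients:

```python
import numpy as np

def lbp_texture(img):
    """Basic 3x3 local binary pattern: each interior pixel is encoded by
    comparing its 8 neighbours against the centre pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(int) << bit
    return code

def haar_subbands(x):
    """One-level 2D Haar split into LL, LH, HL, HH subbands."""
    lo, hi = (x[0::2, :] + x[1::2, :]) / 2, (x[0::2, :] - x[1::2, :]) / 2
    split = lambda z: ((z[:, 0::2] + z[:, 1::2]) / 2,
                       (z[:, 0::2] - z[:, 1::2]) / 2)
    (ll, lh), (hl, hh) = split(lo), split(hi)
    return {"LL": ll, "LH": lh, "HL": hl, "HH": hh}

def subband_energies(img):
    """Energy feature (sum of squared coefficients) per subband of the
    LBP texture image."""
    tex = lbp_texture(img).astype(float)
    tex = tex[:tex.shape[0] // 2 * 2, :tex.shape[1] // 2 * 2]  # even dims
    return {k: float((v ** 2).sum()) for k, v in haar_subbands(tex).items()}

rng = np.random.default_rng(2)
face = rng.integers(0, 256, size=(32, 32))
energies = subband_energies(face)
```

A full wavelet packet decomposition would recurse the split into every subband; the information-theoretic selection step would then rank these energy features across classes.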


2021, pp. 1-16
Author(s): G. Rajeswari, P. Ithaya Rani

Facial occlusions such as sunglasses, masks, and caps have severe consequences when reconstructing the occluded regions of a facial image. This paper proposes a novel hybrid machine learning approach for occlusion removal based on the structural similarity index measure (SSIM) and principal component analysis (PCA), called SSIM_PCA. The proposed system comprises two stages. In the first stage, a Face Similar Matrix (FSM), guided by SSIM, is generated to provide the information needed to recover the lost regions of the face image; the FSM yields Related Face (RF) images similar to the probe image. In the second stage, these RF images are used as input data to generate eigenspaces using PCA, and the occluded face region is reconstructed by exploiting the relationship between the occluded region and the related face images, which contain relevant data for recovering the occluded area. Experimental results on three standard datasets, viz. Caspeal-R1, IMFDB, and FEI, show that the proposed method works well under illumination changes and occlusion of facial images.
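The second stage amounts to fitting a PCA eigenspace to the observed pixels of the probe and reading off the occluded ones. A minimal sketch under assumptions of mine: faces are flattened to vectors, the occlusion mask is known, and the related faces come from the SSIM-guided first stage:

```python
import numpy as np

def pca_inpaint(related, probe, mask, k=3):
    """Fill the occluded (mask == False) pixels of `probe` using an
    eigenspace built from the related-face images.
    related: (n, d) matrix of flattened related faces; probe: (d,)
    flattened probe image; mask: boolean (d,), True where observed."""
    mean = related.mean(axis=0)
    # eigenfaces: top-k right singular vectors of the centred data
    _, _, vt = np.linalg.svd(related - mean, full_matrices=False)
    basis = vt[:k]                                        # (k, d)
    # least-squares fit of the eigenspace to the observed pixels only
    coeffs, *_ = np.linalg.lstsq(basis[:, mask].T,
                                 (probe - mean)[mask], rcond=None)
    recon = mean + coeffs @ basis
    out = probe.copy()
    out[~mask] = recon[~mask]                             # fill occlusion
    return out
```

Fitting the coefficients only on the observed pixels is what lets the related-face eigenspace carry information into the occluded region.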

