Feature-Level vs. Score-Level Fusion in the Human Identification System

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Rabab A. Rasool

The design of a robust human identification system is in high demand for most modern applications, such as internet banking and security, where the multifeature biometric system, also called a feature fusion biometric system, is a common solution that increases system reliability and improves recognition accuracy. This paper presents a comprehensive comparison between two fusion methods, feature-level fusion and score-level fusion, to determine which method most improves overall system performance. The comparison takes into account the image quality of the six combination datasets as well as the type of feature extraction method applied. Four feature extraction methods, local binary pattern (LBP), gray-level co-occurrence matrix (GLCM), principal component analysis (PCA), and Fourier descriptors (FDs), are applied separately to generate the face-iris feature vector dataset. The experimental results highlight that recognition accuracy is significantly improved when a texture descriptor method such as LBP or a statistical method such as PCA is utilized with score-level rather than feature-level fusion, for all combination datasets. The maximum recognition accuracy, 97.53%, is obtained with LBP and score-level fusion, where Euclidean distance (ED) is used to measure the maximum accuracy rate at the minimum equal error rate (EER) value.
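The two fusion strategies compared above can be sketched for a single face-iris match. This is a minimal illustration, not the paper's pipeline: the 64-bin LBP histograms are random stand-ins, and the equal-weight sum rule for score fusion is an assumption.

```python
import numpy as np

# Hypothetical LBP histograms for the face and iris of one probe and one
# gallery subject (64 bins is an assumption for illustration only).
rng = np.random.default_rng(0)
probe_face, probe_iris = rng.random(64), rng.random(64)
gal_face, gal_iris = rng.random(64), rng.random(64)

def euclidean(a, b):
    """Euclidean distance (ED) between two feature vectors."""
    return float(np.linalg.norm(a - b))

# Feature-level fusion: concatenate the modality vectors, then match once.
feature_level_score = euclidean(
    np.concatenate([probe_face, probe_iris]),
    np.concatenate([gal_face, gal_iris]),
)

# Score-level fusion: match each modality separately, then combine the
# per-modality distances (a simple equal-weight sum rule is assumed).
face_score = euclidean(probe_face, gal_face)
iris_score = euclidean(probe_iris, gal_iris)
score_level_score = 0.5 * face_score + 0.5 * iris_score
```

Note that for Euclidean distance, concatenation makes the feature-level score exactly the root-sum-square of the per-modality scores, so the two schemes weight the modalities differently even before any normalization.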

2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Gayathri Rajagopal ◽  
Ramamoorthy Palaniswamy

This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features at the feature level are raw biometric data, which contain richer information than what is available at the decision and matching-score levels. Hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here, PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. These comparisons show that multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
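The feature-level-fusion-plus-PCA step described above can be sketched in a few lines. This is only an illustration under assumed dimensions (128-dim palmprint and iris vectors, 40 samples, 20 retained components); the PCA here is a plain SVD-based projection, not the paper's exact configuration.

```python
import numpy as np

# Synthetic stand-ins for per-sample palmprint and iris feature vectors
# (40 subjects, 128 dimensions each; all sizes are illustrative).
rng = np.random.default_rng(1)
palm = rng.random((40, 128))
iris = rng.random((40, 128))

# Feature-level fusion by serial concatenation: 256-dim fused vectors.
fused = np.hstack([palm, iris])

# PCA via SVD to mitigate the curse of dimensionality: center the data,
# then project onto the top-20 principal directions.
X = fused - fused.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
reduced = X @ Vt[:20].T  # (40, 20) low-dimensional fused features
```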


Author(s):  
Surinder Kaur ◽  
Gopal Chaudhary ◽  
Javalkar Dinesh Kumar

Nowadays, biometric systems are prevalent for personal recognition. But due to the COVID-19 pandemic, it is difficult to use a touch-based biometric system. To encourage a touchless alternative, a less constrained multimodal personal identification system using palmprint and dorsal hand vein is presented. A hand-based touchless recognition system is more user-friendly and avoids the spread of coronavirus. A method using convolutional neural networks (CNN) to extract discriminative features from the data samples is proposed. A pre-trained PCANet model is used in the experiments to show the performance of the system in a fusion scheme. This method does not require keeping the palm in a specific position or at a certain distance, unlike most other approaches. Different patches of the ROI are used at two different layers of the CNN. Fusion of palmprint and dorsal hand vein is performed for final matching. Both feature-level and score-level fusion methods are compared. Results show accuracies of up to 98.55% and 98.86% and equal error rates (EER) as low as 1.22% and 0.93% for score-level fusion and feature-level fusion, respectively. Our method gives more accurate results in a less constrained environment.
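The EER figures reported above come from sweeping a decision threshold over genuine and impostor match scores until the false accept rate equals the false reject rate. A minimal sketch, with made-up similarity scores (higher means a better match), not the paper's data:

```python
import numpy as np

# Illustrative genuine (same-person) and impostor (different-person) scores.
genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])
impostor = np.array([0.1, 0.3, 0.2, 0.4, 0.05])

# Sweep candidate thresholds over all observed scores.
thresholds = np.sort(np.concatenate([genuine, impostor]))
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects

# EER: operating point where FAR and FRR are closest (equal in the limit).
i = np.argmin(np.abs(far - frr))
eer = (far[i] + frr[i]) / 2  # → 0.0 for these perfectly separable scores
```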


2021 ◽  
Vol 5 (4) ◽  
pp. 229-250
Author(s):  
Chetana Kamlaskar ◽  
Aditya Abhyankar
<abstract><p>Reliable and accurate multimodal biometric person verification demands an effective discriminant feature representation and fusion of the relevant information extracted across multiple biometric modalities. In this paper, we propose feature-level fusion by adopting the concept of canonical correlation analysis (CCA) to fuse the iris and fingerprint feature sets of the same person. The uniqueness of this approach is that it extracts maximally correlated features from the feature sets of both modalities as effective discriminant information within those feature sets. CCA is therefore suitable for analyzing the underlying relationship between two feature spaces and generates more powerful feature vectors by removing redundant information. We demonstrate that efficient multimodal recognition can be achieved with a significant reduction in feature dimensions, lower computational complexity, and recognition time under one second by exploiting CCA-based joint feature fusion and optimization. To evaluate the performance of the proposed system, the left and right irises and the thumb fingerprints from both hands of the SDUMLA-HMT multimodal dataset are considered in this experiment. We show that our proposed approach significantly outperforms unimodal recognition in terms of equal error rate (EER). We also demonstrate that CCA-based feature fusion outperforms match-score-level fusion. Further, an exploration of the correlation between right iris and left fingerprint images (EER of 0.1050%) and left iris and right fingerprint images (EER of 1.4286%) is also presented to consider the effect of feature dominance and laterality of the selected modalities for a robust multimodal biometric system.</p></abstract>


2021 ◽  
Author(s):  
Zhibing Xie

Understanding human emotional states is indispensable for our daily interaction, and we can enjoy a more natural and friendly human-computer interaction (HCI) experience by fully utilizing humans' affective states. In the application of emotion recognition, multimodal information fusion is widely used to discover the relationships of multiple information sources and make joint use of a number of channels, such as speech, facial expression, gesture, and physiological processes. This thesis proposes a new framework of emotion recognition using information fusion based on the estimation of information entropy. The novel techniques of information theoretic learning are applied to feature-level fusion and score-level fusion. The most critical issues for feature-level fusion are feature transformation and dimensionality reduction. The existing methods depend on second-order statistics, which are optimal only for Gaussian-like distributions. By incorporating information theoretic tools, a new feature-level fusion method based on kernel entropy component analysis is proposed. For score-level fusion, most previous methods focus on predefined rule-based approaches, which are usually heuristic. In this thesis, a connection between information fusion and the maximum correntropy criterion is established for effective score-level fusion. Feature-level fusion and score-level fusion methods are then combined to form a two-stage fusion platform. The proposed methods are applied to audiovisual emotion recognition, and their effectiveness is evaluated by experiments on two publicly available audiovisual emotion databases. The experimental results demonstrate that the proposed algorithms achieve improved performance in comparison with the existing methods.
The work of this thesis offers a promising direction to design more advanced emotion recognition systems based on multimodal information fusion and has great significance to the development of intelligent human computer interaction systems.
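The kernel entropy component analysis (KECA) idea mentioned in the thesis abstract selects kernel eigen-directions by their contribution to a Renyi entropy estimate rather than by eigenvalue size alone. A minimal sketch under assumptions (RBF kernel, illustrative data and bandwidth; not the thesis implementation):

```python
import numpy as np

# Illustrative data: 30 samples, 8 features; sigma is an assumed bandwidth.
rng = np.random.default_rng(3)
X = rng.random((30, 8))
sigma = 1.0

# RBF (Gaussian) kernel matrix.
sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))

# Eigendecompose; numpy returns eigenvalues in ascending order.
lam, E = np.linalg.eigh(K)

# Renyi entropy contribution of each eigenpair: lambda_i * (1^T e_i)^2.
contrib = lam * (E.sum(axis=0) ** 2)

# Keep the 5 most entropy-preserving axes (which need not be the ones
# with the largest eigenvalues, unlike kernel PCA).
top = np.argsort(contrib)[::-1][:5]
Phi = E[:, top] * np.sqrt(np.maximum(lam[top], 0.0))  # KECA projection
```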


2021 ◽  
Author(s):  
Santham Bharathy Alagarsamy ◽  
Kalpana Murugan

Abstract A multimodal biometric system uses more than one biometric modality of an individual to mitigate some of the limitations of a unimodal biometric system and to enhance its accuracy, security, and so forth. In this paper, an integrated multimodal biometric system is proposed for the identification of people using ear and face images as input; preprocessing, ring projection, data normalization, AARK threshold segmentation, extraction of DWT features, and classifiers are utilized. Afterward, individual matchers for the different modalities produce the individual scores. The final outcomes are then used to certify the individual as genuine or an impostor. The proposed framework was verified on the IIT Delhi ear database and the ORL face database, showed better results in the experiments than the individual ear and face biometrics tested, and achieved an identification accuracy of 96.24%.


Author(s):  
Norah Abdullah Al-johani ◽  
Lamiaa A. Elrefaei

Advancements in biometrics have attained relatively high recognition rates. However, the need for a biometric system that is reliable, robust, and convenient remains. Systems that use palmprints (PP) for verification have a number of benefits, including stable line features, reduced distortion, and simple self-positioning. Dorsal hand veins (DHVs) are distinctive for every person, such that even identical twins have different DHVs, and they appear to remain stable over time. In the past, different feature algorithms were used to implement palmprint (PP) and dorsal hand vein (DHV) systems, and previous systems relied on handcrafted algorithms. Advances in deep learning (DL), particularly the features learned by convolutional neural networks (CNNs), have led to their application in PP and DHV recognition systems. In this article, a multimodal biometric system based on PP and DHV using CNN models (VGG16, VGG19, and AlexNet) is proposed. The proposed system uses two approaches: feature-level fusion (FLF) and score-level fusion (SLF). In the first approach, the features from PP and DHV are extracted with CNN models; these extracted features are then fused using serial or parallel fusion and used to train error-correcting output codes (ECOC) with a support vector machine (SVM) for classification. In the second approach, fusion at the score level is done with sum, max, and product methods by applying two strategies. In the first strategy, transfer learning uses CNN models for feature extraction and classification for PP and DHV, followed by score-level fusion. In the second strategy, features are extracted with CNN models for PP and DHV and used to train ECOC with SVM for classification, followed by score-level fusion. The system was tested using two DHV databases and one PP database; the multimodal system is tested twice by pairing the PP database with each DHV database. The system achieved a very high accuracy rate.
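The sum, max, and product score-fusion rules named above can be sketched for a single probe. The per-class similarity scores are made up, and the min-max normalization before combining is a common convention assumed here, not necessarily the article's exact scheme.

```python
import numpy as np

# Illustrative per-class similarity scores from the PP and DHV matchers
# for one probe against a 3-subject gallery (values are made up).
pp_scores = np.array([0.2, 0.9, 0.4])
dhv_scores = np.array([0.3, 0.7, 0.1])

def minmax(s):
    """Min-max normalize scores to [0, 1] so modalities are comparable."""
    return (s - s.min()) / (s.max() - s.min())

pp_n, dhv_n = minmax(pp_scores), minmax(dhv_scores)

# The three classic combination rules, each followed by an argmax decision.
fused = {
    "sum": pp_n + dhv_n,
    "max": np.maximum(pp_n, dhv_n),
    "product": pp_n * dhv_n,
}
decisions = {rule: int(np.argmax(s)) for rule, s in fused.items()}
# Here all three rules agree on subject index 1.
```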

