Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

2015, Vol 2015, pp. 1-9
Author(s):  
Gayathri Rajagopal, Ramamoorthy Palaniswamy

This research proposes a multimodal multifeature biometric system for human recognition using two traits: palmprint and iris. The purpose of this research is to analyse the integration of a multimodal, multifeature biometric system using feature-level fusion to achieve better performance; the main aim of the proposed system is to increase recognition accuracy. The features at the feature level are raw biometric data, which contain richer information than what is available at the decision and matching-score levels, so information fused at the feature level is expected to yield improved recognition accuracy. However, feature-level fusion suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the high-dimensional feature sets. The proposed multimodal results were compared with other multimodal and monomodal approaches; among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the proposed multimodal biometric system. The proposed algorithm is tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
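The pipeline described above, concatenating per-trait feature vectors and then reducing them with PCA, can be sketched as follows. This is a minimal illustration with randomly generated stand-in features; the array names, sizes, and component count are all hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature matrices: 20 samples each of palmprint and iris
# feature vectors produced by some upstream extraction step.
palm_features = rng.normal(size=(20, 64))
iris_features = rng.normal(size=(20, 48))

# Feature-level fusion: concatenate the raw feature vectors per sample.
fused = np.hstack([palm_features, iris_features])   # shape (20, 112)

def pca_reduce(X, k):
    """Centre the data, eigendecompose its covariance, and project
    onto the top-k principal components to curb dimensionality."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues ascending
    top = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors
    return Xc @ top

reduced = pca_reduce(fused, k=10)
print(reduced.shape)  # (20, 10)
```

The reduced vectors would then feed a classifier such as the KNN named in the title.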

2021, Vol 2021, pp. 1-10
Author(s):  
Rabab A. Rasool

The design of a robust human identification system is in high demand in modern applications such as internet banking and security, where the multifeature biometric system, also called a feature-fusion biometric system, is a common solution that increases system reliability and improves recognition accuracy. This paper implements a comprehensive comparison between two fusion methods, feature-level fusion and score-level fusion, to determine which better improves overall system performance. The comparison takes into consideration the image quality of the six combination datasets as well as the type of feature extraction method applied. Four feature extraction methods, local binary pattern (LBP), gray-level co-occurrence matrix (GLCM), principal component analysis (PCA), and Fourier descriptors (FDs), are applied separately to generate the face-iris machine vector dataset. The experimental results highlight that recognition accuracy is significantly improved when a texture descriptor such as LBP or a statistical method such as PCA is utilized with score-level rather than feature-level fusion, for all combination datasets. The maximum recognition accuracy, 97.53%, is obtained with LBP and score-level fusion, where the Euclidean distance (ED) is used to measure the maximum accuracy rate at the minimum equal error rate (EER) value.
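Score-level fusion of the kind compared here can be sketched as: convert each modality's Euclidean distance into a similarity score, normalise the per-modality scores, and combine them with a simple rule such as the sum rule. The score values, the distance-to-similarity mapping, and the sum rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def match_score(probe, gallery):
    # Map Euclidean distance to a similarity score in (0, 1].
    return 1.0 / (1.0 + np.linalg.norm(probe - gallery))

def min_max(scores):
    # Min-max normalisation so scores from different matchers are comparable.
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Hypothetical per-modality scores for four gallery identities.
face_scores = [0.31, 0.72, 0.55, 0.18]
iris_scores = [0.42, 0.88, 0.47, 0.25]

# Score-level fusion with the sum rule after normalisation.
fused = min_max(face_scores) + min_max(iris_scores)
best = int(np.argmax(fused))
print(best)  # identity 1 has the highest fused score
```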


Author(s):  
Milind E Rane, Umesh S Bhadade

The paper proposes a t-norm-based matching-score fusion approach for a multimodal heterogeneous biometric recognition system. A two-trait multimodal recognition system is developed using the palmprint and face biometric traits. First, the palmprint and face images are pre-processed and features are extracted; the matching score of each trait is then calculated using the correlation coefficient, and the matching scores are combined using t-norm-based score-level fusion. Face databases (Face 94, Face 95, Face 96, FERET, FRGC) and the IITD palmprint database are used for training and testing the algorithm. The results of the experimentation show that the proposed algorithm provides a Genuine Acceptance Rate (GAR) of 99.7% at a False Acceptance Rate (FAR) of 0.1% and a GAR of 99.2% at a FAR of 0.01%, significantly improving the accuracy of the biometric recognition system. The proposed algorithm provides 0.53% more accuracy at a FAR of 0.1% and 2.77% more accuracy at a FAR of 0.01% when compared to existing works.
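A t-norm combines two scores in [0, 1] into one fused score. As a minimal sketch (the paper does not specify which t-norm family or parameters it uses, so the Hamacher form, the parameter value, and the decision threshold below are all assumptions):

```python
def product_tnorm(a, b):
    # The simplest t-norm: plain product.
    return a * b

def hamacher_tnorm(a, b, p=0.5):
    # Hamacher-family t-norm; p >= 0 is a hypothetical tuning parameter.
    if a == 0 and b == 0:
        return 0.0
    return (a * b) / (p + (1 - p) * (a + b - a * b))

# Illustrative correlation-coefficient matching scores in [0, 1].
palm_score, face_score = 0.82, 0.91
fused = hamacher_tnorm(palm_score, face_score)
decision = "genuine" if fused >= 0.6 else "impostor"  # threshold is illustrative
```

Different t-norms trade off how strongly a weak score from one trait pulls the fused score down, which is why the choice matters for GAR/FAR behaviour.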


2020, Vol 8 (5), pp. 2522-2527

In this paper, we design a method for fingerprint and iris recognition using feature-level fusion and decision-level fusion in a children's multimodal biometric system. Initially, Histogram of Gradients (HOG), Gabor, and maximum filter response features are extracted from both the fingerprint and iris domains and considered for identification accuracy. Combining the feature vectors of all possible features is recommended for the fusion of the biometric traits. Principal Component Analysis (PCA) is used to select features for the fusion vector. The reduced features are fed into the fusion classifiers: K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Naive Bayes (NB). The suitable combination of features and fusion classifiers for the children's multimodal biometric system is identified. Experiments conducted on a children's fingerprint and iris database reveal that the fusion combination outperforms the individual traits. In addition, the proposed model advances on the unimodal biometric system.
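The final classification step over PCA-reduced fused features can be sketched with a minimal KNN. The two-dimensional toy features, labels, and probe below are illustrative stand-ins, not data from the paper.

```python
import numpy as np

def knn_predict(train_X, train_y, probe, k=3):
    # k-NN on fused feature vectors: majority label among the k
    # nearest training samples by Euclidean distance.
    d = np.linalg.norm(train_X - probe, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return int(labels[np.argmax(counts)])

# Hypothetical fused fingerprint+iris feature vectors for two subjects.
train_X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_y = np.array([0, 0, 1, 1])

pred = knn_predict(train_X, train_y, np.array([0.85, 0.85]))
print(pred)  # 1
```

SVM and Naive Bayes would consume the same reduced feature vectors; only the decision rule differs.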


2021
Author(s):  
Zhibing Xie

Understanding human emotional states is indispensable for our daily interaction, and we can enjoy a more natural and friendly human-computer interaction (HCI) experience by fully utilizing human affective states. In the application of emotion recognition, multimodal information fusion is widely used to discover the relationships of multiple information sources and make joint use of a number of channels, such as speech, facial expression, gesture, and physiological processes. This thesis proposes a new framework of emotion recognition using information fusion based on the estimation of information entropy. The novel techniques of information-theoretic learning are applied to feature-level fusion and score-level fusion. The most critical issues for feature-level fusion are feature transformation and dimensionality reduction. The existing methods depend on second-order statistics, which are only optimal for Gaussian-like distributions. By incorporating information-theoretic tools, a new feature-level fusion method based on kernel entropy component analysis is proposed. For score-level fusion, most previous methods focus on predefined rule-based approaches, which are usually heuristic. In this thesis, a connection between information fusion and the maximum correntropy criterion is established for effective score-level fusion. The feature-level and score-level fusion methods are then combined to introduce a two-stage fusion platform. The proposed methods are applied to audiovisual emotion recognition, and their effectiveness is evaluated by experiments on two publicly available audiovisual emotion databases. The experimental results demonstrate that the proposed algorithms achieve improved performance in comparison with the existing methods.
The work of this thesis offers a promising direction to design more advanced emotion recognition systems based on multimodal information fusion and has great significance to the development of intelligent human computer interaction systems.
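The kernel entropy component analysis (KECA) step named above can be sketched as: build a Gaussian kernel matrix over the samples, eigendecompose it, and keep the axes that contribute most to the Renyi entropy estimate, i.e. those with the largest values of lambda_i * (1^T e_i)^2, rather than simply the largest eigenvalues as in kernel PCA. The data, kernel width, and component count below are illustrative assumptions.

```python
import numpy as np

def keca(X, k, sigma=1.0):
    """Minimal KECA sketch: project onto the k kernel eigen-axes with the
    largest entropy contribution lambda_i * (1^T e_i)^2."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T      # pairwise sq. distances
    K = np.exp(-d2 / (2 * sigma**2))                  # Gaussian kernel matrix
    vals, vecs = np.linalg.eigh(K)
    contrib = vals * (vecs.sum(axis=0) ** 2)          # entropy contribution
    order = np.argsort(contrib)[::-1][:k]
    # Projection of the (implicit) feature-space data onto the chosen axes.
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))   # stand-in for fused audiovisual features
Z = keca(X, k=3)
print(Z.shape)  # (30, 3)
```

The entropy-based ranking is the distinguishing design choice: an axis with a large eigenvalue but near-zero mean projection carries little entropy and is skipped.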


2021
Author(s):  
Santham Bharathy Alagarsamy, Kalpana Murugan

Abstract: A multimodal biometric system uses more than one biometric modality of an individual to mitigate some of the limitations of a unimodal biometric system and to improve its accuracy, security, and so on. In this paper, an integrated multimodal biometric system is proposed for the identification of people using the ear and face as input, employing pre-processing, ring projection, data normalization, AARK threshold segmentation, DWT feature extraction, and classifiers. Afterwards, the individual matches gathered from the different modalities produce the individual scores, and these final results are then used to certify the individual as genuine or an impostor. In the experiments, the proposed framework achieved better results than the individual ear and face biometrics tested. On the IIT Delhi ear database and the ORL face database, the proposed framework was verified and showed an identification accuracy of 96.24%.
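The DWT feature-extraction step can be illustrated with one level of a 2D Haar transform, the simplest wavelet: average/difference along rows, then along columns, yielding the LL (approximation) and LH/HL/HH (detail) sub-bands. The 8x8 input and the choice of the flattened LL band as the feature vector are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar DWT: returns (LL, LH, HL, HH) sub-bands,
    each half the size of the input in both dimensions."""
    img = img.astype(float)
    # Transform along rows: pairwise averages (low-pass) and differences.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2
    hi = (img[:, 0::2] - img[:, 1::2]) / 2
    # Transform along columns.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2
    LH = (lo[0::2, :] - lo[1::2, :]) / 2
    HL = (hi[0::2, :] + hi[1::2, :]) / 2
    HH = (hi[0::2, :] - hi[1::2, :]) / 2
    return LL, LH, HL, HH

img = np.arange(64).reshape(8, 8)   # stand-in for a pre-processed ear/face patch
LL, LH, HL, HH = haar_dwt2(img)
feature_vector = LL.ravel()         # flattened approximation band as the feature
print(LL.shape)  # (4, 4)
```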


Author(s):  
Norah Abdullah Al-johani, Lamiaa A. Elrefaei

Advancements in biometrics have attained relatively high recognition rates. However, the need remains for a biometric system that is reliable, robust, and convenient. Systems that use palmprints (PP) for verification have a number of benefits, including stable line features, reduced distortion, and simple self-positioning. Dorsal hand veins (DHVs) are distinctive for every person, such that even identical twins have different DHVs, and they appear to remain stable over time. In the past, palmprint (PP) and dorsal hand vein (DHV) systems were implemented with different handcrafted feature algorithms. The advancements of deep learning (DL) in the features learned by convolutional neural networks (CNNs) have led to its application in PP and DHV recognition systems. In this article, a multimodal biometric system based on PP and DHV using CNN models (VGG16, VGG19, and AlexNet) is proposed. The proposed system uses two approaches: feature-level fusion (FLF) and score-level fusion (SLF). In the first approach, the features from PP and DHV are extracted with the CNN models, fused using serial or parallel fusion, and used to train error-correcting output codes (ECOC) with a support vector machine (SVM) for classification. In the second approach, fusion at the score level is done with the sum, max, and product methods by applying two strategies: first, transfer learning that uses the CNN models for feature extraction and classification for PP and DHV, followed by score-level fusion; second, features are extracted with the CNN models for PP and DHV and used to train ECOC with SVM for classification, followed by score-level fusion. The system was tested using two DHV databases and one PP database; the multimodal system was tested twice by pairing the PP database with each DHV database. The system achieved a very high accuracy rate.
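The serial-versus-parallel distinction in the feature-level-fusion approach can be sketched as follows: serial fusion concatenates the two per-sample feature vectors, while parallel fusion (in its common formulation) combines two equal-length vectors as a complex vector x + iy and works with its magnitude. The random stand-in features below replace the real CNN activations, and equal feature lengths are assumed for the parallel case; the article's exact formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
pp_feat = rng.normal(size=(10, 32))   # stand-in CNN features for palmprints
dhv_feat = rng.normal(size=(10, 32))  # stand-in CNN features for dorsal hand veins

# Serial fusion: concatenate the two feature vectors per sample.
serial = np.hstack([pp_feat, dhv_feat])       # shape (10, 64)

# Parallel fusion: combine equal-length vectors as x + i*y and take the
# magnitude as the fused representation (dimensionality is preserved).
parallel = np.abs(pp_feat + 1j * dhv_feat)    # shape (10, 32)

print(serial.shape, parallel.shape)
```

Either fused representation would then train the ECOC/SVM classifier described in the first approach.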

