Human Gait Recognition: A Single Stream Optimal Deep Learning Features Fusion

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7584
Author(s):  
Faizan Saleem ◽  
Muhammad Attique Khan ◽  
Majed Alhaisoni ◽  
Usman Tariq ◽  
Ammar Armghan ◽  
...  

Human Gait Recognition (HGR) is a biometric technique that has been used for security purposes over the last decade. Gait recognition performance can be affected by factors such as clothing, carried bags, and the walking surface. Identification across differing camera views is a further significant difficulty in HGR. Many conventional and deep learning techniques have been introduced in the literature for HGR; however, the traditional methods are not suitable for large datasets. Therefore, a new framework is proposed for human gait recognition using deep learning and best-feature selection. The proposed framework includes data augmentation, feature extraction, feature selection, feature fusion, and classification. In the augmentation step, three flip operations were used. In the feature extraction step, two pre-trained models were employed: Inception-ResNet-V2 and NASNet Mobile. Both models were fine-tuned and trained using transfer learning on the CASIA B gait dataset. The features of the selected deep models were optimized using a modified three-step whale optimization algorithm, and the best features were chosen. The selected best features were fused using the modified mean absolute deviation extended serial fusion (MDeSF) approach. Finally, classification was performed using several classification algorithms. The experimental process was conducted on the entire CASIA B dataset and achieved an average accuracy of 89.0%. Comparison with existing techniques showed improvements in accuracy, recall rate, and computational time.
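The augmentation and fusion steps described above can be sketched as follows. This is a minimal illustration assuming plain flip operations and serial concatenation of two networks' feature vectors; the paper's MDeSF weighting and whale-optimization selection are not reproduced here:

```python
import numpy as np

def flip_augment(image):
    """Generate three flipped variants of an H x W x C image array,
    as in the augmentation step: horizontal, vertical, and both."""
    horizontal = np.flip(image, axis=1)
    vertical = np.flip(image, axis=0)
    both = np.flip(image, axis=(0, 1))
    return [horizontal, vertical, both]

def serial_fuse(features_a, features_b):
    """Serial fusion: concatenate feature vectors from two networks
    (e.g. Inception-ResNet-V2 and NASNet Mobile) into one vector."""
    return np.concatenate([features_a, features_b])
```

A 1536-dimensional and a 1056-dimensional feature vector would thus fuse into a single 2592-dimensional vector before selection and classification.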

2021 ◽  
Author(s):  
Mathivanan B ◽  
Perumal P

Abstract Gait is an individual biometric behavior that can be recognized at a distance, with applications in social security, forensic detection, and crime prevention. In this paper, an Advanced Deep Belief Neural Network with Black Widow Optimization (ADBNN-BWO) algorithm is developed to identify human emotions from images of walking style. The proposed methodology works in four stages: pre-processing, feature extraction, feature selection, and classification. For pre-processing, contrast enhancement and a median filter are used; Hu moments, GLCM, Fast Scale-Invariant Feature Transform (F-SIFT), and skeleton features are used for feature extraction. Efficient feature extraction is an essential part of this computation. After that, feature selection is performed, and the classification step uses the proposed ADBNN-BWO algorithm. Based on the proposed method, human gait recognition is achieved and used to identify emotions from walking style. The proposed method is validated on open-source gait databases, implemented on the MATLAB platform, and its outputs are evaluated. Moreover, the statistical measures of the proposed method are determined and compared with existing methods: Artificial Neural Network (ANN), Mayfly Algorithm with Particle Swarm Optimization (MA-PSO), Recurrent Neural Network with PSO (RNN-PSO), and Adaptive Neuro-Fuzzy Inference System (ANFIS).
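The pre-processing stage (contrast enhancement followed by median filtering) can be sketched as below. The min-max contrast stretch and the 3x3 window size are assumptions for illustration, not details given in the abstract:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(image):
    """Pre-processing sketch: stretch contrast to the full 0-255 range,
    then apply a 3x3 median filter to suppress salt-and-pepper noise."""
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi > lo:
        img = (img - lo) / (hi - lo) * 255.0
    return median_filter(img, size=3)
```

The median filter replaces each pixel by the median of its 3x3 neighborhood, so isolated noise spikes are removed while edges are largely preserved.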


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7941
Author(s):  
Seemab Khan ◽  
Muhammad Attique Khan ◽  
Majed Alhaisoni ◽  
Usman Tariq ◽  
Hwan-Seung Yong ◽  
...  

Human action recognition (HAR) has gained significant attention recently, as it can be adopted for smart surveillance systems in multimedia. However, HAR is a challenging task because of the variety of human actions in daily life. Various computer vision (CV)-based solutions have been proposed in the literature, but they did not prove successful because of the large video sequences that must be processed in surveillance systems. The problem is exacerbated in the presence of multi-view cameras. Recently, deep learning (DL)-based systems have shown significant success for HAR, even for multi-view camera systems. In this research work, a DL-based design is proposed for HAR. The proposed design consists of multiple steps, including feature mapping, feature fusion, and feature selection. For the initial feature mapping step, two pre-trained models are considered: DenseNet201 and InceptionV3. The extracted deep features are then fused using the Serial-based Extended (SbE) approach, and the best features are selected using kurtosis-controlled weighted KNN. The selected features are classified using several supervised learning algorithms. To show the efficacy of the proposed design, we used several datasets: KTH, IXMAS, WVU, and Hollywood. Experimental results showed that the proposed design achieved accuracies of 99.3%, 97.4%, 99.8%, and 99.9%, respectively, on these datasets. Furthermore, the feature selection step performed better in terms of computational time compared with the state-of-the-art.
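One minimal reading of "kurtosis-controlled" feature selection — keeping feature columns whose sample kurtosis exceeds a threshold — might look like this. The coupling with weighted KNN described in the abstract is omitted, and the threshold value is an assumption:

```python
import numpy as np
from scipy.stats import kurtosis

def select_by_kurtosis(features, threshold):
    """Keep feature columns whose excess kurtosis exceeds a threshold.
    Heavy-tailed (informative, peaky) columns pass; near-uniform
    columns are dropped. `features` is an (n_samples, n_features) array."""
    k = kurtosis(features, axis=0)   # Fisher definition: normal => 0
    mask = k > threshold
    return features[:, mask], mask
```

A column dominated by a few strong activations has high kurtosis, whereas an evenly spread column has negative excess kurtosis, so a threshold near zero separates the two.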


2022 ◽  
Vol 70 (1) ◽  
pp. 343-360
Author(s):  
Asif Mehmood ◽  
Muhammad Attique Khan ◽  
Usman Tariq ◽  
Chang-Won Jeong ◽  
Yunyoung Nam ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6793
Author(s):  
Inzamam Mashood Nasir ◽  
Muhammad Attique Khan ◽  
Mussarat Yasmin ◽  
Jamal Hussain Shah ◽  
Marcin Gabryel ◽  
...  

Documents are stored in digital form across many organizations. Printing this volume of data and filing it in folders instead of storing it digitally is impractical, uneconomical, and ecologically wasteful. An efficient way of retrieving data from digitally stored documents is also required. This article presents a real-time supervised learning technique for document classification based on a deep convolutional neural network (DCNN), which aims to reduce the impact of adverse document image issues such as signatures, marks, logos, and handwritten notes. The proposed technique's major steps include data augmentation, feature extraction using pre-trained neural network models, feature fusion, and feature selection. We propose a novel data augmentation technique, which normalizes the imbalanced dataset using the secondary dataset RVL-CDIP. The DCNN features are extracted using the VGG19 and AlexNet networks. The extracted features are fused, and the fused feature vector is optimized by applying a Pearson correlation coefficient-based technique to select the optimized features while removing the redundant ones. The proposed technique is tested on the Tobacco3482 dataset, where it gives a classification accuracy of 93.1% using a cubic support vector machine classifier, proving the validity of the proposed technique.
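Pearson-correlation-based redundancy removal can be sketched as a greedy filter over the fused feature columns: a column is dropped when it is too strongly correlated with a column already retained. The 0.9 threshold and the greedy first-come ordering are assumptions for illustration:

```python
import numpy as np

def remove_redundant(features, threshold=0.9):
    """Greedily keep feature columns whose absolute Pearson correlation
    with every previously retained column is at most `threshold`.
    `features` is an (n_samples, n_features) array; returns the reduced
    array and the indices of the kept columns."""
    corr = np.abs(np.corrcoef(features, rowvar=False))
    keep = []
    for j in range(corr.shape[0]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return features[:, keep], keep
```

A column that is an exact rescaling of another (correlation 1.0) is always removed, while weakly correlated columns survive.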


2022 ◽  
Vol 70 (2) ◽  
pp. 2113-2130
Author(s):  
Awais Khan ◽  
Muhammad Attique Khan ◽  
Muhammad Younus Javed ◽  
Majed Alhaisoni ◽  
Usman Tariq ◽  
...  

2020 ◽  
Vol 8 (3) ◽  
pp. 234-238
Author(s):  
Nur Choiriyati ◽  
Yandra Arkeman ◽  
Wisnu Ananta Kusuma

An open challenge in bioinformatics is the analysis of metagenomes sequenced from various environments. Several studies demonstrated bacteria classification at the genus level using k-mers for feature extraction, where larger values of k give better accuracy but are costly in terms of computational resources and time. The spaced k-mers method was used to extract sequence features with the pattern 111 1111 10001, where 1 denotes a position that must match and 0 a position that may or may not match. Currently, deep learning provides the best solutions to many problems in image recognition, speech recognition, and natural language processing. In this research, two different deep learning architectures, a Deep Neural Network (DNN) and a Convolutional Neural Network (CNN), were trained for taxonomic classification of metagenome data, with the spaced k-mers method used for feature extraction. The results showed that the DNN classifier reached 90.89% and the CNN classifier 88.89% accuracy at the genus level of taxonomy.
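The spaced k-mers idea can be sketched with a short binary mask: slide the mask along the sequence and keep only the characters at the match (1) positions, discarding the wildcard (0) positions. The toy mask "101" below stands in for the paper's longer pattern:

```python
def spaced_kmers(sequence, mask):
    """Extract spaced k-mers from a DNA string.

    `mask` is a string of '1' (position must match, kept in the feature)
    and '0' (wildcard, dropped). One feature is produced per window
    position, so the feature length equals the number of 1s in the mask."""
    span = len(mask)
    keep = [i for i, bit in enumerate(mask) if bit == "1"]
    return ["".join(sequence[start + i] for i in keep)
            for start in range(len(sequence) - span + 1)]
```

Compared with contiguous k-mers of the same span, the wildcard positions make the features tolerant to point mutations while keeping the feature space smaller.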


2021 ◽  
Author(s):  
Yu Xiang ◽  
Liwei Hu ◽  
Jun Zhang ◽  
Wenyong Wang

Abstract The perception of the geometric features of airfoils is fundamental in aerodynamics for performance prediction, parameterization, aircraft inverse design, and related tasks. There are three approaches to perceiving the geometric shape of an airfoil: manual design of airfoil geometry parameters, polynomial definition, and deep learning. The first two methods can directly extract geometric features of airfoils or polynomial equations of airfoil curves, but the number of features extracted is limited. Deep learning algorithms can extract a large number of potential features (called latent features); however, the features extracted by deep learning lack explicit geometric meaning. Motivated by the advantages of polynomial definition and deep learning, we propose a geometry-based deep learning feature extraction scheme for airfoils (named Bézier-based feature extraction, BFE), which consists of two parts: manifold metric feature extraction and a geometry-feature fusion encoder (GF encoder). Manifold metric feature extraction, with the help of the Bézier curve, captures features from the tangent space of airfoil curves, and the GF encoder combines airfoil coordinate data and manifold metrics to form a novel feature representation. The public UIUC airfoil dataset is used to verify the proposed BFE. Compared with a classic auto-encoder, the mean square error (MSE) of BFE is reduced by 17.97%–29.14%.
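The tangent information that the manifold-metric step draws from Bézier-fitted airfoil curves can be illustrated with De Casteljau evaluation and the standard derivative formula for Bézier curves. This is a generic sketch of those two primitives under that assumption, not the paper's full BFE pipeline:

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] using
    De Casteljau's algorithm (repeated linear interpolation)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_tangent(control_points, t):
    """Tangent (first derivative) of a degree-n Bézier curve:
    n times the degree-(n-1) Bézier curve of the forward differences
    of the control points."""
    pts = np.asarray(control_points, dtype=float)
    n = len(pts) - 1
    return n * bezier_point(pts[1:] - pts[:-1], t)
```

Sampling `bezier_tangent` along a fitted airfoil curve yields the tangent-space quantities that a scheme like this could feed into an encoder alongside the raw coordinates.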

