Viewing Angle Effect on Gait Recognition Using Joint Kinematics

Author(s):  
C.C. Charalambous ◽  
A.A. Bharath
1999 ◽  
Vol 14 (1) ◽  
pp. 74-79
Author(s):  
Hafizur Rahman ◽  
D. A. Quadir ◽  
A. Z.M. Zahedul Islam ◽  
Sukumar Dutta

Author(s):  
Xiaoyan Zhao ◽  
Wenjing Zhang ◽  
Tianyao Zhang ◽  
Zhaohui Zhang ◽  
...  

Gait recognition is a biometric identification method that can be performed at long range and without contact, giving it broad applications in criminal investigation and security inspection. Most existing gait recognition methods adopt the gait energy image (GEI) for feature extraction. However, the GEI discards the dynamic information of gait, so recognition performance is strongly affected by changes in viewing angle and by the subject’s belongings and clothing. To solve these problems, this paper proposes a cross-view gait recognition method that uses a dual-stream network based on the fusion of dynamic and static features (FDSN). First, static features are extracted from the GEI and dynamic features are extracted from the image sequence of the subject’s lower limbs. The two feature streams are then fused, and a nearest-neighbor classifier performs the final classification. Comparative experiments on the CASIA-B dataset, created by the Institute of Automation of the Chinese Academy of Sciences, show that the FDSN achieves a higher recognition rate than a convolutional neural network (CNN) and GaitSet under changes in viewing angle or clothing. In addition, a gait image dataset was collected in a campus setting; the experimental results on this dataset confirm the effectiveness of the FDSN in eliminating the effects of such disruptive changes.
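The static branch described above starts from the gait energy image, which is simply the per-pixel average of the aligned binary silhouettes over one gait cycle. A minimal sketch of that computation (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned binary silhouettes into a GEI.

    silhouettes: array-like of shape (T, H, W) with values in {0, 1},
    assumed already size-normalized and centered (a standard
    preprocessing step, not shown here).
    """
    frames = np.asarray(silhouettes, dtype=np.float64)
    # Pixel-wise average over the gait cycle: bright pixels are body
    # regions that stay static, gray pixels capture limb motion.
    return frames.mean(axis=0)

# Toy example: two 2x2 "silhouettes"
seq = [[[1, 0], [1, 1]],
       [[1, 0], [0, 1]]]
gei = gait_energy_image(seq)
```

The resulting GEI is what the static stream consumes; the dynamic stream instead operates on the raw lower-limb image sequence.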


2019 ◽  
Vol 11 (5) ◽  
pp. 541 ◽  
Author(s):  
Xin Jing ◽  
Larry Leigh ◽  
Cibele Teixeira Pinto ◽  
Dennis Helder

In 2013, the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV) Infrared and Visible Optical Sensors Subgroup (IVOS) established the Radiometric Calibration Network (RadCalNet), consisting of four international test sites providing automated in situ measurements and estimates of propagated top-of-atmosphere (TOA) reflectance. This work evaluates the reliability of RadCalNet TOA reflectance data at three of these sites—RVUS, LCFR, and GONA—using Landsat 7 ETM+, Landsat 8 Operational Land Imager (OLI), and Sentinel-2A/2B (S2A/S2B) MSI TOA reflectance data. A viewing angle effect was identified in the MSI data at the RVUS and LCFR sites; when corrected, the overall standard deviation in relative reflectance differences decreased by approximately 2% and 0.5% at the RVUS and LCFR sites, respectively. Overall, the relative mean differences between the RadCalNet data and the sensor data at the RVUS and GONA sites are within 5% for ETM+, OLI, and S2A MSI, with an approximately 2% higher difference in the S2B MSI data at the RVUS site. The LCFR site differs from the other two, with relative mean differences ranging from approximately -10% to 1% even after the viewing angle correction was applied to the MSI data. The data from RadCalNet are easy to acquire and use, but more effort is needed to understand the behavior at LCFR. One significant improvement to the accuracy of the RadCalNet data would be the development of site-specific BRDF characterization and correction.
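The agreement figures quoted above (relative mean differences and standard deviations) are straightforward per-band statistics on the percentage difference between sensor-measured and RadCalNet-propagated TOA reflectance. A hedged sketch of that bookkeeping (function and variable names are illustrative, not from the study):

```python
import numpy as np

def relative_difference_percent(sensor_toa, radcalnet_toa):
    """Per-band relative difference (%) between sensor-measured TOA
    reflectance and the RadCalNet-propagated reference value."""
    sensor = np.asarray(sensor_toa, dtype=float)
    ref = np.asarray(radcalnet_toa, dtype=float)
    return 100.0 * (sensor - ref) / ref

# Toy two-band example: sensor vs. RadCalNet reference reflectance
diffs = relative_difference_percent([0.105, 0.198], [0.100, 0.200])
mean_diff = diffs.mean()     # relative mean difference across bands
spread = diffs.std(ddof=1)   # sample standard deviation of differences
```

The reported "within 5%" style results are the `mean_diff` statistic per sensor and site; the viewing-angle correction reduces the `spread` term.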


2018 ◽  
Vol 7 (4.7) ◽  
pp. 127
Author(s):  
D. BEULAH DAVID ◽  
M. A.DORAIRANGASWAMY

Gait patterns have been widely used in recent years to authenticate users. Because gait capture is non-intrusive, it is often used as a biometric to make authentication easier and hassle-free. However, the process faces several issues. First, the viewing angle must remain constant, which is difficult to achieve with a limited number of cameras. Walking speed can also alter the way a person walks and cause inconsistencies in identification. Further complications arise if the subject is carrying something, since the added weight can affect the walking pattern; likewise, a recent accident can transform a person’s gait and lead to misidentification. Other biometrics, such as face detection, can be combined with this technique to reduce erroneous identifications. In this paper, we propose a system to overcome viewing-angle discrepancies. The system takes walking sequences as input and processes them into images, which are converted into 3D images by means of stereovision algorithms. With these, we can effectively match the real-time image against the image sequences in the database. Side-face detection can further enhance the accuracy.
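The stereovision step that recovers 3D structure from two camera views rests on disparity estimation: for each pixel in the left image, find the horizontal shift that best matches the right image, then convert shift to depth via depth ∝ focal_length × baseline / disparity. A naive block-matching sketch of this idea (a textbook illustration, not the authors' specific algorithm; names and window sizes are assumptions):

```python
import numpy as np

def disparity_map(left, right, max_disp=4, win=1):
    """Naive block-matching stereo: for each left-image pixel, pick
    the horizontal shift d minimizing the sum of absolute differences
    (SAD) over a (2*win+1)^2 window in the right image."""
    L = np.asarray(left, dtype=float)
    R = np.asarray(right, dtype=float)
    H, W = L.shape
    disp = np.zeros((H, W))
    for y in range(win, H - win):
        for x in range(win, W - win):
            best_cost, best_d = np.inf, 0
            # A left pixel at column x matches a right pixel at x - d
            for d in range(min(max_disp, x - win) + 1):
                patch_l = L[y - win:y + win + 1, x - win:x + win + 1]
                patch_r = R[y - win:y + win + 1, x - d - win:x - d + win + 1]
                cost = np.abs(patch_l - patch_r).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: a horizontal gradient shifted by 2 pixels, so every
# left-image feature sits 2 columns further right than in the right image
left = np.tile(np.arange(8.0), (6, 1))
right = left + 2.0
disp = disparity_map(left, right, max_disp=4, win=1)
```

Real systems use rectified images and far more robust matchers (e.g. semi-global matching), but the recovered disparity/depth is what lets the 3D gait representation be compared across viewpoints.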


In this paper, the authors propose a computationally efficient, robust, and lightweight system for gait recognition. The proposed system has two main stages. In the first stage, a classification network identifies optical-flow corners in the normalized silhouette and computes the distances traveled for every viewpoint; a regression model then uses these distances to identify the viewing angle. In the second stage, a feature extraction network computes the Gait Energy Image (GEI) for every viewpoint and applies Principal Component Analysis (PCA) to extract low-dimensional feature vectors from the GEI images. Finally, a multi-layer perceptron model is trained on the extracted principal components for every viewing angle. The performance of the system is comprehensively evaluated on the CASIA-B and OULP gait datasets. The experimental results demonstrate the superior performance of the proposed system in viewing-angle classification (100% accuracy), gait recognition (100% accuracy for normal walking), computational efficiency, and robustness to clothing and viewing-angle variation.
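The PCA stage of such a pipeline reduces each flattened GEI to a short feature vector by projecting onto the directions of greatest variance, which can be computed from an SVD of the mean-centered data. A minimal sketch of that stage (the feature dimensions and variable names are illustrative, not the paper's actual configuration):

```python
import numpy as np

def pca_features(gei_vectors, n_components=2):
    """Project flattened GEI vectors onto their top principal components.

    gei_vectors: (n_samples, n_pixels) matrix, one flattened GEI per row.
    Returns an (n_samples, n_components) low-dimensional feature matrix.
    """
    X = np.asarray(gei_vectors, dtype=float)
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by singular value,
    # so the first rows capture the most variance.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T

# Toy gallery: 10 subjects with 64-pixel flattened GEIs
rng = np.random.default_rng(0)
gallery = rng.random((10, 64))
feats = pca_features(gallery, n_components=3)
```

The resulting `feats` rows are what a per-viewing-angle classifier (the multi-layer perceptron in the paper) would be trained on.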

