Ear Localization from Side Face Images using Distance Transform and Template Matching

Author(s): Surya Prakash, Umarani Jayaraman, Phalguni Gupta
Author(s): Aman Kamboj, Rajneesh Rani, Aditya Nigam

With growing concern over security, it has become essential to verify identity and track an individual's activities in the modern healthcare sector. Although biometric authentication systems based on different modalities exist, person recognition using the ear has gained much attention because ears are unique. Ear localization is the first step in ear-based biometric authentication systems, and it needs to be accurate, since it plays a crucial role in the overall performance of the system. Localizing the ear in side face images captured in the wild poses great challenges due to varying angles, lighting, scale, background clutter, blur, occlusion, etc. In this chapter, the authors propose the EarLocalizer model, inspired by Faster R-CNN, to localize the ear. The model is evaluated on two wild ear databases, UBEAR-II and USTB-III, and achieves accuracies of 95% and 99.08%, respectively, at IoU (Intersection over Union) = 0.5. These results indicate that the model is robust to varying environmental conditions.
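The localization accuracies above are reported at an IoU threshold of 0.5, meaning a predicted ear box counts as correct when it overlaps the ground-truth box by at least 50%. As an illustrative sketch (not the authors' evaluation code), IoU for two axis-aligned boxes can be computed as follows; the `(x1, y1, x2, y2)` box convention is an assumption:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to 0 when boxes are disjoint.
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

A prediction would then be scored correct when `iou(pred, gt) >= 0.5`.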


2007, Vol 2007, pp. 1-9
Author(s): Koji Iwano, Tomoaki Yoshinaga, Satoshi Tamura, Sadaoki Furui

Author(s): Xiaoli Zhou, Bir Bhanu

This chapter introduces a new video-based recognition system that recognizes non-cooperating individuals at a distance in video when they expose side views to the camera. Information from two biometric sources, side face and gait, is utilized and integrated for recognition. For the side face, an enhanced side face image (ESFI), a higher-resolution image than one obtained directly from a single video frame, is constructed by integrating face information from multiple video frames. For gait, the gait energy image (GEI), a compact spatiotemporal representation of gait in video, is used to characterize human walking properties. Face and gait features are extracted from ESFI and GEI, respectively, and are integrated at both the match-score level and the feature level using different fusion strategies. The system is tested on a database of video sequences of 45 people collected over several months, and the performance of the different fusion methods is compared and analyzed. The experimental results show that (a) constructing ESFI from multiple frames is promising for human recognition in video, and better face features are extracted from ESFI than from the original side face images; (b) synchronization of face and gait is not necessary for the face template ESFI and the gait template GEI; and (c) integrating information from side face and gait is effective for human recognition in video. Overall, the feature-level fusion methods achieve better performance than the match-score-level fusion methods.
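A GEI, as commonly defined in the gait literature, is the pixel-wise average of size-normalized, aligned binary silhouettes over a gait cycle. The following is a minimal sketch of that computation, not the authors' implementation; the assumption is that silhouettes arrive as equally sized 0/1 arrays that have already been segmented and aligned:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned HxW binary silhouettes into one GEI.

    silhouettes: iterable of HxW arrays with values in {0, 1}.
    Returns an HxW float map in [0, 1]; bright pixels mark body regions
    that stay occupied across the gait cycle, dim pixels mark motion.
    """
    frames = np.asarray(list(silhouettes), dtype=np.float64)
    return frames.mean(axis=0)
```

Because the GEI collapses a whole cycle into one template, downstream matching against ESFI face features needs no frame-level synchronization, consistent with finding (b) above.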


2019, Vol 2019 (5), pp. 528-1-528-6
Author(s): Xinwei Liu, Christophe Charrier, Marius Pedersen, Patrick Bours

2014, Vol 1 (3), pp. 23-31
Author(s): Basava Raju, K. Y. Rama Devi, P. V. Kumar, ...
