High-Performance Rotation Invariant Multiview Face Detection

2007 ◽  
Vol 29 (4) ◽  
pp. 671-686 ◽  
Author(s):  
Chang Huang ◽  
Haizhou Ai ◽  
Yuan Li ◽  
Shihong Lao

1997 ◽  
Author(s):  
Henry A. Rowley ◽  
Shumeet Baluja ◽  
Takeo Kanade

Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 558
Author(s):  
Anping Song ◽  
Xiaokang Xu ◽  
Xinyi Zhai

Rotation-invariant face detection (RIFD) has been widely used in practical applications; however, the problem of adjusting for the rotation-in-plane (RIP) angle of the human face still remains. Recently, several methods based on neural networks have been proposed to solve the RIP angle problem. However, these methods have various limitations, including low detection speed, large model size, and limited detection accuracy. To solve these problems, we propose a new network, called the Searching Architecture Calibration Network (SACN), which utilizes architecture search, a fully convolutional network (FCN), and bounding box center clustering (CC). SACN was tested on the challenging Multi-Oriented Face Detection Data Set and Benchmark (MOFDDB) and achieved higher detection accuracy at almost the same speed as existing detectors. Moreover, the average angle error is reduced from the current 12.6° to 10.5°.
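The headline metric above is an average RIP angle error in degrees. The abstract does not specify the evaluation protocol, so the function below is only a minimal sketch of how such a metric might be computed; the wraparound handling and the name `mean_angle_error` are assumptions, not the paper's definition.

```python
import numpy as np

def mean_angle_error(pred_deg, true_deg):
    """Mean absolute rotation-in-plane angle error in degrees.

    Differences are wrapped into [-180, 180) so that, e.g.,
    a prediction of 359 deg against a ground truth of 1 deg
    counts as a 2 deg error rather than 358 deg.
    """
    pred = np.asarray(pred_deg, dtype=float)
    true = np.asarray(true_deg, dtype=float)
    diff = (pred - true + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return float(np.abs(diff).mean())
```

For instance, `mean_angle_error([359.0], [1.0])` evaluates to 2.0 under this convention.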


2018 ◽  
Vol 10 (8) ◽  
pp. 80
Author(s):  
Lei Zhang ◽  
Xiaoli Zhi

Convolutional neural networks (CNNs) have made great progress in face detection. Most take computation-intensive networks as the backbone in order to obtain high precision, and they cannot reach good detection speed without the support of high-performance GPUs (Graphics Processing Units). This limits CNN-based face detection algorithms in real applications, especially speed-dependent ones. To alleviate this problem, we propose a lightweight face detector in this paper, which takes a fast residual network as its backbone. Our method can run fast even on cheap, ordinary GPUs. To guarantee its detection precision, multi-scale features and multi-context are fully exploited in efficient ways. Specifically, feature fusion is first used to obtain semantically strong multi-scale features. Then multi-context, including both local and global context, is added to these multi-scale features without extra computational burden: the local context is added through a depthwise-separable-convolution-based approach, and the global context through simple global average pooling. Experimental results show that our method can run at about 110 fps on VGA (Video Graphics Array)-resolution images, while still maintaining competitive precision on the WIDER FACE and FDDB (Face Detection Data Set and Benchmark) datasets as compared with its state-of-the-art counterparts.
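The two context mechanisms named in the abstract are standard operations. Below is a minimal NumPy sketch of them in isolation; the shapes, 3×3 kernel size, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise 3x3 convolution (one kernel per channel, zero
    padding, stride 1) followed by a 1x1 pointwise convolution.

    x:          feature map, shape (C, H, W)
    dw_kernels: per-channel kernels, shape (C, 3, 3)
    pw_weights: channel-mixing weights, shape (C_out, C)
    """
    C, H, W = x.shape
    padded = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    dw = np.zeros_like(x)
    for c in range(C):                      # each channel filtered independently
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(padded[c, i:i + 3, j:j + 3] * dw_kernels[c])
    # 1x1 pointwise conv = linear mix across channels at every position
    return np.tensordot(pw_weights, dw, axes=([1], [0]))

def add_global_context(x):
    """Global average pooling, broadcast back over the map and
    added to every spatial position."""
    gap = x.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1)
    return x + gap
```

With delta (center-one) depthwise kernels and an identity pointwise matrix, `depthwise_separable_conv` returns its input unchanged, which is a convenient sanity check for the padding and indexing.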


IERI Procedia ◽  
2014 ◽  
Vol 6 ◽  
pp. 33-38
Author(s):  
Jeahoon Choi ◽  
Seong Joon Yoo ◽  
Sung Wook Baik ◽  
Ho Chul Shin ◽  
Dongil Han
