Robust, real-time people tracking in open environments using integrated stereo, color, and face detection

Author(s):  
T. Darrell ◽  
G. Gordon ◽  
J. Woodfill ◽  
H. Baker ◽  
M. Harville
Author(s):  
Mr. Shubham Ingole

This article describes a real-time technique for face detection, mask detection, and determining vacant-seat availability in a vehicle. Many technologies exist for finding seat availability in a vehicle, but image-processing approaches are especially popular today. Face detection, a branch of image processing, locates human faces within a given area and is used in many applications such as facial recognition, people tracking, and photography. In this paper, face detection is used to determine vacant-seat availability in the vehicle and to check whether each passenger is wearing a mask. A webcam mounted in the vehicle is connected to a Raspberry Pi 3 Model B. When the vehicle leaves the station, the webcam captures images of the passengers in the seating area; the software then adjusts and enhances these images to reduce noise. The system processes the images to count the passengers and, given the vehicle's maximum capacity, calculates the number of available seats. Because mask detection is essential in the COVID-19 situation, the system also detects whether a mask is worn on each face.
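The counting logic the abstract describes (detect faces, count passengers, compare against capacity, flag unmasked faces) can be sketched as follows. This is a minimal illustration only: the detector itself is stubbed out, and the function name `vacant_seats` and the detection-dict format are hypothetical, not taken from the paper.

```python
# Illustrative sketch of the seat-availability logic described above.
# The face/mask detector is stubbed: in the real system it would run on
# webcam frames captured by the Raspberry Pi.

def vacant_seats(capacity, detections):
    """Given per-person detections, return (vacant_seats, unmasked_count).

    `detections` is a list of dicts like {"face": True, "mask": False},
    one per detected person -- a hypothetical format for illustration.
    """
    occupied = sum(1 for d in detections if d.get("face"))
    unmasked = sum(1 for d in detections if d.get("face") and not d.get("mask"))
    return max(capacity - occupied, 0), unmasked

# Example: a 40-seat vehicle with 3 detected passengers, one unmasked.
frame_detections = [
    {"face": True, "mask": True},
    {"face": True, "mask": False},
    {"face": True, "mask": True},
]
vacant, unmasked = vacant_seats(40, frame_detections)
print(vacant, unmasked)  # → 37 1
```

In a deployed system this function would be called once per processed frame, with `detections` produced by the face/mask detection stage.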


2014 ◽  
Vol 971-973 ◽  
pp. 1710-1713
Author(s):  
Wen Huan Wu ◽  
Ying Jun Zhao ◽  
Yong Fei Che

Face detection is the key component of an automatic face recognition system. This paper introduces a face detection algorithm based on a cascade of AdaBoost classifiers and describes how to configure OpenCV in MCVS. Face detection was implemented using OpenCV, and a detailed analysis of the detection results is presented. Experiments show that the method used in this article achieves a high accuracy rate and good real-time performance.
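The cascade structure the paper refers to can be illustrated with a short sketch: each stage takes a weighted vote of weak classifiers against a threshold, and a window is rejected as soon as any stage fails, which is what makes cascades fast. The features, weights, and thresholds below are toy values for illustration; a real detector such as OpenCV's trained Haar cascade uses thousands of learned features across many stages.

```python
# Minimal sketch of an AdaBoost cascade: each stage sums weighted
# weak-classifier votes and rejects the window early if the sum falls
# below the stage threshold. Toy values for illustration only.

def run_cascade(window, stages):
    """Return True if `window` passes every stage of the cascade."""
    for weak_classifiers, threshold in stages:
        score = sum(alpha * h(window) for alpha, h in weak_classifiers)
        if score < threshold:
            return False   # early rejection: most windows exit here
    return True            # survived all stages -> candidate face

# Toy "features": eye regions are darker than cheek regions in faces.
stages = [
    # stage 1: one cheap test
    ([(1.0, lambda w: w["cheek"] > w["eyes"])], 0.5),
    # stage 2: two slightly stronger tests
    ([(0.6, lambda w: w["eyes"] < 90),
      (0.4, lambda w: w["cheek"] > 120)], 0.7),
]

face_like = {"eyes": 60, "cheek": 140}
flat_like = {"eyes": 100, "cheek": 100}
print(run_cascade(face_like, stages))  # → True
print(run_cascade(flat_like, stages))  # → False
```

The early-exit design is the reason cascades run in real time: the cheap first stage discards the overwhelming majority of non-face windows before the more expensive stages are ever evaluated.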


Author(s):  
MING-SHAUNG CHANG ◽  
JUNG-HUA CHOU

In this paper, we design a robust and friendly human–robot interface (HRI) system for our intelligent mobile robot based only on natural human gestures. It consists of a triple-face detection method and a fuzzy logic controller (FLC)-Kalman filter tracking system to locate the users and predict their current position in a dynamic and cluttered working environment. In addition, through a combined classifier of principal component analysis (PCA) and a back-propagation artificial neural network (BPANN), single and successive commands defined by facial positions and hand gestures are identified for real-time command recognition after dynamic programming (DP). The users can therefore instruct the HRI system to perform member recognition or expression recognition corresponding to their gesture commands, based on linear discriminant analysis (LDA) and the BPANN, respectively. The experimental results show that the proposed HRI system performs accurately in real-time face detection and tracking, and reacts robustly to the corresponding gesture commands at eight frames per second (fps).
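The predict/update loop at the core of a Kalman tracker like the one used above can be illustrated with a scalar (1-D position) filter. This is a sketch under simplifying assumptions: the noise values `q` and `r` are arbitrary, the state is position only, and the paper's fuzzy logic controller component is omitted entirely.

```python
# Scalar Kalman filter sketch: predict the face's next position, then
# correct the prediction with each new detection. Noise values (q, r)
# are illustrative, not taken from the paper.

def kalman_step(x, p, z, q=1.0, r=4.0):
    """One predict/update cycle for a 1-D position estimate.

    x, p : prior state estimate and its variance
    z    : new measurement (detected face position)
    """
    # Predict: position assumed roughly constant; uncertainty grows.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Track a face drifting rightward through noisy detections.
x, p = 100.0, 10.0
for z in [102, 104, 103, 107, 109]:
    x, p = kalman_step(x, p, z)
print(round(x, 1))  # estimate pulled toward the recent measurements
```

Between detections, the predict step alone supplies a position estimate, which is what lets a tracker keep following a face through brief occlusions or missed detections.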

