Hand Region Extraction by Background Subtraction with Renewable Background for Hand Gesture Recognition

Author(s): Akio Ogihara, Hiroshi Matsumoto, Akira Shiozaki

2014, Vol 2014, pp. 1-9
Author(s): Zhi-hua Chen, Jung-Tae Kim, Jianning Liang, Jing Zhang, Yu-Bo Yuan

Hand gesture recognition is very significant for human-computer interaction. In this work, we present a novel real-time method for hand gesture recognition. In our framework, the hand region is extracted from the background with the background subtraction method. Then, the palm and fingers are segmented so as to detect and recognize the fingers. Finally, a rule classifier is applied to predict the labels of hand gestures. Experiments on a data set of 1300 images show that our method performs well and is highly efficient. Moreover, our method shows better performance than a state-of-the-art method on another data set of hand gestures.
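The background-subtraction step described above can be sketched in a few lines. This is a minimal illustrative version with a static (non-renewed) background model and assumed names, not the authors' implementation:

```python
import numpy as np

def extract_hand_mask(frame, background, thresh=30):
    """Foreground mask by absolute difference against a background frame.

    frame, background: uint8 grayscale arrays of the same shape.
    thresh: minimum per-pixel intensity difference counted as foreground.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8) * 255

# Toy usage: a flat background and a frame with a bright "hand" patch.
bg = np.full((5, 5), 50, dtype=np.uint8)
fr = bg.copy()
fr[1:4, 1:4] = 200                 # simulated hand region
mask = extract_hand_mask(fr, bg)   # 255 inside the patch, 0 elsewhere
```

In practice the resulting mask would be cleaned with morphological operations before the palm/finger segmentation step.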


Sensors, 2021, Vol 21 (19), pp. 6525
Author(s): Beiwei Zhang, Yudong Zhang, Jinliang Liu, Bin Wang

Gesture recognition has been studied for decades and still remains an open problem. One important reason is that the features representing those gestures are not sufficient, which may lead to poor performance and weak robustness. Therefore, this work aims to develop a comprehensive and discriminative feature for hand gesture recognition. Here, a distinctive Fingertip Gradient orientation with Finger Fourier (FGFF) descriptor and modified Hu moments are proposed on the Kinect sensor platform. Firstly, two algorithms are designed to extract the fingertip-emphasized features, including the palm center, fingertips, and their gradient orientations, followed by the finger-emphasized Fourier descriptor to construct the FGFF descriptors. Then, modified Hu moment invariants with much lower exponents are discussed to encode contour-emphasized structure in the hand region. Finally, a weighted AdaBoost classifier is built based on the finger-earth mover's distance and SVM models to realize hand gesture recognition. Extensive experiments on a ten-gesture dataset were carried out, comparing the proposed algorithm with three benchmark methods to validate its performance. Encouraging results were obtained in terms of recognition accuracy and efficiency.
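The Hu-moment component can be illustrated from first principles. The sketch below computes the first two classical Hu invariants with numpy; the paper's *modified* moments with lower exponents are not reproduced here, and all names are illustrative:

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grayscale/binary image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def hu_first_two(img):
    """First two classical Hu invariants from normalized central moments."""
    m00 = img.sum()
    eta = lambda p, q: central_moment(img, p, q) / m00 ** (1 + (p + q) / 2.0)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return h1, h2
```

Because they are built from central moments, both invariants are unchanged when the hand region is translated within the frame, which is what makes them useful as contour-emphasized shape features.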


Gestures are among the simplest ways of conveying a message, often simpler than verbal means, and are the most primitive form of conversation. Gestures can also be the easiest and most intuitive way of communicating with a computer: they can be used to convey information to computers, robots, smart appliances, and many other pieces of machinery, and can eliminate the use of the mouse and keyboard to some extent. The gestures cited are essentially the variable positions and orientations of the hand, detected by a simple webcam attached to the computer. The image is first converted into its corresponding RGB values and then to HSV values for better handling and feature recognition. The hand is separated from the background using feature extraction, and the values are matched against the coded values. The region of interest is then calculated using the concepts of convexity and background subtraction; the convexity defects help to define the contour efficiently. This method is invariant to the position and orientation of the gesture, and it is able to detect the number of fingers individually and efficiently.
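The finger counting described above can be approximated without a full convex-hull/convexity-defect computation: fingertips appear as peaks in the contour's distance-from-palm-center profile. The numpy sketch below uses that simplified peak-counting idea as a stand-in; the function name and threshold are assumptions, not the text's exact method:

```python
import numpy as np

def count_fingertips(radii, rel_thresh=0.8):
    """Count fingertip peaks in a circular radial-distance profile.

    radii: distances from the palm centre to successive contour points
           sampled around the hand outline (treated as circular).
    A point counts as a fingertip if it is a local maximum and exceeds
    rel_thresh * max(radii).
    """
    r = np.asarray(radii, dtype=float)
    prev, nxt = np.roll(r, 1), np.roll(r, -1)
    peaks = (r > prev) & (r >= nxt) & (r > rel_thresh * r.max())
    return int(peaks.sum())
```

Because the profile is measured relative to the palm center, the count is insensitive to where the hand sits in the frame, mirroring the position invariance claimed above.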


2013, Vol 11 (5), pp. 2634-2640
Author(s): Hazem Khaled Mohamed, S. Sayed, El Sayed Mostafa, Hossam Ali

This paper introduces a hand gesture recognition algorithm for human-computer interaction using real-time video streaming. The background subtraction technique is used to extract the ROI (Region Of Interest) of the hand. Fingertips are detected using logical heuristic equations applied to the hand contour, convex hull, and convexity-defect points. A combination of background subtraction and the logical heuristic technique that leads to more accurate results is introduced. Experimental results show that the proposed algorithm improves fingertip detection by 52% compared to the reference model.


2019, Vol 17 (1), pp. 137-145
Author(s): Tukhtaev Sokhib, Taeg Keun Whangbo

Kinect is a promising acquisition device that provides useful information on a scene through color and depth data. There has been keen interest in utilizing Kinect in many computer vision areas, such as gesture recognition. Given the advantages that Kinect provides, hand gesture recognition can be deployed efficiently with minor drawbacks. This paper proposes a simple yet efficient way of recognizing hand gestures by segmenting the hand region from both color and depth data acquired by Kinect v1. The Inception image recognition model is used to check the reliability of the proposed method. Experimental results are derived from a sample dataset of Microsoft Kinect hand acquisitions. Under appropriate conditions, it is possible to achieve high accuracy in close to real time.
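The depth side of such a segmentation can be sketched as a simple band-pass on Kinect depth values. The 400-800 mm band below is an illustrative assumption (Kinect v1's near limit is roughly 0.4 m), not the paper's calibrated range:

```python
import numpy as np

def segment_hand_depth(depth_mm, near=400, far=800):
    """Binary hand mask: keep pixels whose depth (mm) lies in [near, far].

    Assumes the hand is the closest object to the sensor, so a near
    depth band isolates it before fusing with the color segmentation.
    """
    d = np.asarray(depth_mm)
    return ((d >= near) & (d <= far)).astype(np.uint8)
```

The resulting mask would typically be intersected with a skin-color mask from the RGB stream to produce the final hand region.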


Computer vision has received great attention in recent years, as it identifies and processes images much as human vision does and provides suitable output. In computer vision, hand gesture recognition is one of the important and fundamental problems. The hand gesture recognition system has gained significant importance in the past few years because of its manifold applications. This paper aims to give a new approach for vision-based, fast, real-time hand gesture recognition that can be used in many HCI applications. The proposed algorithm first detects and segments the hand region and, using our innovative approach, finds the fingers and classifies the gesture. The proposed algorithm is invariant to orientation, hand position, and distance from the webcam. Based on this algorithm, we have progressively developed a gesture-based mathematical tool (calculator) as a practical application.


2020, Vol 17 (4), pp. 497-506
Author(s): Sunil Patel, Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity among gesture classes, low resolution, and the fact that gestures are performed by the fingers. Because of these challenges, many researchers focus on this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical-flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. We calculate optical flow from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of the visual similarity of the hand gesture in the unsegmented input stream. The CTC network finds the most probable sequence of frames for a gesture class; the frame with the highest probability value is selected from the CTC network by max decoding. The entire network is trained end-to-end with the CTC loss for gesture recognition. We used the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms and achieves an accuracy of 86%.
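The CTC alignment idea can be made concrete with the standard forward (alpha) recursion. The minimal numpy sketch below scores one label sequence against per-frame class probabilities (in the paper these would be the LSTM's softmax outputs); the function name and toy example are illustrative, not the authors' code:

```python
import numpy as np

def ctc_prob(probs, label, blank=0):
    """Total probability of `label` under per-frame distributions `probs`
    (a T x K table), summing over all CTC alignments via the standard
    forward (alpha) dynamic-programming recursion.
    """
    ext = [blank]                      # extended label: blanks interleaved
    for c in label:
        ext += [c, blank]
    T, S = len(probs), len(ext)
    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0][blank]
    if S > 1:
        alpha[0, 1] = probs[0][ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]                        # stay
            if s >= 1:
                a += alpha[t - 1, s - 1]               # advance one
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]               # skip a blank
            alpha[t, s] = a * probs[t][ext[s]]
    return alpha[-1, -1] + (alpha[-1, -2] if S > 1 else 0.0)
```

As a sanity check: with two frames, each uniform over {blank, gesture}, the alignments (g, g), (g, blank), and (blank, g) all collapse to the single-gesture label, so its total probability is 3/4.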

