Hand Gesture Recognition in Automotive Human–Machine Interaction Using Depth Cameras

Sensors ◽  
2018 ◽  
Vol 19 (1) ◽  
pp. 59 ◽  
Author(s):  
Nico Zengeler ◽  
Thomas Kopinski ◽  
Uwe Handmann

In this review, we describe current Machine Learning approaches to hand gesture recognition with depth data from time-of-flight sensors. In particular, we summarise the achievements of a line of research at the Computational Neuroscience laboratory at the Ruhr West University of Applied Sciences. Relating our results to the work of others in this field, we confirm that Convolutional Neural Networks and Long Short-Term Memory yield the most reliable results. We investigated several sensor data fusion techniques in a deep learning framework and performed user studies to evaluate our system in practice. Over the course of this research, we gathered and published our data in a novel benchmark dataset (REHAP), containing over a million unique three-dimensional hand posture samples.
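One common family of sensor data fusion techniques the review refers to is late fusion of per-sensor classifier outputs. The sketch below is only an illustration of that idea under assumed details (the averaging rule, equal default weights, and the function names are assumptions, not the schemes evaluated in the cited work):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def late_fusion(per_sensor_logits, weights=None):
    """Fuse class scores from several sensors by a weighted average
    of their softmax outputs, one common late-fusion baseline."""
    probs = np.array([softmax(z) for z in per_sensor_logits])
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    fused = np.average(probs, axis=0, weights=weights)
    return int(np.argmax(fused)), fused
```

A confident sensor then dominates the fused decision even when a second, less certain sensor mildly disagrees.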

2021 ◽  
Vol 6 (22) ◽  
pp. 25-35 ◽  
Author(s):  
A F M Saifuddin Saif ◽  
Zainal Rasyid Mahayuddin

Integration of technology for the Fourth Industrial Revolution (IR 4.0) has increased the need for efficient methods of developing dynamic human–computer interfaces and virtual environments. In this context, hand gesture recognition can play a vital role as a natural mode of interactive human–machine interaction. Variable brightness, complex backgrounds, color constraints, dependency on hand shape, and rotation and scale variance are the challenging issues that limit the robustness of existing methods, as outlined in previous research. This research presents an efficient method for hand gesture recognition based on constructing a robust feature vector. The proposed method operates in two phases. In the first phase, the feature vector is constructed by selecting interest points at distinctive locations using a blob detector based on a Hessian matrix approximation. After the hand area is detected from the feature vector, edge detection is applied to the isolated hand, followed by edge orientation computation. Templates are then generated using one- and two-dimensional mapping to compare candidate and prototype images with an adaptive threshold. Extensive experimentation achieved a recognition accuracy of 98.33%, higher than previous research results. Experimental results reveal the effectiveness of the proposed methodology in real time.
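The interest-point step relies on a determinant-of-Hessian blob response, the quantity that SURF-style detectors approximate with box filters. A minimal numpy sketch of that response (using Gaussian smoothing and finite differences for clarity, rather than the paper's box-filter approximation):

```python
import numpy as np

def hessian_blob_response(img, sigma=2.0):
    """Determinant-of-Hessian blob response: high positive values
    mark blob-like structures at scale ~sigma."""
    # separable Gaussian smoothing
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    smooth = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
    smooth = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, smooth)
    # second-order derivatives via finite differences
    Lxx = np.gradient(np.gradient(smooth, axis=1), axis=1)
    Lyy = np.gradient(np.gradient(smooth, axis=0), axis=0)
    Lxy = np.gradient(np.gradient(smooth, axis=0), axis=1)
    return Lxx * Lyy - Lxy**2
```

Interest points would then be taken as local maxima of this response above a threshold.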


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3226 ◽  
Author(s):  
Radu Mirsu ◽  
Georgiana Simion ◽  
Catalin Daniel Caleanu ◽  
Ioana Monica Pop-Calimanu

Gesture recognition is an intensively researched area for several reasons. One of the most important is the technology's numerous applications in various domains (e.g., robotics, games, medicine, and automotive). Additionally, the introduction of three-dimensional (3D) image acquisition techniques (e.g., stereovision, projected light, and time-of-flight) overcomes the limitations of traditional two-dimensional (2D) approaches. Combined with the wider availability of 3D sensors (e.g., Microsoft Kinect, Intel RealSense, photonic mixer device (PMD), and CamCube), these developments have sparked recent interest in the domain. Moreover, in many computer vision tasks, traditional statistical approaches have been outperformed by deep neural network-based solutions. In view of these considerations, we propose a deep neural network solution employing the PointNet architecture for hand gesture recognition using depth data produced by a time-of-flight (ToF) sensor. We created a custom hand gesture dataset, then designed a multistage hand segmentation comprising filtering, clustering, locating the hand in the volume of interest, and hand–forearm segmentation. For comparison purposes, two equivalent datasets were tested: a 3D point cloud dataset and a 2D image dataset, both obtained from the same stream. Beyond the inherent advantages of 3D technology, the 3D method using PointNet proved to outperform the 2D method in all circumstances, even when the 2D method employs a deep neural network.
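What makes PointNet suitable for unordered depth point clouds is its symmetric aggregation: the same MLP is applied to every 3D point, and a max pool over points makes the global descriptor invariant to point order. A toy numpy sketch of that core idea (the single random layer in `TinyPointNet` is a hypothetical stand-in, not the full architecture used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyPointNet:
    """Core PointNet idea: shared per-point transform + max pool,
    so the global feature is permutation-invariant."""
    def __init__(self, feat_dim=16):
        self.W1 = rng.normal(size=(3, feat_dim))
        self.b1 = np.zeros(feat_dim)

    def global_feature(self, cloud):
        # cloud: (n_points, 3) -> per-point features, then max pool
        h = np.maximum(cloud @ self.W1 + self.b1, 0.0)  # shared layer + ReLU
        return h.max(axis=0)                            # (feat_dim,)
```

Shuffling or reversing the point order leaves the global feature unchanged, which is exactly why the raw 3D point cloud can be fed in without voxelization or projection.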


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3827 ◽  
Author(s):  
Minwoo Kim ◽  
Jaechan Cho ◽  
Seongjoo Lee ◽  
Yunho Jung

We propose an efficient hand gesture recognition (HGR) algorithm, which can cope with time-dependent data from an inertial measurement unit (IMU) sensor and support real-time learning for various human-machine interface (HMI) applications. Although the data extracted from IMU sensors are time-dependent, most existing HGR algorithms do not consider this characteristic, which results in the degradation of recognition performance. Because the dynamic time warping (DTW) technique considers the time-dependent characteristic of IMU sensor data, the recognition performance of DTW-based algorithms is better than that of others. However, the DTW technique requires a very complex learning algorithm, which makes it difficult to support real-time learning. To solve this issue, the proposed HGR algorithm is based on a restricted column energy (RCE) neural network, which has a very simple learning scheme in which neurons are activated when necessary. By replacing the metric calculation of the RCE neural network with DTW distance, the proposed algorithm exhibits superior recognition performance for time-dependent sensor data while supporting real-time learning. Our verification results on a field-programmable gate array (FPGA)-based test platform show that the proposed HGR algorithm can achieve a recognition accuracy of 98.6% and supports real-time learning and recognition at an operating frequency of 150 MHz.
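The core mechanism described above, an RCE network whose prototype neurons cover a region of input space and whose distance metric is replaced by DTW, can be sketched as follows. This is a simplified illustration under assumptions: the fixed influence radius and class names are invented, and the radius-shrinking step on class conflicts is omitted:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

class RceDtwClassifier:
    """RCE-style learning: commit a new prototype neuron only when no
    existing neuron of the correct class covers the sample, which keeps
    training incremental and cheap compared with batch DTW learning."""
    def __init__(self, initial_radius=5.0):
        self.initial_radius = initial_radius
        self.prototypes = []  # (sequence, label, radius)

    def fit_sample(self, seq, label):
        for proto, plabel, radius in self.prototypes:
            if plabel == label and dtw_distance(seq, proto) <= radius:
                return  # already covered: no new neuron needed
        self.prototypes.append((seq, label, self.initial_radius))

    def predict(self, seq):
        dists = [(dtw_distance(seq, p), lab) for p, lab, _ in self.prototypes]
        return min(dists)[1]  # nearest prototype by DTW distance
```

Because a training sample either activates an existing neuron or commits one new neuron, the per-sample update cost stays bounded, which is what makes the scheme amenable to real-time learning on an FPGA.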


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 564 ◽  
Author(s):  
Shahzad Ahmed ◽  
Sung Ho Cho

The growing integration of technology into daily life has increased the need for more convenient methods of human–computer interaction (HCI). Given that existing HCI approaches exhibit various limitations, hand gesture recognition-based HCI may serve as a more natural mode of man–machine interaction in many situations. Inspired by an inception module-based deep-learning network (GoogLeNet), this paper presents a novel hand gesture recognition technique for impulse-radio ultra-wideband (IR-UWB) radars that achieves higher gesture recognition accuracy. First, a methodology for representing radar signals as three-dimensional image patterns is presented; then, the inception module-based variant of GoogLeNet is used to analyze the patterns within the images to recognize different hand gestures. The proposed framework was evaluated on eight different hand gestures with a promising classification accuracy of 95%. To verify the robustness of the proposed algorithm, multiple human subjects were involved in data acquisition.
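Turning raw IR-UWB frames into image-like patterns typically starts by stacking slow-time frames over fast-time (range) samples and removing static clutter. The sketch below shows one standard running-average clutter filter; it is an assumption about this preprocessing stage, not the paper's exact pipeline, and `clutter_alpha` is an illustrative parameter:

```python
import numpy as np

def frames_to_image(frames, clutter_alpha=0.9):
    """Stack IR-UWB frames (slow time x fast time) into an image and
    suppress static reflections with a running-average background."""
    frames = np.asarray(frames, dtype=float)
    background = np.zeros(frames.shape[1])
    image = np.empty_like(frames)
    for t, frame in enumerate(frames):
        image[t] = frame - background  # keep only moving reflectors
        background = clutter_alpha * background + (1 - clutter_alpha) * frame
    return image  # rows: slow time, cols: range bins
```

A static scene decays toward zero in the output, so a moving hand stands out as the dominant pattern that the downstream network then classifies.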


2014 ◽  
Vol 14 (6) ◽  
pp. 1898-1903 ◽  
Author(s):  
Kui Liu ◽  
Chen Chen ◽  
Roozbeh Jafari ◽  
Nasser Kehtarnavaz
