A novel autonomous learning framework to enhance sEMG-based hand gesture recognition using depth information

2021 ◽  
Vol 66 ◽  
pp. 102444 ◽  
Author(s):  
Salih Ertug Ovur ◽  
Xuanyi Zhou ◽  
Wen Qi ◽  
Longbin Zhang ◽  
Yingbai Hu ◽  
...  

2013 ◽  
Vol 765-767 ◽  
pp. 2826-2829 ◽  
Author(s):  
Song Lin ◽  
Rui Min Hu ◽  
Yu Lian Xiao ◽  
Li Yu Gong

In this paper, we propose a novel real-time 3D hand gesture recognition algorithm based on depth information. We segment the hand region from the depth image and convert it to a point cloud. Then, 3D moment invariant features are computed on the point cloud. Finally, a support vector machine (SVM) is employed to classify the hand shape into different categories. We collect a benchmark dataset using a Microsoft Kinect for Xbox and test the proposed algorithm on it. Experimental results demonstrate the robustness of the proposed algorithm.
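The pipeline above (hand segmentation → point cloud → moment invariants → SVM) can be sketched as follows. The specific moment orders, the scikit-learn classifier choice, and the synthetic point clouds are illustrative assumptions, not the paper's actual feature set or data:

```python
import numpy as np
from sklearn.svm import SVC

def central_moment(pts, p, q, r):
    # pts: (N, 3) point cloud already centred at its centroid
    return np.mean(pts[:, 0] ** p * pts[:, 1] ** q * pts[:, 2] ** r)

def moment_features(points):
    pts = points - points.mean(axis=0)                 # translation invariance
    pts = pts / np.sqrt((pts ** 2).sum(axis=1).mean()) # scale normalisation
    # second-order central moments as a simple stand-in feature set
    orders = [(2, 0, 0), (0, 2, 0), (0, 0, 2), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
    return np.array([central_moment(pts, *o) for o in orders])

rng = np.random.default_rng(0)
# hypothetical data: roughly spherical vs elongated hand-like blobs, 10 each
shapes = [np.array([1.0, 1.0, 1.0]), np.array([4.0, 1.0, 1.0])]
X = np.stack([moment_features(rng.normal(size=(500, 3)) * shapes[i % 2])
              for i in range(20)])
y = np.array([i % 2 for i in range(20)])
clf = SVC(kernel="rbf").fit(X, y)
```

Because the moments are computed after centring and scale normalisation, the features respond to the shape of the cloud rather than its position or size, which is the property the paper relies on for pose-independent classification.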


Sensors ◽  
2018 ◽  
Vol 19 (1) ◽  
pp. 59 ◽  
Author(s):  
Nico Zengeler ◽  
Thomas Kopinski ◽  
Uwe Handmann

In this review, we describe current machine learning approaches to hand gesture recognition with depth data from time-of-flight sensors. In particular, we summarise the achievements of a line of research at the Computational Neuroscience laboratory at the Ruhr West University of Applied Sciences. Relating our results to the work of others in this field, we confirm that Convolutional Neural Networks and Long Short-Term Memory networks yield the most reliable results. We investigated several sensor data fusion techniques in a deep learning framework and performed user studies to evaluate our system in practice. Over the course of our research, we gathered and published our data in a novel benchmark dataset (REHAP) containing over a million unique three-dimensional hand posture samples.


2021 ◽  
Author(s):  
Digang Sun ◽  
Ping Zhang ◽  
Mingxuan Chen ◽  
Jiaxin Chen

As an increasing number of robots are employed in manufacturing, a human-robot interaction method that can teach robots in a natural, accurate, and rapid manner is needed. In this paper, we propose a novel human-robot interface based on the combination of static hand gestures and hand poses. In our proposed interface, the pointing direction of the index finger and the orientation of the whole hand are extracted to indicate the moving direction and orientation of the robot in a fast-teaching mode. A set of hand gestures is designed according to their usage in humans' daily lives and recognized to control the position and orientation of the robot in a fine-teaching mode. We exploit the feature extraction ability of a hand pose estimation network via transfer learning and utilize attention mechanisms to improve the performance of the hand gesture recognition network. The inputs of both the hand pose estimation and hand gesture recognition networks are monocular RGB images, making our method independent of depth information and applicable to more scenarios. In regular shape reconstruction experiments on the UR3 robot, the mean error of the reconstructed shape is less than 1 mm, which demonstrates the effectiveness and efficiency of our method.
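The fast-teaching geometry described above can be sketched as follows, assuming 3D index-finger keypoints are already available from a pose estimator. The function names and the step size are hypothetical, not the authors' implementation:

```python
import numpy as np

def pointing_direction(index_mcp, index_tip):
    """Unit vector from the index-finger base joint (MCP) to the fingertip."""
    v = np.asarray(index_tip, float) - np.asarray(index_mcp, float)
    return v / np.linalg.norm(v)

def fast_teach_step(tcp_position, index_mcp, index_tip, step=0.01):
    """Nudge the robot tool-centre-point one small step along the pointed direction."""
    return np.asarray(tcp_position, float) + step * pointing_direction(index_mcp, index_tip)
```

In this sketch the fast-teaching mode reduces to repeatedly applying small position increments along the finger's direction; the paper's fine-teaching mode would instead map recognized gestures to discrete position and orientation commands.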


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Jun Xu ◽  
Xiong Zhang ◽  
Meng Zhou

In this work, we propose a vision-based hand gesture recognition system to provide a high-security and smart node in the application layer of the Internet of Things. The system can be installed in any terminal device with a monocular camera and interacts with users by recognizing pointing gestures in the captured images. The interaction information is determined by a straight line from the user's eye to the tip of the index finger, which achieves real-time and authentic data communication. The system mainly contains two modules. The first is an edge-repair-based hand subpart segmentation algorithm that combines pictorial structures and edge information to extract hand regions from complex backgrounds. The second locates the position the user focuses on with an adaptive pointing-gesture estimation method, which adjusts the offset between the target position and the calculated position caused by the lack of depth information.
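Once the 3D eye and fingertip positions are known, the eye-to-fingertip line described above amounts to a ray-plane intersection with the target surface (e.g. a screen). A minimal sketch of that geometry follows; the paper's adaptive offset correction is not reproduced here:

```python
import numpy as np

def pointed_target(eye, fingertip, plane_point, plane_normal):
    """Intersect the eye-to-fingertip ray with a target plane."""
    eye = np.asarray(eye, float)
    d = np.asarray(fingertip, float) - eye          # ray direction
    n = np.asarray(plane_normal, float)
    # parameter t where eye + t * d lies on the plane
    t = np.dot(np.asarray(plane_point, float) - eye, n) / np.dot(d, n)
    return eye + t * d
```

For example, with the eye at (1, 0, 0), the fingertip at (1, 0, 1), and a screen on the plane z = 2, the intersection lands at (1, 0, 2).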

