Low-cost assistive device for hand gesture recognition using sEMG (Erratum)

Author(s):  
Dávid Cymbalák ◽  
Slavomír Kardoš ◽  
Peter Feciľak ◽  
František Jakab ◽  
Andrej Mak ◽  
...  

In recent years, deep learning has been applied extensively to automatically extract and interpret characteristic features from large volumes of data. Human Action Recognition (HAR) has been attempted with a variety of techniques, such as wearable and mobile devices, but these can cause unnecessary discomfort, especially to the elderly and children. Since it is vital to monitor the movements of the elderly and children in unattended scenarios, this work focuses on HAR. A smart human action recognition method is proposed that automatically identifies human activities from skeletal joint motions and combines their competencies. The system can also notify relatives about a person's status. The method is low-cost and achieves high accuracy, and thus provides a way to protect senior citizens and children from mishaps and health issues. Hand gesture recognition is also discussed alongside human activity recognition using deep learning.
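As a rough illustration of the skeletal-joint-motion idea described in this abstract, the sketch below computes a simple per-frame motion descriptor from a sequence of 3D joint positions. The feature (mean joint speed) is a hypothetical stand-in; the paper's exact descriptor and network are not specified here.

```python
import numpy as np

def joint_motion_features(skeleton_seq):
    """Frame-to-frame joint displacements as a simple motion descriptor.

    skeleton_seq: array of shape (T, J, 3) -- T frames, J joints, xyz coords.
    Returns per-frame mean joint speed, shape (T-1,).
    (Hypothetical feature for illustration; not the paper's descriptor.)
    """
    diffs = np.diff(skeleton_seq, axis=0)   # (T-1, J, 3) displacement per joint
    speeds = np.linalg.norm(diffs, axis=2)  # (T-1, J) Euclidean speed per joint
    return speeds.mean(axis=1)              # (T-1,) average over joints

# Toy example: 3 frames, 2 joints, every joint moves 1 unit per frame.
seq = np.array([[[0, 0, 0], [1, 0, 0]],
                [[1, 0, 0], [2, 0, 0]],
                [[2, 0, 0], [3, 0, 0]]], dtype=float)
print(joint_motion_features(seq))  # [1. 1.]
```

Features like these could then be fed to a classifier; the abstract leaves the model architecture unspecified.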


Author(s):  
DHARANI MAZUMDAR ◽  
ANJAN KUMAR TALUKDAR ◽  
Kandarpa Kumar Sarma

Hand gesture recognition systems can be used for human-computer interaction (HCI). Proper segmentation of the hand from the background and other body parts in video is the primary requirement for designing a hand-gesture-based application. The video frames can be captured by a low-cost webcam for use in a vision-based gesture recognition technique. This paper discusses continuous hand gesture recognition. Its aim is to report a robust and efficient hand segmentation algorithm in which a new method, wearing a glove on the hand, is utilized. A further idea, called "Finger-Pen", is then developed by segmenting only one finger of the hand for reliable tracking. In this technique only a fingertip is segmented instead of the full hand, so the rest of the hand can move freely during tracking. Problems such as skin colour detection, complexity from large numbers of people in front of the camera, complex background removal, and variable lighting conditions are found to be handled efficiently by the system. Noise present in the segmented image due to a dynamic background can be removed with this adaptive technique, which is found to be effective for the application conceived.
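A minimal sketch of the glove-based segmentation step: threshold each pixel against a colour range so that only glove-coloured pixels survive. The HSV bounds below are hypothetical values for a blue glove; the paper does not state the glove colour or thresholds, and a real pipeline would typically use a library such as OpenCV on webcam frames.

```python
import numpy as np

def segment_glove(frame_hsv, lo=(100, 120, 60), hi=(130, 255, 255)):
    """Binary mask of pixels whose HSV values fall inside a glove colour range.

    frame_hsv: array of shape (H, W, 3) holding HSV channel values.
    lo, hi: inclusive per-channel bounds (assumed values for a blue glove).
    """
    lo = np.asarray(lo)
    hi = np.asarray(hi)
    return np.all((frame_hsv >= lo) & (frame_hsv <= hi), axis=2)

# 2x2 toy frame: one "glove" pixel (top-left), three background pixels.
frame = np.array([[[115, 200, 200], [0, 0, 0]],
                  [[50, 50, 50],    [115, 10, 10]]])
print(segment_glove(frame).astype(int))  # [[1 0]
                                         #  [0 0]]
```

For the "Finger-Pen" variant, the same thresholding would be applied to a marker on a single fingertip rather than the whole glove.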


2020 ◽  
Vol 17 (4) ◽  
pp. 1889-1893
Author(s):  
T. Archana ◽  
Srigitha S. Nath ◽  
S. Praveenkumar

The objective of this paper has been the development of a prototype articulated robotic arm and the implementation of a control strategy for gesture recognition through a Leap Motion sensor, by means of the natural movement of the forearm and hand. The series of advances in control techniques has allowed robotics to be introduced as an educational complement in compulsory basic teaching. Developing and controlling robotic elements, locally or remotely, has always proven to be a clear source of additional motivation. The prototype developed has exceeded the initial expectations, and at low cost.
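One simple way such a control strategy can work is to map a hand angle reported by the sensor linearly onto a servo command. The mapping below is purely illustrative (the paper's actual control law, joint layout, and angle ranges are not given here).

```python
def roll_to_servo(roll_deg, servo_min=0.0, servo_max=180.0):
    """Map a hand roll angle (-90..+90 degrees, as a Leap-style sensor might
    report) linearly onto a servo command range.
    Hypothetical mapping for illustration only."""
    roll_deg = max(-90.0, min(90.0, roll_deg))  # clamp to the assumed sensor range
    t = (roll_deg + 90.0) / 180.0               # normalise to 0..1
    return servo_min + t * (servo_max - servo_min)

print(roll_to_servo(0))    # 90.0 -- neutral hand maps to mid-range servo
print(roll_to_servo(-90))  # 0.0
print(roll_to_servo(90))   # 180.0
```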


2019 ◽  
Vol E102.B (2) ◽  
pp. 233-240
Author(s):  
Shengchang LAN ◽  
Zonglong HE ◽  
Weichu CHEN ◽  
Kai YAO

2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within gesture classes, low resolution, and the fact that gestures are performed with the fingers. Because of these challenges, many researchers have focused on this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical-flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. We compute optical flow from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of the visual similarity of the gesture in the unsegmented input stream. The CTC network finds the most probable frame sequence for a gesture class; the frame with the highest probability value is selected from the CTC network by max decoding. The entire network is trained end-to-end with the CTC loss for recognition of the gesture. We use the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms, achieving an accuracy of 86%.
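The max-decoding step mentioned in this abstract can be sketched in a few lines: pick the most probable label per frame, collapse consecutive repeats, and drop blanks. This is the standard greedy CTC decode, shown here on toy probabilities rather than the authors' network outputs.

```python
def ctc_max_decode(frame_probs, blank=0):
    """Greedy (max) CTC decoding: take the argmax label per frame,
    collapse consecutive repeats, then remove blank labels.

    frame_probs: list of per-frame probability rows over the label set
    (illustrative input; in the paper these come from the LSTM per video frame).
    """
    best = [max(range(len(row)), key=row.__getitem__) for row in frame_probs]
    decoded, prev = [], None
    for lab in best:
        if lab != prev and lab != blank:
            decoded.append(lab)
        prev = lab
    return decoded

# 5 frames over labels {0: blank, 1, 2}: argmax path 1,1,blank,2,2 -> [1, 2]
probs = [[0.1, 0.8, 0.1],
         [0.2, 0.7, 0.1],
         [0.9, 0.05, 0.05],
         [0.1, 0.2, 0.7],
         [0.1, 0.1, 0.8]]
print(ctc_max_decode(probs))  # [1, 2]
```

Training with the CTC loss sums over all such alignments via dynamic programming; max decoding keeps only the single most probable path at inference time.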

