A COLORED FINGER TIP-BASED TRACKING METHOD FOR CONTINUOUS HAND GESTURE RECOGNITION

Author(s):  
DHARANI MAZUMDAR ◽  
ANJAN KUMAR TALUKDAR ◽  
Kandarpa Kumar Sarma

A hand gesture recognition system can be used for human-computer interaction (HCI). Proper segmentation of the hand from the background and from other body parts in the video is the primary requirement for designing a hand-gesture-based application. The video frames can be captured by a low-cost webcam for use in a vision-based gesture recognition technique. This paper discusses continuous hand gesture recognition. Its aim is to report a robust and efficient hand segmentation algorithm in which a new method, wearing a coloured glove on the hand, is utilized. Building on this, a new idea called "Finger-Pen" is developed by segmenting only one finger of the hand for proper tracking. In this technique only a fingertip is segmented instead of the full hand, so the rest of the hand can move freely during tracking. Problems such as skin colour detection, complexity caused by large numbers of people in front of the camera, complex background removal, and variable lighting conditions are found to be handled efficiently by the system. Noise present in the segmented image due to a dynamic background can also be removed by this adaptive technique, which is found to be effective for the application conceived.
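The colour-glove idea described above can be sketched as follows. This is not the authors' implementation; it is a minimal illustration assuming the glove tip occupies a known HSV colour band, with the centroid of the matching pixels taken as the tracked fingertip position. The colour bounds are placeholders.

```python
import numpy as np

# Hypothetical sketch of colour-based fingertip tracking: threshold an
# HSV frame against an assumed glove colour range, then return the
# centroid of the matching pixels as the fingertip position.
def track_fingertip(hsv_frame, lower=(100, 120, 80), upper=(130, 255, 255)):
    lower, upper = np.array(lower), np.array(upper)
    # per-pixel test: all three HSV channels inside the glove colour band
    mask = np.all((hsv_frame >= lower) & (hsv_frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # fingertip not visible in this frame
    return int(xs.mean()), int(ys.mean())  # (x, y) centroid of the tip blob
```

Running this per frame yields a point trajectory that a recogniser can consume, which is the essence of letting the rest of the hand move freely during tracking.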

Author(s):  
Ananya Choudhury ◽  
Anjan Kumar Talukdar ◽  
Kandarpa Kumar Sarma

In the present scenario, vision-based hand gesture recognition has become a rapidly emerging research area for human-computer interaction. Such recognition systems are deployed as a replacement for commonly used human-machine interface devices such as the keyboard, mouse, and joystick in real-world situations. The major challenges faced by a vision-based hand gesture recognition system include recognition against complex backgrounds, dynamic backgrounds, the presence of multiple gestures in the background, variable lighting conditions, and different viewpoints. In the context of sign language recognition, which is a highly demanding application of hand gesture recognition, coarticulation detection is a challenging task. The main objective of this chapter is to provide a general overview of vision-based hand gesture recognition systems and to bring to light some of the research work that has been done in this field.


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3226
Author(s):  
Radu Mirsu ◽  
Georgiana Simion ◽  
Catalin Daniel Caleanu ◽  
Ioana Monica Pop-Calimanu

Gesture recognition is an intensively researched area for several reasons. One of the most important is the technology's numerous applications in various domains (e.g., robotics, games, medicine, automotive). Additionally, the introduction of three-dimensional (3D) image acquisition techniques (e.g., stereovision, projected light, time-of-flight) overcomes the limitations of traditional two-dimensional (2D) approaches. Combined with the wider availability of 3D sensors (e.g., Microsoft Kinect, Intel RealSense, photonic mixer device (PMD), CamCube), this has sparked recent interest in the domain. Moreover, in many computer vision tasks, traditional statistical approaches have been outperformed by deep neural network-based solutions. In view of these considerations, we propose a deep neural network solution employing the PointNet architecture for hand gesture recognition using depth data produced by a time-of-flight (ToF) sensor. We created a custom hand gesture dataset and propose a multistage hand segmentation comprising filtering, clustering, locating the hand in a volume of interest, and hand-forearm segmentation. For comparison purposes, two equivalent datasets were tested: a 3D point cloud dataset and a 2D image dataset, both obtained from the same stream. Beyond the inherent advantages of 3D technology, the accuracy of the 3D method using PointNet is shown to outperform the 2D method in all circumstances, even when the 2D method employs a deep neural network.
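The volume-of-interest step of such a pipeline can be illustrated with a small sketch. This is not the paper's method: it simply assumes the hand is the nearest object in the ToF point cloud, crops the cloud to an assumed volume of interest, and keeps a thin depth band behind the nearest surface. All thresholds are illustrative.

```python
import numpy as np

# Illustrative hand segmentation on an (N, 3) point cloud in metres:
# 1) keep points inside an assumed volume of interest,
# 2) keep the depth band nearest the sensor (assumed to be the hand).
def segment_hand(points, voi_min=(-0.5, -0.5, 0.2), voi_max=(0.5, 0.5, 1.0),
                 depth_band=0.15):
    points = np.asarray(points, dtype=float)
    lo, hi = np.array(voi_min), np.array(voi_max)
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    voi = points[inside]
    if voi.shape[0] == 0:
        return voi
    z_near = voi[:, 2].min()  # nearest depth inside the volume
    # keep only points within a thin band behind the nearest surface,
    # cutting off the forearm and background at larger depths
    return voi[voi[:, 2] <= z_near + depth_band]
```

A real system would follow this with clustering and hand-forearm separation before feeding the surviving points to PointNet.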


2020 ◽  
Vol 17 (4) ◽  
pp. 1764-1769
Author(s):  
S. Gobhinath ◽  
T. Vignesh ◽  
R. Pavankumar ◽  
R. Kishore ◽  
K. S. Koushik

This paper presents an overview of several segmentation techniques for hand gesture recognition. Hand gesture recognition has evolved tremendously in recent years because of its ability to support interaction with machines. Mankind tries to incorporate human gestures into modern technologies such as touch-screen interaction, virtual reality gaming, and sign language prediction. This research is aimed at hand gesture recognition for sign language interpretation as a human-computer interaction application. Sign language transmits sign patterns that convey meaning through hand shapes, orientation, and movements, allowing users to express their thoughts fluently to other people; it is normally used by physically challenged people who cannot speak or hear. Automatic sign language recognition requires robust and accurate techniques for identifying hand signs, or a sequence of produced gestures, to help interpret their correct meaning. A hand segmentation algorithm combines different hand detection schemes with the required morphological processing. There are many methods that can be used to obtain the respective results, each with its own advantages.
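The morphological processing step mentioned above can be sketched briefly. This is a generic illustration, not any specific scheme from the survey: morphological opening (erosion followed by dilation) with a 3x3 structuring element removes speckle noise from a binary hand mask while preserving the hand blob. Pure-NumPy shifts are used so the sketch needs no extra dependencies.

```python
import numpy as np

def _shift_stack(mask):
    # stack the nine 3x3-neighbourhood shifts of a boolean mask
    padded = np.pad(mask, 1)
    h, w = mask.shape
    return np.stack([padded[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)])

def opening(mask):
    eroded = _shift_stack(mask).all(axis=0)   # pixel survives erosion only
    return _shift_stack(eroded).any(axis=0)   # if its whole 3x3 patch is set;
                                              # dilation then restores shape
```

In practice libraries such as OpenCV or SciPy provide the same operation directly; the point here is only to show what "required morphological processing" does to a noisy segmentation mask.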


Over recent times, deep learning has been challenged extensively to automatically read and interpret characteristic features from large volumes of data. Human Action Recognition (HAR) has been experimented with using a variety of techniques, such as wearable and mobile devices, but these can cause unnecessary discomfort, especially to the elderly and children. Since it is vital to monitor the movements of the elderly and children in unattended scenarios, HAR is the focus here. A smart human action recognition method is presented that automatically identifies human activities from skeletal joint motions and combines their competencies. The system can also inform relatives about the status of the monitored person. It is a low-cost method with high accuracy, and thus provides a way to protect senior citizens and children from mishaps and health issues. Hand gesture recognition is also discussed alongside human activity recognition using deep learning.
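One simple way to turn skeletal joint streams into activity features, in the spirit of the abstract above, is to summarise per-joint motion energy over a window of frames. This is a hedged sketch of the general idea, not the authors' feature set; the joint layout and windowing are assumptions.

```python
import numpy as np

# Per-joint motion energy over a window of skeleton frames: sum of
# frame-to-frame displacement magnitudes, one value per joint. Active
# joints (e.g., a waving hand) accumulate more energy than static ones.
def motion_energy(skeleton_seq):
    """skeleton_seq: (frames, joints, 3) array of joint coordinates."""
    seq = np.asarray(skeleton_seq, dtype=float)
    disp = np.diff(seq, axis=0)                      # frame-to-frame motion
    return np.linalg.norm(disp, axis=2).sum(axis=0)  # energy per joint
```

Such per-joint summaries can serve as a lightweight baseline input, or as a sanity check, before training a deep network on the raw skeletal sequences.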


2020 ◽  
Vol 17 (4) ◽  
pp. 1889-1893
Author(s):  
T. Archana ◽  
Srigitha S. Nath ◽  
S. Praveenkumar

The objective of this paper has been the development of a prototype articulated robotic arm and the implementation of a control strategy for gesture recognition through a Leap Motion sensor, using the natural movement of the forearm and hand. The series of advances in control techniques has allowed robotics to be introduced as an educational complement in compulsory basic teaching. Developing and controlling robotic elements, locally or remotely, has always proven to be a clear source of additional motivation. The prototype developed has exceeded the initial expectations, and at low cost.


Author(s):  
Dávid Cymbalák ◽  
Slavomír Kardoš ◽  
Peter Feciľak ◽  
František Jakab ◽  
Andrej Mak ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6368
Author(s):  
Lianqing Zheng ◽  
Jie Bai ◽  
Xichan Zhu ◽  
Libo Huang ◽  
Chewu Shan ◽  
...  

Hand gesture recognition technology plays an important role in human-computer interaction and in-vehicle entertainment. Under in-vehicle conditions, it is a great challenge to design gesture recognition systems due to variable driving conditions, complex backgrounds, and diversified gestures. In this paper, we propose a gesture recognition system based on frequency-modulated continuous-wave (FMCW) radar and a transformer for the in-vehicle environment. First, the original range-Doppler maps (RDMs), range-azimuth maps (RAMs), and range-elevation maps (REMs) of the time sequence of each gesture are obtained by radar signal processing. We then preprocess the obtained data frames by region of interest (ROI) extraction, a vibration removal algorithm, a background removal algorithm, and standardization. We propose a transformer-based radar gesture recognition network named RGTNet, which fully extracts and fuses the spatial-temporal information of the radar feature maps to classify the various gestures. The experimental results show that our method can successfully complete the eight-gesture classification task in the in-vehicle environment, with a recognition accuracy of 97.56%.
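Two of the preprocessing steps named in that pipeline can be illustrated on a stack of range-Doppler maps. This is a generic sketch, not RGTNet's exact algorithms: static-background removal by subtracting the per-cell temporal mean, followed by per-frame standardisation to zero mean and unit variance.

```python
import numpy as np

# Illustrative RDM preprocessing: remove the static background common to
# all frames, then standardise each frame independently so the network
# sees inputs on a comparable scale regardless of radar gain.
def preprocess_rdm(frames, eps=1e-8):
    """frames: (time, range_bins, doppler_bins) array of RDM magnitudes."""
    frames = np.asarray(frames, dtype=float)
    frames = frames - frames.mean(axis=0, keepdims=True)  # background removal
    mu = frames.mean(axis=(1, 2), keepdims=True)
    sigma = frames.std(axis=(1, 2), keepdims=True)
    return (frames - mu) / (sigma + eps)                  # per-frame z-score
```

Subtracting the temporal mean suppresses stationary reflectors (seats, dashboard) so that only the moving hand contributes strongly to the maps fed to the classifier.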

