Method for Text Input with Google Cardboard: An Approach Using Smartwatches and Continuous Gesture Recognition

Author(s):  
Thamer Horbylon Nascimento ◽  
Fabrizzio Alphonsus A. M. Nunes Soares ◽  
Danilo Vieira Oliveira ◽  
Rogerio Lopes Salvini ◽  
Ronaldo Martins da Costa ◽  
...  
2013 ◽  
Vol 401-403 ◽  
pp. 1377-1380 ◽  
Author(s):  
Kun Wei Chen ◽  
Xing Guo ◽  
Jian Guo Wu

Vision-based gesture recognition is a key technique for achieving a new generation of human-computer interaction. Since few gesture-recognition-based text input systems have been developed, we build on existing gesture recognition techniques, using gestures corresponding to Chinese letters and numbers as input and a Microsoft Kinect to obtain depth images for hand gesture segmentation. First, the edge of the gesture is extracted with the Canny algorithm; then features are extracted based on wavelet moments, and finally the gesture letters are obtained. A text input system based on gesture recognition was thereby implemented. Experiments show that the system can recognize Chinese characters accurately and effectively.
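The pipeline described above (depth-based hand segmentation, edge extraction, moment features) can be sketched in pure NumPy. This is only an illustrative approximation: the function names are invented, a gradient-magnitude edge map stands in for Canny, and second-order central moments stand in for the paper's wavelet moments.

```python
import numpy as np

def segment_hand(depth, near=400, far=800):
    # Depth-threshold segmentation: keep pixels in an assumed
    # hand-depth band (values in millimetres).
    return ((depth >= near) & (depth <= far)).astype(np.float64)

def edge_map(mask):
    # Gradient-magnitude edges of the segmented mask
    # (a simple stand-in for the Canny algorithm).
    gy, gx = np.gradient(mask)
    return np.hypot(gx, gy) > 0

def moment_features(mask):
    # Central moments up to order 2 as a coarse shape descriptor
    # (a stand-in for the paper's wavelet moments).
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(3)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    return np.array([mu20, mu02, mu11])

# Synthetic depth frame: a square "hand" at 600 mm on a 1000 mm background.
depth = np.full((10, 10), 1000.0)
depth[3:7, 3:7] = 600.0
mask = segment_hand(depth)
edges = edge_map(mask)
feats = moment_features(mask)
```

A real system would feed such per-frame descriptors to a classifier trained on the letter gestures.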


2020 ◽  
Vol 55 (1) ◽  
Author(s):  
Husam Al-Behadili ◽  
Alaa H. Ahmed ◽  
Hasan M.A. Kadhim

The article describes a new text input method based on gesture recognition, which enables direct physical-to-digital text input. This allows hands-free, in-air writing without any need for keyboards, mice, or similar devices, using state-of-the-art deep learning methods and a single Kinect sensor. In contrast to existing methods, the authors achieved a high recognition rate without any wearable device and with only a single sensor. Furthermore, among several existing deep learning architectures, the authors determined that the one best suited to character-based gesture data is the DenseNet convolutional neural network: the training-loss curves show that DenseNet converges fastest while maintaining the highest accuracy. The proposed method improves the recognition rate from 96.6% (for existing algorithms) to 98.01% with the DenseNet architecture, despite using a single sensor instead of multiple cameras. The Kinect sensor not only reduces the number of cameras to one but also obviates the need for any additional hand-detection algorithm. These results improve the speed and efficiency of character-based gesture recognition. The proposed system can be used in applications that require accurate decision making, such as operating rooms.
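DenseNet's defining property, which the abstract credits for the fast convergence, is that each layer receives the concatenation of all earlier feature maps. A minimal NumPy sketch of that connectivity pattern (1×1-convolution layers only, random weights; not the authors' trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, n_layers=3, growth=4):
    # DenseNet connectivity: each layer sees the channel-wise
    # concatenation of ALL earlier outputs, then appends
    # `growth` new channels of its own.
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=-1)          # (H, W, C_total)
        w = rng.standard_normal((inp.shape[-1], growth)) * 0.1
        out = np.maximum(inp @ w, 0.0)                   # 1x1 conv + ReLU
        features.append(out)
    return np.concatenate(features, axis=-1)

x = rng.standard_normal((8, 8, 2))
y = dense_block(x)
# Channel count grows linearly: 2 input + 3 layers * 4 growth = 14.
```

The dense reuse of early features shortens gradient paths, which is the usual explanation for the fast-converging loss curve the authors report.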


2013 ◽  
Vol 765-767 ◽  
pp. 2653-2656
Author(s):  
Xing Guo ◽  
Zhen Yu Lu ◽  
Rong Bin Xu ◽  
Zheng Yi Liu ◽  
Jian Guo Wu

More and more applications are now based on gesture interaction, but most offer only simple gestures for mouse-style operation and provide no text input function. In this paper, manual-alphabet (fingerspelling) gestures are used as input gestures: a Kinect captures the depth image, the hand gesture is segmented, and SIFT features are extracted to recognize the manual-alphabet letters, which then feed a Pinyin input method to provide Chinese character input to the system.
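The recognition step reduces to matching a per-frame gesture descriptor against per-letter templates, with recognized letters accumulated into a Pinyin syllable for the input method. A toy nearest-neighbour sketch (invented 2-D descriptors standing in for SIFT feature vectors):

```python
import numpy as np

def classify_letter(desc, templates):
    # Nearest-neighbour matching of a gesture descriptor against
    # per-letter template descriptors (a stand-in for SIFT matching).
    letters = list(templates)
    dists = [np.linalg.norm(desc - templates[l]) for l in letters]
    return letters[int(np.argmin(dists))]

# Hypothetical 2-D templates for the fingerspelled letters "n" and "i".
templates = {"n": np.array([1.0, 0.0]), "i": np.array([0.0, 1.0])}

# Two noisy observed descriptors, classified and joined into Pinyin.
observed = [np.array([0.9, 0.1]), np.array([0.1, 0.8])]
pinyin = "".join(classify_letter(d, templates) for d in observed)
# pinyin == "ni", a syllable the Pinyin IME could convert to a Chinese character
```

Real SIFT descriptors are 128-dimensional and matched per keypoint; the structure of the letter-to-Pinyin pipeline is the same.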


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1582
Author(s):  
Yahui Wang ◽  
Yueyang Wang ◽  
Jingzhou Chen ◽  
Yincheng Wang ◽  
Jie Yang ◽  
...  

Although the interaction technology for virtual reality (VR) systems has evolved significantly over the past years, text input efficiency in the virtual environment is still an ongoing problem. We deployed a word-gesture text entry technology based on gesture recognition in the virtual environment. This study aimed to investigate the performance of word-gesture text entry with different input postures and levels of VR experience in the virtual environment. The study revealed that VR experience (how long or how often participants had used VR) had little effect on input performance. The hand-up posture had better input performance when using word-gesture text entry in a virtual environment. In addition, the study found that the perceived exertion required to complete text input with word-gesture text entry was relatively high. Furthermore, the typing accuracy and perceived usability for the hand-up posture were significantly higher than those for the hand-down posture, and the hand-up posture also imposed a lower task workload. This paper supports the conclusion that word-gesture text entry with the hand-up posture has greater application potential than with the hand-down posture.
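Word-gesture (swipe) text entry decodes a continuous trace over a keyboard by comparing it to the ideal path through each candidate word's key centres. A minimal shape-matching sketch in the style of SHARK2-type decoders (the toy key layout and lexicon are assumptions, not the study's system):

```python
import numpy as np

# Toy key-centre layout (assumed coordinates, not a real keyboard).
KEYS = {"c": (0.0, 0.0), "a": (1.0, 0.0), "t": (2.0, 0.0), "r": (2.0, 1.0)}

def word_path(word, n=16):
    # Ideal swipe path for a word: the polyline through its key
    # centres, resampled to n evenly spaced points.
    pts = np.array([KEYS[ch] for ch in word])
    t = np.linspace(0.0, 1.0, n)
    seg = np.linspace(0.0, 1.0, len(pts))
    xs = np.interp(t, seg, pts[:, 0])
    ys = np.interp(t, seg, pts[:, 1])
    return np.column_stack([xs, ys])

def decode(trace, lexicon, n=16):
    # Pick the lexicon word whose ideal path is closest (Euclidean
    # distance over resampled points) to the observed swipe trace.
    return min(lexicon, key=lambda w: np.linalg.norm(trace - word_path(w, n)))

trace = word_path("cat") + 0.05          # noisy swipe near the "cat" path
best = decode(trace, ["cat", "car"])
```

Production decoders also weight start/end positions and use a language model, but the shape-matching core is as above.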


2016 ◽  
Vol 3 (2) ◽  
pp. 1
Author(s):  
Seong Jeong ◽  
HongJun Ju ◽  
Hyo-Rim Choi ◽  
TaeYong Kim

2020 ◽  
Vol 79 (1) ◽  
pp. 47-57
Author(s):  
O. G. Viunytskyi ◽  
A. V. Totsky ◽  
Karen O. Egiazarian
