Air Gesture Recognition Using WLAN Physical Layer Information

2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Xiaochao Dang ◽  
Yang Liu ◽  
Zhanjun Hao ◽  
Xuhao Tang ◽  
Chenguang Shao

In recent years, researchers have witnessed the important role of air gesture recognition in human-computer interaction (HCI), smart homes, and virtual reality (VR). Traditional air gesture recognition methods mainly depend on external equipment (such as special sensors and cameras), which is costly and limited in its application scenarios. In this paper, we utilize channel state information (CSI) derived from the WLAN physical layer to build a Wi-Fi-based air gesture recognition system, named WiNum, which avoids the privacy and energy-consumption problems of approaches using wearable sensors and depth cameras. In the WiNum recognition process, the collected raw CSI data are first screened to retain the portions that reflect gesture motion; the screened data are then preprocessed by noise reduction and linear transformation. After preprocessing, joint amplitude and phase information is extracted to match and recognize different air gestures using the S-DTW algorithm, which combines the properties of dynamic time warping (DTW) and the support vector machine (SVM). Comprehensive experiments in two different indoor scenes demonstrate that WiNum achieves high recognition accuracy for in-air number gestures, with the average recognition accuracy of each motion exceeding 93%, thereby achieving effective recognition of air gestures.
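The abstract does not give the S-DTW algorithm itself, but the DTW half of the matching step can be illustrated with a minimal sketch: classic dynamic time warping to compare a measured CSI amplitude sequence against gesture templates, with nearest-template classification standing in for the SVM stage. All names here (`dtw_distance`, `classify`, the template labels) are illustrative, not taken from the paper.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(sample, templates):
    """Nearest-template classification: label whose template is DTW-closest."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))
```

Because DTW warps the time axis, a gesture performed slightly faster or slower than its template still aligns with low cost, which is why DTW-family matchers suit free-form air gestures.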

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4757
Author(s):  
Dehao Jiang ◽  
Mingqi Li ◽  
Chunling Xu

In recent years, a series of research experiments has been conducted on WiFi-based gesture recognition. However, current recognition systems still face the challenges of small samples and environmental dependence. To deal with the performance degradation caused by these factors, we propose a WiFi-based gesture recognition system, WiGAN, which uses a Generative Adversarial Network (GAN) to extract and generate gesture features. With the GAN, WiGAN expands the data capacity to reduce time cost and increase sample diversity. The proposed system extracts and fuses multiple convolutional-layer feature maps as gesture features before gesture recognition. After fusing features, a Support Vector Machine (SVM) is exploited for human activity classification because of its accuracy and convenience. The key insight of WiGAN is to generate samples and merge multi-grained feature maps in our designed GAN, which not only augments the data but also allows the neural network to select features of different granularity for gesture recognition. In experiments conducted on two existing datasets, the average recognition accuracy of WiGAN reaches 98% and 95.6%, respectively, outperforming existing systems. Moreover, the recognition accuracy under different experimental environments and with different users shows the robustness of WiGAN.
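The multi-grained fusion step, merging feature maps from several convolutional layers into one vector before the SVM, can be sketched without any deep-learning framework: global-average-pool each layer's channels and concatenate the results. This is a minimal illustration of the fusion idea only; the function names and the pooling choice are assumptions, not WiGAN's actual architecture.

```python
def global_avg_pool(fmap):
    """fmap: list of 2-D channel grids -> one mean activation per channel."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in fmap]

def fuse_features(layer_maps):
    """Concatenate pooled features from several conv layers (coarse + fine grains)."""
    fused = []
    for fmap in layer_maps:
        fused.extend(global_avg_pool(fmap))
    return fused
```

Concatenation preserves both shallow (fine-grained) and deep (semantic) information, leaving it to the downstream classifier to weight whichever granularity discriminates best.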


2020 ◽  
Vol 5 (2) ◽  
pp. 504
Author(s):  
Matthias Omotayo Oladele ◽  
Temilola Morufat Adepoju ◽  
Olaide Abiodun Olatoke ◽  
Oluwaseun Adewale Ojo

Yorùbá is one of the three main languages spoken in Nigeria. It is a tonal language that carries accents on its vowels. The Yorùbá alphabet has twenty-five (25) letters, one of which is a digraph (GB). Due to the difficulty of typing handwritten Yorùbá documents, there is a need to develop a handwriting recognition system that can convert handwritten text to digital format. This study discusses an offline Yorùbá handwritten word recognition system (OYHWR) that recognizes Yorùbá uppercase letters. Handwritten characters and words were obtained from different writers using the Paint application and M708 graphics tablets. The characters were used for training and the words for testing. The images were pre-processed and their geometric features were extracted using zoning and gradient-based feature extraction. Geometric features are the different line types that form a particular character, such as vertical, horizontal, and diagonal lines. The geometric features used are the number of horizontal lines, the number of vertical lines, the number of right diagonal lines, the number of left diagonal lines, the total length of all horizontal lines, the total length of all vertical lines, the total length of all right-slanting lines, the total length of all left-slanting lines, and the area of the skeleton. Each character is divided into 9 zones, and gradient feature extraction was used to extract the horizontal and vertical components and the geometric features in each zone. The words were fed into a support vector machine classifier and performance was evaluated based on recognition accuracy. Since a support vector machine is a two-class classifier, a multiclass variant, the least squares support vector machine (LSSVM), was used for word recognition. With the one-vs-one strategy and an RBF kernel, the recognition accuracies obtained on the tested words were 66.7%, 83.3%, 85.7%, 87.5%, and 100%. The low recognition rate for some of the words may result from similarity in the extracted features.
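The zoning step described above, dividing each character image into 9 zones and extracting features per zone, can be sketched with a simple per-zone pixel-density feature on a binary image. This is an illustrative simplification: the paper extracts gradient and line-type features per zone, whereas the sketch below computes only foreground density, and the function name is an assumption.

```python
def zone_features(img, zones=3):
    """Split a binary image (list of 0/1 rows) into zones x zones cells
    and return the foreground-pixel density of each cell, row-major."""
    h, w = len(img), len(img[0])
    feats = []
    for zi in range(zones):
        for zj in range(zones):
            r0, r1 = zi * h // zones, (zi + 1) * h // zones
            c0, c1 = zj * w // zones, (zj + 1) * w // zones
            area = (r1 - r0) * (c1 - c0)
            on = sum(img[r][c] for r in range(r0, r1) for c in range(c0, c1))
            feats.append(on / area)
    return feats
```

Zoning localizes features: two characters with the same global stroke counts can still be separated if their strokes fall in different zones.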


2020 ◽  
Vol 5 (2) ◽  
pp. 609
Author(s):  
Segun Aina ◽  
Kofoworola V. Sholesi ◽  
Aderonke R. Lawal ◽  
Samuel D. Okegbile ◽  
Adeniran I. Oluwaranti

This paper presents the application of Gaussian blur filters and Support Vector Machine (SVM) techniques for greeting recognition among the Yoruba tribe of Nigeria. Existing efforts have considered various gesture recognition tasks; however, recognition of tribal greeting postures or gestures in the Nigerian geographical space has not been studied before. Some cultural gestures are not correctly identified by people of the same tribe, let alone by people from different tribes, posing a risk of misinterpretation. Moreover, some cultural gestures are unknown to most people outside a tribe, which can also hinder human interaction; hence there is a need to automate the recognition of Nigerian tribal greeting gestures. This work therefore develops a Gaussian blur-SVM based system capable of recognizing Yoruba tribal greeting postures for men and women. Videos of individuals performing various greeting gestures were collected and processed into image frames. The images were resized, and a Gaussian blur filter was applied to remove noise. A moment-based feature extraction algorithm was used to extract shape features, which were passed as input to an SVM trained to recognize the two Nigerian tribal greeting postures. To confirm the robustness of the system, 20%, 25% and 30% of the dataset derived from the preprocessed images were used to test the system. The SVM achieved a recognition rate of 94%, showing that the proposed method is effective.
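The Gaussian blur denoising step can be sketched from first principles: build a normalized 1-D Gaussian kernel and convolve it over a signal (a full 2-D blur applies the same kernel along rows and then columns, since the Gaussian is separable). The function names and the edge-clamping policy are illustrative assumptions, not details from the paper.

```python
import math

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 1-D Gaussian kernel of odd length `size`."""
    half = size // 2
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-half, half + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(signal, kernel):
    """Convolve `signal` with `kernel`, clamping indices at the borders."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - half, 0), len(signal) - 1)
            acc += kv * signal[idx]
        out.append(acc)
    return out
```

Because the kernel sums to 1, blurring preserves the overall intensity level while attenuating high-frequency noise, which stabilizes the moment-based shape features computed afterwards.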


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 222
Author(s):  
Tao Li ◽  
Chenqi Shi ◽  
Peihao Li ◽  
Pengpeng Chen

In this paper, we propose a novel gesture recognition system based on a smartphone. Due to the limitations of Channel State Information (CSI) extraction equipment, existing WiFi-based gesture recognition is limited to microcomputer terminals equipped with Intel 5300 or Atheros 9580 network cards. Therefore, accurate gesture recognition can only be performed in an area relatively fixed with respect to the transceiver link. Our new gesture recognition system breaks this limitation. First, we use nexmon firmware to obtain 256 CSI subcarriers from the bottom layer of the smartphone in IEEE 802.11ac mode on an 80 MHz bandwidth, making the gesture recognition system mobile. Second, we adopt a cross-correlation method to integrate the extracted CSI features in the time and frequency domains to reduce the influence of changes in the smartphone's location. Third, we use a new, improved DTW algorithm to classify and recognize gestures. We conducted extensive experiments to verify the system's recognition accuracy at different distances, in different directions, and in different environments. The results show that the system effectively improves recognition accuracy.
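The cross-correlation step can be illustrated with a minimal sketch: slide one CSI feature sequence over another and pick the lag that maximizes their inner product, which aligns measurements taken at slightly different smartphone positions or start times. The function name and the lag convention are assumptions for illustration, not the paper's exact method.

```python
def cross_correlate(a, b, max_lag):
    """Return the lag (in samples) at which b best matches a,
    scored by the raw inner product over the overlapping region."""
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i, av in enumerate(a):
            j = i + lag
            if 0 <= j < len(b):
                s += av * b[j]
        scores[lag] = s
    return max(scores, key=scores.get)
```

Once the best lag is known, the sequences can be shifted into alignment before DTW matching, so that location-induced time offsets do not inflate the warping cost.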


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 92758-92767 ◽  
Author(s):  
Shujie Ren ◽  
Huaibin Wang ◽  
Liangyi Gong ◽  
Chaocan Xiang ◽  
Xuangou Wu ◽  
...  

Author(s):  
Xian Wang ◽  
Paula Tarrío ◽  
Ana María Bernardos ◽  
Eduardo Metola ◽  
José Ramón Casar

Many mobile devices nowadays embed inertial sensors. This enables new forms of human-computer interaction through the use of gestures (movements performed with the mobile device) as a means of communication. This paper presents an accelerometer-based gesture recognition system for mobile devices that is able to recognize a collection of 10 different hand gestures. The system was conceived to be lightweight and to operate in a user-independent manner in real time. The recognition system was implemented on a smartphone and evaluated through a collection of user tests, which showed a recognition accuracy similar to other state-of-the-art techniques with lower computational complexity. The system was also used to build a human-robot interface that enables controlling a wheeled robot with gestures made with the mobile phone.
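A common preprocessing pipeline for accelerometer gestures, not spelled out in the abstract but standard for lightweight, user-independent recognizers, is to reduce the three axes to a magnitude signal (orientation-invariant) and resample every gesture to a fixed length so sequences of different durations become comparable. The sketch below shows these two steps under that assumption; all names are illustrative.

```python
import math

def magnitude(ax, ay, az):
    """Orientation-invariant magnitude of 3-axis accelerometer samples."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]

def resample(seq, n):
    """Linearly interpolate `seq` to exactly `n` samples."""
    if len(seq) == 1:
        return [seq[0]] * n
    out = []
    for k in range(n):
        pos = k * (len(seq) - 1) / (n - 1)
        i = int(pos)
        frac = pos - i
        if i + 1 < len(seq):
            out.append(seq[i] * (1 - frac) + seq[i + 1] * frac)
        else:
            out.append(seq[i])
    return out
```

Fixed-length, orientation-invariant inputs keep the classifier small and fast, which matters when recognition must run in real time on the phone itself.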


2021 ◽  
Vol 7 ◽  
pp. e596
Author(s):  
Rodney Pino ◽  
Renier Mendoza ◽  
Rachelle Sambayan

Baybayin is a pre-Hispanic Philippine writing system used on Luzon island. As part of the effort to reintroduce the script, in 2018 the Committee on Basic Education and Culture of the Philippine Congress approved House Bill 1022, the "National Writing System Act," which declares the Baybayin script the Philippines' national writing system. Since then, Baybayin OCR has become a field of research interest. Numerous works have proposed different techniques for recognizing Baybayin script. However, all of those studies focused on classification and recognition at the character level. In this work, we propose an algorithm that provides the Latin transliteration of a Baybayin word in an image. The proposed system relies on a Baybayin character classifier generated using a Support Vector Machine (SVM). The method involves isolating each Baybayin character, classifying each character according to its equivalent syllable in Latin script, and finally concatenating the results to form the transliterated word. The system was tested using a novel dataset of Baybayin word images and achieved a competitive 97.9% recognition accuracy. Based on our review of the literature, this is the first work that recognizes Baybayin script at the word level. The proposed system can be used for automated transliteration of Baybayin texts transcribed in old books, tattoos, signage, graphic designs, and documents, among others.
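The isolate-classify-concatenate pipeline can be sketched with a simple vertical-projection segmenter: columns with no foreground pixels separate characters, each glyph is classified to its Latin syllable, and the syllables are joined. This is a toy illustration; the segmentation rule, function names, and the stub classifier are all assumptions, not the paper's SVM pipeline.

```python
def segment_columns(img):
    """Split a binary word image (list of 0/1 rows) into glyph column spans,
    treating all-blank columns as separators."""
    w = len(img[0])
    cols_on = [any(row[c] for row in img) for c in range(w)]
    spans, start = [], None
    for c, on in enumerate(cols_on):
        if on and start is None:
            start = c
        elif not on and start is not None:
            spans.append((start, c))
            start = None
    if start is not None:
        spans.append((start, w))
    return spans

def transliterate(img, classify_glyph):
    """Classify each segmented glyph to a Latin syllable and concatenate."""
    glyphs = [[row[c0:c1] for row in img] for c0, c1 in segment_columns(img)]
    return "".join(classify_glyph(g) for g in glyphs)
```

Because Baybayin is a syllabary, per-character classification outputs syllables rather than letters, so plain concatenation of the per-glyph labels already yields the Latin word.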


Author(s):  
Nourelhoda M. Mahmoud ◽  
Hassan Fouad ◽  
Ahmed M. Soliman

Patient gesture recognition is a promising method for gaining knowledge about and assisting patients. Healthcare monitoring systems are integrated with the Internet of Things (IoT) paradigm to provide remote solutions for acquiring inputs. In recent years, wearable sensors and information and communication technologies have assisted remote monitoring and recommendations in smart healthcare. In this paper, dependable gesture recognition (DGR) using a series learning method for identifying the actions of monitored patients through remote access is presented. The gesture recognition system connects the end-user (remote) and the patient for instantaneous gesture identification. A gesture is recognized by analyzing intermediate and structuring features using series learning. The proposed gesture recognition system is capable of monitoring patient activities and differentiating gestures from regular actions to improve convergence. Gestures observed through remote monitoring can be indistinguishable due to preliminary errors, which series learning helps to resolve. Misdetections and misclassifications are therefore promptly identified using the DGR, as verified by comparative analysis and experimental study. From the analysis, the proposed DGR approach attains a high precision of 94.92% for varying gestures and a high accuracy of 89.85% for a varying mess factor. The proposed DGR reduces recognition time to 4.97 s and 4.93 s for varying gestures and mess factor, respectively.


Author(s):  
D. A. Kalina ◽  
R. V. Golovanov ◽  
D. V. Vorotnev

We present a monocular-camera approach to static hand gesture recognition based on skeletonization. The problem of creating a skeleton of the human hand, as well as the body, became solvable a few years ago with the invention of so-called convolutional pose machines, a novel artificial neural network architecture. Our solution uses such a pretrained convolutional network to extract hand-joint keypoints and then reconstructs the skeleton. In this work we also propose a special skeleton descriptor and prove its stability and distinguishability for classification. We considered several widespread machine learning algorithms to build and verify different classifiers. The quality of each classifier's recognition is estimated using the well-known accuracy metric, which identified that a classical SVM (Support Vector Machine) with a radial basis function kernel gives the best results. The whole system was tested using public databases containing about 3000 test images covering more than 10 types of gestures. The results of a comparative analysis of the proposed system with existing approaches are presented; they show that our gesture recognition system provides better quality than existing solutions. The performance of the proposed system was estimated for two configurations of a standard personal computer: with a CPU (Central Processing Unit) only, and with an additional GPU (Graphics Processing Unit); the latter provides real-time processing at up to 60 frames per second. Thus we demonstrate that the proposed approach can find application in practice.
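The paper's skeleton descriptor is not specified in the abstract, but the general idea, a fixed-length vector computed from hand-joint keypoints that is stable under translation and scaling, can be sketched as normalized wrist-to-joint distances. This is an assumed, simplified descriptor for illustration; the authors' descriptor may differ.

```python
import math

def skeleton_descriptor(keypoints):
    """keypoints: list of (x, y) joints; keypoints[0] is assumed to be the wrist.
    Returns wrist-to-joint distances normalized by the largest distance,
    making the descriptor invariant to translation and uniform scaling."""
    wx, wy = keypoints[0]
    d = [math.hypot(x - wx, y - wy) for x, y in keypoints[1:]]
    m = max(d)
    return [v / m for v in d] if m else [0.0] * len(d)
```

Such invariances are exactly what lets an SVM trained on one set of hands generalize to images where the hand appears at a different position or distance from the camera.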

