Improving arm segmentation in sign language recognition systems using image processing

2020, pp. 1-14
Author(s): Qiuhong Tian, Jiaxin Bao, Huimin Yang, Yingrou Chen, Qiaoli Zhuang

BACKGROUND: For a traditional vision-based static sign language recognition (SLR) system, arm segmentation is a major factor restricting the accuracy of SLR.
OBJECTIVE: To achieve accurate arm segmentation for different bent arm shapes, we designed a segmentation method for a static SLR system based on image processing and combined it with morphological reconstruction.
METHODS: First, skin segmentation was performed in YCbCr color space to extract the skin-like region from a complex background. Then, the area operator and the location of the mass center were used to remove spurious skin-like regions and obtain the valid hand-arm region. Subsequently, the transverse distance was calculated to distinguish different bent arm shapes. The proposed segmentation method then extracted the hand region from the different types of hand-arm images. Finally, geometric features of the spatial domain were extracted and the sign language image was identified using a support vector machine (SVM) model. Experiments were conducted to determine the feasibility of the method and to compare its performance with that of neural network and Euclidean distance matching methods.
RESULTS: The results demonstrate that the proposed method can effectively segment skin-like regions from complex backgrounds as well as different bent arm shapes, thereby improving the recognition rate of the SLR system.
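The YCbCr skin-segmentation step described above can be sketched as follows. The RGB-to-YCbCr conversion uses the standard ITU-R BT.601 formulas; the Cb/Cr thresholds are common literature values for skin detection, not necessarily the exact ones used in the paper:

```python
import numpy as np

def skin_mask_ycbcr(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Return a boolean skin mask for an HxWx3 uint8 RGB image.

    Pixels whose chroma falls inside the given Cb/Cr windows are
    labeled skin-like; luminance (Y) is ignored, which makes the
    test fairly robust to lighting changes.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 RGB -> YCbCr chroma components (full range)
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

The resulting mask would then be cleaned with the area operator and morphological reconstruction before locating the hand-arm region.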

Author(s): Pradip Ramanbhai Patel, Narendra Patel

Sign Language Recognition (SLR) is emerging as an active area of research in the field of machine learning. An SLR system recognizes gestures of a sign language and converts them into text or voice, thus making communication possible between deaf and hearing people. Acceptable performance of such a system demands invariance of the output with respect to certain transformations of the input. In this paper, we introduce a real-time hand gesture recognition system for Indian Sign Language (ISL). To obtain very high recognition accuracy, we propose a hybrid feature vector that combines shape-oriented features, such as Fourier descriptors, with region-oriented features, such as Hu moments and Zernike moments. A Support Vector Machine (SVM) classifier is trained using the feature vectors of the training images. Experiments show that the proposed hybrid feature vector enhances the performance of the system by compactly capturing invariance with respect to transformations such as scaling, translation, and rotation. Being invariant to these transformations, the system is easy to use and achieves a recognition rate of 95.79%.
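As a rough illustration of the region-oriented half of such a hybrid feature vector, the snippet below computes the first two Hu invariant moments of a binary hand mask in plain NumPy. This is a minimal sketch, not the authors' implementation; a full system would append the remaining Hu moments, Fourier descriptors, and Zernike moments:

```python
import numpy as np

def hu_first_two(mask):
    """First two Hu invariant moments of a boolean mask.

    Central moments give translation invariance; normalizing by
    mu00^(1 + (p+q)/2) gives scale invariance; the specific Hu
    combinations are additionally rotation invariant.
    """
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))                 # zeroth moment = area
    xbar, ybar = xs.mean(), ys.mean()    # centroid

    def mu(p, q):                        # central moment mu_pq
        return (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()

    def eta(p, q):                       # normalized central moment
        return mu(p, q) / m00 ** (1.0 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4.0 * n11 ** 2
    return np.array([h1, h2])
```

Because the moments are built from centered coordinates, translating the hand inside the frame leaves the feature values unchanged, which is exactly the invariance property the abstract emphasizes.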


Author(s): Karishma Dixit, Anand Singh Jalal

Sign language is the essential communication method for deaf and mute people. In this paper, the authors present a vision-based approach that efficiently recognizes signs of Indian Sign Language (ISL) and translates them into their accurate meaning. A new feature vector is computed by fusing Hu invariant moments with a structural shape descriptor to recognize each sign. A multi-class Support Vector Machine (MSVM) is used for training and classifying the ISL signs. The performance of the algorithm is illustrated by simulations carried out on a dataset of 720 images. Experimental results demonstrate that the proposed approach successfully recognizes hand gestures with a 96% recognition rate.
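The multi-class SVM training step can be sketched with scikit-learn on toy data. The "fused feature vectors" here are stand-ins (well-separated random 2-D clusters, one per sign class), not real Hu-moment/shape-descriptor features:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for fused feature vectors: 3 sign classes,
# 20 samples each, drawn around distinct cluster centers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(20, 2))
               for c in ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])])
y = np.repeat([0, 1, 2], 20)

# One-vs-rest multi-class SVM with an RBF kernel.
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
acc = clf.score(X, y)
```

In a real pipeline, each row of `X` would be the fused Hu-moment/shape-descriptor vector of one training image, and accuracy would of course be measured on held-out test signs rather than the training set.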


Sensors, 2020, Vol. 20 (14), pp. 4025
Author(s): Zhanjun Hao, Yu Duan, Xiaochao Dang, Yang Liu, Daiyang Zhang

In recent years, with the development of wireless sensing technology and the widespread popularity of WiFi devices, human perception based on WiFi has become possible, and gesture recognition has become an active topic in the field of human-computer interaction. Sign language, as a form of gesture, is widely used in daily life. An effective sign language recognition system can help people with aphasia or hearing impairment interact more easily with computers and facilitate their daily lives. To this end, this paper proposes a contactless fine-grained gesture recognition method using Channel State Information (CSI), named Wi-SL. The method uses a commercial WiFi device to establish a correlation mapping between subcarrier-level amplitude and phase-difference information in the wireless signal and sign language actions, without requiring the user to wear any device. We combine an efficient denoising method to filter environmental interference with an effective selection of optimal subcarriers to reduce the computational cost of the system. We also use K-means combined with a Bagging algorithm to optimize a Support Vector Machine classification (KSB) model to enhance the classification of sign language action data. We implemented the algorithms and evaluated them in three different scenarios. The experimental results show that the average accuracy of Wi-SL gesture recognition reaches 95.8%, realizing device-free, non-invasive, high-precision sign language gesture recognition.
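Two of the pipeline stages above, denoising and optimal-subcarrier selection, can be sketched with NumPy. The moving-average filter and the variance-based selection are simple stand-ins for the paper's exact methods; the idea is that subcarriers most perturbed by hand motion carry the highest variance over time:

```python
import numpy as np

def smooth(csi, win=5):
    """Moving-average denoising along the time axis of a
    T x S CSI amplitude matrix (T samples, S subcarriers)."""
    kernel = np.ones(win) / win
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, csi)

def select_subcarriers(csi, k=3):
    """Keep the k subcarriers with the highest temporal variance,
    i.e. those most sensitive to the gesture."""
    var = csi.var(axis=0)
    idx = np.sort(np.argsort(var)[-k:])
    return csi[:, idx], idx
```

The reduced T x k matrix is what would then be fed to the KSB classification model, cutting the per-sample feature cost from S subcarriers down to k.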


Prospectiva, 2018, Vol. 16 (2), pp. 41-48
Author(s): Betsy Villa, Valeria Valencia, Julie Berrio

Sign language is the native language used by deaf people to communicate. It consists of movements and expressions performed with different parts of the body. In Colombia, there is a great lack of technologies aimed at learning and interpreting it; it is therefore a social commitment to carry out initiatives that improve the quality of life of this social group, which represents a considerable minority of the country. This article presents the design and implementation of a recognition system for non-moving gestures using the Matlab environment and the SIFT method, which displays the image of the acquired letter together with its translation into Colombian sign language, applying keypoint identification and comparison against images stored in a database. The tool recognizes the 20 non-moving letters of this set, implementing a graphical interface in Matlab for better visualization and easy access and use of the system. A better system response is verified when a standardized image element is used, in this case a surgical glove, and an improvement of the tool is proposed by applying neural network methods so that it can later be deployed online, generating a greater impact for the current needs of the Colombian population.
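The keypoint comparison step of a SIFT-based recognizer can be sketched as a Lowe-style ratio test over descriptor vectors: a query keypoint matches a database keypoint only if its nearest database descriptor is clearly closer than the second nearest. This is a generic illustration with toy 2-D descriptors (real SIFT descriptors are 128-dimensional), not the authors' Matlab code:

```python
import numpy as np

def ratio_test_matches(desc_query, desc_db, ratio=0.75):
    """Return (query_index, db_index) pairs that pass the ratio test.

    A match is kept only when the nearest database descriptor is
    at most `ratio` times the distance of the second nearest,
    which rejects ambiguous keypoints.
    """
    matches = []
    for i, d in enumerate(desc_query):
        dist = np.linalg.norm(desc_db - d, axis=1)
        order = np.argsort(dist)
        if dist[order[0]] < ratio * dist[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

In the recognizer described above, the stored letter image whose keypoints collect the most surviving matches would be reported as the recognized letter.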


Author(s): Astri Novianty, Fairuz Azmi

The World Health Organization (WHO) estimates that over five percent of the world's population is hearing-impaired. One communication problem that often arises between deaf or speech-impaired people and hearing people is the latter's limited knowledge and understanding of the sign language the deaf or speech-impaired use in daily communication. To address this problem, we build a sign language recognition system, specifically for the Indonesian sign language, called Bisindo, which is distinct from other sign languages. Our work utilizes two image processing algorithms for pre-processing, namely grayscale conversion and histogram equalization. Subsequently, principal component analysis (PCA) is employed for dimensionality reduction and feature extraction. Finally, a support vector machine (SVM) is applied as the classifier. Results indicate that histogram equalization significantly enhances recognition accuracy. Comprehensive experiments applying different random seeds to the test data confirm that our method achieves 76.8% accuracy. Accordingly, a more robust method remains open for further enhancing accuracy in sign language recognition.
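The two pre-processing steps named above, grayscale conversion and histogram equalization, can be sketched with NumPy. This is a generic implementation (BT.601 luminance weights, CDF-based equalization), not the authors' code:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance grayscale conversion using ITU-R BT.601 weights."""
    return (0.299 * rgb[..., 0] +
            0.587 * rgb[..., 1] +
            0.114 * rgb[..., 2]).astype(np.uint8)

def equalize_hist(gray):
    """Histogram equalization: remap intensities through the
    normalized cumulative histogram, stretching contrast so the
    output spans the full 0-255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (cdf[gray] * 255).astype(np.uint8)
```

The equalized images would then be flattened and passed to PCA, whose leading components form the feature vectors the SVM is trained on; the contrast stretching is presumably what drives the accuracy gain the abstract reports.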

