SonicASL

Author(s):
Yincheng Jin, Yang Gao, Yanjun Zhu, Wei Wang, Jiyang Li, ...

We propose SonicASL, a real-time gesture recognition system that can recognize sign language gestures on the fly, leveraging front-facing microphones and speakers added to commodity earphones worn by someone facing the person making the gestures. In a user study (N=8), we evaluate recognition performance at both the word and sentence levels. Given 42 frequently used individual words and 30 meaningful sentences, SonicASL achieves an accuracy of 93.8% and 90.6% for word-level and sentence-level recognition, respectively. The proposed system is tested in two real-world scenarios: indoor (apartment, office, and corridor) and outdoor (sidewalk) environments with pedestrians walking nearby. The results show that our system provides users with an effective gesture recognition tool with high reliability against environmental factors such as ambient noise and nearby pedestrians.
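The core idea of acoustic gesture sensing, as described above, is that a hand moving in front of the earphone's speaker and microphone Doppler-shifts a reflected tone. A minimal sketch of such a shift estimator is shown below; the sample rate, pilot frequency, window length, and simulated reflection are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

FS = 48_000        # assumed sample rate (Hz) for commodity earphones
TONE = 20_000.0    # assumed near-inaudible pilot tone emitted by the speaker (Hz)
N = 4096           # FFT window length

def doppler_shift(window: np.ndarray) -> float:
    """Estimate the frequency offset of the received tone from the
    emitted pilot, i.e. the Doppler shift caused by a moving hand."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(N)))
    freqs = np.fft.rfftfreq(N, d=1.0 / FS)
    # restrict the peak search to a band around the pilot tone
    band = (freqs > TONE - 500) & (freqs < TONE + 500)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak - TONE

# simulate a reflection shifted by +60 Hz (hand moving toward the mic)
t = np.arange(N) / FS
rx = np.sin(2 * np.pi * (TONE + 60) * t)
print(doppler_shift(rx))   # close to 60 Hz, within one FFT bin (FS/N ≈ 11.7 Hz)
```

A real system would track such shifts over time and feed the resulting profile to a classifier; this sketch only shows the per-window measurement.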

2017, Vol 26 (2), pp. 371-385
Author(s):
H.S. Nagendraswamy, B.M. Chethana Kumara

Recognition of signs made by deaf people, producing an equivalent textual description so that hearing people can communicate with them, is an essential and challenging task for the pattern recognition and image processing research community. Many researchers have attempted to standardize sign language recognition and to propose recognition systems. To the best of our knowledge, most of the reported work concentrates at the fingerspelling or word level, and comparatively little work at the sentence level has been reported. Because sign languages are highly abstract, fingerspelling- or word-level interpretation of signs is a tedious and cumbersome task. Although existing research in sign language recognition is active and extensive, accurate recognition and interpretation of signs at the sentence level remains a challenge. In this paper, we address this problem by proposing an approach that exploits texture description techniques and symbolic data analysis to characterize and effectively represent a sign, taking into account the intra-class variations due to different signers or the same signer at different instances of time. To study the efficacy of the proposed approach, extensive experiments were carried out on a considerably large database of Indian sign language created by us. The experimental results demonstrate that the proposed method achieves good recognition performance in terms of F-measure.
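The symbolic-data-analysis idea, representing a sign class by per-feature intervals that absorb intra-class variation across signers and sessions, can be sketched as follows. The feature dimensionality, the min/max interval rule, and the inside-interval similarity measure are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def interval_representation(instances: np.ndarray) -> np.ndarray:
    """Summarise several instances of one sign (rows = instances,
    columns = texture features) as per-feature [min, max] intervals."""
    return np.stack([instances.min(axis=0), instances.max(axis=0)], axis=1)

def similarity(features: np.ndarray, intervals: np.ndarray) -> float:
    """Fraction of features of a test vector falling inside the class intervals."""
    lo, hi = intervals[:, 0], intervals[:, 1]
    return float(np.mean((features >= lo) & (features <= hi)))

# toy example: two sign classes, three training instances each, 8 features
rng = np.random.default_rng(0)
class_a = rng.normal(0.2, 0.05, size=(3, 8))
class_b = rng.normal(0.8, 0.05, size=(3, 8))
refs = {"A": interval_representation(class_a),
        "B": interval_representation(class_b)}

test = rng.normal(0.2, 0.05, size=8)          # unseen instance of sign "A"
best = max(refs, key=lambda k: similarity(test, refs[k]))
print(best)
```

The interval form keeps one compact reference per class instead of storing every training instance, which is the practical appeal of the symbolic representation.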


2020, Vol 14
Author(s):
Vasu Mehra, Dhiraj Pandey, Aayush Rastogi, Aditya Singh, Harsh Preet Singh

Background: People with hearing and speech disabilities have few ways of communicating with others; one of them is sign language.
Objective: A sign language recognition system is essential for deaf and mute people: it acts as a translator between a disabled and an able person and removes hindrances to the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs.
Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. Speech-to-text and text-to-speech components are also included to further assist affected users. To get the best out of the human-computer relationship, the proposed solution combines several current technologies with machine-learning-based sign recognition models trained using the TensorFlow and Keras libraries.
Result: The proposed architecture works better than gesture recognition techniques such as background elimination and conversion to HSV because a sharply defined image is provided to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the features a deaf or mute person needs in day-to-day tasks.
Conclusion: Current technological advances call for reliable solutions that help deaf and mute people adjust to normal life. Instead of focusing on a standalone technology, several are combined in this work. The proposed sign recognition system is based on feature extraction and classification; the trained model helps identify different gestures.
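The feature-extraction-plus-classification pipeline described in the Methods and Result sections can be sketched without the full Keras model. Below, the binarising preprocessor stands in for the "sharply defined image" step, and a single linear softmax layer with hand-built template weights stands in for the trained network; the frame size, labels, and weights are illustrative assumptions.

```python
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Normalise a raw frame to [0, 1] and binarise it, so the classifier
    sees a sharply defined hand silhouette rather than a raw HSV frame."""
    frame = frame.astype(float) / 255.0
    return (frame > frame.mean()).astype(float).ravel()

def classify(features, weights, labels):
    """Linear stand-in for the trained model: softmax over one dense layer."""
    logits = weights @ features
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return labels[int(np.argmax(probs))]

frame = np.zeros((4, 4), dtype=np.uint8)
frame[:, :2] = 255                          # hand occupies the left half
labels = ["left", "right"]
weights = np.stack([preprocess(frame),      # hypothetical learned weights:
                    1 - preprocess(frame)]) # one template per gesture
print(classify(preprocess(frame), weights, labels))  # prints "left"
```

In the actual system the dense layer would be replaced by the Keras-trained model and the frames would come from the video sequence, but the data flow, preprocess then classify, is the same.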


Author(s):  
Gayathri. R, K. Sheela Sobana Rani, R. Lavanya

Silent speakers face many problems when communicating their thoughts and views. Moreover, only a few people know their sign language, so they often feel reluctant to take part in activities with hearing people and require sign language interpreters for their interactions. To give them a better way to get their message across, a "Smart Finger Gesture Recognition System for Silent Speakers" is proposed. Instead of relying on full sign language, gesture recognition is performed from finger movements. The system consists of a data glove, flex sensors, and a Raspberry Pi. The flex sensors are fitted on the data glove and used to recognize finger gestures. An ADC module converts the analog sensor values into digital form; after signal conversion, the values are passed to a Raspberry Pi 3, which converts them into both audio output and text using a software tool. The proposed framework reduces the communication barrier between mute and hearing people: the recognized finger gestures are rendered as speech and text so that hearing people can easily communicate with mute people.
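The glove's decoding step, thresholding each digitized flex-sensor channel and looking up the resulting finger pose, can be sketched as below. The channel order, ADC threshold, and gesture table are assumptions for the sketch; a real glove would be calibrated per wearer.

```python
# Illustrative mapping from flex-sensor ADC readings to gesture labels.
BENT = 600  # assumed 10-bit ADC count above which a finger counts as bent

GESTURES = {
    (1, 1, 1, 1, 1): "fist",
    (0, 0, 0, 0, 0): "open hand",
    (1, 0, 0, 1, 1): "peace",      # thumb, ring, and little finger bent
}

def decode(readings):
    """Threshold five ADC channels (thumb..little) and look up the pose."""
    pose = tuple(int(r > BENT) for r in readings)
    return GESTURES.get(pose, "unknown")

print(decode([700, 120, 130, 650, 720]))  # prints "peace"
```

On the Raspberry Pi, the returned label would then be displayed as text and handed to a text-to-speech tool for the audio output.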

