Real Time Full Duplex Gesture Vocalizer for Mute People

Author(s):  
Aman Mehta ◽  
Vikas Narhare ◽  
Sara Shaikh ◽  
Sanjay Koli

This work builds a smart full-duplex communication system for mute people. With this system, a normal person can communicate with a mute person by speaking in the usual way, and the mute person can express what he or she wants to say; hence the system is termed a full-duplex communication system. When the system is used by a mute person, the input is sign language and the output is audio (voice), i.e., the system vocalizes the gesture. When the system is used by a normal person, the input is his or her voice and the output is the appropriate sign-language image, which the mute person can understand. In this way the system reduces the time required for a normal person and a mute person to communicate. The proposed system aims to lower the barrier in communication.

Webology ◽  
2021 ◽  
Vol 18 (Special Issue 01) ◽  
pp. 196-210
Author(s):  
Dr.P. Golda Jeyasheeli ◽  
N. Indumathi

Nowadays, interaction between deaf and mute people and normal people is difficult, because normal people struggle to understand the meaning of the gestures, while deaf and mute people have trouble with sentence formation and grammatical correctness. To alleviate these issues, an automatic sign-language sentence generation approach is proposed. In this project, Natural Language Processing (NLP) based methods are used. NLP is a powerful tool for translation of human language, and here it is responsible for forming, from sign-language symbols, meaningful sentences that a normal person can also understand. In this system, both conventional NLP methods and deep-learning NLP methods are used for sentence generation, and the efficiency of the two methods is compared. The generated sentence is displayed as output in an Android application. This system aims to close the gap in interaction between deaf and mute people and normal people.
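The "conventional NLP" path the abstract describes, generating a grammatical sentence from a sequence of sign-language symbols, can be sketched with a tiny rule-based realizer. The gloss vocabulary, the copula-insertion rule, and the function name below are illustrative assumptions for exposition, not details taken from the paper:

```python
# Minimal sketch: turn a sequence of sign glosses (e.g. ["I", "HAPPY"])
# into a grammatical English sentence. Real systems use far richer
# grammars or deep-learning generators; this only shows the idea.

SUBJECT_PRONOUNS = {"I", "YOU", "HE", "SHE", "WE", "THEY"}

def glosses_to_sentence(glosses):
    """Rule-based realizer: lowercase glosses (keeping 'I'), insert a
    copula after a leading subject pronoun, capitalize, add a period."""
    words = []
    for i, g in enumerate(glosses):
        words.append(g if g == "I" else g.lower())
        if i == 0 and g in SUBJECT_PRONOUNS:
            # pick 'am'/'is'/'are' to agree with the subject
            words.append({"I": "am", "HE": "is", "SHE": "is"}.get(g, "are"))
    sent = " ".join(words)
    return sent[0].upper() + sent[1:] + "."

print(glosses_to_sentence(["I", "HAPPY"]))      # I am happy.
print(glosses_to_sentence(["THEY", "HUNGRY"]))  # They are hungry.
```

A deep-learning variant would replace the hand-written rule with a sequence-to-sequence model trained on gloss/sentence pairs, which is what the abstract's comparison is about.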


Our country, India, has a very large population, and almost 9 million people in the world are deaf, mute, or both. The most valuable gifts given to humans are the abilities to see, hear, speak, and respond as situations arise. Communication is one of the most important media through which one can share feelings or convey information to others. The key elements of communication are the abilities to listen and to speak, but many among us are unlucky and lack these abilities: the deaf and mute. Much research is now under way to ease the difficulties these members of our society face, because it is very hard for them to communicate with normal people. Since most normal people are not trained to understand the different sign languages, communication between the two groups becomes very complex; for a mute (deaf and dumb) person, passing a message in an emergency, or even while travelling on an ordinary day, is especially hard. Because of this disability, people with hearing and speech impairments are often reluctant to compete on equal terms with others. Communication for mute people is visual, not acoustic, so hand motion plays a very important part in it. Transmitting information between deaf-mute and normal people is always a challenging job: deaf people use sign language or gestures to make themselves understood, but hearing people generally cannot interpret them. Access to suitable communication technologies therefore plays an essential role for these people, yet developing a small, compact device for them is a difficult task.
Deaf-mute people find it difficult to communicate with normal people and hence often remain apart in their societies. They use sign languages for communication, but they face great difficulty because normal people cannot always understand sign language, so there is always a hurdle in communication between the two groups. Much research has been carried out to find a simple and easy way for mute people to communicate with normal people and express themselves to the rest of the world, and many improvements have been made in sign-language technology, but most are based on American Sign Language. Our work is designed to aid the deaf-mute by developing a smart communication module that translates sign language to text and speech, helping them communicate with others and lead a much better life. This paper presents a static gesture recognition algorithm that will be designed, implemented, and used in the smart communication system to bridge the communication gap between deaf-mute and normal people. Once fully implemented, the algorithm could also be used to capture and analyze people's emotions in areas where high security is required.


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 556
Author(s):  
Lucia Lo Bello ◽  
Gaetano Patti ◽  
Giancarlo Vasta

The IEEE 802.1Q-2018 standard embeds in Ethernet bridges novel features that are very important for automated driving, such as support for time-driven communications. However, cars move in a world where unpredictable events may occur and give rise to unforeseen situations. To react properly to such situations, the in-car communication system has to support event-driven transmissions with very low and bounded delays. This work provides a performance evaluation of EDSched, a traffic management scheme for IEEE 802.1Q bridges and end nodes that introduces explicit support for event-driven real-time traffic. EDSched works at the MAC layer and builds upon the mechanisms defined in the IEEE 802.1Q-2018 standard.


Author(s):  
HyeonJung Park ◽  
Youngki Lee ◽  
JeongGil Ko

In this work we present SUGO, a depth-video-based system for translating sign language to text using a smartphone's front camera. While exploiting depth-only videos offers benefits such as being less privacy-invasive than RGB videos, it introduces new challenges, including low video resolutions and the sensor's sensitivity to user motion. We overcome these challenges by diversifying our sign language video dataset via data augmentation so that it is robust to various usage scenarios, and by designing a set of schemes that emphasize human gestures in the input images for effective sign detection. The inference engine of SUGO is based on a 3-dimensional convolutional neural network (3DCNN) that classifies a sequence of video frames as one of the pre-trained words. Furthermore, the overall operations are designed to be lightweight so that sign language translation takes place in real time using only the resources available on a smartphone, with no help from cloud servers or external sensing components. Specifically, to train and test SUGO, we collect sign language data from 20 individuals for 50 Korean Sign Language words, yielding a dataset of ~5,000 sign gestures, and we collect additional in-the-wild data to evaluate the performance of SUGO in real-world usage scenarios with different lighting conditions and daily activities. Our extensive evaluations show that SUGO classifies sign words with an accuracy of up to 91% and suggest that the system is suitable (in terms of resource usage, latency, and environmental robustness) as a fully mobile solution for sign language translation.
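The core operation of a 3DCNN like SUGO's is a convolution that slides a kernel over time as well as height and width, so a single filter can respond to motion across frames. The following pure-Python sketch shows that spatio-temporal convolution on a toy "depth clip"; the sizes, kernel values, and function name are illustrative assumptions, not parameters from the paper (a real system would use a deep-learning framework):

```python
# Minimal sketch of the spatio-temporal (3D) convolution at the heart of
# a 3DCNN video classifier. 'video' is a frames x height x width clip.

def conv3d_valid(video, kernel):
    """Valid (no-padding) 3D cross-correlation of a clip with a kernel."""
    T, H, W = len(video), len(video[0]), len(video[0][0])
    t, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for i in range(T - t + 1):          # slide over time (frames)
        plane = []
        for j in range(H - h + 1):      # slide over height
            row = []
            for k in range(W - w + 1):  # slide over width
                acc = 0.0
                for di in range(t):
                    for dj in range(h):
                        for dk in range(w):
                            acc += video[i+di][j+dj][k+dk] * kernel[di][dj][dk]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out

# A 2x2x2 averaging kernel over a tiny 3x3x3 clip of ones.
clip = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
kernel = [[[0.125] * 2 for _ in range(2)] for _ in range(2)]
feat = conv3d_valid(clip, kernel)
print(len(feat), len(feat[0]), len(feat[0][0]))  # 2 2 2
print(feat[0][0][0])                             # 1.0 (eight ones * 0.125)
```

In a full network, many such filters are stacked with nonlinearities and pooling, and a final layer maps the resulting features to one of the (here, 50) word classes.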

