Sign Language Translator Using YOLO Algorithm

2021 ◽  
Author(s):  
Bhavadharshini M ◽  
Josephine Racheal J ◽  
Kamali M ◽  
Sankar S

Sign language is a language of hand gestures; it is the medium through which hearing-impaired individuals (deaf or mute) communicate with others. However, in order to communicate with a hearing-impaired person, the communicator must have knowledge of sign language. This ensures that the message delivered by the hearing-impaired person is understood. The implemented system proposes real-time American Sign Language recognition with a Convolutional Neural Network (CNN), supported by the You Only Look Once (YOLO) algorithm. The algorithm first performs data acquisition, then pre-processes the gestures, and finally traces hand movement using the combined algorithms.
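A YOLO-style detector scores many candidate bounding boxes per frame and keeps only the best non-overlapping ones; the overlap test it relies on is intersection-over-union (IoU). A minimal, framework-free sketch of that test (a hypothetical helper for illustration, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two detections of the same hand overlap heavily, so non-maximum
# suppression would keep only the higher-scoring one.
overlap = iou((10, 10, 50, 50), (12, 12, 50, 50))
print(overlap)
```

During non-maximum suppression, any box whose IoU with a stronger detection exceeds a threshold (commonly around 0.5) is discarded.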

TEM Journal ◽  
2020 ◽  
pp. 937-943
Author(s):  
Rasha Amer Kadhim ◽  
Muntadher Khamees

In this paper, a real-time ASL recognition system was built with a ConvNet algorithm using real colour images from a PC camera. The model is the first ASL recognition model to categorize a total of 26 letters, including J and Z, plus two new classes for space and delete, and it was explored with new datasets. The data were built to contain a wide diversity of attributes such as different lighting conditions, skin tones, backgrounds, and a wide variety of situations. The experimental results achieved a high accuracy of about 98.53% for training and 98.84% for validation. The system also maintained high accuracy across all datasets when new test data, which had not been used in training, were introduced.
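The 28-way output described above (26 letters plus space and delete) must be mapped from softmax indices back to symbols when assembling text. A hypothetical label table and decoder, assuming alphabetical ordering with the two extra classes appended (the actual class ordering is not given in the source):

```python
import string

# Hypothetical class list: 26 letters A-Z plus the space and delete
# classes described in the paper (the ordering is an assumption).
CLASSES = list(string.ascii_uppercase) + ["SPACE", "DELETE"]

def decode(indices):
    """Turn a sequence of predicted class indices into text."""
    out = []
    for i in indices:
        label = CLASSES[i]
        if label == "SPACE":
            out.append(" ")
        elif label == "DELETE":
            if out:
                out.pop()  # delete removes the last emitted character
        else:
            out.append(label)
    return "".join(out)

# H, I, SPACE, J, DELETE, A -> the J is typed and then deleted.
print(decode([7, 8, 26, 9, 27, 0]))
```

Having delete as a class lets the signer correct misrecognized letters without any keyboard input, which is part of what makes the system usable in real time.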


Sign language is a language that involves a movement of hand gestures. It is a medium for the hearing impaired person (deaf or mute) to communicate with others. However, in order to communicate with the hearing impaired person, the communicator has to have knowledge of sign language. This is to ensure that the message delivered by the hearing impaired person is understood. This project proposes real-time Malaysian sign language detection based on the Convolutional Neural Network (CNN) technique utilizing the You Only Look Once version 3 (YOLOv3) algorithm. Sign language images were collected from web sources and from recorded sign language videos extracted frame by frame. The images were labelled as either alphabets or movements. Once the preprocessing phase was completed, the system was trained and tested on the Darknet framework. The system achieved 63 percent accuracy, with learning saturation (overfitting) at 7000 iterations. Once successfully completed, this model will be integrated with other platforms, such as a mobile application, in the future.
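Training on the Darknet framework, as described above, requires each labelled image to have a companion text file in the Darknet/YOLO format: one line per object, giving a class id followed by the box centre and size normalised to [0, 1]. A small sketch of converting a pixel-coordinate label to that format (the frame size and box values are made-up examples):

```python
def to_darknet(box, img_w, img_h):
    """Convert a pixel box (x1, y1, x2, y2) to Darknet label fields:
    centre x, centre y, width, height, each normalised to [0, 1]."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2 / img_w,
            (y1 + y2) / 2 / img_h,
            (x2 - x1) / img_w,
            (y2 - y1) / img_h)

# A hand box annotated in a 640x480 video frame, labelled as class 0.
xc, yc, w, h = to_darknet((100, 100, 200, 300), 640, 480)
print(f"0 {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
```

Keeping coordinates normalised means the same label files remain valid when Darknet rescales images to the network's input resolution during training.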


2019 ◽  
Vol 10 (3) ◽  
pp. 60-73 ◽  
Author(s):  
Ravinder Ahuja ◽  
Daksh Jain ◽  
Deepanshu Sachdeva ◽  
Archit Garg ◽  
Chirag Rajput

Communicating with one another through hand gestures is, simply, sign language. It is an accepted language for communication among deaf and mute people in society, who face many obstacles in day-to-day communication with their acquaintances. A recent study by the World Health Organization reports that a very large section of the world's population (around 360 million people) has hearing loss, i.e. 5.3% of the earth's total population. This creates a need for an automated system which converts hand gestures into meaningful words and sentences. A Convolutional Neural Network (CNN) is used on 24 hand signals of American Sign Language in order to enhance the ease of communication. OpenCV was used for supporting steps such as image preprocessing. The results demonstrated that the CNN achieves an accuracy of 99.7% using a dataset found on kaggle.com.
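The 24 classes correspond to the fingerspelled alphabet minus J and Z, which involve motion and so cannot be recognised from a single still image (this is why image-only datasets such as the Kaggle Sign Language MNIST dataset omit them). A small sketch of the resulting label set and a typical pixel-normalisation preprocessing step (the exact preprocessing used in the paper is not specified; scaling 8-bit pixels into [0, 1] is an assumption):

```python
import string

# The 24 static ASL letters: J and Z require motion, so still-image
# classifiers drop them.
STATIC_LETTERS = [c for c in string.ascii_uppercase if c not in ("J", "Z")]

def normalise(pixels):
    """Scale 8-bit grayscale pixel values into [0, 1] before feeding
    the CNN (a common choice; assumed here, not taken from the paper)."""
    return [p / 255.0 for p in pixels]

print(len(STATIC_LETTERS))      # number of classes
print(normalise([0, 255]))      # extremes map to the ends of the range
```

Normalising inputs to a fixed range keeps gradient magnitudes stable during CNN training regardless of the camera's exposure.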

