Similarity Learning for CNN-Based ASL Alphabet Recognition

2021 ◽  
Author(s):  
Atoany Nazareth Fierro Radilla ◽  
Karina Ruby Perez Daniel ◽  
Gibran Benitez-Garcia ◽  
Pedro Najera Garcia ◽  
Ramona Fuentes Valdez

Sign language is an important means of conveying information within the deaf community, and it is primarily used by people with hearing or speech impairments. Moreover, sign language offers a direct form of Human-Computer Interaction (HCI), similar to voice commands. Therefore, the purpose of this study is to investigate and develop a system for American Sign Language (ASL) alphabet recognition using convolutional neural networks. Our proposal is based on semantic similarity learning with a Siamese Convolutional Neural Network, which reduces intra-class variation and inter-class similarity among sign images in a Euclidean space. The results of the Siamese architecture applied to the ASL alphabet dataset outperform previous works found in the literature. From these results, using t-SNE visualization, we demonstrate that our hypothesis is correct: ASL recognition improves when the similarity among encodings of images belonging to the same class is increased and reduced otherwise.
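The pull-together / push-apart objective this abstract describes is commonly implemented as a contrastive loss over Euclidean distances between Siamese embeddings. The sketch below illustrates that idea in plain numpy; the function name, toy embeddings, and `margin` value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    """Contrastive loss for one pair of Siamese embeddings.

    Same-class pairs are pulled together (loss grows with distance);
    different-class pairs are pushed at least `margin` apart in
    Euclidean space. The margin value is an illustrative default,
    not one reported in the paper.
    """
    d = np.linalg.norm(emb_a - emb_b)           # Euclidean distance
    if same_class:
        return 0.5 * d ** 2                     # pull together
    return 0.5 * max(0.0, margin - d) ** 2      # push apart up to margin

# Toy 2-D embeddings: two 'A' signs close together, one 'B' sign farther away.
a1 = np.array([0.10, 0.20])
a2 = np.array([0.15, 0.25])
b = np.array([0.70, 0.80])

loss_same = contrastive_loss(a1, a2, same_class=True)   # small: pair already close
loss_diff = contrastive_loss(a1, b, same_class=False)   # nonzero: pair inside margin
```

Minimizing this loss over many pairs is what tightens same-class clusters and separates different-class clusters in the embedding space, which is exactly the structure the t-SNE visualization is used to confirm.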

2019 ◽  
Vol 10 (3) ◽  
pp. 60-73 ◽  
Author(s):  
Ravinder Ahuja ◽  
Daksh Jain ◽  
Deepanshu Sachdeva ◽  
Archit Garg ◽  
Chirag Rajput

Communicating with each other through hand gestures is known as sign language. It is an accepted language for communication among deaf and mute people in society. The deaf and mute community faces many obstacles in day-to-day communication with their acquaintances. A recent study by the World Health Organization reports that around 360 million people worldwide have hearing loss, i.e. 5.3% of the earth's total population. This motivates the need for an automated system that converts hand gestures into meaningful words and sentences. A Convolutional Neural Network (CNN) is applied to 24 American Sign Language hand signs in order to enhance the ease of communication. OpenCV was used for image preprocessing. The results demonstrate that the CNN achieves an accuracy of 99.7% on the dataset found on kaggle.com.
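The preprocessing step mentioned above (typically grayscale conversion, cropping, resizing, and normalization before the CNN) can be sketched as follows. This is a simplified numpy stand-in for the usual OpenCV calls (`cv2.cvtColor`, `cv2.resize`); the target size and the naive strided downsampling are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def preprocess(frame, size=28):
    """Grayscale, center-crop to a square, downsample, and scale to [0, 1].

    `frame` is an H x W x 3 RGB array with values in [0, 255].
    The 28x28 output size is an assumption for illustration.
    """
    # RGB -> luminance (same weights cv2.cvtColor uses for grayscale)
    gray = frame @ np.array([0.299, 0.587, 0.114])
    h, w = gray.shape
    s = min(h, w)                                 # side of the center square
    top, left = (h - s) // 2, (w - s) // 2
    square = gray[top:top + s, left:left + s]     # center crop
    step = s // size
    small = square[::step, ::step][:size, :size]  # naive strided downsample
    return small / 255.0                          # normalize to [0, 1]

# Example: a random 112x140 RGB frame reduced to a 28x28 network input.
frame = np.random.randint(0, 256, (112, 140, 3)).astype(float)
x = preprocess(frame)
```

In a real pipeline `cv2.resize` with interpolation would replace the strided slicing, but the shape and value range of the network input are the same.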


TEM Journal ◽  
2020 ◽  
pp. 937-943 ◽  
Author(s):  
Rasha Amer Kadhim ◽  
Muntadher Khamees

In this paper, a real-time ASL recognition system was built with a ConvNet algorithm using real color images from a PC camera. The model is the first ASL recognition model to categorize a total of 26 letters, including J and Z, together with two new classes for space and delete, which were explored with new datasets. The datasets were built to contain a wide diversity of attributes, such as different lighting conditions, skin tones, and backgrounds, across a wide variety of situations. The experimental results achieved a high accuracy of about 98.53% for training and 98.84% for validation. The system also displayed high accuracy on all the datasets when new test data, which had not been used in training, were introduced.
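With 26 letter classes plus space and delete, each classifier output can be decoded into an edit on a running transcript. The sketch below shows one way such a 28-class output might be applied; the label ordering, function names, and one-hot example are assumptions for illustration, not the paper's implementation.

```python
import string
import numpy as np

# Assumed class layout: indices 0-25 are the letters A-Z,
# index 26 is 'space' and index 27 is 'delete'.
LABELS = list(string.ascii_uppercase) + ["space", "delete"]

def decode(probs, text):
    """Apply one 28-way classifier output to the running transcript."""
    label = LABELS[int(np.argmax(probs))]
    if label == "space":
        return text + " "
    if label == "delete":
        return text[:-1]          # remove the last character
    return text + label           # append the recognized letter

# Example: a one-hot output where the classifier is confident the sign is 'A'.
probs = np.zeros(28)
probs[0] = 1.0
text = decode(probs, "")
```

In a real-time loop this `decode` step would run on each camera frame's softmax output, usually with a confidence threshold and debouncing so one held sign does not append many duplicate letters.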

