AI at the Edge for Sign Language Learning Support

Author(s):  
Pietro Battistoni

In the field of multimodal communication, sign language remains one of the most understudied areas. Thanks to recent advances in deep learning, neural networks have far-reaching implications and applications for sign language learning. This paper describes a method for American Sign Language (ASL) alphabet recognition using Convolutional Neural Networks (CNNs) that makes it possible to monitor a user's learning progress. ASL alphabet recognition by computer vision is a challenging task due to the complexity of ASL signs, high interclass similarity, large intraclass variation, and frequent occlusions. We produced a robust model that classifies letters correctly in the majority of cases. The experimental results encouraged us to investigate the adoption of AI techniques to support the learning of a sign language as a natural language with its own syntax and lexicon. The challenge was to deliver a mobile sign language training solution that users can adopt in their everyday lives. To provide the additional computational resources that locally connected end-user devices require, we propose the adoption of a Fog Computing architecture.
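The abstract gives no implementation details, but a CNN of the kind it describes could be sketched as follows (a minimal illustration in Keras, assuming 64x64 grayscale inputs and 26 letter classes; the layer sizes are placeholders, not the authors' actual architecture):

```python
# Minimal CNN sketch for ASL alphabet classification.
# Assumptions: 64x64 grayscale inputs, 26 output classes (A-Z);
# the architecture below is illustrative, not the paper's model.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(26, activation="softmax"),  # one class per ASL letter
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```

In the Fog Computing architecture the paper proposes, a trained model like this would typically run on a fog node close to the user's device, keeping recognition latency low while heavier training remains in the cloud.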

2016 · Vol. 39(4) · pp. 833-850
Author(s):  
Joshua T. Williams ◽  
Isabelle Darcy ◽  
Sharlene D. Newman

Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which cognitive-linguistic skills predict successful L2 sign language acquisition. A group (n = 25) of adult hearing L2 learners of American Sign Language underwent a cognitive-linguistic test battery before and after one semester of sign language instruction. A number of cognitive-linguistic measures of verbal memory, phonetic categorization skills, and vocabulary knowledge were examined in a multiple linear regression analysis to determine whether they predicted proficiency. Results indicated that English vocabulary knowledge and phonetic categorization skills predicted both vocabulary growth and self-rated proficiency at the end of one semester of instruction. Memory skills did not significantly predict either proficiency measure. These results highlight how linguistic skills in the first language (L1) directly predict L2 learning outcomes regardless of differences in L1 and L2 language modalities.
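As a rough illustration of the analysis described above, a multiple linear regression of a proficiency measure on the cognitive-linguistic predictors could be run as follows (a sketch only; the data file and column names are hypothetical placeholders, not the study's actual variables):

```python
# Sketch of a multiple linear regression predicting L2 sign language
# proficiency from cognitive-linguistic measures. The file name and
# columns (vocab_knowledge, phonetic_categorization, verbal_memory,
# vocab_growth) are hypothetical assumptions.
import pandas as pd
import statsmodels.formula.api as smf

learners = pd.read_csv("asl_learners.csv")  # one row per learner (n = 25)
model = smf.ols(
    "vocab_growth ~ vocab_knowledge + phonetic_categorization + verbal_memory",
    data=learners,
).fit()
print(model.summary())  # coefficients and p-values per predictor
```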


Author(s):  
Rachaell Nihalaani

Abstract: Sign language is invaluable to hearing- and speech-impaired people and is their primary means of communicating among themselves. However, its reach is limited, since most other people have no knowledge of sign language interpretation. Sign language is communicated via hand gestures and visual modes and is therefore used by hearing- and speech-impaired people to intercommunicate. These languages have alphabets and grammars of their own, which cannot be understood by people with no knowledge of the specific symbols and rules. It has therefore become essential to be able to interpret, understand, and communicate via sign language to overcome the barriers of speech and communication. This can be tackled with the help of machine learning. This model is a Sign Language Interpreter that uses a dataset of images and interprets sign language alphabets and sentences with 90.9% accuracy. For this paper, we used the ASL (American Sign Language) alphabet and the CNN algorithm. The paper ends with a summary of the model's viability and its usefulness for the interpretation of sign language.
Keywords: Sign Language, Machine Learning, Interpretation Model, Convolutional Neural Networks, American Sign Language
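While the paper does not publish its code, classifying a single hand-sign image with a trained CNN of this kind might look like the following sketch (the saved model file, image size, and label ordering are assumptions for illustration):

```python
# Sketch: predicting an ASL letter from one image with a trained CNN.
# "asl_cnn.h5" and the 64x64 input size are hypothetical assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("asl_cnn.h5")
letters = [chr(ord("A") + i) for i in range(26)]  # assumed label order A-Z

img = tf.keras.utils.load_img("sign.jpg", color_mode="grayscale",
                              target_size=(64, 64))
x = tf.keras.utils.img_to_array(img) / 255.0  # scale pixels to [0, 1]
x = np.expand_dims(x, axis=0)                 # add batch dimension
probs = model.predict(x)[0]
print("Predicted letter:", letters[int(np.argmax(probs))])
```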


2018 · Vol. 31(2) · pp. 405-449
Author(s):  
Paul McGhee

Abstract: This article examines available (mainly anecdotal) evidence related to the experience of humor among chimpanzees and gorillas in the wild, in captivity, and following systematic sign language training. Humor is defined as one form of symbolic play. Positive evidence of object permanence, cross-modal perception, deferred imitation, and deception among chimpanzees and gorillas is used to document their cognitive capacity for humor. Playful teasing is proposed as the primordial form of humor among apes in the wild. This same form of humor is commonly found among signing apes, both in overt behavior and in signed communications. A second form of humor emerges in the context of captivity, consisting of throwing feces at human onlookers, who often respond to this with laughter. This early form of humor shows up in signing apes in the form of calling others "dirty," a sign associated with feces. The diversity of forms of signing humor shown by apes is linked to McGhee's model of humor development (McGhee, Paul E. Humor: Its Origin and Development. San Francisco, CA: W. H. Freeman & Co.; McGhee, Paul E. Understanding and Promoting the Development of Children's Humor. Dubuque, IA: Kendall/Hunt).


Author(s):  
Aniket Wattamwar

Abstract: This research work presents a prototype system that helps hearing people recognize hand gestures in order to communicate more effectively with deaf people. The work focuses on real-time recognition of the gestures of the sign language used by the deaf community. The problem is addressed using digital image processing with CNNs (Convolutional Neural Networks), skin detection, and image segmentation techniques. The system recognizes gestures of ASL (American Sign Language), including the alphabet and a subset of its words.
Keywords: gesture recognition, digital image processing, CNN (Convolutional Neural Networks), image segmentation, ASL (American Sign Language), alphabet
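The skin-detection and segmentation step mentioned above is often implemented with simple color-space thresholding; a rough OpenCV sketch is shown below (the HSV threshold values are common illustrative defaults, not the paper's calibrated settings):

```python
# Sketch of skin detection and hand segmentation via HSV thresholding.
# Threshold values are illustrative defaults, not the paper's settings.
import cv2
import numpy as np

frame = cv2.imread("gesture.jpg")  # hypothetical input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

lower_skin = np.array([0, 40, 60], dtype=np.uint8)
upper_skin = np.array([25, 255, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower_skin, upper_skin)  # binary skin mask

# Remove small noise, then keep only the skin-colored region.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
hand = cv2.bitwise_and(frame, frame, mask=mask)
cv2.imwrite("hand_segmented.jpg", hand)
# The segmented hand would then be resized and passed to the CNN.
```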

