The emergence of the phonetic and phonological features in sign language

Nordlyd ◽  
2015 ◽  
Vol 41 (2) ◽  
pp. 183-212 ◽  
Author(s):  
Wendy Sandler

Sign languages offer a unique and informative perspective on the question of the origin of phonological and phonetic features. Here I review research showing that signs are composed of distinctive features which can be discretely listed and which are organized hierarchically. In these ways sign language feature systems are comparable to those of spoken language. However, the inventory of features and aspects of their organization, while similar across sign languages, are completely unlike those of spoken languages, calling into question claims about the innateness of features for either modality. Studies of a young village sign language, Al-Sayyid Bedouin Sign Language (ABSL), demonstrate that phonological structuring is not in evidence at the outset, but rather self-organizes gradually (Sandler et al. 2011). However, our new research shows that signature phonetic features of ABSL can already be detected when ABSL signers use signs from Israeli Sign Language. This ABSL ‘accent’ points to the existence of phonetic features that may not be distinctive in any sign language but can distinguish one sign language from another, even at an early stage in the history of a language. Taken together, the findings suggest that physiological, cognitive, and social factors are at play in the emergence of phonetic and phonological features.

2021 ◽  
Vol 14 (2) ◽  
pp. 1-45
Author(s):  
Danielle Bragg ◽  
Naomi Caselli ◽  
Julie A. Hochgesang ◽  
Matt Huenerfauth ◽  
Leah Katz-Hernandez ◽  
...  

Sign language datasets are essential to developing many sign language technologies. In particular, datasets are required for training artificial intelligence (AI) and machine learning (ML) systems. Though the idea of using AI/ML for sign languages is not new, technology has now advanced to a point where developing such sign language technologies is becoming increasingly tractable. This critical juncture provides an opportunity to be thoughtful about an array of Fairness, Accountability, Transparency, and Ethics (FATE) considerations. Sign language datasets typically contain recordings of people signing, which is highly personal. The rights and responsibilities of the parties involved in data collection and storage are also complex and involve individual data contributors, data collectors or owners, and data users who may interact through a variety of exchange and access mechanisms. Deaf community members (and signers, more generally) are also central stakeholders in any end applications of sign language data. The centrality of sign language to deaf culture identity, coupled with a history of oppression, makes usage by technologists particularly sensitive. This piece presents many of these issues that characterize working with sign language AI datasets, based on the authors’ experiences living, working, and studying in this space.


1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways — all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have comparable prosodic systems to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term “superarticulation” is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the “superarticulatory arrays” of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.


2018 ◽  
Vol 44 (3-4) ◽  
pp. 123-208 ◽  
Author(s):  
Philippe Schlenker

While it is now accepted that sign languages should inform and constrain theories of ‘Universal Grammar’, their role in ‘Universal Semantics’ has been under-studied. We argue that they have a crucial role to play in the foundations of semantics, for two reasons. First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences, ones that are only inferred indirectly in spoken language. For instance, sign language ‘loci’ are positions in signing space that can arguably realize logical variables, and the fact that they are overt makes it possible to revisit foundational debates about the syntactic reality of variables, about mechanisms of temporal and modal anaphora, and about the existence of dynamic binding. Another example pertains to mechanisms of ‘context shift’, which were postulated on the basis of indirect evidence in spoken language, but which are arguably overt in sign language. Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously function as logical variables and as schematic pictures of what they denote (context shift comes with some iconic requirements as well). As a result, the semantic system of spoken languages can in some respects be seen as a simplified version of the richer semantics found in sign languages. Two conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account (as recently argued by Goldin-Meadow and Brentari). Either way, sign languages have a crucial role to play in investigations of the foundations of semantics.


2019 ◽  
Vol 39 (4) ◽  
pp. 367-395 ◽  
Author(s):  
Matthew L. Hall ◽  
Wyatte C. Hall ◽  
Naomi K. Caselli

Deaf and Hard of Hearing (DHH) children need to master at least one language (spoken or signed) to reach their full potential. Providing access to a natural sign language supports this goal. Despite evidence that natural sign languages are beneficial to DHH children, many researchers and practitioners advise families to focus exclusively on spoken language. We critique the Pediatrics article ‘Early Sign Language Exposure and Cochlear Implants’ (Geers et al., 2017) as an example of research that makes unsupported claims against the inclusion of natural sign languages. We refute the claims (1) that sign language has harmful effects and (2) that listening and spoken language are necessary for the optimal development of deaf children. While practical challenges remain (and are discussed) for providing a sign language-rich environment, research evidence suggests that such challenges are worth tackling in light of natural sign languages providing a host of benefits for DHH children – especially in the prevention and reduction of language deprivation.


Gesture ◽  
2013 ◽  
Vol 13 (3) ◽  
pp. 253-286 ◽  
Author(s):  
Oksana Tkachman ◽  
Wendy Sandler

Many sign languages have semantically related noun-verb pairs, such as ‘hairbrush/brush-hair’, which are similar in form due to iconicity. Researchers studying this phenomenon in sign languages have found that the two are distinguished by subtle differences, for example, in type of movement. Here we investigate two young sign languages, Israeli Sign Language (ISL) and Al-Sayyid Bedouin Sign Language (ABSL), to determine whether they have developed a reliable distinction in the formation of noun-verb pairs, despite their youth, and, if so, how. These two young language communities differ from each other in terms of heterogeneity within the community, contact with other languages, and size of population. Using methodology we developed for cross-linguistic comparison, we identify reliable formational distinctions between nouns and related verbs in ISL, but not in ABSL, although early tendencies can be discerned. Our results show that a formal distinction in noun-verb pairs in sign languages is not necessarily present from the beginning, but may develop gradually instead. Taken together with comparative analyses of other linguistic phenomena, the results lend support to the hypothesis that certain social factors such as population size, domains of use, and heterogeneity/homogeneity of the community play a role in the emergence of grammar.


2009 ◽  
Vol 21 (2) ◽  
pp. 193-231 ◽  
Author(s):  
Adam Schembri ◽  
David McKee ◽  
Rachel McKee ◽  
Sara Pivac ◽  
Trevor Johnston ◽  
...  

In this study, we consider variation in a class of signs in Australian and New Zealand Sign Languages that includes the signs think, name, and clever. In their citation form, these signs are specified for a place of articulation at or near the signer's forehead or above, but are sometimes produced at lower locations. An analysis of 2667 tokens collected from 205 deaf signers in five sites across Australia and of 2096 tokens collected from 138 deaf signers from three regions in New Zealand indicates that location variation in these signs reflects both linguistic and social factors, as also reported for American Sign Language (Lucas, Bayley, & Valli, 2001). Despite similarities, however, we find that some of the particular factors at work, and the kinds of influence they have, appear to differ in these three signed languages. Moreover, our results suggest that lexical frequency may also play a role.


Sign language is the primary method of communication for hearing- and speech-impaired people around the world, and most know a single sign language. Thus, there is an increasing demand for sign language interpreters. For hearing people, learning sign language is difficult, and for many speech- and hearing-impaired people, learning a spoken language is not possible. A great deal of research is being done in the domain of automatic sign language recognition. Different methods, such as computer vision, data gloves, and depth sensors, can be used to train a computer to interpret sign language. Interpretation is performed from sign to text, text to sign, speech to sign, and sign to speech. Different countries use different sign languages, so signers of different sign languages are unable to communicate with each other. Analyzing the characteristic features of gestures provides insights about a sign language, and common features across sign language gestures can inform the design of a sign language recognition system. Such a system would help reduce the communication gap between sign language users and spoken language users.
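The recognition approach described above can be illustrated with a minimal sketch: once a front end (a camera pipeline, data glove, or depth sensor) has reduced a gesture to a feature vector, classification can be as simple as nearest-neighbor matching against stored templates. Everything here is a hypothetical illustration: the feature values, the sign labels, and the four-dimensional feature space are invented for the example, not drawn from any real dataset or recognition system.

```python
import math

# Toy sign templates: each sign is represented by a (hypothetical) vector of
# normalized hand-shape/position features. In a real system these would come
# from a vision, glove, or depth-sensor front end, not be hard-coded.
TEMPLATES = {
    "HELLO":  [0.9, 0.1, 0.8, 0.2],
    "THANKS": [0.2, 0.9, 0.1, 0.7],
    "YES":    [0.5, 0.5, 0.9, 0.9],
}

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features):
    """Return the sign whose template is nearest to the observed features."""
    return min(TEMPLATES, key=lambda sign: euclidean(TEMPLATES[sign], features))

observed = [0.85, 0.15, 0.75, 0.25]  # close to the HELLO template
print(classify(observed))  # prints "HELLO"
```

Real systems replace the hand-coded templates with learned models and the toy vectors with landmark or depth features, but the core pattern (extract features, then match against labeled examples) is the same.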


2020 ◽  
pp. 016502542095819
Author(s):  
Julia Krebs ◽  
Dietmar Roehm ◽  
Ronnie B. Wilbur ◽  
Evie A. Malaia

Acquisition of natural language has been shown to fundamentally impact both one’s ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue because Deaf signers receive access to signed input at varying ages. The majority acquires sign language in (early) childhood, but some learn sign language later—a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range = 28–58 years) with early (0–3 years) or later (4–7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject–object–verb [SOV] vs. object–subject–verb [OSV]) were examined in sentences that included (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, whereby the direction of the effect depended on the specific linguistic structure.


PEDIATRICS ◽  
1994 ◽  
Vol 93 (1) ◽  
pp. A62-A62

Just as no one can pinpoint the origins of spoken language in prehistory, the roots of sign language remain hidden from view. What linguists do know is that sign languages have sprung up independently in many different places. Signing probably began with simple gestures, but then evolved into a true language with structured grammar. "In every place we've ever found deaf people, there's sign," says anthropological linguist Bob Johnson. But it's not the same language. "I went to a Mayan village where, out of 400 people, 13 were deaf, and they had their own Mayan Sign - I'd guess it's been maintained for thousands of years." Today at least 50 native sign languages are "spoken" worldwide, all mutually incomprehensible, from British and Israeli Sign to Chinese Sign.

