Sensorimotor characteristics of sign translations modulate EEG when deaf signers read English

2018 ◽  
Author(s):  
Lorna C. Quandt ◽  
Emily Kubicek

Bilingual individuals automatically translate written words from one language to another. While this process is well established in spoken-language bilinguals, less is known about its occurrence in deaf bilinguals who know both a signed and a spoken language. Since sign language uses motion and space to convey linguistic content, it is possible that action simulation in the brain’s sensorimotor system plays a role in this process. We recorded EEG from deaf participants fluent in ASL as they read individual English words and found significant differences in alpha and beta EEG at central electrode sites during the reading of English words whose ASL translations use two hands, compared to English words whose ASL translations use one hand. Hearing non-signers did not show any differences between conditions. These results demonstrate the involvement of the sensorimotor system in cross-linguistic, cross-modal translation and suggest that action simulation processes may be key to deaf signers’ language concepts.

2020 ◽  
pp. 016502542095819
Author(s):  
Julia Krebs ◽  
Dietmar Roehm ◽  
Ronnie B. Wilbur ◽  
Evie A. Malaia

Acquisition of natural language has been shown to fundamentally impact both one’s ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue because Deaf signers receive access to signed input at varying ages. The majority acquire sign language in (early) childhood, but some learn it later, a situation drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with chronological age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range = 28–58 years) with early (0–3 years) or later (4–7 years) childhood acquisition of sign language. Behavioral responses to grammatical word order variations (subject–object–verb [SOV] vs. object–subject–verb [OSV]) were examined in (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, which are uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, with the direction of the effect depending on the specific linguistic structure.


2019 ◽  
Author(s):  
Emily Kubicek ◽  
Lorna Quandt

When a person observes someone else performing an action, the observer’s sensorimotor cortex activates as if the observer were the one performing the action, a phenomenon known as action simulation. While this process is well established for basic (e.g., grasping) and complex (e.g., dancing) actions, it remains unknown whether the framework of action simulation applies to visual languages such as American Sign Language (ASL). We conducted an EEG experiment with deaf signers and hearing non-signers to compare overall sensorimotor EEG between groups and to test whether sensorimotor systems are differentially sensitive to signs produced with one hand (“1H”) or two hands (“2H”). We predicted greater alpha and beta event-related desynchronization (previously correlated with action simulation) during the perception of 2H ASL signs compared to 1H ASL signs, due to the greater demands on sensorimotor processing systems required for producing two-handed actions. We recorded EEG from both groups as they observed videos of ASL signs, half 1H and half 2H. Event-related spectral perturbations (ERSPs) in the alpha and beta ranges were computed for the two conditions at central electrode sites overlying the sensorimotor cortex. Sensorimotor EEG responses in both the Hearing and Deaf groups were sensitive to the gross motor characteristics of the observed signs. We show for the first time that, although hearing non-signers show overall more sensorimotor cortex involvement during sign observation, mirroring-related processes are nonetheless involved when deaf signers observe signs.
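The alpha/beta desynchronization measure described above can be illustrated in miniature. This is a minimal sketch, not the authors' pipeline: it computes percent band-power change relative to a pre-stimulus baseline from synthetic single-channel epochs, whereas the study computed full ERSPs across time and frequency at several central electrodes. The sampling rate, filter settings, and window choices here are illustrative assumptions.

```python
# Hedged sketch: alpha-band event-related desynchronization (ERD) as
# percent power change from baseline. Parameters are assumptions, not
# taken from the study described in the abstract.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250  # sampling rate in Hz (assumption)

def band_power(epochs, low, high, fs=FS):
    """Instantaneous band power per epoch via band-pass + Hilbert envelope."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    return np.abs(hilbert(filtered, axis=-1)) ** 2

def erd_percent(epochs, low, high, baseline, window, fs=FS):
    """Mean power in a post-stimulus window relative to baseline, in percent.
    Negative values indicate desynchronization (ERD)."""
    power = band_power(epochs, low, high, fs)
    b0, b1 = (int(t * fs) for t in baseline)
    w0, w1 = (int(t * fs) for t in window)
    base = power[:, b0:b1].mean()
    return 100.0 * (power[:, w0:w1].mean() - base) / base

# Synthetic demo: 20 epochs of 2 s containing a 10 Hz alpha rhythm whose
# amplitude drops after t = 1 s, mimicking post-stimulus alpha ERD.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
alpha = np.sin(2 * np.pi * 10 * t) * np.where(t < 1.0, 1.0, 0.3)
epochs = alpha + 0.2 * rng.standard_normal((20, t.size))

print(erd_percent(epochs, 8, 13, baseline=(0.0, 1.0), window=(1.0, 2.0)))
```

A group comparison like the one in the abstract would then contrast such ERD values between 1H and 2H conditions (and between Deaf and Hearing groups) statistically.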


2020 ◽  
Vol 32 (6) ◽  
pp. 1079-1091
Author(s):  
Stephanie K. Riès ◽  
Linda Nadalet ◽  
Soren Mickelsen ◽  
Megan Mott ◽  
Katherine J. Midgley ◽  
...  

A domain-general monitoring mechanism is proposed to be involved in overt speech monitoring. This mechanism is reflected in a medial frontal component, the error negativity (Ne), present in both error and correct trials (Ne-like wave) but larger in error than in correct trials. In overt speech production, this negativity starts to rise before speech onset and is therefore associated with inner speech monitoring. Here, we investigate whether the same monitoring mechanism is involved in sign language production. Twenty deaf signers (American Sign Language [ASL] dominant) and 16 hearing signers (English dominant) participated in a picture–word interference paradigm in ASL. As in previous studies, ASL naming latencies were measured using the keyboard release time. EEG results revealed a medial frontal negativity peaking within 15 msec after keyboard release in the deaf signers. This negativity was larger in error than in correct trials, as previously observed in spoken language production. No clear negativity was present in the hearing signers. In addition, the slope of the Ne was correlated with ASL proficiency (measured by the ASL Sentence Repetition Task) across signers. Our results indicate that a similar medial frontal mechanism is engaged in pre-output language monitoring in sign and spoken language production. These results suggest that the monitoring mechanism reflected by the Ne/Ne-like wave is independent of output modality (i.e., spoken or signed) and likely monitors prearticulatory representations of language. Differences between groups may be linked to several factors, including differences in language proficiency or more variable lexical access-to-motor-programming latencies for hearing than for deaf signers.


2017 ◽  
Vol 2 (12) ◽  
pp. 81-88
Author(s):  
Sandy K. Bowen ◽  
Silvia M. Correa-Torres

America's population is more diverse than ever before. The prevalence of students who are culturally and/or linguistically diverse (CLD) has been steadily increasing over the past decade. The changes in America's demographics require teachers who provide services to students with deafblindness to have an increased awareness of different cultures and diversity in today's classrooms, particularly regarding communication choices. Children who are deafblind may use spoken language with appropriate amplification, sign language or modified sign language, and/or some form of augmentative and alternative communication (AAC).


1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways, all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have prosodic systems comparable to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term “superarticulation” is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the “superarticulatory arrays” of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.


2018 ◽  
Vol 44 (3-4) ◽  
pp. 123-208 ◽  
Author(s):  
Philippe Schlenker

While it is now accepted that sign languages should inform and constrain theories of ‘Universal Grammar’, their role in ‘Universal Semantics’ has been under-studied. We argue that they have a crucial role to play in the foundations of semantics, for two reasons. First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences, ones that are only inferred indirectly in spoken language. For instance, sign language ‘loci’ are positions in signing space that can arguably realize logical variables, and the fact that they are overt makes it possible to revisit foundational debates about the syntactic reality of variables, about mechanisms of temporal and modal anaphora, and about the existence of dynamic binding. Another example pertains to mechanisms of ‘context shift’, which were postulated on the basis of indirect evidence in spoken language, but which are arguably overt in sign language. Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously function as logical variables and as schematic pictures of what they denote (context shift comes with some iconic requirements as well). As a result, the semantic system of spoken languages can in some respects be seen as a simplified version of the richer semantics found in sign languages. Two conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account (as recently argued by Goldin-Meadow and Brentari). Either way, sign languages have a crucial role to play in investigations of the foundations of semantics.


2019 ◽  
Vol 5 (2) ◽  
pp. 95-120 ◽  
Author(s):  
Jemina Napier ◽  
Rosemary Oram ◽  
Alys Young ◽  
Robert Skinner

Abstract Deaf people’s lives are predicated to some extent on working with sign language interpreters. The self is translated on a regular basis, and this translated self is a long-term state of being. Identity becomes known and performed through the translated self in many interactions, especially at work. (Hearing) others’ experience of deaf people, largely formed indirectly through the use of sign language interpreters, is rarely understood as intercultural or from a sociocultural linguistic perspective. This study positions itself at the crossroads of translation studies, sociolinguistics, and deaf studies to discuss findings from a scoping study that sought, for the first time, to explore whether the experience of being ‘known’ through translation is a pertinent issue for deaf signers. Through interviews with three deaf signers, we examine how they draw upon their linguistic repertoires and adopt bimodal translanguaging strategies in their work to assert or maintain their professional identity, including bypassing their representation through interpreters. We refer to this group as ‘Deaf Contextual Speakers’ (DCS). The DCS revealed the tensions they experienced as deaf signers in reinforcing, contravening, or perpetuating language ideologies, with respect to the assumptions that hearing people make about them as deaf people; their language use in differing contexts; the status of sign language; and the perceptions of other deaf signers about their translanguaging choices. This preliminary discussion of DCS’ engagement with translation, translanguaging, and professional identity(ies) will contribute to theoretical discussions of translanguaging through the examination of how this group of deaf people draw upon their multilingual and multimodal repertoires, and of the contingent and situational influences on these choices, and will extend our understanding of the relationship between language use, power, identity, translation, and representation.


2014 ◽  
Author(s):  
Evie Malaia ◽  
Thomas M Talavage ◽  
Ronnie B Wilbur

Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain’s anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2012, inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity, that is, the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between the posterior cingulate/precuneus and the left medial temporal gyrus (MTG), as well as between the inferior parietal lobe and the MTG in the right hemisphere, areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.


Author(s):  
Marc Marschark ◽  
Harry Knoors ◽  
Shirin Antia

This chapter discusses similarities and differences among the co-enrollment programs described in this volume. In doing so, it emphasizes the diversity among deaf learners and the concomitant difficulty of a “one size fits all” approach to co-enrollment programs as well as to deaf education at large. The programs described in this book thus understandably are also diverse in their approach to programming and to communication, in particular. For example, many encourage flexible use of spoken and sign modalities to encourage communication between DHH students, their hearing peers, and their classroom teachers. Others emphasize spoken language or sign language. Several programs include multi-grade classrooms, allowing DHH students to benefit socially and academically from active engagement in the classroom, and some report positive social and academic outcomes. Most programs follow a general education curriculum; all emphasize collaboration among staff as the key to success.

