The Frame/Content theory of evolution of speech

2005 ◽  
Vol 6 (2) ◽  
pp. 173-199 ◽  
Author(s):  
Peter F. MacNeilage ◽  
Barbara L. Davis

The Frame/Content theory deals with how and why the first language evolved the present-day speech mode of programming syllable “Frame” structures with segmental (consonant and vowel) “Content” elements. The first words are considered, for biomechanical reasons, to have had the simple syllable frame structures of pre-speech babbling (e.g., “bababa”), and were perhaps parental terms, generated within the parent–infant dyad. Although all gestural origins theories (including Arbib’s theory reviewed here) have iconicity as a plausible alternative hypothesis for the origin of the meaning-signal link for words, they all share the problems of how and why a fully fledged sign language, necessarily involving a structured phonology, changed to a spoken language.

2015 ◽  
Vol 19 (2) ◽  
pp. 128-148 ◽  
Author(s):  
Joshua Williams ◽  
Isabelle Darcy ◽  
Sharlene Newman

Little is known about the effect of acquiring another language modality on second language (L2) working memory (WM) capacity. Differential indexing within the WM system based on language modality may explain differences in performance on WM tasks in sign and spoken language. We investigated the effect of language modality (sign versus spoken) on L2 WM capacity. Results indicated reduced L2 WM span relative to first-language span for both L2 learners of Spanish and of American Sign Language (ASL). Importantly, ASL learners had lower L2 WM spans than Spanish learners. Additionally, ASL learners increased their L2 WM spans as a function of proficiency, whereas Spanish learners did not. This pattern of results demonstrates that acquiring another language modality disadvantages ASL learners. We posit that this disadvantage arises from an inability to correctly and efficiently allocate linguistic information to the visuospatial sketchpad, due to an L1-related indexing bias.


2020 ◽  
pp. 016502542095819
Author(s):  
Julia Krebs ◽  
Dietmar Roehm ◽  
Ronnie B. Wilbur ◽  
Evie A. Malaia

Acquisition of natural language has been shown to fundamentally impact both one’s ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue because Deaf signers receive access to signed input at varying ages. The majority acquires sign language in (early) childhood, but some learn sign language later—a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range = 28–58 years) with early (0–3 years) or later (4–7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject–object–verb [SOV] vs. object–subject–verb [OSV]) were examined in sentences that included (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, whereby the direction of the effect depended on the specific linguistic structure.


2017 ◽  
Vol 2 (12) ◽  
pp. 81-88
Author(s):  
Sandy K. Bowen ◽  
Silvia M. Correa-Torres

America's population is more diverse than ever before. The prevalence of students who are culturally and/or linguistically diverse (CLD) has been steadily increasing over the past decade. The changes in America's demographics require teachers who provide services to students with deafblindness to have an increased awareness of different cultures and diversity in today's classrooms, particularly regarding communication choices. Children who are deafblind may use spoken language with appropriate amplification, sign language or modified sign language, and/or some form of augmentative and alternative communication (AAC).


Author(s):  
Stein Erik Ohna

The Norwegian National Curriculum in 1997 introduced four subject curricula for deaf students as part of new legislation giving deaf students who have acquired sign language as their first language the right to instruction in the use of sign language and through the medium of sign language. A few years later, new hearing technologies contributed to substantial changes in the educational context. This situation has challenged the school system, schools, and teachers. The chapter is organized in three sections. First, the educational system and the process leading to the introduction of new legislation is presented. The second section deals with information about the use of curricula for deaf students. The last section discusses issues of students’ achievements, classroom processes, and national policies.


1999 ◽  
Vol 2 (2) ◽  
pp. 187-215 ◽  
Author(s):  
Wendy Sandler

In natural communication, the medium through which language is transmitted plays an important and systematic role. Sentences are broken up rhythmically into chunks; certain elements receive special stress; and, in spoken language, intonational tunes are superimposed onto these chunks in particular ways — all resulting in an intricate system of prosody. Investigations of prosody in Israeli Sign Language demonstrate that sign languages have comparable prosodic systems to those of spoken languages, although the phonetic medium is completely different. Evidence for the prosodic word and for the phonological phrase in ISL is examined here within the context of the relationship between the medium and the message. New evidence is offered to support the claim that facial expression in sign languages corresponds to intonation in spoken languages, and the term “superarticulation” is coined to describe this system in sign languages. Interesting formal differences between the intonational tunes of spoken language and the “superarticulatory arrays” of sign language are shown to offer a new perspective on the relation between the phonetic basis of language, its phonological organization, and its communicative content.


2018 ◽  
Vol 44 (3-4) ◽  
pp. 123-208 ◽  
Author(s):  
Philippe Schlenker

While it is now accepted that sign languages should inform and constrain theories of ‘Universal Grammar’, their role in ‘Universal Semantics’ has been under-studied. We argue that they have a crucial role to play in the foundations of semantics, for two reasons. First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences, ones that are only inferred indirectly in spoken language. For instance, sign language ‘loci’ are positions in signing space that can arguably realize logical variables, and the fact that they are overt makes it possible to revisit foundational debates about the syntactic reality of variables, about mechanisms of temporal and modal anaphora, and about the existence of dynamic binding. Another example pertains to mechanisms of ‘context shift’, which were postulated on the basis of indirect evidence in spoken language, but which are arguably overt in sign language. Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously function as logical variables and as schematic pictures of what they denote (context shift comes with some iconic requirements as well). As a result, the semantic system of spoken languages can in some respects be seen as a simplified version of the richer semantics found in sign languages. Two conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account (as recently argued by Goldin-Meadow and Brentari). Either way, sign languages have a crucial role to play in investigations of the foundations of semantics.


Author(s):  
Cicik Aini

Sign language is the oldest language used by humans for dialogue and communication, owing to its simplicity and its reliance on movement, symbols, and gestures. In most urban and rural communities, individuals use gestures and signs that they understand and produce to express their varied needs; we also resort to signs in daily life and in special circumstances, such as when communicating with someone whose language we do not understand. Some official bodies use signs in their fields of work. Sign is a universal language used by many circles, for example traffic signals and the signs performed by workers in the stock exchange, the railways, the army, aviation, the navy, and the scouts. These signs have proven difficult to dispense with in our society.


2020 ◽  
Vol 15 (4) ◽  
pp. 199-220
Author(s):  
Matic Pavlič

The basic sign order in Slovenian Sign Language (SZJ) is Subject-Verb-Object (SVO). This is shown by analysing transitive and ditransitive sentences, neither topicalised nor focalised, that were elicited from first-language SZJ informants using a Picture Description Task. The data further reveal that the visual-gestural modality through which SZJ is transmitted plays a role in linearization, since visually influenced classifier predicates trigger the non-basic SOV sign order in this language.
