Perceptual categorization of handling handshapes in British Sign Language

2015 ◽  
Vol 8 (4) ◽  
pp. 501-532 ◽  
Author(s):  
Zed Sevcikova Sehyr ◽  
Kearsy Cormier

Abstract Sign languages like British Sign Language (BSL) include partially lexicalized constructions depicting object handling or manipulation – handling constructions. Object sizes gradiently vary, yet it is unclear whether handling handshapes depict handled objects categorically or gradiently. This study investigates the influence of sign language experience on perception of handling handshapes. Deaf signers and hearing non-signers completed perceptual handshape identification and discrimination tasks. We examined whether deaf BSL signers perceived handshape continua categorically or continuously compared with hearing non-signers, and whether reaction times were modulated by linguistic representations. The results revealed similar binary categorization of dynamically presented handling handshapes as deaf and hearing perceivers displayed higher discrimination accuracy on category boundaries, and lower, but above chance, within-category discrimination, suggesting that perceptual categorization was not uniquely mediated by linguistic experience. However, RTs revealed critical differences between groups in processing times; deaf signers’ RTs reflected stronger category bias and increased sensitivity to boundaries, suggesting underlying linguistic representations. Further, handshape variability within categories influenced deaf signers’ discrimination RTs in a manner that suggested graded category organization, with a handshape prototype grounding the category. These findings provide insight into the internal organization of handling handshapes and highlight the complex relationship between sign language, cognition, and gesture.

2020 ◽  
pp. 026765832090685
Author(s):  
Sannah Gulamani ◽  
Chloë Marshall ◽  
Gary Morgan

Little is known about how hearing adults learn sign languages. Our objective in this study was to investigate how learners of British Sign Language (BSL) produce narratives, and we focused in particular on viewpoint-taking. Twenty-three intermediate-level learners of BSL and 10 deaf native/early signers produced a narrative in BSL using the wordless picture book Frog, where are you? (Mayer, 1969). We selected specific episodes from part of the book that provided rich opportunities for shifting between different characters and taking on different viewpoints. We coded for details of story content, the frequency with which different viewpoints were used and how long those viewpoints were used for, and the numbers of articulators that were used simultaneously. We found that even though learners’ and deaf signers’ narratives did not differ in overall duration, learners’ narratives had less content. Learners used character viewpoint less frequently than deaf signers. Although learners spent just as long as deaf signers in character viewpoint, they spent longer than deaf signers in observer viewpoint. Together, these findings suggest that character viewpoint was harder than observer viewpoint for learners. Furthermore, learners were less skilled than deaf signers in using multiple articulators simultaneously. We conclude that challenges for learners of sign include taking character viewpoint when narrating a story and encoding information across multiple articulators simultaneously.


2020 ◽  
Vol 37 (4) ◽  
pp. 571-608
Author(s):  
Diane Brentari ◽  
Laura Horton ◽  
Susan Goldin-Meadow

Abstract Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses—one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs)—particularly, multiple-verb predicates—reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.


1982 ◽  
Vol 1031 (1) ◽  
pp. 155-178
Author(s):  
James G. Kyle ◽  
Bencie Woll ◽  
Peter Llewellyn-Jones

2016 ◽  
Vol 28 (1) ◽  
pp. 20-40 ◽  
Author(s):  
Velia Cardin ◽  
Eleni Orfanidou ◽  
Lena Kästner ◽  
Jerker Rönnberg ◽  
Bencie Woll ◽  
...  

The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.


Author(s):  
Li Hsieh

Bilingual speakers rely on attentional and executive control to continuously inhibit or activate linguistic representations of competing languages, which leads to an increased cognitive efficiency known as the “bilingual advantage”. Both monolingual and bilingual speakers were asked to perform multiple tasks, talking on a cell phone while simultaneously attending to simulated driving events. This study examined the effect of bilingualism on performance in a dual-task experiment with 20 monolingual and 13 bilingual healthy adults. Within-subject and between-subject comparisons were conducted on reaction times for a visual event detection task during (a) driving only and (b) driving while simultaneously engaged in a phone conversation. Results showed that bilingual speakers performed significantly faster than monolingual speakers during the multitasking condition, but not during the driving-only condition. Further, bilingual speakers consistently showed a bilingual advantage in reaction times during the multitasking condition, despite varying scores on a bilingual dominance scale. Overall, experience with more than one language yielded a bilingual advantage: bilingual speakers outperformed monolingual speakers during the multitasking condition, but not during the single-task condition. Regardless of differences in bilingual proficiency level, such language experience had a positive impact on bilingual speakers' multitasking performance.
