The Impact of Aging on Spatial Abilities in Deaf Users of a Sign Language

Author(s):  
Stéphanie Luna ◽  
Sven Joubert ◽  
Marion Blondel ◽  
Carlo Cecchetto ◽  
Jean-Pierre Gagné

Abstract Research involving the general population of people who use a spoken language to communicate has demonstrated that older adults experience cognitive and physical changes associated with aging. Notwithstanding the differences in the cognitive processes involved in sign and spoken languages, it is possible that aging can also affect cognitive processing in deaf signers. This research aims to explore the impact of aging on spatial abilities among sign language users. Results showed that younger signers were more accurate than older signers on all spatial tasks. Therefore, the age-related impact on spatial abilities found in the older hearing population can be generalized to the population of signers. Potential implications for sign language production and comprehension are discussed.

1986 ◽  
Vol 9 (3) ◽  
pp. 503-517 ◽  
Author(s):  
Ralph E. Hoffman

Abstract How is it that many schizophrenics identify certain instances of verbal imagery as hallucinatory? Most investigators have assumed that alterations in sensory features of imagery explain this. This approach, however, has not yielded a definitive picture of the nature of verbal hallucinations. An alternative perspective suggests itself if one allows the possibility that the nonself quality of hallucinations is inferred on the basis of the experience of unintendedness that accompanies imagery production. Information-processing models of “intentional” cognitive processes call for abstract planning representations that are linked to goals and beliefs. Unintended actions - and imagery - can reflect planning disruptions whereby cognitive products do not cohere with concurrent goals. A model of schizophrenic speech disorganization is presented that postulates a disturbance of discourse planning. Insofar as verbal imagery can be viewed as inwardly directed speech, a consequence of such planning disturbances could be the production of unintended imagery. This link between the outward disorganization of schizophrenic speech and unintended verbal imagery is statistically supported by comparing the speech behavior of hallucinating and nonhallucinating schizophrenics. Studies of “borderline” hallucinations during normal, “goal-less” relaxation and drowsiness suggest that experiential unintendedness leads to a nonpathological variant of hallucinatory otherness that is correctable upon emerging from such passive cognitive states. This contrasts with the schizophrenic case, where nonconcordance with cognitive goals reinforces the unintendedness of verbal images and sustains the conviction of an external source. This model compares favorably with earlier models of verbal hallucinations and provides further evidence for a language production disorder in many schizophrenics.

Short Abstract: How is it that many schizophrenics identify certain instances of verbal imagery as hallucinatory? This paper proposes that the critical feature identifying hallucinations is the experience of unintendedness. This experience is nonpathological during passive conscious states but pathological if occurring during goal-directed cognitive processing. A model of schizophrenic speech disorganization is presented that postulates a disturbance of discourse planning that specifies communicative intentions. These alterations could generate unintended verbal imagery as well. Statistical data are offered to support the model, and relevant empirical studies are reviewed.


2018 ◽  
Vol 75 (1) ◽  
pp. 155-161 ◽  
Author(s):  
Joanna M Blodgett ◽  
Diana Kuh ◽  
Rebecca Hardy ◽  
Daniel H J Davis ◽  
Rachel Cooper

Abstract Background Cognitive processing plays a crucial role in the integration of sensory input and motor output that facilitates balance. However, whether balance ability in adulthood is influenced by cognitive pathways established in childhood is unclear, especially as no study has examined whether these relationships change with age. We aimed to investigate associations between childhood cognition and age-related change in standing balance between mid and later life. Methods Data on 2,380 participants from the MRC National Survey of Health and Development were included in analyses. Repeated-measures multilevel models estimated the association between childhood cognition, assessed at age 15, and log-transformed balance time, assessed at ages 53, 60–64, and 69 using the one-legged stand with eyes closed. Adjustments were made for sex, death, attrition, anthropometric measures, health conditions, health behaviors, education, other indicators of socioeconomic position (SEP), and adult verbal memory. Results In a sex-adjusted model, a 1 standard deviation increase in childhood cognition was associated with a 13% (95% confidence interval: 10, 16; p < .001) increase in balance time at age 53, and this association diminished with age (cognition × age interaction: p < .001). Adjustments for education, adult verbal memory, and SEP largely explained these associations. Conclusions Higher childhood cognition was associated with better balance performance in midlife, with diminishing associations with increasing age. The impact of adjustment for education, cognition, and other indicators of SEP suggested a common pathway through which cognition is associated with balance across life. Further research is needed to understand the underlying mechanisms, which may have important implications for falls risk and the maintenance of physical capability.


Author(s):  
Jack Kuhns ◽  
Dayna R. Touron

The study of aging and cognitive skill learning is concerned with age-related changes and differences in how we gather, store, and use information and abilities. As life expectancy continues to rise, resulting in greater numbers and proportions of older individuals in the population, understanding the development and retention of skills across the lifespan is increasingly important. Older adults’ task performance in cognitive skill learning is often equal to that of young adults, albeit less efficient: older adults often require more time to complete training. Investigations of age differences in fundamental cognitive processes of attention, memory, or executive functioning generally reveal declines in older adults. These declines are related to a general slowing of cognitive processing, which lengthens the time needed to complete tasks and can interfere with the fidelity of older adults’ cognitive processes in time-limited scenarios. Despite this, older adults maintain rates of learning comparable to those of young adults, albeit with some reduced efficiency in more complex tasks. The effectiveness of older adults’ learning is also impacted by a lesser tendency to recognize and adopt efficient learning strategies, as well as less flexibility in strategy use relative to younger adults. In learning tasks that involve a transition from using a complex initial strategy to relying on memory retrieval, older adults show a volitional avoidance of memory that is related to lower memory confidence and an impoverished mental model of the task. Declines in learning are not entirely problematic from a functional perspective, however, as older adults can often rely upon their extensive knowledge to compensate for certain deficiencies, particularly in everyday tasks. Indeed, domains where older adults have maintained expertise are somewhat insulated from other age-related declines.


2019 ◽  
Vol 62 (7) ◽  
pp. 2455-2472
Author(s):  
Jean K. Gordon ◽  
Kim Andersen ◽  
Gabriella Perez ◽  
Eileen Finnegan

Purpose Spoken language serves as a primary means of social interaction, but speech and language skills change with age, a potential source of age-related stereotyping. The goals of this study were to examine how accurately age could be estimated from language samples, to determine which speech and language cues were most informative, and to assess the impact of perceived age on judgments of the speakers' communication skills. Method We analyzed narratives from 84 speakers aged 30–89 years to identify age-related differences and compared these differences to factors affecting perceptions of age and communicative competence. Three groups of raters estimated the speakers' ages and judged the quality of their communication: 44 listeners listened to audio-recorded narratives, 51 readers read transcripts of the narratives, and 24 voice raters listened to 10-s samples of speech extracted from one of the narratives. Results Older speakers spoke more slowly but showed minimal linguistic differences compared to younger speakers. Speakers' ages were estimated quite accurately, even from 10-s samples. Estimates were largely based on cues available in the acoustic signal—speech rate and vocal characteristics—so listeners were more accurate than readers. However, an overreliance on these cues also contributed to overestimates of speakers' ages. Communication ratings were not strongly related to perceived age but were influenced by various aspects of speech and language. In particular, speakers who produced longer narratives and spoke more quickly were judged to be better communicators. Conclusion Speakers tend to be judged on relatively superficial aspects of spoken language, in part because age-related change is most evident at these levels. Implications of these findings for age-related theories of stereotyping and speech-language intervention are discussed.


Target ◽  
1995 ◽  
Vol 7 (1) ◽  
pp. 135-149 ◽  
Author(s):  
William P. Isham

Abstract Research using interpreters who work with signed languages can aid us in understanding the cognitive processes of interpretation in general. Using American Sign Language (ASL) as an example, the nature of signed languages is outlined first. Then the difference between signed languages and manual codes for spoken languages is delineated, and it is argued that these two manners of communicating through the visual channel offer a unique research opportunity. Finally, an example from recent research is used to demonstrate how comparisons between spoken-language interpreters and signed-language interpreters can be used to test hypotheses regarding interpretation.


2019 ◽  
Author(s):  
Chad S. Rogers ◽  
Jonathan E. Peelle

Understanding spoken language requires transmission of the acoustic signal up the ascending auditory pathway. However, in many cases speech understanding also relies on cognitive processes that act on the acoustic signal. One area in which cognitive processing is particularly striking during speech comprehension is when the acoustic signal is made more challenging, which might happen due to background noise, talker characteristics, or hearing loss. This chapter focuses on the interaction between hearing and cognition in the context of age-related hearing loss. The chapter begins with a review of common age-related changes in hearing and cognition, followed by summary evidence from behavioral, pupillometric, and neuroimaging paradigms that elucidate the interplay between hearing ability and cognition. Across a variety of experimental paradigms, there is compelling evidence that when listeners process acoustically challenging speech, additional cognitive processing is required compared to acoustically clear speech. This increase in cognitive processing is associated with specific brain networks, with the clearest evidence implicating the cingulo-opercular and executive attention networks and prefrontal cortex. Individual differences in hearing and cognitive ability thus determine the cognitive demand faced by a particular listener, and the cognitive and neural resources needed to aid in speech perception.


2022 ◽  
Vol 12 ◽  
Author(s):  
Pheobe Wenyi Sun ◽  
Andrew Hines

Perceived quality of experience for speech listening is influenced by cognitive processing and can affect a listener's comprehension, engagement, and responsiveness. Quality of Experience (QoE) is a paradigm used within the media technology community to assess media quality by linking quantifiable media parameters to perceived quality. The established QoE framework provides a general definition of QoE, categories of possible quality-influencing factors, and an identified QoE formation pathway. These assist researchers in designing experiments and evaluating perceived quality for any application. The QoE formation pathways in the current framework do not attempt to capture cognitive effort effects, and the standard experimental assessments of QoE minimize the influence of cognitive processes. The impact of cognitive processes, and how they can be captured within the QoE framework, has not been systematically studied by the QoE research community. This article reviews research from the fields of audiology and cognitive science regarding how cognitive processes influence the quality of the listening experience. The cognitive listening mechanism theories are compared with the QoE formation mechanism in terms of quality-contributing factors, experience formation pathways, and measures of experience. The review prompts a proposal to integrate mechanisms from audiology and cognitive science into the existing QoE framework in order to properly account for cognitive load in speech listening. The article concludes with a discussion of how an extended framework could facilitate measurement of QoE in broader and more realistic application scenarios where cognitive effort is a material consideration.


2020 ◽  
Vol 32 (6) ◽  
pp. 1079-1091
Author(s):  
Stephanie K. Riès ◽  
Linda Nadalet ◽  
Soren Mickelsen ◽  
Megan Mott ◽  
Katherine J. Midgley ◽  
...  

A domain-general monitoring mechanism is proposed to be involved in overt speech monitoring. This mechanism is reflected in a medial frontal component, the error negativity (Ne), present in both error and correct trials (the Ne-like wave) but larger in errors than in correct trials. In overt speech production, this negativity starts to rise before speech onset and is therefore associated with inner speech monitoring. Here, we investigate whether the same monitoring mechanism is involved in sign language production. Twenty deaf signers (American Sign Language [ASL] dominant) and 16 hearing signers (English dominant) participated in a picture–word interference paradigm in ASL. As in previous studies, ASL naming latencies were measured using the keyboard release time. EEG results revealed a medial frontal negativity peaking within 15 msec after keyboard release in the deaf signers. This negativity was larger in error than in correct trials, as previously observed in spoken language production. No clear negativity was present in the hearing signers. In addition, the slope of the Ne was correlated with ASL proficiency (measured by the ASL Sentence Repetition Task) across signers. Our results indicate that a similar medial frontal mechanism is engaged in preoutput language monitoring in sign and spoken language production. These results suggest that the monitoring mechanism reflected by the Ne/Ne-like wave is independent of output modality (i.e., spoken or signed) and likely monitors prearticulatory representations of language. Differences between groups may be linked to several factors, including differences in language proficiency or more variable lexical-access-to-motor-programming latencies for hearing than for deaf signers.


2003 ◽  
Vol 15 (5) ◽  
pp. 718-730 ◽  
Author(s):  
David P. Corina ◽  
Lucila San Jose-Robertson ◽  
Andre Guillemin ◽  
Julia High ◽  
Allen R. Braun

Unlike spoken languages, sign languages of the deaf make use of two primary articulators, the right and left hands, to produce signs. This situation has no obvious parallel in spoken languages, in which speech articulation is carried out by symmetrical unitary midline vocal structures. This arrangement affords a unique opportunity to examine the robustness of the linguistic systems that underlie language production in the face of contrasting articulatory demands and to chart the differential effects of handedness for highly skilled movements. The positron emission tomography (PET) technique was used to examine brain activation in 16 deaf users of American Sign Language (ASL) while subjects generated verb signs independently with their right dominant and left nondominant hands (compared to the repetition of noun signs). Nearly identical patterns of left inferior frontal and right cerebellum activity were observed. This pattern of activation during signing is consistent with patterns that have been reported for spoken languages, including evidence for specializations of inferior frontal regions related to lexical–semantic processing, search and retrieval, and phonological encoding. These results indicate that lexical–semantic processing in production relies upon left-hemisphere regions regardless of the modality in which a language is realized, and that this left-hemisphere activation is stable, even in the face of conflicting articulatory demands. In addition, these data provide evidence for the role of the right posterolateral cerebellum in linguistic–cognitive processing and evidence of a left ventral fusiform contribution to sign language processing.


Languages ◽  
2018 ◽  
Vol 3 (3) ◽  
pp. 32
Author(s):  
Ronice Quadros ◽  
Diane Lillo-Martin

This paper presents an analysis of heritage signers: bimodal bilinguals, adult hearing children of Deaf parents who acquired sign language at home with their parents and the spoken language from the surrounding community. Analyzing heritage language with bimodal bilinguals, who possess pairs of languages in different modalities, provides a new kind of evidence for understanding the heritage language phenomenon as well as for theoretical issues regarding human language. Language production data were collected from four Brazilian bimodal bilinguals separately in both sign and speech, as well as from monolingual comparison Deaf signers and hearing speakers. The data were subsequently analyzed for various grammatical components. As with other types of heritage speakers, we observed a great degree of individual variation in the sign (heritage) language: balanced participants patterned similarly to the monolingual signers, while others' use of sign language differed greatly from that of monolinguals. One participant also showed some weaknesses in the second (spoken) language. We approach the variation in fluency in the two languages by considering the different contexts of language development and continuing use.

