Degree of Language Experience Modulates Visual Attention to Visible Speech and Iconic Gestures During Clear and Degraded Speech Comprehension

2019 ◽  
Vol 43 (10) ◽  
Author(s):  
Linda Drijvers ◽  
Julija Vaitonytė ◽  
Asli Özyürek
2021 ◽  
Vol 12 ◽  
Author(s):  
Kendra Gimhani Kandana Arachchige ◽  
Wivine Blekic ◽  
Isabelle Simoes Loureiro ◽  
Laurent Lefebvre

Numerous studies have explored the benefit of iconic gestures for speech comprehension. However, only a few studies have investigated how visual attention is allocated to these gestures in the context of clear versus degraded speech, and how information is extracted from them to enhance comprehension. This study aimed to explore the effect of iconic gestures on comprehension and whether fixating a gesture is required for information extraction. Four types of gestures (i.e., semantically incongruent, syntactically incongruent, and congruent iconic gestures, as well as meaningless configurations) were presented in a sentence context in three listening conditions (i.e., clear, partly degraded, or fully degraded speech). Participants’ gaze was recorded with eye-tracking technology while they watched video clips, after which they answered simple comprehension questions. Results first showed that the different gesture types attracted attention to different degrees, and that the more the speech was degraded, the less attention participants paid to gestures. Furthermore, semantically incongruent gestures particularly impaired comprehension even though they were not fixated, whereas congruent gestures improved comprehension despite likewise not being fixated. These results suggest that covert attention is sufficient to convey information that will be processed by the listener.


2019 ◽  
Vol 63 (2) ◽  
pp. 209-220 ◽  
Author(s):  
Linda Drijvers ◽  
Asli Özyürek

Native listeners benefit from both visible speech and iconic gestures to enhance degraded speech comprehension (Drijvers & Özyürek, 2017). We tested how highly proficient non-native listeners benefit from these visual articulators compared to native listeners. We presented videos of an actress uttering a verb in clear, moderately, or severely degraded speech, while her lips were blurred, visible, or visible and accompanied by a gesture. Our results revealed that, unlike native listeners, non-native listeners were less likely to benefit from the combined enhancement of visible speech and gestures, in part because the benefit from visible speech alone was minimal when the signal quality was insufficient.


2017 ◽  
Vol 60 (1) ◽  
pp. 212-222 ◽  
Author(s):  
Linda Drijvers ◽  
Asli Özyürek

Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately.

Method: Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).

Results: Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.

Conclusions: When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.
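Noise-vocoding, the degradation method used in this paradigm, splits the speech signal into frequency bands, extracts the amplitude envelope of each band, and uses it to modulate band-limited noise: more bands preserve more spectral detail. A minimal sketch of the technique (band count, frequency range, and filter order here are illustrative, not the study's exact parameters):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=6, f_lo=100.0, f_hi=4000.0):
    """Replace speech fine structure with noise, keeping per-band envelopes."""
    # Logarithmically spaced band edges between f_lo and f_hi.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))              # amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier                     # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)        # normalize to +/-1
```

With `n_bands=2` the output retains little more than the coarse temporal envelope (hard to understand); with `n_bands=6` enough spectral structure survives for partial intelligibility, which is why visual cues help most there.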


2019 ◽  
Author(s):  
Chloé Stoll ◽  
Matthew William Geoffrey Dye

While a substantial body of work has suggested that deafness brings about an increased allocation of visual attention to the periphery, there has been much less work on how using a signed language may also influence this attentional allocation. Signed languages are visual-gestural, produced using the body and perceived via the human visual system. Signers fixate upon the face of interlocutors and do not directly look at the hands moving in the inferior visual field. It is therefore reasonable to predict that signed languages require a redistribution of covert visual attention to the inferior visual field. Here we report a prospective and statistically powered assessment of the spatial distribution of attention to inferior and superior visual fields in signers (both deaf and hearing) in a visual search task. Using a Bayesian Hierarchical Drift Diffusion Model, we estimated decision-making parameters for the superior and inferior visual field in deaf signers, hearing signers, and hearing non-signers. Results indicated a greater attentional redistribution toward the inferior visual field in adult signers (both deaf and hearing) than in hearing sign-naïve adults. The effect was smaller for hearing signers than for deaf signers, suggestive of either a role for extent of exposure or greater plasticity of the visual system in the deaf. The data provide support for a process by which the demands of linguistic processing can influence the human attentional system.
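A drift diffusion model of the kind fitted in this study describes each two-choice decision as noisy evidence accumulation toward one of two boundaries; the drift rate (evidence quality) and boundary separation (caution) are the parameters compared across visual fields and groups. A minimal single-trial simulation sketch (parameter values are illustrative, not the study's estimates):

```python
import numpy as np

def simulate_ddm(drift, boundary, noise=1.0, dt=0.001, max_t=3.0, seed=0):
    """Simulate one two-choice trial; returns (choice, decision_time).

    Evidence starts midway between the boundaries at 0 and `boundary`,
    then takes Gaussian-noise steps with mean drift*dt until it crosses
    either boundary (or the trial times out at max_t seconds).
    """
    rng = np.random.default_rng(seed)
    x, t = boundary / 2.0, 0.0           # unbiased starting point
    while 0.0 < x < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), t
```

A higher drift rate yields faster and more accurate responses; in this framework, the reported attentional redistribution would surface as higher estimated drift rates for inferior-field targets in signers than in non-signers.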


Author(s):  
Douglas B. Quine ◽  
David Regan ◽  
Thomas J. Murray

SUMMARY: Delays of auditory perception at three frequencies were measured in 30 multiple sclerosis patients using a psychophysical technique. Nineteen patients had abnormal delays at one or more tone frequencies, though 15 had normal audiograms at those frequencies. In addition, auditory acuity for left-right asynchrony was abnormally poor in 13 patients, 9 of whom had normal audiograms. Such delays of auditory perception within a restricted frequency band may provide a partial explanation for degraded speech comprehension in some multiple sclerosis patients.


2016 ◽  
Vol 11 (1) ◽  
pp. 55-75 ◽  
Author(s):  
Andrea Schremm ◽  
Pelle Söderström ◽  
Merle Horne ◽  
Mikael Roll

Swedish native speakers (NSs) unconsciously use tones realized on word stems to predict upcoming suffixes during speech comprehension. The present response time study investigated whether relatively proficient second language (L2) learners of Swedish have acquired the underlying association between tones and suffixes without explicit instruction, internalizing a feature that is specific to their L2. Learners listened to sentences in which the tone on the verb stem either validly or invalidly cued the following present or past tense inflection. Invalidly cued suffixes led to increased decision latencies in a verb tense identification task, suggesting that learners pre-activated suffixes associated with stem tones in a manner similar to NSs. Thus, L2 learners seemed to have acquired the tone-suffix connections through implicit mechanisms. Correctly cued suffixes were associated with a smaller processing advantage in the L2 group relative to NSs performing the same task; nevertheless, results suggest a tendency for increasingly native-like tone processing with cumulative language experience. The way suffix type affected response times also indicates exposure-related effects.


2020 ◽  
Author(s):  
Briony Banks ◽  
Emma Gowen ◽  
Kevin Munro ◽  
Patti Adank

Visual cues from a speaker’s face may improve perceptual adaptation to degraded speech over time, but current evidence is limited. We aimed to replicate results from previous studies and extend them to more demanding speech stimuli (sentences), to better represent real-life, challenging speech comprehension. In addition, we investigated whether particular eye gaze patterns towards the speaker’s mouth were related to adaptation, hypothesising that listeners who looked more at the speaker’s mouth would show greater adaptation. A group of listeners were presented with noise-vocoded sentences in audiovisual format, while a control group heard the audio signal alone, presented congruently with a still image of the speaker’s face. Results of previous adaptation studies were partially replicated: the audiovisual group had better recognition throughout and adapted slightly more rapidly, but both groups showed an equal amount of improvement overall (after exposure to 90 sentences). Longer fixations on the speaker’s mouth in the audiovisual group were related to better overall accuracy, although evidence for this relationship was relatively weak. An exploratory analysis further showed that the duration of fixations to the speaker’s mouth decreased over time. The results suggest that the benefits of visual cues for adaptation to unfamiliar speech vary more than previously thought. Longer fixations on a speaker’s mouth may play a role in successfully decoding these cues, but more evidence is needed to fully establish how patterns of eye gaze are related to audiovisual speech recognition.


2020 ◽  
Vol 10 (8) ◽  
pp. 550
Author(s):  
Natsuki Atagi ◽  
Scott P. Johnson

Early social-linguistic experience influences infants’ attention to faces, but little is known about how infants attend to the faces of speakers engaging in conversation. Here, we examined how monolingual and bilingual infants attended to speakers during a conversation, and we tested for the possibility that infants’ visual attention may be modulated by familiarity with the language being spoken. We recorded eye movements in monolingual and bilingual 15- to 24-month-olds as they watched video clips of speakers using infant-directed speech while conversing in a familiar or unfamiliar language, both with each other and with the infant. Overall, findings suggest that bilingual infants shift visual attention to a speaker prior to speech onset more when an unfamiliar, rather than a familiar, language is being spoken. However, this same effect was not found for monolingual infants. Thus, infants’ familiarity with the language being spoken, and perhaps their language experiences, may modulate infants’ visual attention to speakers.

