Lexical bias and prosodic cues: An eye-tracking study of compound/phrase disambiguation

2013 ◽  
Vol 133 (5) ◽  
pp. 3568-3568 ◽  
Author(s):  
Jessica L. Gamache


2020 ◽
pp. 026765832096315
Author(s):  
Nick Henry ◽  
Carrie N. Jackson ◽  
Holger Hopp

This article explores how multiple linguistic cues interact in predictive processing among second language (L2) learners. In a visual-world eye-tracking experiment, we investigated whether learners of German use case and prosody cues together to assign thematic roles and predict post-verbal arguments. During the experiment, participants listened to subject-first and object-first sentences that contained (1) case cues only, or (2) both case and prosody. The results showed that the learners successfully predicted post-verbal arguments on the basis of lexical-semantic information but were less successful in using case cues. However, prediction success increased when both case and prosody were present, suggesting that predictive processing is supported by prosodic cues. Additionally, results show that higher proficiency was associated with faster processing and a greater ability to generate predictions. We conclude that the presence of cue coalitions allows L2 learners to process information more efficiently, and that the L2 processor can exploit the additive use of cues for prediction.


Probus ◽  
2016 ◽  
Vol 28 (1) ◽  
Author(s):  
Meghan E. Armstrong ◽  
Llorenç Andreu ◽  
Núria Esteve-Gibert ◽  
Pilar Prieto

This research explores children’s ability to integrate contextual and linguistic cues. Prior work has shown that children do not weigh contextual information in an adult-like way and that between the ages of 4 and 6 they have difficulty revising a hypothesis formed on the basis of early-arriving linguistic information during sentence processing. We therefore examined children’s ability to confirm or override a context-based hypothesis using linguistic information. Our objective in this study was to test (1) the ability of children aged 4–6 to form a hypothesis based on contextual information, (2) their ability to override such a hypothesis based on linguistic information, and (3) how children use different types of linguistic cues (morphosyntactic versus prosodic) to confirm or override the initial hypothesis. Results from both offline (pointing) and online (eye-tracking) tasks suggest that children in this age group do form hypotheses based on contextual information. Age effects emerged in children’s ability to override these hypotheses: 4-year-olds were unable to override their hypotheses using the linguistic information of interest, whereas for 5- and 6-year-olds success depended on the type of linguistic cue available. Children were better at using morphosyntactic cues to override an initial hypothesis than at using prosodic cues to do so. Our results suggest that children slowly develop the ability to override hypotheses based on early-arriving information, even when that information is extralinguistic and contextual, and that they must learn to weight different types of cues in an adult-like way. This developmental period of learning to prioritize cues is consistent with a constraint-based model of learning.


2019 ◽  
Vol 40 (1) ◽  
pp. 41-63
Author(s):  
Peng Zhou ◽  
Weiyi Ma ◽  
Likan Zhan

The present study investigated whether Mandarin-speaking preschool children with autism spectrum disorder (ASD) were able to use prosodic cues to understand others’ communicative intentions. Using the visual-world eye-tracking paradigm, the study found that, unlike typically developing (TD) 4-year-olds, both 4-year-olds and 5-year-olds with ASD exhibited an eye gaze pattern reflecting an inability to use prosodic cues to infer the intended meaning of the speaker. Their performance was relatively independent of their verbal IQ and mean length of utterance. In addition, the findings show no development in this ability between 4 and 5 years of age. These results indicate that Mandarin-speaking preschool children with ASD exhibit a deficit in using prosodic cues to understand the communicative intentions of the speaker, and that this ability might be inherently impaired in ASD.


2021 ◽  
pp. 1-23
Author(s):  
Cristina Lozano-Argüelles ◽  
Nuria Sagarra

Prediction underlies many of life’s situations, including language. Monolinguals and advanced L2 learners use prosodic cues such as stress and tone in a word’s first syllable to predict the word’s suffix. To determine whether the same findings extend to words with non-morphological endings, we investigated whether Spanish monolinguals and advanced learners of Spanish with and without interpreting experience use stress (stressed, unstressed) and syllabic structure (CV, CVC) in a word’s initial syllable to predict its ending. This is crucial for understanding whether the associations underlying prediction are morphophonolexical or purely phonolexical. Interpreters were included because of their extensive experience predicting incoming speech. Participants completed an eye-tracking study in which they listened to a sentence while seeing two words and selected the word they heard. Results revealed that monolinguals and interpreters predicted word endings under all conditions, whereas non-interpreters predicted only in the CVC oxytone condition. These findings are relevant for (1) prediction accounts, showing that phonolexical associations trigger prediction; (2) phonological models, revealing that stress and syllable information in the initial syllable are key for accessing and predicting meaning; and (3) L2 processing models, indicating that L2 learners with interpreting experience use suprasegmental information to access and predict lexical items similarly to monolinguals.


2020 ◽  
Vol 63 (7) ◽  
pp. 2245-2254 ◽  
Author(s):  
Jianrong Wang ◽  
Yumeng Zhu ◽  
Yu Chen ◽  
Abdilbar Mamat ◽  
Mei Yu ◽  
...  

Purpose The primary purpose of this study was to explore the audiovisual speech perception strategies adopted by normal-hearing and deaf people in processing familiar and unfamiliar languages. Our primary hypothesis was that the two groups would adopt different perception strategies owing to different sensory experiences at an early age, physical device limitations, and developmental gaps in language, among other factors. Method Thirty normal-hearing adults and 33 prelingually deaf adults participated in the study. They performed judgment and listening tasks while watching videos of a Uygur–Mandarin bilingual speaker in a familiar language (Standard Chinese) or an unfamiliar language (Modern Uygur), and their eye movements were recorded by eye-tracking technology. Results Task had only a slight influence on the distribution of selective attention, whereas group and language had significant influences. Specifically, the normal-hearing participants mainly gazed at the speaker’s eyes and the deaf participants mainly at the speaker’s mouth; moreover, while the normal-hearing participants stared longer at the speaker’s mouth when confronted with the unfamiliar language (Modern Uygur), the deaf participants did not change their attention allocation pattern between the two languages. Conclusions Normal-hearing and deaf adults adopt different audiovisual speech perception strategies: Normal-hearing adults mainly look at the eyes, and deaf adults mainly look at the mouth. Additionally, language and task can modulate the speech perception strategy.


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication is well established, yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment, we first explored the processing of unimodally presented facial expressions. We then presented auditory (prosodic and/or lexical-semantic) information together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A direct relationship was found between P200 and N300 amplitude and the number of information channels present: the multimodal condition elicited the smallest amplitudes in the P200 and N300 components, followed by larger amplitudes in the bimodal condition, with the largest amplitudes observed in the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The multimodal advantage reflected in the P200 and N300 components may thus constitute one of the mechanisms that allow for fast and accurate information processing in human communication.


Author(s):  
Pirita Pyykkönen ◽  
Juhani Järvikivi

A visual-world eye-tracking study investigated the activation and persistence of implicit causality information in spoken language comprehension. We showed that people infer the implicit causality of verbs as soon as they encounter such verbs in discourse, as predicted by proponents of the immediate focusing account (Greene & McKoon, 1995; Koornneef & Van Berkum, 2006; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). Interestingly, we observed activation of implicit causality information even before people encountered the causal conjunction. However, while implicit causality information persisted as the discourse unfolded, it did not have a privileged role as a focusing cue immediately at the ambiguous pronoun when people were resolving its antecedent. Instead, our study indicated that implicit causality does not affect all referents to the same extent; rather, it interacts with other cues in the discourse, especially when one of the referents is already prominently in focus.

