Emotion word processing: Evidence from eye movements

2006 ◽  
Author(s):  
Graham G. Scott ◽  
Patrick J. O'Donnell ◽  
Sara C. Sereno

2021 ◽  
Vol 11 (5) ◽  
pp. 553
Author(s):  
Chenggang Wu ◽  
Juan Zhang ◽  
Zhen Yuan

To explore the affective priming effect of emotion-label words and emotion-laden words, the current study used unmasked (Experiment 1) and masked (Experiment 2) priming paradigms, with emotion-label words (e.g., sadness, anger) and emotion-laden words (e.g., death, gift) as primes, and examined how the two kinds of words acted upon the processing of the target words (all emotion-laden words). Participants were instructed to judge the valence of the target words while their electroencephalogram was recorded. The behavioral and event-related potential (ERP) results showed that positive words produced a priming effect, whereas negative words inhibited target word processing (Experiment 1). In Experiment 2, the inhibitory effect of negative emotion-label words on emotion word recognition was found in both behavioral and ERP results, suggesting that the modulation of emotion word type on emotion word processing can be observed even in a masked priming paradigm. The two experiments further support the necessity of defining emotion words from an emotion word type perspective. The implications of these findings are discussed: specifically, a clear understanding of emotion-label words and emotion-laden words can improve the effectiveness of emotional communication in clinical settings. Theoretically, the emotion word type perspective awaits further exploration and is still in its infancy.


2015 ◽  
Vol 6 ◽  
Author(s):  
Sara C. Sereno ◽  
Graham G. Scott ◽  
Bo Yao ◽  
Elske J. Thaden ◽  
Patrick J. O'Donnell

2020 ◽  
Vol 11 ◽  
Author(s):  
Kimihiro Nakamura ◽  
Tomoe Inomata ◽  
Akira Uno

2011 ◽  
Vol 32 (3) ◽  
pp. 533-551 ◽  
Author(s):  
TUOMO HÄIKIÖ ◽  
RAYMOND BERTRAM ◽  
JUKKA HYÖNÄ

Abstract: The role of morphology in reading development was examined by measuring participants' eye movements while they read sentences containing either a hyphenated (e.g., ulko-ovi "front door") or concatenated (e.g., autopeli "racing game") compound. The participants were Finnish second, fourth, and sixth graders (aged 8, 10, and 12 years, respectively). Fast second graders and all fourth and sixth graders read concatenated compounds faster than hyphenated compounds. This suggests that they resort to slower morpheme-based processing for hyphenated compounds but prefer to process concatenated compounds via whole-word representations. In contrast, slow second graders' fixation durations were shorter for hyphenated than for concatenated compounds. This implies that they process all compounds via constituent morphemes and that hyphenation aids this process.


2020 ◽  
Vol 63 (3) ◽  
pp. 896-912
Author(s):  
Yi Lin ◽  
Hongwei Ding ◽  
Yang Zhang

Purpose: Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies of linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in a cross-channel auditory-only task (i.e., a semantics–prosody Stroop task) and a cross-modal audiovisual task (i.e., a semantics–prosody–face Stroop task).

Method: Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated the auditory pairing of emotional prosody (happy or sad) and lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by introducing visual facial expressions during auditory stimulus presentation. Participants were asked to judge the emotional information in each test trial according to selective-attention instructions.

Results: Accuracy and reaction time data indicated that, despite the increase in cognitive demand and task complexity in Experiment 2, prosody was consistently more salient than semantic content for emotion word processing but did not take precedence over facial expression. While congruent stimuli enhanced performance in both experiments, the facilitatory effect was smaller in Experiment 2.

Conclusion: Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and a congruence facilitation effect in multisensory integration. Our study contributes tonal-language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploring the brain mechanisms of cross-channel/modal emotion integration, with potential clinical applications.
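The congruence facilitation effect described in this abstract can be sketched as a simple trial-level computation: congruent trials should yield faster and more accurate responses than incongruent ones. This is a minimal illustration, not the authors' analysis code; the field names ("condition", "rt_ms", "correct") and the sample data are assumptions.

```python
def congruence_effect(trials):
    """Return (rt_facilitation_ms, accuracy_gain) for congruent vs. incongruent trials."""
    def summarize(cond):
        subset = [t for t in trials if t["condition"] == cond]
        correct = [t for t in subset if t["correct"]]
        # Mean reaction time over correct trials only, plus raw accuracy.
        mean_rt = sum(t["rt_ms"] for t in correct) / len(correct)
        return mean_rt, len(correct) / len(subset)

    rt_con, acc_con = summarize("congruent")
    rt_inc, acc_inc = summarize("incongruent")
    # Facilitation: congruent trials are faster (positive RT difference)
    # and more accurate (positive accuracy difference).
    return rt_inc - rt_con, acc_con - acc_inc

# Hypothetical trials for one participant (illustrative values only).
trials = [
    {"condition": "congruent", "rt_ms": 620.0, "correct": True},
    {"condition": "congruent", "rt_ms": 640.0, "correct": True},
    {"condition": "incongruent", "rt_ms": 710.0, "correct": True},
    {"condition": "incongruent", "rt_ms": 730.0, "correct": False},
]
rt_fac, acc_gain = congruence_effect(trials)
```

The abstract's key comparison would then be whether `rt_fac` and `acc_gain` shrink from Experiment 1 to Experiment 2.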


2015 ◽  
Vol 27 (02) ◽  
pp. 1550016 ◽  
Author(s):  
Anwesha Banerjee ◽  
Shreyasi Datta ◽  
Monalisa Pal ◽  
D. N. Tibarewala ◽  
Amit Konar

Dyslexia is a well-known reading disorder that involves difficulty in fluent reading and in decoding and processing words despite adequate intelligence. The reading speed of dyslexic patients is commonly lower than that of their typically reading counterparts because of slow letter and word processing. Eye movements in dyslexic patients differ significantly from those of normal individuals, with more frequent fixations and prolonged stares. This work proposes a human–computer interactive system that helps individuals with low reading speed increase it through the analysis of eye movements. Eye movement data for different reading speeds are recorded using a laboratory-developed electrooculogram (EOG) acquisition system. From these data, Adaptive Autoregressive (AAR) parameters, band power estimates, and wavelet coefficients are extracted as signal features. Reading speeds are classified using several pattern classifiers; the best average accuracy, 94.67% over all classes and participants, is obtained using a Radial Basis Function (RBF) Support Vector Machine (SVM) tree classifier with AAR parameters as features. A Friedman test is performed to select the best classifier. The trained classifier is then used to recognize the reading speeds of a new set of normal individuals. If an individual's reading speed falls below a preset threshold, that individual is trained repeatedly for 10 days to improve it. An improvement in reading speed is observed as a decrease in the misclassification rate from 45.1% to 9.92% over 10 days for the fastest speed (1 sentence/2 s) across all subjects. This work was carried out on healthy individuals; however, the results suggest that the proposed system may also be used to train and assist children with dyslexia or similar reading disabilities.
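The pipeline this abstract describes (autoregressive features extracted from an eye-movement signal, then a supervised classifier over reading-speed classes) can be sketched compactly. This is a simplified illustration, not the paper's implementation: a fixed-order AR(2) least-squares fit stands in for the adaptive AAR estimation, a nearest-centroid rule stands in for the RBF SVM tree, and the "EOG" traces are synthetic sinusoids.

```python
import math

def ar2_coefficients(signal):
    """Least-squares AR(2) fit: x[t] ~ a1*x[t-1] + a2*x[t-2].
    A fixed-order stand-in for the paper's AAR parameters."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(signal)):
        x1, x2, y = signal[t - 1], signal[t - 2], signal[t]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        b1 += x1 * y;   b2 += x2 * y
    # Solve the 2x2 normal equations directly.
    det = s11 * s22 - s12 * s12
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

def classify(centroids, features):
    """Nearest-centroid decision in feature space (stand-in for the RBF SVM tree)."""
    return min(centroids, key=lambda lbl: sum(
        (f - c) ** 2 for f, c in zip(features, centroids[lbl])))

# Synthetic "EOG" traces: slower oscillations mimic a slower reading speed.
slow = [math.sin(0.1 * t) for t in range(200)]
fast = [math.sin(0.5 * t) for t in range(200)]
centroids = {"slow": ar2_coefficients(slow), "fast": ar2_coefficients(fast)}

# An unseen trace at the fast speed (same frequency, different phase).
probe = [math.sin(0.5 * t + 0.3) for t in range(200)]
label = classify(centroids, ar2_coefficients(probe))
```

The AR coefficients capture the oscillation frequency of the trace (for a pure sinusoid, a1 ≈ 2·cos(ω) and a2 ≈ −1), so traces recorded at different reading speeds separate cleanly in this feature space.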


Author(s):  
Max R. Freeman ◽  
Viorica Marian

Abstract: A bilingual's language system is highly interactive. When hearing a second language (L2), bilinguals access native-language (L1) words that share sounds across languages. In the present study, we examine whether input modality and L2 proficiency moderate the extent to which bilinguals activate L1 phonotactic constraints (i.e., rules for combining speech sounds) during L2 processing. Eye movements of English monolinguals and Spanish–English bilinguals were tracked as they searched for a target English word in a visual display. On critical trials, displays included a target that conflicted with the Spanish vowel-onset rule (e.g., spa), as well as a competitor containing the potentially activated "e" onset (e.g., egg). The rule violation was processed either in the visual modality (Experiment 1) or audio-visually (Experiment 2). In both experiments, bilinguals with lower L2 proficiency made more eye movements to competitors than to fillers. The findings suggest that bilinguals with lower L2 proficiency access L1 phonotactic constraints during L2 visual word processing with and without auditory input of the constraint-conflicting structure (e.g., spa). We conclude that the interactivity between a bilingual's two languages is not limited to words that share form across languages but also extends to sublexical, rule-based structures.
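The key dependent measure in this paradigm is whether participants fixate the competitor object more than filler objects. A minimal sketch of that comparison, with hypothetical fixation records (the "role" field and the sample trial are illustrative assumptions, not the authors' data format):

```python
def fixation_proportions(fixations, roles=("competitor", "filler")):
    """Proportion of fixations landing on each display role."""
    total = len(fixations)
    return {role: sum(1 for f in fixations if f["role"] == role) / total
            for role in roles}

# One hypothetical critical trial: each record is a single fixation.
trial = [
    {"role": "target"}, {"role": "competitor"}, {"role": "competitor"},
    {"role": "filler"}, {"role": "target"},
]
props = fixation_proportions(trial)
# A positive bias indicates looks drawn to the L1-activated competitor.
competitor_bias = props["competitor"] - props["filler"]
```

Aggregated over trials and participants, a reliably positive `competitor_bias` on critical trials is the signature of L1 phonotactic activation the study reports for lower-proficiency bilinguals.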

