Emotional Prosody
Recently Published Documents

TOTAL DOCUMENTS: 284 (five years: 23)
H-INDEX: 36 (five years: 0)

Emotion ◽  
2022 ◽  
Author(s):  
Weiyi Ma ◽  
Peng Zhou ◽  
William Forde Thompson

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0261354
Author(s):  
Mattias Ekberg ◽  
Josefine Andin ◽  
Stefan Stenfelt ◽  
Örjan Dahlström

Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not through non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with mild-to-moderate sensorineural hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies that concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition have also found that the use of hearing aids does not improve recognition accuracy in this group. We therefore aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which others, and at what rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together, these analyses will provide clues to the effects of amplification on the perception of different emotions.
For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
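The planned confusion analysis — which emotions are mistaken for which others, and at what rates — amounts to a row-normalised confusion matrix over forced-choice responses. A minimal sketch, with invented emotion labels and trial data rather than the study's materials:

```python
# Hypothetical sketch: row-normalised confusion rates for a forced-choice
# vocal emotion recognition task. Labels and trials are invented examples.
from collections import Counter

EMOTIONS = ["anger", "fear", "happiness", "sadness"]

def confusion_rates(trials):
    """trials: list of (presented_emotion, chosen_emotion) pairs.
    Returns {presented: {chosen: proportion}}; each row sums to 1
    (or is all zeros if that emotion was never presented)."""
    counts = {e: Counter() for e in EMOTIONS}
    for presented, chosen in trials:
        counts[presented][chosen] += 1
    rates = {}
    for presented, c in counts.items():
        total = sum(c.values())
        rates[presented] = {e: (c[e] / total if total else 0.0)
                            for e in EMOTIONS}
    return rates

# Toy data: sadness is mistaken for fear on half of its trials.
trials = [("anger", "anger"), ("anger", "anger"),
          ("sadness", "fear"), ("sadness", "sadness")]
rates = confusion_rates(trials)
```

Comparing such matrices between amplified and non-amplified listening would show not just whether accuracy changes, but which specific confusions amplification introduces or resolves.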


2022 ◽  
Author(s):  
Manuela Filippa ◽  
Doris Lima ◽  
Alicia Grandjean ◽  
Carolina Labbé ◽  
Selim Coll ◽  
...  

Abstract
Background: Emotional prosody is the result of the dynamic variation of non-verbal acoustic aspects of language that allows people to convey and recognize emotions. Understanding how this recognition develops from childhood to adolescence is the goal of the present paper. We also aim to test the maturation of the ability to perceive mixed emotions in the voice.
Methods: We tested 133 children and adolescents, aged 6 to 17 years, with linguistically meaningless stimuli expressing four emotions (anger, fear, happiness, and sadness) or a neutral tone. Participants were asked to judge the type and degree of perceived emotion on continuous scales.
Results: A general linear mixed model analysis revealed, as predicted, a significant interaction between age and emotion. The ability to recognize emotions increased significantly with age for all emotional and neutral vocalizations. Girls recognized anger better than boys, who instead confused fear with neutral prosody more than girls did. Across all ages, only marginally significant differences were found between anger, happiness, and neutrality versus sadness, which was more difficult to recognize. Finally, as age increased, participants were significantly more likely to attribute mixed emotions to emotional prosody, showing a progressively more complex representation of the emotional content perceived in emotional prosody.
Conclusions: The ability to identify basic emotions from linguistically meaningless stimuli develops from childhood to adolescence. Interestingly, this maturation was evidenced not only in the accuracy of emotion detection, but also in a more complex attribution of emotion in prosody.
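The continuous-scale rating scheme described above allows a simple index of mixed-emotion attribution: how many emotion scales receive a substantial rating for a single stimulus. A minimal sketch, where the emotion labels, threshold, and example ratings are invented for illustration and not taken from the study:

```python
# Hypothetical sketch of a mixed-emotion attribution index for
# continuous-scale ratings. All values below are invented examples.
def mixedness(ratings, threshold=0.2):
    """ratings: dict mapping emotion -> rating in [0, 1].
    Returns the number of emotions rated above the threshold."""
    return sum(1 for r in ratings.values() if r > threshold)

# A younger child might attribute a single dominant emotion ...
child = {"anger": 0.9, "fear": 0.1, "happiness": 0.0, "sadness": 0.05}
# ... while an adolescent perceives a blend (anger tinged with fear).
adolescent = {"anger": 0.7, "fear": 0.5, "happiness": 0.0, "sadness": 0.1}
```

An age-related increase in such an index across participants would reflect the progressive complexification of emotion attribution the abstract reports.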


2021 ◽  
Author(s):  
Daniel Jenkins

<p>Multisensory integration describes the cognitive processes by which information from various perceptual domains is combined to create coherent percepts. For consciously aware perception, multisensory integration can be inferred when information in one perceptual domain influences subjective experience in another. Yet the relationship between integration and awareness is not well understood. One current question is whether multisensory integration can occur in the absence of perceptual awareness. Because there is no subjective experience of unconscious perception, researchers have had to develop novel tasks to infer integration indirectly. For instance, Palmer and Ramsey (2012) presented auditory recordings of spoken syllables alongside videos of faces speaking either the same or different syllables, while masking the videos to prevent visual awareness. The conjunction of matching voices and faces predicted the location of a subsequent Gabor grating (target) on each trial. Participants indicated the location/orientation of the target more accurately when it appeared in the cued location (valid on 80% of trials); thus, the authors inferred that auditory and visual speech events were integrated in the absence of visual awareness. In this thesis, I investigated whether these findings generalise to the integration of auditory and visual expressions of emotion. In Experiment 1, I presented spatially informative cues in which congruent facial and vocal emotional expressions predicted the target location, with and without visual masking. I found no evidence of spatial cueing in either awareness condition. To investigate the lack of spatial cueing, in Experiment 2 I repeated the task with aware participants only, and had half of those participants explicitly report the emotional prosody. A significant spatial-cueing effect was found only when participants reported the emotional prosody, suggesting that audiovisual congruence can cue spatial attention during aware perception. It remains unclear whether audiovisual congruence can cue spatial attention without awareness, and whether such effects genuinely imply multisensory integration.</p>




Data ◽  
2021 ◽  
Vol 6 (12) ◽  
pp. 130
Author(s):  
Mathilde Marie Duville ◽  
Luz María Alonso-Valerdi ◽  
David I. Ibarra-Zarate

This paper describes the Mexican Emotional Speech Database (MESD), which contains single-word emotional utterances for anger, disgust, fear, happiness, neutrality and sadness in adult (male and female) and child voices. To validate the emotional prosody of the uttered words, a cubic-kernel Support Vector Machine classifier was trained on prosodic, spectral and voice-quality features for each case study: (1) male adult, (2) female adult and (3) child. In addition, the cultural, semantic, and linguistic shaping of emotional expression was assessed by statistical analysis. This study was registered at BioMed Central and is part of the implementation of a published study protocol. Mean emotion classification accuracies were 93.3%, 89.4% and 83.3% for male, female and child utterances, respectively. Statistical analysis emphasized the shaping of emotional prosody by semantic and linguistic features. A cultural variation in emotional expression was highlighted by comparing the MESD with the INTERFACE database for Castilian Spanish. The MESD provides reliable content for linguistic emotional prosody shaped by the Mexican cultural environment. To facilitate further investigation, two subsets are also provided: a corpus controlled for linguistic features and emotional semantics, and one containing words repeated across voices and emotions. The MESD is made freely available.
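The validation setup — a cubic (degree-3 polynomial kernel) SVM classifying emotions from acoustic feature vectors — can be sketched with scikit-learn. The synthetic features below are stand-ins, not the MESD recordings or the study's prosodic, spectral and voice-quality features:

```python
# Sketch of cubic-SVM emotion classification on acoustic feature vectors.
# The features here are synthetic placeholders for illustration only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
emotions = ["anger", "disgust", "fear", "happiness", "neutral", "sadness"]

# 20 fake utterances per emotion, 10 "acoustic" features each; each class
# is shifted so the toy problem is clearly separable.
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(20, 10))
               for i in range(len(emotions))])
y = np.repeat(emotions, 20)

# Cubic kernel: polynomial kernel of degree 3, after feature standardisation.
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
clf.fit(X, y)
acc = clf.score(X, y)  # training accuracy on the toy data
```

In practice one would report cross-validated accuracy per case study (male adult, female adult, child), as the paper does, rather than training accuracy.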


PsyCh Journal ◽  
2021 ◽  
Author(s):  
Nali Deng ◽  
Yifan Sun ◽  
Xuhai Chen ◽  
Weijun Li

2021 ◽  
Vol 39 (2) ◽  
pp. 103-117
Author(s):  
Laurène Léard-Schneider ◽  
Yohana Lévêque

The present study aimed to examine the perception of music and prosody in patients who had sustained a severe traumatic brain injury (TBI). Our second objective was to describe the association between music and prosody impairments in individual clinical presentations. Thirty-six patients who were past the acute phase completed a set of music and prosody tests: two subtests of the Montreal Battery of Evaluation of Amusia assessing melody (scale) and rhythm perception, respectively; two subtests of the Montreal Evaluation of Communication assessing prosody understanding in sentences; and two further tests assessing prosody understanding in vowels. Forty-two percent of the patients were impaired on the melodic test, 51% on the rhythmic test, and 71% on at least one of the four prosody tests. The amusic patients performed significantly worse than non-amusic patients on the four prosody tests. This descriptive study shows for the first time the high prevalence of music deficits after severe TBI. It also suggests associations between prosody and music impairments, as well as between linguistic and emotional prosody impairments. The causes of these impairments remain to be explored.


2021 ◽  
Vol 8 (11) ◽  
Author(s):  
Leonor Neves ◽  
Marta Martins ◽  
Ana Isabel Correia ◽  
São Luís Castro ◽  
César F. Lima

The human voice is a primary channel for emotional communication. It is often presumed that being able to recognize vocal emotions is important for everyday socio-emotional functioning, but evidence for this assumption remains scarce. Here, we examined relationships between vocal emotion recognition and socio-emotional adjustment in children. The sample included 141 6- to 8-year-old children, and the emotion tasks required them to categorize five emotions (anger, disgust, fear, happiness, sadness) plus neutrality, as conveyed by two types of vocal emotional cues: speech prosody and non-verbal vocalizations such as laughter. Socio-emotional adjustment was evaluated by the children's teachers using a multidimensional questionnaire of self-regulation and social behaviour. Based on frequentist and Bayesian analyses, we found that, for speech prosody, higher emotion recognition related to better general socio-emotional adjustment. This association remained significant even when the children's cognitive ability, age, sex and parental education were held constant. Follow-up analyses indicated that higher emotional prosody recognition was more robustly related to the socio-emotional dimensions of prosocial behaviour and cognitive and behavioural self-regulation. For emotion recognition in non-verbal vocalizations, no associations with socio-emotional adjustment were found. A similar null result was obtained for an additional task focused on facial emotion recognition. Overall, these results support the close link between children's emotional prosody recognition skills and their everyday social behaviour.

