Role of spectral and temporal cues in restoring missing speech information

2010 ◽  
Vol 128 (5) ◽  
pp. EL294-EL299 ◽  
Author(s):  
Gaëtan Gilbert ◽  
Christian Lorenzi

2014 ◽  
Vol 76 (8) ◽  
pp. 2212-2220 ◽  
Author(s):  
Troy A. W. Visser ◽  
Matthew F. Tang ◽  
David R. Badcock ◽  
James T. Enns

2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Leah Banellis ◽  
Damian Cruse

Abstract: Several theories propose that emotions and self-awareness arise from the integration of internal and external signals and their respective precision-weighted expectations. Supporting these mechanisms, research indicates that the brain uses temporal cues from cardiac signals to predict auditory stimuli and that these predictions and their prediction errors can be observed in the scalp heartbeat-evoked potential (HEP). We investigated the effect of precision modulations on these cross-modal predictive mechanisms, via attention and interoceptive ability. We presented auditory sequences at short (perceived synchronous) or long (perceived asynchronous) cardio-audio delays, with half of the trials including an omission. Participants attended to the cardio-audio synchronicity of the tones (internal attention) or to the auditory stimuli alone (external attention). Comparing HEPs during omissions allowed for the observation of pure predictive signals, without contaminating auditory input. We observed an early effect of cardio-audio delay, reflecting a difference in heartbeat-driven expectations. We also observed a larger positivity to the omissions of sounds perceived as synchronous than to the omissions of sounds perceived as asynchronous, but only when attending internally, consistent with the role of attentional precision in enhancing predictions. These results provide support for attentionally modulated cross-modal predictive coding and suggest a potential tool for investigating its role in emotion and self-awareness.


1996 ◽  
Vol 19 ◽  
pp. 515 ◽  
Author(s):  
Derek Houston ◽  
Peter Jusczyk ◽  
Ann Marie Jusczyk

2004 ◽  
Vol 69 (1-2) ◽  
pp. 147-156 ◽  
Author(s):  
Rüdiger Flach ◽  
Günther Knoblich ◽  
Wolfgang Prinz

2019 ◽  
Vol 9 (3) ◽  
pp. 53 ◽  
Author(s):  
Mark Reybrouck ◽  
Piotr Podlipniak

This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory-rich stimuli, at the levels of both production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener’s attention to the auditory material itself. Sound, therefore, can be considered as the meeting point between speech and music, and the question can be raised as to the shared components between the interpretation of sound in the domains of speech and music. In order to answer these questions, this paper elaborates on the following topics: (i) the relationship between speech and music, with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect bursts in communicative sound comprehension; and (v) the acoustic features of affective sound, with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.


2002 ◽  
Vol 16 (8) ◽  
pp. 947-961 ◽  
Author(s):  
Lara Pelizzon ◽  
Maria A. Brandimonte ◽  
Riccardo Luccio

2013 ◽  
Vol 56 (5) ◽  
pp. 1402-1408 ◽  
Author(s):  
Daniel Fogerty

Purpose: Temporal interruption limits the perception of speech to isolated temporal glimpses. An analysis was conducted to determine the acoustic parameter that best predicts speech recognition from temporal fragments that preserve different types of speech information—namely, consonants and vowels. Method: Young listeners with normal hearing previously completed word and sentence recognition tasks that required them to repeat word and sentence material that was temporally interrupted. Interruptions were designed to replace various portions of consonants or vowels with low-level noise. Acoustic analysis of preserved consonant and vowel segments was conducted to investigate the role of the preserved temporal envelope, voicing, and speech duration in predicting performance. Results: Results demonstrate that the temporal envelope, predominantly from vowels, is most important for sentence recognition and largely predicts results across consonant and vowel conditions. In contrast, for isolated words the proportion of speech preserved was the best predictor of performance, regardless of whether glimpses were from consonants or vowels. Conclusion: These findings suggest consideration of the vowel temporal envelope in speech transmission and amplification technologies for improving the intelligibility of temporally interrupted sentences.


2008 ◽  
Vol 124 (5) ◽  
pp. 3249-3260 ◽  
Author(s):  
Sandra Gordon-Salant ◽  
Grace Yeni-Komshian ◽  
Peter Fitzgibbons
