Automatic and feature-specific (anticipatory) prediction-related neural activity in the human auditory system

2018 ◽  
Author(s):  
Gianpaolo Demarchi ◽  
Gaëtan Sanchez ◽  
Nathan Weisz

Abstract
Prior experience shapes sensory perception by enabling the formation of expectations with regard to upcoming sensory events. Especially in the visual modality, an increasing number of studies show that prediction-related neural signals carry feature-specific information about the stimulus. This is less established in the auditory modality, in particular in the absence of bottom-up signals driving neural activity. We studied whether auditory predictions are sharply enough tuned to carry tonotopically specific information. For this purpose, we conducted a magnetoencephalography (MEG) experiment in which participants passively listened to sound sequences that varied in their regularity (i.e. entropy). Sound presentations were temporally predictable (3 Hz rate) but were occasionally omitted. Training classifiers on the random (high-entropy) sound sequence and applying them to all conditions in a time-generalized manner allowed us to assess whether and how carrier-frequency-specific information in the MEG signal is modulated by entropy level. We show that, especially in an ordered (most predictable) sensory context, neural activity during the anticipatory and omission periods contains carrier-frequency-specific information. Overall, our results illustrate that prediction-related neural activity in the human auditory system can be tuned in a tonotopically specific manner.
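As a rough illustration of the paradigm's logic (not the authors' stimulus code), regularity-graded tone sequences of the kind described can be sketched by mixing a deterministic tone-cycling rule with uniform random draws, with occasional omissions that leave the 3 Hz slot silent. The carrier frequencies, the cycling rule, and the omission handling below are all illustrative assumptions:

```python
import random

def make_sequence(freqs, order_weight, length, omission_prob=0.0, seed=0):
    """Generate a tone sequence whose regularity is controlled by order_weight.

    order_weight = 0.0 -> fully random (high entropy): every tone equally likely.
    order_weight = 1.0 -> fully ordered (low entropy): tones cycle deterministically.
    Tones are occasionally replaced by None to mark omission trials.
    """
    rng = random.Random(seed)
    n = len(freqs)
    idx = [rng.randrange(n)]
    for _ in range(length - 1):
        if rng.random() < order_weight:
            idx.append((idx[-1] + 1) % n)    # ordered transition: next tone in cycle
        else:
            idx.append(rng.randrange(n))     # random transition: uniform draw
    tones = [freqs[i] for i in idx]
    # Occasionally omit a sound while keeping its temporal slot (None = silence).
    return [None if rng.random() < omission_prob else t for t in tones]

# Four carrier frequencies as in a typical tonotopy design (placeholder values).
carriers = [200, 431, 928, 2000]
ordered_seq = make_sequence(carriers, order_weight=1.0, length=12)
random_seq = make_sequence(carriers, order_weight=0.0, length=12, omission_prob=0.1)
```

Classifiers trained on the high-entropy (`order_weight=0.0`) sequence could then be applied to the other conditions to test for carrier-frequency information during anticipation and omission periods.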

2019 ◽  
Author(s):  
Kaitlin Fitzgerald ◽  
Ryszard Auksztulewicz ◽  
Alexander Provost ◽  
Bryan Paton ◽  
Zachary Howard ◽  
...  

Abstract
The nervous system is endowed with predictive capabilities, updating neural activity to reflect recent stimulus statistics in a manner that optimises processing of expected future states. This process has previously been formulated within a predictive coding framework, where sensory input is either "explained away" by accurate top-down predictions or, when predictions are inaccurate, elicits a salient prediction error that triggers an update to the existing prediction. However, exactly how the brain optimises predictive processes in the stochastic and multi-faceted real-world environment remains unclear. Auditory evoked potentials have proven a useful measure for monitoring unsupervised learning of patterning in sound sequences, through modulations of the mismatch negativity component, which is associated with "change detection" and widely used as a proxy for indexing learnt regularities. Here we used dynamic causal modelling to analyse scalp-recorded auditory evoked potentials collected during presentation of sound sequences consisting of multiple, nested regularities, extending previous observations of pattern learning that were restricted to the scalp level or based on single-outcome events. Patterns included the regular characteristics of the two tones presented, consistency in their relative probabilities as either common standard (p = .875) or rare deviant (p = .125), and the regular rate at which these tone probabilities alternated. Significant changes in connectivity, reflecting a drop in the precision of prediction errors based on learnt patterns, were observed at three points in the sound sequence, corresponding to the three hierarchical levels of nested regularities: (1) when an unexpected "deviant" sound was encountered; (2) when the probabilities of the two tonal states alternated; and (3) when the rate at which the tonal-state probabilities alternated itself changed.
These observations provide further evidence of simultaneous pattern learning over multiple timescales, reflected in changes in neural activity below the scalp.

Author summary
Our physical environment comprises regularities that give structure to our world. This consistency provides the basis for experiential learning, whereby we increasingly master our interactions with our surroundings based on prior experience. This type of learning also guides how we sense and perceive the world. The sensory system is known to reduce responses to regular and predictable patterns of input and to conserve neural resources for processing input that is new and unexpected. Temporal pattern learning is particularly important for auditory processing, in disentangling overlapping sound streams and deciphering the information value of sound. For example, understanding human language requires an exquisite sensitivity to the rhythm and tempo of speech sounds. Here we elucidate the sensitivity of the auditory system to concurrent temporal patterning during a sound sequence consisting of nested patterns over three timescales. We used dynamic causal modelling to demonstrate that the auditory system monitors short-, intermediate- and longer-timescale patterns in sound simultaneously. We also show that these timescales are each represented by distinct connections between different brain areas. These findings point to complex interactions between different areas of the brain as responsible for the ability to learn sophisticated patterns in sound, even without conscious attention.
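The nested-regularity structure described above (rare deviants at p = .125, standard/deviant roles alternating at a regular rate, and a slower change in that alternation rate) can be sketched as a toy sequence generator. Tone labels, block lengths, and seeds are illustrative assumptions, not the study's actual stimulus parameters:

```python
import random

def nested_oddball(n_tones, block_len, p_deviant=0.125, seed=0):
    """Oddball sequence with two nested regularities: within each block one tone
    is the frequent 'standard' (p = .875) and the other the rare 'deviant'
    (p = .125); which tone plays which role swaps every `block_len` tones."""
    rng = random.Random(seed)
    seq = []
    for i in range(n_tones):
        standard = ('A', 'B')[(i // block_len) % 2]   # roles alternate per block
        deviant = 'B' if standard == 'A' else 'A'
        seq.append(deviant if rng.random() < p_deviant else standard)
    return seq

# Third, slowest timescale: the role-swap rate itself changes between halves.
first_half = nested_oddball(512, block_len=64, seed=1)    # fast role alternation
second_half = nested_oddball(512, block_len=128, seed=2)  # slower role alternation
sequence = first_half + second_half
```

Each of the three transition types (a deviant tone, a role swap, a change in swap rate) corresponds to one level of the nested hierarchy at which the reported connectivity changes were observed.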


2019 ◽  
Author(s):  
Yamil Vidal ◽  
Perrine Brusini ◽  
Michela Bonfieni ◽  
Jacques Mehler ◽  
Tristan Bekinschtein

Abstract
As evidence increases that predictive processes play a role in a wide variety of cognitive domains, the brain as a predictive machine has become a central idea in neuroscience. In auditory processing, considerable progress has been made using variations of the oddball design, but most existing work is restricted to predictions based on physical features or conditional rules linking successive stimuli. To characterise the brain's capacity to predict abstract rules, we present here two experiments that use speech-like stimuli to overcome these limitations and avoid common confounds. Pseudowords were presented in isolation, intermixed with infrequent deviants that contained unexpected phoneme sequences. As hypothesized, the occurrence of unexpected phoneme sequences reliably elicited an early prediction error signal. These prediction error signals did not seem to be modulated by attentional manipulations induced by different task instructions, suggesting that the predictions are deployed even when the task at hand does not volitionally involve error detection. In contrast, the number of syllables congruent with a standard pseudoword presented before the point of deviance exerted a strong modulation: the prediction error's amplitude doubled when two congruent syllables were presented instead of one, despite local transitional probabilities being kept constant. This suggests that auditory predictions can be built by integrating information beyond the immediate past. In sum, the results presented here further contribute to our understanding of the predictive capabilities of the human auditory system when facing complex stimuli and abstract rules.

Significance Statement
The generation of predictions seems to be a prevalent brain computation. In the case of auditory processing, this information is intrinsically temporal.
The study of auditory predictions has been largely circumscribed to unexpected physical stimulus features or rules connecting consecutive stimuli. In contrast, our everyday experience suggests that the human auditory system is capable of more sophisticated predictions. This becomes evident in the case of speech processing, where abstract rules with long-range dependencies are universal. In this article, we present two electroencephalography experiments that use speech-like stimuli to explore the predictive capabilities of the human auditory system. The results presented here increase our understanding of the auditory system's ability to implement predictions using information beyond the immediate past.


2009 ◽  
Vol 26 (4) ◽  
pp. 377-386 ◽  
Author(s):  
Olivia Ladinig ◽  
Henkjan Honing ◽  
Gábor Háden ◽  
István Winkler

Beat and meter induction are considered important structuring mechanisms underlying the perception of rhythm. Meter comprises two or more levels of hierarchically ordered regular beats with different periodicities. When listening to music, adult listeners weight events within a measure in a hierarchical manner. We tested whether listeners without advanced music training form such hierarchical representations for a rhythmical sound sequence under different attention conditions (Attend, Unattend, and Passive). Participants detected occasional weakly and strongly syncopated rhythmic patterns within the context of a strictly metrical rhythmical sound sequence. Detection performance was better and faster when syncopation occurred in a metrically strong as compared to a metrically weaker position. Compatible electrophysiological differences (earlier and higher-amplitude MMN responses) were obtained when participants did not attend to the rhythmical sound sequences. These data indicate that hierarchical representations of rhythmical sound sequences are formed preattentively in the human auditory system.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Candice Frances ◽  
Eugenia Navarra-Barindelli ◽  
Clara D. Martin

Abstract
Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality, as well as the interplay between type of similarity and modality, remains largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. The results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into current processing models.
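"Improved signal detection" here refers to sensitivity in the signal-detection-theory sense; for a yes/no lexical decision task this is typically quantified as d′. A minimal sketch follows, where the trial counts are hypothetical and the log-linear correction is one common choice, not necessarily the one used in this study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' for a yes/no lexical decision task.
    Uses a log-linear correction so hit/false-alarm rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one participant in one modality/similarity cell:
# word trials answered "word" (hits) or "nonword" (misses); nonword trials
# answered "word" (false alarms) or "nonword" (correct rejections).
sensitivity = d_prime(hits=90, misses=10, false_alarms=20, correct_rejections=80)
```

Comparing d′ across the crossed similarity and modality conditions is what would reveal the facilitation and interference patterns the abstract describes.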


1978 ◽  
Vol 12 (2) ◽  
pp. 77-79
Author(s):  
I. V. Marchuk ◽  
A. N. Tsisarenko ◽  
�. A. Bakai

Author(s):  
Aaron Crowson ◽  
Zachary H. Pugh ◽  
Michael Wilkinson ◽  
Christopher B. Mayhorn

The development of head-mounted display virtual reality systems (e.g., Oculus Rift, HTC Vive) has created an increasing need to represent the physical world while immersed in the virtual one. Research to date has focused on representing static objects in the physical room, but there has been little work on notifying VR users of changes in the environment. This study investigates how different sensory modalities affect the noticeability and comprehension of notifications designed to alert head-mounted display users when a person enters their area of use. In addition, this study investigates how an orientation-type notification aids perception of alerts that occur outside a virtual reality user's visual field. Results of a survey indicated that participants perceived the auditory modality as more effective regardless of notification type. An experiment corroborated these findings for the person notifications; however, the visual modality was in practice more effective for orientation notifications.


2017 ◽  
Vol 28 (03) ◽  
pp. 222-231 ◽  
Author(s):  
Riki Taitelbaum-Swead ◽  
Michal Icht ◽  
Yaniv Mama

Abstract
In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific and occur only in auditory tasks.

The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and in their NH peers.

A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests.

Twelve young adults, long-term CI users, implanted between the ages of 1.7 and 4.5 yr and scoring ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group.

For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Paired-sample t tests were then used to evaluate the PE size (the difference between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions.

With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed memory performance (and a similar PE) comparable to that of NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for non-produced words (hence a larger PE) relative to their NH peers.

The results support the view that young adults with CIs benefit more from learning via the visual modality (reading) than via the auditory modality (listening). Importantly, vocal production can substantially improve auditory word memory, especially for the CI group.

