Does the Phase of Ongoing EEG Oscillations Predict Auditory Perception?

2020 ◽  
Author(s):  
Idan Tal ◽  
Marcin Leszczynski ◽  
Nima Mesgarani ◽  
Charles E. Schroeder

Summary
Effective processing of information from the environment requires the brain to selectively sample relevant inputs. The visual perceptual system has been shown to sample information rhythmically, oscillating rapidly between more and less input-favorable states. Evidence of parallel effects in auditory perception is inconclusive. Here, we combined a bilateral pitch-identification task with electroencephalography (EEG) to investigate whether the phase of ongoing EEG predicts auditory discrimination accuracy. We compared prestimulus phase distributions between correct and incorrect trials. Shortly before stimulus onset, each of these distributions showed significant phase concentration, but centered at different phase angles. The effects were strongest in theta and beta frequency bands. The divergence between phase distributions showed a linear relation with accuracy, accounting for at least 10% of inter-individual variance. Discrimination performance oscillated rhythmically at a rate predicted by the neural data. These findings indicate that auditory discrimination threshold oscillates over time along with the phase of ongoing EEG activity. Thus, it appears that auditory perception is discrete rather than continuous, with the phase of ongoing EEG oscillations shaping auditory perception by providing a temporal reference frame for information processing.
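The analysis described above rests on two standard operations: extracting the instantaneous phase of band-limited EEG just before stimulus onset, and measuring how concentrated those phases are across trials. The following is a minimal sketch of those two steps, not the authors' actual pipeline; the function names, the fourth-order Butterworth filter, and the default theta band are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def prestimulus_phase(eeg, fs, band=(4.0, 8.0), t_onset=None):
    """Band-pass filter one single-channel trial and return the instantaneous
    phase (radians) at the sample just before stimulus onset.

    eeg     : 1-D array of samples for one trial
    fs      : sampling rate in Hz
    band    : (low, high) passband in Hz (default: theta)
    t_onset : stimulus onset time in seconds; defaults to the end of the trial
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, eeg))          # zero-phase filter, then analytic signal
    idx = (len(eeg) - 1) if t_onset is None else int(t_onset * fs) - 1
    return np.angle(analytic[idx])

def phase_concentration(phases):
    """Resultant vector length of a set of phase angles:
    1.0 when all phases are identical, near 0 for a uniform distribution."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))
```

In this scheme, one would collect `prestimulus_phase` values separately for correct and incorrect trials, then compare the concentration and circular mean of the two phase distributions, as the abstract describes for theta and beta bands.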


1973 ◽  
Vol 16 (3) ◽  
pp. 482-487 ◽  
Author(s):  
June D. Knafle

One hundred and eighty-nine kindergarten children were given a CVCC rhyming test which included four slightly different types of auditory differentiation. They obtained a greater number of correct scores on categories that provided maximum contrasts of final consonant sounds than they did on categories that provided less than maximum contrasts of final consonant sounds. For both sexes, significant differences were found between the categories; although the sex differences were not significant, girls made more correct rhyming responses than boys on the most difficult category.


Author(s):  
Rachel L. C. Mitchell ◽  
Rachel A. Kingston

It is now accepted that older adults have difficulty recognizing prosodic emotion cues, but it is not clear at what processing stage this ability breaks down. We manipulated the acoustic characteristics of tones in pitch, amplitude, and duration discrimination tasks to assess whether impaired basic auditory perception coexisted with our previously demonstrated age-related prosodic emotion perception impairment. It was found that pitch perception was particularly impaired in older adults, and that it displayed the strongest correlation with prosodic emotion discrimination. We conclude that an important cause of age-related impairment in prosodic emotion comprehension exists at the fundamental sensory level of processing.


1991 ◽  
Vol 36 (10) ◽  
pp. 839-840
Author(s):  
William A. Yost

2021 ◽  
Vol 213 ◽  
pp. 103219
Author(s):  
Clémence Bonnet ◽  
Bénédicte Poulin-Charronnat ◽  
Patrick Bard ◽  
Carine Michel

2021 ◽  
Vol 11 (3) ◽  
pp. 1150
Author(s):  
Stephan Werner ◽  
Florian Klein ◽  
Annika Neidhardt ◽  
Ulrike Sloma ◽  
Christian Schneiderwind ◽  
...  

For spatial audio reproduction in the context of augmented reality, a position-dynamic binaural synthesis system can be used to synthesize the ear signals for a moving listener. The goal is the fusion of the auditory perception of the virtual audio objects with the real listening environment. Such a system has several components, each of which helps to enable a plausible auditory simulation. For each possible position of the listener in the room, a set of binaural room impulse responses (BRIRs) congruent with the expected auditory environment is required to avoid room divergence effects. Adequate and efficient approaches synthesize new BRIRs from very few measurements of the listening room. The required spatial resolution of the BRIR positions can be estimated from spatial auditory perception thresholds. Retrieving and processing the tracking data of the listener’s head pose and position, as well as convolving BRIRs with an audio signal, must be done in real time. This contribution presents the authors’ work on several technical components of such a system in detail and shows how each component is shaped by psychoacoustics. Furthermore, the paper discusses the perceptual effects by means of listening tests demonstrating the appropriateness of the approaches.
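At its core, the rendering loop described above selects the BRIR pair matching the tracked listener position and convolves the source signal with it. The following is a minimal offline sketch of that idea under simplifying assumptions (a flat 2-D grid of pre-measured BRIRs, nearest-neighbor selection, no crossfading between positions, no real-time block convolution); the function names are illustrative and not from the system described in the paper.

```python
import numpy as np

def nearest_brir(position, brir_grid):
    """Pick the BRIR pair measured/synthesized nearest to the tracked position.

    position  : (x, y) listener position in meters
    brir_grid : dict mapping (x, y) grid points to (left_ir, right_ir) arrays
    """
    key = min(brir_grid,
              key=lambda p: np.hypot(p[0] - position[0], p[1] - position[1]))
    return brir_grid[key]

def binauralize(mono, brir_pair):
    """Convolve a mono source signal with the left/right BRIRs
    to produce the two ear signals."""
    left_ir, right_ir = brir_pair
    return np.convolve(mono, left_ir), np.convolve(mono, right_ir)
```

A real-time system would replace the full `np.convolve` with partitioned (block-wise FFT) convolution and crossfade between BRIR sets as the listener moves, to avoid audible switching artifacts.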

