Perceptual weights for loudness judgments of six-tone complexes

2014 ◽  
Vol 136 (2) ◽  
pp. 728-735 ◽  
Author(s):  
Walt Jesteadt ◽  
Daniel L. Valente ◽  
Suyash N. Joshi ◽  
Kendra K. Schmid

2016 ◽  
Vol 13 (118) ◽  
pp. 20160057 ◽  
Author(s):  
Erin E. Sutton ◽  
Alican Demir ◽  
Sarah A. Stamper ◽  
Eric S. Fortune ◽  
Noah J. Cowan

Animal nervous systems resolve sensory conflict for the control of movement. For example, the glass knifefish, Eigenmannia virescens, relies on visual and electrosensory feedback as it swims to maintain position within a moving refuge. To study how signals from these two parallel sensory streams are used in refuge tracking, we constructed a novel augmented reality apparatus that enables the independent manipulation of visual and electrosensory cues to freely swimming fish (n = 5). We evaluated the linearity of multisensory integration, the change to the relative perceptual weights given to vision and electrosense in relation to sensory salience, and the effect of the magnitude of sensory conflict on sensorimotor gain. First, we found that tracking behaviour obeys superposition of the sensory inputs, suggesting linear sensorimotor integration. In addition, fish rely more on vision when electrosensory salience is reduced, suggesting that fish dynamically alter sensorimotor gains in a manner consistent with Bayesian integration. However, the magnitude of sensory conflict did not significantly affect sensorimotor gain. These studies lay the theoretical and experimental groundwork for future work investigating multisensory control of locomotion.
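The Bayesian integration idea invoked above is usually formalized as inverse-variance (reliability-weighted) cue combination: as one cue becomes noisier, its weight falls and the other cue's weight rises. A minimal sketch, with illustrative estimates and variances that are not values from the study:

```python
# Minimal sketch of reliability-weighted (Bayesian) cue combination.
# All numeric values are illustrative, not data from the study.
def combine_cues(est_vision, var_vision, est_electro, var_electro):
    """Fuse two position estimates by inverse-variance weighting."""
    w_vision = (1 / var_vision) / (1 / var_vision + 1 / var_electro)
    w_electro = 1 - w_vision
    combined = w_vision * est_vision + w_electro * est_electro
    combined_var = 1 / (1 / var_vision + 1 / var_electro)
    return combined, combined_var

# Vision says the refuge is at 1.0, electrosense says 0.0; vision is
# more reliable (lower variance), so the fused estimate sits nearer 1.0.
pos, var = combine_cues(1.0, 0.5, 0.0, 2.0)   # → (0.8, 0.4)
```

Degrading electrosensory reliability further (raising `var_electro`) pushes the fused estimate still closer to the visual one, which is the qualitative signature the study reports when electrosensory salience is reduced.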


2020 ◽  
Author(s):  
Cora Kubetschek ◽  
Christoph Kayser

Abstract: Many studies speak in favor of a rhythmic mode of listening, by which the encoding of acoustic information is structured by rhythmic neural processes at the time scale of about 1 to 4 Hz. Indeed, psychophysical data suggest that humans do not sample acoustic information in extended soundscapes uniformly, but weigh the evidence at different moments for their perceptual decision at a time scale of about 2 Hz. We here test the critical prediction that such rhythmic perceptual sampling is directly related to the state of ongoing brain activity prior to the stimulus. Human participants judged the direction of frequency sweeps in 1.2 s long soundscapes while their EEG was recorded. Computing the perceptual weights attributed to different epochs within these soundscapes, contingent on the phase or power of pre-stimulus oscillatory EEG activity, revealed a direct link between the 4 Hz EEG phase and power prior to the stimulus and the phase of the rhythmic component of these perceptual weights. Hence, the temporal pattern by which acoustic information is sampled over time for behavior is directly related to pre-stimulus brain activity in the delta/theta band. These results close a gap in the mechanistic picture linking ongoing delta band activity to its role in shaping the segmentation and perceptual influence of subsequent acoustic information.


2006 ◽  
Vol 120 (5) ◽  
pp. 3246-3246 ◽  
Author(s):  
Lori J. Leibold ◽  
Walt Jesteadt

2021 ◽  
Author(s):  
Kyle Jasmin ◽  
Adam Tierney ◽  
Lori Holt

Abstract: Segmental speech units (e.g. phonemes) are described as multidimensional categories wherein perception involves contributions from multiple acoustic input dimensions, and the relative perceptual weights of these dimensions respond dynamically to context. Can prosodic aspects of speech spanning multiple phonemes, syllables or words be characterized similarly? Here we investigated the relative contribution of two acoustic dimensions to word emphasis. Participants categorized instances of a two-word phrase pronounced with typical covariation of fundamental frequency (F0) and duration, and in the context of an artificial ‘accent’ in which F0 and duration covaried atypically. When categorizing ‘accented’ speech, listeners rapidly down-weighted the secondary dimension (duration) while continuing to rely on the primary dimension (F0). This clarifies two core theoretical questions: (1) prosodic categories are signalled by multiple acoustic input dimensions, and (2) perceptual cue weights for prosodic categories dynamically adapt to local regularities of speech input.

Highlights:
- Prosodic categories are signalled by multiple acoustic dimensions.
- The influence of these dimensions flexibly adapts to changes in local speech input.
- This adaptive plasticity may help tune perception to atypical accented speech.
- Similar learning models may account for segmental and suprasegmental flexibility.
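Relative cue weights of the kind described here are commonly estimated by regressing categorization choices on the competing acoustic dimensions and normalizing the coefficients. A toy sketch, assuming a simulated listener who relies mainly on F0; all data and parameter values are illustrative, not the study's:

```python
import numpy as np

# Illustrative simulation: a listener categorizes word emphasis from two
# z-scored cues, weighting F0 heavily and duration lightly.
rng = np.random.default_rng(2)
n = 3000
f0 = rng.normal(size=n)    # F0 difference between the two words (z-scored)
dur = rng.normal(size=n)   # duration difference (z-scored)
choice = (2.0 * f0 + 0.8 * dur + rng.normal(size=n)) > 0

# Recover relative perceptual weights: regress the centered binary choice
# on the cues, then normalize the coefficient magnitudes to sum to 1.
X = np.column_stack([f0, dur])
beta, *_ = np.linalg.lstsq(X, choice.astype(float) - 0.5, rcond=None)
w = np.abs(beta) / np.abs(beta).sum()
# w[0] (F0) should dominate w[1] (duration), mirroring the primary/secondary
# cue distinction in the abstract.
```

The down-weighting result would then appear as a drop in `w[1]` when the same analysis is run on responses to the ‘accented’ condition.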


2019 ◽  
Author(s):  
Christoph Kayser

Abstract: Converging results suggest that perception is controlled by rhythmic processes in the brain. In the auditory domain, neuroimaging studies show that the perception of brief sounds is shaped by rhythmic activity prior to the stimulus, and electrophysiological recordings have linked delta band (1-2 Hz) activity to the functioning of individual neurons. These results have promoted theories of rhythmic modes of listening and generally suggest that the perceptually relevant encoding of acoustic information is structured by rhythmic processes along auditory pathways. A prediction from this perspective, which so far has not been tested, is that such rhythmic processes also shape how acoustic information is combined over time to judge extended soundscapes. The present study was designed to directly test this prediction. Human participants judged the overall change in perceived frequency content in temporally extended (1.2 to 1.8 s) soundscapes, while the perceptual use of the available sensory evidence was quantified using psychophysical reverse correlation. Model-based analysis of individual participants' perceptual weights revealed a rich temporal structure, including linear trends, a U-shaped profile tied to the overall stimulus duration, and importantly, rhythmic components at the time scale of 1 to 2 Hz. The collective evidence found here across four versions of the experiment supports the notion that rhythmic processes operating on the delta band time scale structure how perception samples temporally extended acoustic scenes.
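Psychophysical reverse correlation of this kind can be sketched in a few lines: simulate an observer whose decision weights carry a ~2 Hz ripple across the epochs of a soundscape, then recover per-epoch perceptual weights by correlating each epoch's evidence with the binary choice. Trial count, epoch size, and the weight profile below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_epochs = 5000, 12   # e.g. a 1.2 s soundscape in 100 ms epochs
evidence = rng.normal(size=(n_trials, n_epochs))  # per-epoch frequency evidence

# Simulated observer: temporal weights with a 2 Hz ripple (epochs 0.1 s apart),
# plus internal decision noise.
true_w = 1 + np.sin(2 * np.pi * 2 * np.arange(n_epochs) * 0.1)
decision = (evidence @ true_w + rng.normal(scale=2.0, size=n_trials)) > 0

# Reverse correlation: the perceptual weight of epoch t is the correlation
# between that epoch's evidence and the observer's binary choice.
est_w = np.array([np.corrcoef(evidence[:, t], decision.astype(float))[0, 1]
                  for t in range(n_epochs)])
```

Plotting `est_w` against epoch time would reveal the rhythmic component; in the study this profile is further decomposed (trend, U-shape, oscillation) by model-based analysis.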


2002 ◽  
Vol 45 (6) ◽  
pp. 1276-1284 ◽  
Author(s):  
Andrea L. Pittman ◽  
Patricia G. Stelmachowicz ◽  
Dawna E. Lewis ◽  
Brenda M. Hoover

To accommodate growing vocabularies, young children are thought to modify their perceptual weights as they gain experience with speech and language. The purpose of the present study was to determine whether the perceptual weights of children and adults with hearing loss differ from those of their normal-hearing counterparts. Adults and children with normal hearing and with hearing loss served as participants. Fricative and vowel segments within consonant-vowel-consonant stimuli were presented at randomly selected levels under two conditions: unaltered and with the formant transition removed. Overall performance for each group was calculated as a function of segment level. Perceptual weights were also calculated for each group using point-biserial correlation coefficients that relate the level of each segment to performance. Results revealed child-adult differences in overall performance as well as an effect of hearing loss. Despite these performance differences, the pattern of perceptual weights was similar across all four groups for most conditions.
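The point-biserial weighting analysis named in the abstract can be sketched directly: correlate each segment's randomly roved presentation level with trial correctness (a binary variable). The simulated listener below, driven mostly by the fricative level, is an illustrative assumption:

```python
import numpy as np

def point_biserial(level, correct):
    """Point-biserial correlation between a segment's presentation level
    (continuous) and trial correctness (binary 0/1)."""
    return np.corrcoef(level, np.asarray(correct, dtype=float))[0, 1]

# Illustrative data: fricative and vowel levels roved independently per trial;
# correctness depends mostly on the fricative level (assumed, not from study).
rng = np.random.default_rng(1)
n = 2000
fric = rng.uniform(-10, 10, n)    # dB re: nominal level
vowel = rng.uniform(-10, 10, n)
p_correct = 1 / (1 + np.exp(-(0.3 * fric + 0.05 * vowel)))
correct = rng.random(n) < p_correct

w_fric = point_biserial(fric, correct)    # large weight
w_vowel = point_biserial(vowel, correct)  # small weight
```

Comparing such weights across listener groups (as the study does) asks whether hearing loss changes *which* segments drive performance, separately from *how well* listeners perform overall.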

