Can pitch rating methods converge on the frequencies within tonal stimuli?

2019 ◽  
Vol 146 (4) ◽  
pp. 3047-3047
Author(s):  
Jennifer Lentz


2002 ◽  
Vol 13 (04) ◽  
pp. 205-224 ◽  
Author(s):  
Andrew Dimitrijevic ◽  
Sasha M. John ◽  
Patricia Van Roon ◽  
David W. Purcell ◽  
Julija Adamonis ◽  
...  

Multiple auditory steady-state responses were evoked by eight tonal stimuli (four per ear), with each stimulus simultaneously modulated in both amplitude and frequency. The modulation frequencies varied from 80 to 95 Hz and the carrier frequencies were 500, 1000, 2000, and 4000 Hz. For air conduction, the differences between physiologic thresholds for these mixed-modulation (MM) stimuli and behavioral thresholds for pure tones in 31 adult subjects with a sensorineural hearing impairment and 14 adult subjects with normal hearing were 14 ± 11, 5 ± 9, 5 ± 9, and 9 ± 10 dB (correlation coefficients .85, .94, .95, and .95) for the 500-, 1000-, 2000-, and 4000-Hz carrier frequencies, respectively. Similar results were obtained in subjects with simulated conductive hearing losses. Responses to stimuli presented through a forehead bone conductor showed physiologic-behavioral threshold differences of 22 ± 8, 14 ± 5, 5 ± 8, and 5 ± 10 dB for the 500-, 1000-, 2000-, and 4000-Hz carrier frequencies, respectively. These responses were attenuated by white noise presented concurrently through the bone conductor.


2021 ◽  
Author(s):  
Konstantinos Giannos ◽  
George Athanasopoulos ◽  
Emilios Cambouropoulos

Visual associations with auditory stimuli have been the subject of numerous studies. Colour, shape, size, and several other parameters have been linked to musical elements like timbre and pitch. In this paper we aim to examine the relationship between harmonisations with varying degrees of dissonance and visual roughness. Based on past research in which high sensory dissonance was associated with angular shapes, we argued that non-tonal and highly dissonant harmonisations would be associated with angular and rough images, while more consonant stimuli would be associated with images of low visual roughness. A fixed melody was harmonised in 7 different styles, including highly tonal, non-tonal, and random variations. In a listening task, musically trained participants rated the stimuli for enjoyment and familiarity and matched them to images of variable roughness. The overall consonance of the stimuli was calculated using two distinct models (Wang et al., 2013; Harrison & Pearce, 2020) and a variant of the aggregate dyadic consonance index (Huron, 1994). Our results demonstrate that dissonance, as calculated by all models, was highly correlated with visual roughness, and that enjoyment and familiarity followed the expected patterns for tonal versus non-tonal stimuli. In addition to sensory dissonance, however, it appears that other factors, such as the typicality of chord progressions and the sense of tonality, may also influence this cross-modal interaction.
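Huron's aggregate dyadic consonance index scores a sonority by summing a consonance weight over every pair of pitches it contains, after folding each interval into an interval class. A minimal sketch of that idea; the interval-class weights below are illustrative placeholders, not the published values from Huron (1994):

```python
from itertools import combinations

# Illustrative interval-class weights (higher = more consonant).
# These are placeholders, NOT Huron's (1994) published values.
IC_WEIGHT = {0: 0.0, 1: -1.4, 2: -0.6, 3: 0.6, 4: 0.4, 5: 1.2, 6: -0.5}

def aggregate_dyadic_consonance(pitches):
    """Sum consonance weights over every dyad in a chord, folding each
    interval into an interval class (0-6 semitones)."""
    total = 0.0
    for a, b in combinations(pitches, 2):
        ic = abs(a - b) % 12
        ic = min(ic, 12 - ic)
        total += IC_WEIGHT[ic]
    return total
```

Under any sensible weighting, a major triad (e.g., pitch classes 0, 4, 7) scores higher than a chromatic cluster (0, 1, 2), which is the ordering the roughness-matching task probes.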


1994 ◽  
Vol 37 (3) ◽  
pp. 662-670 ◽  
Author(s):  
Peter J. Fitzgibbons ◽  
Sandra Gordon-Salant

This study examined auditory temporal sensitivity in young adult and elderly listeners using psychophysical tasks that measured duration discrimination. Listeners in the experiments were divided into groups of young and elderly subjects with normal hearing sensitivity and with mild-to-moderate sloping sensorineural hearing loss. Temporal thresholds in all tasks were measured with an adaptive forced-choice procedure using tonal stimuli centered at 500 Hz and 4000 Hz. Difference limens for duration were measured for tone bursts (250 msec reference duration) and for silent intervals between tone bursts (250 msec and 6.4 msec reference durations). Results showed that the elderly listeners exhibited diminished duration discrimination for both tones and silent intervals when the reference duration was 250 msec. Hearing loss did not affect these results. Discrimination of the brief temporal gap (6.4 msec) was influenced by age and hearing loss, but these effects were not consistent across all listeners. Effects of stimulus frequency were not evident for most of the duration discrimination conditions.
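Adaptive forced-choice procedures of the kind described usually belong to the transformed up-down family, in which a two-down one-up rule converges on the stimulus level yielding about 70.7% correct. A minimal sketch with an assumed step size and a simulated listener; this is not the exact procedure of the study:

```python
import random

def staircase(simulate_correct, start=100.0, step=10.0, n_reversals=8):
    """Two-down one-up adaptive track for a duration increment (msec).
    Returns the mean of the last six reversal points as the estimate."""
    delta = start
    direction = None
    reversals = []
    correct_streak = 0
    while len(reversals) < n_reversals:
        if simulate_correct(delta):
            correct_streak += 1
            if correct_streak == 2:          # two correct -> make it harder
                correct_streak = 0
                if direction == "up":
                    reversals.append(delta)  # direction change = reversal
                direction = "down"
                delta = max(delta - step, 1.0)
        else:                                # one incorrect -> make it easier
            correct_streak = 0
            if direction == "down":
                reversals.append(delta)
            direction = "up"
            delta += step
    return sum(reversals[-6:]) / 6

random.seed(0)

def listener(d):
    # Hypothetical psychometric function: accuracy grows with the increment.
    return random.random() < min(0.5 + d / 100.0, 0.99)
```

Averaging only the later reversals discards the initial descent from the easy starting value, so the estimate reflects the converged region of the track.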


1984 ◽  
Vol 27 (1) ◽  
pp. 20-27 ◽  
Author(s):  
Daniel Geller ◽  
Robert H. Margolis

Three experiments were conducted to explore the utility of magnitude estimation of loudness for hearing aid selection. In Experiment 1 the loudness discomfort level (LDL), most comfortable loudness (MCL), and magnitude estimations (MEs) of loudness were obtained from normal-hearing subjects. MCLs fell within a range of loudnesses that was relatively low on the loudness function. The LDLs were lower than previously published values. Experiment 2 was performed to identify the source of disparity between our LDL data and previously reported results. The effects of instructions are demonstrated and discussed. In Experiment 3 magnitude estimations of loudness were used to determine the loudness of tonal stimuli selected to represent ⅓ octave band levels of speech. Over the 500–4000 Hz range, the contributions of the various frequency regions to the loudness of speech appear to be nearly constant. Methods are proposed for (a) predicting the frequency-gain response of a hearing aid that restores normal loudness for speech for the hearing-impaired listener and (b) psychophysically evaluating the compression characteristic of a hearing aid.


1975 ◽  
Vol 38 (2) ◽  
pp. 231-249 ◽  
Author(s):  
M. M. Merzenich ◽  
P. L. Knight ◽  
G. L. Roth

The representation of sound frequency (and of the cochlear partition) within primary auditory cortex has been investigated with use of microelectrode-mapping techniques in a series of 25 anesthetized cats. Among the results were the following: 1) Within vertical penetrations into AI, best frequency was remarkably constant for successively studied neurons across the active middle and deep cortical layers. 2) There is an orderly representation of frequency (and of represented cochlear place) within AI. Frequency is rerepresented across the mediolateral dimension of the field. On an axis perpendicular to this plane of rerepresentation, best frequency (represented cochlear place) changes as a simple function of cortical location. 3) Any given frequency band (or sector of the cochlear partition) is represented across a belt of cortex of nearly constant width that runs on a nearly straight axis across AI. 4) There is a disproportionately large cortical surface representation of the highest-frequency octaves (basal cochlea) within AI. 5) The primary and secondary field locations were somewhat variable, when referenced to cortical surface landmarks. 6) Data from long penetrations passing down the rostral bank of the posterior ectosylvian sulcus were consistent with the existence of a vertical unit of organization in AI, akin to cortical columns described in primary visual and somatosensory cortex. 7) Responses to tonal stimuli were encountered in fields dorsocaudal, caudal, ventral, and rostral to AI. There is an orderly representation of the cochlea within the field rostral to AI, with a reversal in best frequencies across its border with AI. 8) Physiological definitions of AI boundaries are consistent with their cytoarchitectonic definition. Some of the implications of these findings are discussed.
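The "represented cochlear place" invoked throughout is conventionally computed from best frequency with the Greenwood frequency-position function, F = A(10^(ax) − k), inverted to give fractional distance along the basilar membrane. A minimal sketch using the human parameter set; the cat constants differ, so this is illustrative only:

```python
import math

def greenwood_position(f_hz, A=165.4, a=2.1, k=0.88):
    """Inverted Greenwood function: fraction of cochlear length
    (0 = apex, 1 = base) corresponding to best frequency f_hz.
    Human parameters shown; species-specific constants differ."""
    return math.log10(f_hz / A + k) / a

# Cochlear places for the octave series 500-4000 Hz.
places = {f: greenwood_position(f) for f in (500, 1000, 2000, 4000)}
```

Because the function is nearly logarithmic above a few hundred hertz, each octave occupies a roughly equal stretch of cochlea there; the disproportionate cortical area for the basal octaves reported above is therefore a cortical magnification, not just an echo of the cochlear map.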


1974 ◽  
Vol 2 (1) ◽  
pp. 112-116 ◽  
Author(s):  
Barry Leshowitz ◽  
Raquel Hanzi
