auditory illusion
Recently Published Documents

TOTAL DOCUMENTS: 42 (FIVE YEARS: 2)
H-INDEX: 10 (FIVE YEARS: 0)

PLoS ONE, 2021, Vol 16 (4), pp. e0250042
Author(s): Hollie A. C. Mullin, Evan A. Norkey, Anisha Kodwani, Michael S. Vitevitch, Nichol Castro

The Speech-to-Song Illusion is an auditory illusion that occurs when a spoken phrase is presented repeatedly. After several presentations, listeners report that the phrase seems to be sung rather than spoken. Previous work [1] indicates that the mechanisms—priming, activation, and satiation—found in the language processing model Node Structure Theory (NST) may account for the Speech-to-Song Illusion. NST also accounts for other language-related phenomena, including the increased frequency with which older adults experience the tip-of-the-tongue state (knowing a word but being unable to retrieve it). Based on the mechanism in NST used to account for this age-related increase in the tip-of-the-tongue phenomenon, we predicted that older adults may be less likely than younger adults to experience the Speech-to-Song Illusion. Adults across a wide age range heard a stimulus known to evoke the Speech-to-Song Illusion. They were then asked to indicate whether they experienced the illusion (Study 1), to respond on a 5-point song-likeness rating scale (Study 2), or to indicate when the percept changed from speech to song (Study 3). The results of these studies suggest that adult listeners, regardless of age, experience the illusion with similar frequency and strength, and after the same number of repetitions.


Author(s): Carole Leung, De-Hui Ruth Zhou

The speech-to-song illusion is an auditory illusion in which repetition of part of a sentence shifts listeners' perception from speech-like to song-like. This study examines how pace, emotion, and language tonality affect the speech-to-song illusion. It uses a between-subject (pace: fast, normal, vs. slow) and within-subject (emotion: positive, negative, vs. neutral; language tonality: tonal vs. non-tonal language) design. Sixty Hong Kong college students were randomly assigned to one of the three pace conditions. They listened to 12 audio stimuli, each consisting of repetitions of a short excerpt, and rated on a five-point Likert scale whether the presented phrase sounded like speech or song. Paired-sample t-tests and repeated measures ANOVAs were used to analyze the data. The findings reveal that a faster speech pace strengthens the tendency toward the speech-to-song illusion, whereas neither emotion nor language tonality shows a statistically significant influence. This study suggests that the perception of sound lies on a continuum, and it furthers our understanding of song production, in which speech can turn into music through repetitive phrases delivered at a relatively fast pace.


2017, Vol 141 (5), pp. 3997-3997
Author(s): Karlheinz Brandenburg, Florian Klein, Annika Neidhardt, Stephan Werner

2017, Vol 141 (5), pp. 3800-3800
Author(s): Diana Deutsch, Miren Edelstein, Trevor Henthorn

2017, Vol 372 (1714), pp. 20160114
Author(s): Anahita H. Mehta, Nori Jacoby, Ifat Yasin, Andrew J. Oxenham, Shihab A. Shamma

This study investigates the neural correlates and processes underlying the ambiguous percept produced by a stimulus similar to Deutsch's ‘octave illusion’, in which each ear is presented with a sequence of alternating pure tones of low and high frequencies. The same sequence is presented to each ear, but in opposite phase, such that the left and right ears receive a high–low–high … and a low–high–low … pattern, respectively. Listeners generally report hearing the illusion of an alternating pattern of low and high tones, with all the low tones lateralized to one side and all the high tones lateralized to the other side. The current explanation of the illusion is that it reflects an illusory feature conjunction of pitch and perceived location. Using psychophysics and electroencephalogram measures, we test this and an alternative hypothesis involving synchronous and sequential stream segregation, and investigate potential neural correlates of the illusion. We find that the illusion of alternating tones arises from the synchronous tone pairs across ears rather than sequential tones in one ear, suggesting that the illusion involves a misattribution of time across perceptual streams, rather than a misattribution of location within a stream. The results provide new insights into the mechanisms of binaural streaming and synchronous sound segregation. This article is part of the themed issue ‘Auditory and visual scene analysis’.
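The dichotic stimulus described above can be sketched in a few lines of Python. This is a minimal illustration only: the function name is hypothetical, and the frequencies, tone durations, and sequence length are assumed values typical of the octave-illusion paradigm, not taken from the paper.

```python
import numpy as np

def octave_illusion_stimulus(f_low=400.0, f_high=800.0, tone_ms=250.0,
                             n_pairs=10, fs=44100):
    """Dichotic sequence as described in the abstract: each ear alternates
    between a low and a high pure tone, in opposite phase across ears, so the
    left ear carries high-low-high-... while the right carries low-high-low-...
    (All parameters here are illustrative.)"""
    n = int(fs * tone_ms / 1000.0)          # samples per tone
    t = np.arange(n) / fs
    low = np.sin(2 * np.pi * f_low * t)
    high = np.sin(2 * np.pi * f_high * t)
    left, right = [], []
    for i in range(2 * n_pairs):
        if i % 2 == 0:                      # even slots: left high, right low
            left.append(high); right.append(low)
        else:                               # odd slots: left low, right high
            left.append(low); right.append(high)
    # stereo array, shape (2, total_samples): row 0 = left ear, row 1 = right
    return np.stack([np.concatenate(left), np.concatenate(right)])

stim = octave_illusion_stimulus()
```

The key property is that at every moment the two ears receive different frequencies, which is what makes the reported percept (one lateralized low stream, one lateralized high stream) illusory rather than veridical.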


2016, Vol 140 (4), pp. 2225-2233
Author(s): Anahita H. Mehta, Ifat Yasin, Andrew J. Oxenham, Shihab Shamma

2016
Author(s): Karlheinz Brandenburg, Stephan Werner, Florian Klein, Christoph Sladeczek

2015, Vol 114 (2), pp. 1272-1285
Author(s): Yan Gai, Janet L. Ruhland, Tom C. T. Yin

The precedence effect (PE) is an auditory illusion that occurs when listeners localize nearly coincident and similar sounds from different spatial locations, such as a direct sound and its echo. It has mostly been studied in humans and animals with immobile heads in the horizontal plane; speaker pairs were often symmetrically located in the frontal hemifield. The present study examined the PE in head-unrestrained cats for a variety of paired-sound conditions along the horizontal, vertical, and diagonal axes. Cats were trained with operant conditioning to direct their gaze to the perceived sound location. Stereotypical PE-like behaviors were observed for speaker pairs placed in azimuth or diagonally in the frontal hemifield as the interstimulus delay was varied. For speaker pairs in the median sagittal plane, no clear PE-like behavior occurred. Interestingly, when speakers were placed diagonally in front of the cat, certain PE-like behavior emerged along the vertical dimension. However, PE-like behavior was not observed when both speakers were located in the left hemifield. A Hodgkin-Huxley model was used to simulate responses of neurons in the medial superior olive (MSO) to sound pairs in azimuth. The novel simulation incorporated a low-threshold potassium current and frequency mismatches to generate internal delays. The model exhibited distinct PE-like behavior, such as summing localization and localization dominance. The simulation indicated that certain encoding of the PE could have occurred before information reaches the inferior colliculus, and MSO neurons with binaural inputs having mismatched characteristic frequencies may play an important role.
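The modeling approach mentioned above can be illustrated with a reduced conductance-based (Hodgkin-Huxley-type) neuron that includes a low-threshold potassium (KLT) current. This is a minimal sketch with made-up parameters, not the authors' actual MSO model: the function name, gating kinetics, and conductance values are all assumed, and the two alpha-function inputs are a crude stand-in for binaural excitation.

```python
import numpy as np

def simulate_mso(delay_ms, g_klt=20.0, dt=0.01, t_end=15.0):
    """Reduced HH-type point neuron with leak and low-threshold K+ (KLT)
    currents, driven by two brief excitatory conductances offset by delay_ms
    (illustrative stand-ins for the left- and right-ear inputs).
    Returns the peak membrane depolarization in mV."""
    C = 1.0                                  # membrane capacitance, uF/cm^2
    g_leak, E_leak = 2.0, -65.0              # leak conductance and reversal
    E_K, E_exc = -90.0, 0.0                  # K+ and excitatory reversals
    w_inf = lambda v: 1.0 / (1.0 + np.exp(-(v + 48.0) / 6.0))  # KLT activation
    tau_w = 1.5                              # KLT gate time constant, ms
    V = E_leak
    w = w_inf(V)

    def g_syn(time, onset, gmax=8.0, tau=0.3):
        # alpha-function synaptic conductance, peaking gmax at onset + tau
        s = max(time - onset, 0.0)
        return gmax * (s / tau) * np.exp(1.0 - s / tau)

    vmax = V
    for ti in np.arange(0.0, t_end, dt):
        g_e = g_syn(ti, 2.0) + g_syn(ti, 2.0 + delay_ms)
        I_ion = (g_leak * (V - E_leak)
                 + g_klt * w * (V - E_K)
                 + g_e * (V - E_exc))
        w += dt * (w_inf(V) - w) / tau_w     # explicit Euler updates
        V += dt * (-I_ion) / C
        vmax = max(vmax, V)
    return vmax

peak_sync = simulate_mso(0.0)    # coincident binaural inputs
peak_async = simulate_mso(1.0)   # inputs offset by 1 ms
```

In this sketch the KLT gate opens with depolarization, so the first input transiently shunts the second; coincident inputs therefore produce a larger peak depolarization than inputs offset by 1 ms, illustrating the coincidence-detection property that underlies summing localization in the model.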

