Transformation from auditory to linguistic representations across auditory cortex is rapid and attention dependent for continuous speech

2018
Author(s): Christian Brodbeck, L. Elliot Hong, Jonathan Z. Simon

Summary: During speech perception, a central task of the auditory cortex is to analyze complex acoustic patterns to allow detection of the words that encode a linguistic message. It is generally thought that this process includes at least one intermediate, phonetic, level of representations [1–6], localized bilaterally in the superior temporal lobe [7–10]. Phonetic representations reflect a transition from acoustic to linguistic information, classifying acoustic patterns into linguistically meaningful units, which can serve as input to mechanisms that access abstract word representations [11–13]. While recent research has identified neural signals arising from successful recognition of individual words in continuous speech [14–17], no explicit neurophysiological signal has been found demonstrating the transition from acoustic/phonetic to symbolic, lexical representations. Here we report a response reflecting the incremental integration of phonetic information for word identification, dominantly localized to the left temporal lobe. The short response latency, approximately 110 ms relative to phoneme onset, suggests that phonetic information is used for lexical processing as soon as it becomes available. Responses also tracked word boundaries, confirming previous reports of immediate lexical segmentation [18,19]. These new results were further investigated using a cocktail-party paradigm [20,21] in which participants listened to a mix of two talkers, attending to one and ignoring the other. Analysis indicates neural lexical processing of only the attended, but not the unattended, speech stream. Thus, while responses to acoustic features reflect attention through selective amplification of attended speech, responses consistent with a lexical processing model reveal categorically selective processing.

2017, Vol. 61 (3), pp. 430-465
Author(s): Miquel Llompart, Miquel Simonet

This study investigates the production and auditory lexical processing of words involved in a patterned phonological alternation in two dialects of Catalan spoken on the island of Majorca, Spain. One of these dialects, that of Palma, merges /ɔ/ and /o/ as [o] in unstressed position and maintains /u/ as an independent category, [u]. In the dialect of Sóller, a small village, speakers merge unstressed /ɔ/, /o/, and /u/ to [u]. First, a production study asks whether the discrete, rule-based descriptions of the vowel alternations provided in the dialectological literature can adequately account for these processes: are the mergers complete? Results show that the mergers are complete with regard to the main acoustic cue to these vowel contrasts, F1; however, minor differences are maintained in F2 and vowel duration. Second, a lexical decision task using cross-modal priming investigates the strength with which words produced in the phonetic form of the neighboring (versus one's own) dialect activate listeners' lexical representations during spoken word recognition: are words within and across dialects accessed efficiently? The study finds that listeners from one of these dialects, Sóller, process their own and the neighboring forms equally efficiently, while listeners from the other, Palma, process their own forms more efficiently than those of the neighboring dialect. This study has implications for our understanding of the role of lifelong linguistic experience in speech performance.


2005, Vol. 16 (6), pp. 451-459
Author(s): Luca L. Bonatti, Marcela Peña, Marina Nespor, Jacques Mehler

Speech is produced mainly in continuous streams containing several words. Listeners can use the transitional probability (TP) between adjacent and non-adjacent syllables to segment “words” from a continuous stream of artificial speech, much as they use TPs to organize a variety of perceptual continua. It is thus possible that a general-purpose statistical device exploits any speech unit to achieve segmentation of speech streams. Alternatively, language may limit which representations are open to statistical investigation according to their specific linguistic role. In this article, we focus on vowels and consonants in continuous speech. We hypothesized that vowels and consonants in words carry different kinds of information, consonants being more tied to word identification and vowels to grammar. We thus predicted that in a word identification task involving continuous speech, learners would track TPs among consonants, but not among vowels. Our results show a preferential role for consonants in word identification.
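The TP-based segmentation mechanism described in the abstract can be illustrated with a minimal sketch. This is not the authors' procedure or stimuli; the three-syllable artificial "words" and the boundary threshold below are illustrative assumptions. Within a word, each syllable predicts the next with high probability, while TPs dip at word boundaries, so cutting the stream wherever the TP falls below a threshold recovers the words:

```python
# Sketch of transitional-probability (TP) segmentation over a syllable
# stream, in the spirit of statistical word-learning experiments.
# The syllable inventory and threshold are illustrative assumptions.
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Insert a word boundary wherever the TP dips below the threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A stream built from three made-up trisyllabic words in varied order,
# so within-word TPs are 1.0 but cross-word TPs are at most 2/3.
A, B, C = ["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ku", "ti"]
order = [A, B, C, B, A, C, A, C, B]
stream = [syll for word in order for syll in word]
print(segment(stream))
```

A purely statistical segmenter like this is blind to the linguistic role of its units, which is exactly the assumption the study tests: it computes TPs over whichever tier it is given, whereas the authors' learners tracked TPs over consonants but not vowels.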


2019, Vol. 40 (1), pp. 231-248
Author(s): Andrew Wedel, Adam Ussishkin, Adam King

Abstract: Listeners incrementally process words as they hear them, progressively updating inferences about which word is intended as the phonetic signal unfolds in time. As a consequence, phonetic cues positioned early in the signal for a word are on average more informative about word identity, because they disambiguate the intended word from more lexical alternatives than do cues late in the word. In this contribution, we review two new findings about structure in lexicons and phonological grammars, and argue that both arise through the same biases on phonetic reduction and enhancement resulting from incremental processing.

(i) Languages optimize their lexicons over time with respect to the amount of signal allocated to words relative to their predictability: words that are on average less predictable in context tend to be longer, while those that are on average more predictable tend to be shorter. However, the fact that phonetic material earlier in the word plays a larger role in word identification suggests that languages should also optimize the distribution of that information across the word. We review recent work on a range of different languages that supports this hypothesis: less frequent words are not only on average longer, but also contain more highly informative segments early in the word.

(ii) All languages are characterized by phonological grammars of rules describing predictable modifications of pronunciation in context. Because speakers appear to pronounce informative phonetic cues more carefully than less informative cues, it has been predicted that languages should be less likely to evolve phonological rules that reduce lexical contrast at word beginnings. A recent statistical analysis of a cross-linguistic dataset of phonological rules strongly supports this hypothesis.
Taken together, we argue that these findings suggest that the incrementality of lexical processing has wide-ranging effects on the evolution of phonotactic patterns.


2015, Vol. 282 (1811), pp. 20151203
Author(s): Gregory S. Berns, Peter F. Cook, Sean Foxley, Saad Jbabdi, Karla L. Miller, ...

The brains of odontocetes (toothed whales) look grossly different from those of their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocete auditory system is both unique and crucial to their survival. Yet scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory (A1) cortices, we found distinct regions of the thalamus connected to V1 and A1. But in addition to suprasylvian A1, we report here, for the first time, that auditory cortex also exists in the temporal lobe, in a region near cetacean A2 and possibly analogous to the primary auditory cortex of related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens for answering basic questions in comparative neurobiology in a way that has not previously been possible, and they show a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species.


2009, Vol. 62 (5), pp. 858-867
Author(s): Erin Maloney, Evan F. Risko, Shannon O'Malley, Derek Besner

Participants read aloud nonword letter strings, presented one at a time, that varied in the number of letters. The standard result is observed in two experiments: the time to begin reading aloud increases as letter length increases. This result is standardly understood as reflecting the operation of a serial, left-to-right translation of graphemes into phonemes. The novel result is that the effect of letter length is statistically eliminated by a small number of repetitions. This elimination suggests that these nonwords are no longer always being read aloud via a serial, left-to-right sublexical process. Instead, the data are taken as evidence that new orthographic and phonological lexical entries have been created for these nonwords, which are now read, at least sometimes, via the lexical route. Experiment 2 replicates the interaction between nonword letter length and repetition observed in Experiment 1 and also demonstrates that this interaction is not seen when participants merely classify the string as appearing in upper or lower case. Implications for existing dual-route models of reading aloud and for Share's self-teaching hypothesis are discussed.


2006, Vol. 111 (5), pp. 459-464
Author(s): Steven A. Chance, Manuel F. Casanova, Andy E. Switala, Timothy J. Crow, Margaret M. Esiri

2008, Vol. 29 (4), pp. 553-584
Author(s): Annie Tremblay

Abstract: The objectives of this study are (a) to determine whether native speakers of Canadian French at different English proficiency levels can use primary stress for recognizing English words and (b) to specify how the second language (L2) learners' (surface-level) knowledge of L2 stress placement influences their use of primary stress in L2 word recognition. Two experiments were conducted: a cross-modal word-identification task investigating (a) and a vocabulary production task investigating (b). The results show that several L2 learners can use primary stress for recognizing English words, but only the L2 learners with targetlike knowledge of stress placement can do so. The results also indicate that knowing where primary stress falls in English words is not sufficient for L2 learners to be able to use stress for L2 lexical access. This suggests that the problem that L2 word stress poses for many native speakers of (Canadian) French is at the level of lexical processing.

