Neural response patterns to speech sounds—A model
1983, Vol. 74 (S1), pp. S68–S68
Author(s): Pierre L. Divenyi, Robert V. Shannon, Stephen R. Saunders
NeuroImage, 2011, Vol. 56 (1), pp. 363–372
Author(s): Ulrike Lueken, Johann Daniel Kruschwitz, Markus Muehlhan, Jens Siegert, Jürgen Hoyer, ...

Author(s): Carolyn Parkinson

Abstract: Recent years have seen a surge of exciting developments in the computational tools available to social neuroscientists. This paper highlights and synthesizes recent advances that have been enabled by the application of such tools, as well as methodological innovations likely to be of interest and utility to social neuroscientists, but that have been concentrated in other sub-fields. Papers in this special issue are emphasized, many of which contain instructive materials (e.g., tutorials, code) for researchers new to the highlighted methods. These include approaches for modeling social decisions, characterizing multivariate neural response patterns at varying spatial scales, using decoded neurofeedback to draw causal links between specific neural response patterns and psychological and behavioral phenomena, examining time-varying patterns of connectivity between brain regions, and characterizing the social networks in which social thought and behavior unfold in everyday life. By combining computational methods for characterizing participants’ rich social environments – at the levels of stimuli, paradigms, and the webs of social relationships that surround people – with those for capturing the psychological processes that undergird social behavior and the wealth of information contained in neuroimaging datasets, social neuroscientists can gain new insights into how people create, understand, and navigate their complex social worlds.


2017
Author(s): Joshua S. Cetron, Andrew C. Connolly, Solomon G. Diamond, Vicki V. May, James V. Haxby, ...

How does STEM knowledge learned in school change students’ brains? Using fMRI, we presented photographs of real-world structures to engineering students with classroom-based knowledge and hands-on lab experience, examining how their brain activity differentiated them from their “novice” peers not pursuing engineering degrees. A data-driven multivariate pattern analysis (MVPA) and machine-learning approach revealed that neural response patterns of engineering students were convergent with each other and distinct from novices when considering physical forces acting on the structures. Furthermore, informational network analysis demonstrated that the distinct neural response patterns of engineering students reflected relevant concept knowledge: learned categories of mechanical structures. Information about mechanical categories was predominantly represented in bilateral anterior ventral occipitotemporal regions. Importantly, mechanical categories were not explicitly referenced in the experiment, nor does visual similarity between stimuli account for mechanical category distinctions. The results demonstrate how learning abstract STEM concepts in the classroom influences neural representations of objects in the world.
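The core of an MVPA decoding analysis like the one described above is training a classifier to predict stimulus category from multi-voxel activity patterns, validated on held-out trials. The following is a minimal illustrative sketch, not the authors' pipeline: it uses simulated patterns, hypothetical category names, and a simple nearest-centroid classifier with leave-one-out cross-validation in place of the study's actual machine-learning approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_patterns(n_trials, n_voxels, categories, signal=1.0):
    """Simulate multi-voxel response patterns: each category gets a
    characteristic prototype pattern plus Gaussian noise (toy data only)."""
    prototypes = {c: rng.normal(size=n_voxels) for c in categories}
    X, y = [], []
    for c in categories:
        for _ in range(n_trials):
            X.append(signal * prototypes[c] + rng.normal(size=n_voxels))
            y.append(c)
    return np.array(X), np.array(y)

def nearest_centroid_cv(X, y):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        cats = np.unique(y)
        # Centroids computed from training trials only, then the
        # held-out trial is assigned to the nearest centroid.
        centroids = {c: X[mask & (y == c)].mean(axis=0) for c in cats}
        pred = min(cats, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += pred == y[i]
    return correct / len(y)

# Hypothetical mechanical categories, chosen only for illustration.
X, y = simulate_patterns(n_trials=20, n_voxels=50,
                         categories=["beam", "truss", "arch"])
acc = nearest_centroid_cv(X, y)
```

Above-chance cross-validated accuracy (here, chance is 1/3) is the evidence that the patterns carry category information; in the study, this logic is applied to real fMRI patterns from engineering students and novices.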


2021
Author(s): Basil C Preisig, Lars Riecke, Alexis Hervais-Adelman

What processes lead to categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. We used a binaural integration task, where the inputs to the two ears were complementary so that phonemic identity emerged from their integration into a single percept. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a meaning-differentiating acoustic feature (third formant) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in the left anterior insula (AI), the left supplementary motor area (SMA), the left ventral motor cortex and the right motor and somatosensory cortex (M1/S1) represent listeners' syllable report irrespective of stimulus acoustics. The same areas have been previously implicated in decision-making (AI), response selection (SMA), and response initiation and feedback (M1/S1). Our results indicate that the emergence of categorical speech sounds implicates decision-making mechanisms and auditory-motor transformations acting on sensory inputs.
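A searchlight analysis repeats a local decoding analysis at every location in the brain, mapping where activity patterns discriminate the condition of interest (here, the syllable report). The sketch below is a deliberately simplified 1-D analogue with simulated data and arbitrary sizes, not the study's method: it slides a small window over a row of "voxels" and records leave-one-out decoding accuracy at each position, so the accuracy map peaks over the informative region.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (all sizes hypothetical): 200 voxels on a line,
# with report-discriminating signal confined to indices 80-119.
n_trials, n_vox, radius = 40, 200, 5
labels = np.repeat([0, 1], n_trials // 2)   # two syllable reports
data = rng.normal(size=(n_trials, n_vox))
data[labels == 1, 80:120] += 1.0            # signal only in the cluster

def loo_accuracy(X, y):
    """Leave-one-out nearest-centroid accuracy within one searchlight."""
    hits = 0
    for i in range(len(y)):
        m = np.arange(len(y)) != i
        c0 = X[m & (y == 0)].mean(axis=0)
        c1 = X[m & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        hits += pred == y[i]
    return hits / len(y)

# Slide the searchlight across voxels; each entry of the map is the
# decoding accuracy of the local neighborhood around that voxel.
acc_map = np.array([
    loo_accuracy(data[:, max(0, v - radius):v + radius + 1], labels)
    for v in range(n_vox)
])
```

In a real searchlight the window is a small sphere moved through 3-D voxel space, and the resulting accuracy map is tested against chance across participants; the logic of "decode locally, map everywhere" is the same.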


2021, Vol. 118 (36), e2101777118
Author(s): Han G. Yi, Bharath Chandrasekaran, Kirill V. Nourski, Ariane E. Rhone, William L. Schuerman, ...

Adults can learn to identify nonnative speech sounds with training, albeit with substantial variability in learning behavior. Increases in behavioral accuracy are associated with increased separability for sound representations in cortical speech areas. However, it remains unclear whether individual auditory neural populations all show the same types of changes with learning, or whether there are heterogeneous encoding patterns. Here, we used high-resolution direct neural recordings to examine local population response patterns, while native English listeners learned to recognize unfamiliar vocal pitch patterns in Mandarin Chinese tones. We found a distributed set of neural populations in bilateral superior temporal gyrus and ventrolateral frontal cortex, where the encoding of Mandarin tones changed throughout training as a function of trial-by-trial accuracy (“learning effect”), including both increases and decreases in the separability of tones. These populations were distinct from populations that showed changes as a function of exposure to the stimuli regardless of trial-by-trial accuracy. These learning effects were driven in part by more variable neural responses to repeated presentations of acoustically identical stimuli. Finally, learning effects could be predicted from speech-evoked activity even before training, suggesting that intrinsic properties of these populations make them amenable to behavior-related changes. Together, these results demonstrate that nonnative speech sound learning involves a wide array of changes in neural representations across a distributed set of brain regions.
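"Separability" of sound representations can be quantified by comparing how far apart neural patterns for different tones sit relative to the spread of patterns for the same tone. The sketch below is an illustrative toy, not the study's analysis: it uses simulated population responses with hypothetical sizes, and a simple between/within distance ratio, to show how such an index would increase across training blocks as tone encoding sharpens.

```python
import numpy as np

rng = np.random.default_rng(2)

def separability(patterns, tones):
    """Ratio of mean between-tone to mean within-tone pattern distance.
    Values above 1 indicate tone categories are more separable."""
    between, within = [], []
    n = len(tones)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(patterns[i] - patterns[j])
            (between if tones[i] != tones[j] else within).append(d)
    return np.mean(between) / np.mean(within)

# Simulate one neural population whose tone encoding sharpens across
# three training blocks (signal strength grows; all sizes hypothetical).
n_rep, n_feat, tones = 10, 30, [1, 2, 3, 4]   # four Mandarin tones
proto = {t: rng.normal(size=n_feat) for t in tones}

sep_by_block = []
for signal in [0.2, 0.6, 1.0]:
    P, T = [], []
    for t in tones:
        for _ in range(n_rep):
            P.append(signal * proto[t] + rng.normal(size=n_feat))
            T.append(t)
    sep_by_block.append(separability(np.array(P), T))
```

The study's key observation is that real populations are heterogeneous: tracked trial-by-trial with an index like this, some sites show increasing separability with learning while others show decreases, rather than uniform sharpening.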

