Extended categorization of conjunction object stimuli decreases the latency of attentional feature selection and recruits orthography-linked ERPs

2019 ◽  
Author(s):  
Jonathan R. Folstein ◽  
Shamsi S. Monfared

Abstract The role of attention in driving perceptual expertise effects is controversial. The current study addressed the effect of training on ERP components related to, and independent of, attentional feature selection. Participants learned to categorize cartoon animals over six training sessions (8,800 trials), after which ERPs were recorded during a target detection task performed on trained and untrained stimulus sets. The onset of the selection negativity, an ERP component indexing attentional modulation, was about 60 ms earlier for trained than for untrained stimuli. Trained stimuli also elicited centro-parietal N200 and N320 components that were insensitive to attentional feature selection. The scalp distribution and time course of these components were better matched by studies of orthography than of object expertise. Source localization using eLORETA suggested that the strongest neural sources of the selection negativity were in right ventral temporal cortex, whereas the strongest sources of the N200/N320 components were in left ventral temporal cortex, again consistent with the hypothesis that training recruited orthography-related areas. Overall, training altered neural processes related to attentional selection, but it also affected neural processes that were independent of feature selection.
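The reported ~60 ms onset shift can be illustrated with a generic fractional-peak threshold-crossing latency measure on simulated waveforms. Everything below is an assumption for illustration (Gaussian component shapes, 1 ms resolution, a 50% fractional-peak criterion); it is not the study's analysis code.

```python
import numpy as np

t = np.arange(0, 500)                      # time in ms; 1 kHz sampling assumed

def component(peak_ms, amp=-2.0, width=40.0):
    """Gaussian-shaped, negative-going ERP component peaking at `peak_ms`."""
    return amp * np.exp(-((t - peak_ms) ** 2) / (2 * width ** 2))

trained = component(200)                   # earlier selection negativity
untrained = component(260)

def onset_latency(wave, frac=0.5):
    """First time point at which `wave` reaches `frac` of its peak amplitude."""
    threshold = frac * wave.min()          # peak is the minimum for a negativity
    return t[np.argmax(wave <= threshold)]

print(onset_latency(untrained) - onset_latency(trained))  # ~60 ms
```

Jackknife-based latency estimation across subjects is another common choice; the threshold-crossing measure above is simply the easiest to illustrate.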

Author(s):  
Barış Kabak ◽  
Kazumi Maniwa ◽  
Nina Kazanina

Abstract The study explores the role of stress and vowel harmony as cues for speech segmentation. In both French and Turkish, stress is demarcative, typically falling on word-final syllables. Additionally, Turkish (but not French) has a regular front–back vowel harmony which dictates that all vowels within a word must be either front or back. French and Turkish participants performed a target detection task in which they had to spot nonsense words embedded in a longer auditory string. The results show that word-final stress can successfully signal an upcoming word boundary and is used for speech segmentation by speakers of both languages. In the Turkish group, but not in the French group, we also found a facilitatory effect of vowel disharmony. We conclude that both vowel harmony and stress can independently signal word boundaries and suggest that listeners can exploit these phonological regularities during speech segmentation.
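The front–back harmony constraint described above is easy to state programmatically. The following is a minimal, illustrative check assuming a simplified eight-vowel Turkish inventory; it ignores loanwords and rounding harmony and is not the study's stimulus-generation code.

```python
# Simplified Turkish vowel classes (front-back dimension only)
FRONT = set("eiöü")
BACK = set("aıou")

def is_harmonic(word: str) -> bool:
    """True if every vowel in `word` is front, or every vowel is back."""
    vowels = [c for c in word.lower() if c in FRONT | BACK]
    if not vowels:
        return True
    return all(v in FRONT for v in vowels) or all(v in BACK for v in vowels)

print(is_harmonic("güzel"))   # all front vowels → True
print(is_harmonic("kalem"))   # back 'a' + front 'e' → False
```

A disharmonic vowel sequence like the second example is exactly the kind of cue that could mark a word boundary for Turkish listeners.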


2020 ◽  
Author(s):  
D. Proklova ◽  
M.A. Goodale

Abstract Animate and inanimate objects elicit distinct response patterns in the human ventral temporal cortex (VTC), but the exact features driving this distinction are still poorly understood. One prominent feature that distinguishes typical animals from inanimate objects, and that could potentially explain the animate–inanimate distinction in the VTC, is the presence of a face. In the current fMRI study, we investigated this possibility by creating a stimulus set that included animals with faces, faceless animals, and inanimate objects, carefully matched in order to minimize other visual differences. We used both searchlight-based and ROI-based representational similarity analysis (RSA) to test whether the presence of a face explains the animate–inanimate distinction in the VTC. The searchlight analysis revealed that when animals with faces were removed from the analysis, the animate–inanimate distinction almost disappeared. The ROI-based RSA revealed a similar pattern of results, but also showed that, even in the absence of faces, information about agency (a combination of an animal's ability to move and to think) is present in the parts of the VTC that are sensitive to animacy. Together, these analyses showed that animals with faces do elicit a stronger animate/inanimate response in the VTC, but that this effect is driven not by faces per se, or by the visual features of faces, but by other factors that correlate with face presence, such as the capacity for self-movement and thought. In short, the VTC appears to treat the face as a proxy for agency, a ubiquitous feature of familiar animals.

Significance Statement Many studies have shown that images of animals are processed differently from inanimate objects in the human brain, particularly in the ventral temporal cortex (VTC). However, what features drive this distinction remains unclear. One important feature that distinguishes many animals from inanimate objects is a face. Here, we used fMRI to test whether the animate/inanimate distinction is driven by the presence of faces. We found that the presence of faces did indeed boost activity related to animacy in the VTC. A more detailed analysis, however, revealed that it was the association between faces and other attributes, such as the capacity for self-movement and thinking, not the faces per se, that was driving the activity we observed.
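The core RSA logic used in studies like this one, correlating a neural dissimilarity structure with a model dissimilarity structure, can be sketched in a few lines. Everything below, from the random "voxel patterns" to the three-category model RDM, is an illustrative assumption, not the study's searchlight or ROI pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_voxels = 12, 50
patterns = rng.normal(size=(n_stimuli, n_voxels))   # stand-in voxel patterns

# Neural RDM: 1 - Pearson correlation between stimulus response patterns
neural_rdm = pdist(patterns, metric="correlation")

# Model RDM: 0 if two stimuli share a category, 1 otherwise
# (e.g., animals with faces / faceless animals / inanimate objects)
labels = np.array([0] * 4 + [1] * 4 + [2] * 4)
model_rdm = pdist(labels[:, None], metric="hamming")

# Rank correlation between the two RDMs is the RSA test statistic
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho={rho:.3f}")
```

In a searchlight variant, the same correlation would be recomputed for the voxels inside a small sphere centered on each voxel in turn, producing a whole-brain map of model fit.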


2020 ◽  
Author(s):  
Tianyu Gao ◽  
Yue Pu ◽  
Jingyi Zhou ◽  
Guo Zheng ◽  
Yuqing Zhou ◽  
...  

Abstract Death awareness influences multiple aspects of human lives, but its psychological constructs and underlying brain mechanisms remain unclear. We address these by measuring behavioral and brain responses to images of human skulls. We show that skulls, relative to control stimuli, delay responses to life-related words but speed responses to death-related words. Skulls compared to the control stimuli induce early deactivations in the posterior ventral temporal cortex, followed by activations in the posterior and anterior ventral temporal cortices. The early and late neural modulations by perceived skulls respectively predict skull-induced changes of behavioral responses to life- and death-related words, and the early neural modulation further predicts death anxiety. Our findings decompose skull-induced death awareness into two-stage neural processes of a lifeless state of a former life.

One sentence summary Behavioral and brain imaging findings decompose skull-induced death awareness into two-stage neural processes of a lifeless state of a former life.


Author(s):  
James Head ◽  
Kyle Wilson ◽  
William S. Helton ◽  
Simon Kemp

Author(s):  
Lindsey M. Kitchell ◽  
Francisco J. Parada ◽  
Brandi L. Emerick ◽  
Tom A. Busey

Author(s):  
Shihab Shamma ◽  
Prachi Patel ◽  
Shoutik Mukherjee ◽  
Guilhem Marion ◽  
Bahar Khalighinejad ◽  
...  

Abstract Action and perception are closely linked in many behaviors, necessitating a close coordination between sensory and motor neural processes so as to achieve well-integrated, smoothly evolving task performance. To investigate the detailed nature of these sensorimotor interactions, and their role in learning and executing the skilled motor task of speaking, we analyzed ECoG recordings of responses in the high-γ band (70–150 Hz) in human subjects while they listened to, spoke, or silently articulated speech. We found elaborate spectrotemporally modulated neural activity projecting in both forward (motor-to-sensory) and inverse directions between the higher auditory and motor cortical regions engaged during speaking. Furthermore, mathematical simulations demonstrate a key role for the forward projection in learning to control the vocal tract, beyond its commonly postulated predictive role during execution. These results therefore offer a broader view of the functional role of the ubiquitous forward projection as an important ingredient in learning, rather than just control, of skilled sensorimotor tasks.
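A standard way to obtain a high-γ (70–150 Hz) response envelope from a raw recording is bandpass filtering followed by a Hilbert transform. The sketch below uses a synthetic signal and assumed filter parameters; it illustrates this general preprocessing step, not the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                               # sampling rate in Hz, assumed
t = np.arange(0, 2.0, 1 / fs)
# Synthetic trace: slow 5 Hz activity plus a 100 Hz burst in the second half
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 100 * t) * (t > 1.0)

# Zero-phase bandpass in the high-gamma range
b, a = butter(4, [70, 150], btype="bandpass", fs=fs)
bandpassed = filtfilt(b, a, signal)

# Instantaneous amplitude via the analytic signal
envelope = np.abs(hilbert(bandpassed))

# The 100 Hz burst should raise the envelope only in the second half
print(envelope[t < 0.5].mean(), envelope[t > 1.5].mean())
```

In practice the envelope would then be z-scored against a pre-stimulus baseline and averaged across trials before any encoding or decoding analysis.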

