The relative contributions of visual and semantic information in the neural representation of object categories

2019, Vol 9 (10)
Author(s): Lindsay W. Victoria, John A. Pyles, Michael J. Tarr

2015, Vol 15 (12), p. 8
Author(s): Marius Catalin Iordan, Michelle Greene, Diane Beck, Li Fei-Fei

2015, Vol 114 (3), pp. 1819-1826
Author(s): Yune Sang Lee, Jonathan E. Peelle, David Kraemer, Samuel Lloyd, Richard Granger

Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex.
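As a rough illustration of multivariate pattern classification of the kind applied above: a sketch, not the study's actual pipeline. The four-voxel patterns, categories, and nearest-centroid classifier below are invented for the example; real analyses use thousands of voxels and proper cross-validation.

```python
from statistics import mean

def centroid(patterns):
    """Mean activity pattern across trials (one value per voxel)."""
    return [mean(voxel) for voxel in zip(*patterns)]

def classify(pattern, centroids):
    """Assign the category whose centroid is nearest in Euclidean distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

# Hypothetical 4-voxel activity patterns for two auditory categories.
train = {
    "voice":    [[2.1, 0.3, 1.8, 0.2], [1.9, 0.5, 2.0, 0.1]],
    "nonvoice": [[0.2, 1.7, 0.4, 1.9], [0.4, 2.0, 0.3, 1.6]],
}
centroids = {label: centroid(trials) for label, trials in train.items()}

# A held-out voice-like pattern is assigned to the "voice" category.
print(classify([2.0, 0.4, 1.9, 0.3], centroids))  # voice
```

The classifier succeeding on held-out patterns is what "a distinct neural representation that correctly distinguishes human vocalizations" amounts to operationally.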


2019, Vol 31 (1), pp. 155-173
Author(s): J. Brendan Ritchie, Hans Op de Beeck

The human capacity for visual categorization is core to how we make sense of the visible world. Although a substantive body of research in cognitive neuroscience has localized this capacity to regions of human visual cortex, relatively few studies have investigated the role of abstraction in how representations for novel object categories are constructed from the neural representation of stimulus dimensions. Using human fMRI coupled with formal modeling of observer behavior, we assess a wide range of categorization models that vary in their level of abstraction from collections of subprototypes to representations of individual exemplars. The category learning tasks range from simple linear and unidimensional category rules to complex crisscross rules that require a nonlinear combination of multiple dimensions. We show that models based on neural responses in primary visual cortex favor a variable, but often limited, extent of abstraction in the construction of representations for novel categories, which differ in degree across tasks and individuals.
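The abstraction continuum the authors probe runs from a single summary per category (prototype-style) to storage of every individual exemplar. A minimal sketch of the two endpoints, with invented two-dimensional stimuli, shows why a crisscross (XOR-like) rule defeats a pure prototype model but not an exemplar model; this is an illustrative toy, not the models actually fit in the study.

```python
from statistics import mean

def prototype_predict(x, categories):
    """One summary representation per category: the mean of its members."""
    protos = {c: [mean(dim) for dim in zip(*items)]
              for c, items in categories.items()}
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(protos, key=lambda c: dist(x, protos[c]))

def exemplar_predict(x, categories):
    """Every training item kept: classify by the nearest single exemplar."""
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(categories, key=lambda c: min(dist(x, e) for e in categories[c]))

# A crisscross rule: opposite corners of the stimulus space share a category.
crisscross = {"A": [(0.0, 0.0), (1.0, 1.0)], "B": [(0.0, 1.0), (1.0, 0.0)]}

print(exemplar_predict((0.1, 0.9), crisscross))   # B: nearest exemplar wins
print(prototype_predict((0.1, 0.9), crisscross))  # both prototypes collapse to
                                                  # (0.5, 0.5), so the model
                                                  # cannot separate the rule
```

Intermediate models (collections of subprototypes) sit between these two extremes.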


2019
Author(s): Bobby Stojanoski, Stephen M. Emrich, Rhodri Cusack

Abstract
We rely upon visual short-term memory (VSTM) for continued access to perceptual information that is no longer available. Despite the complexity of our visual environments, the majority of research on VSTM has focused on memory for lower-level perceptual features. Using more naturalistic stimuli, it has been found that recognizable objects are remembered better than unrecognizable objects. What remains unclear, however, is how semantic information changes brain representations in order to facilitate this improvement in VSTM for real-world objects. To address this question, we used a continuous report paradigm to assess VSTM (precision and guessing rate) while participants underwent functional magnetic resonance imaging (fMRI) to measure the underlying neural representation of 96 objects from 4 animate and 4 inanimate categories. To isolate semantic content, we used a novel image generation method that parametrically warps images until they are no longer recognizable while preserving basic visual properties. We found that intact objects were remembered with greater precision and a lower guessing rate than unrecognizable objects (this also emerged when objects were grouped by category and animacy). Representational similarity analysis of the ventral visual stream found evidence of category and animacy information in anterior visual areas during encoding only, but not during maintenance. These results suggest that semantic information, acting during encoding in ventral visual areas, boosts visual short-term memory for real-world objects.
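Representational similarity analysis, as used above, compares conditions by the dissimilarity of the activity patterns they evoke. A bare-bones sketch with invented five-voxel condition means follows; correlation distance (1 − Pearson r) is one common dissimilarity measure, though the study's exact choices are not specified here.

```python
from statistics import mean, stdev

def pearson(a, b):
    """Pearson correlation between two equal-length patterns."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / ((len(a) - 1) * stdev(a) * stdev(b))

def rdm(patterns):
    """Representational dissimilarity matrix: pairwise 1 - r."""
    labels = sorted(patterns)
    return {(i, j): 1 - pearson(patterns[i], patterns[j])
            for i in labels for j in labels if i < j}

# Hypothetical condition means: an intact animate object, its warped
# (unrecognizable) counterpart, and an intact inanimate object.
patterns = {
    "animate_intact":   [1.0, 0.8, 0.2, 0.9, 0.1],
    "animate_warped":   [0.9, 0.7, 0.3, 0.8, 0.2],
    "inanimate_intact": [0.1, 0.3, 0.9, 0.2, 1.0],
}
d = rdm(patterns)
# Animate conditions evoke similar patterns; the inanimate condition is
# far from both, i.e. animacy structure is visible in the RDM.
print(d[("animate_intact", "animate_warped")]
      < d[("animate_intact", "inanimate_intact")])  # True
```

Finding such structure in encoding-period but not maintenance-period RDMs is what grounds the study's conclusion.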


2016
Author(s): Alona Fyshe, Gustavo Sudre, Leila Wehbe, Nicole Rafidi, Tom M. Mitchell

Abstract
As a person reads, the brain performs complex operations to create higher order semantic representations from individual words. While these steps are effortless for competent readers, we are only beginning to understand how the brain performs these actions. Here, we explore semantic composition using magnetoencephalography (MEG) recordings of people reading adjective-noun phrases presented one word at a time. We track the neural representation of semantic information over time, through different brain regions. Our results reveal two novel findings: 1) a neural representation of the adjective is present during noun presentation, but this neural representation is different from that observed during adjective presentation; and 2) the neural representation of adjective semantics observed during adjective reading is reactivated after phrase reading, with remarkable consistency. We also note that while the semantic representation of the adjective during the reading of the adjective is very distributed, the later representations are concentrated largely in temporal and frontal areas previously associated with composition. Taken together, these results paint a picture of information flow in the brain as phrases are read and understood.
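Time-resolved decoding of the kind behind these findings trains and tests a classifier at each time point, so information about the adjective can appear, fade, and reappear across the epoch. A toy sketch under strong simplifying assumptions: two hypothetical sensors, hand-made patterns, a nearest-prototype classifier, no cross-validation.

```python
def nearest(pattern, prototypes):
    """Nearest-prototype classifier over sensor patterns."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(pattern, prototypes[label]))

def accuracy(trials, prototypes):
    """Fraction of (pattern, true-label) trials decoded correctly."""
    correct = sum(nearest(p, prototypes) == label for p, label in trials)
    return correct / len(trials)

prototypes = {"big": [1.0, 0.0], "small": [0.0, 1.0]}

# Synthetic trials at two time points: adjective identity is clearly
# decodable during adjective reading, near chance during the noun.
epochs = {
    "during_adjective": [([0.9, 0.1], "big"), ([0.1, 0.9], "small"),
                         ([0.8, 0.2], "big"), ([0.2, 0.8], "small")],
    "during_noun":      [([0.6, 0.4], "big"), ([0.4, 0.6], "small"),
                         ([0.5, 0.6], "big"), ([0.6, 0.5], "small")],
}
for name, trials in epochs.items():
    print(name, accuracy(trials, prototypes))  # 1.0, then 0.5 (chance)
```

Plotting such accuracies over many fine-grained time points is what reveals when adjective information is present, transformed, or reactivated.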


2012
Author(s): Darya L. Zabelina, Emmanuel Guzman-Martinez, Laura Ortega, Marcia Grabowecky, Mark Beeman, ...
