MEG sensor patterns reflect perceptual but not categorical similarity of animate and inanimate objects

2017 ◽  
Author(s):  
Daria Proklova ◽  
Daniel Kaiser ◽  
Marius V. Peelen

Human high-level visual cortex shows a distinction between animate and inanimate objects, as revealed by fMRI. Recent studies have shown that object animacy can similarly be decoded from MEG sensor patterns. Which object properties drive this decoding? Here, we disentangled the influence of perceptual and categorical object properties by presenting perceptually matched objects (e.g., snake and rope) that were nonetheless easily recognizable as being animate or inanimate. In a series of behavioral experiments, three aspects of perceptual dissimilarity of these objects were quantified: overall dissimilarity, outline dissimilarity, and texture dissimilarity. Neural dissimilarity of MEG sensor patterns was modeled using regression analysis, in which perceptual dissimilarity (from the behavioral experiments) and categorical dissimilarity served as predictors of neural dissimilarity. We found that perceptual dissimilarity was strongly reflected in MEG sensor patterns from 80 ms after stimulus onset, with separable contributions of outline and texture dissimilarity. Surprisingly, when controlling for perceptual dissimilarity, MEG patterns did not carry information about object category (animate vs. inanimate) at any time point. Nearly identical results were found in a second MEG experiment that required basic-level object recognition. These results suggest that MEG sensor patterns do not capture object animacy independently of perceptual differences between animate and inanimate objects. This is in contrast to results observed in fMRI using the same stimuli, task, and analysis approach: fMRI showed a highly reliable categorical distinction in visual cortex even when controlling for perceptual dissimilarity. Results thus point to a discrepancy in the information contained in multivariate fMRI and MEG patterns.
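The regression RSA described in this abstract can be sketched with simulated dissimilarities. This is an illustrative toy example, not the authors' code or data; all variable names, matrix sizes, and noise levels are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_objects = 8  # e.g., 4 animate / 4 inanimate, perceptually matched pairs

def vectorize_rdm(rdm):
    # lower triangle of a symmetric dissimilarity matrix, as a vector
    return rdm[np.tril_indices_from(rdm, k=-1)]

# Predictor RDMs: perceptual dissimilarity (random here, standing in for the
# behavioral measurements) and categorical dissimilarity (same vs. different category)
perceptual = rng.random((n_objects, n_objects))
perceptual = (perceptual + perceptual.T) / 2
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = animate, 1 = inanimate
categorical = (labels[:, None] != labels[None, :]).astype(float)

# Toy neural RDM at one time point, driven mostly by perceptual dissimilarity
neural = 0.8 * perceptual + 0.05 * rng.random((n_objects, n_objects))
neural = (neural + neural.T) / 2

X = np.column_stack([vectorize_rdm(perceptual), vectorize_rdm(categorical)])
betas = LinearRegression().fit(X, vectorize_rdm(neural)).coef_
# betas[0]: perceptual contribution; betas[1]: categorical contribution
```

Because the simulated neural RDM is built from perceptual structure only, the perceptual beta dominates and the categorical beta stays near zero, mirroring the pattern the abstract reports for MEG.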

2016 ◽  
Vol 115 (4) ◽  
pp. 2246-2250 ◽  
Author(s):  
Daniel Kaiser ◽  
Damiano C. Azzalini ◽  
Marius V. Peelen

Neuroimaging research has identified category-specific neural response patterns to a limited set of object categories. For example, faces, bodies, and scenes evoke activity patterns in visual cortex that are uniquely traceable in space and time. It is currently debated whether these apparently categorical responses truly reflect selectivity for categories or instead reflect selectivity for category-associated shape properties. In the present study, we used a cross-classification approach on functional MRI (fMRI) and magnetoencephalographic (MEG) data to reveal both category-independent shape responses and shape-independent category responses. Participants viewed human body parts (hands and torsos) and pieces of clothing that were closely shape-matched to the body parts (gloves and shirts). Category-independent shape responses were revealed by training multivariate classifiers on discriminating shape within one category (e.g., hands versus torsos) and testing these classifiers on discriminating shape within the other category (e.g., gloves versus shirts). This analysis revealed significant decoding in large clusters in visual cortex (fMRI) starting from 90 ms after stimulus onset (MEG). Shape-independent category responses were revealed by training classifiers on discriminating object category (bodies and clothes) within one shape (e.g., hands versus gloves) and testing these classifiers on discriminating category within the other shape (e.g., torsos versus shirts). This analysis revealed significant decoding in bilateral occipitotemporal cortex (fMRI) and from 130 to 200 ms after stimulus onset (MEG). Together, these findings provide evidence for concurrent shape and category selectivity in high-level visual cortex, including category-level responses that are not fully explicable by two-dimensional shape properties.
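A minimal sketch of the cross-classification logic described above, using simulated patterns rather than the study's pipeline; the shape axis, category offsets, and noise level are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_features = 40, 50  # trials per condition, sensors/voxels

# A shared "shape" axis (hand-like vs. torso-like) present in both categories,
# plus a category-specific offset (bodies vs. clothes)
shape_axis = rng.standard_normal(n_features)
body_offset = rng.standard_normal(n_features)
cloth_offset = rng.standard_normal(n_features)

def simulate(shape_sign, offset):
    noise = 0.5 * rng.standard_normal((n_trials, n_features))
    return shape_sign * shape_axis + offset + noise

hands, torsos = simulate(+1, body_offset), simulate(-1, body_offset)
gloves, shirts = simulate(+1, cloth_offset), simulate(-1, cloth_offset)

shape_labels = np.r_[np.zeros(n_trials), np.ones(n_trials)]

# Train shape discrimination within bodies (hands vs. torsos)...
clf = LogisticRegression(max_iter=1000).fit(np.vstack([hands, torsos]), shape_labels)
# ...and test it within clothes (gloves vs. shirts)
acc = clf.score(np.vstack([gloves, shirts]), shape_labels)
```

Above-chance accuracy (here, above 0.5) indicates a shape code that generalizes across category, which is exactly what the cross-classification design is built to isolate; swapping the roles of shape and category yields the shape-independent category test.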


2016 ◽  
Vol 28 (5) ◽  
pp. 680-692 ◽  
Author(s):  
Daria Proklova ◽  
Daniel Kaiser ◽  
Marius V. Peelen

Objects belonging to different categories evoke reliably different fMRI activity patterns in human occipitotemporal cortex, with the most prominent distinction being that between animate and inanimate objects. An unresolved question is whether these categorical distinctions reflect category-associated visual properties of objects or whether they genuinely reflect object category. Here, we addressed this question by measuring fMRI responses to animate and inanimate objects that were closely matched for shape and low-level visual features. Univariate contrasts revealed animate- and inanimate-preferring regions in ventral and lateral temporal cortex even for individually matched object pairs (e.g., snake–rope). Using representational similarity analysis, we mapped out brain regions in which the pairwise dissimilarity of multivoxel activity patterns (neural dissimilarity) was predicted by the objects' pairwise visual dissimilarity and/or their categorical dissimilarity. Visual dissimilarity was measured as the time it took participants to find a unique target among identical distractors in three visual search experiments, where we separately quantified overall dissimilarity, outline dissimilarity, and texture dissimilarity. All three visual dissimilarity structures predicted neural dissimilarity in regions of visual cortex. Interestingly, these analyses revealed several clusters in which categorical dissimilarity predicted neural dissimilarity after regressing out visual dissimilarity. Together, these results suggest that the animate–inanimate organization of human visual cortex is not fully explained by differences in the characteristic shape or texture properties of animals and inanimate objects. Instead, representations of visual object properties and object category may coexist in more anterior parts of the visual system.
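The key analytic move here, testing whether categorical dissimilarity still predicts neural dissimilarity after regressing out visual dissimilarity, can be illustrated on simulated RDM vectors (toy data, not the study's code; effect sizes are invented).

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_pairs = 28  # vectorized lower triangle of an 8 x 8 RDM

visual = rng.random(n_pairs)                             # e.g., visual-search dissimilarity
categorical = rng.integers(0, 2, n_pairs).astype(float)  # same vs. different category
neural = 0.5 * visual + 0.3 * categorical + 0.1 * rng.standard_normal(n_pairs)

# Remove the variance explained by visual dissimilarity (plus an intercept)
X = np.column_stack([np.ones(n_pairs), visual])
residual = neural - X @ np.linalg.lstsq(X, neural, rcond=None)[0]

# Does categorical dissimilarity still predict the residual neural dissimilarity?
rho, p = spearmanr(categorical, residual)
```

In this simulation a genuine categorical contribution survives the regression, which is the signature the fMRI analysis found in anterior visual cortex; in the matched MEG analysis of the first abstract, the analogous residual test came up empty.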


2016 ◽  
Author(s):  
Darren Seibert ◽  
Daniel L Yamins ◽  
Diego Ardila ◽  
Ha Hong ◽  
James J DiCarlo ◽  
...  

Human visual object recognition is subserved by a multitude of cortical areas. To make sense of this system, one line of research focused on the response properties of primary visual cortex neurons and developed theoretical models of a set of canonical computations, such as convolution, thresholding, exponentiation, and normalization, that could be hierarchically repeated to give rise to more complex representations. Another line of research focused on the response properties of high-level visual cortex and linked these to semantic categories useful for object recognition. Here, we hypothesized that the panoply of visual representations in the human ventral stream may be understood as emergent properties of a system constrained both by simple canonical computations and by top-level object recognition functionality in a single unified framework (Yamins et al., 2014; Khaligh-Razavi and Kriegeskorte, 2014; Guclu and van Gerven, 2015). We built a deep convolutional neural network model optimized for object recognition and used representational similarity analysis to compare representations at various model levels with human functional imaging responses elicited by viewing hundreds of image stimuli. Neural network layers developed representations that corresponded in a hierarchically consistent fashion to visual areas from V1 to LOC. This correspondence increased with optimization of the model's recognition performance. These findings support a unified view of the ventral stream in which representations from the earliest to the latest stages can be understood as being built from basic computations inspired by modeling of early visual cortex and shaped by optimization for high-level, object-based performance constraints.
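The layer-to-area comparison described here can be sketched as an RDM correlation. The feature matrices below are hypothetical stand-ins (random data, not CNN activations or fMRI responses); only the comparison logic follows the abstract.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_stimuli = 20

def rdm(features):
    # 1 - Pearson correlation between the stimulus feature vectors
    return 1 - np.corrcoef(features)

tril = np.tril_indices(n_stimuli, k=-1)

# Hypothetical brain feature spaces for an early (V1-like) and a late (LOC-like)
# area, and model layers whose features resemble one or the other
v1_features = rng.standard_normal((n_stimuli, 100))
loc_features = rng.standard_normal((n_stimuli, 100))
early_layer = v1_features + 0.3 * rng.standard_normal((n_stimuli, 100))
deep_layer = loc_features + 0.3 * rng.standard_normal((n_stimuli, 100))

early_to_v1, _ = spearmanr(rdm(early_layer)[tril], rdm(v1_features)[tril])
early_to_loc, _ = spearmanr(rdm(early_layer)[tril], rdm(loc_features)[tril])
# hierarchical correspondence: the early layer's RDM matches V1 better than LOC
```

Repeating this for every layer-area pair yields the layer-by-area correspondence matrix whose diagonal structure is the "hierarchically consistent" result the abstract reports.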


2014 ◽  
Vol 26 (8) ◽  
pp. 1629-1643 ◽  
Author(s):  
Yetta Kwailing Wong ◽  
Cynthia Peng ◽  
Kristyn N. Fratus ◽  
Geoffrey F. Woodman ◽  
Isabel Gauthier

Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component, the C1 (measured 40–60 msec after stimulus onset), in observers with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked, not when it was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity, given perceptual expertise and a testing context consistent with the expertise training, such as blocking the stimulus category for music reading.


2021 ◽  
Author(s):  
Miles Wischnewski ◽  
Marius V. Peelen

Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160-200 ms after onset, followed by LOC at 260-300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in visual cortex.


2018 ◽  
Vol 119 (1) ◽  
pp. 160-176 ◽  
Author(s):  
Hee-kyoung Ko ◽  
Rüdiger von der Heydt

Figure-ground organization in the visual cortex is generally assumed to be based partly on general rules and partly on specific influences of object recognition in higher centers as found in the temporal lobe. To see if shape familiarity influences figure-ground organization, we tested border ownership-selective neurons in monkey V1/V2 with silhouettes of human and monkey face profiles and “nonsense” silhouettes constructed by mirror-reversing the front part of the profile. We found no superiority of face silhouettes compared with nonsense shapes in eliciting border-ownership signals overall. However, in some neurons, border-ownership signals differed strongly between the two categories consistently across many different profile shapes. Surprisingly, this category selectivity appeared as early as 70 ms after stimulus onset, which is earlier than the typical latency of shape-selective responses but compatible with the earliest face-selective responses in the inferior temporal lobe. Although our results provide no evidence for a delayed top-down influence from object recognition centers, they indicate sophisticated shape categorization mechanisms that are much faster than generally assumed. NEW & NOTEWORTHY A long-standing question is whether low-level sensory representations in cortex are influenced by cognitive “top-down” signals. We studied figure-ground organization in the visual cortex by comparing border-ownership signals for face profiles and matched nonsense shapes. We found no sign of “face superiority” in the population border-ownership signal. However, some neurons consistently differentiated between the face and nonsense categories early on, indicating the presence of shape classification mechanisms that are much faster than previously assumed.


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Miles Wischnewski ◽  
Marius V Peelen

Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by (direct or indirect) feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160–200 ms after onset, followed by LOC at 260–300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in the visual cortex.


2016 ◽  
Author(s):  
Heeyoung Choo ◽  
Dirk B Walther

Humans efficiently grasp complex visual environments, making highly consistent judgments of entry-level category despite high variability in visual appearance. How does the human brain arrive at the invariant neural representations underlying categorization of real-world environments? Here we show that the neural representation of visual environments in scene-selective human visual cortex relies on statistics of contour junctions, which provide cues for the three-dimensional arrangement of surfaces in a scene. We manipulated line drawings of real-world environments such that statistics of contour orientations or junctions were disrupted. Manipulated and intact line drawings were presented to participants in an fMRI experiment. Scene categories were decoded from neural activity patterns in the parahippocampal place area (PPA), the occipital place area (OPA), and other visual brain regions. Disruption of junctions but not orientations led to a drastic decrease in decoding accuracy in the PPA and OPA, indicating the reliance of these areas on intact junction statistics. Accuracy of decoding from early visual cortex, on the other hand, was unaffected by either image manipulation. We further show that the correlation of error patterns between decoding from the scene-selective brain areas and behavioral experiments is contingent on intact contour junctions. Finally, a searchlight analysis exposes the reliance of visually active brain regions on different sets of contour properties. Statistics of contour length and curvature dominate neural representations of scene categories in early visual areas, whereas contour junctions dominate in high-level scene-selective brain regions.
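The decoding comparison at the heart of this design can be sketched with simulated ROI patterns (toy data; the category count, voxel count, and noise level are assumptions, and the real analysis would compare intact vs. manipulated conditions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_per_cat, n_voxels, n_cats = 30, 60, 6  # e.g., six scene categories

# Simulated ROI patterns: one mean pattern per category plus trial-by-trial noise
means = rng.standard_normal((n_cats, n_voxels))
X = np.vstack([m + 0.8 * rng.standard_normal((n_per_cat, n_voxels)) for m in means])
y = np.repeat(np.arange(n_cats), n_per_cat)

# Cross-validated decoding accuracy; chance is 1 / n_cats
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
```

In the study's terms, disrupting junction statistics would be expected to push this accuracy toward chance in the PPA and OPA while leaving early-visual-cortex decoding unchanged; the classifier's confusion matrix is also what gets correlated with behavioral error patterns.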


2017 ◽  
pp. 142-154 ◽  
Author(s):  
A. Yusupova ◽  
S. Khalimova

The paper presents research on the characteristics of high-tech business development in Russia. Companies' performance indicators were analyzed using regression analysis and the authors' scheme for assessing leadership stability and sustainability. Data provided by Russia's Fast-Growing High-Tech Companies' National Rating (TechUp) during 2012–2016 were used. The results reveal that the high-tech sector is characterized by a high level of uncertainty. A limited number of regions and sectors that form the basis for high-tech business were identified, and a relationship between indicators of innovation activity and export potential was established.

