Figure-ground organization in the visual cortex: does meaning matter?

2018 ◽  
Vol 119 (1) ◽  
pp. 160-176 ◽  
Author(s):  
Hee-kyoung Ko ◽  
Rüdiger von der Heydt

Figure-ground organization in the visual cortex is generally assumed to be based partly on general rules and partly on specific influences of object recognition in higher centers as found in the temporal lobe. To see if shape familiarity influences figure-ground organization, we tested border-ownership-selective neurons in monkey V1/V2 with silhouettes of human and monkey face profiles and “nonsense” silhouettes constructed by mirror-reversing the front part of the profile. We found no superiority of face silhouettes compared with nonsense shapes in eliciting border-ownership signals overall. However, in some neurons, border-ownership signals differed strongly between the two categories consistently across many different profile shapes. Surprisingly, this category selectivity appeared as early as 70 ms after stimulus onset, which is earlier than the typical latency of shape-selective responses but compatible with the earliest face-selective responses in the inferior temporal lobe. Although our results provide no evidence for a delayed top-down influence from object recognition centers, they indicate sophisticated shape categorization mechanisms that are much faster than generally assumed.

NEW & NOTEWORTHY A long-standing question is whether low-level sensory representations in cortex are influenced by cognitive “top-down” signals. We studied figure-ground organization in the visual cortex by comparing border-ownership signals for face profiles and matched nonsense shapes. We found no sign of “face superiority” in the population border-ownership signal. However, some neurons consistently differentiated between the face and nonsense categories early on, indicating the presence of shape classification mechanisms that are much faster than previously assumed.

2014 ◽  
Vol 26 (8) ◽  
pp. 1629-1643 ◽  
Author(s):  
Yetta Kwailing Wong ◽  
Cynthia Peng ◽  
Kristyn N. Fratus ◽  
Geoffrey F. Woodman ◽  
Isabel Gauthier

Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that, in individuals with music-reading expertise, category selectivity for musical notation can be observed in the first ERP component, the C1 (measured 40–60 msec after stimulus onset). Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity, given perceptual expertise and a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.


2014 ◽  
Vol 26 (5) ◽  
pp. 1154-1167 ◽  
Author(s):  
Jacqueline C. Snow ◽  
Lars Strother ◽  
Glyn W. Humphreys

Humans typically rely upon vision to identify object shape, but we can also recognize shape via touch (haptics). Our haptic shape recognition ability raises an intriguing question: To what extent do visual cortical shape recognition mechanisms support haptic object recognition? We addressed this question using a haptic fMRI repetition design, which allowed us to identify neuronal populations sensitive to the shape of objects that were touched but not seen. In addition to the expected shape-selective fMRI responses in dorsal frontoparietal areas, we observed widespread shape-selective responses in the ventral visual cortical pathway, including primary visual cortex. Our results indicate that shape processing via touch engages many of the same neural mechanisms as visual object recognition. The shape-specific repetition effects we observed in primary visual cortex show that visual sensory areas are engaged during the haptic exploration of object shape, even in the absence of concurrent shape-related visual input. Our results complement related findings in visually deprived individuals and highlight the fundamental role of the visual system in the processing of object shape.


2017 ◽  
Author(s):  
Daria Proklova ◽  
Daniel Kaiser ◽  
Marius V. Peelen

Human high-level visual cortex shows a distinction between animate and inanimate objects, as revealed by fMRI. Recent studies have shown that object animacy can similarly be decoded from MEG sensor patterns. Which object properties drive this decoding? Here, we disentangled the influence of perceptual and categorical object properties by presenting perceptually matched objects (e.g., snake and rope) that were nonetheless easily recognizable as being animate or inanimate. In a series of behavioral experiments, three aspects of perceptual dissimilarity of these objects were quantified: overall dissimilarity, outline dissimilarity, and texture dissimilarity. Neural dissimilarity of MEG sensor patterns was modeled using regression analysis, in which perceptual dissimilarity (from the behavioral experiments) and categorical dissimilarity served as predictors of neural dissimilarity. We found that perceptual dissimilarity was strongly reflected in MEG sensor patterns from 80 ms after stimulus onset, with separable contributions of outline and texture dissimilarity. Surprisingly, when controlling for perceptual dissimilarity, MEG patterns did not carry information about object category (animate vs. inanimate) at any time point. Nearly identical results were found in a second MEG experiment that required basic-level object recognition. These results suggest that MEG sensor patterns do not capture object animacy independently of perceptual differences between animate and inanimate objects. This is in contrast to results observed in fMRI using the same stimuli, task, and analysis approach: fMRI showed a highly reliable categorical distinction in visual cortex even when controlling for perceptual dissimilarity. Results thus point to a discrepancy in the information contained in multivariate fMRI and MEG patterns.
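The regression analysis described in this abstract is a form of model-based representational similarity analysis: the lower triangle of a neural representational dissimilarity matrix (RDM) is regressed onto predictor RDMs (here, perceptual and categorical dissimilarity). A minimal sketch of that idea follows; the function name, the toy 4-condition RDMs, and the exact animate/inanimate split are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def rdm_regression(neural_rdm, predictor_rdms):
    """Regress a neural RDM onto predictor RDMs (model-based RSA sketch).
    All RDMs are square and symmetric; only the lower triangle is used."""
    tri = np.tril_indices_from(neural_rdm, k=-1)
    y = neural_rdm[tri]
    # Design matrix: one column per predictor RDM, plus an intercept.
    X = np.column_stack([p[tri] for p in predictor_rdms] + [np.ones(len(y))])
    betas, *_ = np.linalg.lstsq(X, y, rcond=None)
    return betas[:-1]  # regression weights, intercept dropped

# Toy example with hypothetical 4-condition RDMs (two "animate", two "inanimate"):
rng = np.random.default_rng(0)
perceptual = rng.random((4, 4))
perceptual = (perceptual + perceptual.T) / 2          # symmetrize
categorical = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [1, 1, 0, 0],
                        [1, 1, 0, 0]], float)         # between-category = 1
# Simulated neural RDM driven purely by perceptual dissimilarity:
neural = 2.0 * perceptual + 0.0 * categorical
b_perceptual, b_categorical = rdm_regression(neural, [perceptual, categorical])
```

In this simulated case the perceptual weight recovers the generating coefficient while the categorical weight is near zero, which is the pattern the abstract reports for the MEG data once perceptual dissimilarity is controlled for.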



eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Miles Wischnewski ◽  
Marius V Peelen

Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by (direct or indirect) feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160–200 ms after onset, followed by LOC at 260–300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in the visual cortex.


2016 ◽  
Author(s):  
Jonathan R. Williford ◽  
Rüdiger von der Heydt

Figure-ground organization and border-ownership assignment are essential for understanding natural scenes. It has been shown that many neurons in the macaque visual cortex signal border-ownership in displays of simple geometric shapes such as squares, but how well these neurons resolve border-ownership in natural scenes is not known. We studied area V2 neurons in behaving macaques with static images of complex natural scenes. We found that about half of the neurons were border-ownership selective for contours in natural scenes and this selectivity originated from the image context. The border-ownership signals emerged within 70 ms after stimulus onset, only ~30 ms after response onset. A substantial fraction of neurons were highly consistent across scenes. Thus, the cortical mechanisms of figure-ground organization are fast and efficient even in images of complex natural scenes. Understanding how the brain performs this task so fast remains a challenge.

Significance Statement: Here we show, for the first time, that neurons in primate visual area V2 signal border-ownership for objects in complex natural scenes. Surprisingly, these signals appear as early as the border-ownership signals for simple figure displays. In fact, they emerge well before object selective activity appears in infero-temporal cortex, which rules out feedback from that region as an explanation. Thus, “objectness” is detected by extremely fast mechanisms that do not depend on feedback from the known object-recognition centers.


2018 ◽  
Author(s):  
Hyehyeon Kim ◽  
Gayoung Kim ◽  
Sue-Hyun Lee

Top-down signals can influence our visual perception by providing guidance on information processing. In particular, top-down control between two basic frameworks, “individuation” and “grouping”, is critical for information processing during face perception. Individuation of faces supports identity recognition, while grouping subserves higher category-level face perception such as race or gender. However, it remains elusive how top-down control between individuation and grouping affects cortical representations during face perception. Here we performed an fMRI experiment to investigate whether representations across early and high-level visual areas can be altered by top-down control between individuation and grouping processes during face perception. Focusing on neural response patterns across the early visual cortex (EVC) and the face-selective fusiform face area (FFA), we found that the discriminability of individual faces from the response patterns was strong in the FFA but weak in the EVC during the individuation task, whereas the EVC but not the FFA showed significant face discrimination during the grouping tasks. These findings suggest that the representation of face information across the early and high-level visual cortex is flexible, depending on the top-down control of the perceptual framework between individuation and grouping.


2021 ◽  
pp. 1-14
Author(s):  
Jie Huang ◽  
Paul Beach ◽  
Andrea Bozoki ◽  
David C. Zhu

Background: Postmortem studies of brains with Alzheimer’s disease (AD) not only find amyloid-beta (Aβ) and neurofibrillary tangles (NFT) in the visual cortex, but also reveal temporally sequential changes in AD pathology from higher-order association areas to lower-order areas and then the primary visual area (V1) with disease progression.
Objective: This study investigated the effect of AD severity on visual functional networks.
Methods: Eight severe AD (SAD) patients, 11 mild/moderate AD (MAD) patients, and 26 healthy senior (HS) controls underwent resting-state fMRI (rs-fMRI) and a task fMRI of viewing face photos. A resting-state visual functional connectivity (FC) network and a face-evoked visual-processing network were identified for each group.
Results: For the HS group, the identified group-mean face-evoked visual-processing network in the ventral pathway started from V1 and ended within the fusiform gyrus. In contrast, the resting-state visual FC network was mainly confined within the visual cortex. AD disrupted these two functional networks in a similar severity-dependent manner: the more severe the cognitive impairment, the greater the reduction in network connectivity. For the face-evoked visual-processing network, MAD disrupted and reduced activation mainly in the higher-order visual association areas, with SAD further disrupting and reducing activation in the lower-order areas.
Conclusion: These findings provide a functional corollary to the canonical view of the temporally sequential advancement of AD pathology through visual cortical areas. The association of the disruption of functional networks, especially the face-evoked visual-processing network, with AD severity suggests a potential predictor or biomarker of AD progression.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Domenica Veniero ◽  
Joachim Gross ◽  
Stephanie Morand ◽  
Felix Duecker ◽  
Alexander T. Sack ◽  
...  

Voluntary allocation of visual attention is controlled by top-down signals generated within the Frontal Eye Fields (FEFs) that can change the excitability of lower-level visual areas. However, the mechanism through which this control is achieved remains elusive. Here, we emulated the generation of an attentional signal using single-pulse transcranial magnetic stimulation to activate the FEFs and tracked its consequences over the visual cortex. First, we documented changes to brain oscillations using electroencephalography and found evidence for a phase reset over occipital sites at beta frequency. We then probed for perceptual consequences of this top-down-triggered phase reset and assessed its anatomical specificity. We show that FEF activation leads to cyclic modulation of visual perception and extrastriate but not primary visual cortex excitability, again at beta frequency. We conclude that top-down signals originating in FEF causally shape visual cortex activity and perception through mechanisms of oscillatory realignment.


2019 ◽  
Vol 35 (05) ◽  
pp. 525-533
Author(s):  
Evrim Gülbetekin ◽  
Seda Bayraktar ◽  
Özlenen Özkan ◽  
Hilmi Uysal ◽  
Ömer Özkan

The authors tested face discrimination, face recognition, object discrimination, and object recognition in two face transplantation patients (FTPs) who had had facial injuries since infancy, a patient who had undergone facial surgery due to a recent wound, and two control subjects. In Experiment 1, the authors showed them original faces and morphed forms of those faces and asked them to rate the similarity between the two. In Experiment 2, they showed them old, new, and implicit faces and asked whether they recognized them or not. In Experiment 3, they showed them original objects and morphed forms of those objects and asked them to rate the similarity between the two. In Experiment 4, they showed them old, new, and implicit objects and asked whether they recognized them or not. Object discrimination and object recognition performance did not differ between the FTPs and the controls. However, the face discrimination performance of FTP2 and the face recognition performance of FTP1 were poorer than those of the controls. The authors therefore concluded that the structure of the face might affect face processing.

