Three-stage processing of category and variation information by entangled interactive mechanisms of peri-occipital and peri-frontal cortices

2018 ◽  
Author(s):  
Hamid Karimi-Rouzbahani

Abstract: Invariant object recognition, the ability to recognize objects precisely and rapidly despite variations in their appearance, has been a central question in human vision research. The general consensus is that the ventral and dorsal visual streams are the major processing pathways, undertaking category and variation encoding in entangled layers. This overlooks mounting evidence that supports a role for peri-frontal areas in category encoding. These recent studies, however, have left open several aspects of visual processing in peri-frontal areas, including whether these areas contribute only during active tasks, and whether they interact with peri-occipital areas or process information independently and differently. To address these concerns, a passive EEG paradigm was designed in which subjects viewed a set of variation-controlled object images. Using multivariate pattern analysis, noticeable category and variation information was observed in occipital, parietal, temporal, and prefrontal areas, supporting their contribution to visual processing. Using task-specificity indices together with phase and Granger causality analyses, three distinct stages of processing were identified, revealing transfer of information between peri-frontal and peri-occipital areas and suggesting parallel, interactive processing of visual information. A brain-plausible computational model supported the possibility of parallel processing mechanisms in peri-occipital and peri-frontal areas. These findings, while corroborating previous results on the role of prefrontal areas in object recognition, extend their contribution from active recognition, in which peri-frontal to peri-occipital feedback mechanisms are engaged, to the general case of object and variation processing, which is an integral part of visual processing and plays a role even during passive viewing.
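The time-resolved multivariate pattern analysis used in studies like this one can be illustrated with a minimal sketch. The data below are synthetic, and the classifier choice, epoch dimensions, and effect-onset time are illustrative assumptions, not details of the original study: a linear classifier is trained and cross-validated separately at each time point of the EEG epochs.

```python
# Minimal sketch of time-resolved MVPA decoding on EEG epochs.
# All data are synthetic; shapes and the onset time are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 50
labels = rng.integers(0, 2, n_trials)          # two object categories

# Synthetic epochs: a category-dependent signal appears from time point 20 on
epochs = rng.normal(size=(n_trials, n_channels, n_times))
epochs[labels == 1, :, 20:] += 0.5

# Decode category at each time point with 5-fold cross-validation
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(),
                    epochs[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print(accuracy[:20].mean(), accuracy[20:].mean())  # near chance, then above
```

The resulting accuracy time course is the kind of trace from which onset latencies and processing stages are typically read off.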

2018 ◽  
Vol 24 (6) ◽  
pp. 582-608 ◽  
Author(s):  
Fernando M. Ramírez

Viewpoint-invariant face recognition is thought to be subserved by a distributed network of occipitotemporal face-selective areas that, except for the human anterior temporal lobe, have been shown to also contain face-orientation information. This review begins by highlighting the importance of bilateral symmetry for viewpoint-invariant recognition and face-orientation perception. Monkey electrophysiological evidence is then surveyed describing key tuning properties of face-selective neurons, including neurons bimodally tuned to mirror-symmetric face views, followed by studies combining functional magnetic resonance imaging (fMRI) and multivariate pattern analyses to probe the representation of face-orientation and identity information in humans. Altogether, neuroimaging studies suggest that face identity is gradually disentangled from face-orientation information along the ventral visual processing stream. The evidence seems to diverge, however, regarding the prevalent form of tuning of neural populations in human face-selective areas. In this context, caveats possibly leading to erroneous inferences regarding mirror-symmetric coding are exposed, including the need to distinguish angular from Euclidean distances when interpreting multivariate pattern analyses. On this basis, this review argues that evidence from the fusiform face area is best explained by a view-sensitive code reflecting head angular disparity, consistent with a role of this area in face-orientation perception. Finally, the review stresses the importance of explicit models relating neural properties to large-scale signals.


2020 ◽  
Vol 123 (1) ◽  
pp. 167-177 ◽  
Author(s):  
Quentin Moreau ◽  
Eleonora Parrotta ◽  
Vanessa Era ◽  
Maria Luisa Martelli ◽  
Matteo Candidi

Neuroimaging and EEG studies have shown that passive observation of the full body and of specific body parts is associated with 1) activity of an occipito-temporal region named the extrastriate body area (EBA), 2) amplitude modulations of a specific posterior event-related potential (ERP) component (N1/N190), and 3) theta-band (4–7 Hz) synchronization recorded from occipito-temporal electrodes compatible with the location of EBA. To characterize the functional role of the occipito-temporal theta-band increase during the processing of body-part stimuli, we recorded EEG from healthy participants engaged in a match-to-sample identification task with images of hands and nonbody control images (leaves). In addition to confirming that occipito-temporal electrodes show a larger N1 for hand images than for control stimuli, cluster-based analysis revealed an occipito-temporal cluster with increased theta power when hands were presented (compared with leaves). This theta increase was higher for identified hands than for nonidentified ones, whereas it did not differ significantly between identified and nonidentified nonhand stimuli. Finally, single-trial multivariate pattern analysis revealed that time-frequency modulation in the theta band is a better marker for classifying the identification of hand images than the ERP modulation. The present results support the notion that theta activity over the occipito-temporal cortex is an informative marker of hand visual processing and may reflect the activity of a network coding for stimulus identity. NEW & NOTEWORTHY Hands provide crucial information regarding the identity of others, which is key to social processes. We recorded EEG activity of healthy participants during the visual identification of hand images. The combination of univariate and multivariate pattern analyses in the time and time-frequency domains highlights the functional role of theta (4–7 Hz) activity over visual areas during hand identification and emphasizes the robustness of this neuromarker in occipito-temporal visual processing dynamics.
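The theta-band power measure central to this study can be approximated, in outline, by band-pass filtering followed by a Hilbert envelope. The sketch below uses a synthetic one-channel signal; the sampling rate, filter order, and burst timing are illustrative assumptions, not the study's actual preprocessing.

```python
# Sketch of extracting theta-band (4–7 Hz) power from one EEG channel.
# The signal is synthetic; filter settings are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)                # 2 s of data
rng = np.random.default_rng(1)

# Synthetic trial: a 5 Hz (theta) burst in the second half, plus noise
signal = rng.normal(scale=0.5, size=t.size)
signal[t >= 1] += 2 * np.sin(2 * np.pi * 5 * t[t >= 1])

# Band-pass 4–7 Hz, then the Hilbert envelope gives instantaneous theta power
b, a = butter(4, [4, 7], btype="bandpass", fs=fs)
theta = filtfilt(b, a, signal)
power = np.abs(hilbert(theta)) ** 2

print(power[t < 1].mean(), power[t >= 1].mean())  # power rises with the burst
```

In practice such single-trial power estimates would then be averaged within electrode clusters and compared across conditions, as in the cluster-based analysis described above.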


2018 ◽  
Author(s):  
Tijl Grootswagers ◽  
Amanda K. Robinson ◽  
Thomas A. Carlson

Abstract: In our daily lives, we are bombarded with a stream of rapidly changing visual input. Humans have the remarkable capacity to detect and identify objects in fast-changing scenes. Yet, when studying brain representations, stimuli are generally presented in isolation. Here, we studied the dynamics of human vision using a combination of fast stimulus presentation rates, electroencephalography, and multivariate decoding analyses. Using a presentation rate of 5 images per second, we obtained the representational structure of a large number of stimuli and showed the emerging abstract categorical organisation of this structure. Furthermore, we could separate the temporal dynamics of perceptual processing from higher-level target selection effects. In a second experiment, we used the same paradigm at 20 Hz to show that shorter image presentation limits the categorical abstraction of object representations. Our results show that applying multivariate pattern analysis to every image in rapid serial visual processing streams has unprecedented potential for studying the temporal dynamics of the structure of representations in the human visual system.
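A "representational structure" of the kind obtained here is often summarized as a representational dissimilarity matrix (RDM) whose cells hold pairwise decoding accuracies. The sketch below builds such an RDM from synthetic patterns; the stimulus set, classifier, and noise level are illustrative assumptions, not the paper's pipeline.

```python
# Sketch of a representational dissimilarity matrix (RDM) built from
# pairwise decoding of synthetic response patterns.
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_stimuli, n_reps, n_channels = 4, 40, 32

# Synthetic patterns: stimuli 0–1 share one prototype, 2–3 another,
# mimicking an emerging categorical organisation
prototypes = np.repeat(rng.normal(size=(2, n_channels)), 2, axis=0)
X = np.vstack([p + 0.8 * rng.normal(size=(n_reps, n_channels))
               for p in prototypes])
y = np.repeat(np.arange(n_stimuli), n_reps)

# Each RDM cell holds the cross-validated decoding accuracy of one pair
rdm = np.zeros((n_stimuli, n_stimuli))
for i, j in combinations(range(n_stimuli), 2):
    mask = (y == i) | (y == j)
    acc = cross_val_score(LinearDiscriminantAnalysis(),
                          X[mask], y[mask], cv=5).mean()
    rdm[i, j] = rdm[j, i] = acc

# Between-category pairs decode better than within-category pairs
print(rdm[0, 1], rdm[0, 2])
```

Computing such an RDM at every time point yields the kind of time-resolved categorical structure the experiments above track at 5 Hz and 20 Hz presentation rates.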


2019 ◽  
Author(s):  
Adyasha Tejaswi Khuntia ◽  
Rechu Divakar ◽  
Fabio Apicella ◽  
Filippo Muratori ◽  
Koel Das

Abstract: Autism Spectrum Disorder (ASD) results in deficits in social interaction, non-verbal communication, and social reciprocity. Cognitive tasks pertaining to emotion processing are often preferred for distinguishing children with ASD from typically developing ones. We analysed the role of face and emotion processing in ASD and explored the feasibility of using EEG as a neural marker for detecting ASD. Subjects performed a visual perceptual task with face and nonface stimuli. Successful ASD detection was possible as early as 50 ms post-stimulus onset. Alpha and beta oscillations seem to best identify autistic individuals. Multivariate pattern analysis and source localization studies point to the role of early visual processing and attention, rather than emotion and face processing, in detecting autism.


2020 ◽  
Author(s):  
Karola Schlegelmilch ◽  
Annie E. Wertz

Visual processing of a natural environment occurs quickly and effortlessly. Yet little is known about how young children visually categorize naturalistic structures, since their perceptual abilities are still developing. We addressed this question by asking 76 children (age: 4.1-6.1 years) and 72 adults (age: 18-50 years) first to sort cards with greyscale images depicting vegetation, manmade artifacts, and non-living natural elements (e.g., stones) into groups according to visual similarity, and then to choose the images' superordinate categories. We analyzed the relevance of different visual properties to the decisions of the participant groups. Children were well able to interpret complex visual structures. However, children relied on fewer visual properties than adults and, in general, were less likely to base their categorization decisions on properties that afford the analysis of detailed visual information, suggesting that immaturities of the still-developing visual system affected categorization. Moreover, when sorting according to visual similarity, both groups attended to the images' assumed superordinate categories, in particular to vegetation, in addition to visual properties. When overall performance differences were controlled for, children showed a higher relative sensitivity to vegetation than adults in the classification task. Taken together, these findings add to the sparse literature on the role of developing perceptual abilities in processing naturalistic visual input.


Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 148-148
Author(s):  
B J Stankiewicz ◽  
J E Hummel

Researchers in the field of visual perception have dedicated a great deal of effort to understanding how humans recognise known objects from novel viewpoints (often referred to as shape constancy). This research has produced a variety of theories: some emphasise the use of invariant representations, while others emphasise alignment processes used in conjunction with viewpoint-specific representations. Although researchers disagree on the specifics of the representations and processes used during human object recognition, most agree that achieving shape constancy is computationally expensive; that is, it requires work. If attention is assumed to provide the necessary resources for these computations, these theories suggest that recognition with attention should be qualitatively different from recognition without attention. Specifically, recognition with attention should be more invariant with viewpoint than recognition without attention. We recently reported a series of experiments, using a response-time priming paradigm in which attention and viewpoint were manipulated, showing that attention is necessary for generating a representation of shape that is invariant with left-right reflection. We now report new experiments showing that the shape representation activated without attention is not completely view-specific. These experiments demonstrate that the automatic shape representation is invariant with the size and location of an image in the visual field. The results are reported in the context of a recent model proposed by Hummel and Stankiewicz (Attention and Performance 16, in press), as well as other models of human object recognition that make explicit predictions about the role of attention in generating a viewpoint-invariant representation of object shape.


2000 ◽  
Vol 17 (1) ◽  
pp. 77-89 ◽  
Author(s):  
ROSARIO M. BALBOA ◽  
NORBERTO M. GRZYWACZ

Lateral inhibition is one of the first and most important stages of visual processing. The literature contains at least four information-theoretic accounts of the role of early retinal lateral inhibition. They are based on the spatial redundancy of natural images and the advantage of removing this redundancy from the visual code. Here, we contrast these theories with data from the retina's outer plexiform layer. The extent of the horizontal cells' lateral inhibition displays a bell-shaped behavior as a function of background luminance, whereas all the theories predict a fall as luminance increases. It is remarkable that different theories predict the same luminance behavior, explaining "half" of the biological data. We argue that the main reason is how these theories deal with photon-absorption noise. At dim light levels, for which this noise is relatively large, large receptive fields would increase the signal-to-noise ratio through averaging. Unfortunately, such an increase at low luminance levels may smooth out basic visual information in natural images. To explain the biological behavior, we describe an alternative hypothesis, which proposes that the role of early visual lateral inhibition is to deal with noise without missing relevant cues from the visual world, most prominently the occlusion boundaries between objects.
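The averaging argument, that pooling photon catches over a larger receptive field raises the signal-to-noise ratio under photon-absorption noise, can be checked numerically. The luminance values and pool sizes below are illustrative assumptions; for Poisson noise, pooling n receptors multiplies the SNR by roughly √n, so the gain in absolute SNR matters most when the baseline SNR is low, as at dim light levels.

```python
# Numeric sketch of the averaging argument: under Poisson photon noise,
# pooling over a larger receptive field raises the signal-to-noise ratio.
import numpy as np

rng = np.random.default_rng(3)

def snr(mean_photons, pool_size, n_trials=20000):
    # Each photoreceptor's catch is Poisson; the receptive field averages
    # over pool_size receptors. SNR = mean / std of the pooled response.
    counts = rng.poisson(mean_photons, size=(n_trials, pool_size)).mean(axis=1)
    return counts.mean() / counts.std()

dim, bright = 2.0, 200.0
# Pooling 25 receptors multiplies SNR ~5x at either luminance, but only at
# dim light is the unpooled SNR low enough for this to be decisive
print(snr(dim, 1), snr(dim, 25))
print(snr(bright, 1), snr(bright, 25))
```

The trade-off the abstract describes follows directly: the same pooling that rescues the signal at dim light also blurs fine spatial structure, such as occlusion boundaries.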


2017 ◽  
Vol 34 ◽  
Author(s):  
ELIZABETH Y. LITVINA ◽  
CHINFEI CHEN

Abstract: The thalamocortical (TC) relay neuron of the dorsal lateral geniculate nucleus (dLGN) has borne its imprecise label for many decades in spite of strong evidence that its role in visual processing transcends the implied simplicity of the term "relay". The retinogeniculate synapse is the site of communication between a retinal ganglion cell and a TC neuron of the dLGN. Activation of retinal fibers in the optic tract causes reliable, rapid, and robust postsynaptic potentials that drive postsynaptic spikes in a TC neuron. Cortical and subcortical modulatory systems have been known for decades to regulate retinogeniculate transmission. The dynamic properties that the retinogeniculate synapse itself exhibits during and after developmental refinement further enrich the role of the dLGN in the transmission of the retinal signal. Here we consider the structural and functional substrates of retinogeniculate synaptic transmission and plasticity, and reflect on how the complexity of the retinogeniculate synapse imparts a novel, dynamic, and influential capacity to subcortical processing of visual information.


Scientifica ◽  
2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Ruchi Kothari ◽  
Pradeep Bokariya ◽  
Smita Singh ◽  
Ramji Singh

Visual information is fundamental to how we appreciate our environment and interact with others. The visual evoked potential (VEP) is a bioelectric signal generated in the striate and extrastriate cortex when the retina is stimulated with light, and it can be recorded from scalp electrodes. In the current paper, we provide an overview of the various modalities, techniques, and methodologies that have been employed for visual evoked potentials over the years. In the first part of the paper, we cast a cursory glance at the historical aspects of evoked potentials. The growing clinical significance and advantages of VEPs in clinical disorders are then briefly described, followed by a discussion of earlier and currently available methods for VEPs based on past and recent studies. Next, we mention the standards and protocols laid down by the authorized agencies, and we summarize recently developed techniques for VEP. In the concluding section, we lay out prospective research directives related to fundamental and applied aspects of VEPs, as well as perspectives for further research to stimulate inquiry into the role of visual evoked potentials in disorders involving impaired visual processing.


2018 ◽  
Vol 4 (1) ◽  
pp. 311-336 ◽  
Author(s):  
Yaoda Xu

Visual information processing must satisfy two opposing needs: the need to comprehend the richness of the visual world and the need to extract only the visual information pertinent to thoughts and behavior at a given moment. I argue that these two aspects of visual processing are mediated by two complementary visual systems in the primate brain: the occipitotemporal cortex (OTC) and the posterior parietal cortex (PPC). The role of OTC in visual processing has been documented extensively by decades of neuroscience research. Here I review recent evidence from human imaging and monkey neurophysiology studies to highlight the role of PPC in adaptive visual processing. I first document the diverse array of visual representations found in PPC. I then describe the adaptive nature of visual representation in PPC by contrasting visual processing in OTC and PPC and by showing that visual representations in PPC largely originate from OTC.

