Category-Selective Visual Regions Have Distinctive Signatures of Connectivity in Neonates

2019 ◽  
Author(s):  
Laura Cabral ◽  
Leire Zubiaurre ◽  
Conor Wild ◽  
Annika Linke ◽  
Rhodri Cusack

Abstract
The development of the ventral visual stream is shaped both by an innate proto-organization and by experience. The fusiform face area (FFA), for example, has stronger connectivity to early visual regions representing the fovea and lower spatial frequencies. In adults, category-selective regions in the ventral stream (e.g. the FFA) also have distinct signatures of connectivity to widely distributed brain regions, which are thought to encode rich cross-modal, motoric, and affective associations (e.g., tool regions to the motor cortex). It is unclear whether this long-range connectivity is also innate, or whether it develops with experience. We used diffusion-weighted MRI with tractography to characterize the connectivity of face, place, and tool category-selective regions in neonates (N=445), 1- to 9-month-old infants (N=11), and adults (N=14). Using a set of linear-discriminant classifiers, category-selective connectivity was found to be both innate and shaped by experience. Connectivity for faces was the most developed, with no evidence of significant change over the period studied. Place and tool networks were present at birth but also showed evidence of development with experience, with tool connectivity developing over a more protracted period (9 months). Taken together, the results support an extended proto-organization that includes long-range connectivity, which could provide additional constraints on experience-dependent development.
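The classification step can be illustrated with a toy sketch on synthetic data. The study used linear-discriminant classifiers; the minimal NumPy stand-in below uses a nearest-centroid linear read-out over "connectivity fingerprints" (all data and dimensions here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "connectivity fingerprints": one vector of connection strengths
# per seed region, with face/place/tool seeds drawn around distinct means.
n_per_class, n_targets = 20, 50
means = rng.normal(0, 1, size=(3, n_targets))            # class-specific profiles
X = np.vstack([m + 0.5 * rng.normal(size=(n_per_class, n_targets)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)                    # 0=face, 1=place, 2=tool

def nearest_centroid_loo(X, y):
    """Leave-one-out accuracy of a nearest-centroid (linear) classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        cents = np.stack([X[mask & (y == c)].mean(axis=0) for c in np.unique(y)])
        pred = np.argmin(np.linalg.norm(cents - X[i], axis=1))
        correct += pred == y[i]
    return correct / len(y)

acc = nearest_centroid_loo(X, y)
print(f"leave-one-out accuracy: {acc:.2f}")   # well above chance (~0.33)
```

Above-chance held-out accuracy is what licenses the claim that each category-selective region carries a distinctive connectivity signature.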

2016 ◽  
Author(s):  
Benjamin Gagl ◽  
Fabio Richlan ◽  
Philipp Ludersdorfer ◽  
Jona Sassenhagen ◽  
Susanne Eisenhauer ◽  
...  

Abstract
To characterize the role of the left ventral occipito-temporal cortex (lvOT) during reading in a quantitatively explicit and testable manner, we propose the lexical categorization model (LCM). The LCM assumes that lvOT optimizes linguistic processing by allowing fast access to meaning when words are familiar and by filtering out orthographic strings without meaning. The LCM successfully simulates benchmark results from functional brain imaging. Empirically, using functional magnetic resonance imaging, we demonstrate that quantitative LCM simulations predict lvOT activation across three studies better than alternative models. In addition, we found that word-likeness, which the LCM assumes as its input, is represented posterior to lvOT, whereas a dichotomous word/non-word distinction, assumed as the LCM's output, could be localized to downstream frontal brain regions. Finally, we found that training lexical categorization results in more efficient reading. We therefore propose a ventral-visual-stream processing framework for reading in which word-likeness extraction is followed by lexical categorization, and then by meaning extraction.
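The model's core prediction can be sketched numerically: if lvOT implements a word/non-word categorization, its activation should track categorization difficulty, peaking at intermediate word-likeness (e.g. pseudowords) and falling off for clearly familiar words and clear non-words. A minimal sketch (the logistic mapping and its slope are illustrative assumptions, not the fitted model):

```python
import numpy as np

def p_word(likeness, slope=8.0, midpoint=0.5):
    """Illustrative logistic mapping from word-likeness to P(word)."""
    return 1.0 / (1.0 + np.exp(-slope * (likeness - midpoint)))

def categorization_uncertainty(p):
    """Binary entropy in bits: maximal when P(word) = 0.5."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

likeness = np.linspace(0, 1, 101)        # non-words ... pseudowords ... words
activation = categorization_uncertainty(p_word(likeness))

peak = likeness[np.argmax(activation)]
print(f"predicted lvOT activation peaks at word-likeness = {peak:.2f}")
```

The inverted-U profile (low for clear non-words and clear words, high in between) is the kind of quantitative prediction that distinguishes the LCM from models in which activation rises monotonically with word-likeness.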



2018 ◽  
Vol 30 (10) ◽  
pp. 1442-1451 ◽  
Author(s):  
Richard Ramsey

The perception of other people is instrumental in guiding social interactions. For example, the appearance of the human body cues a wide range of inferences regarding sex, age, health, and personality, as well as emotional state and intentions, which influence social behavior. To date, most neuroscience research on body perception has aimed to characterize the functional contribution of segregated patches of cortex in the ventral visual stream. In light of the growing prominence of network architectures in neuroscience, the current article reviews neuroimaging studies that measure functional integration between different brain regions during body perception. The review demonstrates that body perception is not restricted to processing in the ventral visual stream but instead reflects a functional alliance between the ventral visual stream and extended neural systems associated with action perception, executive functions, and theory of mind. Overall, these findings demonstrate how body percepts are constructed through interactions in distributed brain networks and underscore that functional segregation and integration should be considered together when formulating neurocognitive theories of body perception. Insight from such an updated model of body perception generalizes to inform the organizational structure of social perception and cognition more generally and also informs disorders of body image, such as anorexia nervosa, which may rely on atypical integration of body-related information.


2021 ◽  
Author(s):  
Aran Nayebi ◽  
Javier Sagastuy-Brena ◽  
Daniel M. Bear ◽  
Kohitij Kar ◽  
Jonas Kubilius ◽  
...  

The ventral visual stream (VVS) is a hierarchically connected series of cortical areas known to underlie core object recognition behaviors, enabling humans and non-human primates to effortlessly recognize objects across a multitude of viewing conditions. While recent feedforward convolutional neural networks (CNNs) provide quantitatively accurate predictions of temporally averaged neural responses throughout the ventral pathway, they lack two ubiquitous neuroanatomical features: local recurrence within cortical areas and long-range feedback from downstream to upstream areas. As a result, such models can account neither for the temporally varying dynamical patterns thought to arise from recurrent visual circuits, nor for the behavioral goals that these recurrent circuits might help support. In this work, we augment CNNs with local recurrence and long-range feedback, developing convolutional RNN (ConvRNN) network models that more faithfully mimic the gross neuroanatomy of the ventral pathway. Moreover, when the form of the recurrent circuit is chosen properly, ConvRNNs with comparatively few layers can achieve performance on a core recognition task comparable to that of much deeper feedforward networks. We then compared these models to temporally fine-grained neural and behavioral recordings from primates viewing thousands of images. We found that ConvRNNs matched these data better than alternative models, including the deepest feedforward networks, on two metrics: (1) neural dynamics in V4 and inferotemporal (IT) cortex at late timepoints after stimulus onset, and (2) the varying times at which object identity can be decoded from IT, including more challenging images that take longer to decode. Moreover, these results differentiate within the class of ConvRNNs, suggesting that there are strong functional constraints on the recurrent connectivity needed to match these phenomena. Finally, we find that recurrent circuits that attain high task performance while keeping network size small, as measured by the number of units rather than another metric such as the number of parameters, are overall most consistent with these data. Taken together, our results evince the role of recurrence and feedback in enabling the ventral pathway to reliably perform core object recognition while subject to a strong constraint on total network size.
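The architectural addition can be sketched abstractly. The toy two-"area" recurrent network below uses matrix multiplies in place of convolutions (an illustrative simplification, not the paper's ConvRNN cells), with local recurrence within each area and long-range feedback from the second area to the first:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16                                        # units per "area" (toy size)
W_ff1 = rng.normal(0, 0.3, (d, d))            # feedforward into area 1
W_ff2 = rng.normal(0, 0.2, (d, d))            # feedforward area 1 -> area 2
W_rec1 = rng.normal(0, 0.1, (d, d))           # local recurrence, area 1
W_rec2 = rng.normal(0, 0.1, (d, d))           # local recurrence, area 2
W_fb = rng.normal(0, 0.05, (d, d))            # long-range feedback, area 2 -> 1

relu = lambda z: np.maximum(z, 0.0)

def run(x, T=8):
    """Unroll the two recurrent 'areas' for T timesteps on a fixed input x."""
    h1, h2 = np.zeros(d), np.zeros(d)
    states = []
    for _ in range(T):
        # area 1: feedforward input + local recurrence + feedback from area 2
        h1 = relu(W_ff1 @ x + W_rec1 @ h1 + W_fb @ h2)
        # area 2: feedforward drive from area 1 + local recurrence
        h2 = relu(W_ff2 @ h1 + W_rec2 @ h2)
        states.append(h2.copy())
    return np.stack(states)

traj = run(rng.normal(size=d))
print(traj.shape)        # (8, 16)
```

Unlike a feedforward pass, the response to a static input is a trajectory over timesteps, which is the kind of late-timepoint dynamics the paper compares against V4 and IT recordings.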


2019 ◽  
Author(s):  
Sushrut Thorat

A mediolateral gradation in neural responses to images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, Proklova et al. (2016) dissociated the visual shape and category ("animacy") dimensions in a set of stimuli using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (the extra-visual animacy cluster, xVAC) that encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation of the ventral visual stream. We reassess these findings using convolutional neural networks (CNNs) as models of the ventral visual stream. The visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016), unlike the behavioural measures used in that study. The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of animal-image stimuli that dissociate the animacy organisation driven by CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to probe the contribution of these non-visual features are presented.


NeuroImage ◽  
2016 ◽  
Vol 128 ◽  
pp. 316-327 ◽  
Author(s):  
Marianna Boros ◽  
Jean-Luc Anton ◽  
Catherine Pech-Georgel ◽  
Jonathan Grainger ◽  
Marcin Szwed ◽  
...  

2018 ◽  
Author(s):  
Simona Monaco ◽  
Giulia Malfatti ◽  
Alessandro Zendron ◽  
Elisa Pellencin ◽  
Luca Turella

Abstract
Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor, and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based, and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual streams during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral-stream areas in visual recognition tasks and the specialization of dorsal-stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based, and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory-based, and somato-motor representation of action planning in dorsal and ventral visual stream areas, which allows action intention to be predicted across domains regardless of the availability of visual information.
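The cross-decoding logic (train on one viewing condition, test on the other) can be sketched with synthetic voxel patterns. The correlation-based classifier below is a common MVPA choice, not necessarily the one used in the study, and all data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_trials = 30, 40

# Two actions share a voxel pattern across conditions (eyes open / closed),
# plus trial noise — the situation that permits cross-condition decoding.
templates = rng.normal(0, 1, (2, n_vox))        # action 1, action 2

def make_trials(noise=1.0):
    y = rng.integers(0, 2, n_trials)
    X = templates[y] + noise * rng.normal(size=(n_trials, n_vox))
    return X, y

X_open, y_open = make_trials()        # e.g. eyes-open runs
X_closed, y_closed = make_trials()    # e.g. eyes-closed runs

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def decode(X_train, y_train, X_test):
    """Predict by correlating each test pattern with the class-mean patterns."""
    means = [X_train[y_train == c].mean(axis=0) for c in (0, 1)]
    return np.array([np.argmax([pearson(x, m) for m in means]) for x in X_test])

acc = (decode(X_open, y_open, X_closed) == y_closed).mean()
print(f"cross-condition decoding accuracy: {acc:.2f}")
```

Above-chance transfer from one condition to the other is what supports the claim of a shared, vision-independent representation of action intention.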


2014 ◽  
Vol 14 (10) ◽  
pp. 985-985 ◽  
Author(s):  
R. Lafer-Sousa ◽  
A. Kell ◽  
A. Takahashi ◽  
J. Feather ◽  
B. Conway ◽  
...  

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Thomas SA Wallis ◽  
Christina M Funke ◽  
Alexander S Ecker ◽  
Leon A Gatys ◽  
Felix A Wichmann ◽  
...  

We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated ‘Bouma’s Law’ of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.
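The scaling logic is simple to state: a pooling model assigns each receptive field a diameter proportional to its retinal eccentricity, and the paper's point is that V2-like scaling (about 0.5, consistent with 'Bouma's Law') is too coarse to produce metamers, whereas V1-like scaling is much tighter. A back-of-the-envelope sketch (the V1 scale factor here is an illustrative assumption; the essential point is only that it is considerably smaller than V2's):

```python
# Pooling-region diameter under eccentricity-proportional scaling:
# diameter(e) = s * e, where s is the model's scale factor.
def pooling_diameter(eccentricity_deg, scale):
    return scale * eccentricity_deg

V2_SCALE = 0.5    # roughly half of eccentricity, as in Bouma's Law
V1_SCALE = 0.25   # illustrative smaller value for V1-like receptive fields

for e in (2, 5, 10):   # eccentricities in degrees of visual angle
    print(f"{e:>2} deg: V2-like pool = {pooling_diameter(e, V2_SCALE):.1f} deg, "
          f"V1-like pool = {pooling_diameter(e, V1_SCALE):.2f} deg")
```

At 10 degrees eccentricity a V2-like model averages over a 5-degree window, which is why natural-image structure discarded at that scale remains easily visible to observers.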

