Cross-modal and non-monotonic representations of statistical regularity are encoded in local neural response patterns

2018 ◽  
Author(s):  
Samuel A. Nastase ◽  
Ben Davis ◽  
Uri Hasson

Abstract Current neurobiological models assign a central role to predictive processes calibrated to environmental statistics. Neuroimaging studies examining the encoding of stimulus uncertainty have relied almost exclusively on manipulations in which stimuli were presented in a single sensory modality, and have further assumed that neural responses vary monotonically with uncertainty. This has left a gap in theoretical development with respect to two core issues: i) are there cross-modal brain systems that encode input uncertainty in a way that generalizes across sensory modalities, and ii) are there brain systems that track input uncertainty in a non-monotonic fashion? We used multivariate pattern analysis to address these two issues using auditory, visual, and audiovisual inputs. We found signatures of cross-modal encoding in frontoparietal, orbitofrontal, and association cortices using a searchlight cross-classification analysis in which classifiers trained to discriminate levels of uncertainty in one modality were tested in another modality. Additionally, we found widespread systems encoding uncertainty non-monotonically, using classifiers trained to discriminate intermediate levels of uncertainty from both the highest and lowest uncertainty levels. These findings constitute the first comprehensive report of cross-modal and non-monotonic neural sensitivity to statistical regularities in the environment, and suggest that conventional paradigms testing for monotonic responses to uncertainty in a single sensory modality may have limited generalizability.
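The searchlight cross-classification logic described above, training a classifier on uncertainty levels in one modality and testing it in another, can be sketched with synthetic data. Everything below (trial counts, the two-level design, the shared signal pattern) is a hypothetical illustration, not the study's actual pipeline:

```python
# Minimal sketch of cross-modal cross-classification (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 27                 # e.g., one 3x3x3 searchlight
levels = np.repeat([0, 1], n_trials // 2)   # two uncertainty levels

# Synthetic patterns: a shared uncertainty signal plus modality-specific noise.
signal = rng.standard_normal(n_voxels)
auditory = np.outer(levels, signal) + rng.standard_normal((n_trials, n_voxels))
visual = np.outer(levels, signal) + rng.standard_normal((n_trials, n_voxels))

clf = LogisticRegression(max_iter=1000).fit(auditory, levels)  # train: auditory
acc = clf.score(visual, levels)                                # test: visual
print(f"cross-modal decoding accuracy: {acc:.2f}")
```

Above-chance accuracy on the held-out modality is the signature of a modality-general code; in practice this is repeated in every searchlight and both train/test directions.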

2020 ◽  
Author(s):  
Zhiyao Gao ◽  
Li Zheng ◽  
Rocco Chiou ◽  
André Gouws ◽  
Katya Krieger-Redwood ◽  
...  

Abstract The flexible retrieval of knowledge is critical in everyday situations involving problem solving, reasoning, and social interaction. Current theories have emphasised the importance of a left-lateralised semantic control network (SCN) in supporting flexible semantic behaviour. Apart from the SCN, a bilateral multiple-demand network (MDN) is implicated in executive functions across domains. No study, however, has examined whether semantic and non-semantic demands are reflected in a common neural code within regions specifically implicated in semantic control. Using functional MRI with univariate parametric modulation analysis as well as multivariate pattern analysis, we found that semantic and non-semantic demands gave rise to both similar and distinct neural responses across control-related networks. Although the difficulty of both semantic and verbal working memory decisions could be decoded from activity patterns in SCN and MDN, there was no shared neural coding of cognitive demands in SCN. In contrast, regions in MDN showed common patterns across manipulations of semantic and working memory control demands, with successful cross-classification of difficulty across tasks. Therefore, SCN and MDN can be dissociated according to the information they maintain about cognitive demands.


2016 ◽  
Author(s):  
Ayan Sengupta ◽  
Renat Yakupov ◽  
Oliver Speck ◽  
Stefan Pollmann ◽  
Michael Hanke

Abstract A decade after it was shown that the orientation of visual grating stimuli can be decoded from human visual cortex activity by means of multivariate pattern classification of BOLD fMRI data, numerous studies have investigated which aspects of neuronal activity are reflected in BOLD response patterns and are accessible for decoding. However, the effect of acquisition resolution on BOLD fMRI decoding analyses remains inconclusive. The present study is the first to provide empirical ultra-high-field fMRI data recorded at four spatial resolutions (0.8 mm, 1.4 mm, 2 mm, and 3 mm isotropic voxel size) on this topic, in order to test hypotheses on the strength and spatial scale of orientation-discriminating signals. We present a detailed analysis, in line with predictions from previous simulation studies, of how the performance of orientation decoding varies across acquisition resolutions. Moreover, we examine different spatial filtering procedures and their effects on orientation decoding. We show that higher-resolution scans with subsequent down-sampling or low-pass filtering yield no benefit in decoding accuracy over scans natively recorded at the corresponding lower resolution. The orientation-related signal in the BOLD fMRI data is spatially broadband in nature, comprising both high-spatial-frequency components and the large-scale biases previously proposed in the literature. Moreover, we found an above-chance contribution from large draining veins to orientation decoding. The acquired raw data were publicly released to facilitate further investigation.
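The two preprocessing routes compared above, low-pass filtering and down-sampling of a high-resolution pattern, can be illustrated with a toy example. The array sizes, FWHM value, and zoom factor below are assumptions for illustration, not the study's acquisition parameters:

```python
# Minimal sketch: smooth then down-sample a high-resolution "pattern" slice.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

rng = np.random.default_rng(1)
highres = rng.standard_normal((64, 64))   # toy high-resolution pattern slice

# Low-pass filter: convert a smoothing FWHM (in voxels) to a Gaussian sigma.
fwhm = 3.0
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
smoothed = gaussian_filter(highres, sigma=sigma)

# Down-sample to a 4x coarser grid, approximating a natively lower resolution.
downsampled = zoom(smoothed, 0.25, order=1)
print(highres.shape, smoothed.shape, downsampled.shape)
```

In a decoding comparison, patterns from each route would then feed the same classifier, so that any accuracy difference can be attributed to the spatial scale of the retained signal.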


2014 ◽  
Vol 111 (1) ◽  
pp. 82-90 ◽  
Author(s):  
Daniel Kaiser ◽  
Lukas Strnad ◽  
Katharina N. Seidl ◽  
Sabine Kastner ◽  
Marius V. Peelen

Visual cues from the face and the body provide information about another's identity, emotional state, and intentions. Previous neuroimaging studies that investigated neural responses to (bodiless) faces and (headless) bodies have reported overlapping face- and body-selective brain regions in the right fusiform gyrus (FG). In daily life, however, faces and bodies are typically perceived together and are effortlessly integrated into the percept of a whole person, raising the possibility that neural responses to whole persons are qualitatively different from responses to isolated faces and bodies. The present study used fMRI to examine how FG activity in response to a whole person relates to activity in response to the same face and body presented in isolation. Using multivoxel pattern analysis, we modeled person-evoked response patterns in right FG as a linear combination of face- and body-evoked response patterns. We found that these synthetic patterns accurately approximated the response patterns to whole persons, with face and body patterns each adding unique information to the response patterns evoked by whole-person stimuli. These results suggest that whole-person responses in FG primarily arise from the coactivation of independent face- and body-selective neural populations.
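The pattern-synthesis idea, approximating the person-evoked pattern as a weighted sum of face- and body-evoked patterns, can be sketched as a least-squares fit on synthetic data; the voxel count, mixing weights, and noise level below are hypothetical:

```python
# Minimal sketch: fit person pattern = w1*face + w2*body by least squares.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 200
face = rng.standard_normal(n_voxels)
body = rng.standard_normal(n_voxels)
# Toy "person" pattern: a mix of both component patterns plus noise.
person = 0.6 * face + 0.4 * body + 0.3 * rng.standard_normal(n_voxels)

X = np.column_stack([face, body])            # design: the two component patterns
weights, *_ = np.linalg.lstsq(X, person, rcond=None)
synthetic = X @ weights                      # best linear combination
fit = np.corrcoef(synthetic, person)[0, 1]   # how well it approximates
print(f"weights: {weights.round(2)}, fit r = {fit:.2f}")
```

A high fit correlation, with both weights contributing, is the toy analogue of face and body patterns each adding unique information to the whole-person response.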



2021 ◽  
Author(s):  
Veronika Vilgis ◽  
Debbie Yee ◽  
Tim J. Silk ◽  
Alasdair Vance

Abstract Working memory deficits are common in attention-deficit/hyperactivity disorder (ADHD) and depression, two common neurodevelopmental disorders with overlapping cognitive profiles but distinct clinical presentations. Multivariate techniques have previously been utilized to understand working memory processes in functional brain networks in healthy adults, but have not yet been applied to investigate how working memory processes within the same networks differ between typically and atypically developing populations. We used multivariate pattern analysis (MVPA) to identify whether brain networks discriminated between spatial and verbal working memory processes in ADHD and Persistent Depressive Disorder (PDD). Thirty-six male clinical participants and 19 typically developing (TD) boys completed a verbal and a spatial working memory task during an fMRI scan. Within a priori functional brain networks (frontoparietal, default mode, and salience), the TD group demonstrated differential response patterns to verbal and spatial working memory. Both clinical groups showed less differentiation than the TD group, with neural profiles suggesting that ADHD is associated with weaker differentiation in both frontoparietal and salience networks, and PDD with weaker differentiation in left frontoparietal and default mode networks. Whereas the TD group's neural profile indicates network response patterns that are sensitive to task demands, the neural profiles of the ADHD and PDD groups suggest less specificity in neural representations of spatial and verbal working memory. We highlight within-group classification as an innovative tool for understanding the neural mechanisms by which cognitive processes may deviate in clinical disorders, an important intermediary step toward improving translational psychiatry to inform clinical diagnoses and treatment.


2018 ◽  
Author(s):  
Giuseppe Notaro ◽  
Wieske van Zoest ◽  
David Melcher ◽  
Uri Hasson

Abstract A core question underlying neurobiological and computational models of behavior is how individuals learn environmental statistics and use them for making predictions. Treatment of this issue has largely relied on reactive paradigms, in which inferences about predictive processes are derived by modeling responses to stimuli that vary in likelihood. Here we deployed a novel proactive oculomotor metric to determine how input statistics impact anticipatory behavior, decoupled from stimulus-response paradigms. We implemented transition constraints between target locations and quantified a subtle fixation bias (FB) discernible while individuals fixated the screen center awaiting target presentation. We show that FB is informative with respect to the input statistics, reflects learning at different temporal scales, predicts saccade latencies at the trial level, and can be linked to fundamental oculomotor metrics. We also present an extension of this approach to a more complex paradigm. Our work demonstrates how learning impacts strictly predictive processes and presents a novel direction for studying learning and prediction.


2020 ◽  
Vol 30 (11) ◽  
pp. 5792-5805 ◽  
Author(s):  
Shiri Makov ◽  
Elana Zion Golumbic

Abstract Dynamic attending theory suggests that predicting the timing of upcoming sounds can assist in focusing attention toward them. However, whether similar predictive processes are also applied to background noises and assist in guiding attention “away” from potential distractors remains an open question. Here we address this question by manipulating the temporal predictability of distractor sounds in a dichotic-listening selective attention task. We tested the influence of distractors’ temporal predictability on performance and on the neural encoding of sounds by comparing the effects of Rhythmic versus Nonrhythmic distractors. Using magnetoencephalography, we found that the neural responses to both attended and distractor sounds were indeed affected by distractors’ rhythmicity. Baseline activity preceding the onset of Rhythmic distractor sounds was enhanced relative to Nonrhythmic distractor sounds, and the sensory response to them was suppressed. Moreover, detection of non-masked targets improved when distractors were Rhythmic, an effect accompanied by stronger lateralization of the neural responses to attended sounds to the contralateral auditory cortex. These combined behavioral and neural results suggest not only that temporal predictions are formed for task-irrelevant sounds, but that these predictions bear functional significance for promoting selective attention and reducing distractibility.


2007 ◽  
Vol 97 (6) ◽  
pp. 4235-4257 ◽  
Author(s):  
Mark M. Churchland ◽  
Krishna V. Shenoy

The relationship between neural activity in motor cortex and movement is highly debated. Although many studies have examined the spatial tuning (e.g., for direction) of cortical responses, less attention has been paid to the temporal properties of individual neurons' responses. We developed a novel task, employing two instructed speeds, that allows meaningful averaging of neural responses across reaches with nearly identical velocity profiles. Doing so preserves fine temporal structure and reveals considerable complexity and heterogeneity of response patterns in primary motor and premotor cortex. Tuning for direction was prominent, but the preferred direction was frequently inconstant with respect to time, instructed speed, and/or reach distance. Response patterns were often temporally complex and multiphasic, and varied with direction and instructed speed in idiosyncratic ways. A wide variety of patterns was observed, and it was not uncommon for a neuron to exhibit a pattern shared by no other neuron in our dataset. Response patterns of individual neurons rarely, if ever, matched those of individual muscles. Indeed, the set of recorded responses spanned a much higher-dimensional space than would be expected for a model in which neural responses relate to a moderate number of factors, whether dynamic, kinematic, or otherwise. Complex responses may provide a basis set for representing many parameters. Alternatively, it may be necessary to discard the notion that responses exist to “represent” movement parameters. It has been argued that complex and heterogeneous responses are expected of a recurrent network that produces temporally patterned outputs, and the present results would seem to support this view.
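The dimensionality argument above can be illustrated by counting how many principal components are needed to capture most of the variance in a response matrix; the matrix here is synthetic, built from many latent factors, and all sizes are hypothetical:

```python
# Minimal sketch: estimate the dimensionality of a neural response matrix.
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_timepoints, n_factors = 100, 50, 20
factors = rng.standard_normal((n_factors, n_timepoints))
loadings = rng.standard_normal((n_neurons, n_factors))
responses = loadings @ factors          # each neuron mixes many latent factors

# PCA via SVD on the time-centered response matrix.
centered = responses - responses.mean(axis=1, keepdims=True)
svals = np.linalg.svd(centered, compute_uv=False)
var_explained = np.cumsum(svals**2) / np.sum(svals**2)
n_dims = int(np.searchsorted(var_explained, 0.90) + 1)
print(f"components for 90% variance: {n_dims}")
```

If responses reflected only a handful of movement parameters, a few components would suffice; a high component count, as with the many-factor matrix here, is the signature of a high-dimensional response space.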


2021 ◽  
Vol 118 (31) ◽  
pp. e2020410118
Author(s):  
Giulia Gennari ◽  
Sébastien Marti ◽  
Marie Palu ◽  
Ana Fló ◽  
Ghislaine Dehaene-Lambertz

Creating invariant representations from an ever-changing speech signal is a major challenge for the human brain. Such an ability is particularly crucial for preverbal infants, who must discover the phonological, lexical, and syntactic regularities of an extremely inconsistent signal in order to acquire language. Within the visual domain, an efficient neural solution to overcome variability consists in factorizing the input into a reduced set of orthogonal components. Here, we asked whether a similar decomposition strategy is used in early speech perception. Using a 256-channel electroencephalographic system, we recorded the neural responses of 3-mo-old infants to 120 natural consonant–vowel syllables with varying acoustic and phonetic profiles. Using multivariate pattern analyses, we show that syllables are factorized into distinct and orthogonal neural codes for consonants and vowels. Concerning consonants, we further demonstrate the existence of two stages of processing. A first phase is characterized by orthogonal and context-invariant neural codes for the dimensions of manner and place of articulation. Within the second stage, manner and place codes are integrated to recover the identity of the phoneme. We conclude that, despite the paucity of articulatory motor plans and speech production skills, pre-babbling infants are already equipped with a structured combinatorial code for speech analysis, which might account for the rapid pace of language acquisition during the first year.


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
David Richter ◽  
Floris P de Lange

Perception and behavior can be guided by predictions, which are often based on learned statistical regularities. Neural responses to expected stimuli are frequently found to be attenuated after statistical learning. However, whether this sensory attenuation following statistical learning occurs automatically or depends on attention remains unknown. In the present fMRI study, we exposed human volunteers to sequentially presented object stimuli, in which the first object predicted the identity of the second object. We observed a reliable attenuation of neural activity for expected compared to unexpected stimuli in the ventral visual stream. Crucially, this sensory attenuation was only apparent when stimuli were attended, and vanished when attention was directed away from the predictable objects. These results put important constraints on neurocomputational theories that cast perception as a process of probabilistic integration of prior knowledge and sensory information.

