Attention is required for knowledge-based sequential grouping of syllables into words

2017 ◽  
Author(s):  
Nai Ding ◽  
Xunyi Pan ◽  
Cheng Luo ◽  
Naifei Su ◽  
Wen Zhang ◽  
...  

Abstract: How the brain sequentially groups sensory events into temporal chunks, and how this process is modulated by attention, are fundamental questions in cognitive neuroscience. Sequential grouping includes bottom-up primitive grouping and top-down knowledge-based grouping. In speech perception, grouping acoustic features into syllables can rely on bottom-up acoustic continuity cues, but grouping syllables into words critically relies on the listener’s lexical knowledge. This study investigates whether top-down attention is required to apply lexical knowledge to group syllables into words, by concurrently monitoring neural entrainment to syllables and words using electroencephalography (EEG). When attention is directed to a competing speech stream or cross-modally to a silent movie, neural entrainment to syllables is weakened, but neural entrainment to words largely diminishes. These results strongly suggest that knowledge-based grouping of syllables into words requires top-down attention and is a bottleneck for the neural processing of unattended speech.
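The frequency-tagging logic behind this kind of EEG measurement can be illustrated with a small sketch. All numbers here are illustrative assumptions (the study's actual rates and analysis pipeline may differ): syllables arrive at a fixed 4 Hz rate and two-syllable words therefore recur at 2 Hz, so entrainment to each level shows up as spectral power at the corresponding frequency.

```python
import numpy as np

# Hypothetical rates: 4 syllables/s grouped into 2-syllable words (2 Hz word rate).
fs = 250           # sampling rate in Hz
dur = 50.0         # seconds of simulated EEG
t = np.arange(0, dur, 1 / fs)

rng = np.random.default_rng(0)
syllable = np.sin(2 * np.pi * 4.0 * t)      # syllable-rate response (acoustic)
word = 0.5 * np.sin(2 * np.pi * 2.0 * t)    # word-rate response (knowledge-based)
eeg = syllable + word + rng.normal(0.0, 1.0, t.size)

# Frequency-tagged spectrum: entrainment appears as peaks at 2 Hz and 4 Hz.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

def power_at(f_target):
    """Spectral power in the bin closest to f_target (Hz)."""
    return spectrum[np.argmin(np.abs(freqs - f_target))]

# Power at the tagged rates should exceed power in neighbouring bins.
print(power_at(4.0) > power_at(3.5), power_at(2.0) > power_at(2.5))
```

On this logic, attention manipulations that abolish word-level grouping would flatten the 2 Hz peak while leaving a (weakened) 4 Hz peak.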

2001 ◽  
Vol 39 (2-3) ◽  
pp. 137-150 ◽  
Author(s):  
S Karakaş ◽  
C Başar-Eroğlu ◽  
Ç Özesmi ◽  
H Kafadar ◽  
Ö.Ü Erzengin
Keyword(s):  
Top Down ◽  

Author(s):  
Martin V. Butz ◽  
Esther F. Kutter

While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, our brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with the available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstract encodings to more concrete encodings. Bayesian information processing is the key to understanding how information integration must work computationally, at least approximately, in the brain. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are well-suited to integrating multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point further. Finally, some well-known visual illusions are shown, and the resulting perceptions are explained by means of generative, information-integrating perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available bottom-up visual information.
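The core computation this chapter describes, integrating a top-down expectation with bottom-up evidence, has a closed form in the simplest Gaussian case. The sketch below is a minimal illustration of precision-weighted fusion for a single scalar feature, not the authors' model; all numbers are illustrative.

```python
# Precision-weighted fusion of a top-down prior with bottom-up sensory evidence
# for a scalar feature (a minimal Bayesian sketch; numbers are illustrative).
def fuse(mu_prior, var_prior, mu_obs, var_obs):
    """Posterior mean and variance of a Gaussian prior times a Gaussian likelihood."""
    w = var_obs / (var_prior + var_obs)         # weight given to the prior
    mu_post = w * mu_prior + (1 - w) * mu_obs   # precision-weighted average
    var_post = 1.0 / (1.0 / var_prior + 1.0 / var_obs)  # precisions add
    return mu_post, var_post

# Reliable evidence (small var_obs) dominates; vague evidence defers to the prior.
print(fuse(0.0, 1.0, 2.0, 0.25))  # -> (1.6, 0.2)
```

The posterior variance is always smaller than either input variance, which is one way of stating why integrating the two sources beats using either alone.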


2013 ◽  
Vol 09 (02) ◽  
pp. 1350010 ◽  
Author(s):  
MATTEO CACCIOLA ◽  
GIANLUIGI OCCHIUTO ◽  
FRANCESCO CARLO MORABITO

Many computer vision problems consist of producing a suitable content description of an image, usually with the aim of extracting its relevant information content. In the case of images representing paintings or artworks, the extracted information is rather subject-dependent, thus escaping any universal quantification. However, we have proposed a complexity measure for such works that is related to brain processing. Artistic complexity measures the brain's inability to categorize the complex nonsense forms represented in modern art, in a dynamic process of acquisition that mostly involves top-down mechanisms. Here, we compare the quantitative results of our analysis on a wide set of paintings by various artists to the cues extracted by a standard bottom-up approach based on the concept of visual saliency. In inspecting a painting, the brain searches for the more informative areas at different scales, then connects them in an attempt to capture the full information content. Artistic complexity is able to quantify information that might otherwise be lost on an individual human observer, thus identifying the artistic hand. Visual saliency highlights the most salient areas of a painting, those standing out from their neighbours and grabbing our attention. Nevertheless, we will show that comparing the ways the two algorithms act reveals some interesting links, ultimately indicating an interplay between bottom-up and top-down modalities.
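The bottom-up baseline referred to here, visual saliency, is commonly built on center-surround contrast. The following stand-in sketch (not the paper's pipeline) marks a pixel as salient when its intensity departs from its local neighbourhood mean; the window size and test image are illustrative.

```python
import numpy as np

# Minimal center-surround saliency sketch (a stand-in, not the paper's method):
# a pixel is salient when its intensity departs from the local neighbourhood mean.
def saliency(img, k=3):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            surround = padded[i:i + k, j:j + k].mean()  # k-by-k neighbourhood
            out[i, j] = abs(img[i, j] - surround)       # center-surround contrast
    return out

img = np.zeros((5, 5))
img[2, 2] = 1.0                 # a single bright spot on a dark canvas
s = saliency(img)
print(np.unravel_index(np.argmax(s), s.shape))  # the spot is the most salient pixel
```

Real saliency models pool such contrasts over multiple scales and feature channels (intensity, colour, orientation), but the single-scale version already captures the "standing out from the neighbours" intuition.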


2021 ◽  
Author(s):  
Gabriel A. Nespoli

Music has a long history of being associated with movement synchronization such as foot-tapping or dance. These behaviours are easier with some music than with others, and the reasons for this are not well understood. Groove is a quality of music that compels synchronous movement in the listener, and certain acoustic and musical features have been identified that contribute to a sense of groove. Neurons have been found to entrain to the beat of music. Combining these two ideas, it is reasonable to predict that neural populations involved in movement (i.e. premotor areas) would entrain more to high-groove than to low-groove music. This dissertation explores some of the psychological, musical and acoustic aspects of music that contribute to neural entrainment in premotor areas of the brain. Study 1 investigates the effects of feelings of groove on premotor entrainment, using stimuli that were rated on extent of groove in a previous study. Study 2 investigates the effect of syncopation, a musical feature previously found to be associated with a sense of groove, on the extent of premotor entrainment and on behavioural synchronization ability. Study 3 investigates the effects of acoustic features that have been found to be related to groove and movement synchronization, such as event density and percussiveness. The pattern of results across all studies suggests that the complexity of the rhythms in the stimulus determines the extent of beat entrainment. Feelings of groove, however, are better characterized by “beat complexity”, which depends on a) the extent to which the listener perceives the beat, and b) the extent to which other rhythmic elements of the music compete with the beat. A network of brain areas integral to the perception of groove is proposed, in which activation of premotor areas enables music to drive motor output.


Author(s):  
Mariana von Mohr ◽  
Aikaterini Fotopoulou

Pain and pleasant touch have recently been classified as interoceptive modalities. This reclassification lies at the heart of long-standing debates questioning whether these modalities should be defined as sensations, on the basis of their neurophysiological specificity at the periphery, or as homeostatic emotions, on the basis of top-down convergence and modulation at the spinal and brain levels. Here, we outline the literature on the peripheral and central neurophysiology of pain and pleasant touch. We next recast this literature within a recent Bayesian predictive coding framework, namely active inference. This recasting puts forward a unifying model of the bottom-up and top-down determinants of pain and pleasant touch and of the role of social factors in modulating the salience of peripheral signals reaching the brain.


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Kendrick N Kay ◽  
Jason D Yeatman

The ability to read a page of text or recognize a person's face depends on category-selective visual regions in ventral temporal cortex (VTC). To understand how these regions mediate word and face recognition, it is necessary to characterize how stimuli are represented and how this representation is used in the execution of a cognitive task. Here, we show that the response of a category-selective region in VTC can be computed as the degree to which the low-level properties of the stimulus match a category template. Moreover, we show that during execution of a task, the bottom-up representation is scaled by the intraparietal sulcus (IPS), and that the level of IPS engagement reflects the cognitive demands of the task. These results provide an account of neural processing in VTC in the form of a model that addresses both bottom-up and top-down effects and quantitatively predicts VTC responses.
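The two-stage account in this abstract, a bottom-up template match multiplicatively scaled by top-down IPS engagement, can be caricatured in a few lines. The function and all vectors below are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

# Illustrative sketch of the abstract's account: the VTC response is the degree
# to which the stimulus matches a category template, scaled by a top-down gain
# from IPS that tracks the cognitive demands of the task.
def vtc_response(stimulus, template, ips_gain):
    match = max(0.0, float(np.dot(stimulus, template)))  # rectified template match
    return ips_gain * match                              # top-down multiplicative scaling

word_template = np.array([1.0, 0.0, 1.0])   # hypothetical low-level feature template
word_stim = np.array([0.9, 0.1, 0.8])       # stimulus resembling the template
face_stim = np.array([0.1, 1.0, 0.2])       # stimulus from another category

# A demanding task (higher gain) amplifies the same bottom-up match.
print(vtc_response(word_stim, word_template, ips_gain=1.0))
print(vtc_response(word_stim, word_template, ips_gain=2.0))
print(vtc_response(face_stim, word_template, ips_gain=2.0))
```

The key property of this form is that top-down gain rescales but does not reorder the bottom-up selectivity: the preferred category stays preferred at every task demand.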


2019 ◽  
Author(s):  
Pantelis Leptourgos ◽  
Charles-Edouard Notredame ◽  
Marion Eck ◽  
Renaud Jardri ◽  
Sophie Denève

Abstract: When facing fully ambiguous images, the brain cannot commit to a single percept and instead switches between mutually exclusive interpretations every few seconds, a phenomenon known as bistable perception. Despite years of research, there is still no consensus on whether bistability, and perception in general, is driven primarily by bottom-up or top-down mechanisms. Here, we adopted a Bayesian approach in an effort to reconcile these two theories. Fifty-five healthy participants were exposed to an adaptation of the Necker cube paradigm, in which we manipulated sensory evidence (by shadowing the cube) and prior knowledge (e.g., by varying instructions about what participants should expect to see). We found that manipulations of both sensory evidence and priors significantly affected the way participants perceived the Necker cube. However, we observed an interaction between the effect of the cue and the effect of the instructions, a finding incompatible with Bayes-optimal integration. In contrast, the data were well predicted by a circular inference model. In this model, ambiguous sensory evidence is systematically biased in the direction of current expectations, ultimately resulting in a bistable percept.
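The contrast between Bayes-optimal integration and circular inference can be shown with a toy log-odds calculation for a binary percept (e.g., cube seen from above vs. from below). This is a didactic caricature, not the fitted model from the study; the loop gain and all numbers are illustrative.

```python
import math

# Toy contrast between Bayes-optimal integration and circular inference for a
# binary percept, working in log-odds (illustrative numbers throughout).
def bayes_optimal(log_prior, log_evidence):
    # Optimal integration simply adds independent log-odds.
    return log_prior + log_evidence

def circular_inference(log_prior, log_evidence, a_loops=2.0):
    # Descending loops let the prior contaminate the evidence before it is
    # integrated, so the prior is effectively counted more than once and
    # ambiguous input gets biased toward current expectations.
    amplified_evidence = log_evidence + a_loops * log_prior
    return log_prior + amplified_evidence

def p_from_logodds(l):
    return 1.0 / (1.0 + math.exp(-l))

# With fully ambiguous sensory evidence (log-odds 0), a mild prior of 0.5
# log-odds yields a much stronger belief under circular inference.
print(p_from_logodds(bayes_optimal(0.5, 0.0)))
print(p_from_logodds(circular_inference(0.5, 0.0)))
```

The over-counting of the prior is what produces the interaction between cue and instruction effects that plain additive (Bayes-optimal) integration cannot.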


2019 ◽  
Author(s):  
Nadine Dijkstra ◽  
Sander Erik Bosch ◽  
Marcel van Gerven

For decades, the extent to which visual imagery relies on similar neural mechanisms as visual perception has been a topic of debate. Here, we review recent neuroimaging studies comparing these two forms of visual experience. Their results suggest that there is large overlap in neural processing during perception and imagery: neural representations of imagined and perceived stimuli are similar in visual, parietal and frontal cortex. Furthermore, perception and imagery seem to rely on similar top-down connectivity. The most prominent difference is the absence of bottom-up processing during imagery. These findings fit well with the idea that imagery and perception rely on similar emulation or prediction processes.


Author(s):  
Benjamin Schuman ◽  
Shlomo Dellal ◽  
Alvar Prönneke ◽  
Robert Machold ◽  
Bernardo Rudy

Many of our daily activities, such as riding a bike to work or reading a book in a noisy cafe, and highly skilled activities, such as a professional playing a tennis match or a violin concerto, depend upon the ability of the brain to quickly make moment-to-moment adjustments to our behavior in response to the results of our actions. Particularly, they depend upon the ability of the neocortex to integrate the information provided by the sensory organs (bottom-up information) with internally generated signals such as expectations or attentional signals (top-down information). This integration occurs in pyramidal cells (PCs) and their long apical dendrite, which branches extensively into a dendritic tuft in layer 1 (L1). The outermost layer of the neocortex, L1 is highly conserved across cortical areas and species. Importantly, L1 is the predominant input layer for top-down information, relayed by a rich, dense mesh of long-range projections that provide signals to the tuft branches of the PCs. Here, we discuss recent progress in our understanding of the composition of L1 and review evidence that L1 processing contributes to functions such as sensory perception, cross-modal integration, controlling states of consciousness, attention, and learning. Expected final online publication date for the Annual Review of Neuroscience, Volume 44 is July 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.

