Enhanced instructed fear learning in delusion-proneness

2018 ◽  
Author(s):  
Anaïs Louzolo ◽  
Rita Almeida ◽  
Marc Guitart-Masip ◽  
Malin Björnsdotter ◽  
Martin Ingvar ◽  
...  

Psychosis is characterized by distorted perceptions and deficient low-level learning, including reward learning and fear conditioning. This has been interpreted as reflecting imprecise priors in a predictive coding system. However, this idea is not compatible with the formation of overly strong beliefs and delusions in psychosis-associated states. A reconciliation of these paradoxical observations is that these individuals actively develop and use higher-order beliefs to interpret a chaotic environment. In the present behavioural and fMRI study, we compared delusion-prone individuals (n=20), a trait related to psychotic disorders, with controls (n=23; n=20 in the fMRI part) to study the effect of beliefs on fear learning. We show that instructed fear learning, involving explicit change of beliefs and an associated activation of lateral orbitofrontal cortex, is expressed to a higher degree in delusion-prone subjects. Our results suggest that strong high-level top-down learning co-exists with previously reported weak low-level bottom-up learning in psychosis-associated states.

2021 ◽  
Vol 11 (12) ◽  
pp. 1581
Author(s):  
Alexis E. Whitton ◽  
Kathryn E. Lewandowski ◽  
Mei-Hua Hall

Motivational and perceptual disturbances co-occur in psychosis and have been linked to aberrations in reward learning and sensory gating, respectively. Although traditionally studied independently, when viewed through a predictive coding framework, these processes can both be linked to dysfunction in striatal dopaminergic prediction error signaling. This study examined whether reward learning and sensory gating are correlated in individuals with psychotic disorders, and whether nicotine—a psychostimulant that amplifies phasic striatal dopamine firing—is a common modulator of these two processes. We recruited 183 patients with psychotic disorders (79 schizophrenia, 104 psychotic bipolar disorder) and 129 controls and assessed reward learning (behavioral probabilistic reward task), sensory gating (P50 event-related potential), and smoking history. Reward learning and sensory gating were correlated across the sample. Smoking influenced reward learning and sensory gating in both patient groups; however, the effects were in opposite directions. Specifically, smoking was associated with improved performance in individuals with schizophrenia but impaired performance in individuals with psychotic bipolar disorder. These findings suggest that reward learning and sensory gating are linked and modulated by smoking. However, disorder-specific associations with smoking suggest that nicotine may expose pathophysiological differences in the architecture and function of prediction error circuitry in these overlapping yet distinct psychotic disorders.
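The striatal dopaminergic prediction-error signaling invoked above is typically formalized as a delta-rule update, in which learning is driven by the gap between expected and received reward. A minimal Rescorla-Wagner style sketch (not the probabilistic reward task or model used in the study; the reward stream and learning rate are illustrative):

```python
# Minimal prediction-error learning: expectation moves toward observed reward.
def update_value(v, reward, alpha=0.1):
    """Update an expected value v toward an observed reward.

    delta is the reward prediction error; alpha scales its impact.
    """
    delta = reward - v          # prediction error
    return v + alpha * delta

v = 0.0                          # initial expectation
for r in [1, 1, 0, 1, 1]:        # an illustrative, mostly-rewarded stream
    v = update_value(v, r)
# v has climbed toward the stream's reward rate
```

Phasic dopamine firing is commonly interpreted as carrying the `delta` term, which is why a psychostimulant that amplifies it could modulate both reward learning and gating.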


2021 ◽  
Vol 14 ◽  
Author(s):  
Alexander Asilador ◽  
Daniel A. Llano

It has become widely accepted that humans use contextual information to infer the meaning of ambiguous acoustic signals. In speech, for example, high-level semantic, syntactic, or lexical information shapes our understanding of a phoneme buried in noise. Most current theories to explain this phenomenon rely on hierarchical predictive coding models involving a set of Bayesian priors emanating from high-level brain regions (e.g., prefrontal cortex) that are used to influence processing at lower levels of the cortical sensory hierarchy (e.g., auditory cortex). As such, virtually all proposed models to explain top-down facilitation are focused on intracortical connections, and consequently, subcortical nuclei have scarcely been discussed in this context. However, subcortical auditory nuclei receive massive, heterogeneous, and cascading descending projections at every level of the sensory hierarchy, and activation of these systems has been shown to improve speech recognition. It is not yet clear whether or how top-down modulation to resolve ambiguous sounds calls upon these corticofugal projections. Here, we review the literature on top-down modulation in the auditory system, primarily focused on humans and cortical imaging/recording methods, and attempt to relate these findings to a growing animal literature, which has primarily been focused on corticofugal projections. We argue that corticofugal pathways contain the requisite circuitry to implement predictive coding mechanisms to facilitate perception of complex sounds and that top-down modulation at early (i.e., subcortical) stages of processing complements modulation at later (i.e., cortical) stages of processing. Finally, we suggest experimental approaches for future studies on this topic.
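The "Bayesian priors" at the heart of these models amount to multiplying a context-derived prior over interpretations by a noisy sensory likelihood and renormalizing. A toy sketch (the phoneme categories and probabilities are invented for illustration, not drawn from the review):

```python
# Toy Bayesian cue combination: context disambiguates a noisy phoneme.
def posterior(prior, likelihood):
    """Combine a prior over categories with a sensory likelihood (Bayes' rule)."""
    unnorm = {k: prior[k] * likelihood[k] for k in prior}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

# The acoustics alone are ambiguous between /b/ and /p/ ...
likelihood = {"b": 0.5, "p": 0.5}
# ... but sentence context (a high-level prior) favours /b/.
prior = {"b": 0.8, "p": 0.2}
post = posterior(prior, likelihood)
```

The open anatomical question is where this multiplication is implemented: the review argues the descending corticofugal projections provide candidate circuitry for applying the prior at subcortical stages.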


2016 ◽  
Author(s):  
Biao Han ◽  
Rufin VanRullen

Predictive coding is an influential model emphasizing interactions between feedforward and feedback signals. Here, we investigated its temporal dynamics. Two gray disks with different versions of the same stimulus, one enabling predictive feedback (a 3D-shape) and one impeding it (random-lines), were simultaneously presented on the left and right of fixation. Human subjects judged the luminance of the two disks while EEG was recorded. Independently of the spatial response (left/right), we found that the choice of 3D-shape or random-lines as the brighter disk (our measure of post-stimulus predictive coding efficiency on each trial) fluctuated along with the pre-stimulus phase of two spontaneous oscillations: a ~5Hz oscillation in contralateral frontal electrodes and a ~16Hz oscillation in contralateral occipital electrodes. This pattern of results demonstrates that predictive coding is a rhythmic process, and suggests that it could take advantage of faster oscillations in low-level areas and slower oscillations in high-level areas.


Author(s):  
Le Dong ◽  
Ebroul Izquierdo ◽  
Shuzhi Ge

In this chapter, research on visual information classification based on biologically inspired visually selective attention with knowledge structuring is presented. The research objective is to develop visual models and corresponding algorithms to automatically extract features from selected essential areas of natural images and, finally, to achieve knowledge structuring and classification within a structural description scheme. The proposed scheme consists of three main aspects: biologically inspired visually selective attention, knowledge structuring, and classification of visual information. Biologically inspired visually selective attention closely follows the mechanisms of the visual “what” and “where” pathways in the human brain. The proposed visually selective attention model uses a bottom-up approach to generate essential areas based on low-level features extracted from natural images. This model also exploits a low-level top-down selective attention mechanism that selects objects of interest through human interaction expressing preference or rejection. Knowledge structuring automatically creates a relevance map from the essential areas generated by visually selective attention. The developed algorithms derive a set of well-structured representations from the low-level description to drive the final classification. The knowledge structuring relies on human knowledge to produce suitable links between low-level descriptions and high-level representations on a limited training set. The backbone is a distribution mapping strategy involving two novel modules: structured low-level feature extraction using a convolutional neural network, and topology preservation based on sparse representation and an unsupervised learning algorithm. Classification is achieved by simulating high-level top-down visual information perception and classification using an incremental Bayesian parameter estimation method.
The utility of the proposed scheme for solving relevant research problems is validated. The proposed modular architecture offers straightforward expansion to include user relevance feedback, contextual input, and multimodal information if available.
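Incremental Bayesian parameter estimation, as named in the classification step, can be illustrated with the simplest conjugate case: a Beta-Bernoulli update, where each new labelled example refines the posterior without revisiting earlier data. This is only a sketch of the general idea, not the chapter's actual estimator:

```python
# Incremental Bayesian estimation via a conjugate Beta-Bernoulli update:
# each observation updates the posterior in O(1), with no stored history.
class BetaBernoulli:
    def __init__(self, a=1.0, b=1.0):   # uniform Beta(1, 1) prior
        self.a, self.b = a, b

    def update(self, x):                # x is a binary class indicator
        if x:
            self.a += 1
        else:
            self.b += 1

    def mean(self):                     # posterior mean of the class probability
        return self.a / (self.a + self.b)

model = BetaBernoulli()
for x in [1, 1, 0, 1]:                  # observations arrive one at a time
    model.update(x)
```

The incremental character is the point: the posterior after each example serves as the prior for the next, which is what lets such a classifier absorb new training data without retraining from scratch.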


2020 ◽  
Author(s):  
Alexander R. Asilador ◽  
Daniel Adolfo Llano

It has become widely accepted that humans use contextual information to infer the meaning of ambiguous acoustic signals. In speech, for example, high-level semantic, syntactic, or lexical information shapes our understanding of a phoneme buried in noise. Most current theories to explain this phenomenon rely on hierarchical predictive coding models involving a set of Bayesian priors emanating from high-level brain regions (e.g., prefrontal cortex) that are used to influence processing at lower levels of the sensory hierarchy (e.g., auditory cortex). As such, virtually all proposed models to explain top-down facilitation are focused on the cerebral cortex, and consequently subcortical nuclei have scarcely been discussed in this context. However, subcortical auditory nuclei receive massive, heterogeneous, and cascading descending projections at every level of the sensory hierarchy, and activation of these systems has been shown to improve speech recognition. It is not yet clear whether or how top-down modulation to resolve ambiguous sounds calls upon these corticofugal projections. Here, we review the literature on top-down modulation in the auditory system, primarily focused on humans and cortical imaging/recording methods, and attempt to relate these findings to a growing animal literature, which has primarily been focused on corticofugal projections. We argue that corticofugal pathways contain the requisite circuitry to implement predictive coding mechanisms to facilitate perception of complex sounds and that top-down modulation at early stages of processing complements modulation at later (i.e., cortical) stages of processing. Finally, we suggest experimental approaches for future studies on this topic.


Author(s):  
Xue Dong ◽  
Mingxia Zhang ◽  
Bo Dong ◽  
Yi Jiang ◽  
Min Bao

Reward has significant impacts on behavior and perception. Most past work on associative reward learning has used distinct visual cues to associate with different reward values. Thus, it remains unknown to what extent the learned associations depend on consciousness. Here we resolved this issue by using an inter-ocular suppression paradigm with the monetary rewarding and non-rewarding cues identical to each other except for the eye-of-origin. Thus, the reward coding system cannot rely on consciousness to select the reward-associated cue. Surprisingly, the targets in the rewarded eye broke into awareness faster than those in the non-rewarded eye. We further revealed that producing this effect required both attention and inter-ocular suppression. These findings suggest that the human reward coding system can produce two different types of reward-based learning. One is independent of consciousness yet consumes considerable attentional resources. The other results from volitional selections guided by top-down attention.


Author(s):  
Alan Wee-Chung Liew ◽  
Ngai-Fong Law

With the rapid growth of the Internet and multimedia systems, the use of visual information has increased enormously, such that indexing and retrieval techniques have become important. Historically, images are usually manually annotated with metadata such as captions or keywords (Chang & Hsu, 1992). Image retrieval is then performed by searching for images with similar keywords. However, the keywords used may differ from one person to another. Also, many keywords can be used to describe the same image. Consequently, retrieval results are often inconsistent and unreliable. Due to these limitations, there is a growing interest in content-based image retrieval (CBIR). These techniques extract meaningful information or features from an image so that images can be classified and retrieved automatically based on their contents. Existing image retrieval systems such as QBIC and Virage extract so-called low-level features such as color, texture, and shape from an image in the spatial domain for indexing. Low-level features sometimes fail to represent high-level semantic image features, as they are subjective and depend greatly upon user preferences. To bridge the gap, a top-down retrieval approach involving high-level knowledge can complement these low-level features. This article deals with various aspects of CBIR, including bottom-up feature-based image retrieval in both the spatial and compressed domains, as well as top-down task-based image retrieval using prior knowledge.
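The low-level color features that QBIC-style systems index on can be sketched as a normalized color histogram compared by histogram intersection. The quantized "pixel" lists below are stand-ins for real image data, and the bin count is arbitrary; this illustrates the idea, not any particular system's implementation:

```python
# Color-histogram matching: a classic low-level CBIR feature.
from collections import Counter

def histogram(pixels, n_bins=4):
    """Normalized histogram over quantized color bins 0..n_bins-1."""
    counts = Counter(pixels)
    total = len(pixels)
    return [counts.get(b, 0) / total for b in range(n_bins)]

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 = identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

query = histogram([0, 0, 1, 2])   # toy "image" as quantized pixel values
match = histogram([0, 0, 1, 2])
other = histogram([3, 3, 3, 3])
```

Such a feature is cheap and rotation-invariant but carries no semantics, which is exactly the low-level/high-level gap the article's top-down, knowledge-driven retrieval aims to bridge.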


2019 ◽  
Author(s):  
Lilla Hodossy ◽  
Manos Tsakiris

The experience of one’s embodied sense of self is dependent on the integration of signals originating both from within and outwith one’s body. During the processing and integration of these signals, the bodily self must maintain a fine balance between stability and malleability. Here we investigate the potential role of autonomic responses in interoceptive processing and their contribution to the stability of the bodily self. Using a biofeedback paradigm, we manipulated the congruency of cardiac signals across two hierarchical levels: (i) the low-level congruency between a visual feedback and the participant’s own cardiac signal and (ii) the high-level congruency between the participants’ beliefs about the identity of the cardiac feedback and its true identity. We measured the effects of these manipulations on high-frequency heart rate variability (HF-HRV), a selective index of phasic vagal cardiac control. In Experiment 1, HF-HRV was sensitive to low-level congruency, independently of whether participants attempted to regulate or simply attend to the biofeedback. Experiment 2 revealed a higher-level congruency effect: veridical prior beliefs increased HF-HRV, whereas false beliefs decreased it. Our results demonstrate that autonomic changes in HF-HRV are sensitive to congruencies across multiple hierarchical levels. Our findings have important theoretical implications for predictive coding models of the self, as they pave the way for a more direct way to track subtle changes in the co-processing of the internal and external milieus.


2019 ◽  
Vol 19 (3) ◽  
pp. 1 ◽  
Author(s):  
Heiko H. Schütt ◽  
Lars O. M. Rothkegel ◽  
Hans A. Trukenbrod ◽  
Ralf Engbert ◽  
Felix A. Wichmann
