Robust decoding for reduced error propagation of DC/AC prediction errors

Author(s):
Ximin Zhang, A. Vetro, Huifang Sun, Yun-Qing Shi

Symmetry, 2019, Vol 11 (9), pp. 1146
Author(s):
Hu, Lo, Wu

This paper proposes a reversible data hiding technique based on residual histogram shifting. To improve hiding capacity, the study introduces a multiple-round hierarchical prediction mechanism that generates prediction errors for each image block. The prediction errors of each block are collected into a residual histogram, and the secret data are then embedded into that histogram to obtain the embedded image. Experimental results demonstrate that the proposed technique not only provides good hiding capacity but also maintains good image quality in the embedded image. In addition, the technique can be easily extended to image integrity protection, as it is capable of resisting error propagation.
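The core embedding step, shifting a residual histogram and hiding bits in its peak bin, can be sketched as follows. This is a generic illustration of histogram-shifting embedding, not the paper's method: it uses a simple left-neighbor predictor on a 1-D pixel row in place of the paper's multi-round hierarchical predictor, and the function names are hypothetical.

```python
import numpy as np

def embed(pixels, bits):
    """Embed bits into a 1-D pixel row via residual histogram shifting.
    Prediction errors e[i] = p[i+1] - p[i]; bits go into the peak bin."""
    p = pixels.astype(np.int64)
    e = np.diff(p)                         # prediction errors (left-neighbor predictor)
    vals, counts = np.unique(e, return_counts=True)
    peak = int(vals[np.argmax(counts)])    # peak of the residual histogram
    out = e.copy()
    out[e > peak] += 1                     # shift bins right of the peak to free peak+1
    k = 0
    for i in range(len(out)):
        if e[i] == peak and k < len(bits):
            out[i] = peak + bits[k]        # bit 0 -> peak, bit 1 -> peak+1
            k += 1
    # rebuild the (stego) pixel row from the modified errors;
    # overflow/underflow handling is omitted in this sketch
    return np.concatenate(([p[0]], p[0] + np.cumsum(out))), peak

def extract(stego, peak, n_bits):
    """Recover the bits and restore the original pixel row (reversibility)."""
    e = np.diff(stego.astype(np.int64))
    bits, rec = [], e.copy()
    for i in range(len(e)):
        if len(bits) < n_bits and e[i] == peak:
            bits.append(0)
        elif len(bits) < n_bits and e[i] == peak + 1:
            bits.append(1)
            rec[i] = peak
    rec[e > peak + 1] -= 1                 # undo the histogram shift
    return bits, np.concatenate(([stego[0]], stego[0] + np.cumsum(rec)))
```

Because every change to the residuals is invertible, the cover row is restored exactly after extraction, which is the defining property of reversible data hiding.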


Author(s):
Kaitao Song, Xu Tan, Jianfeng Lu

Neural machine translation (NMT) generates the next target token conditioned on the previous ground-truth target tokens during training but on the previously generated target tokens during inference. This discrepancy between training and inference causes error propagation and degrades translation accuracy. In this paper, we introduce an error correction mechanism into NMT that corrects the error information in previously generated tokens to better predict the next token. Specifically, we introduce two-stream self-attention from XLNet into the NMT decoder, where the query stream is used to predict the next token while the content stream corrects the error information from the previously predicted tokens. We leverage scheduled sampling to simulate prediction errors during training. Experiments on three IWSLT translation datasets and two WMT translation datasets demonstrate that our method achieves improvements over the Transformer baseline and scheduled sampling. Further experimental analyses also verify the effectiveness of the proposed error correction mechanism in improving translation quality.
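The scheduled-sampling step the abstract relies on can be sketched as follows. This is a minimal generic illustration of scheduled sampling (the function name, array shapes, and use of NumPy are my own assumptions), not the paper's decoder implementation:

```python
import numpy as np

def mix_prev_tokens(gold_prev, model_prev, p_sample, rng):
    """Scheduled sampling: at each decoder position, feed the model's own
    previously predicted token with probability p_sample, otherwise the
    ground-truth token. This exposes the model to its own prediction
    errors during training, simulating inference-time conditions."""
    gold_prev = np.asarray(gold_prev)
    model_prev = np.asarray(model_prev)
    use_model = rng.random(gold_prev.shape) < p_sample
    return np.where(use_model, model_prev, gold_prev)
```

In practice p_sample is typically annealed upward over training, so the model sees mostly ground-truth tokens early on and increasingly its own (possibly erroneous) predictions later.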


2020, Vol 43
Author(s):
Kellen Mrkva, Luca Cian, Leaf Van Boven

Abstract Gilead et al. present a rich account of abstraction. Though the account describes several elements that influence mental representation, it is worth also delineating how feelings, such as fluency and emotion, influence mental simulation. Additionally, though past experience can sometimes make simulations more accurate and worthwhile (as Gilead et al. suggest), many systematic prediction errors persist despite substantial experience.


Author(s):
Roberto Limongi, Angélica M. Silva

Abstract. The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production, with time estimation error as the dependent variable of interest. The perspective of predictive behavior instead regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and of predictive coding, a Bayes-based theory of brain function. We hypothesize that our finding could be associated with weak frontostriatal connections and weak striatal activity.


2020
Author(s):
Kate Ergo, Luna De Vilder, Esther De Loof, Tom Verguts

Recent years have witnessed a steady increase in the number of studies investigating the role of reward prediction errors (RPEs) in declarative learning. Specifically, in several experimental paradigms RPEs drive declarative learning, with larger and more positive RPEs enhancing it. However, it is unknown whether this RPE must derive from the participant's own response, or whether any RPE is sufficient to obtain the learning effect. To test this, we generated RPEs in the same experimental paradigm while combining an agency and a non-agency condition. We observed no interaction between RPE and agency, suggesting that any RPE (irrespective of its source) can drive declarative learning. This result holds implications for declarative learning theory.

