Biologically-inspired episodic memory model considering the context information

Author(s):  
Gyeong-Moon Park ◽  
Sanghyun Cho ◽  
Jong-Hwan Kim
2016 ◽  
Vol 41 (8) ◽  
pp. 2089-2125 ◽  
Author(s):  
Jennifer S. Trueblood ◽  
Pernille Hemmer
2018 ◽  
Author(s):  
Christoph Stahl ◽  
Frederik Aust

The article proposes a view of evaluative conditioning (EC) as resulting from judgments based on learning instances stored in memory. It is based on the formal episodic memory model MINERVA 2. Additional assumptions specify how the information retrieved from memory is used to inform specific evaluative dependent measures. The present approach goes beyond previous accounts in that it uses a well-specified formal model of episodic memory; it is, however, more limited in scope, as it aims to explain EC phenomena that do not involve reasoning processes. The article illustrates how the memory-based-judgment view accounts for several empirical findings in the EC literature that are often discussed as evidence for dual-process models of attitude learning. It sketches novel predictions, discusses limitations of the present approach, and identifies challenges and opportunities for its future development.
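To make the memory-based-judgment machinery concrete, here is a minimal sketch of MINERVA 2 retrieval. The cubed-similarity activation, echo intensity, and echo content are standard MINERVA 2 computations; the encoding of learning instances as ±1 feature vectors and the function names are our illustrative assumptions, not the article's implementation.

```python
import numpy as np

def similarity(probe, trace):
    # MINERVA 2 similarity: dot product normalized by the number of
    # features that are nonzero in either the probe or the trace.
    relevant = (probe != 0) | (trace != 0)
    n = relevant.sum()
    return float(probe @ trace) / n if n else 0.0

def echo(probe, traces):
    # Activation is the cubed similarity (cubing preserves sign and
    # sharpens the advantage of close matches); echo intensity sums
    # the activations, and echo content is the activation-weighted
    # sum of all stored traces.
    activations = np.array([similarity(probe, t) ** 3 for t in traces])
    intensity = activations.sum()
    content = activations @ traces
    return intensity, content
```

A probe resembling a stored learning instance yields a high echo intensity; on the memory-based-judgment view, such retrieved information would then inform the evaluative response.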


2018 ◽  
Vol 13 (3) ◽  
Author(s):  
Christoph Stahl ◽  
Frederik Aust


Author(s):  
Zichuan Lin ◽  
Tianqi Zhao ◽  
Guangwen Yang ◽  
Lintao Zhang

Reinforcement learning (RL) algorithms have made huge progress in recent years by leveraging the power of deep neural networks (DNNs). Despite this success, deep RL algorithms are known to be sample inefficient, often requiring many rounds of interaction with the environment to reach satisfactory performance. Recently, episodic-memory-based RL has attracted attention due to its ability to latch onto good actions quickly. In this paper, we present a simple yet effective biologically inspired RL algorithm called Episodic Memory Deep Q-Networks (EMDQN), which leverages episodic memory to supervise an agent during training. Experiments show that our proposed method leads to better sample efficiency and is more likely to find a good policy. It requires only one fifth of the interactions of DQN to achieve state-of-the-art performance on many Atari games, significantly outperforming regular DQN and other episodic-memory-based RL algorithms.
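The episodic-memory supervision described above can be sketched as follows. This is a hypothetical illustration of the general idea, not the paper's exact formulation: the `beta` mixing weight, the `emdqn_loss` combination of a squared TD error with a memory-target penalty, and the table keyed by a state-action identifier are all our assumptions.

```python
def emdqn_loss(q_sa, td_target, memory_target, beta=0.1):
    # Squared TD error plus an episodic-memory term pulling the
    # Q-value toward the best return recorded in memory.
    # beta is a hypothetical mixing weight for illustration.
    return (q_sa - td_target) ** 2 + beta * (q_sa - memory_target) ** 2

class EpisodicMemory:
    # Hypothetical table keyed by a (state, action) identifier that
    # keeps the best discounted return observed so far, so the agent
    # can quickly latch onto actions that once paid off well.
    def __init__(self):
        self.table = {}

    def update(self, key, discounted_return):
        self.table[key] = max(self.table.get(key, float('-inf')),
                              discounted_return)

    def lookup(self, key, default):
        # Fall back to a caller-supplied default (e.g. the TD target)
        # for state-action pairs never seen before.
        return self.table.get(key, default)
```

During training, the memory target would be looked up for each sampled transition and fed into the loss alongside the usual bootstrapped TD target.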

