Affine processes with compact state space

2018 ◽  
Vol 23 (0) ◽  
Author(s):  
Paul Krühner ◽  
Martin Larsson

Author(s):  
Hanhua Zhu

Deep reinforcement learning (DRL) has expanded the range of successful applications of reinforcement learning (RL) but also brings challenges such as low sample efficiency. In this work, I propose generalized representation-learning methods to obtain a compact state space suitable for RL from a raw observation state. I expect these new methods to increase the sample efficiency of RL through understandable representations of state and thereby improve the performance of RL.
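The abstract's core idea, mapping raw observations into a compact state space before feeding them to an RL agent, can be illustrated with a simple baseline. The sketch below uses PCA as a stand-in encoder; the function names, dimensions, and the choice of PCA are illustrative assumptions, not the paper's actual methods.

```python
import numpy as np

def fit_encoder(observations: np.ndarray, latent_dim: int) -> np.ndarray:
    """Fit a linear projection from raw observations to a compact latent space.

    PCA via SVD is used here as one simple representation-learning baseline
    (an assumption for illustration; the proposed methods are more general).
    """
    centered = observations - observations.mean(axis=0)
    # SVD of the centered data gives principal directions; keep the top ones.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:latent_dim].T  # shape: (obs_dim, latent_dim)

def encode(obs: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Map a raw observation into the compact state space."""
    return obs @ projection

# Hypothetical setup: 500 raw 64-dimensional observations compressed to 8 dims.
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 64))
proj = fit_encoder(raw, latent_dim=8)
state = encode(raw[0], proj)
print(state.shape)  # (8,)
```

An RL agent would then condition its policy on `state` rather than on the 64-dimensional raw observation, which is the mechanism the abstract credits for improved sample efficiency.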


2003 ◽  
Vol 31 (4) ◽  
pp. 2270-2300 ◽  
Author(s):  
Serguei Pergamenchtchikov ◽  
Claudia Klüppelberg

2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Anugu Sumith Reddy ◽  
Amit Apte

This paper shows that the nonlinear filter, in the case of deterministic dynamics, is stable with respect to its initial condition provided the observations are sufficiently rich, in the context of both continuous- and discrete-time filters. Earlier works on the stability of nonlinear filters concern stochastic dynamics and assume conditions such as a compact state space or a time-independent observation model, whereas we prove filter stability for deterministic dynamics under more general assumptions on the state space and the observation process. We give several examples of systems that satisfy these assumptions. We also show that the asymptotic structure of the filtering distribution is related to the dynamical properties of the signal.
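The stability notion the abstract refers to can be stated in the notation common to this literature (the symbols below are illustrative, not taken from the paper): two filters started from different priors should merge as observations accumulate.

```latex
% \pi_t^{\mu}: filtering distribution at time t when the filter is
% initialized with prior \mu; d is a suitable distance on probability
% measures (e.g. total variation). Stability with respect to initial
% conditions means that for two priors \mu \neq \nu:
\lim_{t \to \infty} d\!\left(\pi_t^{\mu}, \pi_t^{\nu}\right) = 0.
```

The paper's contribution, per the abstract, is establishing this property for deterministic signal dynamics without requiring a compact state space or a time-independent observation model.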


2021 ◽  
Author(s):  
Xiaoting Zhang ◽  
Jiafeng Zhang ◽  
Zhong Zheng ◽  
Hanyu Zheng ◽  
Minglong Pu ◽  
...  
