Automaton Learning
Recently Published Documents

Total documents: 7 (last five years: 1) · H-index: 2 (last five years: 0)

2021 · Vol 70 · pp. 1031-1116
Author(s): Daniel Furelos-Blanco, Mark Law, Anders Jonsson, Krysia Broda, Alessandra Russo

In this paper we present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks. ISA interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task's subgoals expressed as propositional logic formulas over a set of high-level events. A subgoal automaton also includes two special states: one indicating the successful completion of the task, and one indicating that the task has finished without succeeding. A state-of-the-art inductive logic programming system is used to learn a subgoal automaton that covers the traces of high-level events observed by the RL agent. When the currently exploited automaton does not correctly recognize a trace, the automaton learner induces a new automaton that covers that trace. The interleaving process guarantees the induction of automata with the minimum number of states, and applies a symmetry breaking mechanism to shrink the search space whilst remaining complete. We evaluate ISA in several gridworld and continuous state space problems using different RL algorithms that leverage the automaton structures. We provide an in-depth empirical analysis of the automaton learning performance in terms of the traces, the symmetry breaking and specific restrictions imposed on the final learnable automaton. For each class of RL problem, we show that the learned automata can be successfully exploited to learn policies that reach the goal, achieving an average reward comparable to the case where automata are not learned but handcrafted and given beforehand.
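To make the structure concrete, the following is a minimal sketch of a subgoal automaton as the abstract describes it: edges labeled by propositional formulas over high-level events (here represented as conjunctions of positive/negative literals), plus a distinguished accepting state and rejecting state. All names (`SubgoalAutomaton`, the `u0`/`u_acc` state labels, the coffee/office events) are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SubgoalAutomaton:
    """Automaton whose edges are labeled by propositional formulas
    over high-level events; has accepting and rejecting states."""
    initial: str
    accepting: str
    rejecting: str
    # transitions[state] -> list of (literals, next_state), where
    # literals is a list of (event, polarity) pairs that must all hold.
    transitions: dict = field(default_factory=dict)

    def add_edge(self, src, literals, dst):
        self.transitions.setdefault(src, []).append((literals, dst))

    def step(self, state, events):
        """Advance one step given the set of events observed this step."""
        for literals, dst in self.transitions.get(state, []):
            if all((ev in events) == pos for ev, pos in literals):
                return dst
        return state  # no edge fires: remain in the current state

    def recognizes(self, trace):
        """True iff a trace (sequence of event sets) reaches acceptance."""
        state = self.initial
        for events in trace:
            state = self.step(state, events)
        return state == self.accepting


# Hypothetical "get coffee, then reach the office" task; touching a
# decoration ends the episode without success.
auto = SubgoalAutomaton(initial="u0", accepting="u_acc", rejecting="u_rej")
auto.add_edge("u0", [("coffee", True)], "u1")
auto.add_edge("u1", [("office", True)], "u_acc")
auto.add_edge("u0", [("decoration", True)], "u_rej")

print(auto.recognizes([{"coffee"}, {"office"}]))  # True
print(auto.recognizes([{"office"}]))              # False
```

When the RL agent produces a trace for which `recognizes` disagrees with the observed episode outcome, that trace becomes a counterexample and a new automaton is induced to cover it.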


2020 · Vol 34 (04) · pp. 3890-3897
Author(s): Daniel Furelos-Blanco, Mark Law, Alessandra Russo, Krysia Broda, Anders Jonsson

In this work we present ISA, a novel approach for learning and exploiting subgoals in reinforcement learning (RL). Our method relies on inducing an automaton whose transitions are subgoals expressed as propositional formulas over a set of observable events. A state-of-the-art inductive logic programming system is used to learn the automaton from observation traces perceived by the RL agent. The reinforcement learning and automaton learning processes are interleaved: a new refined automaton is learned whenever the RL agent generates a trace not recognized by the current automaton. We evaluate ISA in several gridworld problems and show that it performs similarly to a method for which automata are given in advance. We also show that the learned automata can be exploited to speed up convergence through reward shaping and transfer learning across multiple tasks. Finally, we analyze the running time and the number of traces that ISA needs to learn an automaton, and the impact that the number of observable events has on the learner's performance.
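One way a learned automaton can drive reward shaping, as the abstract mentions, is potential-based shaping over automaton states: use the (negated) number of remaining subgoal transitions to acceptance as the potential. The sketch below assumes the transition representation and state names are hypothetical; it is a plausible instantiation of the idea, not the authors' code.

```python
from collections import deque

def distances_to_accept(transitions, accepting):
    """BFS over reversed edges: minimum number of subgoal transitions
    from each automaton state to the accepting state."""
    rev = {}
    for src, edges in transitions.items():
        for _formula, dst in edges:
            rev.setdefault(dst, set()).add(src)
    dist = {accepting: 0}
    queue = deque([accepting])
    while queue:
        u = queue.popleft()
        for prev in rev.get(u, ()):
            if prev not in dist:
                dist[prev] = dist[u] + 1
                queue.append(prev)
    return dist

def shaping_reward(dist, u, v, gamma=0.99):
    """Potential-based shaping term F(u, v) = gamma * phi(v) - phi(u),
    with phi(s) = -distance(s); unreachable states get the worst value."""
    worst = max(dist.values()) + 1
    phi = lambda s: -dist.get(s, worst)
    return gamma * phi(v) - phi(u)


# Hypothetical two-subgoal chain: u0 -> u1 -> u_acc.
transitions = {"u0": [("g_coffee", "u1")], "u1": [("g_office", "u_acc")]}
dist = distances_to_accept(transitions, "u_acc")
print(dist)                                # {'u_acc': 0, 'u1': 1, 'u0': 2}
print(shaping_reward(dist, "u0", "u1"))    # positive: progress toward goal
```

Because the shaping term is potential-based, adding it to the environment reward leaves the optimal policy unchanged while making progress through subgoals immediately rewarding.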


1998 · Vol 118 (3) · pp. 291-299
Author(s): Kotaro Hirasawa, Masaaki Harada, Masanao Ohbayashi, Juuichi Murata, Jinglu Hu

1997 · Vol 117 (8) · pp. 1069-1075
Author(s): Mitsuo Ikeuchi, Kotaro Hirasawa, Masanao Obayashi
