Self-Learning Fuzzy Logic Controller using Q-Learning
Fuzzy logic controllers consist of if-then fuzzy rules that are generally derived a priori from expert knowledge. However, expert knowledge is not always easy or cheap to obtain. Q-learning can acquire knowledge from experience even without a model of the environment, but the conventional Q-learning algorithm cannot deal with continuous states and continuous actions. A fuzzy logic controller, in contrast, inherently accepts continuous input values and generates continuous output values. Thus, in this paper, the Q-learning algorithm is incorporated into the fuzzy logic controller so that each method compensates for the other's disadvantage. Modified fuzzy rules are proposed for this purpose, and the combination yields a fuzzy logic controller that can learn through experience. Since Q-values in Q-learning are functions of both the state and the action, the conventional Q-learning algorithm cannot be applied directly to the proposed fuzzy logic controller; instead, interpolation is used within each modified fuzzy rule so that the Q-values can be updated.
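To illustrate the general idea of interpolating Q-values through fuzzy rules, the following minimal Python sketch gives each rule a q-value per discrete candidate action, blends the per-rule choices into a continuous action weighted by firing strength, and shares the temporal-difference update across rules in proportion to those strengths. All names, parameter values, and the toy regulation task are our own assumptions for illustration, not the paper's formulation.

```python
import random

random.seed(0)

def tri(x, left, center, right):
    """Triangular membership function."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

# Rule antecedents: overlapping fuzzy sets covering the state space [-1, 1].
CENTERS = [-1.0, -0.5, 0.0, 0.5, 1.0]

def firing_strengths(x):
    return [tri(x, c - 0.5, c, c + 0.5) for c in CENTERS]

ACTIONS = [-0.5, 0.0, 0.5]                     # candidate consequents per rule
q = [[0.0] * len(ACTIONS) for _ in CENTERS]    # q[i][a]: per-rule q-values

ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1              # assumed learning parameters

def select(x):
    """Eps-greedy choice per rule, blended into one continuous action."""
    w = firing_strengths(x)
    s = sum(w) or 1.0
    choice = []
    for qi in q:
        if random.random() < EPS:
            choice.append(random.randrange(len(ACTIONS)))
        else:
            choice.append(max(range(len(ACTIONS)), key=lambda a: qi[a]))
    u = sum(wi * ACTIONS[a] for wi, a in zip(w, choice)) / s
    Q = sum(wi * q[i][a] for i, (wi, a) in enumerate(zip(w, choice))) / s
    return u, Q, w, s, choice

def value(x):
    """Interpolated state value: firing-strength-weighted max q per rule."""
    w = firing_strengths(x)
    s = sum(w) or 1.0
    return sum(wi * max(qi) for wi, qi in zip(w, q)) / s

def step(x):
    """One interaction on a toy task: drive the state toward 0."""
    u, Q, w, s, choice = select(x)
    x_next = max(-1.0, min(1.0, x + 0.1 * u))
    r = -abs(x_next)                           # reward: closeness to 0
    td = r + GAMMA * value(x_next) - Q
    for i, (wi, a) in enumerate(zip(w, choice)):
        q[i][a] += ALPHA * td * wi / s         # credit shared by firing strength
    return x_next

x = 0.8
for _ in range(2000):
    x = step(x)
```

Because the blended action and the interpolated Q-value are both weighted averages over the rules' firing strengths, the controller accepts a continuous state and emits a continuous action while each rule's q-values remain updatable by an ordinary temporal-difference step.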