Data-driven Optimal Control Strategy for Virtual Synchronous Generator via Deep Reinforcement Learning Approach

2021 ◽  
Vol 9 (4) ◽  
pp. 919-929
Author(s):  
Yushuai Li ◽  
Wei Gao ◽  
Weihang Yan ◽  
Shuo Huang ◽  
Rui Wang ◽  
...  
Energies ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 271
Author(s):  
Yusung Lee ◽  
Woohyun Kim

In this study, an optimal control strategy for a variable refrigerant flow (VRF) system is developed using data-driven models and on-site data to save building energy. Three data-based models are developed to improve on-site applicability. The models are used to determine the length of time required to bring each zone from its current temperature to the set point. Existing data are used to evaluate and validate the predictive performance of the three models. Experiments are conducted on site using three outdoor units and eight indoor units, and the proposed optimal control is validated by comparing it against a conventional control method. The ability to avoid energy wasted on maintaining temperature after the set points are reached is then evaluated through a comparison of energy usage. The results show that 30.5% of energy is saved on average for each outdoor unit while the proposed strategy keeps the zones comfortable.
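The core scheduling step described above, predicting how long each zone needs to reach its set point and starting each unit just in time, can be sketched as follows. The constant cooling rate and all function names here are hypothetical stand-ins; the paper uses trained data-driven models rather than a fixed rate.

```python
def minutes_to_setpoint(current_temp, set_point, cooling_rate=0.2):
    """Estimate minutes needed to move a zone from its current temperature
    to the set point. A constant rate (deg C per minute) is a hypothetical
    first-order stand-in for the paper's data-based prediction models."""
    return abs(current_temp - set_point) / cooling_rate

def precool_start_offsets(zones, occupancy_minute):
    """For each zone {name: (current_temp, set_point)}, compute the minute
    (relative to now) at which its indoor unit should start so that the
    set point is reached just as occupancy begins, avoiding the energy
    wasted by holding the set point early."""
    offsets = {}
    for name, (temp, sp) in zones.items():
        offsets[name] = max(0.0, occupancy_minute - minutes_to_setpoint(temp, sp))
    return offsets
```

For example, a zone at 28 °C with a 24 °C set point needs an estimated 20 minutes, so for occupancy at minute 60 its unit would start at minute 40 instead of running from minute 0.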


Energies ◽  
2018 ◽  
Vol 11 (10) ◽  
pp. 2628 ◽  
Author(s):  
Dechang Yang ◽  
Wenlong Liao ◽  
Yusen Wang ◽  
Keqing Zeng ◽  
Qiuyue Chen ◽  
...  

To improve the reliability and reduce the power loss of distribution networks, dynamic reconfiguration is widely used. It finds an optimal topology for each time interval while satisfying all physical constraints. Dynamic reconfiguration is a non-deterministic polynomial (NP) problem for which it is difficult to find the optimal control strategy in a short time. Conventional methods solve the complex dynamic reconfiguration model in different ways, but they find only locally optimal solutions. In this paper, a data-driven optimization control for the dynamic reconfiguration of distribution networks is proposed. Through two stages, rough matching and fine matching, historical cases similar to the current case are chosen as candidates. The optimal control strategy for the current case is then selected according to dynamic time warping (DTW) distances, which evaluate the similarity between the candidate cases and the current case. The advantage of the proposed approach is that it does not need to solve the complex dynamic reconfiguration model and uses only historical data to obtain the optimal control strategy for the current case. The case studies show that both the optimization results and the computation time of the proposed approach are superior to those of conventional methods.
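The fine-matching step above relies on DTW distance between the current operating profile and each candidate's historical profile. A minimal sketch of that selection is below; the case dictionary layout (`"profile"`, `"topology"`) is an assumed illustration, not the paper's data format.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D profiles,
    filled in via dynamic programming over a cumulative-cost matrix."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def best_matching_case(current_profile, candidate_cases):
    """Among rough-matched candidates, return the historical case whose
    profile is closest to the current one under DTW; its stored topology
    would then be reused as the control strategy."""
    return min(candidate_cases,
               key=lambda c: dtw_distance(current_profile, c["profile"]))
```

Because the expensive reconfiguration model is never solved online, the runtime is dominated by these O(nm) distance computations over the candidate set.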


Author(s):  
Ernst Moritz Hahn ◽  
Mateo Perez ◽  
Sven Schewe ◽  
Fabio Somenzi ◽  
Ashutosh Trivedi ◽  
...  

Abstract: We study reinforcement learning for the optimal control of Branching Markov Decision Processes (BMDPs), a natural extension of (multitype) Branching Markov Chains (BMCs). The state of a (discrete-time) BMC is a collection of entities of various types that, while spawning other entities, generate a payoff. In contrast to BMCs, where the evolution of each entity of a given type follows the same probabilistic pattern, BMDPs allow an external controller to pick from a range of options. This permits us to study the best/worst behaviour of the system. We generalise model-free reinforcement learning techniques to compute, in the limit, an optimal control strategy for an unknown BMDP. We present results of an implementation that demonstrate the practicality of the approach.
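The flavour of model-free control for a branching process can be illustrated with tabular Q-learning over entity types: each entity of a type yields a payoff and spawns children, and per-type Q-values are updated from sampled transitions. The two-type BMDP below is a hypothetical toy, and the discounted objective is a simplifying assumption for this sketch, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Hypothetical toy BMDP: type -> action -> (payoff, distribution over
# child-type lists given as [(children, probability), ...]).
BMDP = {
    "A": {"grow": (1.0, [(["A", "B"], 0.5), ([], 0.5)]),
          "stop": (2.0, [([], 1.0)])},
    "B": {"grow": (3.0, [(["B"], 0.3), ([], 0.7)]),
          "stop": (1.0, [([], 1.0)])},
}
GAMMA = 0.8  # discount factor keeping expected total payoff finite

def sample_children(dist):
    """Draw one child-type list from the spawning distribution."""
    r, acc = random.random(), 0.0
    for children, p in dist:
        acc += p
        if r < acc:
            return children
    return dist[-1][0]

def q_learn(episodes=20000, alpha=0.05):
    """Tabular Q-learning over entity types: Q[t][a] estimates the payoff
    of one type-t entity taking action a, plus the discounted value of
    every entity it spawns (each child contributes its own best value)."""
    Q = defaultdict(lambda: defaultdict(float))
    for _ in range(episodes):
        t = random.choice(list(BMDP))          # sample an entity type
        a = random.choice(list(BMDP[t]))       # explore uniformly
        payoff, dist = BMDP[t][a]
        children = sample_children(dist)
        target = payoff + GAMMA * sum(
            max(Q[c].values()) if Q[c] else 0.0 for c in children)
        Q[t][a] += alpha * (target - Q[t][a])
    return Q
```

In this toy instance "grow" dominates for both types once the learned child values are accounted for, so the greedy policy read off the Q-table controls the whole branching population through per-type decisions.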

