Reinforcement-Learning-Based Virtual Energy Storage System Operation Strategy for Wind Power Forecast Uncertainty Management

2020 ◽  
Vol 10 (18) ◽  
pp. 6420
Author(s):  
Eunsung Oh

Uncertainties related to wind power generation (WPG) restrict its usage. Energy storage systems (ESSs) are key elements in managing this uncertainty. This study proposes a reinforcement learning (RL)-based virtual ESS (VESS) operation strategy for WPG forecast uncertainty management. A VESS logically partitions a physical ESS among multiple units, and VESS operation reduces the cost barrier of the ESS. In this study, a VESS operation model is proposed that considers not only a unit's own operation but also the operation of the other units, and VESS operation is formulated as a decision-making problem. To solve this problem, a policy-learning strategy is proposed based on an expected state-action-reward-state-action (SARSA) approach that is robust to variations in uncertainty. Moreover, multi-dimensional clustering is performed on the WPG forecast data of the multiple units to enhance performance. Simulation results using real datasets recorded by the U.S. National Renewable Energy Laboratory demonstrate that the proposed strategy provides near-optimal performance, with a gap of less than 2 percentage points from the optimal solution. In addition, the performance of VESS operation exceeds that of individual ESS operation owing to multi-user diversity gain.
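The expected-SARSA update at the core of such a policy-learning strategy can be sketched in tabular form. This is a minimal illustration of the general technique, not the paper's VESS model; the state and action indices below are hypothetical stand-ins (e.g., discretized forecast-error states and charge/discharge actions):

```python
import numpy as np

def expected_sarsa_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95, eps=0.1):
    """One tabular expected-SARSA update.

    Unlike plain SARSA, the target averages over the epsilon-greedy
    policy at s_next instead of using a single sampled next action,
    which lowers the variance of the update and makes learning more
    robust to fluctuations such as forecast-error noise.
    """
    n_actions = Q.shape[1]
    # Epsilon-greedy action probabilities at the next state
    probs = np.full(n_actions, eps / n_actions)   # exploration mass
    probs[np.argmax(Q[s_next])] += 1.0 - eps      # greedy mass
    # Target bootstraps from the *expected* next-state action value
    target = r + gamma * np.dot(probs, Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Hypothetical usage: 4 forecast-error states, 3 actions
# (charge / discharge / idle)
Q = np.zeros((4, 3))
Q = expected_sarsa_update(Q, s=0, a=1, r=1.0, s_next=2)
```

Because the expectation is taken over the behavior policy itself, the update remains on-policy while avoiding the sampling noise of a single next action.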

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Xiali Li ◽  
Zhengyu Lv ◽  
Licheng Wu ◽  
Yue Zhao ◽  
Xiaona Xu

In this study, hybrid state-action-reward-state-action (SARSA(λ)) and Q-learning algorithms are applied at different stages of an upper confidence bound applied to trees (UCT) search for Tibetan Jiu chess. Q-learning is also used to update all nodes on the search path when each game ends. A learning strategy is proposed that combines the SARSA(λ) and Q-learning algorithms with domain knowledge in the feedback function for the layout and battle stages. An improved deep neural network based on ResNet18 is used for self-play training. Experimental results show that hybrid online and offline reinforcement learning with a deep neural network improves the game program's learning efficiency and its understanding of Tibetan Jiu chess.
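The division of labor described above, on-policy SARSA(λ) updates online during play and an off-policy Q-learning backup over the whole search path once the game ends, can be sketched in tabular form. This is a toy illustration under assumed tabular state/action indices; the paper itself uses a ResNet18-based network rather than a table:

```python
import numpy as np

def sarsa_lambda_step(Q, E, s, a, r, s_next, a_next,
                      alpha=0.1, gamma=0.95, lam=0.8):
    """One online SARSA(lambda) step with accumulating eligibility traces.

    The TD error is spread over all recently visited (state, action)
    pairs via the trace matrix E, so credit flows back along the
    current line of play.
    """
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]
    E[s, a] += 1.0            # mark the current pair as eligible
    Q += alpha * delta * E    # update every eligible pair
    E *= gamma * lam          # decay traces toward older moves
    return Q, E

def q_backup_path(Q, path, final_reward, alpha=0.1, gamma=0.95):
    """Q-learning backup over all (state, action) pairs on a finished
    game's search path, propagating the terminal reward backwards.

    Walking the path in reverse lets each earlier pair bootstrap from
    the freshly updated value of its successor state (off-policy max).
    """
    G = final_reward
    for s, a in reversed(path):
        Q[s, a] += alpha * (G - Q[s, a])
        G = gamma * np.max(Q[s])  # bootstrap target for the earlier move
    return Q

# Hypothetical usage: 5 states, 2 actions
Q = np.zeros((5, 2))
E = np.zeros_like(Q)
Q, E = sarsa_lambda_step(Q, E, s=0, a=0, r=0.0, s_next=1, a_next=1)
Q = q_backup_path(Q, path=[(0, 0), (1, 1), (2, 0)], final_reward=1.0)
```

The online trace-based update rewards good moves during a stage, while the end-of-game backup consolidates the final result across the entire path, mirroring the hybrid online/offline scheme the abstract describes.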

