Optimal Scheduling of Software Projects Using Reinforcement Learning

Author(s):  
Frank Padberg ◽  
David Weiss

2020 ◽  
Vol 193 ◽  
pp. 105443
Author(s):  
Amir Ebrahimi Zade ◽  
Seyedhamidreza Shahabi Haghighi ◽  
Madjid Soltani

Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2120
Author(s):  
Ying Ji ◽  
Jianhui Wang ◽  
Jiacan Xu ◽  
Donglin Li

The proliferation of distributed renewable energy sources (RESs) poses major challenges to the operation of microgrids due to uncertainty. Traditional online scheduling approaches that rely on accurate forecasts become difficult to implement as the number of uncertain RESs grows. Although several data-driven methods have been proposed recently to overcome this challenge, they generally suffer from a scalability issue owing to their limited ability to optimize high-dimensional continuous control variables. To address these issues, we propose a data-driven online scheduling method for microgrid energy optimization based on continuous-control deep reinforcement learning (DRL). We formulate the online scheduling problem as a Markov decision process (MDP). The objective is to minimize the operating cost of the microgrid while accounting for the uncertainty of RES generation, load demand, and electricity prices. To learn the optimal scheduling strategy, a Gated Recurrent Unit (GRU)-based network is designed to extract temporal features of the uncertainty and generate scheduling decisions in an end-to-end manner. To optimize the policy with high-dimensional, continuous actions, proximal policy optimization (PPO) is employed to train the neural-network-based policy in a data-driven fashion. The proposed method requires neither forecasts of the uncertain quantities nor prior knowledge of the physical model of the microgrid. Simulation results using realistic power system data from the California Independent System Operator (CAISO) demonstrate the effectiveness of the proposed method.
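The MDP formulation described in the abstract can be illustrated with a minimal sketch: a toy microgrid environment whose state bundles renewable generation, load demand, and electricity price, whose continuous action is a (grid purchase, dispatchable generation) pair, and whose reward is the negative operating cost. All numbers, bounds, and cost coefficients below are hypothetical placeholders, not the paper's model; a real agent (e.g. a GRU policy trained with PPO) would interact with an environment of this shape.

```python
import random


class MicrogridEnv:
    """Toy MDP for microgrid scheduling (illustrative only, not the paper's model).

    State:  (renewable generation, load demand, electricity price), all sampled
            randomly to mimic uncertainty without any forecast.
    Action: continuous pair (grid purchase in MW, dispatchable generator output in MW).
    Reward: negative operating cost = -(energy purchase cost + fuel cost
            + a large penalty for unserved load).
    """

    def __init__(self, horizon=24, seed=0):
        self.horizon = horizon          # scheduling steps per episode (e.g. hours)
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return self._observe()

    def _observe(self):
        # Hypothetical uncertainty: independently sampled renewables, load, price.
        renewable = self.rng.uniform(0.0, 5.0)   # MW
        load = self.rng.uniform(3.0, 8.0)        # MW
        price = self.rng.uniform(20.0, 60.0)     # $/MWh
        self.state = (renewable, load, price)
        return self.state

    def step(self, action):
        grid_buy, gen_out = action
        renewable, load, price = self.state
        supplied = renewable + grid_buy + gen_out
        unserved = max(0.0, load - supplied)
        # Operating cost: market purchase + assumed fuel cost + shortfall penalty.
        cost = price * grid_buy + 35.0 * gen_out + 1000.0 * unserved
        self.t += 1
        done = self.t >= self.horizon
        return self._observe(), -cost, done
```

A rollout simply calls `reset()` once and `step(action)` until `done`; the policy's job is to pick actions that keep the cumulative negative-cost reward as close to zero as possible despite never seeing a forecast.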


Transport ◽  
2012 ◽  
Vol 26 (4) ◽  
pp. 383-393 ◽  
Author(s):  
Qingcheng Zeng ◽  
Zhongzhen Yang ◽  
Xiangpei Hu

The objective of operation scheduling in container terminals is to determine a schedule that minimizes the time needed to load or unload a given set of containers. This paper presents a method that integrates reinforcement learning and simulation to optimize operation scheduling in container terminals. The method uses a simulation model to construct the system environment, while the Q-learning algorithm (a reinforcement learning algorithm) is applied to learn optimal dispatching rules for different equipment (e.g. yard cranes, yard trailers). The optimal scheduling scheme is obtained through the interaction between the Q-learning algorithm and the simulation environment. To evaluate the effectiveness of the proposed method, a lower bound is calculated that takes into account the characteristics of the scheduling problem in container terminals. Finally, numerical experiments illustrate the validity of the proposed method.
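The Q-learning-over-simulation loop described above can be sketched minimally: the simulator plays one step, and the tabular Q-update nudges the value of the chosen dispatching rule toward the observed reward plus the discounted value of the next state. The toy terminal simulation below (`toy_step`, `BEST_RULE`, the congestion states, and the delay numbers) is entirely hypothetical, standing in for the paper's detailed simulation model of cranes and trailers.

```python
import random


def q_learning(n_states, n_actions, step_fn, episodes=400,
               steps_per_episode=50, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning driven by a simulation step function.

    step_fn(s, a, rng) -> (next_state, reward) plays one simulated step.
    Returns the learned Q-table; the greedy policy argmax_a Q[s][a]
    is the learned dispatching rule for each state.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)
        for _ in range(steps_per_episode):
            # Epsilon-greedy: mostly exploit the current Q-table, sometimes explore.
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2, r = step_fn(s, a, rng)
            # Standard Q-learning update toward the one-step bootstrap target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q


# Hypothetical terminal simulation: state = yard congestion level (0..2),
# action = which of two dispatching rules a yard trailer follows.
BEST_RULE = [0, 0, 1]  # assumption: rule 1 only pays off under heavy congestion


def toy_step(s, a, rng):
    delay = 1.0 if a == BEST_RULE[s] else 4.0   # handling delay in minutes
    return rng.randrange(3), -delay             # next congestion level, reward


Q = q_learning(n_states=3, n_actions=2, step_fn=toy_step)
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(3)]
```

After enough simulated episodes, the greedy policy recovers the per-state rule with the smaller expected delay, which mirrors how the paper's agent learns state-dependent dispatching rules rather than a single fixed rule.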

