Scheduling and control of linear projects

1994 ◽  
Vol 21 (2) ◽  
pp. 219-230 ◽  
Author(s):  
Neil N. Eldin ◽  
Ahmed B. Senouci

A two-state-variable, N-stage dynamic programming approach to scheduling and control of linear projects is presented. The approach accounts for practical considerations related to work continuity, interruptions, and lags between successive activities. In the dynamic programming formulation, stages represent project activities, and state variables represent the possible activity resources and interruptions at each location. The objective of the dynamic programming solution is to select the resources, interruptions, and lags for production activities that lead to the minimum total project cost. In addition, the system produces a graphical presentation of the optimum project schedule and updates the original schedule based on update information entered by the user. The updated schedule determines the new completion date and forecasts the project's new total cost based on current project performance. A small linear project is provided as a numerical illustration of the system. Key words: dynamic programming, linear projects, scheduling systems, optimization of cost and scheduling durations.
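The formulation described above can be sketched as a forward dynamic program in which stages are activities and the state at each stage is a (resource option, interruption) pair. The sketch below is only illustrative: the crew options, interruption choices, and the indirect-cost rate are invented numbers, not data from the paper, and the pruning rule is one common way to keep the state space small.

```python
from itertools import product

# Each activity (stage) offers several crew/resource options: (duration_days, direct_cost).
# All figures below are hypothetical.
activities = [
    [(10, 5000), (7, 6500)],   # activity 1 options
    [(12, 6000), (8, 7800)],   # activity 2 options
    [(9, 4500), (6, 6000)],    # activity 3 options
]
interruptions = [0, 2, 4]      # allowable interruption (days) before each activity
INDIRECT_COST_PER_DAY = 300    # assumed time-dependent overhead rate

def optimise(activities):
    # Frontier of states: (finish_day, cumulative_direct_cost, choices so far).
    frontier = [(0, 0.0, [])]
    for options in activities:
        nxt = []
        for finish, cost, choices in frontier:
            for (dur, direct), gap in product(options, interruptions):
                nxt.append((finish + gap + dur, cost + direct,
                            choices + [(dur, gap)]))
        # Prune dominated states: for increasing finish day, keep only
        # strictly decreasing direct cost (indirect cost is monotone in time).
        nxt.sort()
        frontier, best_cost = [], float("inf")
        for finish, cost, choices in nxt:
            if cost < best_cost:
                frontier.append((finish, cost, choices))
                best_cost = cost
    # Add the time-dependent indirect cost and pick the cheapest total.
    return min((cost + finish * INDIRECT_COST_PER_DAY, finish, choices)
               for finish, cost, choices in frontier)

total, duration, schedule = optimise(activities)
print(total, duration, schedule)
```

With the numbers above the sketch selects the slower, cheaper crews with no interruptions, because the assumed overhead rate does not reward compression; raising `INDIRECT_COST_PER_DAY` shifts the optimum toward the faster options.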

1968 ◽  
Vol 5 (3) ◽  
pp. 679-692 ◽  
Author(s):  
Richard Morton

Suppose that the state variables x = (x1,…,xn)′ satisfy a system of differential equations, where the dot refers to derivatives with respect to time t, and u ∊ U is a vector of controls. The object is to transfer x to x1 by choosing the controls so that a cost functional takes on its minimum value J(x), called the Bellman function (although we shall define it in a different way). The Dynamic Programming Principle leads to a maximisation with respect to u, and equality is obtained upon maximisation.
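The relation the abstract alludes to is commonly written as the (stationary) Hamilton–Jacobi–Bellman inequality. A standard statement, under the usual smoothness assumptions and with a running cost L(x, u) whose notation is assumed here rather than taken from the paper:

```latex
% Dynamics and cost functional (notation assumed for illustration):
\dot{x} = f(x, u), \qquad
J(x) = \min_{u(\cdot)} \int_{0}^{T} L(x, u)\, dt .
% The Dynamic Programming Principle gives, for the Bellman function J,
\max_{u \in U} \Bigl[ -L(x, u) - \nabla J(x) \cdot f(x, u) \Bigr] \le 0,
% with equality attained at the optimal control, matching the abstract's
% remark that "equality is obtained upon maximisation".
```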


2009 ◽  
Vol 47 (20) ◽  
pp. 5811-5827 ◽  
Author(s):  
Aayush Dhawan ◽  
Samashivan Srinivasan ◽  
Prabina Rajib ◽  
Bopaya Bidanda
