Optimal control of the service rate in an M/G/1 queueing system

1978 ◽  
Vol 10 (3) ◽  
pp. 682-701 ◽  
Author(s):  
Bharat T. Doshi

We consider an M/G/1 queue in which the service rate is subject to control. The control is exercised continuously and is based on observations of the residual workload process. For both the discounted cost and the average cost criteria we obtain conditions which are sufficient for a stationary policy to be optimal. When the service cost rate and the holding cost rates are non-decreasing and convex, these sufficient conditions are shown to be satisfied by a monotonic policy, establishing its optimality.
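The workload-based control can be illustrated with a small simulation. Everything below is an illustrative assumption rather than the paper's model: exponential job sizes, a crude Euler time discretization, and a two-level monotone policy that serves faster once the workload exceeds a threshold.

```python
import random

def average_workload(lam, rate_policy, horizon=100.0, dt=0.01, seed=0):
    """Euler-type simulation of the residual-workload process of an M/G/1
    queue whose service rate is chosen continuously as a function of the
    current workload.  Illustrative sketch only."""
    rng = random.Random(seed)
    w, t, total = 0.0, 0.0, 0.0
    while t < horizon:
        if rng.random() < lam * dt:            # Poisson arrival in [t, t+dt)
            w += rng.expovariate(1.0)          # job size (assumed Exp(1))
        w = max(0.0, w - rate_policy(w) * dt)  # drain at the controlled rate
        total += w * dt
        t += dt
    return total / horizon                     # time-average workload

# A monotone policy: serve faster when the workload is larger.
monotone = lambda w: 1.0 if w < 2.0 else 3.0
constant = lambda w: 1.0
```

Because the monotone policy never serves slower than the constant-rate policy, the workload path it produces under the same arrival stream is pointwise no larger.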


1991 ◽  
Vol 28 (1) ◽  
pp. 210-220 ◽  
Author(s):  
Kazuyoshi Wakuta

We consider the optimal control of an M/G/1 queue with finite input source. The queue length, however, can be only imperfectly observed, through observations at the initial time and at the times of successive departures. At these times the service rate can be chosen, based on the observable histories. A service cost and a holding cost are incurred. We show that such a control problem can be formulated as a semi-Markov decision process with imperfect state information, and present sufficient conditions for the existence of an optimal stationary I-policy.


2010 ◽  
Vol 42 (4) ◽  
pp. 953-985 ◽  
Author(s):  
Xianping Guo ◽  
Liuer Ye

This paper deals with continuous-time Markov decision processes in Polish spaces, under the discounted and average cost criteria. All underlying Markov processes are determined by given transition rates which are allowed to be unbounded, and the costs are assumed to be bounded below. By introducing an occupation measure of a randomized Markov policy and analyzing properties of occupation measures, we first show that the family of all randomized stationary policies is ‘sufficient’ within the class of all randomized Markov policies. Then, under semicontinuity and compactness conditions, we prove the existence of a discounted-cost optimal stationary policy via a value iteration technique. Moreover, by developing a new minimum nonnegative solution method for the average cost case, we prove the existence of an average-cost optimal stationary policy under reasonably mild conditions. Finally, we use some examples to illustrate applications of our results. Apart from the requirement that the costs be bounded below, the conditions for the existence of discounted-cost (or average-cost) optimal policies are much weaker than those in the previous literature, and the minimum nonnegative solution approach is new.
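When the transition rates happen to be bounded, the discounted-cost value iteration can be carried out after uniformization. The sketch below applies it to a small truncated M/M/1-type service-rate control problem; the chain, rates, and costs are illustrative assumptions, and a finite sketch of this kind does not capture the paper's setting of unbounded rates on Polish spaces.

```python
def discounted_value_iteration(states, actions, rates, cost, alpha, tol=1e-9):
    """Value iteration for a discounted-cost continuous-time MDP with
    bounded transition rates, via uniformization."""
    # Uniformization constant: a bound on the total exit rate.
    Lam = max(sum(rates(s, a).values()) for s in states for a in actions)
    V = {s: 0.0 for s in states}
    while True:
        Vn, pol = {}, {}
        for s in states:
            best, arg = float("inf"), None
            for a in actions:
                q = rates(s, a)                        # next state -> rate
                flow = sum(r * V[j] for j, r in q.items())
                stay = (Lam - sum(q.values())) * V[s]  # fictitious self-loop
                val = (cost(s, a) + flow + stay) / (alpha + Lam)
                if val < best:
                    best, arg = val, a
            Vn[s], pol[s] = best, arg
        if max(abs(Vn[s] - V[s]) for s in states) < tol:
            return Vn, pol
        V = Vn

# Illustrative instance: truncated M/M/1 with a slow/fast service rate.
N, lam = 10, 1.0
states, actions = range(N + 1), (1.0, 3.0)             # actions = service rates

def rates(s, a):
    q = {}
    if s < N: q[s + 1] = lam                           # arrival
    if s > 0: q[s - 1] = a                             # controlled service
    return q

def cost(s, a):
    return s + (2.0 if a == 3.0 else 0.0)              # holding + service premium

V, policy = discounted_value_iteration(states, actions, rates, cost, alpha=0.1)
```

On this instance the computed value function is nondecreasing in the queue length, and the optimal policy is of threshold type: slow service at an empty queue, fast service near the truncation boundary.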


2022 ◽  
Author(s):  
Varun Gupta ◽  
Jiheng Zhang

The paper studies approximations and control of a processor sharing (PS) server whose service rate depends on the number of jobs occupying the server. The control of such a system is implemented by imposing a limit on the number of jobs that can share the server concurrently, with the remaining jobs waiting in a first-in-first-out (FIFO) buffer. A desirable control scheme should strike the right balance between efficiency (operating at a high service rate) and parallelism (preventing small jobs from getting stuck behind large ones). We use the framework of heavy-traffic diffusion analysis to devise near-optimal control heuristics for such a queueing system. However, whereas the literature on diffusion control of state-dependent queueing systems begins with a sequence of systems and an exogenously defined drift function, we begin with a finite discrete PS server and propose an axiomatic recipe to explicitly construct a sequence of state-dependent PS servers, which then yields a drift function. We establish diffusion approximations and use them to obtain insightful, closed-form approximations for the original system under a static concurrency-limit control policy. We extend our study to control policies that dynamically adjust the concurrency limit. We provide two novel numerical algorithms to solve the associated diffusion control problem. Both can be viewed as “average cost” iterations: the first algorithm uses binary search on the average cost, while the second, faster algorithm uses the Newton-Raphson method for root finding. Numerical experiments demonstrate the accuracy of our approximations for choosing optimal or near-optimal static and dynamic concurrency control heuristics.
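The two numerical strategies the abstract names can be contrasted on any scalar monotone root-finding problem. In the paper the function being zeroed comes from the diffusion control equation at a candidate average cost; the generic sketch below does not reproduce that function, only the two search strategies.

```python
def bisect(F, lo, hi, tol=1e-10):
    """Binary search for the root of an increasing function F
    (the spirit of the paper's first, binary-search algorithm)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def newton(F, g, tol=1e-12, h=1e-7, max_iter=100):
    """Newton-Raphson with a forward-difference derivative
    (the spirit of the paper's second, faster algorithm)."""
    for _ in range(max_iter):
        f = F(g)
        if abs(f) < tol:
            break
        g -= f * h / (F(g + h) - F(g))  # Newton step with numerical slope
    return g
```

Bisection halves the bracket each step, while Newton converges quadratically near the root, which is the source of the speed difference the abstract reports.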


1993 ◽  
Vol 7 (1) ◽  
pp. 69-83 ◽  
Author(s):  
Linn I. Sennott

A Markov decision chain with denumerable state space incurs two types of costs, for example an operating cost and a holding cost. The objective is to minimize the expected average operating cost, subject to a constraint on the expected average holding cost. We prove the existence of an optimal constrained randomized stationary policy, in which the two stationary policies being randomized differ in at most one state. The examples treated are a packet communication system with a reject option and a single-server queue with service rate control.
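The single-state randomization can be made concrete on a small birth-death example. All rates, costs, and the constraint level below are illustrative assumptions; the sketch only mimics the structure (randomize between a slow and a fast service action at one state, tuning the probability to meet the holding-cost constraint), it does not derive constrained optimality.

```python
def stationary_costs(p, N=20, lam=0.4, slow=0.3, fast=0.7, c_fast=2.0):
    """Average (operating, holding) costs of the policy that serves slow
    at state 1, fast at states >= 3, and fast *at* state 2 with
    probability p, in a continuous-time birth-death queue truncated at N."""
    def mu(s):                                  # service rate chosen in state s
        if s == 1:
            return slow
        if s == 2:
            return p * fast + (1 - p) * slow    # randomize only at state 2
        return fast
    # Detailed balance: pi[s] = pi[s-1] * lam / mu(s)
    pi = [1.0]
    for s in range(1, N + 1):
        pi.append(pi[-1] * lam / mu(s))
    Z = sum(pi)
    pi = [x / Z for x in pi]
    holding = sum(s * pi[s] for s in range(N + 1))
    operating = sum(pi[s] * c_fast for s in range(3, N + 1)) + pi[2] * p * c_fast
    return operating, holding

def smallest_p_meeting(H, tol=1e-10):
    """Holding cost decreases in p, so bisect for the smallest feasible p."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if stationary_costs(mid)[1] <= H:
            hi = mid
        else:
            lo = mid
    return hi
```

Raising p speeds service at state 2, so the holding cost falls continuously from its p = 0 value to its p = 1 value; any constraint level strictly between the two is met exactly by an interior randomization probability.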


1990 ◽  
Vol 27 (4) ◽  
pp. 888-898 ◽  
Author(s):  
M. Abdel-hameed ◽  
Y. Nakhi

Zuckerman [10] considers the problem of optimal control of a finite dam using policies, assuming that the input process is Wiener with drift term μ ≧ 0. Lam Yeh and Lou Jiann Hua [7] treat the case where the input is a Wiener process with a reflecting boundary at zero and drift term μ ≧ 0, using the long-run average cost and total discounted cost criteria. Attia [1] obtains results similar to those of Lam Yeh and Lou Jiann Hua for the long-run average case and extends them to include μ < 0. In this paper we look further into the results of Zuckerman [10], simplify some of the work of Attia [1], [2], offer corrections to some of his formulae, and extend the results of Lam Yeh and Lou Jiann Hua [7].


2003 ◽  
Vol 17 (1) ◽  
pp. 119-135 ◽  
Author(s):  
E.G. Kyriakidis

This article is concerned with the problem of controlling a simple immigration process, which represents a pest population, by the introduction of a predator. It is assumed that the cost rate caused by the pests is an increasing function of their population size and that the cost rate of the controlling action is constant. The existence of a control-limit policy that minimizes the expected long-run average cost per unit time is established. The proof is based on the variation of a fictitious parameter over the entire real line.
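A control-limit policy here prescribes introducing the predator once the pest population reaches some threshold n. Under strong simplifying assumptions, none of which is taken from the article (pests immigrate at rate λ, the predator removes them at rate μ, holding cost rate h(i), and a fixed cost K per predator introduction), renewal-reward over one 0 → n → 0 cycle gives the long-run average cost of each threshold:

```python
def average_cost(n, lam, mu, K, h):
    """Long-run average cost of the control limit n, by renewal-reward
    over one cycle: grow 0 -> n (Exp(lam) stages), then predation
    n -> 0 (Exp(mu) stages), then repeat."""
    grow = sum(h(i) for i in range(n)) / lam          # holding cost while growing
    shrink = sum(h(i) for i in range(1, n + 1)) / mu  # holding cost while shrinking
    cycle = n / lam + n / mu                          # expected cycle length
    return (grow + shrink + K) / cycle                # one introduction per cycle

best = min(range(1, 50),
           key=lambda n: average_cost(n, lam=1.0, mu=2.0, K=5.0, h=lambda i: i))
```

With these numbers the holding cost grows with n while the introduction cost K is amortized over a longer cycle, so the average cost is unimodal in n and the minimizer is an interior control limit.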

