Optimal R&D programs in a random environment

1990 ◽  
Vol 27 (2) ◽  
pp. 343-350 ◽  
Author(s):  
M. J. M. Posner ◽  
D. Zuckerman

Our study examines a stochastic R&D model with flexible termination time and without rivalry. Specifically, we assume a stochastic relationship between the expenditure rate and the project's status. Furthermore, the termination time of the project is incorporated into the R&D model as a decision variable by allowing the controller to ‘sell’ the technology obtained from the project at any point in time. The proposed framework extends the classical approach in the R&D literature. The main purpose of our study is to determine the optimal stopping time of the project and to characterize qualitatively the firm's expenditure strategy. We show that under certain realistic conditions, the optimal stopping strategy is a control limit policy. Furthermore, the research effort increases monotonically over the development time of the project.



1986 ◽  
Vol 23 (2) ◽  
pp. 514-518 ◽  
Author(s):  
Dror Zuckerman

We examine a continuous search model in which rewards (e.g. job offers in a search model in the labor market, price offers for a given asset, etc.) are received randomly according to a renewal process determined by a known distribution function. The rewards are non-negative, independent, and have a common distribution with finite mean. Over the search period there is a constant cost per unit time. The searcher's objective is to choose a stopping time at which he receives the highest available reward (offer), so as to maximize the net expected discounted return. If the interarrival time distribution in the renewal process is new better than used (NBU), it is shown that the optimal stopping strategy possesses the control limit property. The term ‘control limit policy’ refers to a strategy in which we accept the first reward (offer) which exceeds a critical control level ξ.
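The control limit property lends itself to a quick numerical sanity check. The sketch below is illustrative only: it assumes Poisson offer arrivals (exponential interarrival times are a boundary case of NBU), Uniform(0, 1) offers, and invented values for the arrival rate λ, discount rate r and cost c, none of which come from the paper. It estimates the discounted value of the rule "accept the first offer above ξ" by simulation and compares it with the closed form available in the exponential case:

```python
import math
import random

def simulate_policy_value(xi, lam=1.0, r=0.1, c=0.2, n_trials=20000, seed=1):
    """Monte Carlo value of the control-limit policy 'accept the first
    offer exceeding xi', with Poisson(lam) offer arrivals, Uniform(0,1)
    offers, discount rate r and search cost c per unit time."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        t = 0.0
        while True:
            t += rng.expovariate(lam)   # wait for the next offer
            x = rng.random()            # offer value
            if x > xi:                  # control limit rule
                # discounted reward minus discounted accumulated search cost
                total += x * math.exp(-r * t) - c * (1 - math.exp(-r * t)) / r
                break
    return total / n_trials

def closed_form_value(xi, lam=1.0, r=0.1, c=0.2):
    """With exponential interarrivals, acceptable offers arrive at rate
    lam*p where p = P(X > xi), so the time to acceptance is Exp(lam*p) and
    V(xi) = (lam*p*E[X | X > xi] - c) / (lam*p + r)."""
    p = 1.0 - xi
    m = (1.0 + xi) / 2.0                # E[X | X > xi] for Uniform(0,1)
    return (lam * p * m - c) / (lam * p + r)

print(simulate_policy_value(0.5), closed_form_value(0.5))
```

Maximizing the closed form over ξ then locates the critical control level for this particular parameterization.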




2020 ◽  
Vol 81 (7) ◽  
pp. 1192-1210
Author(s):  
O.V. Zverev ◽  
V.M. Khametov ◽  
E.A. Shelemekh

1997 ◽  
Vol 34 (1) ◽  
pp. 66-73 ◽  
Author(s):  
S. E. Graversen ◽  
G. Peškir

The solution is presented to all optimal stopping problems of the form sup_τ E(G(|B_τ|) − cτ), where B = (B_t)_{t≥0} is standard Brownian motion and the supremum is taken over all stopping times τ for B with finite expectation, while the map G : ℝ+ → ℝ satisfies a growth condition ensuring that sup_{x≥0} (G(x) − cx²) is finite, with c > 0 given and fixed. The optimal stopping time is shown to be the hitting time by the reflecting Brownian motion |B| of the set of all (approximate) maximum points of the map x ↦ G(x) − cx². The method of proof relies upon Wald's identity for Brownian motion and simple real analysis arguments. A simple proof of the Dubins–Jacka–Schwarz–Shepp–Shiryaev (square root of two) maximal inequality for randomly stopped Brownian motion is given as an application.
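The role of Wald's identity mentioned in the abstract can be made explicit. The following derivation is a sketch reconstructed from the abstract's statement (not the paper's proof), using only the identity E(B_τ²) = E(τ) for stopping times with E(τ) < ∞:

```latex
% Wald's identity for Brownian motion gives E(B_\tau^2) = E(\tau) whenever
% E(\tau) < \infty, hence
\begin{align*}
  E\bigl(G(|B_\tau|) - c\,\tau\bigr)
    &= E\bigl(G(|B_\tau|) - c\,B_\tau^2\bigr) \\
    &\le \sup_{x \ge 0}\,\bigl(G(x) - c\,x^2\bigr).
\end{align*}
% The problem thus reduces to maximising x \mapsto G(x) - cx^2 pointwise,
% and the bound is approached by stopping |B| at an (approximate) maximum
% point of this map -- precisely the hitting-time description in the abstract.
```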


1998 ◽  
Vol 35 (4) ◽  
pp. 856-872 ◽  
Author(s):  
S. E. Graversen ◽  
G. Peskir

Explicit formulas are found for the payoff and the optimal stopping strategy of the optimal stopping problem sup_τ E(max_{0≤t≤τ} X_t − cτ), where X = (X_t)_{t≥0} is geometric Brownian motion with drift μ and volatility σ > 0, and the supremum is taken over all stopping times τ for X. The payoff is shown to be finite if and only if μ < 0. The optimal stopping time is given by τ* = inf{t > 0 : X_t = g*(max_{0≤s≤t} X_s)}, where s ↦ g*(s) is the maximal solution of a (nonlinear) first-order differential equation under the condition 0 < g(s) < s, where Δ = 1 − 2μ/σ² and K = Δσ²/(2c). The estimate g*(s) ~ ((Δ − 1)/(KΔ))^{1/Δ} s^{1−1/Δ} as s → ∞ is established. Applying these results we prove a maximal inequality for E(max_{0≤t≤τ} X_t), where τ may be any stopping time for X. This extends the well-known identity E(sup_{t>0} X_t) = 1 − σ²/(2μ) and is shown to be sharp. The method of proof relies upon a smooth-pasting guess (for the Stefan problem with moving boundary) and the Itô–Tanaka formula (applied two-dimensionally). The key point and main novelty in our approach is the maximality principle for the moving boundary: the optimal stopping boundary is the maximal solution of the differential equation obtained by the smooth-pasting guess. We think that this principle is by itself of theoretical and practical interest.
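The identity E(sup_{t>0} X_t) = 1 − σ²/(2μ) for μ < 0 is easy to probe numerically. The sketch below assumes X_0 = 1 (as the identity requires) and the illustrative values μ = −1, σ = 1, for which the theoretical value is 1.5; it estimates the expected supremum by Euler simulation of the log-price. The finite horizon and time step introduce a small downward bias:

```python
import numpy as np

def estimate_expected_supremum(mu=-1.0, sigma=1.0, T=10.0, n_steps=4000,
                               n_paths=20000, seed=0):
    """Monte Carlo estimate of E(sup_{t>0} X_t) for geometric Brownian
    motion dX = mu*X dt + sigma*X dW with X_0 = 1 and mu < 0.
    The supremum over [0, T] is used as a proxy for the all-time
    supremum; with strongly negative drift the truncation error is small."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    drift = (mu - 0.5 * sigma**2) * dt       # exact drift of log X per step
    vol = sigma * np.sqrt(dt)
    log_x = np.zeros(n_paths)                # log X_0 = 0
    running_max = np.zeros(n_paths)          # running max of log X
    for _ in range(n_steps):
        log_x += drift + vol * rng.standard_normal(n_paths)
        np.maximum(running_max, log_x, out=running_max)
    return np.exp(running_max).mean()

# theory for mu = -1, sigma = 1: 1 - sigma^2/(2*mu) = 1.5
print(estimate_expected_supremum(-1.0, 1.0))
```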


2012 ◽  
Vol 49 (3) ◽  
pp. 806-820
Author(s):  
Pieter C. Allaart

Let (X_t)_{0≤t≤T} be a one-dimensional stochastic process with independent and stationary increments, either in discrete or continuous time. In this paper we consider the problem of stopping the process (X_t) ‘as close as possible’ to its eventual supremum M_T := sup_{0≤t≤T} X_t, when the reward for stopping at time τ ≤ T is a nonincreasing convex function of M_T − X_τ. Under fairly general conditions on the process (X_t), it is shown that the optimal stopping time τ takes a trivial form: it is either optimal to stop at time 0 or at time T. For the case of a random walk, the rule τ ≡ T is optimal if the steps of the walk stochastically dominate their opposites, and the rule τ ≡ 0 is optimal if the reverse relationship holds. An analogous result is proved for Lévy processes with finite Lévy measure. The result is then extended to some processes with nonfinite Lévy measure, including stable processes, CGMY processes, and processes whose jump component is of finite variation.
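The random-walk dichotomy is easy to see in simulation. The sketch below uses invented parameters and takes the reward to be the negative shortfall −(M_T − X_τ), one admissible nonincreasing convex reward, so comparing rules amounts to comparing expected shortfalls. A ±1 walk with up-probability above 1/2 has steps that stochastically dominate their opposites, and below 1/2 the reverse holds:

```python
import random

def expected_shortfall(p_up, T=30, tau_end=True, n_sims=20000, seed=42):
    """Average shortfall M_T - X_tau for a +/-1 random walk with
    P(step = +1) = p_up, under the rule tau = T (tau_end=True)
    or tau = 0 (tau_end=False)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        x, m = 0, 0
        for _ in range(T):
            x += 1 if rng.random() < p_up else -1
            m = max(m, x)
        total += (m - x) if tau_end else m   # X_0 = 0, so M_T - X_0 = M_T
    return total / n_sims

# Steps dominate their opposites (p_up > 1/2): stopping at T should win.
up_T = expected_shortfall(0.6, tau_end=True)
up_0 = expected_shortfall(0.6, tau_end=False)
# Opposites dominate the steps (p_up < 1/2): stopping at 0 should win.
dn_T = expected_shortfall(0.4, tau_end=True)
dn_0 = expected_shortfall(0.4, tau_end=False)
print(up_T < up_0, dn_0 < dn_T)
```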


2006 ◽  
Vol 43 (1)
pp. 102-113
Author(s):  
Albrecht Irle

We consider the optimal stopping problem for g(Z_n), where Z_n, n = 1, 2, …, is a homogeneous Markov sequence. An algorithm, called forward improvement iteration, is presented by which an optimal stopping time can be computed. Using an iterative step, this algorithm computes a sequence B_0 ⊇ B_1 ⊇ B_2 ⊇ ··· of subsets of the state space such that the first entrance time into the intersection F of these sets is an optimal stopping time. Various applications are given.
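A toy implementation may help fix ideas. The sketch below uses one plausible reading of the iterative step, namely shrinking the current set B by keeping only states where stopping immediately beats continuing until the next entrance into B; the three-state chain, the payoff, and the initial one-step look-ahead set are all invented for illustration and should not be read as the paper's algorithm verbatim:

```python
import numpy as np

# Toy instance: homogeneous Markov chain on states {0, 1, 2} with payoff g.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
g = np.array([0.0, 1.0, 3.0])
n = len(g)

def entrance_value(B):
    """h(x) = E_x[g(Z_{tau_B})], tau_B the first entrance time into B:
    h = g on B and h = P h off B, a small linear system."""
    inside = np.array([x in B for x in range(n)])
    out = ~inside
    A = np.eye(out.sum()) - P[np.ix_(out, out)]
    b = P[np.ix_(out, inside)] @ g[inside]
    h = np.empty(n)
    h[inside] = g[inside]
    h[out] = np.linalg.solve(A, b)
    return h

# Forward improvement iteration (one plausible reading of the scheme):
# start from the one-step look-ahead set and shrink until stable.
B = {x for x in range(n) if g[x] >= (P @ g)[x]}
while True:
    h = entrance_value(B)
    B_next = {x for x in B if g[x] >= (P @ h)[x]}
    if B_next == B:
        break
    B = B_next

print("stopping set:", sorted(B))
```

For this chain the iteration stabilizes at the singleton {2}, and stopping at the first visit to state 2 is plausible here: the chain is irreducible and there is no discounting, so the top payoff g = 3 is eventually reachable from every state.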


1989 ◽  
Vol 26 (4)
pp. 695-706
Author(s):  
Gerold Alsmeyer ◽  
Albrecht Irle

Consider a population of distinct species S_j, j ∈ J, members of which are selected at different time points T_1, T_2, …, one at each time. Assume linear costs per unit of time and that a reward is earned at each discovery epoch of a new species. We treat the problem of finding a selection rule which maximizes the expected payoff. As the times between successive selections are supposed to be continuous random variables, we are dealing with a continuous-time optimal stopping problem which is the natural generalization of the one Rasmussen and Starr (1979) have investigated, namely the corresponding problem with fixed times between successive selections. However, in contrast to their discrete-time setting, the derivation of an optimal strategy appears to be much harder in our model, as generally we are no longer in the monotone case. This note gives a general point process formulation for this problem, leading in particular to an equivalent stopping problem via stochastic intensities which is easier to handle. We then present a formal derivation of the optimal stopping time under the stronger assumption of i.i.d. pairs (X_1, A_1), (X_2, A_2), …, where X_n gives the label (j for S_j) of the species selected at T_n and A_n denotes the time between the nth and (n−1)th selection, i.e. A_n = T_n − T_{n−1}. In the case where even X_n and A_n are independent and A_n has an IFR (increasing failure rate) distribution, an explicit solution for the optimal strategy is derived as a simple consequence.
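A discrete-time toy version of the problem (the fixed-interval setting of Rasmussen and Starr that this note generalizes) is easy to simulate. The sketch below, with K equally likely species and invented reward and cost values, compares the myopic one-stage look-ahead rule against stopping immediately and against exhaustive search; in this monotone discrete-time case the myopic rule is the natural candidate:

```python
import random

def run_policy(stop_at, K=10, reward=1.0, cost=0.35, n_sims=5000, seed=7):
    """Expected payoff of the rule 'stop once stop_at distinct species
    have been found', with K equally likely species, a unit reward per
    new species and a fixed cost per selection."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        found = set()
        payoff = 0.0
        while len(found) < stop_at:
            payoff -= cost                  # cost of one more selection
            s = rng.randrange(K)
            if s not in found:              # reward only for a new species
                found.add(s)
                payoff += reward
        total += payoff
    return total / n_sims

K, reward, cost = 10, 1.0, 0.35
# Myopic rule: keep selecting only while the expected gain of one more
# draw, reward*(K - n)/K with n species already found, exceeds the cost.
myopic = next(n for n in range(K + 1) if reward * (K - n) / K < cost)
print(run_policy(0), run_policy(myopic), run_policy(K))
```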

