Value Function and Optimal Rule on the Optimal Stopping Problem for Continuous-Time Markov Processes

2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Lu Ye

This paper considers the optimal stopping problem for continuous-time Markov processes. We describe the methodology and solve the optimal stopping problem for a broad class of reward functions. Moreover, we illustrate the results with some typical Markov processes, including diffusions and Lévy processes with jumps. For each of these processes, an explicit formula for the value function and the optimal stopping time is derived. Furthermore, we relate the derived optimal rules to some other optimal stopping problems.
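As a hedged illustration of how such value functions arise in the simplest setting, the following sketch runs finite-horizon backward induction V_n = max(g, P V_{n+1}) for a small discrete-time Markov chain. The chain P, reward g, and horizon are invented for illustration and do not come from the paper.

```python
import numpy as np

# Illustrative 3-state chain and stopping reward (assumptions, not from
# the paper): terminal condition V_N = g, then V_n = max(g, P V_{n+1}).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])   # row-stochastic transition matrix
g = np.array([0.0, 1.0, 2.0])        # reward collected when stopping

N = 50                               # horizon
V = g.copy()                         # V_N = g
for _ in range(N):
    V = np.maximum(g, P @ V)         # stop now vs. expected continuation

stop_region = V <= g + 1e-12         # states where stopping is optimal
```

The first entrance time of the chain into `stop_region` is then an optimal stopping time for this toy problem.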

2021 ◽  
pp. 2150049
Author(s):  
Siham Bouhadou ◽  
Youssef Ouknine

In the first part of this paper, we study RBSDEs in the case where the filtration is non-quasi-left-continuous and the lower obstacle is given by a predictable process. We prove existence and uniqueness by using some results from optimal stopping theory in the predictable setting and some tools from the general theory of processes, such as the Mertens decomposition of predictable strong supermartingales. In the second part, we introduce an optimal stopping problem indexed by predictable stopping times with the nonlinear predictable [Formula: see text]-expectation induced by an appropriate backward stochastic differential equation (BSDE). We establish some useful properties of [Formula: see text]-supermartingales. Moreover, we show the existence of an optimal predictable stopping time, and we characterize the predictable value function in terms of the first component of the RBSDEs studied in the first part.
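For intuition only, here is a numerical sketch of the classical ingredients behind an RBSDE with a lower obstacle, in the much simpler binomial/Brownian setting rather than the paper's predictable one: an explicit backward scheme Y_n = max(L_n, E[Y_{n+1}] + h·f(·)) on a recombining tree. The driver f(y) = −r·y, the obstacle L(x) = (K − x)^+, and all parameters are illustrative assumptions.

```python
import numpy as np

# Explicit backward scheme for a reflected BSDE with lower obstacle on a
# recombining binomial tree (all choices are illustrative assumptions):
# driver f(y) = -r*y, obstacle L(x) = (K - x)^+, terminal Y_T = L(X_T).
T, N, r, K, x0 = 1.0, 200, 0.05, 1.0, 1.0
h = T / N
dx = np.sqrt(h)                       # symmetric +-sqrt(h) increments

def obstacle(x):
    return np.maximum(K - x, 0.0)

# tree level n has n+1 nodes: x0 + (2j - n)*dx, j = 0..n
x = x0 + (2 * np.arange(N + 1) - N) * dx
Y = obstacle(x)                       # terminal layer
for n in range(N - 1, -1, -1):
    cont = 0.5 * (Y[1:] + Y[:-1])     # E[Y_{n+1} | node at level n]
    cont = cont + h * (-r * cont)     # one explicit driver step
    x = x0 + (2 * np.arange(n + 1) - n) * dx
    Y = np.maximum(obstacle(x), cont) # reflection on the lower obstacle

y0 = Y[0]                             # solution at time 0, state x0
```

The reflection step is exactly where the Snell-envelope structure referenced in the abstract enters: without the `np.maximum`, this would be a plain BSDE scheme.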


2012 ◽  
Vol 45 (2) ◽  
Author(s):  
Ł. Stettner

In this paper we use the penalty method to approximate a number of general optimal stopping problems over a finite horizon. We consider optimal stopping of discrete-time or right-continuous stochastic processes, and show that a suitable version of Snell's envelope can be approximated by solutions to penalty equations. We then study the optimal stopping problem for Markov processes on a general Polish space, and again show that the optimal stopping value function can be approximated by a solution to a Markov version of the penalty equation.
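As a toy illustration of the penalty idea (a sketch under invented assumptions, not Stettner's actual equations), the recursion below replaces the hard maximum in the Snell recursion by a penalty term of weight min(λh, 1) pushing the solution above the reward; as λ grows, it recovers the Snell envelope.

```python
import numpy as np

# Finite chain and reward (illustrative assumptions).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
g = np.array([0.0, 1.0, 2.0])
N, h = 20, 1.0 / 20                  # horizon and time step

def backward(lam):
    # penalized recursion: V_n = E[V_{n+1}] + min(lam*h, 1)*(g - E[V_{n+1}])^+
    V = g.copy()
    for _ in range(N):
        cont = P @ V
        V = cont + min(lam * h, 1.0) * np.maximum(g - cont, 0.0)
    return V

snell = backward(np.inf)             # weight 1 gives the exact Snell recursion
errors = [np.max(np.abs(backward(lam) - snell)) for lam in (1.0, 10.0, 1e4)]
```

The approximation error shrinks monotonically as the penalty weight λ increases, which is the qualitative content of the penalty approximation.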


2012 ◽  
Vol 49 (3) ◽  
pp. 806-820
Author(s):  
Pieter C. Allaart

Let (X_t)_{0 ≤ t ≤ T} be a one-dimensional stochastic process with independent and stationary increments, either in discrete or continuous time. In this paper we consider the problem of stopping the process (X_t) 'as close as possible' to its eventual supremum M_T := sup_{0 ≤ t ≤ T} X_t, when the reward for stopping at time τ ≤ T is a nonincreasing convex function of M_T − X_τ. Under fairly general conditions on the process (X_t), it is shown that the optimal stopping time τ takes a trivial form: it is either optimal to stop at time 0 or at time T. For the case of a random walk, the rule τ ≡ T is optimal if the steps of the walk stochastically dominate their opposites, and the rule τ ≡ 0 is optimal if the reverse relationship holds. An analogous result is proved for Lévy processes with finite Lévy measure. The result is then extended to some processes with infinite Lévy measure, including stable processes, CGMY processes, and processes whose jump component is of finite variation.
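A quick Monte Carlo check of the random-walk dichotomy (all parameters are illustrative assumptions): with Gaussian steps of positive mean, the steps stochastically dominate their opposites, so τ ≡ T should beat τ ≡ 0 for the convex nonincreasing reward f(d) = −d with d = M_T − X_τ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk with drifted Gaussian steps (assumed parameters).
n_paths, T, drift = 5000, 50, 0.2
steps = rng.normal(drift, 1.0, size=(n_paths, T))
X = np.cumsum(steps, axis=1)
X = np.hstack([np.zeros((n_paths, 1)), X])   # include X_0 = 0
M = X.max(axis=1)                            # eventual supremum M_T

reward_stop_now = -(M - X[:, 0]).mean()      # tau = 0: reward -(M_T - X_0)
reward_stop_end = -(M - X[:, -1]).mean()     # tau = T: reward -(M_T - X_T)
```

With positive drift the walk typically ends near its running maximum, so stopping at T leaves a much smaller gap M_T − X_τ, in line with the stated rule.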


2006 ◽  
Vol 43 (1) ◽ 
pp. 102-113
Author(s):  
Albrecht Irle

We consider the optimal stopping problem for g(Z_n), where Z_n, n = 1, 2, …, is a homogeneous Markov sequence. An algorithm, called forward improvement iteration, is presented by which an optimal stopping time can be computed. Using an iterative step, this algorithm computes a decreasing sequence B_0 ⊇ B_1 ⊇ B_2 ⊇ ⋯ of subsets of the state space such that the first entrance time into the intersection F of these sets is an optimal stopping time. Various applications are given.
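A hedged sketch of one reading of forward improvement iteration, for the discounted problem sup_τ E_x[β^τ g(Z_τ)] on a small chain (chain, reward, and discount are illustrative assumptions, not from the paper): start from B_0 = the whole state space and repeatedly remove states where waiting for the next entrance into the current set beats stopping.

```python
import numpy as np

# Illustrative chain, reward, and discount (assumptions).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
g = np.array([0.0, 1.0, 2.0])
beta = 0.9
n = len(g)

def entrance_value(B):
    """E_x[beta^sigma g(Z_sigma)], sigma = first time >= 1 with Z in B."""
    w = np.empty(n)                 # w(y): discounted reward of stopping at
    w[B] = g[B]                     # the first visit (time >= 0) to B from y
    out = ~B
    if out.any():
        # outside B:  w(y) = beta * sum_z P(y,z) w(z)  (linear system)
        A = np.eye(out.sum()) - beta * P[np.ix_(out, out)]
        w[out] = np.linalg.solve(A, beta * P[np.ix_(out, B)] @ g[B])
    return beta * (P @ w)

B = np.ones(n, dtype=bool)          # B_0 = whole state space
while True:
    newB = B & (g >= entrance_value(B) - 1e-12)
    if (newB == B).all():
        break
    B = newB                        # B_0 ⊇ B_1 ⊇ B_2 ⊇ ...
# the first entrance time into the limit set B is then optimal here
```

For this toy chain the iteration shrinks the set in two steps to the single state with the highest reward, which matches the stopping region obtained by value iteration.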


1989 ◽  
Vol 26 (4) ◽ 
pp. 695-706
Author(s):  
Gerold Alsmeyer ◽  
Albrecht Irle

Consider a population of distinct species S_j, j ∈ J, members of which are selected at different time points T_1, T_2, …, one at each time. Assume linear costs per unit of time and that a reward is earned at each discovery epoch of a new species. We treat the problem of finding a selection rule which maximizes the expected payoff. As the times between successive selections are supposed to be continuous random variables, we are dealing with a continuous-time optimal stopping problem which is the natural generalization of the one Rasmussen and Starr (1979) investigated, namely the corresponding problem with fixed times between successive selections. However, in contrast to their discrete-time setting, the derivation of an optimal strategy appears to be much harder in our model, as we are generally no longer in the monotone case. This note gives a general point process formulation for this problem, leading in particular to an equivalent stopping problem via stochastic intensities which is easier to handle. We then present a formal derivation of the optimal stopping time under the stronger assumption of i.i.d. pairs (X_1, A_1), (X_2, A_2), …, where X_n gives the label (j for S_j) of the species selected at T_n and A_n denotes the time between the nth and (n−1)th selections, i.e. A_n = T_n − T_{n−1}. In the case where X_n and A_n are moreover independent and A_n has an IFR (increasing failure rate) distribution, an explicit solution for the optimal strategy is derived as a simple consequence.
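The following simulation illustrates the flavour of the problem with the myopic one-step look-ahead rule: continue only while the expected reward of one more selection, r·(J − k)/J after k species have been found, exceeds the expected waiting cost c·E[A]. All parameters (and the choice of exponential inter-selection times) are illustrative assumptions; this myopic rule is just a stand-in for the optimal strategy discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# J equally likely species, reward r per new species, cost rate c,
# i.i.d. exponential times between selections (assumed parameters).
J, r, c, mean_A = 20, 1.0, 0.15, 1.0

def run_once():
    seen, payoff = set(), 0.0
    # myopic rule: continue while expected marginal reward beats cost
    while r * (J - len(seen)) / J > c * mean_A:
        payoff -= c * rng.exponential(mean_A)   # pay for the waiting time
        s = int(rng.integers(J))                # label of selected species
        if s not in seen:
            seen.add(s)
            payoff += r                         # reward for a new discovery
    return payoff

mean_payoff = np.mean([run_once() for _ in range(500)])
```

With these numbers the rule keeps sampling until 17 of the 20 species are found and then stops, earning a clearly positive expected payoff.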


2010 ◽  
Vol 47 (4) ◽ 
pp. 947-966 ◽  
Author(s):  
F. Dufour ◽  
A. B. Piunovskiy

The purpose of this paper is to study an optimal stopping problem with constraints for a Markov chain with general state space by using the convex analytic approach. The costs are assumed to be nonnegative. Our model is not assumed to be transient or absorbing and the stopping time does not necessarily have a finite expectation. As a consequence, the occupation measure is not necessarily finite, which poses some difficulties in the analysis of the associated linear program. Under a very weak hypothesis, it is shown that the linear problem admits an optimal solution, guaranteeing the existence of an optimal stopping strategy for the optimal stopping problem with constraints.
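To make the convex-analytic approach concrete, here is a hedged sketch of the occupation-measure linear program for a *discounted* constrained stopping problem on a tiny finite chain, a much simpler setting than the paper's; every number below is an illustrative assumption. Variables are μ (continuation occupation measure) and ν (stopping measure), linked by the balance equation μ(x) + ν(x) − β Σ_y P(y,x) μ(y) = 1{x = x0}, with one extra linear constraint on ν.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative chain, discount, and costs (assumptions, not the paper's).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
beta, x0 = 0.9, 0
n = P.shape[0]
c_run = np.ones(n)                    # running cost per discounted unit time
c_stop = np.array([5.0, 2.0, 0.0])    # cost paid where we stop

cost = np.concatenate([c_run, c_stop])          # decision vector z = [mu, nu]
A_eq = np.hstack([np.eye(n) - beta * P.T, np.eye(n)])
b_eq = np.eye(n)[x0]                            # source: start at x0
A_ub = np.zeros((1, 2 * n))
A_ub[0, n + 2] = -1.0                           # constraint: nu(2) >= 0.3
b_ub = np.array([-0.3])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None))
mu, nu = res.x[:n], res.x[n:]
```

An optimal LP solution then encodes a (possibly randomized) stopping strategy: ν/ν.sum() is the distribution of the stopped state, and the constraint row plays the role of the paper's side constraints.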


1986 ◽  
Vol 23 (2) ◽  
pp. 341-354 ◽  
Author(s):  
G. Mazziotto

The resolution of the optimal stopping problem for a partially observed Markov state process reduces to the computation of a function — the Snell envelope — defined on a measure space which is in general infinite-dimensional. To avoid these computational difficulties, we propose in this paper to approximate the optimal stopping time as the limit of stopping times associated with similar problems for a sequence of processes converging towards the true state. We show on two examples that these approximating states can be chosen such that the corresponding Snell envelopes can be explicitly computed.


2006 ◽  
Vol 43 (4) ◽  
pp. 984-996 ◽  
Author(s):  
Anne Laure Bronstein ◽  
Lane P. Hughston ◽  
Martijn R. Pistorius ◽  
Mihail Zervos

We consider the problem of optimally stopping a general one-dimensional Itô diffusion X. In particular, we solve the problem of maximising the performance criterion E_x[exp(−∫_0^τ r(X_s) ds) f(X_τ)] over all stopping times τ, where the reward function f can take only a finite number of values and has a 'staircase' form. This problem is partly motivated by applications to financial asset pricing. Our results are of an explicit analytic nature and completely characterise the optimal stopping time. Also, it turns out that the problem's value function is not C^1, which is due to the fact that the reward function f is not continuous.
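To make the performance criterion concrete, the sketch below estimates E_x[exp(−∫_0^τ r(X_s) ds) f(X_τ)] by Monte Carlo for standard Brownian motion, a constant discount rate r, a two-level 'staircase' reward f(x) = 1 for x < b and 2 for x ≥ b, and the hypothetical rule τ = (first hitting time of b) ∧ T. None of these choices come from the paper; they only illustrate the criterion being maximised.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed parameters: start, barrier, discount rate, horizon, step.
x0, b, r, T, dt, n_paths = 0.0, 1.0, 0.05, 10.0, 0.01, 4000
n_steps = int(T / dt)

incr = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
X = x0 + np.cumsum(incr, axis=1)            # Euler paths of Brownian motion
hit = X >= b
# index of the first step at or above b (n_steps if never hit)
first = np.where(hit.any(axis=1), hit.argmax(axis=1), n_steps)
tau = np.minimum((first + 1) * dt, T)       # hitting time, capped at T
f_tau = np.where(first < n_steps, 2.0, 1.0) # staircase reward at tau
estimate = np.mean(np.exp(-r * tau) * f_tau)
```

For this threshold rule the estimate sits between the two reward levels, reflecting the trade-off between reaching the higher step of f and the discounting incurred while waiting.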


4OR ◽  
2016 ◽  
Vol 15 (3) ◽  
pp. 277-302 ◽  
Author(s):  
Benoîte de Saporta ◽  
François Dufour ◽  
Christophe Nivot
