Approximations of the optimal stopping problem in partial observation

1986 ◽  
Vol 23 (2) ◽  
pp. 341-354 ◽  
Author(s):  
G. Mazziotto

The resolution of the optimal stopping problem for a partially observed Markov state process reduces to the computation of a function — the Snell envelope — defined on a measure space which is, in general, infinite-dimensional. To avoid these computational difficulties, we propose in this paper to approximate the optimal stopping time as the limit of stopping times associated with similar problems for a sequence of processes converging towards the true state. We show in two examples that these approximating states can be chosen such that the Snell envelopes can be explicitly computed.
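
For a finite-state approximating chain, the Snell envelope can indeed be computed explicitly by backward induction, which is what makes such approximations attractive. A minimal sketch in Python; the chain, reward, discount factor, and horizon below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Toy 3-state Markov chain: transition matrix P, stopping reward g,
# discount factor beta. The Snell envelope satisfies the backward
# recursion U_n = max(g, beta * P U_{n+1}), with terminal value U_N = g.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
g = np.array([1.0, 2.0, 0.5])   # reward for stopping in each state
beta, N = 0.9, 50               # discount factor, horizon

U = g.copy()                    # U_N = g
for _ in range(N):
    U = np.maximum(g, beta * P @ U)

# Optimal rule: stop in states where continuation adds no value
stop_region = np.isclose(U, g)
```

Stopping at the first entrance into `stop_region` is then optimal for this finite-horizon approximation.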


2006 ◽  
Vol 43 (01) ◽  
pp. 102-113
Author(s):  
Albrecht Irle

We consider the optimal stopping problem for g(Zn), where Zn, n = 1, 2, …, is a homogeneous Markov sequence. An algorithm, called forward improvement iteration, is presented by which an optimal stopping time can be computed. Using an iterative step, this algorithm computes a sequence B0 ⊇ B1 ⊇ B2 ⊇ ⋯ of subsets of the state space such that the first entrance time into the intersection F of these sets is an optimal stopping time. Various applications are given.
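
On a finite state space the iteration can be made concrete. The sketch below is a simplified, discounted variant (the discount factor, chain, and rewards are illustrative, not from the paper): each pass solves for the value of stopping at the first entrance into the current set B, then removes states where waiting beats immediate stopping.

```python
import numpy as np

def entrance_value(P, g, B, beta):
    # v solves: v = g on B, v = beta * P v off B
    # (value of stopping at the first entrance into B)
    n = len(g)
    A, b = np.eye(n), np.zeros(n)
    for i in range(n):
        if B[i]:
            b[i] = g[i]            # stop immediately on B
        else:
            A[i] -= beta * P[i]    # continue one step off B
    return np.linalg.solve(A, b)

def forward_improvement(P, g, beta=0.95, max_iter=100):
    B = np.ones(len(g), dtype=bool)    # B_0 = whole state space
    for _ in range(max_iter):
        v = entrance_value(P, g, B, beta)
        # drop states where one more step, then following tau_B, beats stopping
        B_new = B & (g >= beta * (P @ v))
        if np.array_equal(B_new, B):
            break
        B = B_new
    return B

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
g = np.array([1.0, 2.0, 0.5])
F = forward_improvement(P, g)   # stopping set: first entrance into F is optimal
```

The sets shrink monotonically by construction, so the iteration terminates after finitely many passes on a finite state space.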


2010 ◽  
Vol 47 (04) ◽  
pp. 947-966 ◽  
Author(s):  
F. Dufour ◽  
A. B. Piunovskiy

The purpose of this paper is to study an optimal stopping problem with constraints for a Markov chain with general state space by using the convex analytic approach. The costs are assumed to be nonnegative. Our model is not assumed to be transient or absorbing and the stopping time does not necessarily have a finite expectation. As a consequence, the occupation measure is not necessarily finite, which poses some difficulties in the analysis of the associated linear program. Under a very weak hypothesis, it is shown that the linear problem admits an optimal solution, guaranteeing the existence of an optimal stopping strategy for the optimal stopping problem with constraints.
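
For a finite chain, the convex analytic (occupation-measure) approach reduces to an explicit linear program. A hedged sketch for a discounted problem without extra constraints; the paper's setting is considerably more general (general state space, no discounting, cost constraints), and all numbers below are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# 3-state chain: minimize expected discounted cost of stopping.
# Occupation-measure variables: mu (continuation), eta (stopping).
# Balance equation: mu(x) + eta(x) - beta * sum_y mu(y) P(y, x) = alpha(x).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
c_stop = np.array([1.0, 0.2, 2.0])    # cost paid upon stopping
c_run = np.array([0.3, 0.3, 0.3])     # running cost per step
beta = 0.9
alpha = np.array([1.0, 0.0, 0.0])     # initial distribution

n = len(alpha)
A_eq = np.hstack([np.eye(n) - beta * P.T, np.eye(n)])  # columns: [mu, eta]
res = linprog(c=np.concatenate([c_run, c_stop]),
              A_eq=A_eq, b_eq=alpha,
              bounds=[(0, None)] * (2 * n))

# Cross-check against dynamic programming: v = min(c_stop, c_run + beta P v);
# the LP optimum should equal alpha @ v in this discounted, unconstrained case.
v = c_stop.copy()
for _ in range(500):
    v = np.minimum(c_stop, c_run + beta * P @ v)
```

The advantage of the LP formulation, as the paper exploits, is that additional cost constraints enter simply as extra linear inequalities on the occupation measures.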


4OR ◽  
2016 ◽  
Vol 15 (3) ◽  
pp. 277-302 ◽  
Author(s):  
Benoîte de Saporta ◽  
François Dufour ◽  
Christophe Nivot

2005 ◽  
Vol 42 (03) ◽  
pp. 826-838 ◽  
Author(s):  
X. Guo ◽  
J. Liu

Consider a geometric Brownian motion Xt(ω) with drift. Suppose that there is an independent source that sends signals at random times τ1 < τ2 < ⋯. Upon receiving each signal, a decision has to be made as to whether to stop or to continue. Stopping at time τ will bring a reward Sτ, where St = max(max0≤u≤t Xu, s) for some constant s ≥ X0. The objective is to choose an optimal stopping time to maximize the discounted expected reward E[e−rτi Sτi | X0 = x, S0 = s], where r is a discount factor. This problem can be viewed as a randomized version of the Bermudan look-back option pricing problem. In this paper, we derive explicit solutions to this optimal stopping problem, assuming that signal arrivals follow a Poisson process with parameter λ. Optimal stopping rules are differentiated by the frequency of the signal process. Specifically, there exists a threshold λ* such that if λ > λ*, the optimal stopping problem is solved via the standard formulation of a 'free boundary' problem and the optimal stopping time τ* is governed by a threshold a* such that τ* = inf{τn : Xτn ≤ a*Sτn}. If λ ≤ λ*, then it is optimal to stop as soon as a signal is received, i.e. at τ* = τ1. Mathematically, it is intriguing that smooth fit is critical in the former case while irrelevant in the latter.
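
The threshold rule in the high-frequency regime can be simulated directly. A Monte Carlo sketch; the drift, volatility, signal rate, and threshold a* below are illustrative choices, not the paper's solution values:

```python
import numpy as np

rng = np.random.default_rng(0)

# GBM parameters, Poisson signal rate, discount, and illustrative threshold.
mu, sigma, lam, r = 0.05, 0.3, 2.0, 0.1
a_star = 0.8
x0 = s0 = 1.0

def run_one(max_signals=10_000):
    # Simulate X at Poisson signal times and stop at the first signal with
    # X <= a_star * S. S is tracked only at signal times, an approximation
    # to the continuous running maximum.
    t, x, s = 0.0, x0, s0
    for _ in range(max_signals):
        dt = rng.exponential(1.0 / lam)           # next inter-signal time
        x *= np.exp((mu - 0.5 * sigma ** 2) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
        s = max(s, x)
        t += dt
        if x <= a_star * s:
            return np.exp(-r * t) * s             # discounted reward S_tau
    return np.exp(-r * t) * s                     # safety cap, rarely hit

rewards = [run_one() for _ in range(500)]
est = float(np.mean(rewards))   # estimate of E[e^{-r tau} S_tau]
```

Sweeping `a_star` over a grid and comparing the estimates gives a crude numerical check on the location of the optimal threshold.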


2021 ◽  
pp. 2150049
Author(s):  
Siham Bouhadou ◽  
Youssef Ouknine

In the first part of this paper, we study RBSDEs in the case where the filtration is non-quasi-left-continuous and the lower obstacle is given by a predictable process. We prove existence and uniqueness by using results from optimal stopping theory in the predictable setting and tools from the general theory of processes, such as the Mertens decomposition of predictable strong supermartingales. In the second part, we introduce an optimal stopping problem indexed by predictable stopping times with the nonlinear predictable [Formula: see text] expectation induced by an appropriate backward stochastic differential equation (BSDE). We establish some useful properties of [Formula: see text]-supermartingales. Moreover, we show the existence of an optimal predictable stopping time, and we characterize the predictable value function in terms of the first component of the RBSDEs studied in the first part.


2015 ◽  
Vol 52 (01) ◽  
pp. 167-179 ◽  
Author(s):  
Bruno Buonaguidi ◽  
Pietro Muliere

We study the Bayesian disorder problem for a negative binomial process. The aim is to determine a stopping time which is as close as possible to the random and unknown moment at which a sequentially observed negative binomial process changes the value of its characterizing parameter p ∈ (0, 1). The solution to this problem is explicitly derived through the reduction of the original optimal stopping problem to an integro-differential free-boundary problem. A careful analysis of the free-boundary equation and of the probabilistic nature of the boundary point allows us to specify when the smooth fit principle holds and when it breaks down in favour of the continuous fit principle.


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Lu Ye

This paper considers the optimal stopping problem for continuous-time Markov processes. We describe the methodology and solve the optimal stopping problem for a broad class of reward functions. Moreover, we illustrate the outcomes with some typical Markov processes, including diffusions and Lévy processes with jumps. For each of these processes, an explicit formula for the value function and the optimal stopping time is derived. Furthermore, we relate the derived optimal rules to some other optimal stopping problems.


2005 ◽  
Vol 05 (02) ◽  
pp. 271-280 ◽  
Author(s):  
BERNT ØKSENDAL

We study a general optimal stopping problem for a strong Markov process in the case when there is a time lag δ > 0 from the time τ when the decision to stop is taken (a stopping time) to the time τ + δ when the system actually stops. Equivalently, we impose the constraint that the admissible times for stopping are stopping (Markov) times with respect to the delayed flow of information. It is shown that such a problem can be reduced to a classical optimal stopping problem by a simple transformation. The results are applied (i) to find the optimal time to sell an asset and (ii) to solve an optimal resource extraction problem, in both cases under a delayed information flow.
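
The reduction has a transparent discrete-time analogue: a delay of d steps replaces the reward g by the transformed reward h = β^d P^d g, after which one solves an ordinary (undelayed) stopping problem for h. A sketch, with an illustrative chain and parameters:

```python
import numpy as np

# Delay of d steps: deciding at time tau pays beta^(tau+d) g(X_{tau+d});
# by the Markov property this equals beta^tau * h(X_tau) with
# h = beta^d * P^d g, i.e. a classical stopping problem with reward h.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
g = np.array([1.0, 2.0, 0.5])
beta, d, N = 0.9, 2, 50

h = beta ** d * np.linalg.matrix_power(P, d) @ g  # transformed reward

U = h.copy()                       # classical backward induction on h
for _ in range(N):
    U = np.maximum(h, beta * P @ U)
```

Note how the delay flattens the reward (h averages g over d transition steps), which typically enlarges the continuation region relative to the undelayed problem.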

