Delay Network Tomography Using a Partially Observable Bivariate Markov Chain

2017 · Vol 25 (1) · pp. 126-138
Author(s): Neshat Etemadi Rad, Yariv Ephraim, Brian L. Mark
1988 · Vol 25 (A) · pp. 335-346
Author(s): J. Gani

This paper considers a bivariate random walk model on a rectangular lattice for a particle injected into a fluid flowing in a tank. The numbers of jumps of the particle in the x and y directions in this particular model are correlated. It is shown that when the random walk forms a bivariate Markov chain in continuous time, it is possible to obtain the state probabilities p_xy(t) through their Laplace transforms. Two exit rules are considered and results for both are derived.
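The kind of model the abstract describes can be sketched with a small simulation: a continuous-time random walk on the non-negative lattice whose jump set includes a diagonal move, so the x and y jump counts are correlated. The jump vectors and rates below are illustrative assumptions, not taken from the paper.

```python
import random

def simulate_walk(rates, steps=1000, seed=0):
    """Simulate a continuous-time random walk on the non-negative lattice.

    `rates` maps a jump vector (dx, dy) to its exponential rate.
    Including a diagonal jump such as (1, 1) correlates the numbers
    of jumps in the x and y directions.
    """
    rng = random.Random(seed)
    x = y = 0
    t = 0.0
    moves = list(rates)
    weights = [rates[m] for m in moves]
    total = sum(weights)
    for _ in range(steps):
        t += rng.expovariate(total)            # exponential holding time
        dx, dy = rng.choices(moves, weights)[0]
        x, y = max(x + dx, 0), max(y + dy, 0)  # reflect at the boundary
    return x, y, t

# Hypothetical rates: moves in x, in y, and a correlated diagonal move.
x, y, t = simulate_walk({(1, 0): 1.0, (0, 1): 1.0, (1, 1): 0.5})
```

Running many such trajectories estimates the state probabilities p_xy(t) that the paper obtains analytically via Laplace transforms.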


2006 · Vol 60 (1) · pp. 173-191
Author(s): M. V. Koutras, S. Bersimis, D. L. Antzoulakos

1989 · Vol 2 (1) · pp. 53-70
Author(s): Marcel F. Neuts, Ushio Sumita, Yoshitaka Takahashi

A Markov modulated Poisson process (MMPP) M(t) defined on a Markov chain J(t) is a pure jump process where jumps of M(t) occur according to a Poisson process with intensity λ_i whenever the Markov chain J(t) is in state i. M(t) is called strongly renewal (SR) if M(t) is a renewal process for an arbitrary initial probability vector of J(t) with full support on P = {i : λ_i > 0}. M(t) is called weakly renewal (WR) if there exists an initial probability vector of J(t) such that the resulting MMPP is a renewal process. The purpose of this paper is to develop general characterization theorems for the class SR and some sufficiency theorems for the class WR in terms of the first passage times of the bivariate Markov chain [J(t), M(t)]. Relevance to the lumpability of J(t) is also studied.
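The MMPP definition above lends itself to a short simulation sketch: a continuous-time chain J(t) jumps between states, and while J(t) = i, points of M(t) arrive at rate λ_i. The two-state generator and intensities used below are illustrative assumptions, not parameters from the paper.

```python
import random

def simulate_mmpp(q, lam, T, seed=0):
    """Count MMPP arrivals on [0, T].

    q[i][j] is the off-diagonal generator rate of J(t) (q[i][i] = 0),
    and lam[i] is the Poisson intensity of M(t) while J(t) = i.
    Uses the standard competing-exponentials construction: the next
    event is either a jump of J(t) or an arrival of M(t).
    """
    rng = random.Random(seed)
    i, t, arrivals = 0, 0.0, 0
    while True:
        exit_rate = sum(q[i])           # rate of leaving state i
        total = exit_rate + lam[i]      # next event: jump of J or point of M
        t += rng.expovariate(total)
        if t > T:
            return arrivals
        if rng.random() < lam[i] / total:
            arrivals += 1               # point of M(t)
        else:                           # jump of J(t), proportional to q[i][j]
            r = rng.random() * exit_rate
            for j, rate in enumerate(q[i]):
                r -= rate
                if r < 0:
                    i = j
                    break

# Hypothetical two-state example: arrivals occur only while J(t) = 0.
n = simulate_mmpp(q=[[0.0, 1.0], [0.5, 0.0]], lam=[2.0, 0.0], T=50.0)
```

A renewal-process check in this setting would ask whether the inter-arrival times of M(t) are i.i.d., which is exactly the SR/WR distinction the paper formalizes through first passage times of [J(t), M(t)].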


1982 · Vol 19 (1) · pp. 72-81
Author(s): George E. Monahan

The problem of optimal stopping in a Markov chain when there is imperfect state information is formulated as a partially observable Markov decision process. Properties of the optimal value function are developed. It is shown that under mild conditions the optimal policy is well structured. An efficient algorithm, which uses the structural information in the computation of the optimal policy, is presented.
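Under imperfect state information, the standard device is to work with a belief over the hidden state, updated by Bayes' rule after each observation; the stopping policy is then a function of the belief. The following is a minimal sketch of that belief update, with illustrative placeholder matrices rather than anything from the paper.

```python
def belief_update(b, P, O, obs):
    """One Bayes step for the belief over a hidden Markov state:
    predict through the transition matrix P, then condition on the
    observation via the likelihoods O[state][obs]."""
    n = len(b)
    pred = [sum(b[i] * P[i][j] for i in range(n)) for j in range(n)]
    post = [pred[j] * O[j][obs] for j in range(n)]
    z = sum(post)                      # probability of the observation
    return [p / z for p in post]

# Hypothetical two-state model: P is the chain, O the observation channel.
b = belief_update(b=[0.5, 0.5],
                  P=[[0.9, 0.1], [0.2, 0.8]],
                  O=[[0.8, 0.2], [0.3, 0.7]],
                  obs=0)
```

In an optimal-stopping POMDP of this kind, one stops when the expected terminal reward under the current belief exceeds the continuation value; the structural results in the paper concern exactly such threshold-type policies on the belief space.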


Author(s): Nan Li, Hao Chen, Ilya Kolmanovsky, Anouck Girard

In this paper, an explicit decision tree approach for automated driving is proposed. The ego vehicle operates in traffic that is modeled as a discrete-time Markov chain whose state is only partially observable. In this setting, the automated driving policy is generated offline by a decision tree algorithm, yielding an explicit implementation for online use. Simulation results of an implementation of this approach for automated driving on a two-lane highway are reported to illustrate the potential of the approach.
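The appeal of an explicit policy is that online execution reduces to walking a precomputed tree. The sketch below shows that lookup step only; the tree layout, feature names, and actions are hypothetical illustrations, not the paper's policy.

```python
def tree_policy(node, observation):
    """Evaluate a precomputed decision tree.

    Internal nodes are tuples (feature, threshold, left, right) that
    test an observation feature; leaves are action strings.
    """
    while isinstance(node, tuple):
        feat, thr, left, right = node
        node = left if observation[feat] <= thr else right
    return node

# Hypothetical offline-built tree for a two-lane highway scenario.
tree = ("gap_ahead", 10.0,
        "brake",
        ("lane_clear", 0.5, "keep_lane", "change_lane"))

action = tree_policy(tree, {"gap_ahead": 20.0, "lane_clear": 1.0})
# → "change_lane"
```

Each online decision is then a handful of comparisons, which is what makes the offline decision tree construction attractive compared with solving the partially observable problem in real time.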


