Nonhomogeneous Stochastic Automata

1981 ◽  
Vol 4 (4) ◽  
pp. 891-917
Author(s):  
Sławomir Janicki

In this note we consider a nonhomogeneous Markov-chain-type stochastic automaton which generalizes Bartoszyński's stochastic automaton; the latter is, in turn, a stochastic generalization of Pawlak's well-known machine. By a nonhomogeneous stochastic automaton we mean a system ⟨T, α, {A(n), n ⩾ 1}⟩, where T is a finite nonempty set, α is an initial distribution on T, and {A(n), n ⩾ 1} is a sequence of stochastic matrices, each called a transition probability matrix. If A(n) = A for all n ⩾ 1, we recover Bartoszyński's automaton. A sequence (t_{i_0}, t_{i_1}, …), t_{i_j} ∈ T, j = 0, 1, 2, …, is called a word of the automaton if α(t_{i_0}) > 0 and A(k)(t_{i_{k−1}}, t_{i_k}) > 0 for every k ⩾ 1. The goal of this note is to give necessary and sufficient conditions for the existence of an extension and a shrinkage of the automata under consideration; for the homogeneous case ⟨T, α, A⟩ these problems were first considered by Bartoszyński. The shrinkage problem deals with the existence of a stochastic automaton which generates exactly those sequences of states of T that are generated by both of two given automata, while the extension problem concerns the existence of a stochastic automaton which generates all sequences of states of T that are generated by at least one of two given automata. Moreover, we introduce some new notions: attainable state, and concordance of automata in a wide and in a narrow sense, which help us to solve the problems mentioned above.
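The word condition above is easy to state operationally. The following sketch (my own illustration, not from the paper; all names are invented) checks whether a finite prefix (t_0, …, t_n) can begin a word of a nonhomogeneous stochastic automaton ⟨T, α, {A(n)}⟩:

```python
# Sketch: test whether a finite state sequence is the prefix of a word
# of a nonhomogeneous stochastic automaton <T, alpha, {A(n)}>.
# States are indices 0..|T|-1; matrices[k-1] plays the role of A(k).

def is_word_prefix(states, alpha, matrices):
    """states: list t_0..t_n; alpha: initial probabilities;
    matrices: list of stochastic matrices, matrices[k-1] = A(k)."""
    if alpha[states[0]] <= 0:
        return False  # alpha(t_0) > 0 is required
    for k in range(1, len(states)):
        # A(k)(t_{k-1}, t_k) > 0 is required for every step k
        if matrices[k - 1][states[k - 1]][states[k]] <= 0:
            return False
    return True

# Two-state example: A(1) forbids the move 0 -> 1, A(2) allows everything.
alpha = [1.0, 0.0]
A1 = [[1.0, 0.0], [0.5, 0.5]]
A2 = [[0.5, 0.5], [0.5, 0.5]]
print(is_word_prefix([0, 0, 1], alpha, [A1, A2]))  # True
print(is_word_prefix([0, 1], alpha, [A1, A2]))     # False: A(1)(0, 1) = 0
```

The nonhomogeneity matters: the same transition may be allowed at one time step and forbidden at another, which is exactly what distinguishes this model from Bartoszyński's homogeneous case A(n) = A.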

2015 ◽  
Vol 93 (3) ◽  
pp. 473-485 ◽  
Author(s):  
JIAN-ZE LI

In this article, we study the Mazur–Ulam property of the sum of two strictly convex Banach spaces. We give an equivalent form of the isometric extension problem and two equivalent conditions to decide whether all strictly convex Banach spaces admit the Mazur–Ulam property. We also find necessary and sufficient conditions under which the $\ell ^{1}$-sum and the $\ell ^{\infty }$-sum of two strictly convex Banach spaces admit the Mazur–Ulam property.


1996 ◽  
Vol 33 (04) ◽  
pp. 974-985 ◽  
Author(s):  
F. Simonot ◽  
Y. Q. Song

Let P be an infinite irreducible stochastic matrix, positive recurrent and stochastically monotone, and let P_n be any n × n stochastic matrix with P_n ≧ T_n, where T_n denotes the n × n northwest corner truncation of P. These assumptions imply the existence of limit distributions π and π_n for P and P_n respectively. We show that if the Markov chain with transition probability matrix P meets the further condition of geometric recurrence, then the exact convergence rate of π_n to π can be expressed in terms of the radius of convergence of the generating function of π. As an application of the preceding result, we deal with the random walk on a half line and prove that the assumption of geometric recurrence can be relaxed. We also show that if the i.i.d. input sequence (A(m)) is such that we can find a real number r_0 > 1 at which the generating function of A is finite, then the exact convergence rate of π_n to π is characterized by r_0. Moreover, when the generating function of A is not defined for |z| > 1, we derive an upper bound for the distance between π_n and π based on the moments of A.
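A concrete toy instance of the truncation-and-augmentation setup (my own construction, not the paper's): take P to be a reflected random walk on {0, 1, 2, …} with up-probability p < 1/2, form the northwest corner truncation T_n, and push each row's lost mass onto the last state so that the resulting P_n ≧ T_n is stochastic. The stationary law of P is geometric with ratio r = p/(1 − p), and π_n approaches it quickly:

```python
# Truncation-and-augmentation sketch for a reflected random walk on the
# half line: T_n is the n x n northwest corner of P, and the deficient
# mass of each row is added to the last column (linear augmentation).

def truncated_chain(n, p):
    q = 1.0 - p
    P = [[0.0] * n for _ in range(n)]
    P[0][0] = q                      # reflection at 0
    for i in range(n):
        if i + 1 < n:
            P[i][i + 1] = p          # step up
        if i - 1 >= 0:
            P[i][i - 1] = q          # step down
    for row in P:                    # augmentation onto the last state
        row[n - 1] += 1.0 - sum(row)
    return P

def stationary(P, iters=5000):
    # crude power iteration; adequate for this small example
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Untruncated chain: pi(i) = (1 - r) r^i with r = p / (1 - p).
p, r = 0.3, 0.3 / 0.7
pi8 = stationary(truncated_chain(8, p))
print(abs(pi8[0] - (1 - r)))  # already small at n = 8
```

The geometric tail of π is what drives the geometric convergence rate of π_n to π described in the abstract.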


2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Dan Ye ◽  
Quan-Yong Fan ◽  
Xin-Gang Zhao ◽  
Guang-Hong Yang

This paper is concerned with delay-dependent stochastic stability for time-delay Markovian jump systems (MJSs) with sector-bounded nonlinearities and more general transition probabilities. In contrast with previous results, where the transition probability matrix is completely known, a more general transition probability matrix is considered here, containing completely known elements, elements known only within bounds, and completely unknown ones. To obtain a less conservative criterion, the available state and transition probability information is used as fully as possible in constructing the Lyapunov-Krasovskii functional and carrying out the stability analysis. Delay-dependent sufficient conditions are derived in terms of linear matrix inequalities to guarantee the stability of the systems. Finally, numerical examples demonstrate the effectiveness of the proposed method.
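The three kinds of entries (known, bounded, unknown) admit a simple encoding. The sketch below is my own illustration, not the paper's notation: it represents one row of such a generalized transition probability matrix and checks whether the partial information is consistent with some valid probability row:

```python
# Encode a row of a partly known transition probability matrix:
#   float      -> completely known entry
#   (lo, hi)   -> entry known only within bounds
#   None       -> completely unknown entry (bounds [0, 1])
# A row is feasible iff the attainable total mass interval contains 1.

def row_is_feasible(row):
    lo = hi = 0.0
    for e in row:
        if e is None:
            hi += 1.0
        elif isinstance(e, tuple):
            lo += e[0]
            hi += e[1]
        else:
            lo += e
            hi += e
    return lo <= 1.0 <= hi

print(row_is_feasible([0.3, (0.1, 0.4), None]))  # True
print(row_is_feasible([0.8, (0.3, 0.4), 0.0]))   # False: known mass exceeds 1
```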


2018 ◽  
Vol 50 (01) ◽  
pp. 178-203 ◽  
Author(s):  
Nicolas Champagnat ◽  
Denis Villemonais

In this paper we study the quasi-stationary behavior of absorbed one-dimensional diffusions. We obtain necessary and sufficient conditions for the exponential convergence to a unique quasi-stationary distribution in total variation, uniformly with respect to the initial distribution. An important tool is provided by one-dimensional strict local martingale diffusions coming down from infinity. We prove, under mild assumptions, that their expectation at any positive time is uniformly bounded with respect to the initial position. We provide several examples and extensions, including the sticky Brownian motion and some one-dimensional processes with jumps.


1997 ◽  
Vol 34 (3) ◽  
pp. 790-794 ◽  
Author(s):  
R. M. Phatarfod ◽  
A. J. Pryde ◽  
David Dyte

In this paper we consider the operation of the move-to-front scheme where the requests form a Markov chain of N states with transition probability matrix P. It is shown that the configurations of items at successive requests form a Markov chain, and its transition probability matrix has eigenvalues that are the eigenvalues of all the principal submatrices of P except those of order N − 1. We also show that the multiplicity of the eigenvalues of submatrices of order m is the number of derangements of N − m objects. The last result is shown to be true even if P is not a stochastic matrix.
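A quick numeric sanity check of the multiplicity claim (my own, not from the paper): each of the C(N, m) principal submatrices of order m contributes its m eigenvalues with multiplicity D(N − m), the number of derangements of N − m objects, so the grand total must equal N!, the number of configurations of the move-to-front chain. Note the order N − 1 exclusion is automatic, since D(1) = 0.

```python
# Verify: sum over m of C(N, m) * m * D(N - m) == N!
from math import comb, factorial

def derangements(k):
    # D(0) = 1, D(1) = 0, D(k) = (k - 1) * (D(k - 1) + D(k - 2))
    d = [1, 0]
    for i in range(2, k + 1):
        d.append((i - 1) * (d[i - 1] + d[i - 2]))
    return d[k]

N = 5
total = sum(comb(N, m) * m * derangements(N - m) for m in range(1, N + 1))
print(total == factorial(N))  # True; m = N - 1 contributes nothing, D(1) = 0
```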


2008 ◽  
Vol 45 (01) ◽  
pp. 211-225 ◽  
Author(s):  
Alexander Dudin ◽  
Chesoong Kim ◽  
Valentina Klimenok

In this paper we consider discrete-time multidimensional Markov chains having a block transition probability matrix which is the sum of a matrix with repeating block rows and a matrix of upper-Hessenberg, quasi-Toeplitz structure. We derive sufficient conditions for the existence of the stationary distribution, and outline two algorithms for calculating the stationary distribution.


Author(s):  
Halina Frydman

In this paper we consider the embedding problem for Markov chains with three states. A non-singular stochastic matrix P is called embeddable if there exists a two-parameter family of stochastic matrices P(s, t), 0 ⩽ s ⩽ t ⩽ 1, satisfying P(s, u)P(u, t) = P(s, t) for all s ⩽ u ⩽ t, and such that P(0, 1) = P. Though extensive characterizations of embeddable n × n stochastic matrices have been given in (1), (2), (3), (6), and further characterizations of embeddable 3 × 3 stochastic matrices in (4), they do not provide, except in the case of 2 × 2 stochastic matrices, easily applicable necessary and sufficient conditions for embeddability.
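The 2 × 2 case mentioned at the end is fully explicit, and worth sketching (a classical fact, not a result of this paper): a non-singular 2 × 2 stochastic matrix P is embeddable iff det P > 0, i.e. p11 + p22 > 1, in which case Q = (ln d / (d − 1))(P − I) with d = det P is an intensity matrix satisfying exp(Q) = P.

```python
# Embeddability test and generator construction for 2x2 stochastic P.
from math import log, factorial

def generator_2x2(P):
    d = P[0][0] + P[1][1] - 1.0        # det P for a 2x2 stochastic matrix
    if d <= 0:
        return None                     # not embeddable
    c = 1.0 if abs(d - 1.0) < 1e-12 else log(d) / (d - 1.0)
    I = [[1.0, 0.0], [0.0, 1.0]]
    return [[c * (P[i][j] - I[i][j]) for j in range(2)] for i in range(2)]

def expm(Q, terms=40):
    # truncated power series exp(Q) = sum_k Q^k / k!
    n = len(Q)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    Pw = [row[:] for row in R]
    for k in range(1, terms):
        Pw = [[sum(Pw[i][l] * Q[l][j] for l in range(n)) for j in range(n)]
              for i in range(n)]
        R = [[R[i][j] + Pw[i][j] / factorial(k) for j in range(n)]
             for i in range(n)]
    return R

P = [[0.9, 0.1], [0.2, 0.8]]
Q = generator_2x2(P)
err = max(abs(expm(Q)[i][j] - P[i][j]) for i in range(2) for j in range(2))
print(err)  # essentially zero: exp(Q) recovers P
```

For 3 × 3 matrices no such closed form is known, which is precisely the difficulty the paper addresses.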


2021 ◽  
Author(s):  
Matheus Guedes de Andrade ◽  
Franklin De Lima Marquezino ◽  
Daniel Ratton Figueiredo

Quantum walks on graphs are ubiquitous in quantum computing, with a myriad of applications. Likewise, random walks on graphs are a fundamental building block for a large number of algorithms with diverse applications. While the relationship between quantum and random walks has recently been discussed in specific scenarios, this work establishes a formal equivalence between the two processes on arbitrary finite graphs, under general conditions on the shift and coin operators. It requires empowering random walks with time heterogeneity, where the transition probability of the walker is non-uniform and time dependent. The equivalence is obtained by equating the probability of measuring the quantum walk at a given node of the graph with the probability that the random walk is at that same node, for all nodes and time steps. The first result establishes a procedure for constructing a stochastic matrix sequence that induces a random walk with exactly the same vertex probability distribution sequence as any given quantum walk, including the scenario with multiple interfering walkers. The second result establishes a similar procedure in the opposite direction: given any random walk, a time-dependent quantum walk with exactly the same vertex probability distribution is constructed. Interestingly, the matrices constructed by the first procedure allow for a different simulation approach to quantum walks, in which node samples respect neighbor locality and convergence is guaranteed by the law of large numbers, enabling efficient (polynomial-time) sampling of quantum graph trajectories. Furthermore, the complexity of constructing this sequence of matrices is discussed in the general case.
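A toy version of the first direction (my own construction, deliberately much weaker than the paper's): run a coined Hadamard walk on a 4-cycle, record its vertex distribution sequence q_0, q_1, …, and build a trivial time-dependent stochastic matrix sequence P_t whose rows all equal q_{t+1}. The induced random walk then reproduces every q_t exactly. Unlike the paper's construction, this sketch ignores neighbor locality on the graph.

```python
# Coined Hadamard quantum walk on a 4-cycle, then a time-heterogeneous
# random walk whose vertex distributions match the quantum walk's.
from math import sqrt

N, steps = 4, 5
h = 1 / sqrt(2)
# state[(v, c)] = amplitude at vertex v with coin c in {0 (cw), 1 (ccw)}
state = {(0, 0): h, (0, 1): h}

def step(state):
    new = {}
    for (v, c), a in state.items():
        # Hadamard coin: |0> -> (|0>+|1>)/sqrt2, |1> -> (|0>-|1>)/sqrt2,
        # then shift conditioned on the new coin value
        for c2, w in ((0, h), (1, h if c == 0 else -h)):
            v2 = (v + 1) % N if c2 == 0 else (v - 1) % N
            new[(v2, c2)] = new.get((v2, c2), 0.0) + w * a
    return new

dists = []
for _ in range(steps):
    dists.append([sum(abs(state.get((v, c), 0.0)) ** 2 for c in (0, 1))
                  for v in range(N)])
    state = step(state)

# Trivial matching sequence: every row of P_t equals dists[t + 1].
P = [[dists[t + 1][:] for _ in range(N)] for t in range(steps - 1)]
q = dists[0]
for Pt in P:
    q = [sum(q[u] * Pt[u][v] for u in range(N)) for v in range(N)]
print(max(abs(q[v] - dists[-1][v]) for v in range(N)))  # essentially zero
```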

