Random Walks with Invariant Loop Probabilities: Stereographic Random Walks

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 729
Author(s):  
Miquel Montero

Random walks with invariant loop probabilities comprise a wide family of Markov processes with site-dependent, one-step transition probabilities. The whole family, which includes the simple random walk, emerges from geometric considerations related to the stereographic projection of an underlying geometry into a line. After a general introduction, we focus our attention on the elliptic case: random walks on a circle with built-in reflecting boundaries.
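As a rough illustration of a walk with site-dependent one-step probabilities and reflecting boundaries, the sketch below simulates a nearest-neighbour walk on a finite chain of sites. The array `p_right` and the segment geometry are illustrative stand-ins; the paper derives its specific elliptic probabilities from the stereographic projection, which this sketch does not reproduce.

```python
import random

def walk_with_site_probs(p_right, n_steps, start=5, seed=0):
    """Nearest-neighbour walk on sites 0..N-1 with reflecting ends.
    p_right[i] is a (hypothetical) site-dependent probability of
    stepping right from site i."""
    rng = random.Random(seed)
    N = len(p_right)
    x = start
    path = [x]
    for _ in range(n_steps):
        if x == 0:
            x = 1               # reflecting left boundary
        elif x == N - 1:
            x = N - 2           # reflecting right boundary
        elif rng.random() < p_right[x]:
            x += 1
        else:
            x -= 1
        path.append(x)
    return path

# with uniform p_right the scheme reduces to the simple random walk
path = walk_with_site_probs([0.5] * 11, 1000)
assert len(path) == 1001
assert all(0 <= x <= 10 for x in path)
```

Plugging a non-uniform `p_right` into the same loop gives the site-dependent members of the family.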

1981 ◽  
Vol 13 (01) ◽  
pp. 61-83 ◽  
Author(s):  
Richard Serfozo

This is a study of simple random walks, birth and death processes, and M/M/s queues whose transition probabilities and rates are sequentially controlled at the jump times of the process. Each control action yields a one-step reward depending on the chosen probabilities or transition rates and the state of the process. The aim is to find control policies that maximize the total discounted or average reward. Conditions are given for these processes to have certain natural monotone optimal policies. Under such a policy for the M/M/s queue, for example, the service and arrival rates are non-decreasing and non-increasing functions, respectively, of the queue length. Properties of these policies and a linear program for computing them are also discussed.
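A minimal way to see what "sequentially controlled at jump times" means is discounted value iteration on a small controlled birth-death chain, sketched below. The state space truncation, the cost numbers, and the two candidate service rates are all illustrative assumptions; the paper itself treats infinite-state processes under monotonicity conditions.

```python
def value_iteration(states, actions, p, r, beta=0.9, tol=1e-9):
    """Discounted-reward value iteration for a finite controlled Markov
    chain.  p(i, a) returns the one-step transition probabilities from
    state i under action a; r(i, a) is the one-step reward."""
    V = {i: 0.0 for i in states}
    while True:
        V_new = {i: max(r(i, a) + beta * sum(q * V[j] for j, q in p(i, a).items())
                        for a in actions)
                 for i in states}
        if max(abs(V_new[i] - V[i]) for i in states) < tol:
            return V_new
        V = V_new

# controlled birth-death chain: the action a is a (uniformized) service rate
states = range(6)
lam, actions = 0.4, [0.3, 0.5]

def p(i, a):
    q = {}
    if i < 5:
        q[i + 1] = lam
    if i > 0:
        q[i - 1] = a
    q[i] = 1.0 - sum(q.values())
    return q

def r(i, a):
    return -i - 2 * a   # holding cost plus a cost for faster service

V = value_iteration(states, actions, p, r)
# the optimal value decreases as the queue grows
assert all(V[i] > V[i + 1] for i in range(5))
```

Under Serfozo's conditions the optimal policy computed this way would choose a non-decreasing service rate in the queue length; the sketch only verifies the monotonicity of the value function.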


1977 ◽  
Vol 14 (02) ◽  
pp. 298-308 ◽  
Author(s):  
Peter R. Nelson

In a single-shelf library having infinitely many books B1, B2, …, the probability of selecting each book is assumed known. Books are removed one at a time and replaced in position k prior to the next removal. Books are moved either to the right or the left as is necessary to vacate position k. Those arrangements of books where after some finite position all the books are in natural order (book i occupies position i) are considered as states in an infinite Markov chain. When k > 1, we show that the chain can never be positive recurrent. When k = 1, we find the limits of ratios of one-step transition probabilities; and when k = 1 and the chain is transient, we find the Martin exit boundary.
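The move-to-position-k scheme is easy to simulate on a finite shelf, which is an illustrative stand-in for the infinite shelf in the paper (the function name and the selection weights below are hypothetical):

```python
import random

def move_to_position_k(weights, k, n_steps, seed=0):
    """Pick book i with probability proportional to weights[i], remove
    it, and reinsert it at position k (1-indexed); the other books
    shift to close the gap, as in the library scheme."""
    rng = random.Random(seed)
    shelf = list(range(len(weights)))
    for _ in range(n_steps):
        book = rng.choices(shelf, weights=[weights[b] for b in shelf])[0]
        shelf.remove(book)
        shelf.insert(k - 1, book)
    return shelf

# k = 1 is the classical move-to-front rule
shelf = move_to_position_k([0.6, 0.2, 0.1, 0.05, 0.05], k=1, n_steps=2000)
assert sorted(shelf) == [0, 1, 2, 3, 4]
```

Over many steps, popular books tend to drift toward position k, which is the intuition behind the recurrence dichotomy in the abstract.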


1988 ◽  
Vol 20 (01) ◽  
pp. 99-111 ◽  
Author(s):  
Nico M. Van Dijk

Consider a perturbation in the one-step transition probabilities and rewards of a discrete-time Markov reward process with an unbounded one-step reward function. A perturbation estimate is derived for the finite-horizon and average reward function. Results from [3] are hereby extended to the unbounded case. The analysis is illustrated for one- and two-dimensional queueing processes by an M/M/1 queue and an overflow queueing model with an error bound in the arrival rate.
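The flavour of such a perturbation analysis can be seen numerically: take the mean queue length of an M/M/1 queue as an (unbounded) reward function and compare it under a perturbed arrival rate. The truncation level and the rates below are illustrative choices, not the bounds derived in the paper.

```python
def stationary_mm1(lam, mu, N=200):
    """Stationary distribution of an M/M/1 queue truncated at N
    (birth-death balance: pi[i+1] = (lam/mu) * pi[i])."""
    rho = lam / mu
    pi = [rho ** i for i in range(N + 1)]
    s = sum(pi)
    return [x / s for x in pi]

def mean_queue(lam, mu):
    """Average of the unbounded reward f(i) = i under the stationary law."""
    return sum(i * x for i, x in enumerate(stationary_mm1(lam, mu)))

m = mean_queue(0.5, 1.0)          # exact M/M/1 mean is rho/(1 - rho) = 1
m_pert = mean_queue(0.55, 1.0)    # perturbed arrival rate
assert abs(m - 1.0) < 1e-6
assert m_pert > m
```

A perturbation bound of the kind in the abstract would control the gap `m_pert - m` in terms of the error in the arrival rate.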


Author(s):  
Igor Vitalievich Kotenko ◽  
Igor Borisovich Parashchuk

The object of research is the process of detecting harmful information in social networks and the global network. An approach is proposed for verifying the parameters of a mathematical model of the random process of detecting malicious information under unreliable, inaccurate (contradictory) initial data. The approach is based on stochastic equations of state and observation built on controlled Markov chains in finite differences. Verification of the key parameters of such a model, the elements of the matrix of one-step transition probabilities, is performed using an extrapolating neural network. This makes it possible to account for and compensate the inaccuracy of the initial data inherent in random processes of searching for and detecting malicious information, and to increase the accuracy of decisions on assessing and categorizing digital network content in order to detect and counter information of this class.


1999 ◽  
Vol 12 (4) ◽  
pp. 371-392
Author(s):  
Bong Dae Choi ◽  
Sung Ho Choi ◽  
Dan Keun Sung ◽  
Tae-Hee Lee ◽  
Kyu-Seog Song

We analyze the transient behavior of a Markovian arrival queue with congestion control based on two thresholds, where the arrival process is a queue-length-dependent Markovian arrival process. We consider the Markov chain embedded at arrival epochs and derive the one-step transition probabilities. From these results, we obtain the mean delay and the loss probability of the nth arrival packet. Before studying this complex model, we first give a transient analysis of a MAP/M/1 queueing system without congestion control at arrival epochs. We apply our result to a signaling system No. 7 network with congestion control based on thresholds.
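The role of the two thresholds can be illustrated with a much simpler stand-in: a uniformized M/M/1-type simulation with an on/off hysteresis rule (block arrivals above the upper threshold, accept again at the lower one). The control rule, the Poisson arrivals, and all parameter values below are assumptions for illustration; the paper's model modulates a Markovian arrival process by the queue length rather than switching it off.

```python
import random

def simulate_threshold_queue(lam, mu, low, high, n_events, seed=0):
    """Uniformized sketch of a single-server queue with two-threshold
    congestion control: arrivals are blocked once the queue reaches
    `high` and accepted again when it falls to `low`."""
    rng = random.Random(seed)
    q, blocking, losses = 0, False, 0
    for _ in range(n_events):
        if rng.random() < lam / (lam + mu):   # next event: an arrival
            if blocking:
                losses += 1                   # lost packet
            else:
                q += 1
                if q >= high:
                    blocking = True
        elif q > 0:                           # next event: a departure
            q -= 1
            if q <= low:
                blocking = False
    return q, losses

q, losses = simulate_threshold_queue(0.8, 1.0, low=3, high=10, n_events=10000)
assert 0 <= q <= 10
assert losses >= 0
```

The hysteresis gap between `low` and `high` prevents the control from oscillating on every arrival, which is the motivation for using a pair of thresholds rather than one.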


1983 ◽  
Vol 20 (1) ◽  
pp. 178-184 ◽  
Author(s):  
Harry Cohn

A Borel–Cantelli-type property in terms of one-step transition probabilities is given for events like {|X_{n+1}| > a + ε, |X_n| ≤ a}, where a and ε are positive numbers. Applications to normed sums of i.i.d. random variables with infinite mean and to branching processes in varying environment, with or without immigration, are derived.


2018 ◽  
Vol 55 (3) ◽  
pp. 862-886 ◽  
Author(s):  
F. Alberto Grünbaum ◽  
Manuel D. de la Iglesia

We consider upper-lower (UL) (and lower-upper (LU)) factorizations of the one-step transition probability matrix of a random walk with state space the nonnegative integers, with the condition that both upper and lower triangular matrices in the factorization are also stochastic matrices. We provide conditions on the free parameter of the UL factorization in terms of certain continued fractions such that this stochastic factorization is possible. By inverting the order of the factors (also known as a Darboux transformation) we obtain a new family of random walks where it is possible to state the spectral measures in terms of a Geronimus transformation. We repeat this for the LU factorization but without a free parameter. Finally, we apply our results in two examples: the random walk with constant transition probabilities, and the random walk generated by the Jacobi orthogonal polynomials. In both situations we obtain urn models associated with all the random walks in question.
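A finite-matrix sketch of the LU case is easy to check numerically: factor a tridiagonal stochastic matrix into bidiagonal stochastic factors and form the Darboux transform by swapping them. The bidiagonal parametrization and the lazy-walk example below are illustrative; as the abstract notes, the factorization into stochastic factors is not possible for every chain (positivity of the factors can fail).

```python
import numpy as np

def lu_stochastic(P):
    """LU factorization P = L @ U of a tridiagonal stochastic matrix,
    with L lower-bidiagonal and U upper-bidiagonal, both stochastic.
    Row i of L is [y_i, 1 - y_i]; row i of U is [x_i, 1 - x_i]."""
    n = len(P)
    x = np.zeros(n)
    y = np.zeros(n)
    x[0] = 1 - P[0, 1] if n > 1 else 1.0
    for i in range(1, n):
        y[i] = P[i, i - 1] / x[i - 1]
        x[i] = 1 - (P[i, i + 1] if i < n - 1 else 0.0) / (1 - y[i])
    L = np.diag(1 - y) + np.diag(y[1:], -1)
    U = np.diag(x) + np.diag(1 - x[:-1], 1)
    return L, U

# lazy random walk on 0..9: step probability 0.1 each way, hold 0.8
N = 10
P = np.diag([0.8] * N) + np.diag([0.1] * (N - 1), 1) + np.diag([0.1] * (N - 1), -1)
P[0, 0] += 0.1
P[N - 1, N - 1] += 0.1

L, U = lu_stochastic(P)
assert np.allclose(L @ U, P)
assert np.all(L >= 0) and np.all(U >= 0)       # both factors stochastic here
Pt = U @ L   # Darboux transformation: a new stochastic transition matrix
assert np.allclose(Pt.sum(axis=1), 1)
```

Swapping the factors (`U @ L`) yields the new random walk of the abstract; in the UL case a free parameter would enter through the first row of the upper factor.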


1969 ◽  
Vol 6 (3) ◽  
pp. 704-707 ◽  
Author(s):  
Thomas L. Vlach ◽  
Ralph L. Disney

The departure process from the GI/G/1 queue is shown to be a semi-Markov process imbedded at departure points with a two-dimensional state space. Transition probabilities for this process are defined and derived from the distributions of the arrival and service processes. The one-step transition probabilities and a stationary distribution are obtained for the imbedded two-dimensional Markov chain.
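In the M/M/1 special case the chain embedded at departures collapses to one dimension (the number left behind), and its one-step transition probabilities and stationary distribution can be written down and checked numerically. The sketch below is this classical specialization, not the two-dimensional GI/G/1 chain of the paper; the truncation level is an assumption.

```python
import numpy as np

lam, mu, N = 0.5, 1.0, 60
r = lam / (lam + mu)
a = np.array([(1 - r) * r**k for k in range(N + 1)])  # arrivals during one service

# one-step transition matrix of the chain embedded at departure epochs:
# a departure leaving an empty system first waits for an arrival,
# so state 0 behaves like state 1
P = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    src = max(n, 1)
    for k in range(N + 1):
        j = src - 1 + k
        if j <= N:
            P[n, j] += a[k]
    P[n, N] += 1 - P[n].sum()   # lump the truncated tail into the last state

# stationary distribution of the embedded chain (left eigenvector for 1)
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

rho = lam / mu
# for M/M/1, departures leave behind the geometric stationary law
assert np.allclose(pi[:10], (1 - rho) * rho ** np.arange(10), atol=1e-6)
```

The two-dimensional chain in the paper augments this queue-length component with a second coordinate tracking the elapsed interarrival information that GI arrivals require.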

