Uniform sampling modulo a group of symmetries using Markov chain simulation

Author(s):  
Mark Jerrum


10.37236/3028 ◽  
2013 ◽  
Vol 20 (1) ◽  
Author(s):  
István Miklós ◽  
Péter L Erdős ◽  
Lajos Soukup

In this paper we consider a simple Markov chain for bipartite graphs with given degree sequence on $n$ vertices. We show that the mixing time of this Markov chain is bounded above by a polynomial in $n$ in the case of half-regular degree sequences. The novelty of our approach lies in the construction of the multicommodity flow in Sinclair's method.
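The swap move that drives such chains can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' construction; the edge representation and vertex labels are assumptions.

```python
import random

def swap_step(edges):
    """One step of the swap chain on a bipartite graph, given as a set of
    (left, right) edge pairs.  Two edges (u1,v1), (u2,v2) are replaced by
    (u1,v2), (u2,v1) when both replacements are absent; either way the
    degree of every vertex on both sides is preserved."""
    e = set(edges)
    (u1, v1), (u2, v2) = random.sample(sorted(e), 2)
    if (u1, v2) not in e and (u2, v1) not in e:
        e -= {(u1, v1), (u2, v2)}
        e |= {(u1, v2), (u2, v1)}
    return e
```

Rejected proposals leave the graph unchanged, which is exactly the lazy behaviour the mixing-time analysis assumes.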


1996 ◽  
Vol 28 (2) ◽  
pp. 342-343
Author(s):  
Masaharu Tanemura

We consider two mechanisms for simulating spatial patterns of hard-core non-spherical particles, namely the random sequential packing (RSP) and the Markov chain Monte Carlo (MCMC) procedures. The former works as follows: we place particles one by one into a finite region, sampling each particle's location x and direction θ uniformly at random; if the particle does not overlap any particle placed before it, the placement succeeds; otherwise we discard it and draw another uniform sample of (x, θ). Repeating this yields a set of non-overlapping particles. The MCMC procedure is the following: we start from some non-overlapping pattern of non-spherical particles, prepared in a random or a regular manner; we then select a particle and sample a new trial location x and direction θ at random; if the new sample (x, θ) is accepted, i.e. it does not overlap with any other particle, the selected particle is moved to the new 'position'; otherwise it is retained at its old position. Repeating this generates a sequence of sets of non-overlapping particles.
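As an illustration, both procedures can be sketched for hard discs in the unit square, a deliberate simplification of the non-spherical case: discs have no direction θ, and the overlap test reduces to a distance check. The diameter value and helper names are assumptions for the sketch.

```python
import math
import random

def overlaps(p, q, diameter=0.05):
    # Two hard discs overlap when their centres lie closer than one diameter.
    return math.dist(p, q) < diameter

def rsp(n_trials, diameter=0.05):
    """Random sequential packing: accept a uniform candidate only if it
    avoids every disc already placed; rejected candidates are discarded."""
    placed = []
    for _ in range(n_trials):
        cand = (random.random(), random.random())
        if all(not overlaps(cand, p, diameter) for p in placed):
            placed.append(cand)
    return placed

def mcmc_step(discs, diameter=0.05):
    """One MCMC move: pick a disc, propose a fresh uniform position, and
    accept the move only if the new position is free of overlaps."""
    i = random.randrange(len(discs))
    cand = (random.random(), random.random())
    if all(j == i or not overlaps(cand, discs[j], diameter)
           for j in range(len(discs))):
        discs[i] = cand
    return discs
```

RSP fixes the number of accepted discs once the region jams, whereas the MCMC chain keeps the particle count constant and only rearranges positions.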


10.37236/9503 ◽  
2020 ◽  
Vol 27 (4) ◽  
Author(s):  
Pieter Kleer ◽  
Viresh Patel ◽  
Fabian Stroh

We consider the irreducibility of switch-based Markov chains for the approximate uniform sampling of Hamiltonian cycles in a given undirected dense graph on $n$ vertices. As our main result, we show that every pair of Hamiltonian cycles in a graph with minimum degree at least $n/2+7$ can be transformed into each other by switch operations of size at most 10, implying that the switch Markov chain using switches of size at most 10 is irreducible. As a proof of concept, we also show that this Markov chain is rapidly mixing on dense monotone graphs.
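For intuition, the smallest switch (size 2) is the familiar 2-opt move: delete two cycle edges and reconnect their endpoints, reversing the segment in between, provided both new edges exist in the graph. A hedged Python sketch follows; the list-plus-neighbour-set representation is an assumption, and this does not reproduce the paper's larger switches of size up to 10.

```python
import random

def two_switch_step(cycle, graph):
    """One step of the size-2 switch chain.  `cycle` is a list of vertices
    in cycle order; `graph` maps each vertex to its neighbour set.  The
    proposal is rejected unless both reconnecting edges are present."""
    n = len(cycle)
    i, j = sorted(random.sample(range(n), 2))
    a, b = cycle[i], cycle[(i + 1) % n]
    c, d = cycle[j], cycle[(j + 1) % n]
    # Skip degenerate proposals where the chosen cycle edges are adjacent.
    if j != i + 1 and (j + 1) % n != i and c in graph[a] and d in graph[b]:
        # Replace edges (a,b), (c,d) by (a,c), (b,d): reverse the segment.
        cycle = cycle[:i + 1] + cycle[i + 1:j + 1][::-1] + cycle[j + 1:]
    return cycle
```

On a complete graph every proposal is accepted, so the chain walks over all Hamiltonian cycles; on dense graphs some proposals are rejected, which is why the paper's irreducibility result needs switches larger than 2.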


2013 ◽  
Vol 22 (3) ◽  
pp. 366-383 ◽  
Author(s):  
PÉTER L. ERDŐS ◽  
ZOLTÁN KIRÁLY ◽  
ISTVÁN MIKLÓS

One of the first graph-theoretical problems to be given serious attention (in the 1950s) was the decision whether a given integer sequence is equal to the degree sequence of a simple graph (or graphical, for short). One method to solve this problem is the greedy algorithm of Havel and Hakimi, which is based on the swap operation. Another, closely related question is to find a sequence of swap operations to transform one graphical realization into another of the same degree sequence. This latter problem has received particular attention in the context of rapidly mixing Markov chain approaches to uniform sampling of all possible realizations of a given degree sequence. (This becomes a matter of interest in the context of the study of large social networks, for example.) Previously there were only crude upper bounds on the shortest possible length of such swap sequences between two realizations. In this paper we develop formulae (Gallai-type identities) for the swap-distances of any two realizations of simple undirected or directed degree sequences. These identities considerably improve the known upper bounds on the swap-distances.
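The Havel–Hakimi greedy test mentioned above can be stated compactly. This sketch follows the textbook formulation rather than any particular variant used in the paper.

```python
def is_graphical(seq):
    """Havel-Hakimi test: a sequence of non-negative integers is graphical
    iff repeatedly removing the largest degree d and decrementing the next
    d largest entries never runs out of entries or drives one negative."""
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        if d > len(seq):
            return False          # not enough remaining vertices to attach to
        for k in range(d):
            seq[k] -= 1
            if seq[k] < 0:
                return False
        seq.sort(reverse=True)    # restore non-increasing order before recursing
    return True
```

For example, (3, 3, 2, 2, 2) is graphical, while (3, 1) is not: a degree-3 vertex needs three neighbours.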


2011 ◽  
Vol DMTCS Proceedings vol. AO,... (Proceedings) ◽  
Author(s):  
Christine E. Heitsch ◽  
Prasad Tetali

We consider a Markov chain Monte Carlo approach to the uniform sampling of meanders. Combinatorially, a meander $M = [A:B]$ is formed by two noncrossing perfect matchings on the same endpoints, one drawn above ($A$) and one drawn below ($B$), which together form a single closed loop. We prove that meanders are connected under appropriate pairs of balanced local moves, one operating on $A$ and the other on $B$. We also prove that the subset of meanders with a fixed $B$ is connected under a suitable local move operating on an appropriately defined meandric triple in $A$. We provide diameter bounds under such moves, tight up to a (worst case) factor of two. The mixing times of the Markov chains remain open.
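The single-closed-loop condition can be checked by alternately following upper and lower arcs from one endpoint. A minimal sketch, assuming the two matchings are given as pairs on the points $0, \ldots, 2n-1$ and are already noncrossing (noncrossingness is not verified here):

```python
def is_single_loop(upper, lower):
    """Check the defining meander condition: starting at point 0 and
    alternately traversing upper and lower arcs must trace one closed
    loop through all 2n endpoints."""
    succ_up, succ_dn = {}, {}
    for i, j in upper:
        succ_up[i], succ_up[j] = j, i
    for i, j in lower:
        succ_dn[i], succ_dn[j] = j, i
    seen, p, above = 1, succ_up[0], False  # left point 0 along its upper arc
    while p != 0:
        seen += 1
        p = succ_up[p] if above else succ_dn[p]
        above = not above
    return seen == len(succ_up)
```

If the alternating walk closes before visiting every endpoint, the arc systems decompose into several loops and $[A:B]$ is not a meander.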


Author(s):  
Topi Talvitie ◽  
Teppo Niinimäki ◽  
Mikko Koivisto

We investigate almost uniform sampling from the set of linear extensions of a given partial order. The most efficient schemes stem from Markov chains whose mixing time bounds are polynomial, yet impractically large. We show that, on instances one encounters in practice, the actual mixing times can be much smaller than the worst-case bounds, and particularly so for a novel Markov chain we put forward. We circumvent the inherent hardness of estimating standard mixing times by introducing a refined notion, which admits estimation for moderate-size partial orders. Our empirical results suggest that the Markov chain approach to sample linear extensions can be made to scale well in practice, provided that the actual mixing times can be realized by instance-sensitive upper bounds or termination rules. Examples of the latter include existing perfect simulation algorithms, whose running times in our experiments follow the actual mixing times of certain chains, albeit with significant overhead.
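A standard chain on linear extensions is the adjacent-transposition walk: swap a random adjacent pair unless the partial order forbids it. A minimal sketch, not necessarily the authors' novel chain, assuming the order relation is supplied transitively closed as a set of (smaller, larger) pairs:

```python
import random

def transposition_step(ext, order):
    """One step of the adjacent-transposition chain on linear extensions.
    `ext` is a list (a current linear extension); `order` is a set of
    pairs (a, b) meaning a < b in the partial order, transitively closed.
    An adjacent pair is swapped only when the order does not forbid it."""
    i = random.randrange(len(ext) - 1)
    a, b = ext[i], ext[i + 1]
    if (a, b) not in order:  # swapping a, b would not violate the order
        ext[i], ext[i + 1] = b, a
    return ext
```

Every state reached this way is again a linear extension, so the chain's stationary distribution is uniform over extensions once symmetry of the proposal is accounted for.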


2019 ◽  
Vol 62 (3) ◽  
pp. 577-586 ◽  
Author(s):  
Garnett P. McMillan ◽  
John B. Cannon

Purpose: This article presents a basic exploration of Bayesian inference to inform researchers unfamiliar with this type of analysis of the many advantages this readily available approach provides. Method: First, we demonstrate the development of Bayes' theorem, the cornerstone of Bayesian statistics, into an iterative process of updating priors. Working with a few assumptions, including normality and conjugacy of the prior distribution, we show how one would calculate the posterior distribution using the prior distribution and the likelihood of the parameter. Next, we move to an example from auditory research by considering the effect of sound therapy for reducing the perceived loudness of tinnitus. In this case, as in most real-world settings, we turn to Markov chain simulations because the assumptions allowing for easy calculation no longer hold. Using Markov chain Monte Carlo methods, we illustrate several analysis solutions given by a straightforward Bayesian approach. Conclusion: Bayesian methods are widely applicable and can help scientists overcome analysis problems, including how to include existing information, run interim analyses, achieve consensus through measurement, and, most importantly, interpret results correctly. Supplemental Material: https://doi.org/10.23641/asha.7822592
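The conjugate normal case described in the Method section admits a closed-form posterior update, which is why no Markov chain is needed there. A minimal sketch assuming a normal likelihood with known noise variance and a normal prior on the mean (function and parameter names are illustrative):

```python
def normal_posterior(prior_mean, prior_var, data, noise_var):
    """Conjugate update for a normal mean with known noise variance: the
    posterior is again normal, with precision equal to the sum of the
    prior precision and the total data precision."""
    n = len(data)
    post_prec = 1.0 / prior_var + n / noise_var
    post_mean = (prior_mean / prior_var + sum(data) / noise_var) / post_prec
    return post_mean, 1.0 / post_prec
```

Feeding the posterior back in as the next prior reproduces the iterative updating the article describes; when conjugacy fails, this closed form is unavailable and MCMC takes over.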

