Displaying similar documents to “Fill's algorithm for absolutely continuous stochastically monotone kernels.”

Hit and run as a unifying device

Hans C. Andersen, Persi Diaconis (2007)

Journal de la société française de statistique

Similarity:

We present a generalization of hit-and-run algorithms for Markov chain Monte Carlo problems that is ‘equivalent’ to data augmentation and auxiliary variables. These algorithms contain the Gibbs sampler and Swendsen-Wang block spin dynamics as special cases. The unification allows theorems, examples, and heuristics developed in one domain to illuminate parallel domains.
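
As a point of reference for the hit-and-run move discussed above, here is a minimal Python sketch (not code from the paper) that uses hit and run to sample uniformly from the unit ball: from the current point, draw a random direction, then pick the next point uniformly on the chord of the ball in that direction.

# Minimal illustrative hit-and-run sampler for the uniform distribution
# on the unit ball; a generic sketch, not the paper's construction.
import numpy as np

def hit_and_run_unit_ball(x, n_steps, rng=None):
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    for _ in range(n_steps):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)               # uniform random direction
        b = x @ d                            # solve ||x + t d|| = 1 for t
        disc = np.sqrt(b * b - x @ x + 1.0)
        t_lo, t_hi = -b - disc, -b + disc    # endpoints of the chord
        x = x + rng.uniform(t_lo, t_hi) * d  # uniform point on the chord
    return x

sample = hit_and_run_unit_ball(np.zeros(3), n_steps=1000)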

Asymptotic behaviour of a BIPF algorithm with an improper target

Claudio Asci, Mauro Piccioni (2009)

Kybernetika

Similarity:

The BIPF algorithm is a Markovian algorithm for simulating certain probability distributions supported by contingency tables belonging to hierarchical log-linear models. The updating steps of the algorithm depend only on the required expected marginal tables over the maximal terms of the hierarchical model. Usually these tables are marginals of a positive joint table, in which case it is well known that the algorithm is a blocking Gibbs sampler. But the algorithm makes...
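
For orientation, the deterministic iterative proportional fitting (IPF) update that BIPF-type algorithms randomize alternately rescales the table to match the required marginals; the Python sketch below illustrates the two-way case under assumed row and column targets and is not the BIPF algorithm of the paper.

# Generic iterative proportional fitting for a two-way contingency table:
# alternately rescale rows and columns to match the required marginal totals.
# Illustrative only; BIPF replaces these deterministic updates with random
# draws from the corresponding conditional distributions.
import numpy as np

def ipf(table, row_targets, col_targets, n_iter=100):
    t = np.asarray(table, dtype=float).copy()
    for _ in range(n_iter):
        t *= (row_targets / t.sum(axis=1))[:, None]   # match row marginals
        t *= (col_targets / t.sum(axis=0))[None, :]   # match column marginals
    return t

fitted = ipf(np.ones((2, 3)),
             row_targets=np.array([4.0, 2.0]),
             col_targets=np.array([3.0, 2.0, 1.0]))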

Towards effective dynamics in complex systems by Markov kernel approximation

Christof Schütte, Tobias Jahnke (2009)

ESAIM: Mathematical Modelling and Numerical Analysis

Similarity:

Many complex systems occurring in various applications share the property that the underlying Markov process remains in certain regions of the state space for long times, and that transitions between such metastable sets occur only rarely. Often the dynamics within each metastable set is of minor importance, but the transitions between these sets are crucial for the behavior and the understanding of the system. Since simulations of the original process are usually prohibitively expensive,...
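
A standard building block behind such Markov kernel approximations is the projection of a fine-grained transition matrix onto a small number of sets (e.g. metastable sets). The sketch below shows only this generic aggregation step, weighting states by an assumed stationary distribution; it is not the method developed in the paper.

# Lump a Markov transition matrix P onto a partition of the state space:
# P_eff[I, J] = sum_{i in I, j in J} pi_i * P[i, j] / pi(I).
import numpy as np

def coarse_grain(P, pi, sets):
    # sets: list of lists of state indices forming a partition
    k = len(sets)
    P_eff = np.zeros((k, k))
    for I, states_I in enumerate(sets):
        weight_I = pi[states_I].sum()
        for J, states_J in enumerate(sets):
            block = P[np.ix_(states_I, states_J)]
            P_eff[I, J] = (pi[states_I][:, None] * block).sum() / weight_I
    return P_eff

# Toy 3-state chain lumped into the sets {0, 1} and {2}.
P = np.array([[0.8, 0.15, 0.05],
              [0.2, 0.70, 0.10],
              [0.1, 0.10, 0.80]])
pi = np.array([0.4, 0.35, 0.25])   # assumed stationary weights for illustration
print(coarse_grain(P, pi, [[0, 1], [2]]))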

The behavior of a Markov network with respect to an absorbing class: the target algorithm

Giacomo Aletti (2009)

RAIRO - Operations Research

Similarity:

In this paper, we face a generalization of the problem of finding the distribution of how long it takes to reach a “target” set of states in a Markov chain. The graph problems of finding the number of paths that go from a state to a target set and of finding the n-length path connections are shown to belong to this generalization. This paper explores how the state space of the Markov chain can be reduced by collapsing together those states that behave in the same way for the purposes...
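
For a finite chain, the hitting-time distribution in question can be computed directly from the substochastic matrix restricted to the non-target states; the Python sketch below is a generic illustration of that fact, not the paper's target algorithm or its state-space reduction.

# P(T = n), where T is the first time the chain enters the target set:
# with Q = P restricted to non-target states, P(T > n | X_0 = i) = (Q^n 1)_i,
# so P(T = n) = P(T > n-1) - P(T > n).
import numpy as np

def hitting_time_distribution(P, target, start, n_max):
    non_target = [i for i in range(P.shape[0]) if i not in set(target)]
    pos = {s: k for k, s in enumerate(non_target)}
    Q = P[np.ix_(non_target, non_target)]
    surv = np.zeros(len(non_target))
    surv[pos[start]] = 1.0              # mass still outside the target set
    dist = []
    for _ in range(n_max):
        prev = surv.sum()               # P(T > n-1)
        surv = surv @ Q                 # one step without entering the target
        dist.append(prev - surv.sum())  # P(T = n)
    return np.array(dist)

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])         # state 2 is the (absorbing) target
print(hitting_time_distribution(P, target=[2], start=0, n_max=10))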

Two algorithms based on Markov chains and their application to recognition of protein coding genes in prokaryotic genomes

Małgorzata Grabińska, Paweł Błażej, Paweł Mackiewicz (2013)

Applicationes Mathematicae

Similarity:

Methods based on the theory of Markov chains are most commonly used in the recognition of protein coding sequences. However, they require large learning sets to fill all elements of the transition probability matrices describing the dependence between nucleotides in the analyzed sequences. Moreover, gene prediction is strongly influenced by nucleotide bias, measured by e.g. G+C content. In this paper we compare two methods: (i) the classical GeneMark algorithm, which uses a three-periodic...
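
The shared core of such methods is a Markov model whose transition probabilities are estimated from a learning set and then used to score candidate sequences; the first-order Python sketch below is a generic illustration of this idea, not GeneMark or the authors' second algorithm (real gene finders use higher-order, three-periodic models).

# Estimate a first-order transition probability matrix from a learning set of
# coding sequences and score a candidate sequence by its log-likelihood.
from collections import defaultdict
import math

def train_transitions(sequences, alphabet="ACGT", pseudocount=1.0):
    counts = {a: defaultdict(lambda: pseudocount) for a in alphabet}
    for seq in sequences:
        for x, y in zip(seq, seq[1:]):
            counts[x][y] += 1.0
    return {a: {b: counts[a][b] / sum(counts[a][c] for c in alphabet)
                for b in alphabet}
            for a in alphabet}

def log_likelihood(seq, trans):
    return sum(math.log(trans[x][y]) for x, y in zip(seq, seq[1:]))

coding = ["ATGGCGTAA", "ATGAAACGTTGA"]   # toy learning set
trans = train_transitions(coding)
print(log_likelihood("ATGGCA", trans))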