Displaying similar documents to “Transition probability estimates for reversible Markov chains.”

Reduction of absorbing Markov chain

Mariusz Górajski (2009)

Annales UMCS, Mathematica

Similarity:

In this paper we consider an absorbing Markov chain with a finite number of states. We focus especially on the random walk on transient states. We present a graph reduction method and prove its validity. Using this method we build algorithms which allow us to determine the distribution of the time to absorption; in particular, we compute its moments and the probability of absorption. The main idea used in the proofs is the observation of a nondecreasing sequence of stopping times. Random walk on...
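The quantities the abstract mentions (time to absorption and absorption probability) can also be obtained by the standard fundamental-matrix computation, sketched below for a toy chain; this is a textbook approach, not the graph-reduction method of the paper, and the transition matrix is purely illustrative.

```python
import numpy as np

# Toy absorbing chain: states 0 and 1 are transient, state 2 absorbing.
# Q holds transitions among transient states; R holds transient -> absorbing.
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])
R = np.array([[0.2],
              [0.4]])

# Fundamental matrix N = (I - Q)^{-1}: N[i, j] is the expected number
# of visits to transient state j when starting from transient state i.
N = np.linalg.inv(np.eye(2) - Q)

# Expected time to absorption from each transient state.
t = N @ np.ones(2)

# Absorption probabilities B = N R (a single absorbing state here,
# so each entry of B equals 1).
B = N @ R
print(t)  # expected absorption times
print(B)  # absorption probabilities
```

Higher moments of the absorption time can be derived from powers of N in the same spirit.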

Simple Markov chains

O. Adelman (1976)

Annales scientifiques de l'Université de Clermont. Mathématiques

Similarity:

Large deviations and full Edgeworth expansions for finite Markov chains with applications to the analysis of genomic sequences

Pierre Pudlo (2010)

ESAIM: Probability and Statistics

Similarity:

To establish lists of words with unexpected frequencies in long sequences, for instance in a molecular biology context, one needs to quantify the exceptionality of families of word frequencies in random sequences. To this end, we study large deviation probabilities of multidimensional word counts for Markov and hidden Markov models. More specifically, we compute local Edgeworth expansions of arbitrary degrees for multivariate partial sums of lattice-valued functionals of finite...
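To make the object of study concrete, the sketch below simulates a sequence from a hypothetical first-order Markov model on the DNA alphabet and compares an observed word count with its expected value under the model; the transition matrix and the word are illustrative assumptions, and this is plain simulation, not the Edgeworth-expansion machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
alphabet = "ACGT"
# Hypothetical first-order Markov transition matrix (rows sum to 1).
P = np.array([[0.4, 0.2, 0.2, 0.2],
              [0.3, 0.3, 0.2, 0.2],
              [0.2, 0.2, 0.3, 0.3],
              [0.1, 0.3, 0.3, 0.3]])

# Simulate a sequence of length n from the chain.
n = 100_000
seq = [0]
for _ in range(n - 1):
    seq.append(rng.choice(4, p=P[seq[-1]]))
text = "".join(alphabet[s] for s in seq)

# Observed count of the word "AT".
word = "AT"
count = sum(text[i:i + 2] == word for i in range(n - 1))

# Expected count under the model: (n - 1) * pi_A * P[A -> T],
# where pi is the stationary distribution (left eigenvector of P).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
expected = (n - 1) * pi[0] * P[0, 3]
print(count, round(expected))
```

Large deviation estimates quantify how unlikely a large gap between `count` and `expected` is; the simulation only illustrates what is being counted.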

Estimates for perturbations of discounted Markov chains on general spaces

Raúl Montes-de-Oca, Alexander Sakhanenko, Francisco Salem-Silva (2003)

Applicationes Mathematicae

Similarity:

We analyse a Markov chain and perturbations of the transition probability and the one-step cost function (possibly unbounded) defined on it. Under certain conditions of Lyapunov and Harris type, we obtain new estimates of the effects of such perturbations via an index of perturbations, defined as the difference of the total expected discounted costs between the original Markov chain and the perturbed one. We provide an example which illustrates our analysis.
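For a finite-state chain the index described above can be computed directly, since the vector of total expected discounted costs solves J = c + γPJ, i.e. J = (I − γP)⁻¹c. The sketch below evaluates this for a toy chain and a hand-made perturbation; the matrices, cost vector, and discount factor are all assumptions for illustration, not the general-space setting of the paper.

```python
import numpy as np

gamma = 0.9                       # discount factor (assumed)
c = np.array([1.0, 2.0, 0.5])     # one-step costs (toy)

# Original and perturbed transition kernels (rows sum to 1).
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
P_pert = np.array([[0.65, 0.25, 0.10],
                   [0.10, 0.80, 0.10],
                   [0.30, 0.35, 0.35]])

def discounted_cost(P, c, gamma):
    # J = sum_t gamma^t P^t c = (I - gamma P)^{-1} c
    return np.linalg.solve(np.eye(len(c)) - gamma * P, c)

# Index of perturbations: largest change in total discounted cost.
index = np.max(np.abs(discounted_cost(P, c, gamma)
                      - discounted_cost(P_pert, c, gamma)))
print(index)
```

The Lyapunov/Harris-type conditions in the paper are what allow bounds of this kind to survive on general state spaces with unbounded costs.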