On the Markov chain central limit theorem.
Jones, Galin L. (2004)
Probability Surveys [electronic only]
Similarity:
R. M. Phatarfod (1983)
Applicationes Mathematicae
Similarity:
Witold Bednorz (2013)
Applicationes Mathematicae
Similarity:
We give an improved quantitative version of the Kendall theorem. The Kendall theorem states that, under mild conditions on a probability distribution on the positive integers (i.e. a probability sequence), its renewal sequence converges. Due to a well-known property (the first-entrance, last-exit decomposition), such results are of interest in the stability theory of time-homogeneous Markov chains. In particular, this approach may be used to measure rates of convergence...
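The renewal convergence this abstract refers to can be illustrated numerically. Below is a minimal sketch: the probability sequence p is a hypothetical example (not from the paper), and the limit 1/μ is the classical Erdős–Feller–Pollard renewal theorem, of which the Kendall theorem gives a quantitative version.

```python
# Renewal sequence of a probability sequence p on the positive integers.
# For an aperiodic p, u_n -> 1/mu, where mu = sum_k k * p_k.
# The sequence p below is an illustrative assumption, not taken from the paper.
p = {1: 0.2, 2: 0.5, 3: 0.3}
mu = sum(k * pk for k, pk in p.items())  # mean of the distribution

N = 200
u = [1.0]  # u_0 = 1 by convention
for n in range(1, N + 1):
    # Renewal recursion: u_n = sum_{k=1}^{n} p_k * u_{n-k}
    u.append(sum(p.get(k, 0.0) * u[n - k] for k in range(1, n + 1)))

print(abs(u[N] - 1 / mu))  # small: u_n has settled near 1/mu
```

The quantitative content of the Kendall theorem concerns how fast this difference decays, which is what makes it useful for bounding convergence rates of Markov chains.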
Thomas Kaijser
Similarity:
Consider a Hidden Markov Model (HMM) such that both the state space and the observation space are complete, separable metric spaces, and for which both the transition probability function (tr.pr.f.) determining the hidden Markov chain of the HMM and the tr.pr.f. determining the observation sequence of the HMM have densities. Such HMMs are called fully dominated. In this paper we consider a subclass of fully dominated HMMs, which we call regular. A fully dominated,...
Zbyněk Šidák (1976)
Aplikace matematiky
Similarity:
Keilson, Julian (1998)
Journal of Applied Mathematics and Stochastic Analysis
Similarity:
E. Nummelin, R. L. Tweedie (1976)
Annales scientifiques de l'Université de Clermont. Mathématiques
Similarity:
Kalashnikov, Vladimir V. (1994)
Journal of Applied Mathematics and Stochastic Analysis
Similarity:
Karl Gustafson, Jeffrey J. Hunter (2016)
Special Matrices
Similarity:
We present a new fundamental intuition for why the Kemeny constant of a Markov chain is indeed a constant, i.e. independent of the initial state. This new perspective has interesting further implications.
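The constancy in question can be checked numerically. The sketch below uses a hypothetical 3-state chain (the matrix P is an illustrative assumption, not from the paper): it computes mean first-passage times m_ij and verifies that K_i = Σ_j π_j m_ij takes the same value for every start state i.

```python
import numpy as np

# Hypothetical 3-state ergodic transition matrix (illustration only).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
n = P.shape[0]

# Stationary distribution pi: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Mean first-passage times M[i, j] (with M[j, j] = 0): for each target j,
# solve m_ij = 1 + sum_{k != j} P_ik m_kj over the states i != j.
M = np.zeros((n, n))
for j in range(n):
    idx = [i for i in range(n) if i != j]
    Q = P[np.ix_(idx, idx)]  # sub-chain restricted to states other than j
    M[idx, j] = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))

# Kemeny "constant": K_i = sum_j pi_j * m_ij, one value per start state i.
K = M @ pi
print(K)  # all entries coincide, regardless of the start state
```

Whatever ergodic P is substituted, the entries of K agree, which is exactly the fact the paper offers a new intuition for.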
Andrzej Nowak (1998)
Applicationes Mathematicae
Similarity:
We provide a generalization of Ueno's inequality for n-step transition probabilities of Markov chains in a general state space. Our result is relevant to the study of adaptive control problems and approximation problems in the theory of discrete-time Markov decision processes and stochastic games.