Simple Markov chains
O. Adelman (1976)
Annales scientifiques de l'Université de Clermont. Mathématiques
Similarity:
Tomasz R. Bielecki, Jacek Jakubowski, Mariusz Niewęgłowski (2015)
Banach Center Publications
Similarity:
In this paper we study finite state conditional Markov chains (CMCs). We give two examples of CMCs, one which admits an intensity and one which does not. We also give a sufficient condition under which a doubly stochastic Markov chain is a CMC. In addition, we provide a method for constructing conditional Markov chains via a change of measure.
Laurent Mazliak (2007)
Revue d'histoire des mathématiques
Similarity:
We present the letters sent by Wolfgang Doeblin to Bohuslav Hostinský between 1936 and 1938. They concern some aspects of the general theory of Markov chains and the solutions of the Chapman-Kolmogorov equation that Doeblin was then establishing for his PhD thesis.
Jeffrey J. Hunter (2016)
Special Matrices
Similarity:
This article describes an accurate procedure for computing the mean first passage times of a finite irreducible Markov chain and a Markov renewal process. The method is a refinement of the procedure of Kohlas (Zeitschrift für Operations Research, 30, 197–207, 1986). The technique is numerically stable in that it does not involve subtractions. Algebraic expressions for the special cases of one, two, three and four states are derived. A consequence of the procedure is that the stationary distribution of the...
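For orientation, the following is a minimal Python sketch of the classical fundamental-matrix route to mean first passage times (the Kemeny–Snell formula m_ij = (z_jj − z_ij)/π_j, with mean return times 1/π_j on the diagonal). It is not the subtraction-free refinement described in the article, and the transition matrix P is a made-up example.

    import numpy as np

    def mean_first_passage_times(P):
        # Classical fundamental-matrix formula (Kemeny & Snell), shown only for
        # comparison; this is NOT the subtraction-free procedure of the article.
        n = P.shape[0]
        # Stationary distribution: left eigenvector of P for eigenvalue 1.
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        pi = pi / pi.sum()
        # Fundamental matrix Z = (I - P + 1 pi^T)^{-1}.
        Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
        # M[i, j] = (Z[j, j] - Z[i, j]) / pi[j]; the diagonal holds mean return times.
        M = (np.diag(Z)[None, :] - Z) / pi[None, :]
        np.fill_diagonal(M, 1.0 / pi)
        return M

    # Made-up three-state example.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.4, 0.5]])
    print(mean_first_passage_times(P))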
Zbyněk Šidák (1976)
Aplikace matematiky
Similarity:
Raúl Montes-de-Oca, Alexander Sakhanenko, Francisco Salem-Silva (2003)
Applicationes Mathematicae
Similarity:
We analyse a Markov chain and perturbations of the transition probability and the one-step cost function (possibly unbounded) defined on it. Under certain conditions of Lyapunov and Harris type, we obtain new estimates of the effects of such perturbations via an index of perturbations, defined as the difference of the total expected discounted costs between the original Markov chain and the perturbed one. We provide an example which illustrates our analysis.
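In the simplest finite-state, bounded-cost setting, the quantity underlying this index can be written out directly: the total expected discounted cost of a chain with transition matrix P, cost vector c and discount factor beta is V = (I − beta·P)^{-1} c. The sketch below, with hypothetical matrices P_orig, P_pert and cost vector c, computes the difference of these costs between an original and a perturbed chain; the paper's setting, with possibly unbounded costs and Lyapunov/Harris conditions, is far more general.

    import numpy as np

    def total_discounted_cost(P, c, beta):
        # V[i] = E_i[ sum_{t>=0} beta^t c(X_t) ], i.e. V = (I - beta P)^{-1} c.
        return np.linalg.solve(np.eye(P.shape[0]) - beta * P, c)

    # Hypothetical original and perturbed transition matrices and cost vector.
    P_orig = np.array([[0.90, 0.10],
                       [0.20, 0.80]])
    P_pert = np.array([[0.85, 0.15],
                       [0.25, 0.75]])
    c = np.array([1.0, 4.0])
    beta = 0.95

    # Index of perturbation: difference of the total expected discounted costs.
    print(total_discounted_cost(P_orig, c, beta) - total_discounted_cost(P_pert, c, beta))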
Franco Giannessi (2002)
RAIRO - Operations Research - Recherche Opérationnelle
Similarity:
A problem, arising from applications to networks, is posed about the principal minors of the matrix of transition probabilities of a Markov chain.
Mariusz Górajski (2009)
Annales UMCS, Mathematica
Similarity:
In this paper we consider an absorbing Markov chain with a finite number of states. We focus especially on the random walk on the transient states. We present a graph reduction method and prove its validity. Using this method we build algorithms which allow us to determine the distribution of the time to absorption; in particular, we compute its moments and the probability of absorption. The main idea used in the proofs is to observe a nondecreasing sequence of stopping times. Random walk on...
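As a point of reference, here is a minimal sketch of the classical fundamental-matrix computations for an absorbing chain (absorption probabilities, mean and variance of the time to absorption), under the standard block decomposition of the transition matrix into a transient block Q and a transient-to-absorbing block R. The matrices below are made up for illustration, and the sketch does not implement the graph reduction method developed in the paper.

    import numpy as np

    def absorption_summary(Q, R):
        # Classical fundamental-matrix computations for an absorbing chain,
        # with transient block Q and transient-to-absorbing block R;
        # this is NOT the graph reduction method of the paper.
        n = Q.shape[0]
        N = np.linalg.inv(np.eye(n) - Q)      # fundamental matrix
        B = N @ R                             # B[i, k]: prob. of absorption in absorbing state k
        t = N @ np.ones(n)                    # mean time to absorption from each transient state
        var = (2 * N - np.eye(n)) @ t - t**2  # variance of the time to absorption
        return B, t, var

    # Made-up chain: two transient states, one absorbing state.
    Q = np.array([[0.4, 0.3],
                  [0.2, 0.5]])
    R = np.array([[0.3],
                  [0.3]])
    print(absorption_summary(Q, R))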
Thomas Kaijser
Similarity:
Consider a Hidden Markov Model (HMM) such that both the state space and the observation space are complete, separable metric spaces, and for which both the transition probability function (tr.pr.f.) determining the hidden Markov chain of the HMM and the tr.pr.f. determining the observation sequence of the HMM have densities. Such HMMs are called fully dominated. In this paper we consider a subclass of fully dominated HMMs which we call regular. A fully dominated,...
Markov, A.A. (2006)
Journal Électronique d'Histoire des Probabilités et de la Statistique [electronic only]
Similarity: