Edge occupation measure for a reversible Markov chain.
We answer some questions raised by Gantert, Löwe and Steif (Ann. Inst. Henri Poincaré Probab. Stat. 41 (2005) 767–780) concerning “signed” voter models on locally finite graphs. These are voter-model-like processes, with the difference that the edges are considered to be either positive or negative. If an edge between a site x and a site y is negative (respectively positive), the site y will contribute towards the flip rate of x if and only if the two current spin values are equal (respectively opposed)....
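As an illustration of the flip rule just described (and not code from the paper), here is a minimal sketch in which spins take values in {+1, −1}, edge signs are ±1, and the graph representation and the name `flip_rate` are assumptions made only for this example.

```python
def flip_rate(x, spin, neighbors, sign):
    """Rate at which site x flips its current spin in a signed voter model.

    A neighbour y contributes 1 to the rate of x if either
      - the edge {x, y} is positive and spin[y] != spin[x] (usual voter rule), or
      - the edge {x, y} is negative and spin[y] == spin[x] (signed rule).
    """
    rate = 0
    for y in neighbors[x]:
        agree = (spin[x] == spin[y])
        if sign[frozenset((x, y))] == +1:
            rate += 0 if agree else 1  # positive edge: disagreement pushes a flip
        else:
            rate += 1 if agree else 0  # negative edge: agreement pushes a flip
    return rate

# Toy example: path 0 - 1 - 2 with edge {1, 2} negative and all spins equal to +1.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
sign = {frozenset((0, 1)): +1, frozenset((1, 2)): -1}
spin = {0: +1, 1: +1, 2: +1}
print(flip_rate(1, spin, neighbors, sign))  # -> 1 (only the negative edge contributes)
```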
We provide an extension of topological methods applied to a certain class of non-Feller models, which we call Quasi-Feller. We give conditions ensuring the existence of a stationary distribution. Finally, we strengthen these conditions to obtain positive Harris recurrence, which in turn implies a strong law of large numbers.
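For concreteness, the strong law of large numbers referred to here is presumably of the usual ergodic form: under positive Harris recurrence with (assumed unique) stationary distribution π and a π-integrable function f, it reads as follows.

```latex
% Assumed standard form of the SLLN implied by positive Harris recurrence.
\[
  \frac{1}{n}\sum_{k=1}^{n} f(X_k) \;\xrightarrow[n\to\infty]{}\; \int f \, d\pi
  \qquad \text{almost surely, for every initial state.}
\]
```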
We analyse a Markov chain together with perturbations of its transition probability and of its one-step cost function (possibly unbounded). Under certain conditions of Lyapunov and Harris type, we obtain new estimates of the effects of such perturbations via an index of perturbations, defined as the difference of the total expected discounted costs between the original Markov chain and the perturbed one. We provide an example that illustrates our analysis.
We extend previous results of the same authors ([11]) on the effects of perturbations in the transition probability of a Markov cost chain to discounted Markov control processes. Assuming that conditions of Lyapunov and Harris type hold for each stationary policy, we obtain upper bounds for the index of perturbations, defined as the difference of the total expected discounted costs for the original Markov control process and the perturbed one. We present examples that satisfy our conditions.
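In both perturbation abstracts above, the index of perturbations could plausibly be written as follows, in notation that is assumed here rather than taken from the papers: one-step cost c, discount factor α in (0, 1), tildes marking the perturbed data, and π a stationary policy in the controlled case.

```latex
% Assumed notation: V_\alpha is the total expected discounted cost of the
% original model and \tilde V_\alpha that of the perturbed one; in the
% controlled case both also depend on a stationary policy \pi.
\[
  V_\alpha(x) = E_x\!\left[\sum_{n=0}^{\infty} \alpha^{n}\, c(X_n)\right],
  \qquad
  \mathcal{I}(x) = \bigl|\,V_\alpha(x) - \tilde V_\alpha(x)\,\bigr|,
  \qquad
  \mathcal{I}(\pi, x) = \bigl|\,V_\alpha(\pi, x) - \tilde V_\alpha(\pi, x)\,\bigr|.
\]
```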
We present two data-driven procedures to estimate the transition density of a homogeneous Markov chain. The first yields a piecewise constant estimator on a suitable random partition. Using a Hellinger-type loss, we establish non-asymptotic risk bounds for our estimator when the square root of the transition density belongs to possibly inhomogeneous Besov spaces with a possibly small regularity index. Some simulations are also provided. The second procedure is of theoretical interest and leads...
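To fix ideas, here is a minimal sketch of a piecewise constant (histogram-type) estimator of a transition density on [0, 1]. It is not the paper's data-driven procedure: the partition below is a fixed regular grid rather than a random one, and the function name and the toy chain are assumptions made only for illustration.

```python
import numpy as np

def histogram_transition_density(chain, n_bins=10):
    """Piecewise constant estimate of the transition density of a chain on [0, 1].

    Counts transitions between cells of a regular grid and normalises each row
    so that it integrates to one in the arrival variable.  (The paper's first
    procedure uses a data-driven random partition; the regular grid is only
    for illustration.)
    """
    x, y = np.asarray(chain[:-1]), np.asarray(chain[1:])       # (X_n, X_{n+1}) pairs
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    counts, _, _ = np.histogram2d(x, y, bins=[edges, edges])
    row_mass = counts.sum(axis=1, keepdims=True)
    row_mass[row_mass == 0] = 1.0                               # avoid division by zero
    bin_width = 1.0 / n_bins
    return counts / row_mass / bin_width                        # density in the arrival variable

# Toy chain: an autoregressive-type recursion clamped into [0, 1].
rng = np.random.default_rng(0)
chain = [0.5]
for _ in range(10_000):
    chain.append(min(max(0.5 * chain[-1] + 0.25 + 0.1 * rng.standard_normal(), 0.0), 1.0))
print(histogram_transition_density(chain, n_bins=10).shape)    # (10, 10)
```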