
Entropy of probability kernels from the backward tail boundary

Tim Austin (2015)

Studia Mathematica

A number of recent works have sought to generalize the Kolmogorov-Sinai entropy of probability-preserving transformations to the setting of Markov operators acting on the integrable functions on a probability space (X,μ). These works have culminated in a proof by Downarowicz and Frej that various competing definitions all coincide, and that the resulting quantity is uniquely characterized by certain abstract properties. On the other hand, Makarov has shown that this 'operator...

Ergodicity of a certain class of non Feller models: applications to ARCH and Markov switching models

Jean-Gabriel Attali (2004)

ESAIM: Probability and Statistics

We provide an extension of topological methods to a certain class of non-Feller models, which we call Quasi-Feller. We give conditions ensuring the existence of a stationary distribution. Finally, we strengthen these conditions to obtain positive Harris recurrence, which in turn implies a strong law of large numbers.

Estimates for perturbations of general discounted Markov control chains

Raúl Montes-de-Oca, Alexander Sakhanenko, Francisco Salem-Silva (2003)

Applicationes Mathematicae

We extend previous results of the same authors ([11]) on the effects of perturbations in the transition probability of a Markov cost chain for discounted Markov control processes. Assuming that conditions of Lyapunov and Harris type hold for each stationary policy, we obtain upper bounds on the perturbation index, defined as the difference between the total expected discounted costs of the original Markov control process and the perturbed one. We present examples that satisfy our conditions.
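
The quantities compared in this abstract can be made concrete for a finite chain under a fixed stationary policy: the total expected discounted cost solves the linear system V = c + γPV, and the perturbation index measures how far the cost vector moves when the transition law is perturbed. A minimal sketch, assuming a hypothetical two-state chain with illustrative numbers (none of these values come from the paper):

```python
import numpy as np

def discounted_cost(P, c, gamma):
    """Total expected discounted cost V = c + gamma * P @ V,
    solved directly as the linear system (I - gamma * P) V = c."""
    n = len(c)
    return np.linalg.solve(np.eye(n) - gamma * P, c)

# Hypothetical 2-state cost chain under a fixed stationary policy.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
c = np.array([1.0, 2.0])
gamma = 0.95

V = discounted_cost(P, c, gamma)

# A perturbed transition law and the resulting perturbation index,
# i.e. the largest change in total expected discounted cost.
P_tilde = np.array([[0.85, 0.15],
                    [0.25, 0.75]])
V_tilde = discounted_cost(P_tilde, c, gamma)
index = np.max(np.abs(V - V_tilde))
```

The paper's contribution is bounding such an index from above via Lyapunov- and Harris-type conditions, rather than computing it exactly as this toy sketch does.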

Estimation and control in finite Markov decision processes with the average reward criterion

Rolando Cavazos-Cadena, Raúl Montes-de-Oca (2004)

Applicationes Mathematicae

This work concerns Markov decision chains with finite state and action sets. The transition law satisfies the simultaneous Doeblin condition but is unknown to the controller, and the problem of determining an optimal adaptive policy with respect to the average reward criterion is addressed. A subset of policies is identified so that, when the system evolves under a policy in that class, the frequency estimators of the transition law are consistent on an essential set of admissible state-action pairs,...
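
The frequency estimators mentioned in the abstract count observed transitions and normalize row by row. A minimal sketch of this estimator for a finite chain; the two-state example and all numbers are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def estimate_transition_law(states, n_states):
    """Frequency estimator of a Markov transition matrix:
    count observed transitions i -> j, then normalize each row."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(states[:-1], states[1:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows never visited get a uniform placeholder estimate.
    return np.where(row_sums > 0,
                    counts / np.maximum(row_sums, 1),
                    1.0 / n_states)

# Simulate a hypothetical 2-state chain and estimate its law.
rng = np.random.default_rng(0)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
x = [0]
for _ in range(20000):
    x.append(rng.choice(2, p=P[x[-1]]))

P_hat = estimate_transition_law(x, 2)
```

Consistency of such estimators requires that the policy visit the relevant state-action pairs often enough, which is exactly the issue the paper addresses by restricting to a suitable subset of policies.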

Evaluating default priors with a generalization of Eaton’s Markov chain

Brian P. Shea, Galin L. Jones (2014)

Annales de l'I.H.P. Probabilités et statistiques

We consider evaluating improper priors in a formal Bayes setting according to the consequences of their use. Let 𝛷 be a class of functions on the parameter space and consider estimating elements of 𝛷 under quadratic loss. If the formal Bayes estimator of every function in 𝛷 is admissible, then the prior is strongly admissible with respect to 𝛷. Eaton’s method for establishing strong admissibility is based on studying the stability properties of a particular Markov chain associated with the inferential...
