### A. A. Markov, his chain probabilities, and linguistic statistics


We prove that a planar random walk with bounded increments and mean zero which is conditioned to stay in a cone converges weakly to the corresponding Brownian meander if and only if the tail distribution of the exit time from the cone is regularly varying. This condition is satisfied in many natural examples.
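The tail condition can be probed numerically. As a hedged illustration (the function and the quadrant/cone setup below are mine, not the paper's), a Monte Carlo estimate of the survival probability P(T > n) for a walk with bounded, mean-zero increments conditioned to stay in the positive quadrant decays roughly like a power of n when the tail is regularly varying:

```python
import random

def survival_probability(n_steps, n_trials, seed=0):
    """Monte Carlo estimate of P(T > n): the probability that a planar
    random walk with bounded, mean-zero increments (here +/-1 per
    coordinate) stays in the positive quadrant -- a cone -- for n_steps
    steps, started from (1, 1)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        x, y = 1, 1
        stayed = True
        for _ in range(n_steps):
            x += rng.choice((-1, 1))
            y += rng.choice((-1, 1))
            if x <= 0 or y <= 0:   # the walk has exited the cone
                stayed = False
                break
        if stayed:
            survived += 1
    return survived / n_trials
```

Comparing estimates at n and at 2n on a log scale gives a crude check of the power-law (regularly varying) decay of P(T > n).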

Stettner [Bull. Polish Acad. Sci. Math. 42 (1994)] considered the asymptotic stability of Markov-Feller chains, provided the chain's transition probabilities converge weakly to an invariant probability measure, uniformly with respect to the initial state on compact sets. We extend those results to the setting of Polish spaces and relax the original assumptions. Finally, we present a class of Markov-Feller chains with a linear state space model which...
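As an illustration (my sketch, not the paper's construction), the simplest linear state space model is a scalar AR(1) chain; for |a| < 1 its transition probabilities converge weakly to a Gaussian invariant measure, uniformly over initial states in compact sets:

```python
import random

def simulate_ar1(a, n, x0, seed=0):
    """Run n steps of the linear Markov chain x_{t+1} = a*x_t + xi_t with
    i.i.d. standard normal innovations, started at x0.  For |a| < 1 the
    n-step transition laws converge weakly to the invariant
    N(0, 1/(1 - a^2)) distribution, and the influence of x0 decays like
    a^n, uniformly on compact sets of initial states."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n):
        x = a * x + rng.gauss(0.0, 1.0)
    return x
```

After a moderate number of steps, endpoints started from different x0 in a compact set are statistically indistinguishable, which is the uniform weak convergence in play above.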

Using the natural extensions for the Rosen maps, we give an infinite-order-chain representation of the sequence of the incomplete quotients of the Rosen fractions. Together with the ergodic behaviour of a certain homogeneous random system with complete connections, this allows us to solve a variant of the Gauss-Kuzmin problem for the above fraction expansion.

We provide a generalization of Ueno's inequality for n-step transition probabilities of Markov chains in a general state space. Our result is relevant to the study of adaptive control problems and approximation problems in the theory of discrete-time Markov decision processes and stochastic games.

We present a stochastic model which yields a stationary Markov process whose invariant distribution is max-stable with respect to a geometrically distributed sample size. In particular, we obtain the autoregressive Pareto processes and the autoregressive logistic processes introduced earlier by Yeh et al.
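The geometric max-stability underlying these processes can be checked numerically. A minimal sketch (the helper below is illustrative, not taken from Yeh et al.): if N is geometric with parameter p and X_1, ..., X_N are i.i.d. standard logistic, then max_i X_i - log(1/p) is again standard logistic:

```python
import math
import random

def geometric_logistic_max(p, rng):
    """Draw N ~ Geometric(p) (support 1, 2, ...), take the maximum of N
    i.i.d. standard logistic variables, and recentre by log(1/p).
    Geometric max-stability of the logistic law says the result is again
    standard logistic."""
    n = 1
    while rng.random() > p:        # N ~ Geometric(p): P(N = k) = p(1-p)^{k-1}
        n += 1
    m = max(math.log(u / (1.0 - u))            # inverse-CDF logistic draws
            for u in (rng.random() for _ in range(n)))
    return m - math.log(1.0 / p)
```

A quick Monte Carlo check: the recentred maxima should have mean 0 and median 0, like the standard logistic itself.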

In this paper we study the almost sure conditional central limit theorem in its functional form for a class of random variables satisfying a projective criterion. Applications to strongly mixing processes and nonirreducible Markov chains are given. The proofs are based on the normal approximation of doubly indexed martingale-like sequences, an approach of independent interest.

We study the adaptive control problem for discrete-time Markov control processes with Borel state and action spaces and possibly unbounded one-stage costs. The processes are given by the recursive equations $x_{t+1}=F(x_t,a_t,\xi_t)$, $t=0,1,\dots$, with i.i.d. $\mathbb{R}^k$-valued random vectors $\xi_t$ whose density $\rho$ is unknown. Assuming observability of $\xi_t$, we propose a statistical estimation procedure for $\rho$ that allows us to prove discounted asymptotic optimality of two types of adaptive policies used earlier for processes with bounded costs.