A Bayesian model for binary Markov chains.
A system composed of a set of independent and identical parallel units is considered, and its resistance (survival) to an increasing load is modelled by a counting-process model within the framework of statistical survival analysis. The objective is to estimate the (nonparametric) hazard function of the distribution of the loads breaking the units of the system (i.e. their breaking strengths), to derive the large-sample properties of the estimator, and to propose a goodness-of-fit test. We also...
2000 Mathematics Subject Classification: 60J80. In this work, the problem of the limiting behaviour of an irreducible multitype Galton-Watson branching process with period d greater than 1 is considered. More specifically, the almost sure convergence of some linear functionals depending on d consecutive generations is studied under the hypothesis of non-extinction. As a consequence, the main parameters of the model are given a convenient interpretation from a practical point of view. For a better understanding...
In this paper, we investigate a nonparametric approach that provides a recursive estimator of the transition density of a piecewise-deterministic Markov process from a single observation of the path over a long time interval. In this framework, we do not observe a Markov chain with the transition kernel of interest. Fortunately, the transition density of interest can be written as the ratio of the invariant distributions of two embedded chains of the process. Our method consists in estimating these invariant...
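The ratio construction described above can be illustrated with a toy sketch. Below, the two embedded chains are replaced by hypothetical i.i.d. samples whose densities are known, and the ratio of two Gaussian kernel density estimates is formed; the sample models, bandwidth, and function names are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

def kde(samples, x, h):
    """Gaussian kernel density estimate of the samples, evaluated at points x."""
    u = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the two embedded chains: i.i.d. draws from
# N(0, 1) and N(0, 2), so the true invariant densities (and their ratio)
# are known in closed form.
chain_num = rng.normal(0.0, 1.0, size=5000)
chain_den = rng.normal(0.0, np.sqrt(2.0), size=5000)

x = np.linspace(-1.0, 1.0, 5)
h = 0.3
# Plug-in ratio estimator of the two invariant densities.
ratio = kde(chain_num, x, h) / kde(chain_den, x, h)
```

At x = 0 the true ratio is (1/sqrt(2π)) / (1/sqrt(4π)) = sqrt(2), which the plug-in estimate recovers up to kernel smoothing bias and sampling noise.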
I propose a nonlinear Bayesian methodology to estimate latent states that are only partially observed in financial markets. The distinguishing feature of my methodology is that the recursive Bayesian estimation can be represented by a deterministic partial differential equation (PDE) (or, in the general case, an evolution equation) parameterized by the underlying observation path. Unlike the traditional stochastic filtering equation, this dynamical representation depends continuously on the...
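Recursive Bayesian estimation of a latent state can be sketched in its simplest discrete-time form: a grid-based predict/update filter. The state and observation models below (Gaussian random-walk transition, additive Gaussian noise) are hypothetical illustrations, not the methodology of the abstract, which works with a PDE representation in continuous time.

```python
import numpy as np

# Discretise the scalar state space on a grid and carry the posterior
# density as a vector of grid values.
grid = np.linspace(-5.0, 5.0, 201)
dx = grid[1] - grid[0]

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def bayes_step(prior, y, trans_sd=0.5, obs_sd=1.0):
    """One predict/update cycle of the recursive Bayes filter on the grid."""
    # Predict: convolve the prior with the (assumed) transition kernel.
    pred = np.array([np.sum(prior * gauss(g, grid, trans_sd)) * dx for g in grid])
    # Update: multiply by the observation likelihood, then renormalise.
    post = pred * gauss(y, grid, obs_sd)
    return post / (post.sum() * dx)

rng = np.random.default_rng(3)
belief = gauss(grid, 0.0, 1.0)   # initial prior
x_true = 1.0                     # latent state (held fixed for illustration)
for _ in range(30):
    y = x_true + rng.normal(scale=1.0)   # noisy observation
    belief = bayes_step(belief, y)

x_hat = np.sum(grid * belief) * dx       # posterior mean estimate
```

After a few dozen observations the posterior mean settles near the latent state; the continuous-time analogue replaces this two-step recursion with an evolution equation driven by the observation path.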
We study the adaptive control problem for discrete-time Markov control processes with Borel state and action spaces and possibly unbounded one-stage costs. The processes are given by recurrent equations with i.i.d. -valued random vectors whose density is unknown. Assuming observability of , we propose a procedure of statistical estimation of that allows us to prove the discounted asymptotic optimality of two types of adaptive policies used earlier for processes with bounded costs.
We construct a kernel estimator of the Markovian transition operator, viewed as an endomorphism on L¹, for some discrete-time, continuous-state Markov processes satisfying certain additional regularity conditions. The main result establishes the asymptotic normality of the constructed kernel estimator.
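A standard finite-dimensional instance of such a kernel estimator is the bivariate Nadaraya-Watson estimate of a transition density, built from the consecutive pairs (X_t, X_{t+1}) of a single observed path. The sketch below is a generic textbook construction under an assumed AR(1) model, not the operator-level estimator of the abstract; the chain, bandwidth, and function names are illustrative.

```python
import numpy as np

def transition_kde(path, x, y, h):
    """Nadaraya-Watson estimate of the transition density f(y | x),
    built from consecutive pairs (X_t, X_{t+1}) of one observed path."""
    X, Y = path[:-1], path[1:]
    kx = np.exp(-0.5 * ((x - X) / h) ** 2)                              # weights in x
    ky = np.exp(-0.5 * ((y - Y) / h) ** 2) / (h * np.sqrt(2 * np.pi))  # kernel in y
    return (kx * ky).sum() / kx.sum()

rng = np.random.default_rng(1)
# AR(1) chain X_{t+1} = 0.5 X_t + eps_t, eps_t ~ N(0, 1): the true
# transition density f(. | x) is N(0.5 x, 1).
n, phi = 20000, 0.5
path = np.empty(n)
path[0] = 0.0
for t in range(n - 1):
    path[t + 1] = phi * path[t] + rng.normal()

# Estimate f(0 | 0); the true value is the N(0, 1) density at 0, about 0.3989.
est = transition_kde(path, 0.0, 0.0, h=0.25)
```

The asymptotic normality result of the abstract concerns exactly this kind of estimator, lifted to the level of the transition operator on L¹.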
We consider a diffusion process Xt smoothed with (small) sampling parameter ε. As in Berzin, León and Ortega (2001), we consider a kernel estimate with window h(ε) of a function α of its variance. In order to exhibit global tests of hypothesis, we derive here central limit theorems for the Lp deviations such as
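A common entry point to variance estimation for a discretely observed diffusion is a kernel-weighted average of normalised squared increments. The sketch below estimates a constant squared diffusion coefficient this way; it is an illustrative stand-in for the smoothed-increment estimator of the abstract, and the constants, seed, and function names are assumptions.

```python
import numpy as np

def variance_kde(path, dt, x, h):
    """Kernel estimate of the squared diffusion coefficient sigma^2(x)
    from normalised squared increments of a discretely observed path."""
    X = path[:-1]
    inc2 = np.diff(path) ** 2 / dt              # E[inc2 | X_t = x] ~ sigma^2(x)
    w = np.exp(-0.5 * ((x - X) / h) ** 2)       # Gaussian kernel weights around x
    return (w * inc2).sum() / w.sum()

rng = np.random.default_rng(2)
n, dt = 200_000, 1e-3
# dX_t = sigma dW_t with constant sigma = 0.8, so sigma^2(x) = 0.64 everywhere.
sigma = 0.8
path = np.concatenate([[0.0], np.cumsum(sigma * np.sqrt(dt) * rng.normal(size=n))])

est = variance_kde(path, dt, 0.0, h=0.2)
```

Central limit theorems for the Lp deviations of such estimates are what turn this pointwise construction into a global goodness-of-fit test.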