Adaptive density estimation in a mixing framework
2000 Mathematics Subject Classification: 62G07, 60F10. In this paper we prove large and moderate deviation principles for the recursive kernel estimator of a probability density function and its partial derivatives. Unlike the density estimator, the derivative estimators exhibit a quadratic behaviour not only at the moderate deviations scale but also at the large deviations one. We provide results both for the pointwise and the uniform deviations.
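A recursive kernel density estimator, in contrast with the classical Parzen–Rosenblatt one, uses a separate bandwidth for each observation, so the estimate can be updated online as data arrive. A minimal sketch of a Wolverton–Wagner-type construction (the Gaussian kernel and the i^(-1/5) bandwidth sequence below are illustrative choices, not taken from the paper):

```python
import numpy as np

def recursive_kde(samples, x, bandwidths=None):
    """Recursive (Wolverton-Wagner-type) kernel density estimate at point x.

    f_n(x) = (1/n) * sum_i K((x - X_i) / h_i) / h_i,
    with a Gaussian kernel K and h_i = i**(-1/5) unless bandwidths are given.
    Because each term depends only on (X_i, h_i), the sum can be updated
    recursively when a new observation arrives.
    """
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    if bandwidths is None:
        bandwidths = np.arange(1, n + 1) ** (-0.2)  # classical i^(-1/5) decay
    u = (x - samples) / bandwidths
    kern = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel values
    return float(np.mean(kern / bandwidths))
```

With standard normal data the estimate at 0 should be close to the true density value 1/sqrt(2*pi) ≈ 0.399.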
The problem of nonparametric function fitting using the complete orthogonal system of trigonometric functions, k = 0, 1, 2, ..., for the observation model, i = 1, ..., n, is considered, where the errors are uncorrelated random variables with zero mean and finite variance, and the observation points, i = 1, ..., n, are equidistant. Conditions for convergence of the mean-square prediction error, the integrated mean-square error, and the pointwise mean-square error of the estimator for f ∈ C[0,2π] and...
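A minimal sketch of regression on a truncated trigonometric system over [0, 2π], assuming an equidistant design and a least-squares fit of the first K frequencies (the truncation level K and the plain least-squares formulation are illustrative assumptions, not the paper's exact estimator):

```python
import numpy as np

def trig_series_fit(x, y, K):
    """Least-squares fit of a truncated trigonometric series on [0, 2*pi].

    Model: f(t) ~ a0 + sum_{k=1}^{K} (a_k cos(k t) + b_k sin(k t)),
    estimated from noisy observations y_i = f(x_i) + eps_i at points x_i.
    Returns a callable evaluating the fitted series.
    """
    def design(t):
        # Design matrix: constant column, then cos(k t), sin(k t) per frequency.
        cols = [np.ones_like(t)]
        for k in range(1, K + 1):
            cols.append(np.cos(k * t))
            cols.append(np.sin(k * t))
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(design(x), y, rcond=None)
    return lambda t: design(t) @ coef
```

On an equidistant grid the trigonometric columns are orthogonal, so a noise-free signal inside the model is recovered exactly.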
We consider a lacunary wavelet series function observed with an additive Brownian motion. Such functions are statistically characterized by two parameters. The first parameter governs the lacunarity of the wavelet coefficients, while the second governs their intensity. In this paper, we establish the local asymptotic normality (LAN) of the model with respect to this pair of parameters. This enables us to prove the optimality of an estimator for the lacunarity parameter, and to build optimal...
In this work, we introduce a local linear estimator of the conditional mode for a real random response variable subject to left-truncation by another random variable, where the covariate takes values in an infinite-dimensional space. We first establish the pointwise and uniform almost sure convergence, with rates, of the conditional density estimator. We then deduce the strong consistency of the resulting conditional mode estimator. Finally, we illustrate the superior performance of our method...
We construct a data-driven projection density estimator for continuous-time processes. This estimator reaches superoptimal rates over a class F0 of densities that is dense in the family of all possible densities, and a "reasonable" rate elsewhere. The class F0 may be chosen in advance by the analyst. The results apply to Rd-valued processes and to N-valued processes. In the particular case where a square-integrable local time exists, it is shown that our estimator is strictly better than the local...
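To illustrate the basic projection idea (here for i.i.d. samples on [0, 1] with a cosine basis, a deliberate simplification of the continuous-time setting of the abstract), the density is expanded on an orthonormal system and the coefficients are replaced by empirical means:

```python
import numpy as np

def projection_density(samples, K):
    """Projection density estimate on [0, 1] in the cosine basis.

    f_hat(x) = 1 + sum_{k=1}^{K} c_k * sqrt(2) * cos(k pi x),
    where c_k is the empirical mean of sqrt(2) * cos(k pi X_i).
    """
    samples = np.asarray(samples, dtype=float)
    coefs = [np.mean(np.sqrt(2) * np.cos(k * np.pi * samples))
             for k in range(1, K + 1)]

    def f_hat(x):
        # Constant term 1 is the coefficient of the basis function phi_0 = 1.
        val = np.ones_like(x, dtype=float)
        for k, c in enumerate(coefs, start=1):
            val += c * np.sqrt(2) * np.cos(k * np.pi * x)
        return val

    return f_hat
```

A data-driven version would additionally select the truncation level K from the sample; here K is fixed by hand.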
A solution to the marginal problem is obtained in the form of a parametric exponential (Gibbs–Markov) distribution, whose unknown parameters are found by an optimization procedure that agrees with the maximum likelihood (ML) estimate. Because the method is computationally demanding, we also propose an alternative approach, provided the original basis of marginals can be appropriately extended. The (numerically feasible) solution can then be obtained either by the maximum pseudo-likelihood...
We consider the problem of estimating an unknown regression function when the design is random with values in . Our estimation procedure is based on model selection and does not rely on any prior information on the target function. We start with a collection of linear functional spaces and build the least-squares estimator on a space selected from this collection using the data. We study the performance of an estimator obtained by modifying this least-squares estimator on a set of small probability....
We propose two methods to solve multistage stochastic programs when only a (large) finite set of scenarios is available. The usual scenario tree construction used to represent non-anticipativity constraints is replaced by alternative discretization schemes coming from nonparametric estimation. In the first method, a penalty term is added to the objective so as to enforce closeness between the decision variables and the Nadaraya–Watson estimate of their conditional expectation. A numerical application...
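The Nadaraya–Watson estimate of a conditional expectation, which the first method's penalty term targets, can be sketched as a kernel-weighted average (the Gaussian kernel and the bandwidth h are user choices here, not values from the paper):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x, h):
    """Nadaraya-Watson estimate of E[Y | X = x] with a Gaussian kernel.

    m_hat(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h).
    """
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    w = np.exp(-0.5 * ((x - x_train) / h) ** 2)  # unnormalized kernel weights
    return float(np.dot(w, y_train) / np.sum(w))
```

In the penalized formulation described above, the deviation of each decision variable from this kernel estimate of its conditional expectation would enter the objective as a penalty term.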