Displaying 481 – 500 of 1021

Minimax nonparametric hypothesis testing for ellipsoids and Besov bodies

Yuri I. Ingster, Irina A. Suslina (2010)

ESAIM: Probability and Statistics

We observe an infinite-dimensional Gaussian random vector x = ξ + v where ξ is a sequence of standard Gaussian variables and v ∈ l₂ is an unknown mean. We consider the hypothesis testing problem H₀: v = 0 versus alternatives Hε,τ: v ∈ Vε for the sets Vε = Vε(τ, ρε) ⊂ l₂. The sets Vε are lq-ellipsoids of semi-axes aᵢ = i⁻ˢR/ε with an lp-ellipsoid of semi-axes bᵢ = i⁻ʳρε/ε removed, or similar Besov bodies Bq,t;s(R/ε) with Besov bodies Bp,h;r(ρε/ε) removed. Here τ = (κ, R) or τ = (κ, h, t, R); κ = (p, q, r, s) are the parameters which define the sets Vε for given radii...

Minimax nonparametric prediction

Maciej Wilczyński (2001)

Applicationes Mathematicae

Let U₀ be a random vector taking its values in a measurable space and having an unknown distribution P, and let U₁,...,Uₙ and V₁,...,Vₘ be independent, simple random samples from P of sizes n and m, respectively. Further, let z₁,...,zₖ be real-valued functions defined on the same space. Assuming that only the first sample is observed, we find a minimax predictor d⁰(n,U₁,...,Uₙ) of the vector Yₘ = ∑ⱼ₌₁ᵐ (z₁(Vⱼ),...,zₖ(Vⱼ))ᵀ with respect to a quadratic error loss function.
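As an illustration of the prediction problem above, a natural (non-minimax) baseline is the plug-in predictor that replaces the unobserved sample V₁,...,Vₘ by empirical means over the observed one. The functions and data below are hypothetical, and the paper's exact minimax predictor d⁰ differs from this sketch.

```python
import numpy as np

def plug_in_predictor(z_funcs, U, m):
    """Plug-in predictor of Y_m = sum_{j=1}^m (z_1(V_j), ..., z_k(V_j))^T.

    Since the V_j are unobserved draws from the same distribution P as
    the U_i, a natural predictor replaces E[z(V)] by the empirical mean
    over the observed sample U_1, ..., U_n and scales by m.
    """
    emp_means = np.array([np.mean([z(u) for u in U]) for z in z_funcs])
    return m * emp_means

rng = np.random.default_rng(0)
U = rng.normal(size=100)                  # observed sample U_1, ..., U_n from P
z_funcs = [lambda x: x, lambda x: x**2]   # hypothetical z_1, z_2
pred = plug_in_predictor(z_funcs, U, m=50)
```

The minimax predictor of the paper shrinks this plug-in estimate to guard against worst-case distributions P; the sketch only shows the structure of the target Yₘ.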

Minimax results for estimating integrals of analytic processes

Karim Benhenni, Jacques Istas (2010)

ESAIM: Probability and Statistics

The problem of predicting integrals of stochastic processes is considered. Linear estimators have been constructed by means of samples at N discrete times for processes having a fixed Hölderian regularity s > 0 in quadratic mean. It is known that the rate of convergence of the mean squared error is of order N⁻⁽²ˢ⁺¹⁾. In the class of analytic processes Hp, p ≥ 1, we show that among all estimators, the linear ones are optimal. Moreover, using optimal coefficient estimators derived through...

Model selection and estimation of a component in additive regression

Xavier Gendre (2014)

ESAIM: Probability and Statistics

Let Y ∈ ℝⁿ be a random vector with mean s and covariance matrix σ²PₙᵗPₙ, where Pₙ is some known n × n matrix. We construct a statistical procedure to estimate s as well as under a moment condition on Y or a Gaussian hypothesis. Both cases are developed for known or unknown σ². Our approach is free from any prior assumption on s and is based on non-asymptotic model selection methods. Given some collection of linear spaces {Sₘ, m ∈ ℳ}, we consider, for any m ∈ ℳ, the least-squares estimator ŝₘ of s in Sₘ....
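A generic penalized least-squares selection over a collection of linear spaces can be sketched as follows. The orthonormal model collection, noise level, and penalty constant below are illustrative assumptions, not the refined non-asymptotic penalties of the paper.

```python
import numpy as np

def select_model(Y, sigma2, models, pen_const=2.0):
    """Penalized least-squares model selection (generic sketch).

    `models` is a list of n x d_m matrices with orthonormal columns,
    whose spans play the role of the linear spaces S_m.  For each m,
    the least-squares estimator is the orthogonal projection of Y
    onto S_m, and we minimise ||Y - s_hat_m||^2 + pen_const*sigma2*d_m.
    """
    best = None
    for X in models:
        s_hat = X @ (X.T @ Y)  # projection onto span(X), X orthonormal
        crit = np.sum((Y - s_hat) ** 2) + pen_const * sigma2 * X.shape[1]
        if best is None or crit < best[0]:
            best = (crit, s_hat)
    return best[1]

rng = np.random.default_rng(1)
n = 64
s = np.zeros(n)
s[:4] = 5.0                                   # sparse mean vector
Y = s + rng.normal(size=n)                    # sigma2 = 1 here
models = [np.eye(n)[:, :d] for d in (1, 2, 4, 8, 16, 32)]
s_hat = select_model(Y, sigma2=1.0, models=models)
```

With nested coordinate spaces as above, the criterion balances the residual energy left outside Sₘ against the dimension dₘ, which is the core trade-off behind the oracle inequalities of this line of work.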

Model selection for (auto-)regression with dependent data

Yannick Baraud, F. Comte, G. Viennet (2001)

ESAIM: Probability and Statistics

In this paper, we study the problem of nonparametric estimation of an unknown regression function from dependent data with sub-Gaussian errors. As a particular case, we handle the autoregressive framework. For this purpose, we consider a collection of finite-dimensional linear spaces (e.g. linear spaces spanned by wavelets or piecewise polynomials on a possibly irregular grid) and we estimate the regression function by a least-squares estimator built on a data-driven selected linear space among...

Model selection for (auto-)regression with dependent data

Yannick Baraud, F. Comte, G. Viennet (2010)

ESAIM: Probability and Statistics

In this paper, we study the problem of nonparametric estimation of an unknown regression function from dependent data with sub-Gaussian errors. As a particular case, we handle the autoregressive framework. For this purpose, we consider a collection of finite-dimensional linear spaces (e.g. linear spaces spanned by wavelets or piecewise polynomials on a possibly irregular grid) and we estimate the regression function by a least-squares estimator built on a data-driven selected linear space among...

Model selection for estimating the non zero components of a Gaussian vector

Sylvie Huet (2006)

ESAIM: Probability and Statistics

We propose a method based on a penalised likelihood criterion for estimating the number of non-zero components of the mean of a Gaussian vector. Following the work of Birgé and Massart in Gaussian model selection, we choose the penalty function such that the resulting estimator minimises the Kullback risk.
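In the same spirit, though with a cruder penalty than the Kullback-risk-minimising one of the paper, one can estimate the number of non-zero mean components by trading off the residual energy of the smallest entries against a Birgé–Massart-style dimension penalty. The penalty shape and constant c below are assumptions for illustration.

```python
import numpy as np

def estimate_support_size(x, sigma=1.0, c=4.0):
    """Estimate the number of non-zero mean components of a Gaussian
    vector x by minimising, over k, the energy left in the n - k
    smallest entries plus a penalty c*sigma^2*k*(1 + log(n/k)).
    """
    n = len(x)
    x2 = np.sort(x ** 2)[::-1]                 # squared entries, decreasing
    S = x2.sum()
    tail = np.concatenate(([S], S - np.cumsum(x2)))  # tail[k] = energy beyond k
    pen = np.array([c * sigma**2 * k * (1 + np.log(n / max(k, 1)))
                    for k in range(n + 1)])
    return int(np.argmin(tail + pen))

rng = np.random.default_rng(2)
n, k_true = 200, 5
mu = np.zeros(n)
mu[:k_true] = 8.0                              # strong non-zero components
x = mu + rng.normal(size=n)
k_hat = estimate_support_size(x)
```

For well-separated signals the criterion recovers the support size; the paper's contribution is the precise penalty calibration that makes the resulting estimator minimise the Kullback risk.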

Model selection for quantum homodyne tomography

Jonas Kahn (2009)

ESAIM: Probability and Statistics

This paper deals with a non-parametric problem coming from physics, namely quantum tomography. It consists in determining the quantum state of a mode of light through a homodyne measurement. We apply several model selection procedures: penalized projection estimators, where we may use pattern functions or wavelets, and penalized maximum likelihood estimators. In all these cases, we get oracle inequalities. In the former we also have a polynomial rate of convergence for the non-parametric problem....

Model selection for regression on a random design

Yannick Baraud (2002)

ESAIM: Probability and Statistics

We consider the problem of estimating an unknown regression function when the design is random with values in ℝᵏ. Our estimation procedure is based on model selection and does not rely on any prior information on the target function. We start with a collection of linear functional spaces and build the least-squares estimator on a data-selected space among this collection. We study the performance of an estimator which is obtained by modifying this least-squares estimator on a set of small probability....

Model selection for regression on a random design

Yannick Baraud (2010)

ESAIM: Probability and Statistics

We consider the problem of estimating an unknown regression function when the design is random with values in ℝᵏ. Our estimation procedure is based on model selection and does not rely on any prior information on the target function. We start with a collection of linear functional spaces and build the least-squares estimator on a data-selected space among this collection. We study the performance of an estimator which is obtained by modifying this least-squares estimator on a set of small...

Moderate deviations for the Durbin–Watson statistic related to the first-order autoregressive process

S. Valère Bitseki Penda, Hacène Djellout, Frédéric Proïa (2014)

ESAIM: Probability and Statistics

The purpose of this paper is to investigate moderate deviations for the Durbin–Watson statistic associated with the stable first-order autoregressive process where the driven noise is also given by a first-order autoregressive process. We first establish a moderate deviation principle both for the least squares estimator of the unknown parameter of the autoregressive process and for the serial correlation estimator associated with the driven noise. It enables us to provide a moderate deviation...
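For reference, the Durbin–Watson statistic on the least-squares residuals of an AR(1) fit can be computed as below. Here the driving noise is taken i.i.d. for simplicity, whereas the paper's setting has the noise itself follow a first-order autoregression; the parameter values are illustrative.

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic D = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2.

    Values near 2 indicate no first-order serial correlation in the
    residuals; values near 0 (resp. 4) indicate positive (resp.
    negative) first-order correlation.
    """
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Stable AR(1) process X_t = theta * X_{t-1} + eps_t with |theta| < 1,
# fitted by least squares.
rng = np.random.default_rng(3)
theta, n = 0.5, 5000
eps = rng.normal(size=n)
X = np.zeros(n)
for t in range(1, n):
    X[t] = theta * X[t - 1] + eps[t]

theta_hat = np.sum(X[1:] * X[:-1]) / np.sum(X[:-1] ** 2)  # LS estimator
resid = X[1:] - theta_hat * X[:-1]
D = durbin_watson(resid)
```

With i.i.d. driving noise the residuals are close to white noise and D concentrates near 2; the paper quantifies the deviations of such statistics at moderate scales when the noise is itself autocorrelated.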
