Displaying 141 – 160 of 253

Model selection and estimation of a component in additive regression

Xavier Gendre (2014)

ESAIM: Probability and Statistics

Let Y ∈ ℝⁿ be a random vector with mean s and covariance matrix σ²PₙᵗPₙ, where Pₙ is a known n × n matrix. We construct a statistical procedure to estimate s as well as σ² under a moment condition on Y or a Gaussian hypothesis. Both cases are developed for known or unknown σ². Our approach is free from any prior assumption on s and is based on non-asymptotic model selection methods. Given a collection of linear spaces {Sm, m ∈ ℳ}, we consider, for any m ∈ ℳ, the least-squares estimator ŝm of s in Sm....

Model selection for (auto-)regression with dependent data

Yannick Baraud, F. Comte, G. Viennet (2001)

ESAIM: Probability and Statistics

In this paper, we study the problem of nonparametric estimation of an unknown regression function from dependent data with sub-Gaussian errors. As a particular case, we handle the autoregressive framework. For this purpose, we consider a collection of finite-dimensional linear spaces (e.g. linear spaces spanned by wavelets or piecewise polynomials on a possibly irregular grid) and we estimate the regression function by a least-squares estimator built on a data-driven selected linear space among...
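
The selection principle described in this abstract — compute a least-squares estimator on each space of the collection and keep the space minimizing a penalized residual criterion — can be sketched numerically. The following is an illustrative toy only (nested polynomial spaces and a Mallows-Cp-style penalty with constant c = 2 are our own choices, not the paper's collection or penalty):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200, 0.3
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(3 * x) + sigma * rng.normal(size=n)   # noisy observations of sin(3x)

def penalized_ls_select(x, y, sigma2, max_deg=9, c=2.0):
    """Fit the least-squares estimator on each polynomial space S_d and
    keep the degree minimizing RSS/n + c * dim(S_d) * sigma2 / n."""
    n = len(y)
    best_d, best_crit, best_coef = None, np.inf, None
    for d in range(max_deg + 1):
        coef = np.polyfit(x, y, d)
        rss = np.sum((y - np.polyval(coef, x)) ** 2)
        crit = rss / n + c * (d + 1) * sigma2 / n
        if crit < best_crit:
            best_d, best_crit, best_coef = d, crit, coef
    return best_d, best_coef

d_hat, coef = penalized_ls_select(x, y, sigma ** 2)
print("selected polynomial degree:", d_hat)
```

The data-driven choice balances the decreasing residual sum of squares against the growing dimension penalty, which is the mechanism the abstract refers to.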

Model selection for estimating the non zero components of a Gaussian vector

Sylvie Huet (2006)

ESAIM: Probability and Statistics

We propose a method, based on a penalised likelihood criterion, for estimating the number of non-zero components of the mean of a Gaussian vector. Following the work of Birgé and Massart in Gaussian model selection, we choose the penalty function such that the resulting estimator minimises the Kullback risk.
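
The idea of selecting the number of non-zero components via a penalised criterion can be illustrated with a small numpy sketch. This is not the paper's estimator: the Birgé–Massart-flavoured penalty shape k(1 + log(n/k)) and the constant c = 2 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
theta = np.zeros(n)
theta[:20] = 5.0                  # true mean: 20 non-zero components
y = theta + rng.normal(size=n)    # Gaussian observations, sigma^2 = 1

def estimate_support_size(y, sigma2=1.0, c=2.0):
    """Choose k minimizing the squared norm of the n-k smallest entries
    plus a penalty proportional to k * (1 + log(n/k))."""
    n = len(y)
    y2 = np.sort(y ** 2)[::-1]                           # squared entries, largest first
    tail = np.concatenate(([y2.sum()], y2.sum() - np.cumsum(y2)))
    best_k, best_crit = 0, tail[0]
    for k in range(1, n + 1):
        crit = tail[k] + c * sigma2 * k * (1 + np.log(n / k))
        if crit < best_crit:
            best_k, best_crit = k, crit
    return best_k

k_hat = estimate_support_size(y)
print("estimated number of non-zero components:", k_hat)
```

Keeping the k largest entries and penalising k trades goodness of fit against sparsity; with a strong signal the estimate lands near the true support size.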

Model selection for quantum homodyne tomography

Jonas Kahn (2009)

ESAIM: Probability and Statistics

This paper deals with a non-parametric problem coming from physics, namely quantum tomography: determining the quantum state of a mode of light through a homodyne measurement. We apply several model selection procedures: penalized projection estimators, where we may use pattern functions or wavelets, and penalized maximum likelihood estimators. In all these cases we obtain oracle inequalities. In the former case we also obtain a polynomial rate of convergence for the non-parametric problem....

Model selection for regression on a random design

Yannick Baraud (2002)

ESAIM: Probability and Statistics

We consider the problem of estimating an unknown regression function when the design is random with values in ℝᵏ. Our estimation procedure is based on model selection and does not rely on any prior information on the target function. We start with a collection of linear functional spaces and build, on a data-selected space among this collection, the least-squares estimator. We study the performance of an estimator which is obtained by modifying this least-squares estimator on a set of small probability....

Model selection with vague prior information

Elias Moreno, F. Javier Girón, M. Lina Martínez (1998)

Revista de la Real Academia de Ciencias Exactas Físicas y Naturales

In the Bayesian approach, the Bayes factor is the main tool for model selection and hypothesis testing. When prior information is weak, "default" or "automatic" priors, which are typically improper, are commonly used but, unfortunately, the Bayes factor is then defined only up to a multiplicative constant. In this paper we review some recent but already popular methodologies, intrinsic and fractional, to deal with improper priors in model selection and hypothesis testing. Special attention is paid to the...

Modelado de series temporales con métodos en bloque y recursivos. Desarrollo de estimadores y predictores adaptativos.

David de la Fuente García, Daniel F. García Martínez (1988)

Qüestiió

In this article we present a comparative analysis of the most interesting algorithms, both block and recursive, for estimating the parameters of time series models. It is proposed that long autoregressive models constitute a general parameterisation for modelling unstable series, whose parameters can be adequately estimated with recursive algorithms such as lattice filters.
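
The lattice filters mentioned in the abstract can be illustrated with Burg's algorithm, a standard recursive lattice estimator of autoregressive parameters. This is a minimal sketch on simulated data, not the specific algorithms compared in the article:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = np.zeros(n)
e = rng.normal(size=n)
# simulate a stationary AR(2): x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t
for t in range(2, n):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]

def burg_ar(x, order):
    """Burg's lattice method: at each stage, choose the reflection
    coefficient minimizing the summed forward and backward prediction
    error power, then update both error sequences through the lattice."""
    f = np.asarray(x, dtype=float).copy()   # forward prediction errors
    b = f.copy()                            # backward prediction errors
    a = np.zeros(0)                         # AR coefficients estimated so far
    for m in range(order):
        fp = f[m + 1:].copy()
        bp = b[m:-1].copy()
        # reflection (partial autocorrelation) coefficient for this stage
        k = 2.0 * np.dot(bp, fp) / (np.dot(fp, fp) + np.dot(bp, bp))
        f[m + 1:] = fp - k * bp
        b[m + 1:] = bp - k * fp
        # Levinson-style update of the AR coefficient vector
        a = np.concatenate((a - k * a[::-1], [k]))
    return a

a_hat = burg_ar(x, 2)
print("estimated AR coefficients:", a_hat)
```

Each lattice stage adds one reflection coefficient recursively, which is why such filters suit the long autoregressive parameterisations the article advocates.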
