Displaying 1 – 8 of 8

Recursive bias estimation for multivariate regression smoothers

Pierre-André Cornillon, N. W. Hengartner, E. Matzner-Løber (2014)

ESAIM: Probability and Statistics

This paper presents a practical and simple fully nonparametric multivariate smoothing procedure that adapts to the underlying smoothness of the true regression function. Our estimator is easily computed by successive application of existing base smoothers (without the need to select an optimal smoothing parameter), such as thin-plate spline or kernel smoothers. The resulting smoother has better out-of-sample predictive capabilities than the underlying base smoother, or competing structurally...
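
The "successive application of existing base smoothers" can be illustrated by a minimal Python sketch of iterative bias reduction (not the authors' code): smooth the residuals of the current fit and add the estimated bias back. The Gaussian Nadaraya-Watson base smoother and all function names here are illustrative assumptions.

```python
import numpy as np

def smoother_matrix(x, bandwidth):
    # Nadaraya-Watson weights for a Gaussian kernel: each row sums to 1,
    # so S @ y is a local weighted average of the responses.
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return w / w.sum(axis=1, keepdims=True)

def iterated_bias_reduction(x, y, bandwidth=0.5, n_iter=5):
    # Start from the base fit, then repeatedly smooth the residuals and
    # add the correction back:  m_{k+1} = m_k + S (y - m_k).
    # Each pass removes part of the smoothing bias of the base smoother.
    S = smoother_matrix(x, bandwidth)
    m = S @ y
    for _ in range(n_iter):
        m = m + S @ (y - m)
    return m
```

In practice the number of iterations plays the role of the smoothing parameter and is chosen by a data-driven stopping rule rather than fixed in advance.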

Redescending M-estimators in regression analysis, cluster analysis and image analysis

Christine H. Müller (2004)

Discussiones Mathematicae Probability and Statistics

We give a review of the properties and applications of M-estimators with redescending score functions. For regression analysis, some of these redescending M-estimators can attain the maximum breakdown point possible in this setup. Moreover, some of them are the solutions of the problem of maximizing the efficiency under a bounded influence function when the regression coefficient and the scale parameter are estimated simultaneously. Hence redescending M-estimators satisfy several outlier robustness...
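
A standard example of a redescending score function (not specific to this paper) is Tukey's biweight: the score grows near zero but returns to exactly zero beyond a cutoff c, so gross outliers receive zero influence. A minimal sketch, with the conventional default c:

```python
import numpy as np

def tukey_biweight_psi(u, c=4.685):
    # Redescending score function psi(u) = u (1 - (u/c)^2)^2 for |u| <= c
    # and psi(u) = 0 otherwise: observations beyond c contribute nothing
    # to the estimating equation.
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= c, u * (1 - (u / c) ** 2) ** 2, 0.0)
```

Compare with a monotone score such as Huber's, which merely bounds the influence of large residuals instead of driving it back to zero.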

Remarks on optimum kernels and optimum boundary kernels

Jitka Poměnková (2008)

Applications of Mathematics

Kernel smoothers are among the most popular nonparametric estimates used for describing data structure. They can be applied to the fixed design regression model as well as to the random design regression model. The main idea of this paper is to present a construction of the optimum kernel and optimum boundary kernel by means of the Gegenbauer and Legendre polynomials.
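
As a concrete illustration (a sketch, not the paper's construction), the Epanechnikov kernel K(u) = (3/4)(1 − u²) on [−1, 1] is the classical optimum kernel of order (0, 2), and plugging it into a Nadaraya-Watson estimate gives a basic kernel regression smoother:

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel: supported on [-1, 1], integrates to 1.
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def nadaraya_watson(x_eval, x, y, h):
    # Kernel regression estimate at the points x_eval with bandwidth h:
    # a locally weighted average of the observed responses y.
    w = epanechnikov((np.asarray(x_eval, dtype=float)[:, None] - x[None, :]) / h)
    return (w @ y) / w.sum(axis=1)
```

Near the boundary of the design interval this estimator loses accuracy, which is exactly what the boundary kernels constructed in the paper are meant to repair.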

Risk bounds for mixture density estimation

Alexander Rakhlin, Dmitry Panchenko, Sayan Mukherjee (2005)

ESAIM: Probability and Statistics

In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Estimator (MLE) and the greedy procedure described by Li and Barron (1999) under the additional assumption of boundedness of densities. We prove an O(1/√n) bound on the estimation error which does not depend on the number of densities in the estimated combination. Under the boundedness assumption, this improves the bound of Li and Barron by...
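
The greedy procedure of Li and Barron can be sketched as follows (an illustrative toy version, not the paper's implementation): at each step, mix one new candidate density into the current estimate, keeping the convex combination that most increases the average log-likelihood. Gaussian candidates, the grid of mixing weights, and all names are assumptions made for the sketch.

```python
import numpy as np

def greedy_mixture(data, candidate_means, sigma=1.0, n_components=3):
    # Greedily build f_k = (1 - a) f_{k-1} + a * phi, choosing the
    # candidate phi and weight a that maximize the mean log-likelihood.
    def gauss(x, mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    dens = None                      # current mixture, evaluated at the data
    alphas = np.linspace(0.05, 0.95, 19)
    for _ in range(n_components):
        best = None
        for mu in candidate_means:
            phi = gauss(data, mu)
            if dens is None:
                trials = [(np.mean(np.log(phi)), 1.0, phi)]
            else:
                trials = [(np.mean(np.log((1 - a) * dens + a * phi)), a, phi)
                          for a in alphas]
            for ll, a, phi_vals in trials:
                if best is None or ll > best[0]:
                    best = (ll, a, phi_vals)
        _, a, phi = best
        dens = phi if dens is None else (1 - a) * dens + a * phi
    return dens
```

The appeal of the greedy scheme is computational: each step is a one-dimensional search over the mixing weight, yet the resulting risk bound does not degrade with the number of components.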

Risk bounds for mixture density estimation

Alexander Rakhlin, Dmitry Panchenko, Sayan Mukherjee (2010)

ESAIM: Probability and Statistics

In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Estimator (MLE) and the greedy procedure described by Li and Barron (1999) under the additional assumption of boundedness of densities. We prove an O(1/√n) bound on the estimation error which does not depend on the number of densities in the estimated combination. Under the boundedness assumption, this improves the bound of Li and Barron...
