Displaying 61 – 76 of 76

Recursive bias estimation for multivariate regression smoothers

Pierre-André Cornillon, N. W. Hengartner, E. Matzner-Løber (2014)

ESAIM: Probability and Statistics

This paper presents a practical and simple fully nonparametric multivariate smoothing procedure that adapts to the underlying smoothness of the true regression function. Our estimator is easily computed by successive application of existing base smoothers (without the need to select an optimal smoothing parameter), such as thin-plate spline or kernel smoothers. The resulting smoother has better out-of-sample predictive capabilities than the underlying base smoother, or competing structurally...
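The successive-application idea can be sketched as iterative bias correction: fit a base smoother, then repeatedly smooth the residuals and add the estimated bias back. A minimal sketch; the Nadaraya-Watson base smoother, Gaussian kernel, bandwidth, and iteration count below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def kernel_smooth(x, y, h):
    """Nadaraya-Watson smoother with a Gaussian kernel (the base smoother here)."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)   # normalize weights row-wise
    return w @ y

def bias_corrected_smooth(x, y, h, n_iter=3):
    """Iteratively smooth the residuals and add the estimated bias back."""
    fit = kernel_smooth(x, y, h)
    for _ in range(n_iter):
        fit = fit + kernel_smooth(x, y - fit, h)  # smoothed residuals estimate the bias
    return fit

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.1, x.size)
fit = bias_corrected_smooth(x, y, h=0.1)
```

With a deliberately oversmoothing bandwidth, the corrected fit tracks the true curve more closely than a single pass of the base smoother, which is the sense in which the recursion adapts to the unknown smoothness.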

Remarks on optimum kernels and optimum boundary kernels

Jitka Poměnková (2008)

Applications of Mathematics

Kernel smoothers belong to the most popular nonparametric functional estimates used for describing data structure. They can be applied to the fix design regression model as well as to the random design regression model. The main idea of this paper is to present a construction of the optimum kernel and optimum boundary kernel by means of the Gegenbauer and Legendre polynomials.
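For a concrete sense of what an optimum kernel must satisfy, one can check the classical Epanechnikov kernel against the second-order moment conditions numerically. This is only a sanity check of the defining constraints, not the paper's Gegenbauer/Legendre construction.

```python
import numpy as np

# Epanechnikov kernel K(u) = 3/4 (1 - u^2) on [-1, 1]: the classical
# minimum-variance second-order kernel. Checking its defining moment
# conditions numerically on a fine grid.
u = np.linspace(-1.0, 1.0, 200001)
du = u[1] - u[0]
K = 0.75 * (1.0 - u ** 2)
m0 = np.sum(K) * du            # zeroth moment: 1 (K integrates to one)
m1 = np.sum(u * K) * du        # first moment: 0 (symmetry)
m2 = np.sum(u ** 2 * K) * du   # second moment: 1/5 (finite and nonzero)
```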

Bandwidth selection in kernel-type smoothing of the nonparametric part of a partially linear model with autoregressive errors

Germán Aneiros Pérez (2000)

Qüestiió

Suppose that yᵢ = ζᵢᵀβ + m(tᵢ) + εᵢ, i = 1, ..., n, where the (p × 1) vector β and the function m(·) are unknown, and the errors εᵢ come from a stationary first-order autoregressive (AR(1)) process. We discuss the problem of selecting the bandwidth of a kernel-type estimator of the function m(·) based on a Generalized Least Squares estimator of β. We obtain the asymptotic expression of an optimal bandwidth and propose a method for estimating it, so that it yields...

Smoothing and preservation of irregularities using local linear fitting

Irène Gijbels (2008)

Applications of Mathematics

For nonparametric estimation of a smooth regression function, local linear fitting is a widely-used method. The goal of this paper is to briefly review how to use this method when the unknown curve possibly has some irregularities, such as jumps or peaks, at unknown locations. It is then explained how the same basic method can be used when estimating unsmooth probability densities and conditional variance functions.
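The basic local linear fit reviewed here is a weighted least-squares problem on the local design (1, x − x₀), whose intercept is the fitted value at x₀. A minimal sketch; the Gaussian weights and bandwidth are illustrative choices.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of m(x0): weighted least squares on the design (1, x - x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local linear design matrix
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                  # local intercept = fitted value at x0
```

A useful property of this estimator is that it reproduces straight lines exactly, whatever the bandwidth, which is one reason it behaves well near boundaries.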

Smoothing dichotomy in randomized fixed-design regression with strongly dependent errors based on a moving average

Artur Bryk (2014)

Applicationes Mathematicae

We consider a fixed-design regression model with errors which form a Borel measurable function of a long-range dependent moving average process. We introduce an artificial randomization of grid points at which observations are taken in order to diminish the impact of strong dependence. We show that the Priestley-Chao kernel estimator of the regression function exhibits a dichotomous asymptotic behaviour depending on the amount of smoothing employed. Moreover, the resulting estimator is shown to exhibit...
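The Priestley-Chao estimator weights each response by the spacing of its design point, m̂(x₀) = (1/h) Σᵢ (xᵢ − xᵢ₋₁) K((x₀ − xᵢ)/h) yᵢ. A sketch, with sorted uniform draws standing in for the paper's randomized grid; the Gaussian kernel and bandwidth are illustrative.

```python
import numpy as np

def priestley_chao(x0, x, y, h):
    """Priestley-Chao estimator (1/h) * sum_i (x_i - x_{i-1}) K((x0 - x_i)/h) y_i
    for a sorted design x on [0, 1]; K is the standard Gaussian density."""
    k = np.exp(-0.5 * ((x0 - x) / h) ** 2) / np.sqrt(2.0 * np.pi)
    gaps = np.diff(x, prepend=0.0)   # design spacings, with x_1's gap measured from 0
    return np.sum(gaps * k * y) / h

rng = np.random.default_rng(1)
n = 500
x = np.sort(rng.uniform(0.0, 1.0, n))   # artificially randomized grid points
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.1, n)
est = priestley_chao(0.5, x, y, h=0.05)
```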

Stacked regression with restrictions

Tomasz Górecki (2005)

Discussiones Mathematicae Probability and Statistics

When we apply stacked regression to classification, we need only discriminant indices, which can be negative. In many situations we want these indices to be positive, e.g. if we want to use them to compute posterior probabilities, or when we want to use stacked regression for combining classifiers. In such situations we have to use least-squares regression under the constraint βₖ ≥ 0, k = 1, 2, ..., K. In their earlier work [5], LeBlanc and Tibshirani used an algorithm given in [4]. However, in this paper...
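The constrained problem min ‖Fb − y‖² subject to bₖ ≥ 0 can be solved, for illustration, by projected gradient descent; this is a generic sketch, not the algorithm of [4] nor the method developed in the paper. F holds the member models' predictions, one column per model.

```python
import numpy as np

def nnls_weights(F, y, n_iter=2000):
    """Least squares min ||F b - y||^2 under b_k >= 0, via projected gradient descent.
    F: (n, K) matrix whose k-th column holds member k's predictions."""
    K = F.shape[1]
    b = np.full(K, 1.0 / K)                 # start from equal weights
    lr = 1.0 / np.linalg.norm(F, 2) ** 2    # step 1/L, L = largest eigenvalue of F^T F
    for _ in range(n_iter):
        grad = F.T @ (F @ b - y)            # gradient of the squared error
        b = np.maximum(b - lr * grad, 0.0)  # gradient step, then project onto b >= 0
    return b
```

Projection onto the nonnegative orthant is just componentwise clipping, which is what makes this simple iteration applicable here.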

Testing Linearity in an AR Errors-in-variables Model with Application to Stochastic Volatility

D. Feldmann, W. Härdle, C. Hafner, M. Hoffmann, O. Lepski, A. Tsybakov (2003)

Applicationes Mathematicae

Stochastic Volatility (SV) models are widely used in financial applications. To decide whether standard parametric restrictions are justified for a given data set, a statistical test is required. In this paper, we develop such a test of a linear hypothesis versus a general composite nonparametric alternative using the state space representation of the SV model as an errors-in-variables AR(1) model. The power of the test is analyzed. We provide a simulation study and apply the test to the HFDF96...

Theory of classification: a survey of some recent advances

Stéphane Boucheron, Olivier Bousquet, Gábor Lugosi (2005)

ESAIM: Probability and Statistics

The last few years have witnessed important new developments in the theory and practice of pattern classification. We intend to survey some of the main new ideas that have led to these recent results.

Theory of Classification: a Survey of Some Recent Advances

Stéphane Boucheron, Olivier Bousquet, Gábor Lugosi (2010)

ESAIM: Probability and Statistics

The last few years have witnessed important new developments in the theory and practice of pattern classification. We intend to survey some of the main new ideas that have led to these recent results.

Trimmed Estimators in Regression Framework

Tomáš Jurczyk (2011)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

From the practical point of view, regression analysis and its least squares method are clearly among the most used techniques of statistics. Unfortunately, if some problem is present in the data (for example contamination), classical methods are no longer suitable. Many methods have been proposed to overcome these problematic situations. In this contribution we focus on a special kind of method based on trimming. There exist several approaches which use trimming off part of the observations,...

Uniform Confidence Bands for Local Polynomial Quantile Estimators

Camille Sabbah (2014)

ESAIM: Probability and Statistics

This paper deals with uniform consistency and uniform confidence bands for the quantile function and its derivatives. We describe a kernel local polynomial estimator of quantile function and give uniform consistency. Furthermore, we derive its maximal deviation limit distribution using an approximation in the spirit of Bickel and Rosenblatt [P.J. Bickel and M. Rosenblatt, Ann. Statist. 1 (1973) 1071–1095].

Using randomization to improve performance of a variance estimator of strongly dependent errors

Artur Bryk (2012)

Applicationes Mathematicae

We consider a fixed-design regression model with long-range dependent errors which form a moving average or Gaussian process. We introduce an artificial randomization of grid points at which observations are taken in order to diminish the impact of strong dependence. We estimate the variance of the errors using the Rice estimator. The estimator is shown to exhibit weak (i.e. in probability) consistency. Simulation results confirm this property for moderate and large sample sizes when randomization...
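The Rice estimator referred to above estimates the error variance from first differences, σ̂² = Σᵢ (yᵢ₊₁ − yᵢ)² / (2(n − 1)), which cancels a smooth trend without estimating it. A sketch under the simplifying assumption of an equispaced design with independent errors; it ignores the long-range dependence that the paper's randomization is designed to handle.

```python
import numpy as np

def rice_variance(y):
    """Rice difference-based estimator: sigma2_hat = sum (y_{i+1} - y_i)^2 / (2 (n - 1))."""
    d = np.diff(np.asarray(y, dtype=float))
    return np.sum(d ** 2) / (2.0 * d.size)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2000)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.5, x.size)  # true error variance 0.25
sigma2_hat = rice_variance(y)
```

On a fine grid the differenced trend contributes O(1/n²) per term, so the estimate is driven almost entirely by the noise.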

Why L₁ view and what is next?

László Györfi, Adam Krzyżak (2011)

Kybernetika

N. N. Cencov wrote a commentary chapter, included in the Appendix of the Russian translation of the Devroye and Györfi book [15], collecting some arguments supporting the L₁ view of density estimation. Cencov's work is available only in Russian and had never been translated, so the late Igor Vajda decided to translate Cencov's paper and to add some remarks on the occasion of organizing the session "25 Years of the L₁ Density Estimation" at the Prague Stochastics 2010 Symposium. In this paper we...
