
Bandwidth selection in kernel smoothing of the nonparametric part of a partially linear model with autoregressive errors.

Germán Aneiros Pérez (2000)

Qüestiió

Suppose that y_i = ζ_i^T β + m(t_i) + ε_i, i = 1, ..., n, where the (p × 1) vector β and the function m(·) are unknown, and the errors ε_i come from a stationary first-order autoregressive (AR(1)) process. We discuss the problem of selecting the bandwidth parameter of a kernel-type estimator of the function m(·) based on a generalized least squares estimator of β. We obtain the asymptotic expression for an optimal bandwidth and propose a method for estimating it, such that it gives rise to...
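
As a rough illustration of the model described in this abstract, the following minimal Python sketch simulates a partially linear model with AR(1) errors, fits β by plain OLS (a stand-in for the paper's GLS step), smooths the partial residuals with a Gaussian kernel, and picks a bandwidth by naive leave-one-out cross-validation. Under autocorrelated errors that naive criterion is known to be biased, which is precisely what motivates a corrected selector; all numbers and names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy partially linear model y_i = z_i^T beta + m(t_i) + eps_i with
# stationary AR(1) errors (illustrative values, not from the paper).
n, p, rho = 200, 2, 0.6
beta = np.array([1.0, -0.5])
t = np.linspace(0.0, 1.0, n)
Z = rng.normal(size=(n, p))
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = rho * eps[i - 1] + rng.normal(scale=0.3)
y = Z @ beta + np.sin(2 * np.pi * t) + eps

def nw_weights(t, h):
    """Gaussian-kernel Nadaraya-Watson weight matrix over the design t."""
    u = (t[:, None] - t[None, :]) / h
    W = np.exp(-0.5 * u**2)
    return W / W.sum(axis=1, keepdims=True)

# Two-step fit: OLS for beta (the paper uses a GLS estimator instead),
# then kernel smoothing of the partial residuals to estimate m(.).
beta_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
resid = y - Z @ beta_hat

def loo_cv(h):
    """Naive leave-one-out CV score; biased under AR(1) errors."""
    W = nw_weights(t, h)
    np.fill_diagonal(W, 0.0)
    W = W / W.sum(axis=1, keepdims=True)
    return np.mean((resid - W @ resid) ** 2)

grid = np.array([0.02, 0.05, 0.1, 0.2])
h_cv = grid[np.argmin([loo_cv(h) for h in grid])]
```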

Smoothing and preservation of irregularities using local linear fitting

Irène Gijbels (2008)

Applications of Mathematics

For nonparametric estimation of a smooth regression function, local linear fitting is a widely used method. The goal of this paper is to briefly review how to use this method when the unknown curve possibly has some irregularities, such as jumps or peaks, at unknown locations. It is then explained how the same basic method can be used when estimating non-smooth probability densities and conditional variance functions.
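
For reference, here is a minimal sketch of the baseline local linear estimator this abstract builds on: at each point x0 a straight line is fitted by weighted least squares with kernel weights, and its intercept is the estimate of m(x0). The paper's adaptations for jumps and peaks are not shown; the kernel, bandwidth, and test curve are illustrative assumptions.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of m(x0): fit a weighted straight line
    around x0 with Gaussian kernel weights and return its intercept."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    a = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted normal equations
    return a[0]

# Toy usage: a curve with a peak at 0.5 (illustrative, not from the paper).
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(size=300))
y = np.abs(x - 0.5) + rng.normal(scale=0.05, size=300)
fit = [local_linear(x, y, g, h=0.05) for g in np.linspace(0, 1, 101)]
```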

Smoothing dichotomy in randomized fixed-design regression with strongly dependent errors based on a moving average

Artur Bryk (2014)

Applicationes Mathematicae

We consider a fixed-design regression model with errors which form a Borel measurable function of a long-range dependent moving average process. We introduce an artificial randomization of the grid points at which observations are taken in order to diminish the impact of the strong dependence. We show that the Priestley-Chao kernel estimator of the regression function exhibits a dichotomous asymptotic behaviour depending on the amount of smoothing employed. Moreover, the resulting estimator is shown to exhibit...
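
As a point of reference for the estimator named in this abstract, the sketch below implements the basic Priestley-Chao kernel estimator on a fixed design, where each observation is weighted by its design spacing t_i - t_{i-1}. Short-memory MA(2) noise is used as a stand-in; the paper's long-range dependent errors and the randomized grid points are not reproduced, and all parameter values are assumptions.

```python
import numpy as np

def priestley_chao(t, y, x, h):
    """Priestley-Chao kernel estimate at points x for a sorted fixed
    design t, using a Gaussian kernel with bandwidth h."""
    gaps = np.diff(t, prepend=2 * t[0] - t[1])  # spacings t_i - t_{i-1}
    u = (x[:, None] - t[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return (K * (gaps / h)) @ y

# Equally spaced design with short-memory MA(2) errors (illustrative).
rng = np.random.default_rng(2)
n = 500
t = (np.arange(1, n + 1) - 0.5) / n
e = rng.normal(size=n + 2)
y = np.sin(2 * np.pi * t) + (e[2:] + e[1:-1] + e[:-2]) / 3
m_hat = priestley_chao(t, y, np.linspace(0, 1, 101), h=0.05)
```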

Stacked regression with restrictions

Tomasz Górecki (2005)

Discussiones Mathematicae Probability and Statistics

When we apply stacked regression to classification, we need only discriminant indices, which can be negative. In many situations we want these indices to be positive, e.g., when we want to use them to compute posterior probabilities, or when we want to use stacked regression to combine classifiers. In such situations we have to use least-squares regression under the constraint βₖ ≥ 0, k = 1, 2, ..., K. In their earlier work [5], LeBlanc and Tibshirani used an algorithm given in [4]. However, in this paper...
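
To illustrate the constrained least-squares step this abstract refers to, the sketch below solves for non-negative stacking weights with scipy.optimize.nnls (the Lawson-Hanson NNLS solver), which is one standard solver for this problem; no claim is made that it matches the algorithm of [4] or the method proposed in the paper. The data and classifier setup are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical setup: column k of P holds the predictions of base
# classifier k and y is the 0/1 target (illustrative data).
rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=500).astype(float)
P = np.column_stack([np.clip(y + rng.normal(scale=s, size=500), 0, 1)
                     for s in (0.3, 0.5, 0.8)])

# Least-squares stacking weights under the constraint beta_k >= 0.
beta, _ = nnls(P, y)
print("non-negative stacking weights:", beta)
```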
