Displaying similar documents to “On nonparametric estimators of location of maximum”

Smoothing dichotomy in randomized fixed-design regression with strongly dependent errors based on a moving average

Artur Bryk (2014)

Applicationes Mathematicae

Similarity:

We consider a fixed-design regression model with errors which form a Borel measurable function of a long-range dependent moving average process. We introduce an artificial randomization of the grid points at which observations are taken in order to diminish the impact of strong dependence. We show that the Priestley-Chao kernel estimator of the regression function exhibits a dichotomous asymptotic behaviour depending on the amount of smoothing employed. Moreover, the resulting estimator is...
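For orientation, the Priestley-Chao estimator weights each response by a kernel centered at the evaluation point and by the spacing between design points. A minimal sketch, assuming a Gaussian kernel and an ordinary equispaced deterministic design (the illustrative data below are not from the paper and do not implement its randomization):

```python
import numpy as np

def priestley_chao(x, y, x0, h):
    """Priestley-Chao kernel estimate of m(x0) with a Gaussian kernel.

    x : sorted fixed design points, y : observations, h : bandwidth.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # spacings x_i - x_{i-1}; reuse the first spacing for i = 0
    dx = np.diff(x, prepend=x[0] - (x[1] - x[0]))
    u = (x0 - x) / h
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return np.sum(dx * kernel * y) / h

# toy check on a smooth function with small i.i.d. noise
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 500)
ys = np.sin(2.0 * np.pi * xs) + 0.05 * rng.standard_normal(500)
est = priestley_chao(xs, ys, 0.25, 0.05)   # m(0.25) = sin(pi/2) = 1
```

With long-range dependent errors the interesting behaviour in the paper is exactly how this estimator's limit changes with the bandwidth h; the sketch only fixes the formula being studied.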

Strange Design Points in Linear Regression

Andrej Pázman (2011)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

Similarity:

We discuss, partly through examples, several intuitively unexpected results in a standard linear regression model. We demonstrate that direct observations of the regression curve at a given point cannot be substituted by observations at two very close neighboring points. Conversely, we show that observations at two distant design points improve the variance of the estimator. In an experiment with correlated observations we show somewhat unexpected conditions under which a design point...
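One standard fact behind the "distant points help" phenomenon is that the variance of the OLS slope is inversely proportional to the spread of the design, so nearly coincident design points make it explode. A minimal sketch of that textbook computation (an illustration, not the paper's specific examples):

```python
import numpy as np

def slope_variance(x, sigma2=1.0):
    """Variance of the OLS slope estimate for design points x (i.i.d. errors):
    Var(beta_hat) = sigma^2 / sum_i (x_i - mean(x))^2."""
    x = np.asarray(x, dtype=float)
    return sigma2 / np.sum((x - x.mean()) ** 2)

close = slope_variance([0.49, 0.51])   # two very close design points
far = slope_variance([0.0, 1.0])       # two distant design points
# close = 5000, far = 2: coalescing design points ruin the slope estimate
```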

A note on the rate of convergence of local polynomial estimators in regression models

Friedrich Liese, Ingo Steinke (2001)

Kybernetika

Similarity:

Local polynomials are used to construct estimators for the value m(x₀) of the regression function m and the values of the derivatives D^γ m(x₀) in a general class of nonparametric regression models. The covariables are allowed to be random or non-random. Only asymptotic conditions on the average distribution of the covariables are used as a smoothness condition on the experimental design. This smoothness condition is discussed in detail. The optimal stochastic rate of convergence of the estimators is established....
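The basic construction is a weighted least squares fit of a polynomial in (x − x₀): the j-th fitted coefficient times j! estimates the j-th derivative D^j m(x₀). A minimal sketch, assuming an Epanechnikov-type weight and a noiseless test function (not the paper's general model):

```python
import numpy as np
from math import factorial

def local_poly(x, y, x0, h, degree=1):
    """Local polynomial estimates of m(x0) and its derivatives up to `degree`."""
    x = np.asarray(x, dtype=float)
    u = (x - x0) / h
    w = np.maximum(1.0 - u**2, 0.0)                 # Epanechnikov-type weights
    X = np.vander(x - x0, degree + 1, increasing=True)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None],
                               np.asarray(y, dtype=float) * sw, rcond=None)
    # beta[j] * j! estimates the j-th derivative D^j m(x0)
    return np.array([beta[j] * factorial(j) for j in range(degree + 1)])

xs = np.linspace(0.0, 1.0, 101)
ys = xs**2                                          # m(x) = x^2, no noise
m_hat = local_poly(xs, ys, 0.5, 0.2, degree=2)      # [m, m', m''] at x0 = 0.5
```

On a noiseless polynomial of matching degree the fit is exact: m(0.5) = 0.25, m′(0.5) = 1, m″(0.5) = 2.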

Trimmed Estimators in Regression Framework

Tomáš Jurczyk (2011)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

Similarity:

From the practical point of view, regression analysis and its least squares method are clearly among the most used techniques of statistics. Unfortunately, if some problem is present in the data (for example contamination), classical methods are no longer suitable. Many methods have been proposed to overcome these problematic situations. In this contribution we focus on a special class of methods based on trimming. There exist several approaches which use trimming off part...

Adaptive trimmed likelihood estimation in regression

Tadeusz Bednarski, Brenton R. Clarke, Daniel Schubert (2010)

Discussiones Mathematicae Probability and Statistics

Similarity:

In this paper we derive an asymptotic normality result for an adaptive trimmed likelihood estimator of regression, starting from initial high breakdown point robust regression estimates. The approach leads to quickly and easily computed robust and efficient estimates for regression. A highlight of the method is that, within a single algorithm, it tends automatically to expose the outliers and give least squares estimates with the outliers removed. The idea is to begin with a rapidly computed consistent...
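The "fit, expose outliers, refit on the rest" idea can be caricatured by concentration steps for trimmed least squares: fit on a subset, rank all squared residuals, keep the k best-fitting points, and iterate. A simplified sketch only; the paper's adaptive method additionally chooses the trimming amount and starts from a high breakdown point estimate rather than the naive start used here:

```python
import numpy as np

def trimmed_ls(X, y, k, n_iter=50):
    """Least squares on the k best-fitting observations, refined by C-steps."""
    idx = np.arange(k)                    # naive start; robust starts preferred
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        new_idx = np.argsort((y - X @ beta) ** 2)[:k]   # keep k smallest residuals
        if np.array_equal(np.sort(new_idx), np.sort(idx)):
            break                         # subset stabilized
        idx = new_idx
    return beta, np.sort(idx)

# 90 clean points on y = 1 + 2x plus 10 gross outliers
x = np.linspace(0.0, 1.0, 100)
y = 1.0 + 2.0 * x
y[-10:] += 50.0                           # contamination
X = np.column_stack([np.ones_like(x), x])
beta, kept = trimmed_ls(X, y, k=90)       # recovers intercept 1, slope 2
```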

Using randomization to improve performance of a variance estimator of strongly dependent errors

Artur Bryk (2012)

Applicationes Mathematicae

Similarity:

We consider a fixed-design regression model with long-range dependent errors which form a moving average or Gaussian process. We introduce an artificial randomization of grid points at which observations are taken in order to diminish the impact of strong dependence. We estimate the variance of the errors using the Rice estimator. The estimator is shown to exhibit weak (i.e. in probability) consistency. Simulation results confirm this property for moderate and large sample sizes when...
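The Rice estimator referred to here is the classical difference-based variance estimator σ̂² = (1/(2(n−1))) Σ (yᵢ₊₁ − yᵢ)², which cancels a smooth trend through first differences. A minimal sketch on an equispaced design with i.i.d. noise (the paper's setting instead has strongly dependent errors and randomized grid points):

```python
import numpy as np

def rice_variance(y):
    """Rice difference-based estimator of the error variance."""
    y = np.asarray(y, dtype=float)
    d = np.diff(y)                        # differences kill a smooth trend
    return np.sum(d**2) / (2.0 * (len(y) - 1))

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 2000)
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
sigma2_hat = rice_variance(y)             # true error variance is 0.3**2 = 0.09
```

Under long-range dependence the naive version of this estimator degrades, which is precisely the motivation for the randomization studied in the paper.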