Displaying similar documents to “Sufficient conditions for the strong consistency of least squares estimator with α-stable errors”

Strong convergence for weighted sums of WOD random variables and its application in the EV regression model

Liwang Ding, Caoqing Jiang (2024)

Applications of Mathematics

Similarity:

The strong convergence for weighted sums of widely orthant dependent (WOD) random variables is investigated. As an application, we further investigate the strong consistency of the least squares estimator in the EV regression model for WOD random variables. A simulation study is carried out to confirm the theoretical results.

L¹-penalization in functional linear regression with subgaussian design

Vladimir Koltchinskii, Stanislav Minsker (2014)

Journal de l’École polytechnique — Mathématiques

Similarity:

We study functional regression with random subgaussian design and real-valued response. The focus is on the problems in which the regression function can be well approximated by a functional linear model with the slope function being “sparse” in the sense that it can be represented as a sum of a small number of well separated “spikes”. This can be viewed as an extension of now classical sparse estimation problems to the case of infinite dictionaries. We study an estimator of the regression...

Instrumental weighted variables under heteroscedasticity Part I – Consistency

Jan Ámos Víšek (2017)

Kybernetika

Similarity:

The proof of consistency of instrumental weighted variables, the robust version of the classical instrumental variables, is given. It is proved that all solutions of the corresponding normal equations are contained, with high probability, in a ball whose radius can be selected, asymptotically, arbitrarily small. Then √n-consistency is also proved. An extended numerical study (Part II of the paper) offers a picture of the behavior of the estimator for finite samples under various...

Deviation inequalities and moderate deviations for estimators of parameters in bifurcating autoregressive models

S. Valère Bitseki Penda, Hacène Djellout (2014)

Annales de l'I.H.P. Probabilités et statistiques

Similarity:

The purpose of this paper is to investigate the deviation inequalities and the moderate deviation principle of the least squares estimators of the unknown parameters of general pth-order asymmetric bifurcating autoregressive processes, under suitable assumptions on the driven noise of the process. Our investigation relies on the moderate deviation principle for martingales.

Adaptive trimmed likelihood estimation in regression

Tadeusz Bednarski, Brenton R. Clarke, Daniel Schubert (2010)

Discussiones Mathematicae Probability and Statistics

Similarity:

In this paper we derive an asymptotic normality result for an adaptive trimmed likelihood estimator of regression starting from initial high breakdown point robust regression estimates. The approach leads to quickly and easily computed robust and efficient estimates for regression. A highlight of the method is that it tends automatically in one algorithm to expose the outliers and give least squares estimates with the outliers removed. The idea is to begin with a rapidly computed consistent...

Trimmed Estimators in Regression Framework

Tomáš Jurczyk (2011)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

Similarity:

From the practical point of view, regression analysis and its least squares method is clearly one of the most used techniques in statistics. Unfortunately, if some problem is present in the data (for example, contamination), classical methods are no longer suitable. Many methods have been proposed to overcome these problematic situations. In this contribution we focus on a special kind of method based on trimming. There exist several approaches which use trimming off part...
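The trimming idea sketched above can be illustrated by a minimal least trimmed squares (LTS) fit. This is not the estimator from the paper, only an illustrative single-start concentration-step scheme (a simplification of the FAST-LTS algorithm): refit ordinary least squares on the h observations with the smallest squared residuals under the current fit, and repeat. The function name `lts_fit` and the toy data are my own illustration.

```python
import numpy as np

def lts_fit(X, y, h, n_iter=20):
    """Illustrative least trimmed squares via concentration (C-)steps:
    repeatedly refit OLS on the h observations with smallest squared
    residuals under the current fit (single start, no subsampling)."""
    Xa = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)  # OLS starting fit
    for _ in range(n_iter):
        r2 = (y - Xa @ beta) ** 2
        keep = np.argsort(r2)[:h]  # indices of the h smallest squared residuals
        beta, *_ = np.linalg.lstsq(Xa[keep], y[keep], rcond=None)
    return beta

# Example: a clean linear trend y = 1 + 2x with two gross outliers.
x_obs = np.arange(10, dtype=float)
y_obs = 1.0 + 2.0 * x_obs
y_obs[[3, 7]] += 50.0  # contaminate two observations
beta = lts_fit(x_obs, y_obs, h=8)
```

With h = 8 the two contaminated points are trimmed away and the clean intercept and slope are recovered, which is exactly the robustness to contamination that classical least squares lacks.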

Least squares estimator consistency: a geometric approach

João Tiago Mexia, João Lita da Silva (2006)

Discussiones Mathematicae Probability and Statistics

Similarity:

Consistency of the least squares estimator (LSE) in linear models is studied assuming that the error vector has radial symmetry. Generalized polar coordinates and algebraic assumptions on the design matrix are considered in the results that are established.

On the Equivalence between Orthogonal Regression and Linear Model with Type-II Constraints

Sandra Donevska, Eva Fišerová, Karel Hron (2011)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

Similarity:

Orthogonal regression, also known as the total least squares method, regression with errors-in-variables, or as a calibration problem, analyzes the linear relationship between variables. Compared to standard regression, both the dependent and explanatory variables account for measurement errors. In this paper we briefly discuss the orthogonal least squares, the least squares and the maximum likelihood methods for estimation of the orthogonal regression line. We also show that all mentioned...
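As a minimal sketch of the orthogonal (total least squares) regression line described above, assuming two-dimensional data: the fitted line passes through the centroid and points along the top right singular vector of the centered data matrix, which minimizes the sum of squared perpendicular distances. The function name `orthogonal_regression_line` is my own illustration, not notation from the paper.

```python
import numpy as np

def orthogonal_regression_line(x, y):
    """Fit a line minimizing orthogonal (perpendicular) distances.

    Returns (centroid, direction): a point on the line and its unit
    direction vector (top right singular vector of the centered data).
    """
    pts = np.column_stack([x, y]).astype(float)
    centroid = pts.mean(axis=0)
    # SVD of the centered data; the first right singular vector spans
    # the direction of maximal variance, i.e. the TLS line.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

# Example: noiseless points on y = 2x + 1 are recovered exactly.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
c, d = orthogonal_regression_line(x, y)
slope = d[1] / d[0]  # slope is invariant to the sign of the direction
```

Unlike ordinary least squares, which minimizes vertical distances only, this treats errors in both coordinates symmetrically, matching the errors-in-variables viewpoint of the abstract.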

Consistency of trigonometric and polynomial regression estimators

Waldemar Popiński (1998)

Applicationes Mathematicae

Similarity:

The problem of nonparametric regression function estimation is considered using the complete orthonormal system of trigonometric functions or Legendre polynomials e_k, k = 0, 1, ..., for the observation model y_i = f(x_i) + η_i, i = 1, ..., n, where the η_i are independent random variables with zero mean value and finite variance, and the observation points x_i ∈ [a, b], i = 1, ..., n, form a random sample from a distribution with density ϱ ∈ L¹[a, b]. Sufficient and necessary conditions are obtained for consistency in the sense of the errors...

Least-squares trigonometric regression estimation

Waldemar Popiński (1999)

Applicationes Mathematicae

Similarity:

The problem of nonparametric function fitting using the complete orthogonal system of trigonometric functions e_k, k = 0, 1, 2, ..., for the observation model y_i = f(x_{in}) + η_i, i = 1, ..., n, is considered, where the η_i are uncorrelated random variables with zero mean value and finite variance, and the observation points x_{in} ∈ [0, 2π], i = 1, ..., n, are equidistant. Conditions for convergence of the mean-square prediction error (1/n) Σ_{i=1}^n E(f(x_{in}) − f̂_{N(n)}(x_{in}))², the integrated mean-square error E‖f − f̂_{N(n)}‖², and the pointwise mean-square error E(f(x) − f̂_{N(n)}(x))² of the estimator f̂_{N(n)}(x) = Σ_{k=0}^{N(n)} ĉ_k e_k(x) for f ∈...
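The series estimator f̂_{N(n)} above can be sketched concretely: build the design matrix of the first few trigonometric basis functions at the observation points and obtain the coefficients ĉ_k by least squares. This is only a minimal illustration of the construction, with a particular (orthonormal on [0, 2π]) normalization chosen by me; the function names `trig_design` and `trig_ls_fit` are my own.

```python
import numpy as np

def trig_design(x, N):
    """Design matrix of the trigonometric basis, orthonormal on [0, 2*pi]:
    e_0 = 1/sqrt(2*pi), then cos(kx)/sqrt(pi) and sin(kx)/sqrt(pi), k <= N."""
    cols = [np.full_like(x, 1.0 / np.sqrt(2 * np.pi))]
    for k in range(1, N + 1):
        cols.append(np.cos(k * x) / np.sqrt(np.pi))
        cols.append(np.sin(k * x) / np.sqrt(np.pi))
    return np.column_stack(cols)

def trig_ls_fit(x, y, N):
    """Least-squares coefficients c_hat, so f_hat(x) = sum_k c_hat_k e_k(x)."""
    A = trig_design(x, N)
    c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c_hat

# Noiseless check: f(x) = 3 + cos(x) lies in the span for N >= 1,
# so the fit at equidistant points reproduces the data.
n = 64
x = 2 * np.pi * np.arange(n) / n
y = 3.0 + np.cos(x)
c_hat = trig_ls_fit(x, y, N=2)
f_hat = trig_design(x, 2) @ c_hat
```

With noisy η_i the same code gives the estimator whose prediction, integrated, and pointwise mean-square errors the paper analyzes as N(n) grows with n.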

Model selection for regression on a random design

Yannick Baraud (2002)

ESAIM: Probability and Statistics

Similarity:

We consider the problem of estimating an unknown regression function when the design is random with values in ℝ^k. Our estimation procedure is based on model selection and does not rely on any prior information on the target function. We start with a collection of linear functional spaces and build, on a data selected space among this collection, the least-squares estimator. We study the performance of an estimator which is obtained by modifying this least-squares estimator on a set of...