Displaying similar documents to “Instrumental weighted variables under heteroscedasticity. Part I – Consistency”

Estimator selection in the gaussian setting

Yannick Baraud, Christophe Giraud, Sylvie Huet (2014)

Annales de l'I.H.P. Probabilités et statistiques

Similarity:

We consider the problem of estimating the mean f of a Gaussian vector Y with independent components of common unknown variance σ². Our estimation procedure is based on estimator selection. More precisely, we start with an arbitrary and possibly infinite collection 𝔽 of estimators of f based on Y and, with the same data Y, aim at selecting an estimator among 𝔽 with the smallest Euclidean risk. No assumptions on the estimators are made and their dependencies with respect to Y may be unknown....

Rank theory approach to ridge, LASSO, preliminary test and Stein-type estimators: Comparative study

A. K. Md. Ehsanes Saleh, Radim Navrátil (2018)

Kybernetika

Similarity:

In the development of efficient predictive models, the key is to identify suitable predictors for a given linear model. For the first time, this paper provides a comparative study of ridge regression, LASSO, preliminary test and Stein-type estimators based on the theory of rank statistics. Under the orthonormal design matrix of a given linear model, we find that the rank based ridge estimator outperforms the usual rank estimator, restricted R-estimator, rank-based LASSO, preliminary...
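The closed forms behind such comparisons under an orthonormal design are easy to state: ridge shrinks every least-squares coordinate proportionally, while the LASSO soft-thresholds it. A minimal sketch of the classical (non-rank) versions, with function names of our own choosing:

```python
import numpy as np

def ridge_orthonormal(beta_ls, lam):
    """Ridge under an orthonormal design: each least-squares
    coordinate is shrunk proportionally toward zero."""
    return beta_ls / (1.0 + lam)

def lasso_orthonormal(beta_ls, lam):
    """LASSO under an orthonormal design reduces to coordinatewise
    soft-thresholding of the least-squares estimate."""
    return np.sign(beta_ls) * np.maximum(np.abs(beta_ls) - lam, 0.0)

beta_ls = np.array([3.0, -0.5, 0.2])
print(ridge_orthonormal(beta_ls, lam=1.0))  # proportional shrinkage of all coordinates
print(lasso_orthonormal(beta_ls, lam=1.0))  # sparse: small coordinates set exactly to 0
```

The paper's contribution is the rank-statistic analogue of these estimators; the sketch above only illustrates the shrinkage geometry being compared.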

Orthogonal series regression estimation under long-range dependent errors

Waldemar Popiński (2001)

Applicationes Mathematicae

Similarity:

This paper is concerned with general conditions for convergence rates of nonparametric orthogonal series estimators of the regression function. The estimators are obtained by the least squares method on the basis of an observation sample Y_i = f(X_i) + η_i, i = 1,...,n, where the X_i ∈ A ⊂ ℝ^d are independently chosen from a distribution with density ϱ ∈ L¹(A) and the η_i are zero-mean stationary errors with long-range dependence. Convergence rates of the error n⁻¹ Σ_{i=1}^n (f(X_i) − f̂_N(X_i))² for the estimator f̂_N(x) = Σ_{k=1}^N ĉ_k e_k(x), constructed using an orthonormal system...
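The least-squares series estimator f̂_N can be sketched as follows. This is a toy version with i.i.d. errors and a cosine basis (both our assumptions; the paper's interest is the long-range dependent case):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 200, 8                      # sample size and number of basis functions

def e(k, x):
    # Cosine basis on [0, 1], orthonormal w.r.t. Lebesgue measure
    return np.ones_like(x) if k == 0 else np.sqrt(2) * np.cos(np.pi * k * x)

f = lambda x: np.sin(2 * np.pi * x)          # "unknown" regression function
X = rng.uniform(0.0, 1.0, n)
Y = f(X) + 0.1 * rng.standard_normal(n)      # i.i.d. errors here, for simplicity

# Least-squares fit of the coefficients c_k in f_hat_N(x) = sum_k c_k e_k(x)
B = np.column_stack([e(k, X) for k in range(N)])
c_hat, *_ = np.linalg.lstsq(B, Y, rcond=None)

f_hat = lambda x: sum(c_hat[k] * e(k, x) for k in range(N))
empirical_error = np.mean((f(X) - f_hat(X)) ** 2)
```

The quantity `empirical_error` is exactly the error criterion n⁻¹ Σ (f(X_i) − f̂_N(X_i))² whose convergence rate the paper studies.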

Existence, consistency and computer simulation for selected variants of minimum distance estimators

Václav Kůs, Domingo Morales, Jitka Hrabáková, Iva Frýdlová (2018)

Kybernetika

Similarity:

The paper deals with sufficient conditions for the existence of a general approximate minimum distance estimator (AMDE) of a probability density function f₀ on the real line. It shows that the AMDE always exists when the bounded φ-divergence, Kolmogorov, Lévy, Cramér, or discrepancy distance is used. Consequently, an n^{-1/2} consistency rate in any bounded φ-divergence is established for Kolmogorov, Lévy, and discrepancy estimators under the condition that the degree of variations of the corresponding...
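A minimum distance estimator of this kind can be sketched for the simplest case: a normal location family fitted by minimizing the Kolmogorov distance between the empirical CDF and the model CDF over a grid (the grid, family, and function names are our illustrative choices, not the paper's general AMDE setting):

```python
import math
import numpy as np

def norm_cdf(x, mu, sigma=1.0):
    # Standard normal CDF shifted to mean mu, via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def kolmogorov_distance(sample, mu):
    """Sup-norm distance between the empirical CDF and N(mu, 1)."""
    x = np.sort(sample)
    n = len(x)
    F = np.array([norm_cdf(v, mu) for v in x])
    upper = np.arange(1, n + 1) / n   # ECDF value just after each jump
    lower = np.arange(0, n) / n       # ECDF value just before each jump
    return max(np.max(np.abs(F - upper)), np.max(np.abs(F - lower)))

def min_distance_estimate(sample, grid):
    """Approximate minimum Kolmogorov distance estimate over a grid."""
    return min(grid, key=lambda mu: kolmogorov_distance(sample, mu))

rng = np.random.default_rng(1)
sample = rng.normal(2.0, 1.0, 500)
mu_hat = min_distance_estimate(sample, np.linspace(0.0, 4.0, 401))
```

The grid search stands in for the "approximate" minimization in the AMDE definition; the paper's results concern when such minimizers exist and how fast they converge.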

Orthogonal series estimation of band-limited regression functions

Waldemar Popiński (2014)

Applicationes Mathematicae

Similarity:

The problem of nonparametric function fitting using the complete orthogonal system of Whittaker cardinal functions s_k, k = 0, ±1,..., for the observation model y_j = f(u_j) + η_j, j = 1,...,n, is considered, where f ∈ L²(ℝ) ∩ BL(Ω) for Ω > 0 is a band-limited function, the u_j are independent random variables uniformly distributed in the observation interval [-T,T], and the η_j are uncorrelated or correlated random variables with zero mean and finite variance, independent of the observation points. Conditions...
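A fit in the truncated Whittaker cardinal (sinc) system can be sketched as below; the bandwidth Ω = π, the truncation level, and the test function are our assumptions for illustration:

```python
import numpy as np

# Whittaker cardinal functions for bandwidth Omega = pi are shifted sincs:
# s_k(u) = sinc(u - k), with np.sinc(x) = sin(pi x) / (pi x).
def s(k, u):
    return np.sinc(u - k)

rng = np.random.default_rng(2)
n, T, K = 300, 8.0, 8            # sample size, window half-width, truncation
u = rng.uniform(-T, T, n)        # uniform observation points in [-T, T]
f = lambda t: np.sinc(t)         # a band-limited test function (it equals s_0)
y = f(u) + 0.05 * rng.standard_normal(n)   # uncorrelated zero-mean errors

# Least-squares fit of the coefficients of s_k, k = -K, ..., K
ks = np.arange(-K, K + 1)
B = np.column_stack([s(k, u) for k in ks])
c_hat, *_ = np.linalg.lstsq(B, y, rcond=None)

f_hat = lambda t: sum(c * s(k, t) for c, k in zip(c_hat, ks))
in_sample_mse = np.mean((f(u) - f_hat(u)) ** 2)
```

Since the test function lies exactly in the span of the truncated system, the in-sample error here reflects only the noise; the paper's conditions govern the general band-limited case.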

L₁-penalization in functional linear regression with subgaussian design

Vladimir Koltchinskii, Stanislav Minsker (2014)

Journal de l’École polytechnique — Mathématiques

Similarity:

We study functional regression with random subgaussian design and real-valued response. The focus is on the problems in which the regression function can be well approximated by a functional linear model with the slope function being “sparse” in the sense that it can be represented as a sum of a small number of well separated “spikes”. This can be viewed as an extension of now classical sparse estimation problems to the case of infinite dictionaries. We study an estimator of the regression...

Instrumental weighted variables under heteroscedasticity. Part II – Numerical study

Jan Ámos Víšek (2017)

Kybernetika

Similarity:

Results of a numerical study of the behavior of the instrumental weighted variables estimator – in a competition with two other estimators – are presented. The study was performed under various frameworks (homoscedasticity/heteroscedasticity, several levels and types of contamination of data, fulfilled/broken orthogonality condition). At the beginning the optimal values of eligible parameters of the estimators in question were empirically established. It was done under the various sizes of...

M-estimators of structural parameters in pseudolinear models

Friedrich Liese, Igor Vajda (1999)

Applications of Mathematics

Similarity:

Real-valued M-estimators θ̂_n := arg min (1/n) Σ_{i=1}^n ρ(Y_i − τ(θ)) in a statistical model with observations Y_i ∼ F_{θ₀} are replaced by ℝ^p-valued M-estimators β̂_n := arg min (1/n) Σ_{i=1}^n ρ(Y_i − τ(u(z_i^T β))) in a new model with observations Y_i ∼ F_{u(z_i^T β₀)}, where the z_i ∈ ℝ^p are regressors, β₀ ∈ ℝ^p is a structural parameter and u: ℝ → ℝ is a structural function of the new model. Sufficient conditions for the consistency of β̂_n are derived, motivated by the sufficiency conditions for the simpler "parent estimator" θ̂_n. The result is a general method of consistent estimation in a class of nonlinear (pseudolinear) statistical...
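The structural M-estimation criterion can be sketched for a one-dimensional β, with an illustrative tanh structural function and a quadratic ρ (all our choices; the estimator name and grid search are likewise ours, not the paper's):

```python
import numpy as np

def m_estimate(y, z, u, rho, grid):
    """Grid search for the pseudolinear M-estimate
    beta_hat = argmin (1/n) * sum_i rho(y_i - u(z_i * beta)).
    One-dimensional beta here, purely for illustration."""
    def objective(beta):
        return np.mean(rho(y - u(z * beta)))
    return min(grid, key=objective)

rng = np.random.default_rng(3)
n = 400
z = rng.uniform(-2, 2, n)        # scalar regressors
u = np.tanh                      # assumed structural function
beta0 = 1.5                      # true structural parameter
y = u(z * beta0) + 0.1 * rng.standard_normal(n)

rho = lambda r: r ** 2           # least-squares choice of rho
beta_hat = m_estimate(y, z, u, rho, np.linspace(0.0, 3.0, 301))
```

Replacing `rho` with a robust loss (e.g. Huber's) gives the genuinely robust variants whose consistency the paper's sufficient conditions address.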

Compact hypothesis and extremal set estimators

João Tiago Mexia, Pedro Corte Real (2003)

Discussiones Mathematicae Probability and Statistics

Similarity:

In extremal estimation theory the estimators are local or absolute extremes of functions defined on the Cartesian product of the parameter space and the sample space. Assuming that these functions converge uniformly, in a convenient stochastic way, to a limit function g, set estimators for the set ∇ of absolute maxima (minima) of g are obtained under the compactness assumption that ∇ is contained in a known compact U. A strongly consistent test is presented for this assumption. Moreover, when...

On orthogonal series estimation of bounded regression functions

Waldemar Popiński (2001)

Applicationes Mathematicae

Similarity:

The problem of nonparametric estimation of a bounded regression function f ∈ L²([a,b]^d), [a,b] ⊂ ℝ, d ≥ 1, using an orthonormal system of functions e_k, k = 1,2,..., is considered in the case when the observations follow the model Y_i = f(X_i) + η_i, i = 1,...,n, where the X_i and η_i are i.i.d. copies of independent random variables X and η, respectively, the distribution of X has density ϱ, and η has mean zero and finite variance. The estimators are constructed by proper truncation of the function f̂(x) = Σ_{k=1}^{N(n)} ĉ_k e_k(x), where the coefficients ĉ₁, ..., ĉ_{N(n)}...

Complete f-moment convergence for weighted sums of WOD arrays with statistical applications

Xi Chen, Xinran Tao, Xuejun Wang (2023)

Kybernetika

Similarity:

Complete f-moment convergence is much more general than complete convergence and complete moment convergence. In this work, we mainly investigate complete f-moment convergence for weighted sums of widely orthant dependent (WOD, for short) arrays. A general result on complete f-moment convergence is obtained under some suitable conditions, which generalizes the corresponding one in the literature. As an application, we establish the complete consistency for the weighted linear estimator...

Optimal estimators in learning theory

V. N. Temlyakov (2006)

Banach Center Publications

Similarity:

This paper is a survey of recent results on some problems of supervised learning in the setting formulated by Cucker and Smale. Supervised learning, or learning-from-examples, refers to a process that builds, on the basis of available data of inputs x_i and outputs y_i, i = 1,...,m, a function that best represents the relation between the inputs x ∈ X and the corresponding outputs y ∈ Y. The goal is to find an estimator f_z, on the basis of the given data z := ((x₁,y₁), ..., (x_m,y_m)), that approximates well the regression function...