Displaying similar documents to “Least empirical risk procedures in statistical inference”

Estimation of nuisance parameters for inference based on least absolute deviations

Wojciech Niemiro (1995)

Applicationes Mathematicae

Similarity:

Statistical inference procedures based on least absolute deviations involve estimates of a matrix which plays the role of a multivariate nuisance parameter. To estimate this matrix, we use kernel smoothing. We show consistency and obtain bounds on the rate of convergence.
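The nuisance matrix in least-absolute-deviations inference is typically tied to the error density at zero, and a kernel-smoothed plug-in of that kind can be sketched as below. The Gaussian kernel, the rule-of-thumb bandwidth, and the covariance form (2f(0))^{-2} (E[xx'])^{-1} are illustrative assumptions, not the estimator studied in the paper.

```python
import numpy as np

def lad_nuisance_estimate(X, residuals, bandwidth=None):
    """Kernel plug-in for the LAD asymptotic covariance (illustrative sketch).

    X         : (n, p) design matrix
    residuals : (n,) residuals from a least-absolute-deviations fit
    """
    n = X.shape[0]
    if bandwidth is None:
        # rule-of-thumb bandwidth -- an assumption, not the paper's choice
        bandwidth = 1.06 * np.std(residuals) * n ** (-1 / 5)
    # Gaussian-kernel estimate of the residual density at zero
    f0 = np.mean(np.exp(-0.5 * (residuals / bandwidth) ** 2)) \
         / (bandwidth * np.sqrt(2 * np.pi))
    Q = X.T @ X / n                           # sample second-moment matrix E[xx']
    cov = np.linalg.inv(Q) / (2.0 * f0) ** 2  # plug-in covariance estimate
    return f0, cov
```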

One Bootstrap suffices to generate sharp uniform bounds in functional estimation

Paul Deheuvels (2011)

Kybernetika

Similarity:

We consider, in the framework of multidimensional observations, nonparametric functional estimators, which include, as special cases, the Akaike–Parzen–Rosenblatt kernel density estimators ([1, 18, 20]), and the Nadaraya–Watson kernel regression estimators ([16, 22]). We evaluate the sup-norm, over a given set 𝐈, of the difference between the estimator and a non-random functional centering factor (which reduces to the estimator mean for kernel density estimation). We show that, under...
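A minimal sketch of the two estimators named in the abstract, together with a single bootstrap resample used to mimic the sup-norm deviation over a grid 𝐈, is given below. The Gaussian kernel, the bandwidth handling, and the use of the resampled estimator as a stand-in for the centering factor are assumptions made for illustration only, not the paper's construction.

```python
import numpy as np

def parzen_rosenblatt(sample, grid, h):
    """Kernel density estimator on `grid` with bandwidth h (Gaussian kernel)."""
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def nadaraya_watson(x, y, grid, h):
    """Kernel regression estimator of E[Y | X = t] evaluated on `grid`."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

def one_bootstrap_sup_norm(sample, grid, h, seed=0):
    """One bootstrap resample as a proxy for sup_{t in I} |f_n(t) - centering(t)|
    (illustrative sketch only, not the uniform bound derived in the paper)."""
    rng = np.random.default_rng(seed)
    f_hat = parzen_rosenblatt(sample, grid, h)
    resample = rng.choice(sample, size=sample.size, replace=True)
    f_star = parzen_rosenblatt(resample, grid, h)
    return np.max(np.abs(f_star - f_hat))
```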

Estimation and prediction in regression models with random explanatory variables

Nguyen Bac-Van

Similarity:

The regression model {(X(t), Y(t)); t = 1, ..., n} with random explanatory variable X is transformed by prescribing a partition S_1, ..., S_k of the given domain S of X-values and specifying {X(1), ..., X(n)} ∩ S_i = {X_{i1}, ..., X_{iα(i)}}, i = 1, ..., k. Through the conditioning {α(i) = a(i), i = 1, ..., k; (X_{i1}, ..., X_{iα(i)}; i = 1, ..., k) = (x_{11}, ..., x_{k a(k)})}, the initial model with i.i.d. pairs (X(t), Y(t)), t = 1, ..., n, becomes a conditional fixed-design (x_{11}, ..., x_{k a(k)}) model {Y_{ij}; i = 1, ..., k; j = 1, ..., a(i)}, where the response variables Y_{ij} are independent and distributed according to the mixed conditional distribution Q(·, x_{ij}) of Y given X at the observed value x_{ij}. Afterwards, we investigate the case E_Q(Y′|x) = Σ_{i=1}^k b_i(x) θ_i 1_{S_i}(x), D_Q(Y|x) = Σ_{i=1}^k d_i(x) Σ_i 1_{S_i}(x), which...
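The conditioning step described above, in which the i.i.d. pairs are regrouped by the partition cell containing X(t), can be sketched as follows. The breakpoint-based partition of S and the dictionary output are illustrative choices, not the paper's construction.

```python
import numpy as np

def condition_on_partition(x, y, breakpoints):
    """Group the i.i.d. pairs (X(t), Y(t)) by the partition cell S_i containing X(t).

    Within cell i the responses Y_{i1}, ..., Y_{i a(i)} are then treated as a
    fixed-design sample at the observed values x_{i1}, ..., x_{i a(i)}.
    `breakpoints` define the cells S_1, ..., S_k of the domain S (illustrative).
    """
    cell = np.digitize(x, breakpoints)       # index i of the cell containing x
    groups = {}
    for i in np.unique(cell):
        mask = cell == i
        groups[i] = (x[mask], y[mask])       # a(i) = mask.sum() observations in S_i
    return groups
```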

Minimax Prediction for the Multinomial and Multivariate Hypergeometric Distributions

Alicja Jokiel-Rokita (1998)

Applicationes Mathematicae

Similarity:

A problem of minimax prediction for the multinomial and multivariate hypergeometric distributions is considered. A class of minimax predictors is determined for estimating linear combinations of the unknown parameter and the random variable having the multinomial or the multivariate hypergeometric distribution.

Modified power divergence estimators in normal models – simulation and comparative study

Iva Frýdlová, Igor Vajda, Václav Kůs (2012)

Kybernetika

Similarity:

Point estimators based on minimization of information-theoretic divergences between the empirical and a hypothetical distribution induce a problem when working with continuous families which are measure-theoretically orthogonal to the family of empirical distributions. In this case, the φ-divergence is always equal to its upper bound, and the minimum φ-divergence estimates are trivial. Broniatowski and Vajda [3] proposed several modifications of the minimum divergence rule to provide a solution...
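One generic way around the orthogonality problem mentioned above is to compare the model with the empirical distribution on a finite partition; the sketch below minimizes a Cressie–Read power divergence between binned data and a binned normal model. The binning device, the bin count, and the optimizer are assumptions made for illustration; this is not one of the modifications of Broniatowski and Vajda.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def power_divergence(p, q, lam):
    """Cressie-Read power divergence between discrete distributions p and q."""
    return np.sum(p * ((p / q) ** lam - 1)) / (lam * (lam + 1))

def min_pd_normal(sample, bins=20, lam=1.0):
    """Minimum power-divergence estimate of (mu, sigma) in a normal model
    after binning the data (generic illustration of the binning device)."""
    counts, edges = np.histogram(sample, bins=bins)
    p_emp = counts / counts.sum()

    def objective(theta):
        mu, log_sigma = theta                 # log-parameterization keeps sigma > 0
        cdf = norm.cdf(edges, loc=mu, scale=np.exp(log_sigma))
        q = np.clip(np.diff(cdf), 1e-12, None)  # guard against empty model cells
        return power_divergence(p_emp, q / q.sum(), lam)

    start = np.array([sample.mean(), np.log(sample.std())])
    res = minimize(objective, start, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])
```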