### A most bias-robust linear estimate of the scale parameter of the exponential distribution


In this paper we derive an asymptotic normality result for an adaptive trimmed likelihood estimator of regression, starting from initial high breakdown point robust regression estimates. The approach leads to quickly and easily computed robust and efficient estimates for regression. A highlight of the method is that, in a single algorithm, it tends automatically to expose the outliers and give least squares estimates with the outliers removed. The idea is to begin with a rapidly computed consistent robust...

In small to moderate sample sizes it is important to make use of all the data when there are no outliers, for reasons of efficiency. It is equally important to guard against the possibility that there may be single or multiple outliers which can have disastrous effects on normal theory least squares estimation and inference. The purpose of this paper is to describe and illustrate the use of an adaptive regression estimation algorithm which can be used to highlight outliers, either single or multiple...
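The trimmed-fit idea behind these two abstracts can be illustrated in a toy one-dimensional setting. The sketch below is my own illustration, not the authors' algorithm: it starts from a robust initial estimate (the median), retains the h observations with the smallest squared residuals, and returns the least squares fit (here simply the mean) of the retained points together with the flagged outliers.

```python
import statistics

def adaptive_trimmed_mean(x, h):
    """Toy location analogue of trimmed-likelihood fitting: begin with a
    robust initial estimate, keep the h points closest to it in squared
    residual, and apply least squares (the mean) to the retained points.
    Returns (estimate, flagged_outliers)."""
    center = statistics.median(x)          # robust starting value
    order = sorted(x, key=lambda v: (v - center) ** 2)
    kept, flagged = order[:h], order[h:]   # trim the h largest residuals away
    return sum(kept) / h, flagged
```

With `x = [1, 2, 3, 100]` and `h = 3`, the gross outlier `100` is flagged and the estimate is the mean of the clean points, illustrating how trimming yields "least squares with the outliers removed". The adaptive element of the actual method (choosing how much to trim from the data) is not sketched here.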

A homogeneous Poisson process (N(t), t ≥ 0) with the intensity function m(t) = θ is observed on the interval [0,T]. The problem consists in estimating θ while balancing the LINEX loss due to estimation error against the cost of sampling, which depends linearly on T. The optimal T is derived when the prior distribution of θ is not uniquely specified.
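As background for the loss function used here: under LINEX loss L(d − θ) = exp(a(d − θ)) − a(d − θ) − 1, the Bayes estimate is d* = −(1/a) log E[exp(−aθ) | data]. The sketch below computes it for the conjugate Gamma prior case (the Gamma prior and its parameters are my illustrative choice; the paper works with a not uniquely specified prior):

```python
import math

def linex_bayes_estimate(n, T, a, alpha, beta):
    """Bayes estimate of the Poisson intensity theta under LINEX loss,
    given N(T) = n events observed on [0, T] and a Gamma(alpha, beta)
    prior (rate parametrization). The posterior is Gamma(alpha+n, beta+T),
    whose Laplace transform gives the closed form below."""
    assert a != 0 and a > -(beta + T)  # the transform must exist at -a
    return (alpha + n) / a * math.log(1.0 + a / (beta + T))
```

As a → 0 the estimate approaches the posterior mean (α + n)/(β + T); for a > 0 it is shrunk below the posterior mean, reflecting the asymmetric penalty LINEX places on overestimation.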

An upper bound for the Kolmogorov distance between the posterior distributions in terms of that between the prior distributions is given. For some likelihood functions the inequality is sharp. Applications to assessing Bayes robustness are presented.
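The quantities involved can be computed numerically. The sketch below (the Beta/binomial example and all names are my illustration, not the paper's setting) discretizes two Beta priors for a binomial parameter, updates both on the same data, and compares the Kolmogorov distance between the priors with that between the resulting posteriors:

```python
import math

def beta_pdf(x, a, b):
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(logc + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def kolmogorov_distance(pdf1, pdf2, grid):
    """Sup-norm distance between the CDFs, via Riemann-sum integration."""
    h = grid[1] - grid[0]
    c1 = c2 = d = 0.0
    for x in grid:
        c1 += pdf1(x) * h
        c2 += pdf2(x) * h
        d = max(d, abs(c1 - c2))
    return d

grid = [i / 10000 for i in range(1, 10000)]
# Observing k successes in n binomial trials updates Beta(a, b)
# to Beta(a + k, b + n - k).
k, n = 7, 10
d_prior = kolmogorov_distance(lambda x: beta_pdf(x, 2, 2),
                              lambda x: beta_pdf(x, 3, 2), grid)
d_post = kolmogorov_distance(lambda x: beta_pdf(x, 2 + k, 2 + n - k),
                             lambda x: beta_pdf(x, 3 + k, 2 + n - k), grid)
```

In this particular example the posterior distance comes out smaller than the prior distance, which is the kind of robustness comparison the bound makes precise; the paper's inequality, of course, holds generally rather than for one numerical instance.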

Several new examples of divergences, called blended divergences, have emerged in the recent literature. Mostly these examples are constructed by modifying or parametrizing well-known phi-divergences. The newly introduced parameter is often called the blending parameter. In this paper we present a compact theory of blended divergences which provides a generally applicable method for finding new classes of divergences containing any two divergences ${D}_{0}$ and ${D}_{1}$ given in advance. Several examples...
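One simple way to obtain a family containing two given divergences as endpoints is a convex combination indexed by the blending parameter; the paper's construction may be more general, so the sketch below (with KL and chi-square as the illustrative endpoints ${D}_{0}$ and ${D}_{1}$) should be read only as the most basic instance of the idea:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def chi2(p, q):
    """Pearson chi-square divergence between discrete distributions."""
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))

def blended(p, q, lam, d0=kl, d1=chi2):
    """D_lam = (1-lam)*D_0 + lam*D_1 for lam in [0, 1]: each member is
    again a divergence (nonnegative, zero iff p == q), and the family
    contains D_0 and D_1 at the endpoints lam = 0 and lam = 1."""
    return (1 - lam) * d0(p, q) + lam * d1(p, q)
```

Since both endpoints are phi-divergences, each blend corresponds to the phi-function (1 − λ)φ₀ + λφ₁, which stays convex with φ(1) = 0, so the defining properties are preserved for every λ.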

Employing a recently derived asymptotic representation of the least trimmed squares estimator, combinations of forecasts under constraints are studied. Under the assumption that the individual forecasts are unbiased, it is shown that the combination without an intercept and with the constraint that the estimated regression coefficients sum to one is better than the others. A numerical example is included to support the theoretical conclusions.
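For two forecasts, the sum-to-one, no-intercept combination can be estimated by substituting the constraint into the regression, which reduces it to a simple one-parameter fit. The sketch below uses ordinary least squares purely for illustration; the paper's point is that this combination is estimated robustly, by the least trimmed squares estimator, which would replace the closed form here:

```python
def combine_two_forecasts(y, f1, f2):
    """Fit y ~ w*f1 + (1 - w)*f2 (no intercept, weights summing to one).
    Substituting the constraint gives the simple regression
        (y - f2) = w * (f1 - f2) + error,
    so w has a closed form. OLS is used here only as a sketch of the
    constrained combination, not the paper's robust LTS estimate."""
    num = sum((yi - b) * (a - b) for yi, a, b in zip(y, f1, f2))
    den = sum((a - b) ** 2 for a, b in zip(f1, f2))
    w = num / den
    return w, 1 - w
```

If the target is exactly a 0.7/0.3 mixture of the two forecasts, the fitted weights recover 0.7 and 0.3 and sum to one by construction.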

A robust version of Ordinary Least Squares that weights the order statistics of the squared residuals (rather than the squared residuals directly) is recalled and its properties are studied. The existence of a solution of the corresponding extremal problem and its consistency under heteroscedasticity are proved.
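The distinction drawn in the abstract — weighting the order statistics of the squared residuals rather than the observations — is what makes the criterion robust: after sorting, the largest squared residuals can be given small or zero weights regardless of which observations produced them. A minimal sketch of the criterion (names are mine):

```python
def lws_objective(residuals, weights):
    """Least-weighted-squares-type criterion: the i-th weight is attached
    to the i-th ORDER STATISTIC of the squared residuals, so the trailing
    (small or zero) weights always hit the largest residuals."""
    r2 = sorted(r * r for r in residuals)
    return sum(w * s for w, s in zip(weights, r2))
```

With all weights equal to one this reduces to the ordinary least squares criterion; with trailing zeros it reduces to a least trimmed squares criterion, and an estimator based on it minimizes this objective over the regression parameters.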