A Minimal Complete Class Theorem for Decision Problems Where the Parameter Space Contains Only Finitely Many Points.
It was recently shown that all estimators that are locally best in the relative interior of the parameter set, together with their limits, constitute a complete class in linear estimation, both unbiased and biased. However, not all of these limits are admissible. A sufficient condition for admissibility of a limit was given by the author (1986) for the case of unbiased estimation in a linear model with the natural parameter space. This paper extends this result to the general linear model and to biased...
Consider the observation vector in the usual linear model, with expectation determined by the regression parameters and a covariance matrix known up to a multiplicative scalar, possibly singular. A linear statistic is called an invariant estimator of a parametric function if its MSE depends on the unknown parameters only through the value of that function. It is shown that a linear statistic is an admissible invariant estimator of the parametric function if and only if it is a BLUE of the function when the function is estimable with zero variance, and otherwise it is of a specific form built from an arbitrary BLUE. This result is used in...
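For orientation only (the symbols y, X, β, V, f below are generic illustrations, not taken from the abstract): in the non-singular special case, with E(y) = Xβ and Cov(y) = σ²V, V nonsingular, the BLUE of an estimable function f'β is the generalized least squares value
\[
f'\hat\beta, \qquad \hat\beta = (X' V^{-1} X)^{-} X' V^{-1} y ,
\]
which does not depend on the choice of generalized inverse. When V may be singular, one standard device is to work with V + XX' and generalized inverses in place of V^{-1}; the zero-variance case in the abstract corresponds to estimable functions whose BLUE has variance zero.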
Estimation in a truncated parameter space is an important problem in statistical inference, because the frequently used criterion of unbiasedness is of no use here: in general no unbiased estimator exists. Other optimality criteria, such as admissibility and minimaxity, therefore have to be considered, among others. In this paper we consider a subclass of the exponential families of distributions. The Bayes estimator of a lower-bounded scale parameter, under the squared-log error loss function with...
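For context, a minimal sketch of the loss referred to (assuming the usual form of the squared-log error loss; the paper's exact weighting may differ): with
\[
L(\theta,\delta) = (\ln\delta - \ln\theta)^2 ,
\]
the posterior expected loss is minimized in \(\ln\delta\) at \(E[\ln\theta \mid x]\), so the Bayes estimator is
\[
\delta_B(x) = \exp\{\,E[\ln\theta \mid x]\,\},
\]
whenever this posterior expectation exists; in the lower-bounded case the expectation would be taken under a prior supported on the truncated parameter space.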
In the paper an explicit expression for the Bayes invariant quadratic unbiased estimate of a linear function of the variance components is presented for the mixed linear model with unknown variance components, in the normal case. The matrices appearing in the model may be singular. Applications to two examples from the analysis of variance are given.
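As generic background (the mixed-model notation below is illustrative, not reconstructed from the paper): for y = Xβ + ε with Cov(y) = Σᵢ θᵢVᵢ, a quadratic estimator y'Ay is invariant when AX = 0, and it is then unbiased for Σᵢ cᵢθᵢ exactly when tr(AVᵢ) = cᵢ for all i, because
\[
E(y'Ay) = \beta' X' A X \beta + \sum_i \theta_i\, \mathrm{tr}(A V_i) = \sum_i \theta_i\, \mathrm{tr}(A V_i) \quad \text{when } AX = 0 .
\]
A Bayes invariant quadratic unbiased estimate then selects, within this class, a matrix A that is optimal with respect to the prior placed on the variance components.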
In this paper, we study the admissibility of linear estimators of the regression coefficient in a linear model under the extended balanced loss function (EBLF). Necessary and sufficient conditions for a linear estimator to be admissible are obtained in the homogeneous and non-homogeneous classes, respectively. Furthermore, we show that an admissible linear estimator under the EBLF is a convex combination of admissible linear estimators under the residual sum of squares loss and under the quadratic loss function.
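For reference, a common form of Zellner-type balanced loss (the extended version studied in the paper may add weight matrices, so this is only a sketch) is
\[
L_w(\delta;\beta,y) = w\,(y - X\delta)'(y - X\delta) + (1-w)\,(\delta - \beta)' X'X (\delta - \beta), \qquad 0 \le w \le 1 ,
\]
a convex combination of a residual-sum-of-squares term and a quadratic estimation-error term; the convex-combination structure of the admissible estimators described above mirrors this decomposition of the loss.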
The problem of estimating the normal distribution function and the density at a point is considered. The traditional unbiased estimators are shown to be of a Bayes nature, and admissibility of the related generalized Bayes procedures is proved. Inadmissibility of the unbiased density estimator is also demonstrated.
The problem considered is that of estimating the size (N) of a closed population under three sampling schemes admitting unbiased estimation of N. It is proved that, for each of these schemes, the uniformly minimum variance unbiased estimator (UMVUE) of N is inadmissible under the squared error loss function. For the first scheme, the UMVUE is also the maximum likelihood estimator (MLE) of N. For the second scheme and a special case of the third, it is shown respectively that an MLE and an estimator...
We consider evaluating improper priors in a formal Bayes setting according to the consequences of their use. Consider a class of functions on the parameter space and the problem of estimating elements of this class under quadratic loss. If the formal Bayes estimator of every function in the class is admissible, then the prior is strongly admissible with respect to that class. Eaton’s method for establishing strong admissibility is based on studying the stability properties of a particular Markov chain associated with the inferential...
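To fix ideas (generic notation, not taken from the abstract): with model density p(x | θ) and a σ-finite, possibly improper prior ν, the formal Bayes estimator of a function g under quadratic loss is the formal posterior mean
\[
\delta_g(x) = \frac{\int g(\theta)\, p(x \mid \theta)\, \nu(d\theta)}{\int p(x \mid \theta)\, \nu(d\theta)} ,
\]
defined whenever the denominator is finite; strong admissibility of ν with respect to a class of functions then means that every such δ_g, for g in the class, is admissible.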
From an optimality point of view, the solution of a decision problem is related to classes of optimal strategies (admissible, Bayes, etc.), which in turn are closely related to boundaries of the risk set S, such as the lower boundary, the Bayes boundary, and the positive Bayes boundary. In this paper we present some results concerning invariance properties of such boundaries when the set S is transformed by means of a continuous, monotonically increasing function W.
A class of minimax predictors of random variables with a multinomial or a multivariate hypergeometric distribution is determined in the case where the sample size is a random variable with an unknown distribution. It is also proved that the usual predictors, which are minimax when the sample size is fixed, are no longer minimax, but remain admissible, when the sample size is an ancillary statistic with an unknown distribution.