### A Minimal Complete Class Theorem for Decision Problems Where the Parameter Space Contains Only Finitely Many Points

It was recently shown that all estimators which are locally best in the relative interior of the parameter set, together with their limits, constitute a complete class in linear estimation, both unbiased and biased. However, not all of these limits are admissible. A sufficient condition for the admissibility of a limit was given by the author (1986) for unbiased estimation in a linear model with the natural parameter space. This paper extends this result to the general linear model and to biased...

Let $\mathbf{y}$ be the observation vector in the usual linear model with expectation $\mathbf{A}\beta $ and a covariance matrix known up to a multiplicative scalar, possibly singular. A linear statistic ${\mathbf{a}}^{T}\mathbf{y}$ is called an invariant estimator of a parametric function $\phi ={\mathbf{c}}^{T}\beta $ if its MSE depends on $\beta $ only through $\phi $. It is shown that ${\mathbf{a}}^{T}\mathbf{y}$ is admissible invariant for $\phi $ if and only if it is a BLUE of $\phi $, in the case when $\phi $ is estimable with zero variance, and otherwise it is of the form $k\widehat{\phi}$, where $k\in \langle 0,1\rangle$ and $\widehat{\phi}$ is an arbitrary BLUE. This result is used in...
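The bias–variance trade-off behind the admissibility of shrunken BLUEs $k\widehat{\phi}$, $k\in \langle 0,1\rangle$, can be sketched numerically. The design matrix, $\beta$, and $\mathbf{c}$ below are arbitrary illustrative choices (with iid errors, so the BLUE is OLS), not taken from the paper:

```python
import numpy as np

# Illustrative model y = X beta + eps, eps ~ N(0, sigma2 * I); values assumed.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
beta = np.array([1.0, 2.0])
sigma2 = 1.0
c = np.array([0.0, 1.0])          # target phi = c^T beta (here: the slope)
phi = c @ beta

# Under iid errors the BLUE of phi is c^T (X^T X)^{-1} X^T y,
# with variance sigma2 * c^T (X^T X)^{-1} c.
XtX_inv = np.linalg.inv(X.T @ X)
var_blue = sigma2 * c @ XtX_inv @ c

def mse(k):
    # MSE of the shrunken estimator k * phi_hat at this beta:
    # variance term k^2 Var(phi_hat) plus squared bias (k - 1)^2 phi^2.
    return k**2 * var_blue + (k - 1.0)**2 * phi**2

# Each k in [0, 1] trades variance against bias; k outside [0, 1]
# is dominated (here k = 1.2 does worse than k = 1 at every beta).
assert mse(1.2) > mse(1.0)
print(mse(0.0), mse(1.0))   # pure squared bias vs. pure variance
```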

Estimation in a truncated parameter space is one of the most important problems in statistical inference, because the frequently used criterion of unbiasedness is of no help there: in general no unbiased estimator exists. Other optimality criteria, such as admissibility and minimaxity, must therefore be considered. In this paper we consider a subclass of the exponential families of distributions. The Bayes estimator of a lower-bounded scale parameter, under the squared-log error loss function with...

In the paper an explicit expression for the Bayes invariant quadratic unbiased estimate of a linear function of the variance components is presented for the mixed linear model $\mathbf{t}=\mathbf{X}\beta +\epsilon$, $\mathbf{E}\left(\mathbf{t}\right)=\mathbf{X}\beta $, $\mathbf{D}\left(\mathbf{t}\right)={\theta}_{1}{\mathbf{U}}_{1}+{\theta}_{2}{\mathbf{U}}_{2}$, with unknown variance components ${\theta}_{1}$, ${\theta}_{2}$ in the normal case. The matrices ${\mathbf{U}}_{1}$, ${\mathbf{U}}_{2}$ may be singular. Applications to two examples of the analysis of variance are given.
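A concrete instance of such a variance-component structure is the one-way random-effects model, where $\mathbf{U}_1$ is block-diagonal of ones and $\mathbf{U}_2$ is the identity. The sketch below uses the classical translation-invariant quadratic unbiased (ANOVA) estimates for this assumed special case, not the paper's Bayes invariant estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed one-way random model: y_ij = mu + a_i + e_ij,
# a_i ~ N(0, theta1), e_ij ~ N(0, theta2), so D(y) = theta1*U1 + theta2*U2.
a_groups, n_per = 6, 5
theta1, theta2, mu = 2.0, 1.0, 10.0

def anova_estimates(y):
    """Translation-invariant quadratic unbiased (ANOVA) estimates."""
    means = y.mean(axis=1)
    msw = ((y - means[:, None])**2).sum() / (a_groups * (n_per - 1))
    msb = n_per * ((means - means.mean())**2).sum() / (a_groups - 1)
    return (msb - msw) / n_per, msw      # (theta1_hat, theta2_hat)

# Monte Carlo check of unbiasedness of both quadratic estimates.
est = np.array([
    anova_estimates(mu
                    + rng.normal(0, np.sqrt(theta1), (a_groups, 1))
                    + rng.normal(0, np.sqrt(theta2), (a_groups, n_per)))
    for _ in range(4000)
])
print(est.mean(axis=0))   # close to (theta1, theta2) = (2.0, 1.0)
```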

The statistical problem of estimating the normal distribution function and the density at a point is considered. The traditional unbiased estimators are shown to have a Bayes nature, and the admissibility of related generalized Bayes procedures is proved. The inadmissibility of the unbiased density estimator is also demonstrated.

The problem considered is that of estimating the size (N) of a closed population under three sampling schemes admitting unbiased estimation of N. It is proved that for each of these schemes, the uniformly minimum variance unbiased estimator (UMVUE) of N is inadmissible under squared error loss. For the first scheme, the UMVUE is also the maximum likelihood estimator (MLE) of N. For the second scheme and a special case of the third, it is shown respectively that an MLE and an estimator...
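To make the MLE-versus-improved-estimator contrast concrete, here is a simulation for a standard two-sample capture–recapture scheme (an assumed setup, not necessarily one of the paper's three schemes): the Lincoln–Petersen MLE of N versus Chapman's nearly unbiased correction of it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed scheme: mark n1 of N animals, draw n2, observe m marked among them;
# m ~ Hypergeometric(ngood=n1, nbad=N-n1, nsample=n2).
N, n1, n2 = 100, 30, 30
m = rng.hypergeometric(n1, N - n1, n2, size=20000)
m = m[m > 0]                                   # the MLE needs m >= 1

mle = n1 * n2 / m                              # Lincoln-Petersen MLE of N
chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1    # nearly unbiased correction

# The MLE overshoots N on average; Chapman's estimator stays close to N.
print(mle.mean(), chapman.mean())
```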

We consider evaluating improper priors in a formal Bayes setting according to the consequences of their use. Let $\mathit{\Phi}$ be a class of functions on the parameter space and consider estimating elements of $\mathit{\Phi}$ under quadratic loss. If the formal Bayes estimator of every function in $\mathit{\Phi}$ is admissible, then the prior is strongly admissible with respect to $\mathit{\Phi}$. Eaton’s method for establishing strong admissibility is based on studying the stability properties of a particular Markov chain associated with the inferential...

From an optimality point of view, the solution of a decision problem is related to classes of optimal strategies (admissible, Bayes, etc.), which are closely related to boundaries of the risk set S, such as the lower boundary, the Bayes boundary, and the positive Bayes boundary. In this paper we present some results concerning invariance properties of such boundaries when the set is transformed by means of a continuous monotonically increasing function W.
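One such invariance can be checked directly for a finite risk set: applying a continuous strictly increasing map coordinatewise preserves the componentwise dominance order, so the lower (admissible) boundary is unchanged. A minimal sketch, with W = exp chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def pareto_minimal(points):
    """Indices of risk points not strictly dominated componentwise."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Risk points of 40 hypothetical strategies in a two-state problem.
risks = rng.uniform(0.0, 1.0, size=(40, 2))

# A coordinatewise strictly increasing W leaves the lower boundary invariant.
assert pareto_minimal(risks) == pareto_minimal(np.exp(risks))
print(len(pareto_minimal(risks)), "admissible points")
```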

A class of minimax predictors of random variables with multinomial or multivariate hypergeometric distribution is determined in the case when the sample size is assumed to be a random variable with an unknown distribution. It is also proved that the usual predictors, which are minimax when the sample size is fixed, are no longer minimax but remain admissible when the sample size is an ancillary statistic with unknown distribution.