HIVOR: a program for agglomerative hierarchical clustering based on reciprocal nearest neighbours and the variance criterion
We consider the empirical risk function (for i.i.d. observations) under the assumption that f(α, z) is convex with respect to α. The asymptotic behaviour of its minimum is investigated, and tests for linear hypotheses are derived. Our results generalize some of those concerning LAD estimators and related tests.
Consistency of the least squares estimator (LSE) in linear models is studied under the assumption that the error vector is radially symmetric. The results established rely on generalized polar coordinates and on algebraic assumptions on the design matrix.
This article develops and discusses several continuity corrections to the normal and chi-squared approximations of some discrete distributions.
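As a minimal illustration of the idea (a generic sketch, not taken from the article; the function names are mine), the standard +1/2 continuity correction approximates the Binomial(n, p) CDF at an integer k by evaluating the fitted normal CDF at k + 1/2 rather than at k:

```python
import math

def binom_cdf(k, n, p):
    """Exact Binomial(n, p) CDF, P(X <= k), by direct summation."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def binom_cdf_normal(k, n, p, corrected=True):
    """Normal approximation to P(X <= k) for X ~ Binomial(n, p),
    optionally applying the +1/2 continuity correction."""
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    shift = 0.5 if corrected else 0.0
    return normal_cdf((k + shift - mu) / sigma)
```

For n = 20, p = 0.4, k = 8, the corrected approximation is markedly closer to the exact CDF value than the uncorrected one, which is the effect such corrections are designed to achieve.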
The paper deals with the linear comparative calibration problem, i.e., the situation where both variables are subject to errors. A quite general model is considered, which allows the inclusion of possibly correlated data (measurements). From a statistical point of view, the model can be represented by the linear errors-in-variables (EIV) model. We suggest an iterative algorithm for estimating the parameters of the analysis function (the inverse of the calibration line) and we solve the problem of deriving the...
The linear conformal transformation in the case of non-negligible errors in both coordinate systems is investigated. Estimation of the transformation parameters and their statistical properties are described. Confidence ellipses of transformed nonidentical points, and the cross-covariance matrices among them and the identical points, are determined. Simulations verifying the theoretical results are presented.
Linear Discriminant Analysis (LDA) is an important and well-developed classification technique, and to date many linear (and also nonlinear) discrimination methods have been put forward. A complication in applying LDA to real data occurs when the number of features exceeds the number of observations. In this case, the covariance estimates do not have full rank and thus cannot be inverted. There are a number of ways to deal with this problem. In this paper, we propose improving LDA in this...
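Since the paper's own proposal is truncated above, the following is only a hedged sketch of one standard remedy for the rank-deficient case: replacing the inverse of the pooled within-class covariance by its Moore-Penrose pseudoinverse. All function names and the tiny dataset are illustrative, not from the paper.

```python
import numpy as np

def pinv_lda_fit(X, y):
    """Fit a simple LDA-type rule when the pooled covariance estimate
    may be singular (more features than observations), using the
    Moore-Penrose pseudoinverse instead of the ordinary inverse."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    # Pooled within-class covariance; singular when p > n - #classes.
    S = sum(np.cov(X[y == c].T, bias=False) * (np.sum(y == c) - 1)
            for c in classes) / (len(y) - len(classes))
    return classes, means, np.linalg.pinv(S)

def pinv_lda_predict(model, X):
    """Assign each row of X to the class with the smallest
    Mahalanobis-type distance under the pseudoinverse metric."""
    classes, means, S_pinv = model
    dists = np.array([[(x - means[c]) @ S_pinv @ (x - means[c])
                       for c in classes] for x in X])
    return classes[np.argmin(dists, axis=1)]
```

Note that the pseudoinverse annihilates directions with no within-class variance, so this rule discriminates only in the subspace where the covariance estimate carries information; regularized (shrinkage) covariance estimators are a common alternative.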
In mixed linear statistical models, the best linear unbiased estimators require a known covariance matrix. In practice, however, the variance components must usually be estimated. The problem thus arises of determining the covariance matrix of the resulting plug-in estimators.
The properties of the regular linear model are well known (see [1], Chapter 1). In this paper, the situation where the vector of the first-order parameters is divided into two parts (the vector of useful parameters and the vector of nuisance parameters) is considered. It is shown how the BLUEs of these parameters change under constraints imposed on them. The theory is illustrated by a practical example.
The construction of confidence regions in nonlinear regression models is difficult, mainly when the dimension of the estimated vector parameter is large; singularity is also a problem. A simple approximation of the exact confidence region is therefore welcome. The aim of the paper is to give a small modification of the confidence ellipsoid constructed in a linearized model which, under some conditions, suffices as an approximation of the exact confidence region.
If the observation vector in a nonlinear regression model is normally distributed, then an algorithm for determining the exact confidence region for the parameter of the mean value of the observation vector is well known. However, its numerical realization is tedious, and it is therefore of interest to find a condition that enables us to construct this region in a simpler way.
The paper deals with the linear model with uncorrelated observations. The dispersions of the observed values are linear-quadratic functions of the unknown parameters of the mean (measurements by devices of a given class of precision). The locally best linear-quadratic unbiased estimators are investigated as improvements of the locally best linear unbiased estimators in the case that the design matrix has zero, one, or two linearly dependent rows.