The properties of the regular linear model are well known (see [1], Chapter 1). In this paper we consider the situation where the vector of first-order parameters is divided into two parts: the vector of useful parameters and the vector of nuisance parameters. It is shown how the BLUEs of these parameters change when constraints are imposed on them. The theory is illustrated by an example from practice.
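A minimal numerical sketch of the kind of computation involved (not taken from the paper): the standard Lagrange-multiplier form of the BLUE under linear constraints, written here as Aβ = c for the full parameter vector β = (β₁′, β₂′)′. The function name, the constraint form, and the regularity assumptions (invertible Σ and X′Σ⁻¹X, A of full row rank) are mine.

```python
import numpy as np

def constrained_blue(X, y, Sigma, A, c):
    """BLUE of beta in y = X beta + eps, Cov(eps) = Sigma,
    under the linear constraints A beta = c (regular case assumed:
    Sigma and X' Sigma^{-1} X invertible, A of full row rank)."""
    Si = np.linalg.inv(Sigma)
    M = X.T @ Si @ X                        # information matrix
    Minv = np.linalg.inv(M)
    beta_hat = Minv @ X.T @ Si @ y          # unconstrained BLUE
    # Lagrange-multiplier correction pulling beta_hat onto {A beta = c};
    # note that constraints on the nuisance part beta_2 change the
    # estimator of the useful part beta_1 as well, via this coupled term
    K = A @ Minv @ A.T
    return beta_hat - Minv @ A.T @ np.linalg.solve(K, A @ beta_hat - c)
```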
In nonlinear regression models an approximate value of the unknown parameter is frequently at our disposal. The model is then linearized and a linear estimate of the parameter can be calculated. Some criteria for recognizing whether a linearization is admissible are developed. If they are not satisfied, it is necessary either to take some quadratic corrections into account or to use the nonlinear least squares method. The aim of the paper is to find some criteria for an...
In the case of the nonlinear regression model, methods and procedures have been developed to obtain estimates of the parameters. These methods are much more complicated than those used when the model is linear. Moreover, unlike the linear case, the properties of the resulting estimators are unknown and usually depend on the true values of the estimated parameters. It is sometimes possible to approximate the nonlinear model by a linear one and use the much more developed linear...
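As a rough illustration of the linearization step discussed in the two preceding abstracts (a sketch, not the papers' procedure): given an approximate value β₀, the mean-value function is replaced by its first-order Taylor expansion at β₀ and the resulting linear least-squares problem is solved. The toy exponential model below is mine.

```python
import numpy as np

def linearized_estimate(f, jac, beta0, y):
    """One linearization step for y = f(beta) + eps: replace f by its
    first-order Taylor expansion at beta0 and solve the linear LSQ."""
    r = y - f(beta0)                    # residual at the expansion point
    J = jac(beta0)                      # Jacobian of f at beta0
    delta, *_ = np.linalg.lstsq(J, r, rcond=None)
    return beta0 + delta

# toy model f(t; a, b) = a * exp(-b t), linearized near (1.8, 0.6)
t = np.linspace(0.0, 5.0, 30)
f = lambda b: b[0] * np.exp(-b[1] * t)
jac = lambda b: np.column_stack([np.exp(-b[1] * t),
                                 -b[0] * t * np.exp(-b[1] * t)])
rng = np.random.default_rng(0)
y = f(np.array([2.0, 0.7])) + 0.05 * rng.standard_normal(t.size)
print(linearized_estimate(f, jac, np.array([1.8, 0.6]), y))
```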
The construction of confidence regions in nonlinear regression models is difficult, mainly when the dimension of the estimated vector parameter is large. Singularity is also a problem. Some simple approximation of the exact confidence region is therefore welcome. The aim of the paper is to give a small modification of the confidence ellipsoid constructed in a linearized model which, under some conditions, is a sufficient approximation of the exact confidence region.
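For orientation, the textbook ellipsoid that such a modification would start from; its general form is standard material, and no claim is made here about the paper's particular modification of it.

```latex
% (1-alpha)-confidence ellipsoid in the model linearized at \hat\beta,
% with Jacobian J, k = dim(beta), s^2 the residual mean square:
\[
  \Big\{ \beta :\;
    (\beta - \hat\beta)^{\top} J^{\top} J \,(\beta - \hat\beta)
    \;\le\; k\, s^{2}\, F_{k,\,n-k}(1-\alpha)
  \Big\}
\]
```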
If the observation vector in a nonlinear regression model is normally distributed, then an algorithm for determining the exact (1 − α)-confidence region for the parameter of the mean value of the observation vector is well known. However, its numerical realization is tedious, and it is therefore of some interest to find a condition which enables us to construct this region in a simpler way.
In nonlinear regression models with constraints, a linearization of the model leads to a bias in the estimators of the parameters of the mean value of the observation vector. Some criteria for recognizing whether a linearization is admissible are developed. If they are not satisfied, it is necessary to decide whether some quadratic corrections can improve the estimator. The aim of the paper is to contribute to the solution of this problem.
A linearization of the nonlinear regression model causes a bias in the estimators of the model parameters. It can be eliminated, e.g., either by a proper choice of the point at which the model is expanded into its Taylor series or by quadratic corrections of the linear estimators. The aim of the paper is to obtain formulae for the biases and variances of the estimators in linearized models, and also for the corrected estimators.
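A sketch of the objects this abstract refers to (standard second-order material, not the paper's own formulae): expanding the mean-value function to second order at β₀ gives the classical approximation of the bias of the linearized least-squares estimator.

```latex
% second-order Taylor expansion at beta_0, delta = beta - beta_0:
%   f(beta) ~ f(beta_0) + J delta + (1/2) H[delta, delta]
% classical approximate bias of the linearized LSE (sigma^2 the error
% variance, H_i the Hessian of the i-th component of f at beta_0):
\[
  \operatorname{bias}(\hat\beta) \;\approx\;
    -\tfrac{\sigma^{2}}{2}\,(J^{\top}J)^{-1} J^{\top} d,
  \qquad
  d_i = \operatorname{tr}\!\big[(J^{\top}J)^{-1} H_i\big].
\]
```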
The aim of the paper is to estimate a function (with known matrices) in a regression model with an unknown parameter and covariance matrix. Stochastically independent replications of the stochastic vector are considered, where the estimators of and are and, respectively. Locally and uniformly best unbiased estimators of the function, based on and, are given.
Two estimates of the regression coefficient in a bivariate normal distribution are considered: the usual one based on a sample and a new one making use of additional observations of one of the variables. They are compared with respect to variance. The same is done for two regression lines. The conclusion is that the additional observations are worth using only when the sample is very small.
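A quick simulation sketch of the comparison. The pooled-variance plug-in below is only one plausible way to exploit the extra observations; the paper's construction may differ, and the values of rho, n, m are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n, m, reps = 0.5, 10, 40, 20_000    # small paired sample, extra x's
cov = [[1.0, rho], [rho, 1.0]]

b_usual, b_extra = [], []
for _ in range(reps):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    sxy = np.cov(x, y)[0, 1]
    b_usual.append(sxy / np.var(x, ddof=1))
    # plug-in estimator re-estimating Var(x) from the pooled x sample
    x_pool = np.concatenate([x, rng.standard_normal(m)])
    b_extra.append(sxy / np.var(x_pool, ddof=1))

print(np.var(b_usual), np.var(b_extra))   # empirical variances to compare
```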
Matrix mathematics provides a powerful tool set for addressing statistical problems; in particular, the theory of matrix ranks and inertias has been developed as an effective methodology for simplifying various complicated matrix expressions and establishing equalities and inequalities that occur in statistical analysis. This paper describes how to establish exact formulas for calculating the ranks and inertias of covariances of predictors and estimators of parameter spaces in general linear models (GLMs),...
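For readers new to the terminology: the inertia of a symmetric matrix is the triple (n₊, n₋, n₀) of its numbers of positive, negative and zero eigenvalues. The numerical check below only fixes that definition; the paper's method derives closed-form rank/inertia formulas symbolically, not numerically.

```python
import numpy as np

def inertia(S, tol=1e-10):
    """Inertia (n+, n-, n0) of a symmetric matrix S: the counts of
    positive, negative and (numerically) zero eigenvalues."""
    w = np.linalg.eigvalsh(S)
    return ((w > tol).sum(), (w < -tol).sum(), (np.abs(w) <= tol).sum())

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
print(inertia(S), np.linalg.matrix_rank(S))   # (2, 0, 1) and rank 2
```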
Least-Squares Solution (LSS) of a linear matrix equation and Ordinary Least-Squares Estimator (OLSE) of unknown parameters in a general linear model are two standard algebraic methods in computational mathematics and regression analysis. Assume that a symmetric quadratic matrix-valued function Φ(Z) = Q − ZPZ′ is given, where Z is taken as the LSS of the linear matrix equation AZ = B. In this paper, we establish a group of formulas for calculating the maximum and minimum ranks and inertias of Φ(Z)...
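A small sketch of the setting (the particular Q, P and dimensions are illustrative, not from the paper): the minimum-norm LSS of AZ = B is A⁺B, the general LSS is A⁺B + (I − A⁺A)W with W arbitrary, and Φ(Z) = Q − ZPZ′ is evaluated over that solution set; the extremal rank/inertia formulas describe how much the inertia of Φ can vary as W ranges freely.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))          # A Z = B with Z of size 3 x 2
A[:, 2] = A[:, 0] + A[:, 1]              # rank-deficient A: LSS is non-unique
B = rng.standard_normal((5, 2))

Z = np.linalg.pinv(A) @ B                # minimum-norm least-squares solution
# general LSS: Z + (I - pinv(A) A) W, W arbitrary; this free term is the
# source of the variation in rank/inertia that the extremal formulas bound
Q = np.eye(3)                            # illustrative symmetric Q (3 x 3)
P = np.diag([1.0, 2.0])                  # illustrative symmetric P (2 x 2)
Phi = Q - Z @ P @ Z.T                    # Phi(Z) = Q - Z P Z'

w = np.linalg.eigvalsh(Phi)
print((w > 1e-10).sum(), (w < -1e-10).sum())   # inertia of Phi at this Z
```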