Displaying similar documents to “On decompositions of estimators under a general linear model with partial parameter restrictions”

Bias of LS estimators in nonlinear regression models with constraints. Part II: Biadditive models

Jean-Baptiste Denis, Andrej Pázman (1999)

Applications of Mathematics

Similarity:

General results giving approximate bias for nonlinear models with constrained parameters are applied to bilinear models in the ANOVA framework, called biadditive models. Known results on the information matrix and the asymptotic variance matrix of the parameters are summarized, and the Jacobians and Hessians of the response and of the constraints are derived. These intermediate results are the basis for any subsequent second-order study of the model. Despite the large number of parameters...

Matrix rank and inertia formulas in the analysis of general linear models

Yongge Tian (2017)

Open Mathematics

Similarity:

Matrix mathematics provides a powerful tool set for addressing statistical problems; in particular, the theory of matrix ranks and inertias has been developed as an effective methodology for simplifying complicated matrix expressions and for establishing equalities and inequalities that occur in statistical analysis. This paper describes how to establish exact formulas for calculating ranks and inertias of covariances of predictors and estimators of parameter spaces in general linear...
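
As a rough numerical illustration of the two quantities this abstract refers to (not taken from Tian's formulas), the following Python sketch computes the rank and the inertia of a symmetric matrix from its eigenvalues; the function name and the tolerance are illustrative choices.

```python
import numpy as np

def rank_and_inertia(A, tol=1e-10):
    """Return (rank, (n_plus, n_minus, n_zero)) for a symmetric matrix A.

    The inertia counts the positive, negative and zero eigenvalues;
    the rank equals n_plus + n_minus.
    """
    A = np.asarray(A, dtype=float)
    eigvals = np.linalg.eigvalsh((A + A.T) / 2)   # symmetrize for safety
    n_plus = int(np.sum(eigvals > tol))
    n_minus = int(np.sum(eigvals < -tol))
    n_zero = len(eigvals) - n_plus - n_minus
    return n_plus + n_minus, (n_plus, n_minus, n_zero)

# Example: a rank-deficient symmetric matrix with one positive and one negative eigenvalue
A = np.array([[2.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, 0.0]])
print(rank_and_inertia(A))   # (2, (1, 1, 1))
```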

Minimum disparity estimators for discrete and continuous models

María Luisa Menéndez, Domingo Morales, Leandro Pardo, Igor Vajda (2001)

Applications of Mathematics

Similarity:

Disparities of discrete distributions are introduced as a natural and useful extension of the information-theoretic divergences. The minimum disparity point estimators are studied in regular discrete models with i.i.d. observations and their asymptotic efficiency of the first order, in the sense of Rao, is proved. These estimators are applied to continuous models with i.i.d. observations when the observation space is quantized by fixed points, or at random, by the sample quantiles...
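
For illustration only, here is a minimal Python sketch of one member of the disparity family: a Hellinger-type minimum disparity estimator of a Poisson parameter computed from discrete counts. The Poisson model, the Hellinger-type disparity and the helper name are assumptions made for this sketch, not the paper's general construction.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

def hellinger_disparity(counts, lam, support):
    """Hellinger-type disparity between the empirical pmf of the counts
    and a Poisson(lam) model restricted to a finite support."""
    emp = np.array([np.mean(counts == k) for k in support])
    mod = poisson.pmf(support, lam)
    mod[-1] += 1.0 - mod.sum()     # lump the tail mass into the last cell
    return np.sum((np.sqrt(emp) - np.sqrt(mod)) ** 2)

rng = np.random.default_rng(0)
data = rng.poisson(3.0, size=500)
support = np.arange(0, data.max() + 1)

res = minimize_scalar(lambda lam: hellinger_disparity(data, lam, support),
                      bounds=(0.1, 20.0), method="bounded")
print("minimum disparity estimate of lambda:", res.x)   # close to 3.0
```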

Estimation in universal models with restrictions

Eva Fišerová (2004)

Discussiones Mathematicae Probability and Statistics

Similarity:

In modelling a measurement experiment some singularities can occur even if the experiment is quite standard and simple. Such an experiment is described in the paper as a motivating example. The paper shows how to handle these situations under special restrictions on the model parameters. The estimability of the model parameters is studied and unbiased estimators are given in explicit form.

Estimation of the first order parameters in the two-epoch linear model

Karel Hron (2007)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

Similarity:

This paper considers the linear regression model in which the mean value parameters are divided into a stable and a nonstable part in each of the two epochs of measurement. Equivalent formulas for the best linear unbiased estimators of these parameters in both epochs are then derived using a partitioned matrix inverse.
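
The partitioned matrix inverse mentioned in this abstract is the standard block inversion via the Schur complement. A minimal Python sketch (not the paper's two-epoch formulas) that verifies the identity numerically:

```python
import numpy as np

def block_inverse(A, B, C, D):
    """Inverse of the partitioned matrix [[A, B], [C, D]] via the Schur
    complement S = D - C A^{-1} B (assumes A and S are invertible)."""
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B                     # Schur complement of A
    Sinv = np.linalg.inv(S)
    top_left = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    top_right = -Ainv @ B @ Sinv
    bottom_left = -Sinv @ C @ Ainv
    return np.block([[top_left, top_right],
                     [bottom_left, Sinv]])

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
M = M @ M.T + 5 * np.eye(5)                  # well-conditioned test matrix
A, B, C, D = M[:3, :3], M[:3, 3:], M[3:, :3], M[3:, 3:]
assert np.allclose(block_inverse(A, B, C, D), np.linalg.inv(M))
```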

Strong law of large numbers for additive extremum estimators

João Tiago Mexia, Pedro Corte Real (2001)

Discussiones Mathematicae Probability and Statistics

Similarity:

Extremum estimators are obtained by maximizing or minimizing a function of the sample and of the parameters with respect to the parameters. When the function to maximize or minimize is a sum of subfunctions, each depending on one observation, the extremum estimators are additive. Maximum likelihood estimators are additive extremum estimators whenever the observations are independent. Least squares estimators for multiple regressions are another instance of additive extremum estimators when the...
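
A minimal Python sketch of an additive extremum estimator, assuming i.i.d. normal observations: the objective is a sum of per-observation subfunctions (the negative log-densities), so its minimizer is the additive maximum likelihood estimator of the location and the scale. The model and the optimizer settings are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(loc=1.5, scale=2.0, size=1000)

def neg_loglik(theta, data):
    """Sum over observations of the normal negative log-density;
    the sum structure is what makes the estimator additive."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                # keep sigma positive
    per_obs = 0.5 * np.log(2 * np.pi) + log_sigma + (data - mu) ** 2 / (2 * sigma ** 2)
    return per_obs.sum()

res = minimize(neg_loglik, x0=np.array([0.0, 0.0]), args=(x,))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)    # close to the sample mean and standard deviation
```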

On the equality of the ordinary least squares estimators and the best linear unbiased estimators in multivariate growth-curve models.

Gabriela Beganu (2007)

RACSAM

Similarity:

It is well known that several necessary and sufficient conditions have been proved for the ordinary least squares estimators (OLSE) to be the best linear unbiased estimators (BLUE) of the fixed effects in general linear models. The purpose of this article is to verify one of these conditions, given by Zyskind [39, 40]: there exists a matrix Q such that ΩX = XQ, where X and Ω are the design matrix and the covariance matrix, respectively. The accessibility of this condition will be shown...
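
The Zyskind condition quoted in this abstract, ΩX = XQ for some matrix Q, means that the column space of ΩX is contained in that of X, and it can be checked numerically for a given pair (X, Ω). A minimal Python sketch with a hypothetical helper name and illustrative test matrices:

```python
import numpy as np

def zyskind_condition_holds(X, Omega, tol=1e-8):
    """Check whether some Q satisfies Omega @ X == X @ Q, i.e. whether the
    column space of Omega @ X lies inside the column space of X.
    When it does, OLSE coincides with BLUE for the model (X, Omega)."""
    OX = Omega @ X
    Q, *_ = np.linalg.lstsq(X, OX, rcond=None)   # best Q in the least squares sense
    return np.allclose(X @ Q, OX, atol=tol)

n = 6
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])

# Omega = I: OLSE is trivially BLUE, so the condition holds.
print(zyskind_condition_holds(X, np.eye(n)))             # True

# A generic positive definite covariance matrix usually violates it.
rng = np.random.default_rng(3)
A = rng.normal(size=(n, n))
print(zyskind_condition_holds(X, A @ A.T + np.eye(n)))   # typically False
```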