Some remarks on testing statistical hypotheses in the linear regression model with constraints
Necessary and sufficient conditions are derived for the inclusions … and … to be fulfilled, where …, … and …, … are classes of invariant linearly sufficient statistics (Oktaba, Kornacki, Wawrzosek (1988)) corresponding to the Gauss-Markov models … and …, respectively.
This paper deals with the stability of stochastic optimization problems in a general setting. The objective function is defined on a metric space and depends on a probability measure that is unknown but estimated from empirical observations. We derive stability results without precise knowledge of the problem structure and without measurability assumptions. Moreover, …-optimal solutions are considered. The setup is illustrated by the consistency of a …-estimator in the linear regression model.
This paper deals with an application of regression analysis to the regulation of blood sugar in diabetes mellitus. Section 2 gives a description of Gram-Schmidt orthogonalization, while Section 3 discusses the difference between Gauss-Markov estimation and least-squares estimation. Section 4 is devoted to the statistical analysis of blood sugar during the night. The response, the change in blood sugar, is explained by three variables: time, food, and physical activity ("Bewegung"). At the beginning...
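The Gram-Schmidt orthogonalization mentioned above can be sketched in a few lines; this is a minimal, self-contained illustration (the matrix X and its values are hypothetical, not taken from the paper):

```python
import numpy as np

def gram_schmidt(X):
    """Classical Gram-Schmidt: return a matrix Q whose columns form an
    orthonormal basis for the columns of X (assumed linearly independent)."""
    Q = np.zeros_like(X, dtype=float)
    for j in range(X.shape[1]):
        v = X[:, j].astype(float)
        for i in range(j):
            # Remove the component of column j along the i-th orthonormal vector
            v = v - (Q[:, i] @ X[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Toy design matrix: intercept column plus a regressor
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
Q = gram_schmidt(X)
# Columns of Q are orthonormal: Q.T @ Q is the identity
```

In regression, orthogonalizing the design columns decouples the coefficient estimates, which is one reason the technique appears alongside Gauss-Markov and least-squares estimation.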
The aim of this work is to propose models for studying the toxic effect of different concentrations of some standard mutagens in different colon cancer cell lines. We find estimates and, by means of an inverse regression problem, confidence intervals for the subtoxic concentration, that is, the concentration that reduces the number of colonies obtained in the absence of the mutagen by thirty percent.
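The point estimate behind such an inverse regression can be sketched as follows: fit a dose-response line and solve it backwards for the concentration at which the fitted colony count drops to 70% of its value at zero concentration. The data below are invented for illustration, and a plain linear fit stands in for whatever model the paper actually uses:

```python
import numpy as np

# Hypothetical colony counts at increasing mutagen concentrations
conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
colonies = np.array([100.0, 88.0, 79.0, 66.0, 55.0])

# Ordinary least-squares fit: colonies ≈ b0 + b1 * conc
b1, b0 = np.polyfit(conc, colonies, 1)

# Inverse regression: concentration where the fitted mean falls to
# 70% of the fitted count at zero concentration (the "subtoxic" dose)
target = 0.7 * b0
c30 = (target - b0) / b1
```

A confidence interval for `c30`, as in the paper, would additionally require propagating the uncertainty of `b0` and `b1` (e.g. via Fieller's method or the delta method).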
We discuss, partly through examples, several intuitively unexpected results in a standard linear regression model. We demonstrate that direct observations of the regression curve at a given point cannot be substituted by observations at two very close neighboring points. Conversely, we show that observations at two distant design points reduce the variance of the estimator. In an experiment with correlated observations we show somewhat unexpected conditions under which a design point gives no...
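The effect of design-point spacing on estimator variance can be made concrete with the simplest case: a line fitted through two independent observations at points x1 and x2 with error variance sigma2, where the slope estimate (y2 - y1)/(x2 - x1) has variance 2*sigma2/(x2 - x1)^2. This is a toy calculation, not the paper's example:

```python
def slope_var(x1, x2, sigma2=1.0):
    """Variance of the slope estimator from two independent observations
    at design points x1, x2, each with error variance sigma2."""
    return 2.0 * sigma2 / (x2 - x1) ** 2

close = slope_var(1.0, 1.1)   # two very close neighboring points
far = slope_var(0.0, 10.0)    # two distant design points
# close is several orders of magnitude larger than far:
# spreading the design points shrinks the slope variance
```

This is the elementary mechanism behind the second claim above; the paper's surprising results concern subtler settings, including correlated observations.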
Extremum estimators are obtained by maximizing or minimizing a function of the sample and of the parameters with respect to the parameters. When the function to be maximized or minimized is a sum of subfunctions, each depending on one observation, the extremum estimators are additive. Maximum likelihood estimators are additive extremum estimators whenever the observations are independent. Another instance of additive extremum estimators is the least squares estimator for multiple regression when the usual assumptions...
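The least-squares case can be written out directly: the criterion is a sum of per-observation subfunctions, and its minimizer is the familiar normal-equations solution. A minimal sketch with simulated data (the data-generating line 2 + 3x is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 + 3.0 * x + 0.1 * rng.standard_normal(20)

def loss(params):
    """Additive extremum criterion: a sum of subfunctions,
    each depending on a single observation (xi, yi)."""
    a, b = params
    return sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))

# The least-squares estimator minimizes this additive criterion;
# here it is computed in closed form via the normal equations
X = np.column_stack([np.ones_like(x), x])
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
# loss([a_hat, b_hat]) is no larger than at any perturbed parameter value
```

The same additive structure is what makes M-estimation and independent-sample maximum likelihood fit the extremum-estimator framework described above.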
Biology and medicine are not the only fields that present problems unsolvable through a linear models approach. One way to overcome this obstacle is to use nonlinear methods, even though these are not as thoroughly explored. Another possibility is to linearize, transforming the originally nonlinear task to make it accessible to linear methods. In this article I investigate a quick and easy criterion for verifying the suitability of linearizing nonlinear problems via Taylor series expansion so that...
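The core idea of such a suitability check can be sketched numerically: compare the first-order Taylor remainder against its second-order bound to judge whether the linearization error is negligible over the parameter range of interest. This is a generic illustration (the exponential model and the step sizes are my assumptions, not the paper's criterion):

```python
import numpy as np

def linearization_error(f, fprime, fsecond, theta0, delta):
    """First-order Taylor remainder |f(theta0+delta) - f(theta0) - f'(theta0)*delta|
    and its rough second-order estimate |f''(theta0)| * delta**2 / 2."""
    actual = abs(f(theta0 + delta) - f(theta0) - fprime(theta0) * delta)
    estimate = abs(fsecond(theta0)) * delta ** 2 / 2
    return actual, estimate

# Example: linearizing f(theta) = exp(theta) around theta0 = 0.1
# for a small parameter step delta = 0.01
actual, estimate = linearization_error(np.exp, np.exp, np.exp, 0.1, 0.01)
# If the remainder is small relative to the measurement error,
# the linearized model can stand in for the nonlinear one
```

A criterion of this flavor would declare linearization suitable whenever the remainder stays below some fraction of the observation noise across the relevant parameter region.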