Remarks on the comparison of linear normal experiments.
The autoregressive process plays an important part in prediction problems that lead to decision making. In practice, the least squares method is used to estimate the parameter θ̃ of a first-order autoregressive process taking values in a real separable Banach space B (ARB(1)), which satisfies the relation X_n = θ̃(X_{n-1}) + ε_n, where (ε_n) is a B-valued white noise. In this paper we study the convergence in distribution of the linear operator for ||θ̃|| > 1, and we construct Bernstein-type inequalities for this operator.
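In the real-valued case (B = ℝ), the least squares estimator of the autoregression parameter reduces to a ratio of empirical moments. The sketch below is a minimal illustration for a stable scalar AR(1) (|θ| < 1), not the explosive Banach-space operator setting of the paper; all function names are illustrative:

```python
import random

def simulate_ar1(theta, n, seed=0):
    """Simulate a scalar AR(1) process X_t = theta * X_{t-1} + eps_t
    with standard Gaussian innovations, started at X_0 = 0."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n):
        x.append(theta * x[-1] + rng.gauss(0.0, 1.0))
    return x

def ls_estimate(x):
    """Least squares estimator of theta:
    sum_t X_{t-1} X_t / sum_t X_{t-1}^2."""
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den
```

For a stable process this estimator is consistent, with fluctuations of order n^{-1/2}; the regime ||θ̃|| > 1 studied in the paper requires a different normalization.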
This paper proposes a bias reduction of the coefficient estimator for linear regression models when observations are randomly censored and the error distribution is unknown. The proposed bias correction is applied to the weighted least squares estimator proposed by Stute [28] [W. Stute: Consistent estimation under random censorship when covariables are present. J. Multivariate Anal. 45 (1993), 89-103], and it is based on model-based bootstrap resampling techniques that also allow us to work with...
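The generic bootstrap bias correction underlying such schemes replaces the unknown bias E[θ̂] − θ by its bootstrap analogue and subtracts it. A hypothetical sketch on a deliberately biased estimator (the divide-by-n variance), using plain resampling rather than the censored-data, model-based scheme of the paper:

```python
import random

def var_biased(xs):
    """Divide-by-n variance estimator; its exact bias is -sigma^2 / n."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def bootstrap_bias_correct(xs, estimator, n_boot=500, seed=0):
    """Bootstrap bias correction: estimate the bias as
    mean_b(theta*_b) - theta_hat and subtract it from theta_hat."""
    rng = random.Random(seed)
    theta_hat = estimator(xs)
    boot = []
    for _ in range(n_boot):
        resample = [rng.choice(xs) for _ in xs]
        boot.append(estimator(resample))
    bias_est = sum(boot) / n_boot - theta_hat
    return theta_hat - bias_est
```

Since the bootstrap bias of this estimator is negative, the corrected value is pushed upward, toward the unbiased divide-by-(n−1) version.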
We derive expressions for the asymptotic approximation of the bias of the least squares estimators in nonlinear regression models whose parameters are subject to nonlinear equality constraints. The suggested approach modifies the normal equations of the estimator and approximates them up to order n^{-1}, where n is the number of observations. The “bias equations” so obtained are solved under different assumptions on the constraints and on the model. For functions of the parameters the invariance of the approximate...
General results giving the approximate bias for nonlinear models with constrained parameters are applied to bilinear models in an ANOVA framework, called biadditive models. Known results on the information matrix and the asymptotic variance matrix of the parameters are summarized, and the Jacobians and Hessians of the response and of the constraints are derived. These intermediate results are the basis for any subsequent second-order study of the model. Despite the large number of parameters involved,...
We study properties of Linear Genetic Programming (LGP) through several regression and classification benchmarks. In each problem, we decompose the results into bias and variance components, and explore the effect of varying certain key parameters on the overall error and its decomposed contributions. These parameters are the maximum program size, the initial population, and the function set used. We confirm and quantify several insights into the practical usage of GP, most notably that (a) the...
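The bias/variance decomposition used in such benchmarks can be estimated empirically by refitting on many independent training sets and splitting the squared error at a test point. A small sketch with a deliberately underfitting constant-mean predictor standing in for the learner (not LGP itself); the target function and all names are illustrative:

```python
import random

def true_f(x):
    return x * x  # synthetic regression target

def bias_variance_at_point(x0, n_train=30, n_rep=200, noise=1.0, seed=0):
    """Monte Carlo estimate of bias^2 and variance of a constant-mean
    predictor at test point x0, taken over random training sets drawn
    from y = true_f(x) + Gaussian noise, x uniform on [-1, 1]."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_rep):
        xs = [rng.uniform(-1.0, 1.0) for _ in range(n_train)]
        ys = [true_f(x) + rng.gauss(0.0, noise) for x in xs]
        preds.append(sum(ys) / n_train)  # constant predictor: training mean
    mean_pred = sum(preds) / n_rep
    bias2 = (mean_pred - true_f(x0)) ** 2
    var = sum((p - mean_pred) ** 2 for p in preds) / n_rep
    return bias2, var
```

For an underfitting model like this one, the bias term dominates away from the mean of the target; richer models shift error from the bias component into the variance component, which is the trade-off the benchmark decomposition tracks as program size and other LGP parameters vary.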