We study functional regression with random subgaussian design and real-valued response. The focus is on problems in which the regression function can be well approximated by a functional linear model whose slope function is “sparse” in the sense that it can be represented as a sum of a small number of well-separated “spikes”. This can be viewed as an extension of the now classical sparse estimation problems to the case of infinite dictionaries. We study an estimator of the regression function...
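For concreteness, a typical form of such a model (the notation here is assumed for illustration, not taken from the abstract) is
\[
Y=\int_0^1 f_*(t)\,X(t)\,dt+\xi,
\qquad
f_*(t)=\sum_{j=1}^{s} a_j\,\varphi\!\Big(\frac{t-\tau_j}{h}\Big),
\]
where $X$ is the random subgaussian design process, $\xi$ is a noise variable, and the slope function $f_*$ is a superposition of a small number $s$ of localized, well-separated bumps (“spikes”) centered at the points $\tau_1,\dots,\tau_s$.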
Consistency of the least squares estimator (LSE) in linear models is studied under the assumption that the error vector is radially symmetric. The results are established using generalized polar coordinates and algebraic assumptions on the design matrix.
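As a sketch of the setting (assumed notation), the model is $Y=X\beta+\varepsilon$, and radial symmetry of the error vector means
\[
\varepsilon \overset{d}{=} R\,U,
\]
where $R\ge 0$ is a random radius and $U$ is uniformly distributed on the unit sphere, independent of $R$; consistency of the LSE $\hat\beta=(X^\top X)^{-1}X^\top Y$ is then studied under algebraic conditions on the design matrix $X$.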
A linear model in which the mean vector and the covariance matrix depend on the same parameters is called connected. Limit results for these models are presented. The characteristic function of the gradient of the score is obtained for normal connected models, thus enabling the study of maximum likelihood estimators. A special case with a diagonal covariance matrix is studied.
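In an assumed notation, a connected model specifies the mean vector and the covariance matrix of the observation vector through a common parameter vector $\beta$,
\[
\operatorname{E}(Y)=X\beta, \qquad \operatorname{var}(Y)=\Sigma(\beta),
\]
the normal connected case being $Y\sim\mathcal{N}_n\big(X\beta,\Sigma(\beta)\big)$ and the diagonal special case corresponding to $\Sigma(\beta)=\operatorname{diag}\big(\sigma_1^2(\beta),\dots,\sigma_n^2(\beta)\big)$.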
In this paper the likelihood function is considered to be the primary source of the objectivity of a Bayesian method. The necessity of using the expected behaviour of the likelihood function for the choice of the prior distribution is emphasized. Numerical examples, including seasonal adjustment of time series, are given to illustrate the practical utility of the common-sense approach to Bayesian statistics proposed in this paper.
The paper deals with the linear comparative calibration problem, i.e., the situation when both variables are subject to errors. A quite general model is considered, which allows for possibly correlated data (measurements). From a statistical point of view, the model can be represented by the linear errors-in-variables (EIV) model. We suggest an iterative algorithm for estimating the parameters of the analysis function (the inverse of the calibration line) and we solve the problem of deriving the...
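As an illustration of such a model (assumed notation), the comparative calibration setting can be written as
\[
x_i=\mu_i+\varepsilon_i, \qquad y_i=a+b\,\mu_i+\delta_i, \qquad i=1,\dots,n,
\]
where both measurements $x_i$ and $y_i$ are error-contaminated versions of the unknown true values $\mu_i$ and $a+b\,\mu_i$, the error vectors may be correlated, and the analysis function is the inverse of the calibration line, $\mu=(y-a)/b$.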
The linear conformal transformation in the case of non-negligible errors in both coordinate systems is investigated. Estimation of the transformation parameters and their statistical properties are described. Confidence ellipses of the transformed non-identical points, as well as cross-covariance matrices between them and the identical points, are determined. Some simulations verifying the theoretical results are presented.
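For orientation (assumed parametrization), a linear conformal (similarity) transformation between two planar coordinate systems has the form
\[
\begin{pmatrix} x' \\ y' \end{pmatrix}
=
\begin{pmatrix} a & -b \\ b & a \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
+
\begin{pmatrix} t_x \\ t_y \end{pmatrix},
\]
with the four parameters $(a,b,t_x,t_y)$ estimated from the identical points observed, with errors, in both systems; the non-identical points are then transformed using the estimated parameters, which gives rise to the confidence ellipses and cross-covariance matrices mentioned above.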
In mixed linear statistical models the best linear unbiased estimators require a known covariance matrix. However, the variance components must usually be estimated. Thus the problem arises of what the covariance matrix of the plug-in estimators is.
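A sketch of the situation in standard notation (assumed here): in the mixed linear model
\[
\operatorname{E}(Y)=X\beta, \qquad \operatorname{var}(Y)=\Sigma(\vartheta)=\sum_{i=1}^{p}\vartheta_i V_i,
\]
the BLUE $\hat\beta(\vartheta)=\big(X^\top\Sigma(\vartheta)^{-1}X\big)^{-1}X^\top\Sigma(\vartheta)^{-1}Y$ requires the variance components $\vartheta$; replacing them by an estimate $\hat\vartheta$ gives the plug-in estimator $\hat\beta(\hat\vartheta)$, whose covariance matrix is the quantity in question.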
A linear model with approximate variance components is considered. Differences between the approximate and actual values of the variance components influence the position and shape of confidence ellipsoids, the level of statistical tests and their power function. A procedure for recognizing whether these differences can be neglected is given in the paper.
The properties of the regular linear model are well known (see [1], Chapter 1). In this paper the situation where the vector of first-order parameters is divided into two parts (the vector of useful parameters and the vector of nuisance parameters) is considered. It is shown how the BLUEs of these parameters change when constraints are imposed on them. The theory is illustrated by an example from practice.
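In an assumed notation, the partitioned model with constraints reads
\[
Y=X_1\beta_1+X_2\beta_2+\varepsilon, \qquad \operatorname{var}(\varepsilon)=\Sigma, \qquad b+B_1\beta_1+B_2\beta_2=0,
\]
where $\beta_1$ is the vector of useful parameters and $\beta_2$ the vector of nuisance parameters; the question is how the BLUEs of $\beta_1$ and $\beta_2$ change once the linear constraints are taken into account.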
In nonlinear regression models an approximate value of the unknown parameter is frequently at our disposal. The model is then linearized and a linear estimate of the parameter can be calculated. Criteria for recognizing whether linearization is admissible are developed. If they are not satisfied, it is necessary either to take some quadratic corrections into account or to use the nonlinear least squares method. The aim of the paper is to find some criteria for an...
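Schematically (assumed notation): with an approximate value $\beta_0$ of the parameter, the mean function of the model $Y=f(\beta)+\varepsilon$ is expanded as
\[
f(\beta_0+\delta\beta)\approx f(\beta_0)+F\,\delta\beta+\tfrac12\,\kappa(\delta\beta),
\qquad
F=\frac{\partial f(\beta)}{\partial\beta^\top}\Big|_{\beta=\beta_0},
\]
the linearized model keeps only the term $F\,\delta\beta$, and criteria of this kind compare the size of the neglected quadratic term $\kappa(\delta\beta)$ with the noise level in order to decide whether linearization (or a quadratic correction) is admissible.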
For the nonlinear regression model, methods and procedures have been developed to obtain estimates of the parameters. These methods are much more complicated than the procedures used when the model is linear. Moreover, unlike the linear case, the properties of the resulting estimators are unknown and usually depend on the true values of the estimated parameters. It is sometimes possible to approximate the nonlinear model by a linear one and use the much more developed linear...
The construction of confidence regions in nonlinear regression models is difficult, mainly when the dimension of the estimated vector parameter is large. Singularities also pose a problem. Therefore a simple approximation of the exact confidence region is welcome. The aim of the paper is to give a small modification of the confidence ellipsoid constructed in the linearized model which, under some conditions, is sufficient as an approximation of the exact confidence region.
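For orientation (assumed notation, with $\operatorname{var}(Y)=\Sigma$ known), the confidence ellipsoid of the linearized model that serves as the starting point is
\[
\mathcal{E}=\Big\{\beta:\ (\beta-\hat\beta)^\top F^\top\Sigma^{-1}F\,(\beta-\hat\beta)\le\chi^2_k(1-\alpha)\Big\},
\]
where $F$ is the Jacobian of the mean function at the estimate $\hat\beta$ and $k$ the dimension of $\beta$; the modification proposed in the paper adjusts this ellipsoid so that, under the stated conditions, it approximates the exact confidence region.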
If the observation vector in a nonlinear regression model is normally distributed, then an algorithm for determining the exact $(1-\alpha)$-confidence region for the parameter of the mean value of the observation vector is well known. However, its numerical realization is tedious, and therefore it is of interest to find a condition which enables us to construct this region in a simpler way.
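One well-known exact region of this type, sketched here under the assumption $\operatorname{var}(Y)=\sigma^2 I$ with unknown $\sigma^2$ (not necessarily the construction used in the paper), is
\[
\Big\{\beta:\ \frac{\big(Y-f(\beta)\big)^\top P_{F(\beta)}\big(Y-f(\beta)\big)/k}{\big(Y-f(\beta)\big)^\top\big(I-P_{F(\beta)}\big)\big(Y-f(\beta)\big)/(n-k)}\le F_{k,\,n-k}(1-\alpha)\Big\},
\]
where $F(\beta)$ is the Jacobian of the mean function $f$ at $\beta$, $P_{F(\beta)}$ the orthogonal projector onto its column space, $k$ the number of parameters and $n$ the number of observations; evaluating the projector for every candidate $\beta$ is what makes the numerical realization tedious.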