Displaying 61 – 80 of 119

Linearized regression model with constraints of type II

Lubomír Kubáček (2003)

Applications of Mathematics

A linearization of a nonlinear regression model causes a bias in the estimators of the model parameters. It can be eliminated, e.g., either by a proper choice of the point at which the model is developed into a Taylor series or by quadratic corrections of the linear estimators. The aim of the paper is to obtain formulae for the biases and variances of estimators in linearized models and also for the corrected estimators.
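
The linearization step the abstract refers to can be illustrated numerically: develop the nonlinear model into a Taylor series around a working point and solve the resulting linear least-squares problem, iterating as in Gauss-Newton. This is a minimal sketch under illustrative assumptions (the exponential model, data, and function names are not taken from the paper):

```python
import numpy as np

# Sketch: linearize the nonlinear regression model y = g(beta, x) + eps
# around a working point beta0 and solve the linearized least-squares
# problem; iterating this is the Gauss-Newton method. The choice of the
# development point is what drives the linearization bias discussed above.

def g(beta, x):
    a, b = beta
    return a * np.exp(b * x)

def jacobian(beta, x):
    # partial derivatives of g with respect to (a, b)
    a, b = beta
    return np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])

def gauss_newton(x, y, beta0, steps=20):
    beta = np.asarray(beta0, dtype=float)
    for _ in range(steps):
        J = jacobian(beta, x)
        r = y - g(beta, x)
        # linearized model: r ≈ J @ delta, solved by least squares
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        beta = beta + delta
    return beta

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = g([2.0, 1.5], x) + 0.01 * rng.standard_normal(x.size)
beta_hat = gauss_newton(x, y, beta0=[1.0, 1.0])
```

The bias studied in the paper comes from stopping at the linear term of the Taylor series; the quadratic corrections it derives refine exactly this approximation.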

M-estimation in nonlinear regression for longitudinal data

Martina Orsáková (2007)

Kybernetika

The longitudinal regression model $Z_{ij} = m(\theta_0, \mathbb{X}_i(T_{ij})) + \varepsilon_{ij}$, where $Z_{ij}$ is the $j$th measurement of the $i$th subject at random time $T_{ij}$, $m$ is the regression function, $\mathbb{X}_i(T_{ij})$ is a predictable covariate process observed at time $T_{ij}$ and $\varepsilon_{ij}$ is a noise term, is studied in the marked point process framework. In this paper we introduce the assumptions which guarantee the consistency and asymptotic normality of a smooth $M$-estimator of the unknown parameter $\theta_0$.

Model selection for (auto-)regression with dependent data

Yannick Baraud, F. Comte, G. Viennet (2001)

ESAIM: Probability and Statistics

In this paper, we study the problem of nonparametric estimation of an unknown regression function from dependent data with sub-Gaussian errors. As a particular case, we handle the autoregressive framework. For this purpose, we consider a collection of finite-dimensional linear spaces (e.g. linear spaces spanned by wavelets or piecewise polynomials on a possibly irregular grid) and we estimate the regression function by a least-squares estimator built on a data-driven selected linear space among...
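
The procedure described above — least squares on each candidate linear space, followed by a data-driven choice among them — can be sketched as follows. This is a hedged illustration, not the paper's exact method: the polynomial collection, the penalty constant, and the criterion form are assumptions.

```python
import numpy as np

# Sketch of model selection over a collection of finite-dimensional
# linear spaces (here: polynomial spaces of growing dimension), choosing
# the space by a penalized residual criterion of the generic form
# ||y - proj||^2 / n + pen * dim * sigma2 / n.

def fit_projection(x, y, dim):
    # least-squares projection of y onto the polynomial space of dimension `dim`
    X = np.vander(x, dim, increasing=True)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef

def select_model(x, y, dims, sigma2, pen=2.0):
    n = len(y)
    best = None
    for d in dims:
        fit = fit_projection(x, y, d)
        crit = np.mean((y - fit) ** 2) + pen * d * sigma2 / n
        if best is None or crit < best[0]:
            best = (crit, d, fit)
    return best[1], best[2]

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 300)
y = 1.0 + 2.0 * x - x ** 2 + 0.1 * rng.standard_normal(x.size)
dim, fit = select_model(x, y, dims=range(1, 11), sigma2=0.01)
```

The penalty term trades the residual decrease from a richer space against its extra dimensions, so the selected space adapts to the unknown regression function without prior knowledge of it.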

Model selection for regression on a random design

Yannick Baraud (2002)

ESAIM: Probability and Statistics

We consider the problem of estimating an unknown regression function when the design is random with values in $\mathbb{R}^k$. Our estimation procedure is based on model selection and does not rely on any prior information on the target function. We start with a collection of linear functional spaces and build, on a data-selected space among this collection, the least-squares estimator. We study the performance of an estimator which is obtained by modifying this least-squares estimator on a set of small probability....

New M-estimators in semi-parametric regression with errors in variables

Cristina Butucea, Marie-Luce Taupin (2008)

Annales de l'I.H.P. Probabilités et statistiques

In the regression model with errors in variables, we observe $n$ i.i.d. copies of $(Y, Z)$ satisfying $Y = f_{\theta_0}(X) + \xi$ and $Z = X + \varepsilon$, involving independent and unobserved random variables $X$, $\xi$, $\varepsilon$ plus a regression function $f_{\theta_0}$, known up to a finite-dimensional parameter $\theta_0$. The common densities of the $X_i$'s and of the $\xi_i$'s are unknown, whereas the distribution of $\varepsilon$ is completely known. We aim at estimating the parameter $\theta_0$ by using the observations $(Y_1, Z_1), \dots, (Y_n, Z_n)$. We propose an estimation procedure based on the least...

On a linearization of regression models

Lubomír Kubáček (1995)

Applications of Mathematics

An approximate value of a parameter in a nonlinear regression model is known in many cases. In such a situation a linearization of the model is possible; however, it is important to recognize whether the difference between the actual value of the parameter and the approximate value causes significant changes, e.g., in the bias of the estimator or in its variance. Some rules suitable for solving this problem are given in the paper.

On Fourier coefficient estimators consistent in the mean-square sense

Waldemar Popiński (1994)

Applicationes Mathematicae

The properties of two recursive estimators of the Fourier coefficients of a regression function $f \in L^2[a,b]$ with respect to a complete orthonormal system of bounded functions $(e_k)$, $k = 1, 2, \dots$, are considered in the case of the observation model $y_i = f(x_i) + \eta_i$, $i = 1, \dots, n$, where the $\eta_i$ are independent random variables with zero mean and finite variance, and the $x_i \in [a,b] \subset \mathbb{R}^1$, $i = 1, \dots, n$, form a random sample from a distribution with density $\varrho = 1/(b-a)$ (uniform distribution) and are independent of the errors $\eta_i$, $i = 1, \dots, n$. Unbiasedness and mean-square...
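
Under the uniform-design model above, the running mean of $(b-a)\, y_i\, e_k(x_i)$ is an unbiased estimate of the $k$-th Fourier coefficient of $f$, which can be updated recursively. A minimal sketch, with the cosine system on $[0,1]$ as an illustrative choice of orthonormal system (not necessarily the one studied in the paper):

```python
import numpy as np

# Sketch of a recursive Fourier coefficient estimator: for observations
# y_i = f(x_i) + eta_i with x_i uniform on [a, b], the running mean of
# (b - a) * y_i * e_k(x_i) estimates the k-th coefficient of f in the
# orthonormal system (e_k), since the uniform density 1/(b - a) cancels.

def e(k, x):
    # orthonormal cosine system on [0, 1]: e_1 = 1, e_k = sqrt(2) cos((k-1) pi x)
    return 1.0 if k == 1 else np.sqrt(2.0) * np.cos((k - 1) * np.pi * x)

def recursive_coeffs(xs, ys, num_coeffs, a=0.0, b=1.0):
    c = np.zeros(num_coeffs)
    for n, (x, y) in enumerate(zip(xs, ys), start=1):
        for k in range(1, num_coeffs + 1):
            obs = (b - a) * y * e(k, x)
            c[k - 1] += (obs - c[k - 1]) / n   # recursive running-mean update
    return c

rng = np.random.default_rng(2)
f = lambda x: 1.0 + np.sqrt(2.0) * np.cos(np.pi * x)   # true coefficients: c_1 = 1, c_2 = 1
xs = rng.uniform(0.0, 1.0, 20000)
ys = f(xs) + 0.1 * rng.standard_normal(xs.size)
c_hat = recursive_coeffs(xs, ys, num_coeffs=3)
```

Each update touches only the newest observation, so the estimator needs no storage of past data; its variance shrinks at rate $1/n$, which is the mean-square consistency the abstract refers to.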

On parameter-effects arrays in non-linear regression models

Rastislav Potocký, Van Ban To (1993)

Applications of Mathematics

Formulas for new three- and four-dimensional parameter-effects arrays corresponding to transformations of parameters in non-linear regression models are given. These formulas make the construction of confidence regions for parameters easier. An example is presented which shows that some care is necessary when a new array is computed.
