Displaying 321 – 340 of 657


Locally weighted neural networks for an analysis of the biosensor response

Romas Baronas, Feliksas Ivanauskas, Romualdas Maslovskis, Marijus Radavičius, Pranas Vaitkus (2007)

Kybernetika

This paper presents a semi-global mathematical model for the analysis of the signal of amperometric biosensors. Artificial neural networks were applied to the analysis of the biosensor response to multi-component mixtures. A large amount of learning and test data was synthesized by computer simulation of the biosensor response. The biosensor signal was analyzed with respect to the concentration of each component of the mixture. The paradigm of locally weighted linear regression was used for retraining...
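The locally weighted linear regression mentioned in the abstract can be sketched as follows; the data, function name, and Gaussian kernel bandwidth `tau` are illustrative assumptions, not details from the paper:

```python
import numpy as np

def locally_weighted_regression(x_train, y_train, x_query, tau=0.5):
    """Predict y at x_query with a linear fit weighted by a Gaussian
    kernel centred at the query point (bandwidth tau)."""
    X = np.column_stack([np.ones_like(x_train), x_train])  # design matrix
    w = np.exp(-((x_train - x_query) ** 2) / (2.0 * tau ** 2))
    W = np.diag(w)
    # Weighted normal equations: (X^T W X) beta = X^T W y
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
    return beta[0] + beta[1] * x_query

# Hypothetical calibration-style data: noisy response over a range of inputs
rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 60)
y = np.sin(x) + 0.05 * rng.standard_normal(60)
print(locally_weighted_regression(x, y, 1.5, tau=0.3))  # close to sin(1.5)
```

Because a fresh weighted fit is solved at every query point, the method adapts locally, which is what makes it suitable for retraining on new regions of the response surface.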

M-estimation in nonlinear regression for longitudinal data

Martina Orsáková (2007)

Kybernetika

The longitudinal regression model $Z_{ij} = m(\theta_0, \mathbb{X}_i(T_{ij})) + \varepsilon_{ij}$, where $Z_{ij}$ is the $j$th measurement of the $i$th subject at random time $T_{ij}$, $m$ is the regression function, $\mathbb{X}_i(T_{ij})$ is a predictable covariate process observed at time $T_{ij}$ and $\varepsilon_{ij}$ is a noise term, is studied in the marked point process framework. In this paper we introduce the assumptions which guarantee the consistency and asymptotic normality of a smooth M-estimator of the unknown parameter $\theta_0$.

M-estimators of structural parameters in pseudolinear models

Friedrich Liese, Igor Vajda (1999)

Applications of Mathematics

Real-valued M-estimators $\hat\theta_n := \arg\min_\theta \frac{1}{n}\sum \rho(Y_i - \tau(\theta))$ in a statistical model with observations $Y_i \sim F_{\theta_0}$ are replaced by $\mathbb{R}^p$-valued M-estimators $\hat\beta_n := \arg\min_\beta \frac{1}{n}\sum \rho(Y_i - \tau(u(z_i^T\beta)))$ in a new model with observations $Y_i \sim F_{u(z_i^T\beta_0)}$, where $z_i \in \mathbb{R}^p$ are regressors, $\beta_0 \in \mathbb{R}^p$ is a structural parameter and $u\colon \mathbb{R} \to \mathbb{R}$ is a structural function of the new model. Sufficient conditions for the consistency of $\hat\beta_n$ are derived, motivated by the sufficiency conditions for the simpler “parent estimator” $\hat\theta_n$. The result is a general method of consistent estimation in a class of nonlinear (pseudolinear) statistical problems. If...
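As a generic illustration of M-estimation (not the paper's pseudolinear setup), a Huber M-estimate of location can be computed by iteratively reweighted least squares; the data, cutoff `c`, and helper name below are hypothetical:

```python
import numpy as np

def huber_location(y, c=1.345, tol=1e-8, max_iter=100):
    """M-estimate of location minimising sum rho(y_i - theta) for the
    Huber rho, solved by iteratively reweighted least squares."""
    theta = np.median(y)  # robust starting point
    for _ in range(max_iter):
        r = y - theta
        # Huber weights psi(r)/r: 1 inside [-c, c], c/|r| outside
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        theta_new = np.sum(w * y) / np.sum(w)
        if abs(theta_new - theta) < tol:
            break
        theta = theta_new
    return theta

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(2.0, 1.0, 95), rng.normal(20.0, 1.0, 5)])
print(huber_location(y))  # near 2, largely unaffected by the outliers
```

The bounded influence of the Huber rho is what motivates the consistency question in the abstract: the estimator still targets the true parameter while down-weighting gross errors.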

Making use of incomplete observations for regression in bivariate normal model

Joanna Tarasińska (2003)

Applications of Mathematics

Two estimates of the regression coefficient in bivariate normal distribution are considered: the usual one based on a sample and a new one making use of additional observations of one of the variables. They are compared with respect to variance. The same is done for two regression lines. The conclusion is that the additional observations are worth using only when the sample is very small.

Matrix rank and inertia formulas in the analysis of general linear models

Yongge Tian (2017)

Open Mathematics

Matrix mathematics provides a powerful tool set for addressing statistical problems; in particular, the theory of matrix ranks and inertias has been developed into an effective methodology for simplifying complicated matrix expressions and for establishing equalities and inequalities that occur in statistical analysis. This paper describes how to establish exact formulas for calculating the ranks and inertias of covariances of predictors and estimators of parameter spaces in general linear models (GLMs),...

Matrix rank/inertia formulas for least-squares solutions with statistical applications

Yongge Tian, Bo Jiang (2016)

Special Matrices

Least-Squares Solution (LSS) of a linear matrix equation and Ordinary Least-Squares Estimator (OLSE) of unknown parameters in a general linear model are two standard algebraic methods in computational mathematics and regression analysis. Assume that a symmetric quadratic matrix-valued function Φ(Z) = Q − ZPZ′ is given, where Z is taken as the LSS of the linear matrix equation AZ = B. In this paper, we establish a group of formulas for calculating maximum and minimum ranks and inertias of Φ(Z)...
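A minimal numerical sketch of the objects in this abstract, assuming the Moore-Penrose pseudoinverse as the least-squares solution and the usual definition of inertia as the counts of positive, negative and zero eigenvalues of a symmetric matrix; the example matrices are invented:

```python
import numpy as np

def lss_and_inertia(A, B, P, Q, tol=1e-10):
    """Minimum-norm least-squares solution Z of AZ = B, then the
    inertia (n+, n-, n0) of the symmetric matrix Phi = Q - Z P Z'."""
    Z = np.linalg.pinv(A) @ B                    # Moore-Penrose LSS of AZ = B
    Phi = Q - Z @ P @ Z.T
    eig = np.linalg.eigvalsh((Phi + Phi.T) / 2)  # symmetrise for safety
    n_pos = int(np.sum(eig > tol))
    n_neg = int(np.sum(eig < -tol))
    n_zero = eig.size - n_pos - n_neg
    return Z, (n_pos, n_neg, n_zero)

A = np.array([[1.0, 0.0], [0.0, 0.0]])  # rank-deficient: AZ = B has no exact solution
B = np.array([[1.0], [1.0]])
P = np.array([[1.0]])
Q = np.eye(2)
Z, inertia = lss_and_inertia(A, B, P, Q)
print(inertia)  # (1, 0, 1): rank of Phi is n_pos + n_neg = 1
```

The paper's contribution is to obtain the extremal values of such ranks and inertias in closed form over all least-squares solutions Z, rather than evaluating them numerically for one particular Z as above.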

Minimum mean square error estimation

Gejza Wimmer (1979)

Aplikace matematiky

In many cases we can consider the regression parameters as realizations of a random variable. In these situations the minimum mean square error estimator seems to be useful and important. The explicit form of this estimator is given in the case that both the covariance matrices of the random parameters and those of the error vector are singular.
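The linear MMSE estimator of random regression parameters has a standard closed form; the sketch below uses a pseudoinverse to stand in for the generalized inverses that the singular-covariance case in the paper requires, and the demo data are made up:

```python
import numpy as np

def linear_mmse(X, y, beta_mean, cov_beta, cov_eps):
    """Linear minimum mean square error estimate of random coefficients
    beta in y = X beta + eps.  np.linalg.pinv stands in for the
    generalized inverses needed when covariance matrices are singular."""
    S = X @ cov_beta @ X.T + cov_eps        # covariance of the observation y
    gain = cov_beta @ X.T @ np.linalg.pinv(S)
    return beta_mean + gain @ (y - X @ beta_mean)

# Made-up demo: data generated with beta = 2, prior mean 0
X = np.array([[1.0], [2.0], [3.0]])
y = X @ np.array([2.0])
est = linear_mmse(X, y, beta_mean=np.zeros(1),
                  cov_beta=np.eye(1), cov_eps=0.01 * np.eye(3))
print(est)  # shrinks slightly toward the prior mean 0, so just under 2
```

Treating the parameters as random is what lets the estimator blend prior information (`beta_mean`, `cov_beta`) with the data, which is the setting the abstract describes.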

Model selection for (auto-)regression with dependent data

Yannick Baraud, F. Comte, G. Viennet (2001)

ESAIM: Probability and Statistics

In this paper, we study the problem of nonparametric estimation of an unknown regression function from dependent data with sub-Gaussian errors. As a particular case, we handle the autoregressive framework. For this purpose, we consider a collection of finite dimensional linear spaces (e.g. linear spaces spanned by wavelets or piecewise polynomials on a possibly irregular grid) and we estimate the regression function by a least-squares estimator built on a data driven selected linear space among...
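A toy version of least-squares model selection over a collection of linear spaces, here polynomial spaces of increasing degree scored with a Mallows-C_p-type penalty, which is a simplification of the penalized criteria studied in papers like this one; all names and constants are illustrative:

```python
import numpy as np

def select_degree(x, y, max_degree=8, sigma2=1.0, pen_const=2.0):
    """Pick a polynomial degree by penalised least squares: minimise
    ||y - proj_m(y)||^2 + pen_const * sigma2 * dim(m) over candidate
    linear spaces m (polynomials of degree 0..max_degree)."""
    best_deg, best_crit = 0, np.inf
    for d in range(max_degree + 1):
        V = np.vander(x, d + 1, increasing=True)   # basis of dimension d + 1
        coef, *_ = np.linalg.lstsq(V, y, rcond=None)
        rss = np.sum((y - V @ coef) ** 2)
        crit = rss + pen_const * sigma2 * (d + 1)  # penalty grows with dimension
        if crit < best_crit:
            best_deg, best_crit = d, crit
    return best_deg

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 + 2.0 * x ** 2 + 0.1 * rng.standard_normal(200)
print(select_degree(x, y, sigma2=0.01))  # expect a small degree near the true value 2
```

The penalty term is what prevents the criterion from always favouring the largest space; the papers in this listing prove risk bounds for such data-driven choices even under dependence.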

Model selection for (auto-)regression with dependent data

Yannick Baraud, F. Comte, G. Viennet (2010)

ESAIM: Probability and Statistics

In this paper, we study the problem of nonparametric estimation of an unknown regression function from dependent data with sub-Gaussian errors. As a particular case, we handle the autoregressive framework. For this purpose, we consider a collection of finite dimensional linear spaces (e.g. linear spaces spanned by wavelets or piecewise polynomials on a possibly irregular grid) and we estimate the regression function by a least-squares estimator built on a data driven selected linear space among...

Model selection for regression on a random design

Yannick Baraud (2002)

ESAIM: Probability and Statistics

We consider the problem of estimating an unknown regression function when the design is random with values in $\mathbb{R}^k$. Our estimation procedure is based on model selection and does not rely on any prior information on the target function. We start with a collection of linear functional spaces and build, on a data selected space among this collection, the least-squares estimator. We study the performance of an estimator which is obtained by modifying this least-squares estimator on a set of small probability....
