Displaying 321 – 340 of 388


Stability of stochastic optimization problems - nonmeasurable case

Petr Lachout (2008)

Kybernetika

This paper deals with stability of stochastic optimization problems in a general setting. The objective function is defined on a metric space and depends on a probability measure which is unknown but estimated from empirical observations. We try to derive stability results without precise knowledge of the problem structure and without measurability assumptions. Moreover, ε-optimal solutions are considered. The setup is illustrated on the consistency of an ε-M-estimator in a linear regression model.

Statistical analysis of diabetes mellitus

Hilmar Drygas (2009)

Discussiones Mathematicae Probability and Statistics

This paper deals with an application of regression analysis to the regulation of blood sugar under diabetes mellitus. Section 2 gives a description of Gram-Schmidt orthogonalization, while Section 3 discusses the difference between Gauss-Markov estimation and least squares estimation. Section 4 is devoted to the statistical analysis of blood sugar during the night. The response, the change of blood sugar, is explained by three variables: time, food and physical activity ("Bewegung"). At the beginning...
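As a side note, the Gram-Schmidt orthogonalization described in Section 2 of this paper can be sketched in a few lines. This is an illustrative implementation (modified Gram-Schmidt), not the author's code:

```python
def gram_schmidt(vectors):
    """Orthogonalize a list of vectors (modified Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            # subtract the projection of w onto the basis vector b
            coef = sum(wi * bi for wi, bi in zip(w, b)) / sum(bi * bi for bi in b)
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        if any(abs(wi) > 1e-12 for wi in w):  # drop (near-)dependent vectors
            basis.append(w)
    return basis

# Example: orthogonalize two vectors in R^3
ortho = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```

The resulting vectors are mutually orthogonal, which is what makes the regression computations in such analyses numerically convenient.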

Statistical models to study subtoxic concentrations for some standard mutagens in three colon cancer cell lines.

Xavier Bardina, Laura Fernández, Elisabet Piñeiro, Jordi Surrallés, Antonia Velázquez (2006)

SORT

The aim of this work is to propose models to study the toxic effect of different concentrations of some standard mutagens in different colon cancer cell lines. We find estimates and, by means of an inverse regression problem, confidence intervals for the subtoxic concentration, that is, the concentration that reduces by thirty percent the number of colonies obtained in the absence of mutagen.
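The inverse-regression idea can be sketched for a simple linear dose-response model: fit the response as a function of concentration, then solve for the concentration giving a 30% reduction. The data below are hypothetical, purely for illustration, and the actual models and intervals in the paper differ:

```python
import numpy as np

# Hypothetical dose-response data (NOT from the paper): mutagen
# concentration vs. number of colonies formed.
conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
colonies = np.array([100.0, 92.0, 81.0, 73.0, 60.0])

# Fit a simple linear model: colonies = a + b * conc
b, a = np.polyfit(conc, colonies, 1)  # polyfit returns (slope, intercept)

# Inverse regression: solve a + b * c = 0.7 * a for c, the concentration
# that reduces the zero-dose colony count a by thirty percent.
c30 = -0.3 * a / b
```

Confidence intervals for `c30` then require propagating the uncertainty of the fitted coefficients, which is the inverse regression problem the authors address.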

Strange Design Points in Linear Regression

Andrej Pázman (2011)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

We discuss, partly through examples, several intuitively unexpected results in a standard linear regression model. We demonstrate that direct observations of the regression curve at a given point cannot be substituted by observations at two very close neighboring points. On the contrary, we show that observations at two distant design points improve the variance of the estimator. In an experiment with correlated observations we show somewhat unexpected conditions under which a design point gives no...
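The effect of distant design points on the estimator's variance is easy to check for the OLS slope in simple linear regression, whose variance is σ²/S_xx — a textbook fact, not the paper's specific example:

```python
def slope_variance(xs, sigma2=1.0):
    """Variance of the OLS slope estimator: sigma^2 / sum (x_i - xbar)^2."""
    xbar = sum(xs) / len(xs)
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sigma2 / sxx

# Two close design points give a far larger variance than two distant ones
var_close = slope_variance([0.0, 0.1])
var_far = slope_variance([0.0, 10.0])
```

Here `var_close` is 200 while `var_far` is 0.02, which is the intuition behind spreading design points apart.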

Strong law of large numbers for additive extremum estimators

João Tiago Mexia, Pedro Corte Real (2001)

Discussiones Mathematicae Probability and Statistics

Extremum estimators are obtained by maximizing or minimizing a function of the sample and of the parameters with respect to the parameters. When the function to maximize or minimize is a sum of subfunctions, each depending on one observation, the extremum estimators are additive. Maximum likelihood estimators are additive extremum estimators whenever the observations are independent. Another instance of additive extremum estimators is the least squares estimator for multiple regression when the usual assumptions...
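For simple linear regression, the additive structure of the least squares criterion (a sum of per-observation subfunctions) looks as follows. This is a generic sketch with made-up data, not an example from the paper:

```python
# Least squares as an additive extremum estimator: the criterion is a sum of
# subfunctions, each depending on a single observation (x_i, y_i).
def criterion(beta, xs, ys):
    return sum((y - beta[0] - beta[1] * x) ** 2 for x, y in zip(xs, ys))

# Made-up data for illustration
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]

# Closed-form minimizer (intercept, slope) of the additive criterion
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
beta_hat = (ybar - slope * xbar, slope)
```

Because the criterion decomposes observation by observation, laws of large numbers can be applied term-wise, which is the structure the paper's strong consistency results exploit.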

Suitability of linearization of nonlinear problems not only in biology and medicine

Jana Vrbková (2009)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

Biology and medicine are not the only fields that present problems unsolvable by a linear-models approach. One way to overcome this obstacle is to use nonlinear methods, even though these are not as thoroughly explored. Another possibility is to linearize, transforming the originally nonlinear task so that it becomes accessible to linear methods. In this article I investigate an easy and quick criterion for verifying the suitability of linearizing nonlinear problems via Taylor series expansion so that...
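To illustrate the kind of linearization the article examines: a first-order Taylor expansion of a nonlinear model in its parameters around a prior guess. The exponential model below is a generic example, not the article's criterion:

```python
import math

def f(x, t1, t2):
    """A nonlinear model: f(x) = t1 * exp(t2 * x)."""
    return t1 * math.exp(t2 * x)

def f_lin(x, t1, t2, t1_0, t2_0):
    """First-order Taylor expansion of f in (t1, t2) around (t1_0, t2_0)."""
    d1 = math.exp(t2_0 * x)              # df/dt1 at the expansion point
    d2 = t1_0 * x * math.exp(t2_0 * x)   # df/dt2 at the expansion point
    return f(x, t1_0, t2_0) + d1 * (t1 - t1_0) + d2 * (t2 - t2_0)

# Near the expansion point the linearization is accurate...
err_small = abs(f(1.0, 1.05, 0.55) - f_lin(1.0, 1.05, 0.55, 1.0, 0.5))
# ...and it degrades as the parameters move away from it
err_large = abs(f(1.0, 2.0, 1.5) - f_lin(1.0, 2.0, 1.5, 1.0, 0.5))
```

Whether such a linearization is acceptable depends on how far the true parameters can lie from the expansion point, which is exactly what a suitability criterion must quantify.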

Test of linear hypothesis in multivariate models

Lubomír Kubáček (2007)

Kybernetika

In a regular multivariate regression model, a test of a linear hypothesis depends on the structure and the knowledge of the covariance matrix. Several test procedures are given for the cases where the covariance matrix is totally unknown, partially unknown (variance components), or totally known.

Testing a sub-hypothesis in linear regression models with long memory covariates and errors

Hira L. Koul, Donatas Surgailis (2008)

Applications of Mathematics

This paper considers the problem of testing a sub-hypothesis in homoscedastic linear regression models when the covariate and error processes form independent long memory moving averages. The asymptotic null distribution of the likelihood ratio type test based on Whittle quadratic forms is shown to be a chi-square distribution. Additionally, the estimators of the slope parameters obtained by minimizing the Whittle dispersion are seen to be n^{1/2}-consistent for all values of the long memory parameters...

Testing hypotheses in universal models

Eva Fišerová (2006)

Discussiones Mathematicae Probability and Statistics

A linear regression model in which the design matrix does not have full column rank and the covariance matrix is singular is considered. The problem of testing hypotheses on the mean value parameters is studied. Conditions under which a hypothesis can be tested, or need not be tested, are given. Explicit forms of test statistics based on residual sums of squares are presented.

Tests in weakly nonlinear regression model

Lubomír Kubáček, Eva Tesaříková (2005)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

In a weakly nonlinear regression model, a weakly nonlinear hypothesis can be tested by linear methods if information on the actual values of the model parameters is available and a certain condition is satisfied. In other words, we must know that the unknown parameters lie, with sufficiently high probability, in the so-called linearization region. The aim of the paper is to determine this region.
