Linear prediction of long-range dependent time series

Fanny Godet (2009)

ESAIM: Probability and Statistics

We present two approaches for linear prediction of long-memory time series. The first approach consists in truncating the Wiener-Kolmogorov predictor by restricting the observations to the last k terms, which are the only data available in practice. We derive the asymptotic behaviour of the mean-squared error as k tends to +∞. The second predictor is the finite linear least-squares predictor, i.e. the projection of the forecast value onto the last k observations. It is shown that these two predictors...
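
As a concrete illustration of the second predictor, the projection onto the last k observations can be computed by solving a Toeplitz system built from the autocovariances. A minimal sketch (not the paper's code), using the ARFIMA(0, d, 0) autocovariance as a stand-in long-memory example:

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.special import gamma as G

def arfima_acvf(d, k, sigma2=1.0):
    """Autocovariances gamma(0..k) of an ARFIMA(0, d, 0) process,
    a standard long-memory example (0 < d < 1/2)."""
    g = np.empty(k + 1)
    g[0] = sigma2 * G(1 - 2 * d) / G(1 - d) ** 2
    for h in range(1, k + 1):
        g[h] = g[h - 1] * (h - 1 + d) / (h - d)   # standard recursion
    return g

def finite_past_predictor(acvf):
    """Coefficients a_1..a_k of the best linear predictor of X_{n+1}
    from X_n, ..., X_{n-k+1}, and its mean-squared error, obtained by
    solving the Toeplitz system Gamma_k a = (gamma(1), ..., gamma(k))."""
    a = solve_toeplitz(acvf[:-1], acvf[1:])
    mse = acvf[0] - a @ acvf[1:]
    return a, mse

# The k-term prediction error decreases only slowly with k under long memory.
for k in (5, 50, 500):
    _, mse = finite_past_predictor(arfima_acvf(d=0.3, k=k))
    print(k, round(mse, 6))
```

The printed mean-squared errors decrease toward the infinite-past prediction error as k grows; under long memory this convergence is slow, which is the regime studied in the paper.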

Linear rescaling of the stochastic process

Petr Lachout (1992)

Commentationes Mathematicae Universitatis Carolinae

This paper discusses the limits in distribution of a process Y under joint rescaling of space and time. The results due to Lamperti (1962), Weissman (1975), Hudson and Mason (1982) and Laha and Rohatgi (1982) are improved here.
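
For orientation, the joint space-time rescaling in question is the one appearing in Lamperti's theorem: whenever a nondegenerate limit exists, it must be self-similar. Schematically (standard background, not a statement of the paper's improved results):

```latex
\[
  \frac{X(at)}{b(a)} \xrightarrow[a \to \infty]{\;d\;} Y(t)
  \quad\Longrightarrow\quad
  \{Y(ct)\}_{t \ge 0} \stackrel{d}{=} \{c^{H} Y(t)\}_{t \ge 0}
  \quad \text{for some } H \ge 0,
\]
```

with the space normalization b regularly varying of index H.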

Linear versus quadratic estimators in linearized models

Lubomír Kubáček (2004)

Applications of Mathematics

In nonlinear regression models, an approximate value of an unknown parameter is frequently at our disposal. The model is then linearized and a linear estimate of the parameter can be calculated. Some criteria for recognizing whether a linearization is possible are developed. If they are not satisfied, it is necessary either to take some quadratic corrections into account or to use the nonlinear least squares method. The aim of the paper is to find some criteria for an...
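
For orientation, the linearization in question replaces the regression function by its first-order Taylor expansion at the approximate parameter value; the linearization criteria control the size of the neglected quadratic term. Schematically (notation chosen here for illustration, not taken from the paper):

```latex
\[
  Y = f(\beta) + \varepsilon, \qquad
  f(\beta_0 + \delta\beta) \approx f(\beta_0) + F\,\delta\beta
     + \tfrac{1}{2}\,\kappa(\delta\beta), \qquad
  F = \left.\frac{\partial f}{\partial \beta^{\top}}\right|_{\beta_0}, \quad
  \kappa_i(\delta\beta) = \delta\beta^{\top} H_i\, \delta\beta,
\]
```

where H_i is the Hessian of the i-th component of f at β₀; the linearized model keeps only the term F δβ, and quadratic corrections reinstate κ.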

Linearization conditions for regression models with unknown variance parameter

Anna Jenčová (2000)

Applications of Mathematics

For nonlinear regression models, methods and procedures have been developed to obtain estimates of the parameters. These methods are much more complicated than those used when the model is linear. Moreover, unlike in the linear case, the properties of the resulting estimators are unknown and usually depend on the true values of the estimated parameters. It is sometimes possible to approximate the nonlinear model by a linear one and use the much more developed linear...

Linearization regions for a confidence ellipsoid in singular nonlinear regression models

Lubomír Kubáček, Eva Tesaříková (2009)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

Constructing confidence regions in nonlinear regression models is difficult, mainly when the dimension of the estimated vector parameter is large; singularity of the model is a further problem. A simple approximation of the exact confidence region is therefore welcome. The aim of the paper is to give a small modification of the confidence ellipsoid constructed in a linearized model which, under some conditions, is sufficient as an approximation of the exact confidence region.
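
The linearized confidence ellipsoid being modified is of the standard form (notation assumed here for illustration): with F the Jacobian of the regression function at the approximate parameter value, Σ the covariance matrix of the observation vector and β̂ the linearized estimator,

```latex
\[
  \mathcal{E}_{1-\alpha} = \left\{ \beta :\;
    (\beta - \hat\beta)^{\top} F^{\top} \Sigma^{-1} F\, (\beta - \hat\beta)
    \le \chi^{2}_{k}(1-\alpha) \right\},
\]
```

where χ²_k(1−α) is the (1−α)-quantile of the chi-square distribution with k = dim β degrees of freedom; in the singular case Σ⁻¹ is replaced by a generalized inverse and the degrees of freedom are reduced accordingly.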

Linearization regions for confidence ellipsoids

Lubomír Kubáček, Eva Tesaříková (2008)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

If the observation vector in a nonlinear regression model is normally distributed, then an algorithm for determining the exact (1 − α)-confidence region for the parameter of the mean value of the observation vector is well known. However, its numerical realization is tedious, and it is therefore of some interest to find a condition which enables us to construct this region in a simpler way.

Linearized models with constraints of type I

Lubomír Kubáček (2003)

Applications of Mathematics

In nonlinear regression models with constraints, a linearization of the model leads to a bias in the estimators of the parameters of the mean value of the observation vector. Some criteria for recognizing whether a linearization is possible are developed. If they are not satisfied, it is necessary to decide whether quadratic corrections can improve the estimator. The aim of the paper is to contribute to the solution of this problem.

Linearized regression model with constraints of type II

Lubomír Kubáček (2003)

Applications of Mathematics

A linearization of the nonlinear regression model causes a bias in the estimators of the model parameters. It can be eliminated, e.g., either by a proper choice of the point at which the model is expanded into a Taylor series, or by quadratic corrections of the linear estimators. The aim of the paper is to obtain formulae for the biases and variances of estimators in linearized models, and also for corrected estimators.
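
To indicate where the bias comes from, retaining the second-order Taylor term shows that the expected value of the fitted regression function acquires a curvature term; schematically (a standard second-order expansion, not a formula quoted from the paper),

```latex
\[
  \mathrm{E}\big[f_i(\hat\beta)\big] \;\approx\; f_i(\beta)
    + \tfrac{1}{2}\,\operatorname{tr}\!\big(H_i \operatorname{Var}(\hat\beta)\big),
    \qquad i = 1, \dots, n,
\]
```

so the bias is driven by the Hessians H_i of the regression function; a quadratic correction subtracts an estimate of this term, while a development point closer to the true parameter value makes the neglected term smaller.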

Linear-quadratic estimators in a special structure of the linear model

Gejza Wimmer (1995)

Applications of Mathematics

The paper deals with the linear model with uncorrelated observations in which the dispersions of the observed values are linear-quadratic functions of the unknown parameters of the mean (measurements by devices of a given class of precision). The locally best linear-quadratic unbiased estimators are investigated as improvements of the locally best linear unbiased estimators in the case that the design matrix has no, one, or two linearly dependent rows.
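
The phrase "devices of a given class of precision" refers to measurement errors whose standard deviation depends on the (unknown) value being measured, which is what makes the dispersions linear-quadratic in the mean parameters. One schematic form such a structure can take (an illustrative assumption, not necessarily the paper's exact parametrization):

```latex
\[
  \mathrm{E}[Y] = X\beta, \qquad
  \operatorname{var}(Y_i) = a_i + b_i^{\top}\beta + \beta^{\top} C_i\, \beta,
  \qquad i = 1, \dots, n,
\]
```

e.g. a device with relative precision c gives var(Y_i) = c²(x_iᵀβ)², a purely quadratic special case.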

Linking population genetics to phylogenetics

Paul G. Higgs (2008)

Banach Center Publications

Population geneticists study the variability of gene sequences within a species, whereas phylogeneticists compare gene sequences between species and usually have only one representative sequence per species. Stochastic models in population genetics are used to determine probability distributions for gene frequencies and to predict the probability that a new mutation will become fixed in a population. Stochastic models in phylogenetics describe the substitution process in the single sequence that...
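
A classical instance of the fixation probabilities mentioned here is Kimura's diffusion-approximation formula (standard background, included for orientation only): for a new mutant with selection coefficient s arising as a single copy in a diploid population of effective size N, so with initial frequency 1/(2N),

```latex
\[
  u \;=\; \frac{1 - e^{-2s}}{1 - e^{-4Ns}}
  \;\xrightarrow[\;s \to 0\;]{}\; \frac{1}{2N},
\]
```

the neutral limit being the reciprocal of the number of gene copies.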

Local asymptotic normality for normal inverse gaussian Lévy processes with high-frequency sampling

Reiichiro Kawai, Hiroki Masuda (2013)

ESAIM: Probability and Statistics

We prove the local asymptotic normality for the full parameters of the normal inverse Gaussian Lévy process X, when we observe high-frequency data X_{Δn}, X_{2Δn}, ..., X_{nΔn} with sampling mesh Δn → 0 and terminal sampling time nΔn → ∞. The rate of convergence turns out to be (√(nΔn), √(nΔn), √n, √n) for the dominating parameter (α, β, δ, μ), where α stands for the heaviness of the tails, β the degree of skewness, δ the scale, and μ the location. The essential feature in our study is that the suitably normalized...
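
For reference, local asymptotic normality means that the log-likelihood ratio admits a quadratic expansion (the standard definition; here the rate matrix would be diag(√(nΔn), √(nΔn), √n, √n)):

```latex
\[
  \log \frac{dP^{n}_{\theta + r_n h}}{dP^{n}_{\theta}}
  \;=\; h^{\top} Z_{n,\theta} \;-\; \tfrac{1}{2}\, h^{\top} \mathcal{I}(\theta)\, h \;+\; o_{P}(1),
  \qquad Z_{n,\theta} \xrightarrow{\;d\;} \mathcal{N}\big(0, \mathcal{I}(\theta)\big),
\]
```

where r_n is the rate matrix and I(θ) the Fisher information matrix for θ = (α, β, δ, μ); LAN yields local asymptotic minimax bounds for estimators of θ.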

Local Asymptotic Normality Property for Lacunar Wavelet Series multifractal model

Jean-Michel Loubes, Davy Paindaveine (2011)

ESAIM: Probability and Statistics

We consider a lacunar wavelet series function observed with an additive Brownian motion. Such functions are statistically characterized by two parameters: the first governs the lacunarity of the wavelet coefficients, while the second governs their intensity. In this paper, we establish the local asymptotic normality (LAN) of the model with respect to this pair of parameters. This enables us to prove the optimality of an estimator for the lacunarity parameter, and to build optimal...
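
As a rough illustration of such a two-parameter model, one can simulate a toy lacunary (Haar) wavelet series in which, at dyadic scale j, a coefficient survives with probability 2^(−ηj) (lacunarity) and then has magnitude 2^(−αj) (intensity), observed with additive Brownian noise. The parametrization below is an assumption for illustration only, not the one used in the paper:

```python
import numpy as np

def lacunary_wavelet_series(alpha, eta, J=10, seed=None):
    """Toy lacunary wavelet series sampled on 2**J points of [0, 1):
    at scale j each Haar coefficient is nonzero with prob. 2**(-eta*j)
    (lacunarity) and then has magnitude 2**(-alpha*j) (intensity)."""
    rng = np.random.default_rng(seed)
    n = 2 ** J
    t = np.arange(n) / n
    f = np.zeros(n)
    for j in range(J):
        for k in range(2 ** j):
            if rng.random() < 2.0 ** (-eta * j):
                # Haar-type bump: +1 on the left half of [k/2^j, (k+1)/2^j),
                # -1 on the right half, scaled to the prescribed magnitude.
                supp = ((t >= k / 2 ** j) & (t < (k + 1) / 2 ** j)).astype(float)
                left = ((t >= k / 2 ** j) & (t < (k + 0.5) / 2 ** j)).astype(float)
                psi = 2.0 * left - supp
                f += rng.choice((-1.0, 1.0)) * 2.0 ** (-alpha * j) * psi
    return t, f

def observe_with_brownian_noise(f, sigma=0.05, seed=None):
    """Additive Brownian-motion observation of the signal on [0, 1]."""
    rng = np.random.default_rng(seed)
    increments = rng.normal(scale=sigma / np.sqrt(len(f)), size=len(f))
    return f + np.cumsum(increments)

t, f = lacunary_wavelet_series(alpha=0.8, eta=0.5, J=10, seed=0)
y = observe_with_brownian_noise(f, seed=1)
```

Estimating (η, α) from the noisy observation y is the statistical problem whose LAN property the paper establishes.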
