Two estimates of the regression coefficient in a bivariate normal distribution are considered: the usual one, based on a sample, and a new one that makes use of additional observations of one of the variables. They are compared with respect to variance. The same is done for two regression lines. The conclusion is that the additional observations are worth using only when the sample is very small.
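The comparison described above can be reproduced by a small Monte Carlo experiment. The abstract does not give the exact form of the new estimator, so the variant below, which pools the extra observations of X into the denominator, is a hypothetical stand-in used only to illustrate the variance comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=10, m=40, beta=0.5, reps=2000):
    """Monte Carlo comparison of two slope estimators in a bivariate
    normal model: the usual least-squares slope from n paired
    observations, and a hypothetical variant whose denominator also
    uses m additional observations of X alone."""
    usual, augmented = [], []
    for _ in range(reps):
        x = rng.standard_normal(n)            # paired sample of X
        y = beta * x + rng.standard_normal(n)
        x_extra = rng.standard_normal(m)      # extra X-only observations
        sxy = np.sum((x - x.mean()) * (y - y.mean()))
        sxx = np.sum((x - x.mean()) ** 2)
        usual.append(sxy / sxx)
        # pool the extra observations into the variance estimate of X,
        # rescaled to the paired-sample size (an illustrative choice)
        x_all = np.concatenate([x, x_extra])
        sxx_pooled = np.sum((x_all - x_all.mean()) ** 2) * n / (n + m)
        augmented.append(sxy / sxx_pooled)
    return np.var(usual), np.var(augmented)

v_usual, v_augmented = simulate()
```

Varying `n` in such an experiment is one way to probe the abstract's conclusion that the extra observations pay off only for very small samples.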
A solution to the marginal problem is obtained in the form of a parametric exponential (Gibbs–Markov) distribution, whose unknown parameters are found by an optimization procedure that agrees with the maximum likelihood (ML) estimate. Since this method is computationally demanding, we also propose an alternative approach, provided the original basis of marginals can be appropriately extended. Then a (numerically feasible) solution can be obtained either by the maximum pseudo-likelihood...
Matrix mathematics provides a powerful tool set for addressing statistical problems; in particular, the theory of matrix ranks and inertias has been developed as an effective methodology for simplifying complicated matrix expressions and for establishing equalities and inequalities that occur in statistical analysis. This paper describes how to establish exact formulas for calculating ranks and inertias of covariances of predictors and estimators of parameter spaces in general linear models (GLMs),...
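The central notion in that abstract, the inertia of a symmetric matrix, is the triple counting its positive, negative, and zero eigenvalues (the rank being the sum of the first two). A minimal sketch of computing it numerically, not the paper's symbolic rank/inertia formulas:

```python
import numpy as np

def inertia(A, tol=1e-10):
    """Inertia of a real symmetric matrix A: the triple (i+, i-, i0)
    of numbers of positive, negative, and zero eigenvalues.
    rank(A) = i+ + i-.  tol guards against round-off near zero."""
    eig = np.linalg.eigvalsh(A)
    return (int(np.sum(eig > tol)),
            int(np.sum(eig < -tol)),
            int(np.sum(np.abs(eig) <= tol)))

A = np.array([[2.0,  0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0,  0.0, 0.0]])
triple = inertia(A)   # (1, 1, 1), so rank(A) = 2
```

Comparing inertias of differences of covariance matrices is one standard way such formulas turn into Löwner-order inequalities between estimators.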
In many cases the regression parameters can be considered as realizations of a random variable. In these situations the minimum mean square error estimator is useful and important. The explicit form of this estimator is given in the case where the covariance matrices of both the random parameters and the error vector are singular.
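A standard way to keep the minimum mean square error (Bayes linear) formula valid under singular covariance matrices is to use the Moore–Penrose pseudoinverse. The sketch below illustrates this device; the paper's explicit form may differ:

```python
import numpy as np

def mmse_estimate(y, X, mu, Sigma_b, Sigma_e):
    """Minimum mean square error estimate of random regression
    parameters b with prior mean mu and covariance Sigma_b, in the
    model y = X b + e with error covariance Sigma_e.  The Moore-
    Penrose pseudoinverse (np.linalg.pinv) replaces the ordinary
    inverse, so the formula remains defined when Sigma_b or
    Sigma_e is singular."""
    S = X @ Sigma_b @ X.T + Sigma_e
    gain = Sigma_b @ X.T @ np.linalg.pinv(S)
    return mu + gain @ (y - X @ mu)

# Degenerate check: with X = I and no noise (Sigma_e = 0, singular),
# the estimate reproduces the observation exactly.
y = np.array([1.0, 2.0])
b_hat = mmse_estimate(y, np.eye(2), np.zeros(2),
                      np.eye(2), np.zeros((2, 2)))
```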
We propose a method, based on a penalised likelihood criterion, for estimating the number of non-zero components of the mean of a Gaussian vector. Following the work of Birgé and Massart on Gaussian model selection, we choose the penalty function so that the resulting estimator minimises the Kullback risk.
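The shape of such a procedure can be sketched as penalised least squares over the number of retained components. The penalty below, sigma^2 * k * (c + log(n/k)), is a simplified stand-in for the Birgé–Massart calibration, not the paper's exact choice:

```python
import numpy as np

def select_k(y, sigma=1.0, c=2.0):
    """Estimate the number of non-zero mean components of a Gaussian
    vector y by minimising, over k, the residual sum of squares after
    keeping the k largest entries plus a penalty that grows with k.
    The penalty sigma^2 * k * (c + log(n/k)) is illustrative only."""
    n = len(y)
    y2 = np.sort(np.abs(y))[::-1] ** 2           # squared entries, descending
    # rss[k] = sum of squares of the entries NOT kept, k = 0..n
    rss = np.concatenate([[np.sum(y2)], np.sum(y2) - np.cumsum(y2)])
    ks = np.arange(n + 1)
    pen = np.where(ks > 0,
                   sigma**2 * ks * (c + np.log(n / np.maximum(ks, 1))),
                   0.0)
    return int(np.argmin(rss + pen))

# Three clearly non-zero components among fifty: the criterion keeps 3.
y = np.zeros(50)
y[:3] = 10.0
k_hat = select_k(y)   # 3
```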
In this paper, we provide a tutorial on multivariate extreme value methods, which allow one to estimate the risk associated with rare events occurring jointly. We draw particular attention to issues related to extremal dependence, and we emphasize the asymptotic independence feature. We apply multivariate extreme value theory to two data sets related to hydrology and meteorology: first, the joint flooding of two rivers, which puts at risk the facilities lying downstream of the confluence; then the joint...
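A standard diagnostic for the extremal dependence discussed above is the coefficient chi(u) = P(F_Y(Y) > u | F_X(X) > u), which tends to 0 as u tends to 1 under asymptotic independence and to a positive limit under asymptotic dependence. A minimal empirical version, on rank-transformed margins:

```python
import numpy as np

def chi_hat(x, y, u=0.95):
    """Empirical estimate of chi(u) = P(F_Y(Y) > u | F_X(X) > u).
    Margins are approximated by ranks/n, so the estimate is
    distribution-free in each margin."""
    n = len(x)
    fx = np.argsort(np.argsort(x)) / n   # approximate uniform margin of x
    fy = np.argsort(np.argsort(y)) / n   # approximate uniform margin of y
    joint = np.mean((fx > u) & (fy > u))
    return joint / (1 - u)

# Perfect dependence: chi(u) should be close to 1.
x = np.arange(1000.0)
chi = chi_hat(x, x, u=0.95)
```

Plotting `chi_hat` over a grid of `u` values approaching 1 is the usual way to judge whether joint river levels are asymptotically dependent or independent.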
In multivariate linear statistical models with a normally distributed observation matrix, the structure of the covariance matrix plays an important role when confidence regions must be determined. In the paper it is assumed that the covariance matrix is a linear combination of known symmetric positive semidefinite matrices and unknown parameters (variance components) that are unbiasedly estimable. Insensitivity regions are then found for them, which enables one to decide whether a plug-in approach can...
Let (X1, X2) be a random vector with distribution function F. The probability integral transform (PIT) is the one-dimensional random variable P2 = F(X1, X2). The expression for its distribution function, and a simulation algorithm in terms of the quantile function, given by Chakak et al. [2000] when the distribution is absolutely continuous, are extended to distributions that may have singularities. Maximum likelihood estimation of the dependence parameter based...
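The distribution of the bivariate PIT is easy to see by simulation in the simplest absolutely continuous case. For independent uniform margins, F(x1, x2) = x1*x2 and the cdf of P2 is the Kendall function K(t) = t - t*log(t); the sketch below uses this textbook case as a check, not the singular extensions treated in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def pit_samples(n=100_000):
    """Simulate the bivariate probability integral transform
    P2 = F(X1, X2) for two independent Uniform(0,1) variables,
    where F(x1, x2) = x1 * x2."""
    u = rng.random(n)
    v = rng.random(n)
    return u * v

p2 = pit_samples()
t = 0.3
empirical = np.mean(p2 <= t)
theoretical = t - t * np.log(t)   # Kendall function of the independence copula
```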