Displaying similar documents to “Regularization for high-dimensional covariance matrix”

Ridge estimation of covariance matrix from data in two classes

Yi Zhou, Bin Zhang (2024)

Applications of Mathematics

This paper deals with the problem of estimating a covariance matrix from data in two classes: (1) good data with the covariance matrix of interest and (2) contamination coming from a Gaussian distribution with a different covariance matrix. A ridge penalty is introduced to address the challenges of high-dimensional covariance estimation under the two-class data model. The resulting ridge estimator of the covariance matrix has a uniform expression and remains positive definite,...
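
The abstract stops short of the estimator's explicit form. Purely as an illustration of the general idea, the sketch below (Python, assuming a convex-combination shrinkage toward a scaled identity, not the paper's two-class estimator) shows why a ridge-type estimate stays positive definite even when the dimension exceeds the sample size:

    import numpy as np

    def ridge_covariance(X, lam):
        # Shrink the sample covariance toward a scaled-identity target.
        # For 0 < lam <= 1 the result is positive definite even when the
        # number of variables p exceeds the number of observations n.
        n, p = X.shape
        Xc = X - X.mean(axis=0)
        S = Xc.T @ Xc / n                       # sample covariance
        target = (np.trace(S) / p) * np.eye(p)  # scaled-identity target
        return (1.0 - lam) * S + lam * target

    # High-dimensional case: p = 100 variables, n = 30 observations.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((30, 100))
    Sigma_hat = ridge_covariance(X, lam=0.3)
    print(np.linalg.eigvalsh(Sigma_hat).min() > 0)  # True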

Inverting covariance matrices

Czesław Stępniak (2006)

Discussiones Mathematicae Probability and Statistics

Some useful tools for modelling linear experiments with a general multi-way classification of the random effects, together with convenient forms of the covariance matrix and its inverse, are presented. Moreover, the Sherman-Morrison-Woodbury formula is applied to invert the covariance matrix in such experiments.
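
The paper's specific multi-way covariance structures are not reproduced in the abstract; the sketch below only demonstrates the general Sherman-Morrison-Woodbury identity for a covariance of the assumed form Sigma = D + ZGZ', as it typically arises in random-effects models, where inverting an n x n matrix reduces to a small q x q solve:

    import numpy as np

    def smw_inverse(d, Z, G):
        # Woodbury identity for Sigma = diag(d) + Z @ G @ Z.T:
        #   Sigma^{-1} = D^{-1} - D^{-1} Z (G^{-1} + Z' D^{-1} Z)^{-1} Z' D^{-1}
        # Only a q x q system is solved (q = number of random effects).
        Dinv = np.diag(1.0 / np.asarray(d, dtype=float))
        M = np.linalg.inv(G) + Z.T @ Dinv @ Z   # q x q capacitance matrix
        return Dinv - Dinv @ Z @ np.linalg.solve(M, Z.T @ Dinv)

    rng = np.random.default_rng(1)
    n, q = 6, 2
    d = rng.uniform(1.0, 2.0, size=n)           # residual variances
    Z = rng.standard_normal((n, q))             # random-effects design
    G = np.eye(q)                               # random-effects covariance
    Sigma = np.diag(d) + Z @ G @ Z.T
    print(np.allclose(smw_inverse(d, Z, G), np.linalg.inv(Sigma)))  # True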

Bayesian joint modelling of the mean and covariance structures for normal longitudinal data.

Edilberto Cepeda-Cuervo, Vicente Nunez-Anton (2007)

SORT

We consider joint modelling of the mean and covariance structures for the general antedependence model, estimating their parameters and the innovation variances in a longitudinal data context. We propose a new, computationally efficient classical estimation method based on the Fisher scoring algorithm to obtain the maximum likelihood estimates of the parameters. In addition, we propose a new Bayesian methodology based on Gibbs sampling, properly adapted for...
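
The score and information for the joint mean-covariance model are not given in the abstract; the sketch below implements only the generic Fisher scoring update theta <- theta + I(theta)^{-1} U(theta), checked on a toy exponential-rate likelihood (all names and the example are illustrative):

    import numpy as np

    def fisher_scoring(theta0, score, fisher_info, tol=1e-8, max_iter=100):
        # Generic iteration: theta <- theta + I(theta)^{-1} U(theta),
        # with U the score vector and I the expected (Fisher) information.
        theta = np.asarray(theta0, dtype=float)
        for _ in range(max_iter):
            step = np.linalg.solve(fisher_info(theta), score(theta))
            theta = theta + step
            if np.linalg.norm(step) < tol:
                break
        return theta

    # Toy example: MLE of an exponential rate (closed form is 1/mean(x)).
    rng = np.random.default_rng(2)
    x = rng.exponential(scale=2.0, size=500)    # true rate = 0.5
    n, s = x.size, x.sum()
    score = lambda lam: np.array([n / lam[0] - s])
    info = lambda lam: np.array([[n / lam[0] ** 2]])
    print(fisher_scoring([0.1], score, info))   # approx. [1 / x.mean()]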

Rotation to physiological factors revised

Miroslav Kárný, Martin Šámal, Josef Böhm (1998)

Kybernetika

Reconstruction of underlying physiological structures from a sequence of images is a long-standing problem that has been solved by factor analysis with some success. This paper returns to the roots of the problem, exploits the available findings, and proposes an improved paradigm.

Intrinsic dimensionality and small sample properties of classifiers

Šarūnas Raudys (1998)

Kybernetika

Small learning-set properties of the Euclidean distance, Parzen window, minimum empirical error, and nonlinear single-layer perceptron classifiers depend on an “intrinsic dimensionality” of the data; the Fisher linear discriminant function, however, is sensitive to all dimensions. There is no unique definition of the “intrinsic dimensionality”. The dimensionality of the subspace in which the data points lie is not a sufficient definition of the “intrinsic dimensionality”....
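
As a baseline for that discussion, the sketch below computes the naive subspace-based notion, the number of principal components needed to retain a fixed fraction of total variance, which the abstract argues is not by itself a sufficient definition of intrinsic dimensionality (the threshold and names are illustrative):

    import numpy as np

    def pca_dimension(X, var_threshold=0.99):
        # Naive "intrinsic dimensionality": number of principal
        # components retaining a given fraction of total variance.
        Xc = X - X.mean(axis=0)
        eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2
        ratios = np.cumsum(eigvals) / eigvals.sum()
        return int(np.searchsorted(ratios, var_threshold) + 1)

    # Data lying near a 3-dimensional subspace of a 20-dimensional space.
    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 20))
    X += 1e-3 * rng.standard_normal((200, 20))  # slight off-subspace noise
    print(pca_dimension(X))  # 3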