Displaying similar documents to “Intrinsic dimensionality and small sample properties of classifiers”

Linear discriminant analysis with a generalization of the Moore-Penrose pseudoinverse

Tomasz Górecki, Maciej Łuczak (2013)

International Journal of Applied Mathematics and Computer Science

Similarity:

Linear Discriminant Analysis (LDA) is an important and well-developed classification technique, and to date many linear (and also nonlinear) discrimination methods have been put forward. A complication in applying LDA to real data occurs when the number of features exceeds the number of observations. In this case, the covariance estimates do not have full rank and thus cannot be inverted. There are a number of ways to deal with this problem. In this paper, we propose improving...
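
The standard workaround the abstract alludes to replaces the inverse of the singular within-class scatter with its Moore-Penrose pseudoinverse. The minimal Python/NumPy sketch below illustrates that baseline for a two-class problem with more features than observations; it is a generic illustration, not the generalization of the pseudoinverse proposed by the authors.

import numpy as np

def pinv_lda_fit(X, y):
    """Fisher LDA for two classes, using the Moore-Penrose pseudoinverse
    of the pooled within-class scatter so the fit still works when the
    number of features exceeds the number of observations."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter; singular when n_features > n_samples.
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    # Replace the ordinary inverse with the pseudoinverse.
    w = np.linalg.pinv(Sw) @ (m1 - m0)
    threshold = w @ (m0 + m1) / 2.0
    return w, threshold

def pinv_lda_predict(X, w, threshold):
    return (X @ w > threshold).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # More features (50) than observations (30): Sw is rank-deficient.
    X = rng.normal(size=(30, 50))
    y = np.repeat([0, 1], 15)
    X[y == 1, :5] += 2.0          # shift class 1 along the first five features
    w, t = pinv_lda_fit(X, y)
    print("training accuracy:", (pinv_lda_predict(X, w, t) == y).mean())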

Handwritten digit recognition by combined classifiers

M. Breukelen, Robert P. W. Duin, David M. J. Tax, J. E. den Hartog (1998)

Kybernetika

Similarity:

Classifiers can be combined to reduce classification errors. We did experiments on a data set consisting of different sets of features of handwritten digits. Different types of classifiers were trained on these feature sets. The performances of these classifiers and combination rules were tested. The best results were acquired with the mean, median and product combination rules. The product was best for combining linear classifiers, the median for k-NN classifiers. Training a classifier...
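
The fixed combination rules named above (mean, median, product) simply aggregate the per-class posterior estimates produced by each base classifier. The sketch below is a hypothetical setup using scikit-learn classifiers on the digits data rather than the paper's feature sets and classifiers; it only shows how the three rules are applied.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

def combine(probas, rule):
    """Apply a fixed combination rule to the stacked posterior estimates
    of several classifiers, shape (n_classifiers, n_samples, n_classes)."""
    stacked = np.stack(probas)
    if rule == "mean":
        return stacked.mean(axis=0)
    if rule == "median":
        return np.median(stacked, axis=0)
    if rule == "product":
        return stacked.prod(axis=0)
    raise ValueError(rule)

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [LogisticRegression(max_iter=2000),
        KNeighborsClassifier(n_neighbors=5),
        GaussianNB()]
probas = [clf.fit(X_tr, y_tr).predict_proba(X_te) for clf in base]

for rule in ("mean", "median", "product"):
    pred = combine(probas, rule).argmax(axis=1)
    print(rule, "accuracy:", (pred == y_te).mean())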

Correlation-based feature selection strategy in classification problems

Krzysztof Michalak, Halina Kwaśnicka (2006)

International Journal of Applied Mathematics and Computer Science

Similarity:

In classification problems, the issue of high dimensionality of data is often considered important. To lower data dimensionality, feature selection methods are often employed. To select a set of features that will span a representation space that is as good as possible for the classification task, one must take into consideration possible interdependencies between the features. As a trade-off between the complexity of the selection process and the quality of the selected feature set,...
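
A minimal sketch of a correlation-based strategy in this spirit: greedily pick features that are strongly correlated with the class label but weakly correlated with the features already selected. The scoring rule below is an illustrative assumption, not the exact criterion used in the paper.

import numpy as np

def correlation_feature_selection(X, y, k):
    """Greedy correlation-based selection: at each step pick the feature
    with the highest absolute correlation to the class label, penalised
    by its mean absolute correlation to the features already selected."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    selected = []
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            if selected:
                redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                      for s in selected])
            else:
                redundancy = 0.0
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 20))
    y = (X[:, 0] + X[:, 3] + 0.5 * rng.normal(size=200) > 0).astype(float)
    X[:, 7] = X[:, 0] + 0.05 * rng.normal(size=200)   # redundant copy of feature 0
    print("selected features:", correlation_feature_selection(X, y, 3))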

Rough sets methods in feature reduction and classification

Roman Świniarski (2001)

International Journal of Applied Mathematics and Computer Science

Similarity:

The paper presents an application of rough sets and statistical methods to feature reduction and pattern recognition. The presented description of rough set theory emphasizes the role of rough set reducts in feature selection and data reduction in pattern recognition. The overview of feature selection methods emphasizes selection criteria, including rough set-based ones. The paper also contains a description of the algorithm for feature selection and reduction based on...
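
As a rough illustration of reduct-based feature reduction, the sketch below computes the rough-set dependency degree of an attribute subset on a toy decision table and grows a reduct greedily (QuickReduct-style). It is a generic example, not the algorithm described in the paper.

def dependency(rows, decisions, attrs):
    """Rough-set dependency degree: fraction of objects whose equivalence
    class under `attrs` is consistent with a single decision value."""
    classes = {}
    for row, d in zip(rows, decisions):
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, set()).add(d)
    consistent = sum(1 for row, d in zip(rows, decisions)
                     if len(classes[tuple(row[a] for a in attrs)]) == 1)
    return consistent / len(rows)

def quick_reduct(rows, decisions, all_attrs):
    """Greedily add the attribute that raises the dependency degree most,
    until it matches the degree of the full attribute set."""
    target = dependency(rows, decisions, all_attrs)
    reduct = []
    while dependency(rows, decisions, reduct) < target:
        best = max((a for a in all_attrs if a not in reduct),
                   key=lambda a: dependency(rows, decisions, reduct + [a]))
        reduct.append(best)
    return reduct

if __name__ == "__main__":
    # Toy decision table: each row is a dict of condition attributes;
    # attribute "c" duplicates the information in "b".
    rows = [{"a": 0, "b": 1, "c": 0}, {"a": 0, "b": 0, "c": 1},
            {"a": 1, "b": 1, "c": 0}, {"a": 1, "b": 0, "c": 1}]
    decisions = [0, 1, 1, 0]
    print("reduct:", quick_reduct(rows, decisions, ["a", "b", "c"]))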

Bayesian joint modelling of the mean and covariance structures for normal longitudinal data.

Edilberto Cepeda-Cuervo, Vicente Nunez-Anton (2007)

SORT

Similarity:

We consider the joint modelling of the mean and covariance structures for the general antedependence model, estimating their parameters and the innovation variances in a longitudinal data context. We propose a new and computationally efficient classical estimation method based on the Fisher scoring algorithm to obtain the maximum likelihood estimates of the parameters. In addition, we propose a new Bayesian methodology based on Gibbs sampling, properly adapted for...
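
For context, an antedependence model parametrises the covariance through generalised autoregressive coefficients and innovation variances via the modified Cholesky decomposition T Sigma T' = D. The sketch below only rebuilds a covariance matrix from such hypothetical parameters; it does not implement the authors' Fisher scoring or Gibbs sampling procedures.

import numpy as np

def antedependence_covariance(phi, innovation_var):
    """Covariance of an antedependence model from its generalised
    autoregressive coefficients (strictly lower-triangular matrix `phi`)
    and innovation variances, via the modified Cholesky decomposition
    T @ Sigma @ T.T = D with T = I - lower_triangle(phi)."""
    T = np.eye(len(innovation_var)) - np.tril(phi, k=-1)
    D = np.diag(innovation_var)
    T_inv = np.linalg.inv(T)
    return T_inv @ D @ T_inv.T

if __name__ == "__main__":
    # Hypothetical first-order antedependence over four time points:
    # each measurement regresses on its immediate predecessor only.
    phi = np.zeros((4, 4))
    phi[1, 0], phi[2, 1], phi[3, 2] = 0.8, 0.6, 0.7
    sigma2 = np.array([1.0, 0.9, 1.1, 1.0])      # innovation variances
    Sigma = antedependence_covariance(phi, sigma2)
    print(np.round(Sigma, 3))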