Displaying similar documents to “Statistical approach to pattern recognition: Theory and practical solution by means of PREDITAS system”

A simple upper bound to the Bayes error probability for feature selection

Lorenzo Bruzzone, Sebastiano B. Serpico (1998)

Kybernetika

In this paper, feature selection for the multiclass classification of remote-sensing images is addressed. A criterion based on a simple upper bound on the error probability of the Bayes classifier for minimum error is proposed. This criterion has the advantage of selecting features directly linked to the error probability while keeping the computational load low. Experiments have been carried out to compare the performance of the proposed criterion with that of some...
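
A minimal sketch of this kind of criterion, assuming Gaussian class-conditional densities and using the classical pairwise Bhattacharyya bound on the multiclass Bayes error as a stand-in (the excerpt does not give the paper's own bound, so the exact criterion and all function names here are assumptions):

import numpy as np

def bhattacharyya(m1, S1, m2, S2):
    # Bhattacharyya distance between two Gaussians N(m1, S1) and N(m2, S2)
    Sm = 0.5 * (S1 + S2)
    diff = m1 - m2
    term1 = 0.125 * diff @ np.linalg.solve(Sm, diff)
    term2 = 0.5 * (np.linalg.slogdet(Sm)[1]
                   - 0.5 * (np.linalg.slogdet(S1)[1] + np.linalg.slogdet(S2)[1]))
    return term1 + term2

def bayes_error_bound(X, y, features):
    # Pairwise upper bound: P_e <= sum_{i<j} sqrt(P_i P_j) * exp(-B_ij)
    classes = np.unique(y)
    stats = {}
    for c in classes:
        Xc = X[y == c][:, features]
        stats[c] = (len(Xc) / len(X), Xc.mean(axis=0),
                    np.atleast_2d(np.cov(Xc, rowvar=False)))
    bound = 0.0
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            pi, mi, Si = stats[ci]
            pj, mj, Sj = stats[cj]
            bound += np.sqrt(pi * pj) * np.exp(-bhattacharyya(mi, Si, mj, Sj))
    return bound

def forward_select(X, y, k):
    # Greedy forward selection: add the feature that most reduces the bound
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining, key=lambda f: bayes_error_bound(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

The cost per evaluation is dominated by the pairwise class loop, which is what makes bound-based criteria cheap compared to estimating the classification error directly.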

Conceptual base of feature selection consulting system

Pavel Pudil, Jana Novovičová, Petr Somol, Radek Vrňata (1998)

Kybernetika

The paper briefly reviews recent advances in the methodology of feature selection (FS) and the conceptual base of a consulting system for solving FS problems. The reasons for designing a kind of expert or consulting system that would guide less experienced users are outlined. The paper also attempts to provide a guideline on which approach to choose, depending on the extent of a priori knowledge of the problem. The methods discussed here form the core of the software package being developed...

Combining adaptive vector quantization and prototype selection techniques to improve nearest neighbour classifiers

Francesc J. Ferri (1998)

Kybernetika

Prototype Selection (PS) techniques have traditionally been applied prior to Nearest Neighbour (NN) classification rules, both to improve their accuracy (editing) and to alleviate their computational burden (condensing). Methods based on selecting/discarding prototypes and methods based on adapting prototypes have been introduced separately to deal with this problem. Different approaches to this problem are considered in this paper, and their main advantages and drawbacks are pointed out along...
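
The editing and condensing baselines mentioned above can be sketched as follows. This shows classical Wilson editing and Hart condensing, not the adaptive vector quantization combination the paper proposes, and the function names are hypothetical:

import numpy as np

def wilson_editing(X, y, k=3):
    # Editing: drop samples whose k nearest neighbours disagree with their label
    keep = []
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(dist)[1:k + 1]          # skip the sample itself
        if np.bincount(y[nn]).argmax() == y[i]:
            keep.append(i)
    return np.array(keep)

def hart_condensing(X, y):
    # Condensing: grow a subset that still 1-NN-classifies every sample correctly
    store = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in store:
                continue
            dist = np.linalg.norm(X[store] - X[i], axis=1)
            if y[store][dist.argmin()] != y[i]:
                store.append(i)
                changed = True
    return np.array(store)

Editing mainly improves accuracy by smoothing class boundaries, while condensing mainly shrinks the reference set, which is why the two are often chained in that order.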

Linear discriminant analysis with a generalization of the Moore-Penrose pseudoinverse

Tomasz Górecki, Maciej Łuczak (2013)

International Journal of Applied Mathematics and Computer Science

Linear Discriminant Analysis (LDA) is an important and well-developed classification technique, and to date many linear (and also nonlinear) discrimination methods have been put forward. A complication in applying LDA to real data occurs when the number of features exceeds the number of observations. In this case, the covariance estimates do not have full rank and thus cannot be inverted. There are a number of ways to deal with this problem. In this paper, we propose improving...
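
A minimal sketch of pseudoinverse LDA for this rank-deficient case; note it uses the standard Moore-Penrose pseudoinverse (np.linalg.pinv), not the generalization the paper proposes, and the helper names are hypothetical:

import numpy as np

def fit_lda_pinv(X, y):
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    priors = np.array([np.mean(y == c) for c in classes])
    # Pooled within-class scatter; rank-deficient when n_features > n_samples
    Sw = sum((X[y == c] - means[i]).T @ (X[y == c] - means[i])
             for i, c in enumerate(classes)) / len(X)
    Sw_pinv = np.linalg.pinv(Sw)   # pseudoinverse replaces the ordinary inverse
    return classes, means, priors, Sw_pinv

def predict_lda(model, X):
    classes, means, priors, Sw_pinv = model
    # Linear discriminant score per class: x' S+ m_c - 0.5 m_c' S+ m_c + log p_c
    scores = np.column_stack([
        X @ Sw_pinv @ m - 0.5 * m @ Sw_pinv @ m + np.log(p)
        for m, p in zip(means, priors)])
    return classes[scores.argmax(axis=1)]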

Improving feature selection process resistance to failures caused by curse-of-dimensionality effects

Petr Somol, Jiří Grim, Jana Novovičová, Pavel Pudil (2011)

Kybernetika

The purpose of feature selection in machine learning is at least twofold: saving measurement acquisition costs and reducing the negative effects of the curse of dimensionality, with the aim of improving the accuracy of models and the classification rate of classifiers on previously unknown data. Yet it has been shown recently that the process of feature selection itself can be negatively affected by the very same curse of dimensionality: feature selection methods may...
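
A small illustration of that failure mode, assuming pure-noise data and a simple mean-difference criterion (both are assumptions for demonstration, not the paper's setup):

import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 500                          # few samples, many candidate features
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)          # labels independent of X: no real structure
X2 = rng.normal(size=(n, d))            # fresh data from the same (noise) source
y2 = rng.integers(0, 2, size=n)

def separation(X, y, j):
    # Simple class-separation criterion for a single feature
    return abs(X[y == 0, j].mean() - X[y == 1, j].mean())

best = sorted(range(d), key=lambda j: separation(X, y, j))[-5:]
print("apparent separation on selection data:",
      np.mean([separation(X, y, j) for j in best]))
print("separation of the same features on fresh data:",
      np.mean([separation(X2, y2, j) for j in best]))

On pure noise the first number comes out large and the second near zero: the maximum over many noisy criterion estimates is optimistically biased, which is the kind of failure the paper aims to make feature selection resistant to.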

Probabilities of discrepancy between minima of cross-validation, Vapnik bounds and true risks

Przemysław Klęsk (2010)

International Journal of Applied Mathematics and Computer Science

Two known approaches to complexity selection are considered: n-fold cross-validation and structural risk minimization. In either approach, a discrepancy is possible between the indicated optimal complexity (the minimizer of a generalization-error estimate or bound) and the genuine minimizer of the unknown true risk. In the paper, this problem is posed in a novel quantitative way. We state and prove theorems demonstrating how one can calculate pessimistic...
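
A minimal sketch of the cross-validation side of this comparison, with the true risk approximated on a large independent sample; the data model and all names here are assumptions, and the sketch only illustrates empirically the discrepancy the paper quantifies:

import numpy as np

rng = np.random.default_rng(1)
n = 60
x = rng.uniform(-1, 1, n)
y = np.sin(np.pi * x) + rng.normal(0, 0.3, n)

def cv_error(x, y, degree, folds=5):
    # n-fold cross-validation estimate of the squared-error risk
    idx = rng.permutation(len(x))
    errs = []
    for part in np.array_split(idx, folds):
        train = np.setdiff1d(idx, part)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[part]) - y[part]) ** 2))
    return np.mean(errs)

degrees = list(range(1, 11))
cv = [cv_error(x, y, deg) for deg in degrees]

# "True" risk approximated on a large independent sample
xb = rng.uniform(-1, 1, 100_000)
yb = np.sin(np.pi * xb) + rng.normal(0, 0.3, 100_000)
true = [np.mean((np.polyval(np.polyfit(x, y, deg), xb) - yb) ** 2) for deg in degrees]

print("degree picked by CV:        ", degrees[int(np.argmin(cv))])
print("degree minimizing true risk:", degrees[int(np.argmin(true))])

On small samples the two picks can and do differ from run to run, which is exactly the discrepancy whose probability the paper bounds.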