Data mining methods for gene selection on the basis of gene expression arrays
Michał Muszyński, Stanisław Osowski (2014)
International Journal of Applied Mathematics and Computer Science
Similarity:
Milan M. Milosavljević, Milan Ž. Marković (1999)
The Yugoslav Journal of Operations Research
Similarity:
Jan Rybka, Artur Janicki (2013)
International Journal of Applied Mathematics and Computer Science
Similarity:
This paper describes a study of emotion recognition based on speech analysis. The introduction to the theory contains a review of emotion inventories used in various studies of emotion recognition as well as the speech corpora applied, methods of speech parametrization, and the most commonly employed classification algorithms. In the current study the EMO-DB speech corpus and three selected classifiers, the k-Nearest Neighbor (k-NN), the Artificial Neural Network (ANN) and Support Vector...
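One of the classifiers named in the abstract, the k-Nearest Neighbor (k-NN) rule, can be sketched in a few lines. This is a generic illustration only; the EMO-DB features and the study's speech parametrization are not reproduced here, and the toy feature vectors and emotion labels below are invented for demonstration:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Predict the majority label among the k nearest training points."""
    d = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances to x
    nearest = np.argsort(d)[:k]                   # indices of the k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority vote

# Invented 2-D "feature vectors" with hypothetical emotion labels:
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y = np.array(["neutral", "neutral", "anger", "anger"])
print(knn_predict(X, y, np.array([0.2, 0.1]), k=3))   # "neutral"
```

In practice the feature vectors would be acoustic parameters extracted from the speech corpus, and k would be tuned by cross-validation.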
Shohel Sayeed, S. Andrews, Rosli Besar, Loo Chu Kiong (2007)
Discrete Dynamics in Nature and Society
Similarity:
M. Breukelen, Robert P. W. Duin, David M. J. Tax, J. E. den Hartog (1998)
Kybernetika
Similarity:
Classifiers can be combined to reduce classification errors. We did experiments on a data set consisting of different sets of features of handwritten digits. Different types of classifiers were trained on these feature sets. The performances of these classifiers and combination rules were tested. The best results were acquired with the mean, median and product combination rules. The product was best for combining linear classifiers, the median for k-NN classifiers. Training a classifier...
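The fixed combination rules mentioned in the abstract (mean, median and product over the classifiers' class-posterior estimates) can be sketched as follows. The posterior values below are invented toy numbers, not the handwritten-digit results of the paper:

```python
import numpy as np

def combine(posteriors, rule="mean"):
    """Combine an (n_classifiers, n_classes) array of posterior estimates
    with a fixed rule and return the index of the winning class."""
    P = np.asarray(posteriors, dtype=float)
    if rule == "mean":
        scores = P.mean(axis=0)
    elif rule == "median":
        scores = np.median(P, axis=0)
    elif rule == "product":
        scores = P.prod(axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return int(np.argmax(scores))

# Three hypothetical classifiers voting over four classes:
P = [[0.7, 0.2, 0.05, 0.05],
     [0.5, 0.3, 0.10, 0.10],
     [0.1, 0.7, 0.10, 0.10]]
print(combine(P, "mean"))      # class 0
print(combine(P, "product"))   # class 1: the product rule punishes the
                               # low vote 0.1 that class 0 received
```

Note how the product rule can disagree with the mean: a single classifier assigning a class a near-zero posterior vetoes it under the product, which is why the product suits well-calibrated (e.g. linear) classifiers.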
Petr Somol, Jiří Grim, Jana Novovičová, Pavel Pudil (2011)
Kybernetika
Similarity:
The purpose of feature selection in machine learning is at least two-fold: saving measurement acquisition costs and reducing the negative effects of the curse of dimensionality, with the aim of improving the accuracy of the models and the classification rate of classifiers with respect to previously unknown data. Yet it has been shown recently that the process of feature selection itself can be negatively affected by the very same curse of dimensionality: feature selection methods may...
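A toy illustration of the effect described above, under assumptions not taken from the paper: when noise features far outnumber samples, univariate selection by correlation will find features that look informative even for purely random labels, i.e. the selection step itself overfits:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 1000
X = rng.normal(size=(n, p))             # pure noise features, no signal
y = rng.integers(0, 2, size=n)          # random binary labels

# Absolute Pearson correlation of each feature with the meaningless labels:
Xc = X - X.mean(axis=0)
yc = y - y.mean()
corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

# The best of 1000 noise features typically correlates strongly with y,
# so ranking-based selection would confidently pick spurious features.
print(round(float(corr.max()), 2))
```

Guarding against this requires validating the selection step itself, e.g. by nesting it inside cross-validation rather than selecting once on the full data.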
Tomasz Górecki, Maciej Łuczak (2013)
International Journal of Applied Mathematics and Computer Science
Similarity:
The Linear Discriminant Analysis (LDA) technique is an important and well-developed area of classification, and to date many linear (and also nonlinear) discrimination methods have been put forward. A complication in applying LDA to real data occurs when the number of features exceeds that of observations. In this case, the covariance estimates do not have full rank, and thus cannot be inverted. There are a number of ways to deal with this problem. In this paper, we propose improving...
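The singular-covariance problem described above (more features than observations, so the sample covariance cannot be inverted) is commonly handled by shrinking the estimate toward a scaled identity before inversion. The sketch below shows that generic ridge-style remedy as an assumption; it is not necessarily the improvement this paper proposes:

```python
import numpy as np

def shrunk_inverse_covariance(X, alpha=0.1):
    """Invert (1 - alpha) * cov(X) + alpha * (tr(S)/p) * I.
    The shrinkage target is a scaled identity, which makes the
    regularized matrix positive definite even when n < p."""
    S = np.cov(X, rowvar=False)                 # (p, p) sample covariance
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)      # scaled identity target
    S_shrunk = (1 - alpha) * S + alpha * target
    return np.linalg.inv(S_shrunk)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))                   # n=10 samples, p=50 features
S_inv = shrunk_inverse_covariance(X)            # plain cov(X) is singular here
print(S_inv.shape)                              # (50, 50)
```

The inverse obtained this way can then replace the (nonexistent) inverse of the rank-deficient pooled covariance in the LDA discriminant function; alpha trades bias against variance and is typically tuned by cross-validation.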