A dynamic model of classifier competence based on the local fuzzy confusion matrix and the random reference classifier

Pawel Trajdos; Marek Kurzynski

International Journal of Applied Mathematics and Computer Science (2016)

  • Volume: 26, Issue: 1, page 175-189
  • ISSN: 1641-876X

Abstract

Nowadays, multiclassifier systems (MCSs) are widely applied to various machine learning problems in many different domains. Over the last two decades, a variety of ensemble systems have been developed, but there is still room for improvement. This paper focuses on developing competence and interclass cross-competence measures which can be applied as a method for classifier combination. The cross-competence measure allows an ensemble to harness pieces of information obtained from incompetent classifiers instead of removing them from the ensemble. The cross-competence measure, originally determined on the basis of a validation set (static mode), can easily be updated using additional feedback information on correct/incorrect classification during the recognition process (dynamic mode). An analysis of the computational and storage complexity of the proposed method is presented. The MCS with the proposed cross-competence function was experimentally compared against five reference MCSs in the static mode and one reference MCS in the dynamic mode. Results for the static mode show that the proposed technique is comparable with the reference methods in terms of classification accuracy. For the dynamic mode, the developed system achieves the highest classification accuracy, demonstrating the potential of the MCS for practical applications when feedback information is available.
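The static/dynamic distinction described in the abstract can be illustrated with a toy sketch. This is not the paper's actual formulation (which builds on the local fuzzy confusion matrix and the random reference classifier): here base classifiers are simply combined by a competence-weighted vote, and the competence scores are updated multiplicatively from correct/incorrect feedback during recognition. The class name `DynamicWeightedEnsemble`, the update rule, and the `learning_rate` parameter are all illustrative assumptions.

```python
import numpy as np

class DynamicWeightedEnsemble:
    """Toy competence-weighted voting ensemble with feedback updates."""

    def __init__(self, classifiers, n_classes, learning_rate=0.1):
        self.classifiers = classifiers          # callables: x -> class label
        self.n_classes = n_classes
        self.lr = learning_rate
        # start with uniform competence for every base classifier
        self.competence = np.ones(len(classifiers)) / len(classifiers)

    def predict(self, x):
        # competence-weighted vote over class labels
        votes = np.zeros(self.n_classes)
        for w, clf in zip(self.competence, self.classifiers):
            votes[clf(x)] += w
        return int(np.argmax(votes))

    def feedback(self, x, true_label):
        # "dynamic mode" idea: raise the competence of classifiers that
        # were correct, lower it for the others, then renormalise
        for i, clf in enumerate(self.classifiers):
            correct = clf(x) == true_label
            self.competence[i] *= (1 + self.lr) if correct else (1 - self.lr)
        self.competence /= self.competence.sum()
```

With repeated feedback, the vote shifts toward base classifiers that are consistently correct in the region being classified, which is the intuition behind updating a competence measure during recognition rather than fixing it once on a validation set.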

How to cite


Pawel Trajdos, and Marek Kurzynski. "A dynamic model of classifier competence based on the local fuzzy confusion matrix and the random reference classifier." International Journal of Applied Mathematics and Computer Science 26.1 (2016): 175-189. <http://eudml.org/doc/276588>.

@article{PawelTrajdos2016,
abstract = {Nowadays, multiclassifier systems (MCSs) are widely applied to various machine learning problems in many different domains. Over the last two decades, a variety of ensemble systems have been developed, but there is still room for improvement. This paper focuses on developing competence and interclass cross-competence measures which can be applied as a method for classifier combination. The cross-competence measure allows an ensemble to harness pieces of information obtained from incompetent classifiers instead of removing them from the ensemble. The cross-competence measure, originally determined on the basis of a validation set (static mode), can easily be updated using additional feedback information on correct/incorrect classification during the recognition process (dynamic mode). An analysis of the computational and storage complexity of the proposed method is presented. The MCS with the proposed cross-competence function was experimentally compared against five reference MCSs in the static mode and one reference MCS in the dynamic mode. Results for the static mode show that the proposed technique is comparable with the reference methods in terms of classification accuracy. For the dynamic mode, the developed system achieves the highest classification accuracy, demonstrating the potential of the MCS for practical applications when feedback information is available.},
author = {Pawel Trajdos and Marek Kurzynski},
journal = {International Journal of Applied Mathematics and Computer Science},
keywords = {multiclassifier; cross-competence measure; confusion matrix; feedback information},
language = {eng},
number = {1},
pages = {175-189},
title = {A dynamic model of classifier competence based on the local fuzzy confusion matrix and the random reference classifier},
url = {http://eudml.org/doc/276588},
volume = {26},
year = {2016},
}

TY - JOUR
AU - Pawel Trajdos
AU - Marek Kurzynski
TI - A dynamic model of classifier competence based on the local fuzzy confusion matrix and the random reference classifier
JO - International Journal of Applied Mathematics and Computer Science
PY - 2016
VL - 26
IS - 1
SP - 175
EP - 189
AB - Nowadays, multiclassifier systems (MCSs) are widely applied to various machine learning problems in many different domains. Over the last two decades, a variety of ensemble systems have been developed, but there is still room for improvement. This paper focuses on developing competence and interclass cross-competence measures which can be applied as a method for classifier combination. The cross-competence measure allows an ensemble to harness pieces of information obtained from incompetent classifiers instead of removing them from the ensemble. The cross-competence measure, originally determined on the basis of a validation set (static mode), can easily be updated using additional feedback information on correct/incorrect classification during the recognition process (dynamic mode). An analysis of the computational and storage complexity of the proposed method is presented. The MCS with the proposed cross-competence function was experimentally compared against five reference MCSs in the static mode and one reference MCS in the dynamic mode. Results for the static mode show that the proposed technique is comparable with the reference methods in terms of classification accuracy. For the dynamic mode, the developed system achieves the highest classification accuracy, demonstrating the potential of the MCS for practical applications when feedback information is available.
LA - eng
KW - multiclassifier; cross-competence measure; confusion matrix; feedback information
UR - http://eudml.org/doc/276588
ER -

