Statistical testing of segment homogeneity in classification of piecewise-regular objects

Andrey V. Savchenko; Natalya S. Belova

International Journal of Applied Mathematics and Computer Science (2015)

  • Volume: 25, Issue: 4, pages 915-925
  • ISSN: 1641-876X

Abstract

The paper is focused on the problem of multi-class classification of composite (piecewise-regular) objects (e.g., speech signals, complex images, etc.). We propose a mathematical model of composite object representation as a sequence of independent segments. Each segment is represented as a random sample of independent identically distributed feature vectors. Based on this model and a statistical approach, we reduce the task to a problem of composite hypothesis testing of segment homogeneity. Several nearest-neighbor criteria are implemented, and for some of them the well-known special cases (e.g., the Kullback-Leibler minimum information discrimination principle, the probabilistic neural network) are highlighted. It is experimentally shown that the proposed approach improves the accuracy when compared with contemporary classifiers.
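The abstract describes the approach only at a high level. As a rough illustration (not the authors' exact criterion), the sketch below classifies a query object by summing Kullback-Leibler divergences between per-segment feature histograms of the query and of each reference object, then picking the class with the smallest total divergence. The histogram representation, the assumption of aligned segments, and the plain summation rule are simplifications introduced for this example.

# Illustrative sketch (not the paper's exact criterion): nearest-neighbor
# classification of a segmented object using per-segment Kullback-Leibler
# divergences. Histogram features and aligned segments are assumptions
# made for this example only.
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL divergence between two discrete distributions given as histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def classify(query_segments, reference_models):
    """query_segments: list of per-segment histograms of the query object.
    reference_models: dict mapping a class label to its list of per-segment
    histograms (assumed here to be aligned with the query's segments).
    Returns the label whose reference object is most homogeneous with the
    query, i.e., minimizes the total per-segment KL divergence."""
    best_label, best_score = None, float("inf")
    for label, ref_segments in reference_models.items():
        score = sum(kl_divergence(q, r)
                    for q, r in zip(query_segments, ref_segments))
        if score < best_score:
            best_label, best_score = label, score
    return best_label

As the abstract notes, the paper's nearest-neighbor criteria contain well-known special cases, including the Kullback-Leibler minimum information discrimination principle and the probabilistic neural network.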

How to cite

Savchenko, Andrey V., and Natalya S. Belova. "Statistical testing of segment homogeneity in classification of piecewise-regular objects." International Journal of Applied Mathematics and Computer Science 25.4 (2015): 915-925. <http://eudml.org/doc/275948>.

@article{AndreyV2015,
abstract = {The paper is focused on the problem of multi-class classification of composite (piecewise-regular) objects (e.g., speech signals, complex images, etc.). We propose a mathematical model of composite object representation as a sequence of independent segments. Each segment is represented as a random sample of independent identically distributed feature vectors. Based on this model and a statistical approach, we reduce the task to a problem of composite hypothesis testing of segment homogeneity. Several nearest-neighbor criteria are implemented, and for some of them the well-known special cases (e.g., the Kullback-Leibler minimum information discrimination principle, the probabilistic neural network) are highlighted. It is experimentally shown that the proposed approach improves the accuracy when compared with contemporary classifiers.},
author = {Andrey V. Savchenko and Natalya S. Belova},
journal = {International Journal of Applied Mathematics and Computer Science},
keywords = {statistical pattern recognition; classification; testing of segment homogeneity; probabilistic neural network},
language = {eng},
number = {4},
pages = {915-925},
title = {Statistical testing of segment homogeneity in classification of piecewise-regular objects},
url = {http://eudml.org/doc/275948},
volume = {25},
year = {2015},
}

TY - JOUR
AU - Andrey V. Savchenko
AU - Natalya S. Belova
TI - Statistical testing of segment homogeneity in classification of piecewise-regular objects
JO - International Journal of Applied Mathematics and Computer Science
PY - 2015
VL - 25
IS - 4
SP - 915
EP - 925
AB - The paper is focused on the problem of multi-class classification of composite (piecewise-regular) objects (e.g., speech signals, complex images, etc.). We propose a mathematical model of composite object representation as a sequence of independent segments. Each segment is represented as a random sample of independent identically distributed feature vectors. Based on this model and a statistical approach, we reduce the task to a problem of composite hypothesis testing of segment homogeneity. Several nearest-neighbor criteria are implemented, and for some of them the well-known special cases (e.g., the Kullback-Leibler minimum information discrimination principle, the probabilistic neural network) are highlighted. It is experimentally shown that the proposed approach improves the accuracy when compared with contemporary classifiers.
LA - eng
KW - statistical pattern recognition; classification; testing of segment homogeneity; probabilistic neural network
UR - http://eudml.org/doc/275948
ER -

References

  1. Asadpour, V., Homayounpour, M.M. and Towhidkhah, F. (2011). Audio-visual speaker identification using dynamic facial movements and utterance phonetic content, Applied Soft Computing 11(2): 2083-2093. 
  2. Benesty, J., Sondhi, M.M. and Huang, Y. (2008). Springer Handbook of Speech Processing, Springer, Berlin. 
  3. Borovkov, A.A. (1998). Mathematical Statistics, Gordon and Breach Science Publishers, Amsterdam. Zbl0913.62002
  4. Bottou, L., Fogelman Soulie, F., Blanchet, P. and Lienard, J. (1990). Speaker-independent isolated digit recognition: Multilayer perceptrons vs. dynamic time warping, Neural Networks 3(4): 453-465. 
  5. Ciresan, D., Meier, U., Masci, J. and Schmidhuber, J. (2012). Multi-column deep neural network for traffic sign classification, Neural Networks 32: 333-338. 
  6. Dalal, N. and Triggs, B. (2005). Histograms of oriented gradients for human detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2005, San Diego, CA, USA, pp. 886-893. 
  7. Gray, R., Buzo, A., Gray, A., Jr. and Matsuyama, Y. (1980). Distortion measures for speech processing, IEEE Transactions on Acoustics, Speech and Signal Processing 28(4): 367-376. Zbl0524.94011
  8. Haykin, S.O. (2008). Neural Networks and Learning Machines, 3rd Edn., Prentice Hall, Harlow. 
  9. Hinton, G., Deng, L., Yu, D., Dahl, G., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. and Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups, IEEE Signal Processing Magazine 29(6): 82-97. 
  10. Hinton, G.E., Osindero, S. and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets, Neural Computation 18(7): 1527-1554. Zbl1106.68094
  11. Huang, J.-T., Li, J., Yu, D., Deng, L. and Gong, Y. (2013). Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2013, Vancouver, BC, Canada, pp. 7304-7308. 
  12. Janakiraman, R., Kumar, J. and Murthy, H. (2010). Robust syllable segmentation and its application to syllable-centric continuous speech recognition, Proceedings of the National Conference on Communications, NCC 2010, Chennai, India, pp. 1-5. 
  13. Kullback, S. (1997). Information Theory and Statistics, Dover Publications, New York, NY. Zbl0897.62003
  14. LeCun, Y., Bengio, Y. and Hinton, G. (2015). Deep learning, Nature 521(7553): 436-444. 
  15. LeCun, Y., Bottou, L., Bengio, Y. and Haffner, P. (1998). Gradient-based learning applied to document recognition, Proceedings of the IEEE 86(11): 2278-2324. 
  16. Liao, S., Zhu, X., Lei, Z., Zhang, L. and Li, S.Z. (2007). Learning multi-scale block local binary patterns for face recognition, in S.-W. Lee and S.Z. Li (Eds.), Advances in Biometrics, Lecture Notes in Computer Science, Vol. 4642, Springer, Berlin/Heidelberg, pp. 828-837. 
  17. Lowe, D.G. (2004). Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision 60(2): 91-110. 
  18. Martins, A.F.T., Figueiredo, M.A.T., Aguiar, P.M.Q., Smith, N.A. and Xing, E.P. (2008). Nonextensive entropic kernels, Proceedings of the 25th International Conference on Machine Learning, ICML '2008, New York, NY, USA, pp. 640-647. 
  19. Merialdo, B. (1988). Multilevel decoding for very-large-size-dictionary speech recognition, IBM Journal of Research and Development 32(2): 227-237. 
  20. Pfau, T. and Ruske, G. (1998). Estimating the speaking rate by vowel detection, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 1998, Seattle, WA, USA, Vol. 2, pp. 945-948. 
  21. Rutkowski, L. (2008). Computational Intelligence: Methods and Techniques, Springer-Verlag, Berlin/Heidelberg. Zbl1147.68061
  22. Sas, J. and Żołnierek, A. (2013). Pipelined language model construction for Polish speech recognition, International Journal of Applied Mathematics and Computer Science 23(3): 649-668, DOI: 10.2478/amcs-2013-0049. Zbl06255999
  23. Savchenko, A.V. (2012). Directed enumeration method in image recognition, Pattern Recognition 45(8): 2952-2961. 
  24. Savchenko, A.V. (2013a). Phonetic words decoding software in the problem of Russian speech recognition, Automation and Remote Control 74(7): 1225-1232. 
  25. Savchenko, A.V. (2013b). Probabilistic neural network with homogeneity testing in recognition of discrete patterns set, Neural Networks 46: 227-241. Zbl1296.68160
  26. Savchenko, A.V. and Khokhlova, Y.I. (2014). About neural-network algorithms application in viseme classification problem with face video in audiovisual speech recognition systems, Optical Memory and Neural Networks (Information Optics) 23(1): 34-42. 
  27. Specht, D.F. (1990). Probabilistic neural networks, Neural Networks 3(1): 109-118. 
  28. Świercz, E. (2010). Classification in the Gabor time-frequency domain of non-stationary signals embedded in heavy noise with unknown statistical distribution, International Journal of Applied Mathematics and Computer Science 20(1): 135-147, DOI: 10.2478/v10006-010-0010-x. Zbl1300.62045
  29. Tan, X., Chen, S., Zhou, Z.-H. and Zhang, F. (2006). Face recognition from a single image per person: A survey, Pattern Recognition 39(9): 1725-1745. Zbl1096.68732
  30. Theodoridis, S. and Koutroumbas, K. (2008). Pattern Recognition, 4th Edn., Academic Press, Burlington, MA/London. Zbl1093.68103
  31. Zhou, E., Cao, Z. and Yin, Q. (2015). Naive-deep face recognition: Touching the limit of LFW benchmark or not?, CoRR abs/1501.04690. 
