Highly robust training of regularized radial basis function networks

Jan Kalina; Petra Vidnerová; Patrik Janáček

Kybernetika (2024)

  • Issue: 1, pages 38-59
  • ISSN: 0023-5954

Abstract

Radial basis function (RBF) networks are established tools for nonlinear regression modeling with numerous applications in various fields. Because their standard training is vulnerable to outliers in the data, several robust methods for RBF network training have been proposed recently. This paper focuses on robust regularized RBF networks. A robust inter-quantile version of RBF networks based on trimmed least squares is proposed here. A systematic comparison of robust regularized RBF networks follows, evaluated over a set of 405 networks trained using various combinations of robustness and regularization types. The experiments pay particular attention to the effect of variable selection, performed by means of a backward procedure, on the optimal number of RBF units. The regularized inter-quantile RBF networks based on trimmed least squares outperform the competing approaches in the experiments when a highly robust prediction error measure is considered.
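Below is a minimal, purely illustrative Python sketch of the robust training idea summarized above: an RBF regression network fitted by a trimmed least squares criterion, together with a trimmed prediction error measure. The Gaussian units, the k-means placement of centers, the bandwidth heuristic, the trimming proportion, and the alternating refit loop are all assumptions made for this example; they are not the authors' implementation, and the paper's inter-quantile construction is not reproduced here.

import numpy as np
from sklearn.cluster import KMeans

def rbf_design(X, centers, width):
    # Gaussian RBF design matrix: one column per hidden unit.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_trimmed_rbf(X, y, n_units=10, trim=0.75, n_iter=50, seed=0):
    # Place centers by k-means (an assumption), then alternate between
    # least squares on the currently retained points and re-selecting the
    # h points with the smallest squared residuals, in the spirit of
    # trimmed least squares estimation.
    km = KMeans(n_clusters=n_units, n_init=10, random_state=seed).fit(X)
    centers = km.cluster_centers_
    width = np.median(np.abs(X - X.mean(axis=0))) + 1e-12  # crude bandwidth guess
    Phi = rbf_design(X, centers, width)
    h = int(trim * len(y))
    keep = np.arange(len(y))
    for _ in range(n_iter):
        w, *_ = np.linalg.lstsq(Phi[keep], y[keep], rcond=None)
        res2 = (y - Phi @ w) ** 2
        new_keep = np.argsort(res2)[:h]
        if set(new_keep) == set(keep):
            break  # retained subset is stable
        keep = new_keep
    return centers, width, w

def trimmed_mse(y_true, y_pred, trim=0.75):
    # One possible highly robust prediction error measure: the mean of the
    # smallest trim*n squared prediction errors.
    res2 = np.sort((y_true - y_pred) ** 2)
    return res2[: int(trim * len(res2))].mean()

# Synthetic usage: a few gross outliers that would distort plain least squares.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
y[:10] += 8.0
centers, width, w = fit_trimmed_rbf(X, y)
print(trimmed_mse(y, rbf_design(X, centers, width) @ w))

A ridge penalty could be added to the least squares step to mimic the regularized variants compared in the paper; the plain step above merely keeps the sketch short.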

How to cite


Kalina, Jan, Vidnerová, Petra, and Janáček, Patrik. "Highly robust training of regularized radial basis function networks." Kybernetika (2024): 38-59. <http://eudml.org/doc/299573>.

@article{Kalina2024,
abstract = {Radial basis function (RBF) networks are established tools for nonlinear regression modeling with numerous applications in various fields. Because their standard training is vulnerable to outliers in the data, several robust methods for RBF network training have been proposed recently. This paper focuses on robust regularized RBF networks. A robust inter-quantile version of RBF networks based on trimmed least squares is proposed here. A systematic comparison of robust regularized RBF networks follows, evaluated over a set of 405 networks trained using various combinations of robustness and regularization types. The experiments pay particular attention to the effect of variable selection, performed by means of a backward procedure, on the optimal number of RBF units. The regularized inter-quantile RBF networks based on trimmed least squares outperform the competing approaches in the experiments when a highly robust prediction error measure is considered.},
author = {Kalina, Jan and Vidnerová, Petra and Janáček, Patrik},
journal = {Kybernetika},
keywords = {regression neural networks; robust training; effective regularization; quantile regression; robustness},
language = {eng},
number = {1},
pages = {38-59},
publisher = {Institute of Information Theory and Automation AS CR},
title = {Highly robust training of regularized radial basis function networks},
url = {http://eudml.org/doc/299573},
year = {2024},
}

TY - JOUR
AU - Kalina, Jan
AU - Vidnerová, Petra
AU - Janáček, Patrik
TI - Highly robust training of regularized radial basis function networks
JO - Kybernetika
PY - 2024
PB - Institute of Information Theory and Automation AS CR
IS - 1
SP - 38
EP - 59
AB - Radial basis function (RBF) networks are established tools for nonlinear regression modeling with numerous applications in various fields. Because their standard training is vulnerable to outliers in the data, several robust methods for RBF network training have been proposed recently. This paper focuses on robust regularized RBF networks. A robust inter-quantile version of RBF networks based on trimmed least squares is proposed here. A systematic comparison of robust regularized RBF networks follows, evaluated over a set of 405 networks trained using various combinations of robustness and regularization types. The experiments pay particular attention to the effect of variable selection, performed by means of a backward procedure, on the optimal number of RBF units. The regularized inter-quantile RBF networks based on trimmed least squares outperform the competing approaches in the experiments when a highly robust prediction error measure is considered.
LA - eng
KW - regression neural networks; robust training; effective regularization; quantile regression; robustness
UR - http://eudml.org/doc/299573
ER -
