Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network

Maciej Huk

International Journal of Applied Mathematics and Computer Science (2012)

  • Volume: 22, Issue: 2, pages 449-459
  • ISSN: 1641-876X

Abstract

In this paper, the Sigma-if artificial neural network model, a generalization of the MLP network with sigmoidal neurons, is considered. It was found to be a potentially universal tool for the automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines the error backpropagation algorithm with the self-consistency paradigm widely used in physics. For the same reason, however, the classical backpropagation delta rule for the MLP network cannot be used. The general equation of the backpropagation generalized delta rule for the Sigma-if neural network is derived, and selected experimental results that confirm its usefulness are presented.
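
To make the selective attention mechanism concrete, below is a minimal sketch of the conditional (grouped) aggregation that distinguishes a Sigma-if neuron from a standard sigmoidal one, based on the description of the model in the author's earlier publications. The grouping vector, the threshold name phi_star, the exact stopping condition, and the sigmoid activation are illustrative assumptions, not the paper's reference implementation.

import numpy as np

def sigma_if_neuron(x, w, groups, phi_star, num_groups):
    # Hypothetical Sigma-if neuron: inputs are partitioned into groups and
    # read group by group; aggregation stops early once the accumulated
    # weighted sum reaches the threshold phi_star, so later input groups
    # are never examined at all (selective attention).
    u = 0.0
    for g in range(num_groups):               # groups visited in a fixed order
        mask = groups == g
        u += float(np.dot(w[mask], x[mask]))  # add this group's partial sum
        if u >= phi_star:                     # stopping condition (assumed form)
            break
    return 1.0 / (1.0 + np.exp(-u))           # sigmoidal activation of the partial sum

# Example: six inputs split into three groups of two. Here the first
# group already exceeds phi_star, so the remaining inputs are skipped.
x = np.array([0.9, 0.1, 0.4, 0.8, 0.2, 0.5])
w = np.array([1.2, -0.7, 0.3, 0.9, -0.4, 0.6])
groups = np.array([0, 0, 1, 1, 2, 2])
y = sigma_if_neuron(x, w, groups, phi_star=0.6, num_groups=3)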
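
For reference, the classical generalized delta rule that the paper extends can be stated as below. This is the standard MLP form; the paper's contribution is a generalization in which the plain weighted-sum net input is replaced by the Sigma-if neuron's conditional aggregation, and the exact generalized equation is derived in the article itself, not reproduced here.

% Classical generalized delta rule for an MLP trained by backpropagation.
% The Sigma-if rule derived in the paper generalizes the net input term net_j.
\[
\Delta w_{ji} = \eta\,\delta_j\,o_i,
\qquad
\mathrm{net}_j = \sum_i w_{ji}\,o_i,
\]
\[
\delta_j =
\begin{cases}
f'(\mathrm{net}_j)\,(t_j - o_j), & \text{if } j \text{ is an output neuron},\\
f'(\mathrm{net}_j)\,\sum_{k}\delta_k\,w_{kj}, & \text{if } j \text{ is a hidden neuron}.
\end{cases}
\]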

How to cite


Maciej Huk. "Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network." International Journal of Applied Mathematics and Computer Science 22.2 (2012): 449-459. <http://eudml.org/doc/208121>.

@article{MaciejHuk2012,
abstract = {In this paper, the Sigma-if artificial neural network model, a generalization of the MLP network with sigmoidal neurons, is considered. It was found to be a potentially universal tool for the automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines the error backpropagation algorithm with the self-consistency paradigm widely used in physics. For the same reason, however, the classical backpropagation delta rule for the MLP network cannot be used. The general equation of the backpropagation generalized delta rule for the Sigma-if neural network is derived, and selected experimental results that confirm its usefulness are presented.},
author = {Maciej Huk},
journal = {International Journal of Applied Mathematics and Computer Science},
keywords = {artificial neural networks; selective attention; self consistency; error backpropagation; delta rule},
language = {eng},
number = {2},
pages = {449-459},
title = {Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network},
url = {http://eudml.org/doc/208121},
volume = {22},
year = {2012},
}

TY - JOUR
AU - Maciej Huk
TI - Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network
JO - International Journal of Applied Mathematics and Computer Science
PY - 2012
VL - 22
IS - 2
SP - 449
EP - 459
AB - In this paper, the Sigma-if artificial neural network model, a generalization of the MLP network with sigmoidal neurons, is considered. It was found to be a potentially universal tool for the automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines the error backpropagation algorithm with the self-consistency paradigm widely used in physics. For the same reason, however, the classical backpropagation delta rule for the MLP network cannot be used. The general equation of the backpropagation generalized delta rule for the Sigma-if neural network is derived, and selected experimental results that confirm its usefulness are presented.
LA - eng
KW - artificial neural networks; selective attention; self consistency; error backpropagation; delta rule
UR - http://eudml.org/doc/208121
ER -

