Displaying similar documents to “Parallel counter-propagation networks.”

Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network

Maciej Huk (2012)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this paper the Sigma-if artificial neural network model is considered, which is a generalization of an MLP network with sigmoidal neurons. It was found to be a potentially universal tool for automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines an error backpropagation algorithm with the self-consistency paradigm widely...

Comparison of supervised learning methods for spike time coding in spiking neural networks

Andrzej Kasiński, Filip Ponulak (2006)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this review we focus our attention on supervised learning methods for spike time coding in Spiking Neural Networks (SNNs). This study is motivated by recent experimental results regarding information coding in biological neural systems, which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We are concerned with the fundamental question: What paradigms of neural temporal coding can be implemented with the recent learning methods?...

A chunking mechanism in a neural system for the parallel processing of propositional production rules.

Ernesto Burattini, A. Pasconcino, Guglielmo Tamburrini (1995)

Mathware and Soft Computing

Similarity:

The problem of extracting more compact rules from a rule-based knowledge base is approached by means of a chunking mechanism implemented via a neural system. By exploiting the parallel processing capabilities of neural systems, the computational problem that normally arises when introducing chunking processes is overcome. The memory saturation effect is also addressed with a forgetting mechanism, which allows the system to eliminate previously stored, but less often...

An effective way to generate neural network structures for function approximation.

Andreas Bastian (1994)

Mathware and Soft Computing

Similarity:

One still open question in research on multi-layer feedforward neural networks concerns the number of neurons in the hidden layer(s). Especially in real-life applications, this problem is often solved by heuristic methods. In this work an effective way to dynamically determine the number of hidden units in a three-layer feedforward neural network for function approximation is proposed.
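The general idea of dynamically sizing a hidden layer can be illustrated with a simple constructive loop: train networks of increasing width and stop growing once the error no longer improves. This is only a generic sketch of the constructive approach, not Bastian's specific algorithm; all function names and the stopping tolerance here are illustrative assumptions.

```python
import numpy as np

def train_mlp(X, y, n_hidden, epochs=2000, lr=0.1, seed=0):
    """Train a three-layer (one hidden layer) network with tanh hidden
    units by plain gradient descent; return the mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)       # hidden-layer activations
        out = h @ W2 + b2              # linear output unit
        err = out - y
        # Backpropagate the squared-error gradient through both layers.
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))

def grow_hidden_layer(X, y, max_hidden=10, tol=1e-3):
    """Add hidden units one at a time; stop when the error improvement
    falls below tol (a hypothetical stopping criterion)."""
    best_err, best_n = np.inf, 0
    for n in range(1, max_hidden + 1):
        err = train_mlp(X, y, n)
        if best_err - err < tol:
            break
        best_err, best_n = err, n
    return best_n, best_err

# Function approximation example: fit y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
y = np.sin(X)
n_units, final_err = grow_hidden_layer(X, y)
```

In practice a held-out validation set, rather than the training error, would be the more reliable signal for when to stop adding units.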