Displaying similar documents to “Mechanical analogy of statement networks”

Comparison of supervised learning methods for spike time coding in spiking neural networks

Andrzej Kasiński, Filip Ponulak (2006)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this review we focus our attention on supervised learning methods for spike time coding in Spiking Neural Networks (SNNs). This study is motivated by recent experimental results regarding information coding in biological neural systems, which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We are concerned with the fundamental question: What paradigms of neural temporal coding can be implemented with recent learning methods?...
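
To make the notion of spike-time coding concrete, here is a small illustrative sketch (not taken from the review itself) of latency coding, one of the temporal coding schemes such methods target: stronger analog inputs are converted into earlier spike times.

```python
import numpy as np

def latency_encode(intensities, t_max=100.0):
    """Latency (time-to-first-spike) coding: map each analog intensity in [0, 1]
    to a single spike time, with stronger inputs firing earlier.

    An illustrative sketch of one temporal coding scheme discussed in the SNN
    literature, not an implementation from the reviewed paper.
    """
    intensities = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    # Intensity 1.0 -> spike at t = 0; intensity 0.0 -> spike at t = t_max.
    return t_max * (1.0 - intensities)

# Example: three input neurons with different stimulus strengths.
print(latency_encode([0.9, 0.5, 0.1]))   # earliest spike for the strongest input
```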

Neural network realizations of Bayes decision rules for exponentially distributed data

Igor Vajda, Belomír Lonek, Viktor Nikolov, Arnošt Veselý (1998)

Kybernetika

Similarity:

Perceptron approximations of general Bayes decision rules based on sufficient-statistic inputs are considered, with particular attention paid to Bayes discrimination and classification. In the case of exponentially distributed data with a known model, it is shown that a perceptron with one hidden layer is sufficient and that learning is restricted to the synaptic weights of the output neuron. If only the dimension of the exponential model is known, then the number of hidden layers will...
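
To illustrate the idea behind this construction (the data model, rates and training loop below are assumptions for the sketch, not the authors' setup): for exponentially distributed data the log-odds are linear in the sufficient statistic, so feeding fixed sufficient-statistic inputs into a single trainable output neuron already recovers a Bayes-type discriminant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes of exponentially distributed data (the rates are illustrative).
lam0, lam1, n = 1.0, 3.0, 5000
x0 = rng.exponential(1.0 / lam0, n)          # class 0
x1 = rng.exponential(1.0 / lam1, n)          # class 1
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])

# "Hidden layer": fixed sufficient statistic of the exponential model, T(x) = x,
# plus a bias unit.  Only the output neuron's synaptic weights are trained.
H = np.column_stack([np.ones_like(x), x])
w = np.zeros(2)

for _ in range(5000):                        # plain gradient descent on cross-entropy
    p = 1.0 / (1.0 + np.exp(-H @ w))
    w -= 0.1 * H.T @ (p - y) / len(y)

# For equal priors the Bayes log-odds are log(lam1/lam0) + (lam0 - lam1) * x,
# so the learned weights should approximate these coefficients.
print("learned:", w)
print("Bayes  :", np.log(lam1 / lam0), lam0 - lam1)
```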

A chunking mechanism in a neural system for the parallel processing of propositional production rules.

Ernesto Burattini, A. Pasconcino, Guglielmo Tamburrini (1995)

Mathware and Soft Computing

Similarity:

The problem of extracting more compact rules from a rule-based knowledge base is approached by means of a chunking mechanism implemented via a neural system. By taking advantage of the parallel processing potential of neural systems, the computational problem that normally arises when introducing chunking processes is overcome. The memory saturation effect is also handled by a forgetting mechanism which allows the system to eliminate previously stored, but less often...
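
A toy sketch of the two ingredients mentioned in the abstract, chunk formation and usage-based forgetting; the class, capacity and eviction policy below are illustrative assumptions rather than the authors' neural implementation.

```python
from collections import Counter

class ChunkStore:
    """Toy rule memory with a usage-based forgetting policy (illustrative only)."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.chunks = {}        # chunk name -> (premises, conclusion)
        self.usage = Counter()  # how often each chunk has fired

    def chunk(self, name, premises, conclusion):
        """Collapse a chain of rules into one compact chunk."""
        self.chunks[name] = (frozenset(premises), conclusion)
        self.usage[name] = 0
        self._forget_if_saturated()

    def fire(self, facts):
        """Apply every chunk whose premises hold for the current facts."""
        derived = set()
        for name, (premises, conclusion) in self.chunks.items():
            if premises <= facts:
                self.usage[name] += 1
                derived.add(conclusion)
        return derived

    def _forget_if_saturated(self):
        # Forgetting: when memory saturates, drop the least often used chunk
        # (ties are resolved in favour of forgetting the oldest chunk).
        while len(self.chunks) > self.capacity:
            victim = min(self.chunks, key=lambda name: self.usage[name])
            del self.chunks[victim]
            del self.usage[victim]

store = ChunkStore(capacity=2)
store.chunk("r1", {"a", "b"}, "c")
store.chunk("r2", {"c"}, "d")
print(store.fire({"a", "b"}))        # {'c'} -- only r1 matches, so only r1 is "used"
store.chunk("r3", {"c"}, "e")        # exceeds capacity: the unused r2 is forgotten
print(sorted(store.chunks))          # ['r1', 'r3']
```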

A heuristic forecasting model for stock decision making.

D. Zhang, Q. Jiang, X. Li (2005)

Mathware and Soft Computing

Similarity:

This paper describes a heuristic forecasting model based on neural networks for stock decision-making. Several heuristic strategies are presented for enhancing the learning capability of the neural networks and obtaining better trading performance. The China Shanghai Composite Index is used as a case study. The forecasting model derives buy and sell signals from the neural network's predictions. Results are compared with a benchmark buy-and-hold strategy. The forecasting...
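
A hedged sketch of this evaluation scheme: one-step-ahead forecasts are turned into buy/sell signs and the resulting return is compared with buy-and-hold. The random-walk prices and the moving-average "forecaster" below are placeholders for the index data and the neural network used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder price series (random walk) standing in for the index data.
prices = 100.0 * np.exp(np.cumsum(0.001 + 0.01 * rng.standard_normal(500)))

def forecast(history):
    """Stand-in forecaster: tomorrow's price = today's price plus the average of
    recent changes.  In the paper this role is played by a neural network."""
    recent = np.diff(history[-20:])
    return history[-1] + recent.mean()

signals = []                     # 1 = hold a long position, 0 = stay in cash
for t in range(20, len(prices) - 1):
    signals.append(1 if forecast(prices[: t + 1]) > prices[t] else 0)

returns = np.diff(prices)[20:] / prices[20:-1]          # daily returns after warm-up
strategy_return = np.prod(1 + np.array(signals) * returns) - 1
buy_and_hold = prices[-1] / prices[20] - 1

print(f"signal strategy: {strategy_return:+.1%}")
print(f"buy-and-hold   : {buy_and_hold:+.1%}")
```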

Analysis of the ReSuMe learning process for spiking neural networks

Filip Ponulak (2008)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this paper we analyse the learning process of the ReSuMe method for spiking neural networks (Ponulak, 2005; Ponulak, 2006b). We investigate how the particular parameters of the learning algorithm affect the learning process and consider how to speed up adaptation while maintaining the stability of the optimal solution. This is an important issue in many real-life tasks where neural networks are applied and where fast learning convergence...
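
For orientation, a simplified discrete-time sketch of a ReSuMe-style update (loosely following the published rule, not a reference implementation): the weight moves in proportion to the difference between desired and actual output spikes, gated by a trace of recent presynaptic activity. The learning rate and trace time constant below are exactly the kind of parameters whose effect on convergence speed and stability such an analysis examines.

```python
import numpy as np

def resume_like_update(w, s_in, s_d, s_o, lr=0.05, a=0.01, tau=5.0, dt=1.0):
    """Simplified, discrete-time sketch of a ReSuMe-style weight update.

    s_in, s_d, s_o are 0/1 arrays: presynaptic, desired-output and actual-output
    spike trains on a common time grid.  At each step the weight changes in
    proportion to (desired - actual) output activity, scaled by a constant term
    plus an exponentially decaying trace of recent presynaptic spikes.
    This illustrates the rule's structure only.
    """
    trace = 0.0
    for t in range(len(s_in)):
        trace = trace * np.exp(-dt / tau) + s_in[t]          # presynaptic spike trace
        w += lr * (s_d[t] - s_o[t]) * (a + trace)
    return w

# Toy example: the output neuron spikes later than desired, so the synapse that
# carries the earlier presynaptic spikes ends up strengthened.
s_in = np.array([0, 1, 0, 0, 1, 0, 0, 0, 0, 0])
s_d  = np.array([0, 0, 1, 0, 0, 1, 0, 0, 0, 0])   # desired output spikes
s_o  = np.array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0])   # actual (late) output spike
print(resume_like_update(1.0, s_in, s_d, s_o))
```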

Integrating inference and neural classification in a hybrid system for recognition tasks.

Massimo De Gregorio (1996)

Mathware and Soft Computing

Similarity:

While coupling artificial neural networks (ANN) and symbolic AI (SAI) is a strategy adopted in many hybrid systems, a real integration of the two methodologies has not yet been thoroughly investigated: so far, most hybrid systems have been viewed as just an engineering shortcut for solving complex problems in which one methodology alone seems too weak. In this paper, an approach to integrating ANN and SAI is presented. The basic idea explored here is that there is much more to...
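
As a loose illustration of the integration theme only (the labels, rules and scores below are invented, not the paper's architecture): a neural classifier proposes ranked hypotheses and a small symbolic layer applies domain rules to accept or veto them.

```python
# Hypothetical sketch of coupling a neural classifier with symbolic inference.
# Class names, rules and confidence scores are invented for illustration.

def neural_classifier(image_features):
    """Stand-in for an ANN: returns candidate labels with confidence scores."""
    return [("cat", 0.55), ("dog", 0.40), ("car", 0.05)]

RULES = [
    # Each rule inspects the scene context and may veto a hypothesis.
    lambda label, context: not (label == "car" and context["indoor"]),
]

def hybrid_recognize(image_features, context):
    """Keep the best neural hypothesis that survives every symbolic rule."""
    for label, score in sorted(neural_classifier(image_features),
                               key=lambda pair: pair[1], reverse=True):
        if all(rule(label, context) for rule in RULES):
            return label, score
    return None, 0.0

print(hybrid_recognize(image_features=None, context={"indoor": True}))
```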