Displaying similar documents to “An Application of Discriminant Analysis and Artificial Neural Networks to Classification Problems”

Neural networks using Bayesian training

Gabriela Andrejková, Miroslav Levický (2003)

Kybernetika

Similarity:

Bayesian probability theory provides a framework for data modeling. In this framework it is possible to find models that are well matched to the data, and to use these models to make nearly optimal predictions. In connection with neural networks, and especially with neural network learning, the theory is interpreted as inference of the most probable parameters of the model given the training data. This article describes an application of neural networks using Bayesian training...
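
For intuition, here is a minimal Python sketch of that interpretation, assuming a tiny two-layer network and synthetic data (not the article's setup): under a Gaussian prior over the weights, finding the most probable weights (MAP estimation) reduces to gradient descent on squared error plus a weight-decay penalty.

```python
import numpy as np

# Minimal sketch: Bayesian learning viewed as MAP estimation.
# A Gaussian prior over the weights contributes a weight-decay term
# alpha*|w|^2 to the squared-error loss, and the "most probable"
# weights are found by gradient descent on the negative log posterior.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (50, 1))                      # toy inputs
y = np.sin(3 * X) + 0.1 * rng.standard_normal((50, 1))   # noisy targets

W1 = 0.5 * rng.standard_normal((1, 8))   # input -> hidden weights
W2 = 0.5 * rng.standard_normal((8, 1))   # hidden -> output weights
alpha, beta, lr = 1e-3, 1.0, 0.05        # prior precision, noise precision, step

for _ in range(2000):
    H = np.tanh(X @ W1)                  # hidden activations
    err = H @ W2 - y                     # prediction error
    gW2 = beta * H.T @ err + alpha * W2  # grad of neg. log posterior wrt W2
    gH = beta * err @ W2.T * (1 - H**2)  # backprop through tanh
    gW1 = X.T @ gH + alpha * W1          # grad wrt W1
    W1 -= lr * gW1 / len(X)
    W2 -= lr * gW2 / len(X)

print("final RMSE:", float(np.sqrt(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))))
```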

A heuristic forecasting model for stock decision making.

D. Zhang, Q. Jiang, X. Li (2005)

Mathware and Soft Computing

Similarity:

This paper describes a heuristic forecasting model based on neural networks for stock decision-making. Some heuristic strategies are presented for enhancing the learning capability of neural networks and obtaining better trading performance. The China Shanghai Composite Index is used as a case study. The forecasting model can forecast buy and sell signals from the results of the neural network's predictions. Results are compared with a benchmark buy-and-hold strategy. The forecasting...
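
As a rough illustration of the decision pipeline only, the sketch below substitutes a naive linear autoregressive predictor for the paper's neural network: hold the index when the predicted next-day return is positive, stay out otherwise, and compare the cumulative return with buy-and-hold. The data and lag length are synthetic assumptions.

```python
import numpy as np

# Illustrative sketch only: a linear AR model stands in for the
# paper's neural network. Buy when the predicted next return is
# positive, otherwise stay in cash; compare with buy-and-hold.
rng = np.random.default_rng(1)
returns = 0.0005 + 0.01 * rng.standard_normal(500)   # synthetic daily returns

lag = 5
X = np.column_stack([returns[i:len(returns) - lag + i] for i in range(lag)])
y = returns[lag:]                                    # next-day returns
w, *_ = np.linalg.lstsq(X, y, rcond=None)            # fit the "forecaster"

signal = (X @ w > 0).astype(float)                   # 1 = hold the index
strategy = np.prod(1 + signal * y) - 1
buy_hold = np.prod(1 + y) - 1
print(f"strategy return {strategy:.2%} vs buy-and-hold {buy_hold:.2%}")
```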

About the maximum information and maximum likelihood principles

Igor Vajda, Jiří Grim (1998)

Kybernetika

Similarity:

Neural networks with radial basis functions are considered, together with the Shannon information their output carries about the input. The role of information-preserving input transformations is discussed when the network is specified by the maximum information principle and by the maximum likelihood principle. A transformation is found which simplifies the input structure in the sense that it minimizes the entropy in the class of all information-preserving transformations. Such a transformation need...
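
The sketch below merely illustrates the two quantities in play, a radial-basis-function layer and a (histogram) estimate of the Shannon entropy of its output; the centers, widths, and estimator are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Rough sketch: an RBF layer and a histogram estimate of the Shannon
# entropy of its scalar output. Everything here is illustrative.
rng = np.random.default_rng(2)
x = rng.standard_normal((1000, 2))                 # inputs
centers = rng.standard_normal((5, 2))              # RBF centers
width = 1.0

d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
phi = np.exp(-d2 / (2 * width**2))                 # RBF activations
out = phi.mean(axis=1)                             # scalar network output

hist, _ = np.histogram(out, bins=30)
p = hist[hist > 0] / hist.sum()                    # empirical distribution
print("entropy estimate (nats):", float(-(p * np.log(p)).sum()))
```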

Evolutionary learning of rich neural networks in the Bayesian model selection framework

Matteo Matteucci, Dario Spadoni (2004)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this paper we focus on the problem of using a genetic algorithm for model selection within a Bayesian framework. We propose to reduce the model selection problem to a search problem solved using evolutionary computation to explore a posterior distribution over the model space. As a case study, we introduce ELeaRNT (Evolutionary Learning of Rich Neural Network Topologies), a genetic algorithm which evolves a particular class of models, namely, Rich Neural Networks (RNN), in order to...
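
To make the search concrete, here is a toy sketch in the same spirit; the BIC-style fitness (standing in for the Bayesian evidence), the random-feature fit, and the truncation/mutation scheme are illustrative assumptions, not ELeaRNT's actual encoding or operators.

```python
import numpy as np

# Sketch of GA-based model selection: each individual encodes one
# hidden-layer width; fitness approximates the model evidence with a
# BIC-style penalty on a quick random-feature least-squares fit.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (120, 1))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(120)

def fitness(width):
    W = rng.standard_normal((1, width))          # fixed random hidden layer
    H = np.tanh(X @ W)
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    rss = ((H @ coef - y) ** 2).sum()
    return -(len(y) * np.log(rss / len(y)) + width * np.log(len(y)))  # ~ -BIC

pop = rng.integers(1, 30, size=10)               # initial widths
for _ in range(20):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-5:]]       # truncation selection
    children = np.clip(parents + rng.integers(-3, 4, size=5), 1, 60)  # mutate
    pop = np.concatenate([parents, children])

print("selected hidden width:", pop[np.argmax([fitness(w) for w in pop])])
```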

Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network

Maciej Huk (2012)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this paper the Sigma-if artificial neural network model, a generalization of the MLP network with sigmoidal neurons, is considered. It was found to be a potentially universal tool for the automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines an error backpropagation algorithm with the self-consistency paradigm widely...
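
For readers new to the model, the following hedged sketch shows the kind of conditional aggregation a Sigma-if neuron performs: inputs are partitioned into groups that are read one group at a time, and aggregation stops early once the partial sum crosses a threshold. The grouping, weights, and threshold are invented for illustration.

```python
import numpy as np

# Hedged sketch of a Sigma-if style neuron: once the partial weighted
# sum crosses the threshold, the remaining input groups are never
# read at all (selective attention).
def sigma_if_neuron(x, w, groups, theta=1.0):
    total, used = 0.0, []
    for g in groups:                       # each g is a list of input indices
        total += float(np.dot(w[g], x[g])) # aggregate this group
        used.extend(g)
        if abs(total) >= theta:            # enough evidence: stop early
            break
    return 1.0 / (1.0 + np.exp(-total)), used  # sigmoid of the partial sum

x = np.array([0.9, 0.1, -0.4, 0.7])
w = np.array([1.5, 0.3, 0.2, 0.8])
groups = [[0], [1, 2], [3]]                # most informative inputs first
out, read = sigma_if_neuron(x, w, groups)
print(f"output {out:.3f}, inputs actually read: {read}")
```

Because later groups may never be read, the aggregation is highly nonlinear in the inputs, which is why, as the abstract notes, training needs more than plain backpropagation.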

Comparison of supervised learning methods for spike time coding in spiking neural networks

Andrzej Kasiński, Filip Ponulak (2006)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this review we focus our attention on supervised learning methods for spike time coding in Spiking Neural Networks (SNNs). This study is motivated by recent experimental results regarding information coding in biological neural systems, which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We are concerned with the fundamental question: What paradigms of neural temporal coding can be implemented with the recent learning methods?...
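
The toy example below is only meant to convey the flavor of supervised spike-time learning: a heavily simplified delta rule nudges the weights of a leaky integrate-and-fire neuron until its first output spike lands on a desired time. It is not one of the reviewed methods; the dynamics, update rule, and constants are assumptions.

```python
import numpy as np

# Toy spike-time learning: move a LIF neuron's first output spike
# toward a target time by strengthening weights when the spike is
# late and weakening them when it is early.
def first_spike_time(w, spikes, tau=5.0, thresh=1.0, T=50.0, dt=0.1):
    v = 0.0
    for step in range(int(T / dt)):
        t = step * dt
        v -= dt * v / tau                            # leak
        v += sum(wi for wi, ts in zip(w, spikes)     # impulse inputs
                 if abs(t - ts) < dt / 2)
        if v >= thresh:
            return t
    return T                                         # no spike in the window

spikes = [1.0, 3.0, 6.0]                             # input spike times (ms)
w = np.array([0.3, 0.3, 0.3])
t_desired, lr = 6.0, 0.02

for _ in range(100):
    t_out = first_spike_time(w, spikes)
    for i, ts in enumerate(spikes):                  # credit inputs seen so far
        if ts <= t_out:
            w[i] += lr * (t_out - t_desired)         # late -> stronger

print(f"first spike at {first_spike_time(w, spikes):.1f} ms "
      f"(target {t_desired} ms)")
```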

An effective way to generate neural network structures for function approximation.

Andreas Bastian (1994)

Mathware and Soft Computing

Similarity:

One still-open question in research on multi-layer feedforward neural networks concerns the number of neurons in the hidden layer(s). Especially in real-life applications, this problem is often solved by heuristic methods. In this work an effective way to dynamically determine the number of hidden units in a three-layer feedforward neural network for function approximation is proposed.
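
One family of such dynamic schemes is constructive growing, sketched below under the assumption (ours, not necessarily the paper's) that hidden units are added one at a time for as long as held-out error keeps improving.

```python
import numpy as np

# Constructive-growing sketch: append one random hidden unit at a
# time, refit the output weights by least squares, and stop when an
# extra unit no longer reduces validation error.
rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(5 * X[:, 0]) + 0.05 * rng.standard_normal(200)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

def fit(W):
    coef, *_ = np.linalg.lstsq(np.tanh(Xtr @ W), ytr, rcond=None)
    err = float(np.mean((np.tanh(Xva @ W) @ coef - yva) ** 2))
    return coef, err

W = 3.0 * rng.standard_normal((1, 1))            # start with one hidden unit
coef, best_err = fit(W)
for _ in range(30):                              # cap on growth
    W_new = np.hstack([W, 3.0 * rng.standard_normal((1, 1))])
    coef_new, err = fit(W_new)
    if err >= best_err:                          # new unit did not help: stop
        break
    W, coef, best_err = W_new, coef_new, err

print("hidden units:", W.shape[1], "validation MSE:", round(best_err, 5))
```

Stopping at the first unit that fails to help is the simplest criterion; more careful schemes retrain the whole network or allow a patience window before halting.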