An Application of Discriminant Analysis and Artificial Neural Networks to Classification Problems
Jasna Soldić-Aleksić (2001)
The Yugoslav Journal of Operations Research
Matteo Matteucci, Dario Spadoni (2004)
International Journal of Applied Mathematics and Computer Science
Similarity:
In this paper we focus on the problem of using a genetic algorithm for model selection within a Bayesian framework. We propose to reduce the model selection problem to a search problem solved using evolutionary computation to explore a posterior distribution over the model space. As a case study, we introduce ELeaRNT (Evolutionary Learning of Rich Neural Network Topologies), a genetic algorithm which evolves a particular class of models, namely, Rich Neural Networks (RNN), in order to...
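The abstract reduces model selection to an evolutionary search over a posterior on the model space. A minimal sketch of that search loop, assuming a toy genome of hidden-layer sizes and a made-up fitness standing in for the posterior score (ELeaRNT itself evolves far richer topologies):

```python
import random

random.seed(0)

# Illustrative stand-in for a posterior score over topologies: each genome
# encodes two hidden-layer sizes; this fitness simply prefers a total
# capacity near 24 units (an arbitrary choice for the sketch).
def fitness(genome):
    return -abs(sum(genome) - 24) - 0.1 * len(genome)

def mutate(genome):
    g = genome[:]
    i = random.randrange(len(g))
    g[i] = max(1, g[i] + random.choice([-2, -1, 1, 2]))
    return g

def crossover(a, b):
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=40):
    pop = [[random.randint(1, 16) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the best half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

The elitist selection plus mutation/crossover loop is the generic GA skeleton the paper builds on; the actual work lies in the genome encoding and the Bayesian fitness, both elided here.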
Igor Vajda, Belomír Lonek, Viktor Nikolov, Arnošt Veselý (1998)
Kybernetika
Similarity:
For general Bayes decision rules, perceptron approximations based on sufficient-statistic inputs are considered. Particular attention is paid to Bayes discrimination and classification. In the case of exponentially distributed data with a known model it is shown that a perceptron with one hidden layer is sufficient and the learning is restricted to the synaptic weights of the output neuron. If only the dimension of the exponential model is known, then the number of hidden layers will...
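The simplest instance of the result described above: for one-dimensional Gaussian classes with a known shared variance, the sufficient statistic is x itself and the Bayes posterior is a logistic function of a linear score, so a single output neuron suffices. The means, variance, and priors below are illustrative choices, not taken from the paper:

```python
import math

# Two Gaussian classes with known common variance; parameters are
# illustrative. The log-likelihood ratio is linear in the sufficient
# statistic x, so one "neuron" (w, b) realises the Bayes rule exactly.
MU0, MU1, SIGMA2 = -1.0, 1.0, 1.0
P0, P1 = 0.5, 0.5

def bayes_posterior_class1(x):
    w = (MU1 - MU0) / SIGMA2
    b = (MU0**2 - MU1**2) / (2 * SIGMA2) + math.log(P1 / P0)
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def classify(x):
    return 1 if bayes_posterior_class1(x) >= 0.5 else 0
```

With equal priors and symmetric means the decision boundary sits at x = 0, as expected.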
Jiří Grim (2007)
Kybernetika
Similarity:
We summarize the main results on probabilistic neural networks recently published in a series of papers. Within the framework of statistical pattern recognition, we assume approximation of the class-conditional distributions by finite mixtures of product components. The probabilistic neurons correspond to mixture components and can be interpreted in neurophysiological terms. In this way we find a possible theoretical background for the functional properties of neurons. For example,...
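A minimal sketch of the mixture-of-products construction described above, with one "probabilistic neuron" per component: each class-conditional density is a weighted sum of products of independent Bernoulli factors. All parameters here are illustrative, not from the paper:

```python
# Each component is (weight, per-dimension Bernoulli parameters).
COMPONENTS = {
    0: [(0.6, [0.9, 0.8, 0.1]), (0.4, [0.7, 0.6, 0.2])],
    1: [(0.5, [0.1, 0.2, 0.9]), (0.5, [0.3, 0.1, 0.8])],
}
PRIORS = {0: 0.5, 1: 0.5}

def component_prob(theta, x):
    # Product distribution: independent binary features.
    p = 1.0
    for t, xi in zip(theta, x):
        p *= t if xi else (1 - t)
    return p

def class_conditional(c, x):
    # Finite mixture of product components = the class-conditional density.
    return sum(w * component_prob(theta, x) for w, theta in COMPONENTS[c])

def posterior(c, x):
    evidence = sum(PRIORS[k] * class_conditional(k, x) for k in PRIORS)
    return PRIORS[c] * class_conditional(c, x) / evidence
```

Evaluating `posterior` on a binary input vector yields the Bayes posterior under this model; the mixture components play the role of the probabilistic neurons.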
Igor Vajda, Jiří Grim (1998)
Kybernetika
Similarity:
Neural networks with radial basis functions are considered, together with the Shannon information that their output carries about the input. The role of information-preserving input transformations is discussed when the network is specified by the maximum information principle and by the maximum likelihood principle. A transformation is found which simplifies the input structure in the sense that it minimizes the entropy in the class of all information-preserving transformations. Such a transformation need...
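For reference, the network architecture the abstract refers to is just Gaussian units on fixed centres followed by a linear output layer. A minimal sketch with illustrative centres, width, and weights (the information-theoretic analysis itself is not reproduced here):

```python
import math

# Radial-basis-function network: Gaussian units on fixed centres,
# linear output layer. All parameters are illustrative.
CENTERS = [-1.0, 0.0, 1.0]
WIDTH = 0.5
WEIGHTS = [0.2, 1.0, -0.3]
BIAS = 0.1

def rbf_unit(c, x):
    # Gaussian radial basis function centred at c.
    return math.exp(-((x - c) ** 2) / (2 * WIDTH ** 2))

def network(x):
    return BIAS + sum(w * rbf_unit(c, x) for w, c in zip(WEIGHTS, CENTERS))
```

Far from all centres the units vanish and the output decays to the bias; an invertible (information-preserving) transformation of x leaves the mutual information between input and output unchanged, which is the class of transformations the paper studies.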