Displaying similar documents to “Reconstructing a neural net from its output.”

The logic of neural networks.

Juan Luis Castro, Enric Trillas (1998)

Mathware and Soft Computing

Similarity:

This paper establishes the equivalence between multilayer feedforward networks and linear combinations of Lukasiewicz propositions. In this sense, multilayer feedforward networks have a logical interpretation, which should make it possible to apply logical techniques within the neural network framework.
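The flavor of this equivalence can be illustrated with a small sketch (not taken from the paper): the basic Lukasiewicz connectives are piecewise-linear functions on [0, 1], so each one is computable by a single neuron with a saturating linear activation. The weights and biases below are the standard ones for these connectives, not values claimed by the authors.

```python
import numpy as np

def sat(z):
    # Saturating linear activation: clip the affine output to [0, 1].
    return np.clip(z, 0.0, 1.0)

def luk_and(a, b):
    # Lukasiewicz conjunction max(0, a + b - 1):
    # a neuron with weights (1, 1) and bias -1.
    return sat(a + b - 1.0)

def luk_implies(a, b):
    # Lukasiewicz implication min(1, 1 - a + b):
    # a neuron with weights (-1, 1) and bias 1.
    return sat(1.0 - a + b)

print(luk_and(0.7, 0.6))      # approximately 0.3
print(luk_implies(0.9, 0.4))  # approximately 0.5
print(luk_and(0.2, 0.3))      # saturates at 0.0
```

Linear combinations of such units then yield the piecewise-linear maps that the paper identifies with feedforward networks.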

A new approach to image reconstruction from projections using a recurrent neural network

Robert Cierniak (2008)

International Journal of Applied Mathematics and Computer Science

Similarity:

A new neural network approach to image reconstruction from projections, taking into account the parallel geometry of the scanner, is presented. To solve this key problem in computed tomography, a special recurrent neural network is proposed. The reconstruction is performed by minimizing the energy function of this network. Computer simulations show that the neural network reconstruction algorithm designed in this way outperforms conventional methods in...
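The general idea of reconstruction by energy minimization can be sketched generically (this toy setup does not reproduce the paper's parallel-beam geometry or network): treat the projection operator as a matrix A, define the energy E(x) = ½‖Ax − p‖², and let the recurrent update feed the current estimate back through a gradient step until the energy settles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "projection" setup (stand-in for a scanner geometry):
# A maps an unknown image x to measured projections p.
n = 16
A = rng.standard_normal((32, n))
x_true = rng.standard_normal(n)
p = A @ x_true

x = np.zeros(n)
eta = 0.01  # step size, assumed small enough for convergence
for _ in range(2000):
    # Recurrent update: each iteration feeds the current estimate
    # back in, moving downhill on the energy E(x) = 0.5*||Ax - p||^2.
    x = x - eta * A.T @ (A @ x - p)

print(np.linalg.norm(x - x_true))  # reconstruction error shrinks toward 0
```

In the overdetermined, noise-free case this gradient flow recovers the original signal; the paper's contribution lies in tailoring the network and energy to the tomographic setting.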

Determining the weights of a Fourier series neural network on the basis of the multidimensional discrete Fourier transform

Krzysztof Halawa (2008)

International Journal of Applied Mathematics and Computer Science

Similarity:

This paper presents a method for training a Fourier series neural network on the basis of the multidimensional discrete Fourier transform. The proposed method is characterized by low computational complexity. The article shows how the method can be used for modelling dynamic systems.
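The core connection can be sketched for the simplest one-dimensional case (an illustration of the general idea, not the paper's multidimensional algorithm): if the network computes a truncated Fourier series, then its cosine and sine weights can be read directly off the DFT of equally spaced samples of the target function, with no iterative training.

```python
import numpy as np

# Model: y(x) = a_0/2 + sum_k [a_k cos(kx) + b_k sin(kx)].
N = 64
x = 2 * np.pi * np.arange(N) / N
target = 1.5 + np.cos(x) - 0.5 * np.sin(3 * x)   # toy signal

c = np.fft.fft(target) / N   # DFT coefficients (normalized)
K = 8                        # number of harmonics kept
a = 2 * c[:K].real           # cosine weights (a_0 comes out doubled)
b = -2 * c[:K].imag          # sine weights

# Evaluate the network with the weights obtained from the DFT.
y = a[0] / 2 + sum(a[k] * np.cos(k * x) + b[k] * np.sin(k * x)
                   for k in range(1, K))
print(np.allclose(y, target))  # the series reproduces the samples
```

Since the FFT computes all coefficients in O(N log N), this is what gives such a weight-determination scheme its low computational complexity compared with gradient-based training.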

Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network

Maciej Huk (2012)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this paper the Sigma-if artificial neural network model is considered, which is a generalization of an MLP network with sigmoidal neurons. It was found to be a potentially universal tool for automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines an error backpropagation algorithm with the self-consistency paradigm widely...

Comparison of supervised learning methods for spike time coding in spiking neural networks

Andrzej Kasiński, Filip Ponulak (2006)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this review we focus our attention on supervised learning methods for spike time coding in Spiking Neural Networks (SNNs). This study is motivated by recent experimental results regarding information coding in biological neural systems, which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We are concerned with the fundamental question: What paradigms of neural temporal coding can be implemented with the recent learning methods?...