Displaying similar documents to “A new approach to image reconstruction from projections using a recurrent neural network”

Comparison of supervised learning methods for spike time coding in spiking neural networks

Andrzej Kasiński, Filip Ponulak (2006)

International Journal of Applied Mathematics and Computer Science

In this review we focus our attention on supervised learning methods for spike time coding in Spiking Neural Networks (SNNs). This study is motivated by recent experimental results regarding information coding in biological neural systems, which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We are concerned with the fundamental question: What paradigms of neural temporal coding can be implemented with the recent learning methods?...

Image recall using a large scale generalized Brain-State-in-a-Box neural network

Cheolhwan Oh, Stanisław Żak (2005)

International Journal of Applied Mathematics and Computer Science

An image recall system using a large scale associative memory employing the generalized Brain-State-in-a-Box (gBSB) neural network model is proposed. The gBSB neural network can store binary vectors as stable equilibrium points, a property used to store images in the gBSB memory. When a noisy image is presented as input, the network processes it to filter out the noise. The overlapping decomposition method is utilized to efficiently process images using...
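The recall mechanism described above can be illustrated with a minimal sketch. The weight matrix, step size `alpha`, and single-pattern Hebbian construction below are illustrative assumptions, not the paper's large-scale setup; the sketch only shows the core gBSB idea that stored bipolar patterns are stable saturation points which attract noisy inputs:

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)
v = rng.choice([-1.0, 1.0], size=n)   # stored bipolar pattern
W = np.outer(v, v) / n                # illustrative Hebbian weights (one pattern)

def gbsb_step(x, alpha=0.5):
    # gBSB iteration: linear update, then saturation of each component to [-1, 1]
    return np.clip(x + alpha * W @ x, -1.0, 1.0)

# Present a "noisy image": flip 8 of the 64 components.
x = v.copy()
flip = rng.choice(n, size=8, replace=False)
x[flip] *= -1

# Iterating the network drives the state back to the stored pattern.
for _ in range(50):
    x = gbsb_step(x)
```

Because the saturation nonlinearity pins converged components at the hypercube vertices, the recalled state can be read off directly as a binary vector.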

Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network

Maciej Huk (2012)

International Journal of Applied Mathematics and Computer Science

In this paper the Sigma-if artificial neural network model is considered, which is a generalization of an MLP network with sigmoidal neurons. It was found to be a potentially universal tool for automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines an error backpropagation algorithm with the self-consistency paradigm widely...
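The "selective attention" aspect can be sketched roughly as conditional aggregation: inputs are partitioned into groups, the groups are scanned in order of importance, and aggregation stops once the running weighted sum reaches a threshold. The grouping, the threshold `phi_star`, and the stopping rule below are a simplified assumption based on the abstract, not the paper's exact formulation:

```python
import math

def sigma_if_neuron(inputs, weights, groups, phi_star=0.6):
    """Simplified sketch of conditional (Sigma-if style) aggregation:
    process input groups in importance order and stop early once the
    partial weighted sum reaches the threshold phi_star (hypothetical)."""
    total = 0.0
    for group in groups:          # groups assumed ordered by importance
        total += sum(weights[i] * inputs[i] for i in group)
        if total >= phi_star:     # selective attention: remaining,
            break                 # less important groups are skipped
    return 1.0 / (1.0 + math.exp(-total))  # sigmoidal output
```

For example, with `inputs=[1, 1, 1, 1]`, `weights=[0.4, 0.4, 0.1, 0.1]`, and `groups=[[0, 1], [2, 3]]`, the first group already yields a partial sum of 0.8, so the second group is never aggregated. This data-dependent skipping is what makes training the network harder than for a plain MLP, motivating the combination of backpropagation with the self-consistency paradigm mentioned in the abstract.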

An effective way to generate neural network structures for function approximation

Andreas Bastian (1994)

Mathware and Soft Computing

One still open question in research on multi-layer feedforward neural networks concerns the number of neurons in the hidden layer(s). Especially in real-life applications, this problem is often solved by heuristic methods. In this work, an effective way to dynamically determine the number of hidden units in a three-layer feedforward neural network for function approximation is proposed.
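A generic growth heuristic of this kind can be sketched as follows. This is not Bastian's specific method, which the abstract does not detail; the random-feature hidden layer, least-squares output fit, and stopping tolerance `tol` are all illustrative assumptions showing only the "add hidden units until the error is small enough" pattern:

```python
import numpy as np

def grow_hidden_layer(X, y, tol=1e-3, max_units=50, seed=0):
    """Illustrative growth heuristic (not the paper's method): add tanh
    hidden units one at a time, refit the linear output layer by least
    squares, and stop once the training MSE drops below tol."""
    rng = np.random.default_rng(seed)
    for h in range(1, max_units + 1):
        W = rng.normal(scale=3.0, size=(X.shape[1], h))  # random input weights
        b = rng.normal(size=h)
        H = np.tanh(X @ W + b)                           # hidden activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights
        mse = np.mean((H @ beta - y) ** 2)
        if mse < tol:
            return h, mse
    return max_units, mse

# Example: approximate sin(3x) on [-1, 1].
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
h, mse = grow_hidden_layer(X, y)
```

The returned `h` is the smallest hidden-layer size (under this heuristic) whose training error meets the tolerance, replacing a fixed a priori choice.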

Determining the weights of a Fourier series neural network on the basis of the multidimensional discrete Fourier transform

Krzysztof Halawa (2008)

International Journal of Applied Mathematics and Computer Science

This paper presents a method for training a Fourier series neural network on the basis of the multidimensional discrete Fourier transform. The proposed method is characterized by low computational complexity. The article shows how the method can be used for modelling dynamic systems.
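The low computational complexity comes from obtaining all weights in one transform rather than by iterative training. A minimal one-dimensional sketch of the idea (the paper treats the multidimensional case; the example signal and grid size here are assumptions) is to sample the target over one period and read the network's cosine and sine weights directly off the FFT:

```python
import numpy as np

# Sample the target function on a uniform grid over one period.
N = 64
t = np.arange(N) * (2 * np.pi / N)
f = 1.0 + 2.0 * np.sin(t) + 0.5 * np.cos(3 * t)  # example signal

# A single DFT yields all "network weights" (Fourier coefficients) at once.
F = np.fft.fft(f)
a0 = F[0].real / N              # constant term
a = 2 * F[1:N // 2].real / N    # cosine weights a_k
b = -2 * F[1:N // 2].imag / N   # sine weights b_k

def fourier_net(x):
    # Fourier series network: weighted sum of harmonic basis functions.
    k = np.arange(1, N // 2)
    return a0 + np.sum(a * np.cos(np.outer(x, k))
                       + b * np.sin(np.outer(x, k)), axis=1)

x = np.linspace(0, 2 * np.pi, 100)
approx = fourier_net(x)
exact = 1.0 + 2.0 * np.sin(x) + 0.5 * np.cos(3 * x)
```

One FFT of N samples costs O(N log N), versus many epochs of gradient descent over the same weights, which is the complexity advantage the abstract refers to.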