Displaying similar documents to “A survey of factors influencing MLP error surface”

A simplex trained neural network-based architecture for sensor fusion and tracking of target maneuvers

Yee Chin Wong, Malur K. Sundareshan (1999)

Kybernetika

Similarity:

One of the major applications for which neural network-based methods are being successfully employed is the design of intelligent integrated processing architectures that efficiently implement sensor fusion operations. In this paper we present a novel scheme for developing fused decisions for surveillance and tracking in typical multi-sensor environments characterized by disparity in the data streams arriving from the various sensors. This scheme employs an integration of a...

Neural network segmentation of images from stained cucurbits leaves with colour symptoms of biotic and abiotic stresses

Jarosław Gocławski, Joanna Sekulska-Nalewajko, Elżbieta Kuźniak (2012)

International Journal of Applied Mathematics and Computer Science

Similarity:

The increased production of Reactive Oxygen Species (ROS) in plant leaf tissues is a hallmark of a plant's reaction to various environmental stresses. This paper describes an automatic segmentation method for scanned images of cucurbit leaves stained to visualise ROS accumulation sites, which are marked by specific colour hues and intensities. The leaves, placed separately in the scanner view field on a colour background, are extracted by thresholding in the RGB colour space, then cleaned from...
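The RGB thresholding step mentioned above can be sketched as follows; this is a minimal illustration on toy data with assumed thresholds, not the paper's actual segmentation pipeline:

```python
import numpy as np

# Toy image: a blue "background" with a green "leaf" patch in the middle.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :] = (30, 60, 200)       # blue background pixels
img[1:3, 1:3] = (40, 160, 50)   # green leaf pixels

# Threshold in RGB: keep pixels whose green channel dominates red and blue.
r = img[..., 0].astype(int)
g = img[..., 1].astype(int)
b = img[..., 2].astype(int)
leaf_mask = (g > r) & (g > b)

print(leaf_mask.sum())  # number of pixels classified as leaf
```

Real leaf images would of course need calibrated thresholds (or a trained classifier) rather than this green-dominance rule.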

Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network

Maciej Huk (2012)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this paper the Sigma-if artificial neural network model is considered, which is a generalization of an MLP network with sigmoidal neurons. It was found to be a potentially universal tool for automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines an error backpropagation algorithm with the self-consistency paradigm widely...

Comparison of supervised learning methods for spike time coding in spiking neural networks

Andrzej Kasiński, Filip Ponulak (2006)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this review we focus our attention on supervised learning methods for spike time coding in Spiking Neural Networks (SNNs). This study is motivated by recent experimental results regarding information coding in biological neural systems, which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We are concerned with the fundamental question: What paradigms of neural temporal coding can be implemented with the recent learning methods?...

Image recall using a large scale generalized Brain-State-in-a-Box neural network

Cheolhwan Oh, Stanisław Żak (2005)

International Journal of Applied Mathematics and Computer Science

Similarity:

An image recall system using a large scale associative memory employing the generalized Brain-State-in-a-Box (gBSB) neural network model is proposed. The gBSB neural network can store binary vectors as stable equilibrium points. This property is used to store images in the gBSB memory. When a noisy image is presented as an input to the gBSB network, the gBSB net processes it to filter out the noise. The overlapping decomposition method is utilized to efficiently process images using...
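The storage-and-recall mechanism described above can be illustrated with a minimal Brain-State-in-a-Box sketch: bipolar patterns are stored via a Hebbian (outer-product) weight matrix, and a noisy input is iterated under the saturating BSB dynamics until it settles on a stored equilibrium. This is a generic BSB toy example, not the paper's large-scale gBSB system with overlapping decomposition:

```python
import numpy as np

def train_hebbian(patterns):
    # Outer-product (Hebbian) weights for autoassociative recall.
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns)
    return W / n

def bsb_recall(W, x, alpha=0.2, steps=50):
    # BSB iteration: x <- clip(x + alpha * W x). Clipping confines the
    # trajectory to the hypercube [-1, 1]^n -- "the box".
    for _ in range(steps):
        x = np.clip(x + alpha * W @ x, -1.0, 1.0)
    return x

# Store two orthogonal bipolar patterns as stable equilibrium points.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
], dtype=float)
W = train_hebbian(patterns)

# Present a corrupted version of the first pattern; the network
# filters out the noise and converges back to the stored vector.
noisy = patterns[0].copy()
noisy[0] = -noisy[0]  # flip one component
recalled = bsb_recall(W, noisy)
print(np.sign(recalled))
```

With few, mutually orthogonal patterns the stored vectors are fixed points of the dynamics; capacity and basin shape degrade as more (or correlated) patterns are stored.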

An effective way to generate neural network structures for function approximation.

Andreas Bastian (1994)

Mathware and Soft Computing

Similarity:

One still open question in the study of multi-layer feedforward neural networks concerns the number of neurons in the hidden layer(s). Especially in real-life applications, this problem is often solved by heuristic methods. In this work an effective way to dynamically determine the number of hidden units in a three-layer feedforward neural network for function approximation is proposed.
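The general idea of growing a hidden layer dynamically can be sketched as follows; this is an illustrative random-feature variant (hidden units with random tanh activations, output weights fit by least squares), not Bastian's actual procedure:

```python
import numpy as np

def fit_readout(H, y):
    # Least-squares output weights for fixed hidden activations H.
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def grow_hidden_layer(x, y, tol=1e-3, max_units=50, seed=0):
    # Add hidden units one at a time until the training MSE drops
    # below `tol` -- a generic growing heuristic for illustration.
    rng = np.random.default_rng(seed)
    X = x.reshape(-1, 1)
    H = np.ones((len(x), 1))  # bias column
    for n_units in range(1, max_units + 1):
        a, b = rng.normal(scale=3.0), rng.normal()
        H = np.hstack([H, np.tanh(a * X + b)])  # one new hidden unit
        w = fit_readout(H, y)
        mse = np.mean((H @ w - y) ** 2)
        if mse < tol:
            return n_units, mse
    return max_units, mse

# Approximate a simple target function, letting the network decide
# how many hidden units it needs.
x = np.linspace(-1, 1, 200)
y = np.sin(3 * x)
n_units, mse = grow_hidden_layer(x, y)
print(n_units, mse)
```

Constructive schemes like this trade a fixed, guessed architecture for a stopping criterion on the approximation error; the stopping tolerance then plays the role the hidden-layer size played before.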