Displaying similar documents to “A new robust training law for dynamic neural networks with external disturbance: an LMI approach.”

Local stability conditions for discrete-time cascade locally recurrent neural networks

Krzysztof Patan (2010)

International Journal of Applied Mathematics and Computer Science

Similarity:

The paper deals with a specific kind of discrete-time recurrent neural network designed with dynamic neuron models. Dynamics are reproduced within each single neuron, hence the network considered is a locally recurrent, globally feedforward one. Crucial problems with neural networks of this dynamic type are stability and stabilization during learning. The paper formulates local stability conditions for the analysed class of neural networks using Lyapunov's first method. Moreover,...
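For a discrete-time system, Lyapunov's first (indirect) method amounts to linearizing the dynamics around an equilibrium and checking that the spectral radius of the Jacobian is strictly below one. A minimal sketch for a single dynamic neuron with second-order feedback; the model and coefficients are illustrative assumptions, not the paper's network:

```python
import numpy as np

# Hedged sketch: first-method stability test for a dynamic neuron
# x_{k+1} = tanh(a * x_k + b * x_{k-1}).  Around the origin, tanh has
# slope 1, so the linearization is the companion matrix of (a, b).
def jacobian_at_origin(a, b):
    # State z_k = (x_k, x_{k-1}); Jacobian of the map at the equilibrium 0.
    return np.array([[a, b],
                     [1.0, 0.0]])

def locally_stable(a, b):
    # Discrete-time condition: spectral radius strictly below 1.
    return bool(max(abs(np.linalg.eigvals(jacobian_at_origin(a, b)))) < 1.0)

print(locally_stable(0.5, 0.3))  # True  (eigenvalues inside the unit circle)
print(locally_stable(1.2, 0.5))  # False (an eigenvalue escapes the circle)
```

The same spectral-radius test extends to a full locally recurrent network by stacking the per-neuron states and linearizing the composed map.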

Design of a multivariable neural controller for control of a nonlinear MIMO plant

Stanisław Bańka, Paweł Dworak, Krzysztof Jaroszewski (2014)

International Journal of Applied Mathematics and Computer Science

Similarity:

The paper presents the problem of training a set of neural nets to obtain a (gain-scheduling, adaptive) multivariable neural controller for a nonlinear MIMO dynamic process represented by a mathematical model of Low-Frequency (LF) motions of a drillship over the drilling point at the sea bottom. The designed neural controller contains a set of neural nets that determine the values of its parameters, chosen on the basis of two measured auxiliary signals. These are the ship's current...
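The gain-scheduling idea described above can be sketched as a bank of parameter-producing networks indexed by the operating region that two measured auxiliary signals fall into. The class, region grid, and stub "nets" below are illustrative assumptions, not the paper's design:

```python
import numpy as np

# Hedged sketch: select controller parameters from the net assigned to the
# operating region identified by two measured auxiliary signals.
class ScheduledController:
    def __init__(self, nets, edges1, edges2):
        self.nets = nets        # 2-D grid of parameter-producing callables
        self.edges1 = edges1    # bin edges for auxiliary signal 1
        self.edges2 = edges2    # bin edges for auxiliary signal 2

    def parameters(self, aux1, aux2, state):
        # Map each auxiliary signal to a region index, then query that net.
        i = int(np.clip(np.digitize(aux1, self.edges1), 0, len(self.nets) - 1))
        j = int(np.clip(np.digitize(aux2, self.edges2), 0, len(self.nets[0]) - 1))
        return self.nets[i][j](state)

# Toy "nets": constant-gain stubs standing in for trained networks.
nets = [[(lambda s, i=i, j=j: np.array([1.0 + i, 0.1 * j])) for j in range(2)]
        for i in range(2)]
ctrl = ScheduledController(nets, edges1=[0.0], edges2=[0.0])
print(ctrl.parameters(-1.0, 2.0, state=None))  # parameters from region (0, 1)
```

In the adaptive variant, each entry of `nets` would be a trained network evaluated on the current state rather than a constant stub.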

Neural network-based MRAC control of dynamic nonlinear systems

Ghania Debbache, Abdelhak Bennia, Noureddine Golea (2006)

International Journal of Applied Mathematics and Computer Science

Similarity:

This paper presents direct model reference adaptive control for a class of nonlinear systems with unknown nonlinearities. The model-following conditions are assured by using adaptive neural networks as the nonlinear state feedback controller. Both full-state-information and observer-based schemes are investigated. All the signals in the closed loop are guaranteed to be bounded, and the system state is proven to converge to a small neighborhood of the reference model state. It is also...
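A minimal sketch of the direct neural MRAC idea for a scalar plant x' = f(x) + u with unknown f: an RBF network estimate cancels f so the closed loop tracks a stable reference model, with a Lyapunov-type gradient law adapting the weights. The plant, gains, and RBF layout are illustrative assumptions, not the paper's class of systems:

```python
import numpy as np

def phi(x, centers, width=1.0):
    # Gaussian RBF features used to approximate the unknown nonlinearity.
    return np.exp(-((x - centers) ** 2) / width)

def simulate(T=4000, dt=0.01, gamma=20.0, a_m=2.0):
    centers = np.linspace(-2, 2, 9)
    W = np.zeros_like(centers)               # adaptive network weights
    x = x_m = 0.0
    f = lambda x: x ** 2 * np.sin(x)         # "unknown" plant nonlinearity
    for k in range(T):
        r = np.sin(dt * k)                   # bounded reference input
        # Certainty-equivalence law: cancel f_hat, impose model dynamics.
        u = -W @ phi(x, centers) - a_m * x + r
        e = x - x_m                          # tracking error
        W += dt * gamma * e * phi(x, centers)  # gradient adaptive law
        x += dt * (f(x) + u)                 # Euler step of the plant
        x_m += dt * (-a_m * x_m + r)         # reference model x_m' = -a_m x_m + r
    return abs(x - x_m)

print(simulate())  # tracking error settles into a small neighborhood of zero
```

The adaptive law W' = γ e φ(x) is the standard choice that cancels the weight-error term in the Lyapunov derivative, which is what yields the "small neighborhood" convergence claimed in the abstract.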

Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network

Maciej Huk (2012)

International Journal of Applied Mathematics and Computer Science

Similarity:

In this paper the Sigma-if artificial neural network model is considered, which is a generalization of an MLP network with sigmoidal neurons. It was found to be a potentially universal tool for the automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines an error backpropagation algorithm with the self-consistency paradigm widely...
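The selective-attention behaviour comes from the Sigma-if neuron's conditional aggregation: inputs are scanned group by group in priority order, and aggregation stops early once the partial weighted sum crosses a threshold, so low-priority inputs may never be read. A hedged sketch of one such forward pass; the grouping, threshold, and stopping rule are illustrative and may differ in detail from the paper's formulation:

```python
import numpy as np

def sigma_if_activation(x, w, groups, phi_star):
    # Conditional aggregation: accumulate the weighted sum group by group
    # (highest priority first) and stop once it reaches the threshold.
    total = 0.0
    for g in groups:
        total += float(np.dot(w[g], x[g]))
        if abs(total) >= phi_star:       # "if" part: skip remaining groups
            break
    return 1.0 / (1.0 + np.exp(-total))  # sigmoidal output on the partial sum

x = np.array([1.0, 0.5, -0.2, 0.8])
w = np.array([2.0, 1.0, 0.5, 0.5])
# First group already yields 2.5 >= 2.0, so the second group is never read.
print(sigma_if_activation(x, w, groups=[[0, 1], [2, 3]], phi_star=2.0))
```

The early stopping makes the aggregation non-differentiable at group boundaries, which is why plain backpropagation alone is insufficient and the training described above couples it with a self-consistency scheme.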