
Neural networks using Bayesian training

Gabriela Andrejková, Miroslav Levický (2003)

Kybernetika

Bayesian probability theory provides a framework for data modeling. In this framework it is possible to find models that are well matched to the data, and to use these models to make nearly optimal predictions. In connection with neural networks, and especially with neural network learning, the theory is interpreted as inference of the most probable parameters of the model given the training data. This article describes an application of neural networks with Bayesian training to the problem...
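
The "most probable parameters" reading has a simple concrete form. Below is a minimal sketch, assuming a toy one-hidden-layer regression network with made-up data and hyperparameters (it is not the authors' implementation): a Gaussian prior on the weights contributes an L2 weight-decay term to the data misfit, and gradient descent on the resulting negative log posterior gives an approximate MAP estimate of the weights.

```python
# Hedged sketch: Bayesian training of a tiny one-hidden-layer network viewed
# as MAP estimation.  A Gaussian prior on the weights contributes an L2
# (weight-decay) term, so minimizing the negative log posterior
#   beta/2 * sum(err^2) + alpha/2 * sum(weights^2)
# gives the "most probable parameters".  Data and hyperparameters are made up.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.2 * rng.standard_normal(X.shape)

H = 20                       # hidden units
beta = 1.0 / 0.2**2          # noise precision (likelihood term)
alpha = 1e-2                 # prior precision on the weights -> weight decay

W1 = 0.5 * rng.standard_normal((1, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1)); b2 = np.zeros(1)

lr = 5e-6                    # small step: gradients are summed over all data
for _ in range(20000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    err = h @ W2 + b2 - y
    g = beta * err                        # d(neg log likelihood)/d(prediction)
    gW2 = h.T @ g + alpha * W2            # backward pass + prior gradients
    gb2 = g.sum(0)
    gh = (g @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ gh + alpha * W1
    gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1        # gradient descent on the
    W2 -= lr * gW2; b2 -= lr * gb2        # negative log posterior

pred = np.tanh(X @ W1 + b1) @ W2 + b2
nlp = 0.5 * beta * np.sum((pred - y) ** 2) \
    + 0.5 * alpha * (np.sum(W1**2) + np.sum(W2**2))
print("negative log posterior after training (approximate MAP):", nlp)
```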

Nonquadratic stabilization of continuous-time systems in the Takagi-Sugeno form

Miguel Bernal, Petr Hušek, Vladimír Kučera (2006)

Kybernetika

This paper presents a relaxed scheme for controller synthesis of continuous-time systems in the Takagi-Sugeno form, based on non-quadratic Lyapunov functions and a non-PDC control law. The relaxations provided here allow the membership functions' derivatives to depend on the state and the input, as well as independence of initial conditions when input constraints are needed. Moreover, the controller synthesis is attainable via linear matrix inequalities, which are efficiently solved by commercially available...
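
For illustration of the general workflow only, the sketch below poses the classical quadratic-Lyapunov / PDC design for a two-rule Takagi-Sugeno model as an LMI feasibility problem in cvxpy and recovers the gains from the solution. It is not the paper's relaxed non-quadratic, non-PDC scheme, and the matrices A1, A2, B1, B2 are made up.

```python
# Hedged sketch: classical quadratic-Lyapunov / PDC synthesis for a two-rule
# T-S model, posed as LMIs and solved with cvxpy.  The system matrices below
# are invented for illustration; the paper's non-quadratic, non-PDC conditions
# involve further LMI variables but are solved in the same way.
import numpy as np
import cvxpy as cp

# Toy two-rule T-S model:  x_dot = sum_i h_i(z) (A_i x + B_i u)
A = [np.array([[0.0, 1.0], [-1.0, -0.5]]),
     np.array([[0.0, 1.0], [-2.0, -0.2]])]
B = [np.array([[0.0], [1.0]]),
     np.array([[0.0], [0.8]])]
n, m, r = 2, 1, 2

X = cp.Variable((n, n), symmetric=True)        # X = P^{-1}
M = [cp.Variable((m, n)) for _ in range(r)]    # M_i = F_i X
eps = 1e-4

def G(i, j):
    # (A_i - B_i F_j) X + X (A_i - B_i F_j)^T  with the substitution M_j = F_j X
    return A[i] @ X + X @ A[i].T - B[i] @ M[j] - M[j].T @ B[i].T

cons = [X >> eps * np.eye(n)]
for i in range(r):
    cons.append(G(i, i) << -eps * np.eye(n))                  # diagonal terms
    for j in range(i + 1, r):
        cons.append(G(i, j) + G(j, i) << -eps * np.eye(n))    # cross terms

prob = cp.Problem(cp.Minimize(0), cons)        # pure feasibility problem
prob.solve()
print("status:", prob.status)
if X.value is not None:
    F = [Mi.value @ np.linalg.inv(X.value) for Mi in M]       # F_i = M_i X^{-1}
    for i, Fi in enumerate(F, 1):
        print(f"F{i} =", Fi)
```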

Numerical realization of the Bayesian inversion accelerated using surrogate models

Simona Bérešová (2023)

Programs and Algorithms of Numerical Mathematics

Bayesian inversion is a natural approach to the solution of inverse problems based on uncertain observed data. The result of such an inverse problem is the posterior distribution of the unknown parameters. This paper deals with the numerical realization of Bayesian inversion, focusing on problems governed by computationally expensive forward models such as numerical solutions of partial differential equations. Samples from the posterior distribution are generated using the Markov chain Monte...
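
A common way to combine a cheap surrogate with exact posterior sampling is delayed-acceptance Metropolis-Hastings. The sketch below is a generic illustration under that assumption, not necessarily the algorithm of the paper: proposals are first screened with the surrogate, and the "expensive" forward model, faked here by a simple closed-form function, is evaluated only for proposals that pass the first stage, while the chain still targets the exact posterior.

```python
# Hedged sketch: delayed-acceptance Metropolis-Hastings with a cheap surrogate.
# The "expensive" forward model is faked by a closed-form function here; in the
# setting above it would be, e.g., a PDE solve.  Proposals are screened with the
# surrogate first, and the expensive model is evaluated only for proposals that
# survive the first stage, while the chain still targets the exact posterior.
import numpy as np

rng = np.random.default_rng(1)

def forward(u):       # stand-in for an expensive forward model
    return u + 0.3 * np.sin(2.0 * u)

def surrogate(u):     # cheap approximation of the forward model
    return u

sigma_obs, u_true = 0.1, 0.5
y_obs = forward(u_true) + sigma_obs * rng.standard_normal()

def log_post(u, model):
    # log prior N(0, 1) + log likelihood N(y_obs | model(u), sigma_obs^2)
    return -0.5 * u**2 - 0.5 * ((y_obs - model(u)) / sigma_obs) ** 2

u, n_full, samples = 0.0, 0, []
lp_full = log_post(u, forward)
lp_surr = log_post(u, surrogate)
for _ in range(20000):
    v = u + 0.3 * rng.standard_normal()        # symmetric random-walk proposal
    lp_surr_v = log_post(v, surrogate)
    # Stage 1: cheap screening with the surrogate posterior
    if np.log(rng.uniform()) < lp_surr_v - lp_surr:
        # Stage 2: correction step using the expensive model (evaluated rarely)
        lp_full_v = log_post(v, forward)
        n_full += 1
        if np.log(rng.uniform()) < (lp_full_v - lp_full) - (lp_surr_v - lp_surr):
            u, lp_full, lp_surr = v, lp_full_v, lp_surr_v
    samples.append(u)

samples = np.array(samples[2000:])             # drop burn-in
print("posterior mean:", samples.mean(), "| expensive-model calls:", n_full)
```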
