Displaying 461 – 480 of 862

Note on stability estimation in average Markov control processes

Jaime Martínez Sánchez, Elena Zaitseva (2015)

Kybernetika

We study the stability of average optimal control of general discrete-time Markov processes. Under certain ergodicity and Lipschitz conditions the stability index is bounded by a constant times the Prokhorov distance between the distributions of the random vectors determining the original and the perturbed control processes.
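
As a schematic restatement (notation assumed here, not quoted from the paper): if $J(\pi, P)$ denotes the long-run average cost of policy $\pi$ under transition law $P$, and $\tilde{\pi}$ is a policy that is average-optimal for the perturbed law $\tilde{P}$, then the stability index $\Delta := J(\tilde{\pi}, P) - \inf_{\pi} J(\pi, P)$ satisfies an estimate of the form $\Delta \le K\,\pi_{\mathrm{P}}(F, \tilde{F})$, where $\pi_{\mathrm{P}}$ is the Prokhorov metric, $F$ and $\tilde{F}$ are the distributions of the random vectors generating the original and perturbed processes, and $K$ depends only on the ergodicity and Lipschitz constants.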

Novel optimal recursive filter for state and fault estimation of linear stochastic systems with unknown disturbances

Karim Khémiri, Fayçal Ben Hmida, José Ragot, Moncef Gossa (2011)

International Journal of Applied Mathematics and Computer Science

This paper studies recursive optimal filtering as well as robust fault and state estimation for linear stochastic systems with unknown disturbances. It proposes a new recursive optimal filter structure based on a transformation of the original system. This transformation uses the singular value decomposition of the direct feedthrough matrix of the fault, which is assumed to be of arbitrary rank. The resulting filter is optimal in the sense of the unbiased minimum-variance criterion....
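
A minimal sketch of the kind of SVD-based measurement transformation described above, under assumed notation (measurement model y = Cx + Ff + v, with fault feedthrough matrix F of arbitrary rank); the optimal unbiased minimum-variance filter gains developed in the paper are not reproduced here:

```python
import numpy as np

def decouple_measurements(y, C, F, tol=1e-10):
    """Split y = C x + F f + v into a fault-affected part and a
    fault-free part via an SVD of the fault feedthrough matrix F.
    Illustrative sketch only, not the paper's filter."""
    U, s, Vt = np.linalg.svd(F)
    r = int(np.sum(s > tol))           # rank of F (arbitrary by assumption)
    T1 = U[:, :r].T                    # rows spanning the fault-affected subspace
    T2 = U[:, r:].T                    # rows annihilating F: T2 @ F is (numerically) zero
    y1 = T1 @ y                        # transformed measurements carrying the fault
    y2 = T2 @ y                        # transformed measurements free of the fault
    return y1, y2, T1 @ C, T2 @ C      # also return the transformed output matrices
```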

Numerical studies of parameter estimation techniques for nonlinear evolution equations

Azmy S. Ackleh, Robert R. Ferdinand, Simeon Reich (1998)

Kybernetika

We briefly discuss an abstract approximation framework and a convergence theory of parameter estimation for a general class of nonautonomous nonlinear evolution equations. A detailed discussion of this theory was given earlier by the authors in another paper. The application of the theory, together with numerical results indicating the feasibility of this general least squares approach, is presented in the context of quasilinear reaction-diffusion equations.
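
In the least squares framework referred to here (notation assumed, not quoted from the paper), one minimizes over an admissible parameter set $Q$ a functional of the form $J_N(q) = \sum_{i} |u_N(t_i; q) - z_i|^2$, where $u_N(\cdot; q)$ is a finite-dimensional (e.g. Galerkin) approximation of the solution of the evolution equation for parameter $q$ and the $z_i$ are observations at times $t_i$; the convergence theory ensures that minimizers of the approximating problems accumulate at solutions of the original estimation problem.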

On adaptive control of a partially observed Markov chain

Giovanni Di Masi, Łukasz Stettner (1994)

Applicationes Mathematicae

A control problem for a partially observable Markov chain depending on a parameter with long-run average cost is studied. Using uniform ergodicity arguments it is shown that, for values of the parameter varying in a compact set, it is possible to consider only a finite number of nearly optimal controls based on the values of actually computable approximate filters. This leads to an algorithm that guarantees nearly self-optimizing properties without identifiability conditions. The algorithm is based...
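
For orientation (standard formulation, assumed rather than quoted): the long-run average cost of a control policy $\pi$ is $J(\pi) = \limsup_{n \to \infty} \frac{1}{n} E_{\pi} \sum_{t=0}^{n-1} c(x_t, a_t)$, and an adaptive scheme is nearly self-optimizing if the average cost it actually incurs exceeds $\inf_{\pi} J(\pi)$, computed for the true but unknown parameter value, by no more than a prescribed $\varepsilon > 0$.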

On additive and multiplicative (controlled) Poisson equations

G. B. Di Masi, Ł. Stettner (2006)

Banach Center Publications

Assuming that a Markov process satisfies the minorization property, existence and properties of the solutions to the additive and multiplicative Poisson equations are studied using splitting techniques. The problem is then extended to the study of risk-sensitive and risk-neutral control problems and the corresponding Bellman equations.
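
In standard notation (assumed here, not quoted from the paper), the additive Poisson equation asks for a constant $\lambda$ and a function $w$ satisfying $w(x) + \lambda = c(x) + \int w(y)\,P(dy \mid x)$, while its multiplicative counterpart asks for $e^{w(x) + \lambda} = e^{c(x)} \int e^{w(y)}\,P(dy \mid x)$; the constant $\lambda$ plays the role of the optimal risk-neutral (respectively risk-sensitive) average cost in the associated Bellman equations.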

On approximations of nonzero-sum uniformly continuous ergodic stochastic games

Andrzej Nowak (1999)

Applicationes Mathematicae

We consider a class of uniformly ergodic nonzero-sum stochastic games with the expected average payoff criterion, a separable metric state space and compact metric action spaces. We assume that the payoff and transition probability functions are uniformly continuous. Our aim is to prove the existence of stationary ε-equilibria for that class of ergodic stochastic games. This theorem extends to a much wider class of stochastic games a result proven recently by Bielecki [2].
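
For reference (the standard definition, not quoted from the paper): a stationary strategy profile $\pi^{*} = (\pi_1^{*}, \dots, \pi_N^{*})$ is a stationary $\varepsilon$-equilibrium for the expected average payoff criterion if, for every player $i$, every initial state $x$ and every strategy $\pi_i$ of that player, $J_i^{x}(\pi_i, \pi_{-i}^{*}) \le J_i^{x}(\pi^{*}) + \varepsilon$.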
