Displaying 41 – 60 of 481


On adaptive control for the continuous time-varying JLQG problem

Adam Czornik, Andrzej Świerniak (2005)

International Journal of Applied Mathematics and Computer Science

In this paper the adaptive control problem for a continuous-time, time-varying stochastic control system with jumps in parameters and a quadratic cost over an infinite horizon is investigated. It is assumed that the unknown coefficients of the system have limits as time tends to infinity and that the limiting (boundary) system is absolutely observable and stabilizable. Under these assumptions it is shown that the optimal value of the quadratic cost can be attained based only on the values of these limits, which, in turn, can be estimated...
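For orientation, the quadratic cost referred to here is typically of the long-run average form (generic notation, not quoted from the paper):

$$ J(u) = \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E} \int_0^T \Big( x(t)^{\top} Q\big(t, r(t)\big)\, x(t) + u(t)^{\top} R\big(t, r(t)\big)\, u(t) \Big)\, dt, $$

where x is the state, u the control, and r(t) the Markovian jump (regime) process whose value selects the weighting matrices.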

On adaptive control of a partially observed Markov chain

Giovanni Di Masi, Łukasz Stettner (1994)

Applicationes Mathematicae

A control problem for a partially observable Markov chain depending on a parameter with long-run average cost is studied. Using uniform ergodicity arguments it is shown that, for values of the parameter varying in a compact set, it is possible to consider only a finite number of nearly optimal controls based on the values of actually computable approximate filters. This leads to an algorithm that guarantees nearly self-optimizing properties without identifiability conditions. The algorithm is based...
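As a point of reference, the long-run average cost criterion mentioned in the abstract is usually written (in generic notation) as

$$ J(u) = \limsup_{n \to \infty} \frac{1}{n}\, \mathbb{E}_x^{u} \sum_{t=0}^{n-1} c(x_t, u_t), $$

with admissible controls u_t adapted to the observation filtration; in the algorithm described above they are functions of computable approximate filters rather than of the exact filter.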

On additive and multiplicative (controlled) Poisson equations

G. B. Di Masi, Ł. Stettner (2006)

Banach Center Publications

Assuming that a Markov process satisfies the minorization property, existence and properties of the solutions to the additive and multiplicative Poisson equations are studied using splitting techniques. The problem is then extended to the study of risk sensitive and risk neutral control problems and corresponding Bellman equations.
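For readers unfamiliar with the terminology, the additive and multiplicative Poisson equations for a transition kernel P and a cost function c have the generic forms (notation illustrative, not quoted from the paper)

$$ w(x) + \lambda = c(x) + \int w(y)\, P(x, dy), \qquad e^{w(x) + \lambda} = e^{c(x)} \int e^{w(y)}\, P(x, dy), $$

where the constant λ plays the role of the optimal long-run growth rate in the risk-neutral, respectively risk-sensitive, problem and w is the corresponding bias function.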

On an invariant design of feedbacks for bilinear control systems of second order

Vasiliy Belozyorov (2001)

International Journal of Applied Mathematics and Computer Science

The problem of linear feedback design for bilinear control systems guaranteeing their conditional closed-loop stability is considered. It is shown that this problem can be reduced to investigating the conditional stability of solutions of quadratic systems of differential equations depending on parameters of the control law. Sufficient conditions for stability in the cone of a homogeneous quadratic system are obtained. For second-order systems, invariant conditions of conditional asymptotic stability...
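As a minimal illustration of the reduction described above (single input, generic notation, not quoted from the paper), a bilinear system

$$ \dot{x} = A x + u\, B x, \qquad x \in \mathbb{R}^2, $$

under the linear feedback u = k^T x becomes the closed-loop quadratic system

$$ \dot{x} = A x + (k^{\top} x)\, B x, $$

so conditional (in-cone) stability of the closed loop is governed by a quadratic vector field depending on the feedback gain k.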

On application of Rothe's fixed point theorem to study the controllability of fractional semilinear systems with delays

Beata Sikora (2019)

Kybernetika

The paper presents finite-dimensional dynamical control systems described by semilinear fractional-order state equations with multiple delays in the control and a nonlinear function f. The relative controllability of the presented semilinear system is discussed, and Rothe’s fixed point theorem is applied to study it. A control that steers the semilinear system from an initial complete state to a final state at time t > 0 is presented. A numerical...
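A generic form of the state equation considered in such problems (the paper's exact assumptions and notation may differ) is

$$ {}^{C}\!D^{\alpha} x(t) = A x(t) + \sum_{i=0}^{M} B_i\, u(t - h_i) + f\big(t, x(t), u(t)\big), \qquad 0 < \alpha \le 1, $$

with a Caputo fractional derivative, delays 0 = h_0 < h_1 < ... < h_M in the control, and a complete initial state consisting of the initial condition together with the initial control segment.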

On approximation of stability radius for an infinite-dimensional feedback control system

Hideki Sano (2016)

Kybernetika

In this paper, we discuss the problem of approximating the stability radius appearing in the design procedure of finite-dimensional stabilizing controllers for an infinite-dimensional dynamical system. The calculation of the stability radius requires the H∞-norm of a transfer function whose realization is described by infinite-dimensional operators in a Hilbert space. From the computational point of view, we need to prepare a family of approximate finite-dimensional operators and then to calculate...
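In the standard finite-dimensional setting (stated here only for orientation; the infinite-dimensional case treated in the paper requires additional operator-theoretic care), the complex stability radius of an exponentially stable nominal system with perturbation structure (B, C) is the reciprocal of an H∞-norm,

$$ r_{\mathbb{C}}(A; B, C) = \Big( \sup_{\omega \in \mathbb{R}} \big\| C (i\omega I - A)^{-1} B \big\| \Big)^{-1} = \| G \|_{\infty}^{-1}, \qquad G(s) = C (sI - A)^{-1} B, $$

which is why the approximation problem above reduces to computing the H∞-norm of a transfer function realized by infinite-dimensional operators.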

On approximations of nonzero-sum uniformly continuous ergodic stochastic games

Andrzej Nowak (1999)

Applicationes Mathematicae

We consider a class of uniformly ergodic nonzero-sum stochastic games with the expected average payoff criterion, a separable metric state space and compact metric action spaces. We assume that the payoff and transition probability functions are uniformly continuous. Our aim is to prove the existence of stationary ε-equilibria for that class of ergodic stochastic games. This theorem extends to a much wider class of stochastic games a result proven recently by Bielecki [2].
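In generic notation (conventions vary between limsup and liminf), the expected average payoff of player i under a strategy profile π = (π^1, ..., π^N) and initial state x is

$$ J_i(x, \pi) = \liminf_{n \to \infty} \frac{1}{n}\, \mathbb{E}_x^{\pi} \sum_{t=0}^{n-1} r_i(x_t, a_t), $$

and a stationary profile π* is a stationary ε-equilibrium if, for every player i, every strategy σ^i and every initial state x, J_i(x, (σ^i, π*^{-i})) ≤ J_i(x, π*) + ε.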

On asymptotic exit-time control problems lacking coercivity

M. Motta, C. Sartori (2014)

ESAIM: Control, Optimisation and Calculus of Variations

The research on a class of asymptotic exit-time problems with a vanishing Lagrangian, begun in [M. Motta and C. Sartori, Nonlinear Differ. Equ. Appl. (Springer, 2014)] for the compact control case, is extended here to the case of unbounded controls and data, including both coercive and non-coercive problems. We give sufficient conditions to have a well-posed notion of generalized control problem and obtain regularity, characterization and approximation results for the value function of the problem....
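For orientation, the exit-time problems in question have value functions of the generic form (notation illustrative, not quoted from the paper)

$$ V(x) = \inf_{u(\cdot)} \Big\{ \int_0^{\tau_x(u)} l\big(x(t), u(t)\big)\, dt + g\big(x(\tau_x(u))\big) \Big\}, $$

where τ_x(u) is the (possibly infinite or only asymptotically attained) exit time of the trajectory from the domain; the Lagrangian l ≥ 0 is allowed to vanish and the controls are unbounded, so the usual coercivity of l in u may fail.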

On Carleman estimates for elliptic and parabolic operators. Applications to unique continuation and control of parabolic equations

Jérôme Le Rousseau, Gilles Lebeau (2012)

ESAIM: Control, Optimisation and Calculus of Variations

Local and global Carleman estimates play a central role in the study of some partial differential equations regarding questions such as unique continuation and controllability. We survey and prove such estimates in the case of elliptic and parabolic operators by means of semi-classical microlocal techniques. Optimality results for these estimates and some of their consequences are presented. We point out the connection of these optimality results to the local phase-space geometry after conjugation...
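A model example of the estimates surveyed (stated for a second-order elliptic operator P and a suitable weight function φ; the parabolic and semiclassical variants treated in the article are more refined) is

$$ \tau^{3} \int e^{2\tau\varphi} |u|^{2}\, dx + \tau \int e^{2\tau\varphi} |\nabla u|^{2}\, dx \le C \int e^{2\tau\varphi} |P u|^{2}\, dx, $$

holding for all smooth compactly supported functions u and all sufficiently large values of the parameter τ.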
