We consider a model for the control of a linear network flow system with unknown but bounded demand and polytopic bounds on controlled flows. We are interested in the problem of finding a suitable objective function that makes the policy represented by the so-called linear saturated feedback control robustly optimal. We regard the problem as a suitable differential game with switching cost and study it in the framework of the viscosity solutions theory for Bellman and Isaacs equations.
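As a concrete, purely hypothetical illustration of a linear saturated feedback (the network, gain, bounds, and demand model below are assumptions, not the paper's): buffer levels x evolve as x' = d - u, the demand d is unknown but bounded, and the controlled flow u is a linear gain clipped to its box (polytopic) bounds.

```python
import numpy as np

def saturated_feedback(x, K, u_min, u_max):
    """u = sat(K x): linear feedback clipped to the admissible flow box."""
    return np.clip(K @ x, u_min, u_max)

K = 2.0 * np.eye(2)                       # stabilizing gain (assumption)
u_min, u_max = np.zeros(2), np.ones(2)    # box bounds on controlled flows
rng = np.random.default_rng(0)

x = np.array([0.8, 0.5])                  # initial buffer levels
dt = 0.01
for _ in range(2000):
    d = rng.uniform(0.0, 0.3, size=2)     # bounded, unknown demand
    u = saturated_feedback(x, K, u_min, u_max)
    x = x + dt * (d - u)                  # forward-Euler step of x' = d - u
```

In this sketch the buffer levels stay bounded and nonnegative under the saturated policy, which is the qualitative behavior the robust-optimality question is about.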
Assuming that a Markov process satisfies the minorization property, existence and properties of the solutions to the additive and multiplicative Poisson equations are studied using splitting techniques. The problem is then extended to the study of risk sensitive and risk neutral control problems and corresponding Bellman equations.
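For a finite, irreducible chain the minorization property is automatic, and the additive Poisson equation can be solved directly; the following sketch (a toy illustration, not the paper's general splitting construction) solves h - Ph = f - (πf)1 for a hypothetical transition matrix P and cost f.

```python
import numpy as np

# Toy chain (assumption): irreducible 3-state transition matrix and cost.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
f = np.array([1.0, 0.0, 2.0])

# Invariant distribution pi: left Perron eigenvector of P, normalized.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
avg = pi @ f                        # long-run average cost pi(f)

# Additive Poisson equation (I - P) h = f - avg, solution unique up to an
# additive constant; pin h[0] = 0 by replacing the first (redundant) row.
n = len(f)
A = np.eye(n) - P
A[0] = 0.0
A[0, 0] = 1.0                       # enforce h[0] = 0
b = f - avg
b[0] = 0.0
h = np.linalg.solve(A, b)

residual = (np.eye(n) - P) @ h - (f - avg)   # ~0 in every component
```

The system (I - P)h = f - avg is consistent precisely because π(f - avg) = 0, which is where irreducibility (and, in the general state-space setting, the minorization/splitting argument) enters.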
The research on a class of asymptotic exit-time problems with a vanishing Lagrangian, begun in [M. Motta and C. Sartori, Nonlinear Differ. Equ. Appl., Springer (2014)] for the compact control case, is extended here to the case of unbounded controls and data, including both coercive and non-coercive problems. We give sufficient conditions to have a well-posed notion of generalized control problem and obtain regularity, characterization and approximation results for the value function of the problem...
We study the asymptotic behavior of λu_λ as λ → 0+, where u_λ is the viscosity solution of a Hamilton-Jacobi-Isaacs equation of discounted, infinite horizon type. We discuss the cases in which the state of the system is required to stay either in an n-dimensional torus (periodic boundary conditions) or in the closure of a bounded connected domain with sufficiently smooth boundary. As far as the latter is concerned, we treat both the case of Neumann boundary conditions (reflection on the boundary)...
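The vanishing-discount limit can be seen already in a toy finite-state analogue (an assumption for illustration, not the paper's continuous setting): for a discounted control problem u_λ(x) = min_a { cost(x,a) + (1-λ) u_λ(next(x,a)) }, the scaled values λu_λ(x) converge, as λ → 0+, to one ergodic constant independent of the state x.

```python
import numpy as np

# Hypothetical 3-state, 2-action deterministic control problem.
costs = np.array([[1.0, 3.0],
                  [2.0, 0.5],
                  [4.0, 1.0]])          # cost(x, a)
nxt = np.array([[1, 2],
                [2, 0],
                [0, 1]])               # transitions next(x, a)

def discounted_value(lam, iters=20000):
    """Value iteration for the discounted Bellman equation (a contraction)."""
    u = np.zeros(3)
    for _ in range(iters):
        u = np.min(costs + (1.0 - lam) * u[nxt], axis=1)
    return u

for lam in (0.1, 0.01, 0.001):
    u = discounted_value(lam)
    print(lam, lam * u)   # all components approach the same ergodic constant
```

Here the limit of λu_λ is the minimal mean cost over cycles (the cycle 0 → 1 → 0 with mean (1 + 0.5)/2 = 0.75), the finite-state counterpart of the ergodic constant in the Hamilton-Jacobi-Isaacs limit.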
The paper considers the problem of active fault diagnosis for discrete-time stochastic systems over an infinite time horizon. It is assumed that the switching between a fault-free and finitely many faulty conditions can be modelled by a finite-state Markov chain and the continuous dynamics of the observed system can be described for the fault-free and each faulty condition by non-linear non-Gaussian models with a fully observed continuous state. The design of an optimal active fault detector that...
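The mode-estimation ingredient of such a detector can be sketched with a discrete Bayes filter over the fault modes; everything below (the two-mode chain, the scalar linear-Gaussian per-mode dynamics, all constants) is a hypothetical stand-in for the paper's non-linear non-Gaussian models, chosen only to show the belief update.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mode 0 = fault-free, mode 1 = faulty; switching is a Markov chain with the
# fault absorbing (assumption). Observed state: x_{k+1} = a[mode]*x_k + 0.5 + noise.
T = np.array([[0.99, 0.01],
              [0.00, 1.00]])
a = np.array([0.5, 0.9])
sigma = 0.1

def gauss(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Simulate a trajectory whose mode switches to faulty at step 50.
modes = np.array([0] * 50 + [1] * 50)
x = np.zeros(101)
for k in range(100):
    x[k + 1] = a[modes[k]] * x[k] + 0.5 + sigma * rng.standard_normal()

# Discrete Bayes filter: propagate the mode belief through T, then reweight
# by the likelihood of the observed transition x_k -> x_{k+1} under each mode.
belief = np.array([1.0, 0.0])
beliefs = []
for k in range(100):
    pred = T.T @ belief
    lik = gauss(x[k + 1], a * x[k] + 0.5, sigma)
    belief = pred * lik
    belief = belief / belief.sum()
    beliefs.append(belief.copy())
```

After the switch the posterior mass moves onto the faulty mode; the paper's active detector additionally chooses inputs to accelerate exactly this discrimination.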
Using systematically a tricky idea of N.V. Krylov, we obtain general results on the rate of convergence of a certain class of monotone approximation schemes for stationary Hamilton-Jacobi-Bellman equations with variable coefficients. This result applies in particular to control schemes based on the dynamic programming principle and to finite difference schemes, although here we are not able to treat the most general case. General results have been obtained earlier by Krylov for finite difference...
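A minimal instance of such a control scheme (an illustration under assumed data, not the class treated in the paper) is the semi-Lagrangian discretization of a stationary discounted HJB equation on the 1-D torus: u(x) = min_a { dt·L(x,a) + (1 - λ·dt)·u(x + dt·f(x,a)) } with periodic linear interpolation. The scheme is monotone because each update is a nonnegative combination of grid values, which is the structural property the convergence-rate analysis exploits.

```python
import numpy as np

# Grid on the torus [0,1), discount lam, time step dt; f(x,a) = a and the
# running cost L are hypothetical choices for the sketch.
N, lam, dt = 200, 1.0, 0.01
xs = np.linspace(0.0, 1.0, N, endpoint=False)
actions = np.array([-1.0, 0.0, 1.0])

def interp_periodic(u, y):
    """Linear interpolation of grid values u at points y on the torus."""
    y = np.mod(y, 1.0) * N
    i = np.floor(y).astype(int)
    t = y - i
    return (1 - t) * u[i % N] + t * u[(i + 1) % N]

L = lambda x, a: np.sin(2 * np.pi * x) ** 2 + 0.5 * a ** 2   # running cost

u = np.zeros(N)
for _ in range(5000):        # value iteration; contraction factor 1 - lam*dt
    vals = [dt * L(xs, a) + (1 - lam * dt) * interp_periodic(u, xs + dt * a)
            for a in actions]
    u = np.min(vals, axis=0)
# u now approximates the solution of the discretized stationary HJB equation.
```

Value iteration converges geometrically here because the dynamic-programming operator is a contraction with factor 1 - λ·dt.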