Displaying 1 – 20 of 20

On additive and multiplicative (controlled) Poisson equations

G. B. Di Masi, Ł. Stettner (2006)

Banach Center Publications

Assuming that a Markov process satisfies the minorization property, existence and properties of the solutions to the additive and multiplicative Poisson equations are studied using splitting techniques. The problem is then extended to the study of risk sensitive and risk neutral control problems and corresponding Bellman equations.
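In the simplest finite-state analogue of this setting, the additive Poisson equation for a chain with transition matrix P, stationary distribution π, and cost vector f reads g + η·1 = f + Pg with η = πᵀf. A minimal numerical sketch of that finite-state case (not the minorization/splitting setting of the paper; the matrix and costs are illustrative):

```python
import numpy as np

# Illustrative two-state chain: solve the additive Poisson equation
#   g + eta * 1 = f + P @ g,   eta = pi @ f,
# where pi is the stationary distribution of the transition matrix P.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
f = np.array([1.0, 3.0])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

eta = pi @ f  # long-run average cost

# (I - P) is singular (rows sum to 0), so fix the additive constant by
# pinning g[0] = 0: replace one redundant row with that normalization.
A = np.eye(2) - P
A[0] = [1.0, 0.0]
b = f - eta
b[0] = 0.0
g = np.linalg.solve(A, b)

# Verify the Poisson equation holds componentwise.
residual = g + eta - (f + P @ g)
print(np.round(eta, 6), np.allclose(residual, 0.0))
```

The dropped equation is automatically satisfied because πᵀ(f − η·1) = 0 and πᵀ(I − P) = 0, so the pinned system is consistent.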

On asymptotic exit-time control problems lacking coercivity

M. Motta, C. Sartori (2014)

ESAIM: Control, Optimisation and Calculus of Variations

The research on a class of asymptotic exit-time problems with a vanishing Lagrangian, begun in [M. Motta and C. Sartori, Nonlinear Differ. Equ. Appl. (2014)] for the compact control case, is extended here to the case of unbounded controls and data, including both coercive and non-coercive problems. We give sufficient conditions for a well-posed notion of generalized control problem and obtain regularity, characterization, and approximation results for the value function of the problem....

On ergodic problem for Hamilton-Jacobi-Isaacs equations

Piernicola Bettiol (2005)

ESAIM: Control, Optimisation and Calculus of Variations

We study the asymptotic behavior of $\lambda v_\lambda$ as $\lambda \to 0^+$, where $v_\lambda$ is the viscosity solution of the following Hamilton-Jacobi-Isaacs equation (infinite horizon case): $\lambda v_\lambda + H(x, Dv_\lambda) = 0$, with $H(x,p) := \min_{b \in B} \max_{a \in A} \{ -f(x,a,b) \cdot p - l(x,a,b) \}$. We discuss the cases in which the state of the system is required to stay in an $n$-dimensional torus (periodic boundary conditions), or in the closure of a bounded connected domain $\Omega \subset \mathbb{R}^n$ with sufficiently smooth boundary. As far as the latter is concerned, we treat both the case of the Neumann boundary conditions (reflection on the boundary)...
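The vanishing-discount limit in question can be written as follows (a sketch of the standard statement of such results, not a claim about this paper's exact hypotheses):

```latex
% Hamilton-Jacobi-Isaacs equation with discount rate \lambda:
\[
  \lambda v_\lambda + H(x, Dv_\lambda) = 0, \qquad
  H(x,p) := \min_{b\in B}\max_{a\in A}
  \bigl\{ -f(x,a,b)\cdot p - l(x,a,b) \bigr\}.
\]
% Ergodic limit: the discounted values lose their dependence on the state,
\[
  \lambda v_\lambda(x) \;\longrightarrow\; c
  \qquad \text{as } \lambda \to 0^+,
\]
% where the ergodic constant c is characterized by the stationary
% (cell) problem H(x, Dw) = -c admitting a viscosity solution w.
```

The sign in the cell problem follows from substituting $v_\lambda = c/\lambda + w_\lambda$ into the discounted equation.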

On infinite horizon active fault diagnosis for a class of non-linear non-Gaussian systems

Ivo Punčochář, Miroslav Šimandl (2014)

International Journal of Applied Mathematics and Computer Science

The paper considers the problem of active fault diagnosis for discrete-time stochastic systems over an infinite time horizon. It is assumed that the switching between a fault-free and finitely many faulty conditions can be modelled by a finite-state Markov chain and the continuous dynamics of the observed system can be described for the fault-free and each faulty condition by non-linear non-Gaussian models with a fully observed continuous state. The design of an optimal active fault detector that...

On the convergence rate of approximation schemes for Hamilton-Jacobi-Bellman equations

Guy Barles, Espen Robstad Jakobsen (2002)

ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique

By systematically using a tricky idea of N.V. Krylov, we obtain general results on the rate of convergence of a certain class of monotone approximation schemes for stationary Hamilton-Jacobi-Bellman equations with variable coefficients. This result applies in particular to control schemes based on the dynamic programming principle and to finite difference schemes, although here we are not able to treat the most general case. General results were obtained earlier by Krylov for finite difference...
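A toy instance of a monotone approximation scheme of this kind: an upwind finite-difference discretization of the stationary equation λu + |u'| = l(x) on the 1D torus, driven to its fixed point by a monotone pseudo-time iteration. The grid, data, and iteration are all of our choosing, not the paper's:

```python
import numpy as np

# Stationary HJB-type equation  lam*u + |u'| = l(x)  on the torus [0, 1).
n, lam = 200, 1.0
h = 1.0 / n
x = np.arange(n) * h
l = 1.0 + 0.5 * np.sin(2 * np.pi * x)

def H_h(u):
    # Monotone upwind discretization of |u'|: nonincreasing in each
    # neighboring value, which is the key structural assumption.
    dm = (u - np.roll(u, 1)) / h     # backward difference
    dp = (np.roll(u, -1) - u) / h    # forward difference
    return np.maximum.reduce([dm, -dp, np.zeros_like(u)])

# Explicit fixed-point (pseudo-time) iteration; dt is chosen so the update
# map is monotone and contracting: dt * (lam + 2/h) <= 1.
dt = 1.0 / (lam + 2.0 / h)
u = np.zeros(n)
for _ in range(20000):
    u_new = u - dt * (lam * u + H_h(u) - l)
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

# At the fixed point the discrete scheme is satisfied exactly.
print(np.max(np.abs(lam * u + H_h(u) - l)) < 1e-8)
```

The convergence-rate question the paper addresses is how fast such a fixed point $u_h$ approaches the viscosity solution as $h \to 0$; the sketch above only produces $u_h$ itself.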

Optimal control of a stochastic heat equation with boundary-noise and boundary-control

Arnaud Debussche, Marco Fuhrman, Gianmario Tessitore (2007)

ESAIM: Control, Optimisation and Calculus of Variations

We are concerned with the optimal control of a nonlinear stochastic heat equation on a bounded real interval with Neumann boundary conditions. The specificity here is that both the control and the noise act on the boundary. We start by reformulating the state equation as an infinite dimensional stochastic evolution equation. The first main result of the paper is the proof of existence and uniqueness of a mild solution for the corresponding Hamilton-Jacobi-Bellman (HJB) equation. The C1 regularity...
