Multi-Grid Methods for Hamilton-Jacobi-Bellman Equations.
In this paper, we investigate Nash equilibrium payoffs for nonzero-sum stochastic differential games with reflection. We obtain an existence theorem and a characterization theorem for Nash equilibrium payoffs of nonzero-sum stochastic differential games whose nonlinear cost functionals are defined by doubly controlled reflected backward stochastic differential equations.
In this work we deal with the numerical solution of a Hamilton-Jacobi-Bellman (HJB) equation with infinitely many solutions. To compute the maximal solution, which is the optimal cost of the original optimal control problem, we present a fully discrete method based on finite element and penalization techniques.
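The abstract does not spell out the scheme, but the penalization idea can be illustrated on a much simpler model problem. The sketch below is hypothetical: it assumes a 1-D obstacle problem max(lambda*u - u'' - f, u - psi) = 0, replaces the constraint by the penalty term (1/eps)*max(u - psi, 0), and discretizes by finite differences rather than the paper's finite elements; the data f and psi are invented for illustration.

```python
import numpy as np

# Hypothetical model problem (not the paper's equation): maximal solution of
#   max(lambda*u - u'' - f, u - psi) = 0 on (0, 1), u(0) = u(1) = 0,
# approximated by the penalized equation
#   lambda*u - u'' - f + (1/eps) * max(u - psi, 0) = 0,
# discretized with centered finite differences and solved by semismooth Newton.

def solve_penalized(n=200, lam=1.0, eps=1e-6, tol=1e-10, max_iter=50):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.ones(n)                       # assumed source term
    psi = 0.05 * np.ones(n)              # assumed obstacle (active in the middle)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    u = np.zeros(n)
    for _ in range(max_iter):
        residual = lam * u + A @ u - f + np.maximum(u - psi, 0.0) / eps
        active = (u > psi).astype(float)                  # current contact set
        jacobian = lam * np.eye(n) + A + np.diag(active / eps)
        du = np.linalg.solve(jacobian, -residual)
        u += du
        if np.linalg.norm(du, np.inf) < tol:
            break
    return x, u
```

As eps decreases, the penalized solution approaches the constrained one; in practice the penalty parameter is tied to the mesh size, which is the spirit of the penalization technique named in the abstract.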
Assuming that a Markov process satisfies a minorization property, we study the existence and properties of solutions to the additive and multiplicative Poisson equations using splitting techniques. The analysis is then extended to risk-sensitive and risk-neutral control problems and the corresponding Bellman equations.
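For intuition, the additive Poisson equation h - Ph = c - eta*1 (with eta the long-run average cost) can be solved in closed form for a finite ergodic chain via the fundamental matrix. The toy chain and cost below are assumed purely for illustration; the paper itself treats general Markov processes under a minorization condition, where the splitting construction replaces this direct linear solve.

```python
import numpy as np

# Assumed two-state ergodic chain and running cost (illustration only).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
c = np.array([1.0, 3.0])
n = len(c)

# Stationary distribution: normalized left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
eta = pi @ c                                   # long-run average cost

# Additive Poisson equation  h - P h = c - eta * 1, normalized so pi @ h = 0,
# solved through the fundamental matrix (I - P + 1 pi^T)^{-1}.
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
h = Z @ (c - eta)

assert np.allclose(h - P @ h, c - eta)         # verifies the Poisson equation
```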
The research on a class of asymptotic exit-time problems with a vanishing Lagrangian, begun in [M. Motta and C. Sartori, Nonlinear Differ. Equ. Appl., Springer (2014)] for the compact control case, is extended here to the case of unbounded controls and data, including both coercive and non-coercive problems. We give sufficient conditions for a well-posed notion of generalized control problem and obtain regularity, characterization, and approximation results for the value function of the problem....
We study the asymptotic behavior of the viscosity solution of a Hamilton-Jacobi-Isaacs equation (infinite horizon case). We discuss the cases in which the state of the system is required to stay in an n-dimensional torus (periodic boundary conditions) or in the closure of a bounded connected domain with sufficiently smooth boundary. In the latter case, we treat both the case of Neumann boundary conditions (reflection on the boundary)...
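As a point of reference (the abstract itself omits the displayed formulas), a generic infinite-horizon discounted Hamilton-Jacobi-Isaacs equation of the kind studied in this literature, with the vanishing-discount limit as the asymptotic question, can be written as below; this generic form is an assumption for illustration, not the paper's exact statement.

\[
\lambda u_\lambda(x) + \min_{b \in B} \max_{a \in A} \bigl\{ -f(x,a,b) \cdot D u_\lambda(x) - \ell(x,a,b) \bigr\} = 0,
\qquad x \in \Omega \ \text{or}\ x \in \mathbb{T}^n,
\]

and one studies the behavior of \( \lambda u_\lambda \) as \( \lambda \to 0^+ \), under the periodic or Neumann (reflecting) boundary conditions described above.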
The paper considers the problem of active fault diagnosis for discrete-time stochastic systems over an infinite time horizon. It is assumed that the switching between a fault-free and finitely many faulty conditions can be modelled by a finite-state Markov chain and the continuous dynamics of the observed system can be described for the fault-free and each faulty condition by non-linear non-Gaussian models with a fully observed continuous state. The design of an optimal active fault detector that...
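To make the model class concrete, the sketch below simulates a system of the kind described: a finite-state Markov chain switches between a fault-free and a faulty condition, and a fully observed continuous state evolves under nonlinear, non-Gaussian dynamics that depend on the current condition. The transition probabilities, dynamics, and noise law are invented for illustration; the optimal active fault detector itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mode transition matrix: mode 0 = fault-free, mode 1 = faulty.
T = np.array([[0.99, 0.01],
              [0.00, 1.00]])            # the fault is absorbing in this toy model

def step(x, mode):
    # Assumed nonlinear dynamics; the faulty mode lowers the gain and adds
    # heavier-tailed (non-Gaussian) noise.
    gain = 0.9 if mode == 0 else 0.6
    noise = rng.standard_t(df=3) * (0.05 if mode == 0 else 0.2)
    return gain * np.sin(x) + noise

x, mode = 0.1, 0
trajectory = []
for k in range(200):
    mode = rng.choice(2, p=T[mode])     # Markov switching of the fault condition
    x = step(x, mode)                   # fully observed continuous state
    trajectory.append((k, mode, x))
```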
By systematically using a tricky idea of N.V. Krylov, we obtain general results on the rate of convergence of a certain class of monotone approximation schemes for stationary Hamilton-Jacobi-Bellman equations with variable coefficients. This result applies in particular to control schemes based on the dynamic programming principle and to finite difference schemes, although here we are not able to treat the most general case. General results have been obtained earlier by Krylov for finite difference...
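As a minimal, hypothetical instance of a monotone control scheme based on the dynamic programming principle (the class mentioned above, not the paper's actual scheme or its convergence analysis), the sketch below solves an assumed 1-D discounted stationary HJB equation by a semi-Lagrangian fixed-point iteration; the dynamics, running cost, and parameters are all invented.

```python
import numpy as np

# Assumed model problem (illustration only):
#   lam*u(x) + max_a { -f(x,a) u'(x) - l(x,a) } = 0  on [0, 1],
# with f(x, a) = a, a in {-1, 0, 1}, and l(x, a) = x**2 + 0.5*a**2.
# Monotone semi-Lagrangian scheme from the dynamic programming principle:
#   u(x) = min_a { (1 - lam*h) * u(x + h*f(x, a)) + h * l(x, a) },
# with piecewise-linear interpolation on a uniform grid (state clamped to [0, 1]).

def solve_hjb(n=101, lam=1.0, h=5e-3, tol=1e-8, max_iter=20000):
    x = np.linspace(0.0, 1.0, n)
    controls = (-1.0, 0.0, 1.0)
    u = np.zeros(n)
    for _ in range(max_iter):
        candidates = []
        for a in controls:
            x_next = np.clip(x + h * a, 0.0, 1.0)   # Euler step of the dynamics
            u_next = np.interp(x_next, x, u)        # monotone P1 interpolation
            candidates.append((1.0 - lam * h) * u_next + h * (x**2 + 0.5 * a**2))
        u_new = np.min(candidates, axis=0)          # minimization over controls
        diff = np.max(np.abs(u_new - u))
        u = u_new
        if diff < tol:
            break
    return x, u
```

Every coefficient multiplying a value of u is nonnegative and they sum to 1 - lam*h < 1, so the scheme is monotone and the iteration is a contraction; this structural property is what Krylov-type rate-of-convergence arguments exploit.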