Displaying 81 – 100 of 143


Nash equilibrium payoffs for stochastic differential games with reflection

Qian Lin (2013)

ESAIM: Control, Optimisation and Calculus of Variations

In this paper, we investigate Nash equilibrium payoffs for nonzero-sum stochastic differential games with reflection. We obtain an existence theorem and a characterization theorem of Nash equilibrium payoffs for nonzero-sum stochastic differential games with nonlinear cost functionals defined by doubly controlled reflected backward stochastic differential equations.

Numerical procedure to approximate a singular optimal control problem

Silvia C. Di Marco, Roberto L.V. González (2007)

ESAIM: Mathematical Modelling and Numerical Analysis

In this work we deal with the numerical solution of a Hamilton-Jacobi-Bellman (HJB) equation with infinitely many solutions. To compute the maximal solution – the optimal cost of the original optimal control problem – we present a complete discrete method based on the use of some finite elements and penalization techniques.

On additive and multiplicative (controlled) Poisson equations

G. B. Di Masi, Ł. Stettner (2006)

Banach Center Publications

Assuming that a Markov process satisfies the minorization property, existence and properties of the solutions to the additive and multiplicative Poisson equations are studied using splitting techniques. The problem is then extended to the study of risk sensitive and risk neutral control problems and corresponding Bellman equations.
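For a finite-state chain the additive Poisson equation reduces to a linear system, which makes the object of study concrete. The sketch below is illustrative only (the chain `P` and reward `g` are made up; the paper treats general Markov processes under a minorization property): it solves $h - Ph = g - \pi(g)\mathbf{1}$, where $\pi$ is the stationary distribution, pinning down the solution with the normalization $\pi \cdot h = 0$.

```python
import numpy as np

# Minimal finite-state sketch of the additive Poisson equation
#     h - P h = g - pi(g) * 1,
# where pi is the stationary distribution of P and pi(g) = sum_i pi_i g_i.
# (Illustrative only: P and g are hypothetical; the paper handles general
# Markov processes satisfying a minorization property.)

def additive_poisson(P, g):
    n = P.shape[0]
    # stationary distribution: solve pi P = pi together with sum(pi) = 1
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.append(np.zeros(n), 1.0)
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    avg = pi @ g                          # long-run average reward pi(g)
    # (I - P) is singular on constants; add the gauge condition pi . h = 0
    M = np.vstack([np.eye(n) - P, pi[None, :]])
    rhs = np.append(g - avg, 0.0)
    h = np.linalg.lstsq(M, rhs, rcond=None)[0]
    return pi, avg, h
```

For a two-state chain with `P = [[0.9, 0.1], [0.2, 0.8]]` and `g = [1, 0]`, the stationary distribution is $(2/3,\, 1/3)$ and the average reward $\pi(g) = 2/3$; the solution $h$ then satisfies $(I-P)h = g - \pi(g)$ exactly.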

On asymptotic exit-time control problems lacking coercivity

M. Motta, C. Sartori (2014)

ESAIM: Control, Optimisation and Calculus of Variations

The research on a class of asymptotic exit-time problems with a vanishing Lagrangian, begun in [M. Motta and C. Sartori, Nonlinear Differ. Equ. Appl. (Springer, 2014)] for the compact control case, is extended here to the case of unbounded controls and data, including both coercive and non-coercive problems. We give sufficient conditions to have a well-posed notion of generalized control problem and obtain regularity, characterization and approximation results for the value function of the problem....

On ergodic problem for Hamilton-Jacobi-Isaacs equations

Piernicola Bettiol (2005)

ESAIM: Control, Optimisation and Calculus of Variations

We study the asymptotic behavior of $\lambda v_\lambda$ as $\lambda \to 0^+$, where $v_\lambda$ is the viscosity solution of the following Hamilton-Jacobi-Isaacs equation (infinite horizon case): $\lambda v_\lambda + H(x, Dv_\lambda) = 0$, with $H(x,p) := \min_{b \in B} \max_{a \in A} \{ -f(x,a,b) \cdot p - l(x,a,b) \}$. We discuss the cases in which the state of the system is required to stay in an $n$-dimensional torus (periodic boundary conditions), or in the closure of a bounded connected domain $\Omega \subset \mathbb{R}^n$ with sufficiently smooth boundary. As far as the latter is concerned, we treat both the case of the Neumann boundary conditions (reflection on the boundary)...
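In this ergodic setting the limit is typically characterized by a cell problem; the formal expansion below is the standard ansatz of ergodic Hamilton-Jacobi theory (it is not quoted from this abstract). Writing

\[
v_\lambda(x) = \frac{c}{\lambda} + \chi(x) + o(1)
\quad\Longrightarrow\quad
\lambda v_\lambda \to c,
\qquad
c + H\bigl(x, D\chi(x)\bigr) = 0,
\]

so the ergodic constant $c$ and the corrector $\chi$ solve the stationary equation $H(x, D\chi) = -c$, obtained by substituting the ansatz into $\lambda v_\lambda + H(x, Dv_\lambda) = 0$ and letting $\lambda \to 0^+$.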

On ergodic problem for Hamilton-Jacobi-Isaacs equations

Piernicola Bettiol (2010)

ESAIM: Control, Optimisation and Calculus of Variations

We study the asymptotic behavior of $\lambda v_\lambda$ as $\lambda \to 0^+$, where $v_\lambda$ is the viscosity solution of the following Hamilton-Jacobi-Isaacs equation (infinite horizon case): $\lambda v_\lambda + H(x, Dv_\lambda) = 0$, with $H(x,p) := \min_{b \in B} \max_{a \in A} \{ -f(x,a,b) \cdot p - l(x,a,b) \}$. We discuss the cases in which the state of the system is required to stay in an $n$-dimensional torus (periodic boundary conditions), or in the closure of a bounded connected domain $\Omega \subset \mathbb{R}^n$ with sufficiently smooth boundary. As far as the latter is concerned, we treat both the case of the Neumann boundary conditions (reflection on the...

On infinite horizon active fault diagnosis for a class of non-linear non-Gaussian systems

Ivo Punčochář, Miroslav Šimandl (2014)

International Journal of Applied Mathematics and Computer Science

The paper considers the problem of active fault diagnosis for discrete-time stochastic systems over an infinite time horizon. It is assumed that the switching between a fault-free and finitely many faulty conditions can be modelled by a finite-state Markov chain and the continuous dynamics of the observed system can be described for the fault-free and each faulty condition by non-linear non-Gaussian models with a fully observed continuous state. The design of an optimal active fault detector that...

On the convergence rate of approximation schemes for Hamilton-Jacobi-Bellman equations

Guy Barles, Espen Robstad Jakobsen (2002)

ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique

Systematically using a tricky idea of N.V. Krylov, we obtain general results on the rate of convergence of a certain class of monotone approximation schemes for stationary Hamilton-Jacobi-Bellman equations with variable coefficients. This result applies in particular to control schemes based on the dynamic programming principle and to finite difference schemes, although here we are not able to treat the most general case. General results have been obtained earlier by Krylov for finite difference...
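A control scheme based on the dynamic programming principle, of the monotone type this abstract refers to, can be sketched in one dimension as follows. Everything concrete here is an illustrative assumption, not the authors' scheme: dynamics $\dot x = a$ with $a \in \{-1,0,1\}$ on the 1-torus, a made-up running cost $l(x) = 1 + \sin 2\pi x$, and a semi-Lagrangian value iteration for $\lambda v + \max_a\{-a\,v' - l(x)\} = 0$.

```python
import numpy as np

# Semi-Lagrangian value iteration for a stationary HJB equation on the 1-torus:
#     lam * v + max_{a in {-1,0,1}} { -a * v'(x) - l(x) } = 0.
# The update v <- dt*l + (1 - lam*dt) * min_a v(x + a*dt) is monotone and a
# contraction with modulus (1 - lam*dt), so the iteration converges.
# (Dynamics, cost, and grid are hypothetical, for illustration only.)

def solve_hjb(lam, n=200, dt=0.005, tol=1e-10, max_iter=200_000):
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    l = 1.0 + np.sin(2.0 * np.pi * x)     # hypothetical running cost, in [0, 2]
    h = 1.0 / n
    shift = int(round(dt / h))            # grid points per step; dt is a multiple of h
    v = np.zeros(n)
    for _ in range(max_iter):
        # candidate continuation values for controls a = -1, 0, +1 (x' = a)
        cands = np.stack([np.roll(v, -s) for s in (-shift, 0, shift)])
        v_new = dt * l + (1.0 - lam * dt) * cands.min(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

Since $0 \le l \le 2$, the discounted value satisfies $0 \le \lambda v_\lambda \le 2$; and because the controller can park at the cost minimum $l(0.75) = 0$, the value vanishes there, which gives a quick sanity check on the scheme.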

On the convergence rate of approximation schemes for Hamilton-Jacobi-Bellman Equations

Guy Barles, Espen Robstad Jakobsen (2010)

ESAIM: Mathematical Modelling and Numerical Analysis

Systematically using a tricky idea of N.V. Krylov, we obtain general results on the rate of convergence of a certain class of monotone approximation schemes for stationary Hamilton-Jacobi-Bellman equations with variable coefficients. This result applies in particular to control schemes based on the dynamic programming principle and to finite difference schemes, although here we are not able to treat the most general case. General results have been obtained earlier by Krylov for finite...
