
Nash equilibrium payoffs for stochastic differential games with reflection

Qian Lin (2013)

ESAIM: Control, Optimisation and Calculus of Variations

In this paper, we investigate Nash equilibrium payoffs for nonzero-sum stochastic differential games with reflection. We obtain an existence theorem and a characterization theorem of Nash equilibrium payoffs for such games, in which the nonlinear cost functionals are defined by doubly controlled reflected backward stochastic differential equations.

Nonconvex Duality and Semicontinuous Proximal Solutions of HJB Equation in Optimal Control

Mustapha Serhani, Nadia Raïssi (2009)

RAIRO - Operations Research

In this work, we study an optimal control problem governed by a differential inclusion. Without a Lipschitz condition on the set-valued map, it is very difficult to solve the control problem directly. Our aim is to obtain estimations of the minimal value α of the cost function of the control problem. To this end, we construct an intermediary dual problem leading to a weak duality result, and then, thanks to additional monotonicity assumptions on the proximal subdifferential, we give a more...
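As a schematic illustration only (the symbols J, D, u and q below are generic placeholders, not the authors' notation), weak duality means that every value of the intermediary dual problem bounds the minimal cost α from below, which is what makes the dual problem useful for estimating α:

$$
\sup_{q\ \text{dual-admissible}} D(q) \;\le\; \alpha \;=\; \inf_{u\ \text{admissible}} J(u).
$$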

Non-Trapping sets and Huygens Principle

Dario Benedetto, Emanuele Caglioti, Roberto Libero (2010)

ESAIM: Mathematical Modelling and Numerical Analysis

We consider the evolution of a set Λ ⊂ ℝ² according to the Huygens principle: i.e. the domain at time t > 0, Λt, is the set of the points whose distance from Λ is lower than t. We give some general results for this evolution, with particular care given to the behavior of the perimeter of the evolved set as a function of time. We define a class of sets (non-trapping sets) for which the perimeter is a continuous function of t, and we give an algorithm to approximate the evolution. Finally we restrict...
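The definition Λt = {x : dist(x, Λ) < t} lends itself to a simple grid approximation via a distance transform. The sketch below is only an illustration under assumed choices (a small disc for Λ, a uniform grid on the unit square, SciPy's distance_transform_edt), not the approximation algorithm proposed in the paper.

```python
# A minimal grid-based sketch of the Huygens evolution Λ_t = {x : dist(x, Λ) < t}.
# Illustration only: the initial set Λ (a disc) and the grid are made up.
import numpy as np
from scipy.ndimage import distance_transform_edt

h = 0.01                                          # grid spacing on [0, 1]^2
xx, yy = np.meshgrid(np.arange(0.0, 1.0, h),
                     np.arange(0.0, 1.0, h), indexing="ij")

# Illustrative initial set Λ: a disc of radius 0.05 centred at (0.3, 0.3).
inside = (xx - 0.3) ** 2 + (yy - 0.3) ** 2 <= 0.05 ** 2

# Euclidean distance of every grid point to Λ (zero on Λ itself).
dist = distance_transform_edt(~inside, sampling=h)

def evolved_set(t):
    """Λ_t: the points whose distance from Λ is lower than t."""
    return dist < t

def perimeter_estimate(t):
    """Crude perimeter estimate: boundary-pixel count times the spacing h."""
    lam_t = evolved_set(t)
    interior = (np.roll(lam_t, 1, 0) & np.roll(lam_t, -1, 0) &
                np.roll(lam_t, 1, 1) & np.roll(lam_t, -1, 1))
    return np.count_nonzero(lam_t & ~interior) * h

for t in (0.05, 0.15, 0.3):
    print(f"t = {t:.2f}: area ≈ {np.count_nonzero(evolved_set(t)) * h * h:.3f}, "
          f"perimeter ≈ {perimeter_estimate(t):.3f}")
```

On a grid the perimeter can only be estimated up to the discretization error, which is consistent with the paper's focus on when the perimeter behaves continuously in t.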

Numerical procedure to approximate a singular optimal control problem

Silvia C. Di Marco, Roberto L.V. González (2007)

ESAIM: Mathematical Modelling and Numerical Analysis

In this work we deal with the numerical solution of a Hamilton–Jacobi–Bellman (HJB) equation with infinitely many solutions. To compute the maximal solution – the optimal cost of the original optimal control problem – we present a fully discrete method based on finite element and penalization techniques.
