In the present paper, we consider nonlinear optimal control problems with constraints on the state of the system. We are interested in characterizing the value function without any controllability assumption. In the unconstrained case, it is possible to characterize the value function by means of a Hamilton-Jacobi-Bellman (HJB) equation. This equation describes the behavior of the value function along trajectories arriving at or starting from any position. In the constrained...
The paper deals with deterministic optimal control problems with state constraints and nonlinear dynamics. It is known that for such problems the value function is in general discontinuous, and its characterization by means of a Hamilton-Jacobi equation requires controllability assumptions involving the dynamics and the set of state constraints. Here, we first adopt the viability point of view and study the value function through its epigraph. Then, we prove that this epigraph can always be described...
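For orientation, in the unconstrained case referenced above, the HJB characterization typically takes the following standard form for a finite-horizon (Mayer) problem; the specific dynamics $f$, control set $U$, and terminal cost $\varphi$ below are generic placeholders, not notation taken from this paper:

```latex
% Value function v(t,x) of a Mayer problem with dynamics
% \dot{y}(s) = f(y(s), u(s)),  u(s) \in U,  and terminal cost \varphi.
% Formally, v satisfies the backward HJB equation
\begin{equation*}
  -\partial_t v(t,x) + \sup_{u \in U}
    \bigl( -f(x,u) \cdot \nabla_x v(t,x) \bigr) = 0,
  \qquad v(T,x) = \varphi(x),
\end{equation*}
% understood in the viscosity sense when v is not differentiable.
```

Under state constraints and without controllability assumptions, the value function may be discontinuous, which is what motivates the epigraph (viability) approach described in the abstract.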