### A characterization of stability and sensitivity properties for state-constrained optimal control

A deterministic affine-quadratic optimal control problem is considered. Due to the affine-quadratic structure of the problem, optimal controls exist under very mild conditions. Further, it is shown that under suitable assumptions the optimal control is unique, which leads to the differentiability of the value function. Therefore, the value function satisfies the corresponding Hamilton–Jacobi–Bellman equation in the classical sense, and the optimal control admits a state feedback representation. Under some additional...
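The state feedback representation can be illustrated on a scalar linear-quadratic special case (a simplified sketch; the dynamics and cost weights below are illustrative, not taken from the paper). Here the HJB equation reduces to a scalar algebraic Riccati equation whose positive root gives the quadratic value function and the feedback gain:

```python
import math

# Scalar LQ sketch: minimize ∫ (q x² + r u²) dt subject to ẋ = a x + b u.
# The ansatz V(x) = P x² turns the HJB equation into the algebraic Riccati
# equation q + 2aP - (b²/r) P² = 0, and the optimal feedback is u = -(bP/r) x.

def scalar_lqr(a, b, q, r):
    # Positive root of (b²/r) P² - 2aP - q = 0.
    P = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    K = b * P / r  # feedback gain: u = -K x
    return P, K

P, K = scalar_lqr(a=0.0, b=1.0, q=1.0, r=1.0)
# With these weights P = 1 and K = 1, i.e. the optimal feedback is u = -x.
```

For these particular weights one can check the Riccati equation by hand: with a = 0 and b = q = r = 1 it reads 1 - P² = 0, so P = 1 and u = -x.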

The problem considered is that of approximate minimisation in the Bolza problem of optimal control. Starting from Bellman's method of dynamic programming, we define the ε-value function as an approximation to the value function, which solves the Hamilton–Jacobi equation. The paper presents an approach that can be used to construct an algorithm for computing the values of the ε-value function at given points, thereby approximating the corresponding values of the value function.
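The flavour of such an approximation can be conveyed by a toy backward dynamic-programming recursion on a grid (a minimal sketch, not the paper's algorithm; the dynamics ẋ = u, running cost u², and terminal cost x² are hypothetical). Refining the grids and the time step tightens the approximation, in the spirit of an ε-value function converging to the true value function:

```python
import numpy as np

# Toy grid approximation of a value function by backward dynamic programming:
# minimize ∫₀¹ u² dt + x(1)² subject to ẋ = u, via Euler steps on a state grid.

xs = np.linspace(-2.0, 2.0, 81)     # state grid (odd size, so x = 0 is a node)
us = np.linspace(-1.0, 1.0, 21)     # control grid
N = 50
h = 1.0 / N                         # time step

V = xs ** 2                         # terminal cost x(1)²
for _ in range(N):
    # For each grid state, try every control and interpolate V at x + h·u.
    candidates = [u * u * h + np.interp(xs + h * u, xs, V) for u in us]
    V = np.min(candidates, axis=0)

# V now approximates the value function at t = 0 on the grid. At x = 0 the
# optimal control is u ≡ 0, so the approximate value there should be 0.
v0 = V[len(xs) // 2]
```

Since x = 0 is a grid node and all costs are nonnegative, the recursion returns exactly 0 there; at other states the grid spacing and the time step play the role of the ε.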

We consider a problem of maximizing the distance traveled by a material point in the presence of nonlinear friction, under bounded thrust and fuel expenditure. Using the maximum principle we obtain the form of the optimal control and establish conditions under which it contains a singular subarc. This appears to be the simplest mechanically meaningful problem in which singular subarcs arise in a nontrivial way.

We investigate the control of dynamical networks for the case of nodes that, although different, can be made passive by feedback. The so-called V-stability characterization allows for a simple set of stabilization conditions even in the case of nonidentical nodes. This is due to the fact that under the V-stability characterization the dynamical difference between the nodes of a network reduces to their different passivity degrees, that is, a measure of the feedback gain necessary to make the node...

Geometric control theory and Riemannian techniques are used to describe the reachable set at time t of left-invariant single-input control systems on semi-simple compact Lie groups and to estimate the minimal time needed to reach any point from the identity. This method provides an effective way to obtain upper and lower bounds on the minimal time needed to transfer a controlled quantum system with a drift from a given initial position to a given final position. The bounds include diameters...

Mathematical models for cancer treatment that include immunological activity are considered as an optimal control problem with an objective motivated by a separatrix of the uncontrolled system. For various growth models for the cancer cells, the existence and optimality of singular controls is investigated. For a Gompertzian growth function, a synthesis of controls that move the state into the region of attraction of a benign equilibrium point is developed.

The paper studies discrete/finite-difference approximations of optimal control problems governed by continuous-time dynamical systems with endpoint constraints. Finite-difference systems, considered as parametric control problems with decreasing discretization step, occupy an intermediate position between continuous-time and discrete-time (fixed-step) control processes and play a significant role in both qualitative and numerical aspects of optimal control. In this paper we derive an...
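The intermediate role of finite-difference systems can be illustrated on a scalar linear-quadratic example (a hypothetical sketch, not drawn from the paper): an Euler transcription of the continuous problem is itself a discrete-time control problem, solvable by a backward Riccati recursion, and its value approaches the continuous value as the step shrinks.

```python
# Euler (finite-difference) transcription of a scalar LQ problem:
# minimize ½ x(1)² + ½ ∫₀¹ u² dt subject to ẋ = u, x(0) = x0.
# The discrete dynamics x_{k+1} = x_k + h u_k give the value function
# ½ P_k x², with the backward Riccati recursion P_k = P_{k+1} / (1 + h P_{k+1})
# started from the terminal weight P_N = 1.

def discrete_value(x0, N):
    h = 1.0 / N
    P = 1.0                      # terminal weight P_N
    for _ in range(N):
        P = P / (1.0 + h * P)    # backward Riccati step
    return 0.5 * P * x0 * x0

# The continuous Riccati equation Ṗ = P², P(1) = 1, gives P(0) = 1/2, so the
# continuous optimal value for x0 = 1 is ¼. Since 1/P_k = 1/P_{k+1} + h, the
# recursion reproduces P_0 = 1/2 for every step size in this linear example.
v = discrete_value(1.0, 100)
```

In general (nonlinear dynamics, endpoint constraints) the discrete values only converge to the continuous one as the step decreases; the exactness here is a special feature of the linear-quadratic case.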