On ε-optimal controls for state constraint problems
We are concerned with the optimal control of a nonlinear stochastic heat equation on a bounded real interval with Neumann boundary conditions. The distinctive feature here is that both the control and the noise act on the boundary. We start by reformulating the state equation as an infinite-dimensional stochastic evolution equation. The first main result of the paper is the proof of existence and uniqueness of a mild solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation. The C1 regularity...
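A minimal sketch of the kind of state equation described, assuming the spatial domain (0,1) and illustrative data f, u_0, u_1, W_0, W_1 that are not taken from the abstract:

\[
\begin{aligned}
&\partial_t y(t,\xi) = \partial_{\xi\xi} y(t,\xi) + f\bigl(y(t,\xi)\bigr), && t\in(0,T],\ \xi\in(0,1),\\
&\partial_\xi y(t,0) = u_0(t) + \dot W_0(t), \quad \partial_\xi y(t,1) = u_1(t) + \dot W_1(t), && \text{(control and noise on the boundary)},\\
&y(0,\xi) = x(\xi), && \xi\in(0,1).
\end{aligned}
\]

Rewriting such a problem as an abstract evolution equation in a Hilbert space typically moves the Neumann boundary terms into the state equation via a Neumann (Dirichlet-to-Neumann type) map; the exact setup used in the paper is not specified in the abstract.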
We prove that Perron's method and the Barles-Perthame method of half-relaxed limits work for the so-called B-continuous viscosity solutions of a large class of fully nonlinear unbounded partial differential equations in Hilbert spaces. Perron's method extends the existence of B-continuous viscosity solutions to many new equations that are not of Bellman type. The method of half-relaxed limits allows limiting operations with viscosity solutions without any a priori estimates. Possible applications...
We consider an optimal control problem of Mayer type and prove that, under suitable conditions on the system, the value function is differentiable along optimal trajectories, except possibly at the endpoints. We provide counterexamples to show that this property may fail to hold if some of our conditions are violated. We then apply our regularity result to derive optimality conditions for the trajectories of the system.
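For reference, a Mayer-type problem and its value function can be written as follows (the notation f, \varphi, U, \mathcal{U} is generic, not the paper's):

\[
V(t,x) \;=\; \inf_{u(\cdot)\in\mathcal{U}} \varphi\bigl(x(T)\bigr),
\qquad \dot x(s) = f\bigl(x(s),u(s)\bigr),\quad x(t)=x,\quad u(s)\in U,
\]

so that only the terminal state is penalized; the regularity result concerns the differentiability of V along trajectories x(\cdot) that attain the infimum.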
We study the Hamilton-Jacobi equation of the minimal time function in a domain which contains the target set. We generalize the results of Clarke and Nour [J. Convex Anal., 2004], where the target set is taken to be a single point. As an application, we give necessary and sufficient conditions for the existence of solutions to eikonal equations.
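As a point of reference, for the simplest dynamics \(\dot x = u\) with \(|u|\le 1\) the minimal time function T to a closed target S reduces to the distance to S and satisfies the eikonal equation (this special case is illustrative, not the general setting of the paper):

\[
|\nabla T(x)| = 1 \ \text{ outside } S, \qquad T = 0 \ \text{ on } S.
\]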
This paper studies a class of discrete-time discounted semi-Markov control models on Borel spaces. We assume possibly unbounded costs and a discount factor of non-stationary exponential form that depends on a rate, called the discount rate. Given an initial discount rate, its evolution in subsequent steps depends on both the previous discount rate and the sojourn time of the system at the current state. The new results provided here are the existence and the approximation of optimal policies for...
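A hedged sketch of the kind of performance criterion involved, with illustrative notation (states x_n, actions a_n, sojourn times \delta_n, discount rates \alpha_n, one-stage cost c) that is assumed rather than taken from the abstract:

\[
V(x,\alpha_0) \;=\; \inf_{\pi}\ \mathbb{E}^{\pi}_{x}\!\left[\,\sum_{n=0}^{\infty} \exp\!\Big(-\sum_{k=0}^{n-1}\alpha_k\,\delta_k\Big)\, c(x_n,a_n)\right],
\]

where the next rate \(\alpha_{n+1}\) is updated from the current rate \(\alpha_n\) and the sojourn time \(\delta_n\), which is what makes the exponential discounting non-stationary.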
A production-inventory problem with limited backlogging and stockouts is described in a discrete-time, finite-horizon stochastic optimal control framework. It is proved by dynamic programming methods that an optimal policy is of (s,S) type; that is, in every period the policy is completely determined by two fixed levels of the stochastic inventory process under consideration.
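In the usual notation (assumed here for illustration), an (s,S)-type policy prescribes the order quantity as a function of the inventory level x at the start of a period:

\[
a(x) \;=\;
\begin{cases}
S - x, & x < s,\\
0, & x \ge s,
\end{cases}
\qquad s \le S,
\]

i.e., order up to the level S whenever the inventory falls below the reorder point s, and order nothing otherwise.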
A zero-sum stochastic differential game problem on an infinite horizon with continuous and impulse controls is studied. We obtain the existence of the value of the game and characterize it as the unique viscosity solution of the associated system of quasi-variational inequalities. We also obtain a verification theorem which provides an optimal strategy for the game.