Strict convex regularizations, proximal points and augmented Lagrangians
Proximal Point Methods (PPM) can be traced back to the pioneering works of Moreau [16], Martinet [14, 15] and Rockafellar [19, 20], who used the square of the Euclidean norm as the regularization function. In this work, we study PPM in the context of optimization and derive a class of such methods which contains Rockafellar's result. We also present a less stringent criterion for accepting an approximate solution to the subproblems that arise in the inner loops of PPM. Moreover, we introduce a new...
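For orientation, here is a hedged sketch of the classical proximal point iteration with the squared Euclidean norm as regularization, as used in the works cited above; the symbols $f$, $x^k$ and $\lambda_k$ are generic placeholders rather than this paper's notation:
\[
x^{k+1} \in \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \left\{ f(x) + \frac{1}{2\lambda_k}\,\lVert x - x^k \rVert_2^{2} \right\}, \qquad \lambda_k > 0,
\]
where $f$ is the convex objective being minimized. Replacing the quadratic term by another strictly convex regularization is, presumably, what yields the broader class of methods containing Rockafellar's result.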
In the paper, sufficient optimality conditions for strict minima of a given order in constrained nonlinear mathematical programming problems involving (locally Lipschitz) convex functions of the same order are presented. Furthermore, the concept of a strict local minimizer of a given order is also used to state various duality results in the sense of Mond-Weir and in the sense of Wolfe for such nondifferentiable optimization problems.
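As a hedged reminder (with the order written here as $m$ and the constant as $\beta$, a notation choice of this summary rather than the paper's), a feasible point $\bar{x}$ is a strict local minimizer of order $m$ when
\[
\exists\, \beta > 0,\ \exists\ \text{a neighborhood } N(\bar{x}) \ \text{ such that } \ f(x) \;\ge\; f(\bar{x}) + \beta\, \lVert x - \bar{x} \rVert^{m} \quad \text{for all feasible } x \in N(\bar{x}).
\]
Conditions of this type quantify how sharply the objective grows away from $\bar{x}$.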
This paper deals with continuous-time Markov decision processes with unbounded transition rates under the strong average cost criterion. The state and action spaces are Borel spaces, and the costs are allowed to be unbounded from above and from below. Under mild conditions, we first prove that the finite-horizon optimal value function is a solution to the optimality equation for the case of uncountable state spaces and unbounded transition rates, and that there exists an optimal deterministic...
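For orientation, one common formalization (with generic notation $\pi$, $c$, $(x_t, a_t)$ not taken from this paper) defines the finite-horizon and long-run average costs as
\[
J_T(x,\pi) \;=\; \mathbb{E}_x^{\pi}\!\left[ \int_0^{T} c(x_t, a_t)\, dt \right],
\qquad
J(x,\pi) \;=\; \limsup_{T\to\infty} \frac{1}{T}\, J_T(x,\pi),
\]
and, in one usual reading of strong average optimality, asks for a policy $\pi^{*}$ with
\[
\limsup_{T\to\infty} \frac{1}{T}\left( J_T(x,\pi^{*}) - \inf_{\pi} J_T(x,\pi) \right) \;\le\; 0,
\]
which is why the finite-horizon optimal value function enters the analysis; the paper's precise definition may differ.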
We are concerned with two-level optimization problems called strong-weak Stackelberg problems, which generalize the class of Stackelberg problems in the strong and weak sense. To handle the fact that the considered two-level optimization problems may fail to have a solution, even under mild assumptions, we consider a regularization involving ε-approximate optimal solutions in the lower-level problems. We prove the existence of optimal solutions for such regularized problems and present some approximation...
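A hedged sketch of the regularization, written with generic upper-level objective $F$, lower-level objective $f$ and decision sets $X$, $Y$ (none of these symbols are taken from the paper): the $\varepsilon$-approximate lower-level solution set is
\[
S_{\varepsilon}(x) \;=\; \left\{\, y \in Y \;:\; f(x,y) \,\le\, \inf_{y' \in Y} f(x,y') + \varepsilon \,\right\}, \qquad \varepsilon > 0,
\]
so that the strong (optimistic) and weak (pessimistic) regularized problems read
\[
\min_{x \in X}\ \inf_{y \in S_{\varepsilon}(x)} F(x,y)
\qquad \text{and} \qquad
\min_{x \in X}\ \sup_{y \in S_{\varepsilon}(x)} F(x,y),
\]
respectively; the strong-weak formulation generalizes both by mixing the two attitudes toward the follower's choice. Since $S_{\varepsilon}(x)$ is typically better behaved than the exact lower-level solution set, existence of optimal solutions can be recovered for the regularized problems.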