On discrete control problems having a minmax type objective functional
We investigate the existence of a solution to the problem min φ(x) subject to G(x) = 0, where φ: X → ℝ, G: X → Y, and X, Y are Banach spaces. The question of existence is considered in a neighborhood of a point x₀ at which the Hessian of the Lagrange function is degenerate. An approximation for the distance from the solution x* to the initial point x₀ is obtained.
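For orientation, a minimal sketch of the Lagrange function and the degeneracy condition referred to above; the multiplier symbol y* ∈ Y* is an assumption of this sketch, not taken from the abstract:

```latex
% Lagrange function for  min \varphi(x)  s.t.  G(x) = 0   (y^* \in Y^* is an assumed multiplier)
L(x, y^*) = \varphi(x) + \langle y^*, G(x) \rangle ,
\qquad
\text{degeneracy at } x_0 :\quad \ker L''_{xx}(x_0, y^*) \neq \{0\}.
```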
In this paper, we show how optimization methods can be used efficiently to determine the parameters of an oscillatory model of handwriting. Because these methods have to be used in real-time applications, the optimization problems must be solved rapidly. Hence, we developed an original heuristic algorithm, named FHA. This code was validated by comparing it (accuracy and CPU time) with a multistart method based on the Trust Region Reflective algorithm.
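A minimal sketch of the kind of multistart Trust Region Reflective baseline mentioned above, using SciPy's `least_squares` with `method="trf"`. The damped-oscillation residual and the parameter names (a, omega, phi, lam) are illustrative assumptions; they are not the paper's FHA algorithm or its handwriting model.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative residuals for a damped-oscillation fit (assumed model, not the paper's).
def residuals(p, t, y):
    a, omega, phi, lam = p
    return a * np.exp(-lam * t) * np.sin(omega * t + phi) - y

def multistart_trf(t, y, n_starts=20, seed=0):
    """Multistart wrapper around SciPy's Trust Region Reflective least-squares solver."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        p0 = rng.uniform([0.1, 1.0, -np.pi, 0.0], [2.0, 20.0, np.pi, 2.0])
        sol = least_squares(residuals, p0, args=(t, y), method="trf")
        if best is None or sol.cost < best.cost:
            best = sol
    return best

# Usage with synthetic data
t = np.linspace(0.0, 2.0, 200)
y = 1.3 * np.exp(-0.5 * t) * np.sin(8.0 * t + 0.4)
y += 0.01 * np.random.default_rng(1).normal(size=t.size)
fit = multistart_trf(t, y)
print(fit.x, fit.cost)
```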
Henrici’s transformation is a generalization of Aitken’s Δ²-process to the vector case. It has been used for accelerating vector sequences. We use a modified version of Henrici’s transformation for solving some unconstrained nonlinear optimization problems. A convergence acceleration result is established and numerical examples are given.
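A sketch of one common statement of Henrici's transformation (the basic form, not the modified version of the paper): for iterates x₀, …, x_{k+1} in ℝ^k, it maps x₀ to x₀ − ΔX (Δ²X)⁻¹ Δx₀, where ΔX and Δ²X collect the first and second forward differences.

```python
import numpy as np

def henrici_transform(xs):
    """Basic Henrici transformation of a vector sequence.

    xs: array of shape (k + 2, k) holding iterates x_0, ..., x_{k+1} in R^k.
    Returns x_0 - dX @ inv(d2X) @ dx_0, the vector analogue of Aitken's Delta^2 step.
    """
    xs = np.asarray(xs, dtype=float)
    d = np.diff(xs, axis=0)    # first differences  Delta x_i,   shape (k+1, k)
    d2 = np.diff(d, axis=0)    # second differences Delta^2 x_i, shape (k, k)
    dX = d[:-1].T              # columns Delta x_0, ..., Delta x_{k-1}
    d2X = d2.T                 # columns Delta^2 x_0, ..., Delta^2 x_{k-1}
    return xs[0] - dX @ np.linalg.solve(d2X, d[0])

# Example: accelerate the linearly convergent fixed-point iteration x <- A x + b
A = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 2.0])
xs = [np.zeros(2)]
for _ in range(3):             # k + 2 = 4 iterates are needed for k = 2
    xs.append(A @ xs[-1] + b)
print(henrici_transform(xs))                   # close to the fixed point
print(np.linalg.solve(np.eye(2) - A, b))       # exact fixed point for comparison
```

For this linear iteration the transformation reproduces the fixed point exactly, which illustrates why it accelerates sequences with approximately linear error dynamics.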
In this paper, necessary optimality conditions are derived for the minimization of a locally Lipschitz objective subject to constraints given by a closed set and a set-valued map. No convexity requirements are imposed. The conditions are applied to a generalized mathematical programming problem and to an abstract finite-dimensional optimal control problem.
In this paper we present the motivation for using the Truncated Newton method in an algorithm that maximises a nonlinear function with additional maximin-like arguments subject to a network-like linear system of constraints. The special structure of the network (the so-termed replicated quasi-arborescence) allows us to introduce the new concept of independent superbasic sets and, consequently, to use second-order information about the objective function without excessive computational effort and storage.
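As a generic illustration of truncated-Newton usage (not the paper's specialized network algorithm or its replicated quasi-arborescence structure), a minimal sketch with SciPy's truncated Newton code `method="TNC"`; the objective, gradient, and bounds below are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative bound-constrained problem: maximise f by minimising -f with truncated Newton (TNC).
def neg_f(x):
    return 0.5 * np.sum((x - 1.0) ** 2) + 0.1 * np.sum(x ** 4)

def neg_f_grad(x):
    return (x - 1.0) + 0.4 * x ** 3

x0 = np.zeros(5)
res = minimize(neg_f, x0, jac=neg_f_grad, method="TNC",
               bounds=[(-2.0, 2.0)] * 5)
print(res.x, res.fun)
```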
Studying a critical value function in parametric nonlinear programming, we recall conditions guaranteeing smoothness of this function and derive second-order Taylor expansion formulas whose second-order terms take the form of certain generalized derivatives. Several specializations and applications are discussed. These results are understood as supplements to the well-developed theory of first- and second-order directional differentiability of the optimal value function in parametric optimization...
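To indicate the kind of formula meant, one standard second-order expansion for a function v with locally Lipschitz gradient uses a generalized Hessian ∂²v (the notation v, p̄, ∂²v is assumed here, not taken from the abstract): for some ζ on the segment [p̄, p] and some H ∈ ∂²v(ζ),

```latex
v(p) = v(\bar p) + \nabla v(\bar p)^{\top}(p - \bar p)
     + \tfrac12\,(p - \bar p)^{\top} H\,(p - \bar p),
\qquad H \in \partial^{2} v(\zeta),\quad \zeta \in [\bar p,\, p].
```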
The minimization of a nonlinear function with linear and nonlinear constraints and simple bounds can be performed by minimizing an augmented Lagrangian function that includes only the nonlinear constraints. This procedure is particularly interesting when the linear constraints are flow conservation equations, as efficient techniques exist for solving nonlinear network problems. It is then necessary to estimate the multipliers of the nonlinear constraints, and variable reduction techniques can be used to carry out...
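A minimal sketch of the partial augmented Lagrangian subproblem this refers to, in assumed notation (f objective, c nonlinear constraints, μ multiplier estimate, ρ penalty parameter, Ax = b the linear flow conservation equations, l ≤ x ≤ u the bounds):

```latex
\min_{x}\; L_{\rho}(x,\mu) \;=\; f(x) + \mu^{\top} c(x) + \tfrac{\rho}{2}\,\|c(x)\|^{2}
\qquad \text{subject to} \qquad A x = b,\quad l \le x \le u .
```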
The minimization of a nonlinear function subject to linear and nonlinear equality constraints and simple bounds can be performed by minimizing a partial augmented Lagrangian function, subject only to the linear constraints and simple bounds, using variable reduction techniques. The first-order procedure for estimating the multipliers of the nonlinear equality constraints through the Kuhn-Tucker conditions is analyzed and compared with that of Hestenes-Powell. There is a method which identifies those major...
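For reference, a hedged sketch of the two multiplier estimates being compared, in assumed notation (x_k current iterate, c nonlinear equality constraints, ρ penalty parameter, Z a basis for the null space of the linear constraints); the least-squares form of the first-order estimate is an assumption of this sketch, and the paper's precise formulas may differ:

```latex
% Hestenes--Powell update of the multipliers
\mu_{k+1} = \mu_k + \rho\, c(x_k),
\qquad
% first-order estimate from the Kuhn--Tucker conditions (assumed least-squares form)
\mu_k \in \arg\min_{\mu}\;
\bigl\| Z^{\top}\bigl(\nabla f(x_k) + \nabla c(x_k)\,\mu\bigr) \bigr\|_{2}.
```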