On optimal methods in numerical analysis
Round-off error analysis of the gradient method.
In this paper we present the motivation for using the Truncated Newton method in an algorithm that maximises a non-linear function with additional maximin-like arguments subject to a network-like linear system of constraints. The special structure of the network (the so-termed replicated quasi-arborescence) allows us to introduce the new concept of independent superbasic sets and, hence, to use second-order information about the objective function without excessive computational effort or storage.
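For context, a truncated Newton step approximately solves the Newton system with a few conjugate-gradient iterations, using only Hessian-vector products rather than an explicit Hessian. The sketch below is a generic minimization variant with illustrative function names and tolerances; it is not the paper's network-structured algorithm (for maximization one would apply it to the negated objective).

```python
import numpy as np

def hess_vec(grad, x, v, eps=1e-6):
    """Approximate the Hessian-vector product H v by central differences of the gradient."""
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

def truncated_newton_step(grad, x, cg_iters=20, tol=1e-8):
    """Approximately solve H d = -g with a few CG iterations (the 'truncation')."""
    g = grad(x)
    d = np.zeros_like(x)
    r = -g.copy()            # residual of H d + g = 0 at d = 0
    p = r.copy()
    rs_old = r @ r
    for _ in range(cg_iters):
        Hp = hess_vec(grad, x, p)
        curv = p @ Hp
        if curv <= 0:        # negative curvature: stop and keep the current direction
            break
        alpha = rs_old / curv
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return d if d.any() else -g   # fall back to steepest descent
```

The point of the truncation is that a handful of CG iterations, each costing one Hessian-vector product, is often enough to get a good search direction without ever forming or storing the Hessian.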
For a linear complementarity problem with an inconsistent system of constraints, a notion of quasi-solution of Tschebyshev type is introduced. It is shown that this solution is obtained automatically by Lemke's method if the constraint matrix of the original problem is copositive plus or belongs to the intersection of the matrix classes P₀ and Q₀.
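As one way to make a Tschebyshev-type relaxation concrete: the sketch below computes the smallest uniform shift t ≥ 0 that restores feasibility of the LCP constraints w = Mz + q ≥ 0, z ≥ 0, via a linear program. This formulation is an assumption for illustration; it ignores the complementarity condition and is not Lemke's method itself.

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_relaxation(M, q):
    """Smallest uniform shift t >= 0 making M z + q + t*e >= 0, z >= 0 feasible.

    Illustrates only a Tschebyshev-type relaxation of the (possibly
    inconsistent) feasibility system; complementarity is not enforced.
    A consistent system yields t = 0.
    """
    n = len(q)
    c = np.zeros(n + 1)
    c[-1] = 1.0                                  # minimize the shift t
    A_ub = np.hstack([-M, -np.ones((n, 1))])     # -M z - t e <= q
    bounds = [(0, None)] * (n + 1)               # z >= 0, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=q, bounds=bounds)
    return res.x[:n], res.x[-1]                  # candidate z and minimal shift t
```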
Studying a critical value function φ in parametric nonlinear programming, we recall conditions guaranteeing that φ is a C^{1,1} function and derive second-order Taylor expansion formulas including second-order terms in the form of certain generalized derivatives of Dφ. Several specializations and applications are discussed. These results are understood as supplements to the well-developed theory of first- and second-order directional differentiability of the optimal value function in parametric optimization.
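For orientation, a second-order expansion of a C^{1,1} function built from a generalized derivative of its gradient typically has the following shape; this is a generic sketch with assumed notation, not the paper's precise formulas or regularity assumptions.

```latex
% \varphi is the critical value function and D\varphi its gradient; since
% D\varphi is only Lipschitz, the second-order term is drawn from a
% generalized (set-valued) derivative \partial(D\varphi) of the gradient.
\varphi(p + d) \;=\; \varphi(p) + D\varphi(p)\, d
  \;+\; \tfrac{1}{2}\, d^{\top} H\, d \;+\; o(\|d\|^{2}),
\qquad H \in \partial\bigl(D\varphi\bigr)(p).
```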
In this paper a genetic algorithm (GA) is applied to the Maximum Betweenness Problem (MBP). The maximum of the objective function is obtained by finding a permutation that satisfies the maximum number of betweenness constraints. Every permutation considered is genetically coded with an integer representation. Standard operators are used in the GA. The instances in the experimental results are randomly generated. For smaller dimensions, optimal solutions of the MBP are obtained by total enumeration. For those...
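A minimal sketch of such a GA, assuming order crossover, swap mutation, and elitist truncation selection as the "standard operators" (the paper's exact operators and parameter settings are not specified here). A betweenness constraint (a, b, c) is counted as satisfied when b lies between a and c in the permutation.

```python
import random

def satisfied(perm, triples):
    """Count betweenness constraints (a, b, c) with b between a and c in perm."""
    pos = {v: i for i, v in enumerate(perm)}
    return sum(1 for a, b, c in triples
               if min(pos[a], pos[c]) < pos[b] < max(pos[a], pos[c]))

def order_crossover(p1, p2):
    """Classic OX: copy a random slice from p1, fill remaining slots in p2's order."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [v for v in p2 if v not in child[i:j]]
    k = 0
    for idx in list(range(0, i)) + list(range(j, n)):
        child[idx] = fill[k]; k += 1
    return child

def ga_mbp(n, triples, pop_size=100, generations=500, mut_rate=0.2):
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: -satisfied(p, triples))
        next_pop = pop[:10]                        # elitism
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:50], 2)    # truncation selection
            child = order_crossover(p1, p2)
            if random.random() < mut_rate:         # swap mutation
                a, b = random.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            next_pop.append(child)
        pop = next_pop
    best = max(pop, key=lambda p: satisfied(p, triples))
    return best, satisfied(best, triples)
```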
The system of inequalities is transformed into a least squares problem on the positive orthant. This problem is solved using orthogonal transformations, which are memorized as products. The author's previous paper presented a method in which, at each step, all the coefficients of the system were transformed. This paper describes a method applicable also to large matrices. As in the revised simplex method, an auxiliary matrix is used for the computations. The algorithm is suitable for unstable...
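For reference, the target problem, least squares over the positive orthant (min ‖Cy − d‖₂ subject to y ≥ 0), can be solved with an off-the-shelf dense routine. The sketch below uses scipy.optimize.nnls on synthetic data purely to show the problem being solved; it does not implement the paper's product-form orthogonal-transformation scheme.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic instance of min ||C y - d||_2 subject to y >= 0.
rng = np.random.default_rng(0)
C = rng.standard_normal((50, 10))
d = rng.standard_normal(50)

y, residual = nnls(C, d)        # y >= 0 componentwise
print(y.min() >= 0, residual)   # True, plus the residual norm
```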
Some iterative methods of mathematical programming use a damping sequence {αt} such that 0 ≤ αt ≤ 1 for all t, αt → 0 as t → ∞, and Σ αt = ∞; for example, αt = 1/(t+1) in Brown's method for solving matrix games. In this paper, for a model class of iterative methods, the convergence rate is computed for any damping sequence {αt} that depends only on the time t. The computation is used to find the best damping sequence.
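A toy instance of such a damped iteration, x_{t+1} = (1 − αt) x_t + αt y_t, shows why αt = 1/(t+1) turns the iterate into an exact running average (the setting of Brown's method), while a slower-decaying damping sequence leaves more residual noise. The averaging model is an illustrative assumption, not the paper's model class.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(10_000) + 3.0   # noisy observations of the target value 3.0

def run(alpha, y):
    """Damped iteration x_{t+1} = (1 - a_t) x_t + a_t y_t starting from x = 0."""
    x = 0.0
    for t, yt in enumerate(y):
        x = (1 - alpha(t)) * x + alpha(t) * yt
    return x

print(run(lambda t: 1.0 / (t + 1), y))          # exact running average, ~3.0
print(run(lambda t: 1.0 / (t + 1) ** 0.6, y))   # slower damping, noisier estimate
```

Both sequences satisfy the stated conditions (αt → 0, Σ αt = ∞); the experiment merely illustrates that different admissible damping sequences converge at visibly different rates, which is what the paper quantifies.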