A full Nesterov-Todd step infeasible interior-point algorithm is proposed for solving linear programming problems over symmetric cones, using Euclidean Jordan algebras. Using a new approach, we also provide a search direction and show that the iteration bound coincides with the best known bound for infeasible interior-point methods.
The main purpose of this paper is to describe the design, implementation and possibilities of our object-oriented library of algorithms for dynamic optimization problems. We briefly present library classes for the formulation and manipulation of dynamic optimization problems, and give a general survey of solver classes for unconstrained and constrained optimization. We also demonstrate the methods of derivative evaluation that we use, in particular automatic differentiation. Further, we briefly formulate...
In this work, we study the properties of central paths, defined with respect to a large class of penalty and barrier functions, for convex semidefinite programs. The type of programs studied here is characterized by the minimization of a smooth and convex objective function subject to a linear matrix inequality constraint. So, it is a particular case of convex programming with conic constraints. The studied class of functions consists of spectrally defined functions induced by penalty or barrier...
An application of advanced optimization techniques to solve the path planning problem for closed-chain robot systems is proposed. Path planning is formulated as a “quasi-dynamic” Nonlinear Programming (NLP) problem with equality and inequality constraints in terms of the joint variables. The essence of the method is to find joint paths that satisfy the given constraints and minimize the proposed performance index. For the numerical solution of the NLP problem, the IPOPT solver is used,...
In the present paper, rather general penalty/barrier path-following methods (e.g. with p-th power penalties, logarithmic barriers, SUMT, exponential penalties) applied to linearly constrained convex optimization problems are studied. In particular, unlike previous studies [1,11], different types of penalty/barrier embeddings are treated here simultaneously. Together with the assumed second-order sufficient optimality conditions, this required a significant change in proving the local existence of...
In this paper, we propose a primal interior-point method for large sparse generalized minimax optimization. After a short introduction, where the problem is stated, we introduce the basic equations of the Newton method applied to the KKT conditions and propose a primal interior-point method (i.e., an interior-point method that uses explicitly computed approximations of the Lagrange multipliers instead of their updates). Next, we describe the basic algorithm and give more details concerning its implementation...
In this paper, we propose a primal interior-point method for large sparse minimax optimization. After a short introduction, the complete algorithm is introduced and important implementation details are given. We prove that this algorithm is globally convergent under standard mild assumptions; thus, large sparse nonconvex minimax optimization problems can be solved successfully. The results of the extensive computational experiments given in this paper confirm the efficiency and robustness of the proposed...
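The two minimax abstracts above rest on a standard reformulation: min_x max_i f_i(x) is lifted to the smooth constrained problem min z s.t. f_i(x) ≤ z, which an interior-point method handles with a logarithmic barrier. The sketch below illustrates that lifted barrier scheme only; it uses plain gradient descent with Armijo backtracking rather than the authors' primal Newton iteration, and all function names are mine:

```python
import numpy as np

def barrier_phi(fs, x, z, mu):
    """phi(x, z) = z - mu * sum_i log(z - f_i(x)); +inf outside the domain."""
    s = z - np.array([f(x) for f in fs])
    return np.inf if np.any(s <= 0.0) else z - mu * np.log(s).sum()

def minimax_barrier(fs, grads, x0, mu=1.0, outer=25, inner=100):
    """Solve min_x max_i f_i(x) via min z s.t. f_i(x) <= z,
    minimizing the log-barrier function for a decreasing sequence of mu.
    (A real interior-point code would take Newton steps here.)"""
    x = np.asarray(x0, dtype=float)
    z = max(f(x) for f in fs) + 1.0          # strictly feasible start
    for _ in range(outer):
        for _ in range(inner):
            s = z - np.array([f(x) for f in fs])
            w = mu / s                        # barrier multiplier estimates
            gx = sum(wi * g(x) for wi, g in zip(w, grads))
            gz = 1.0 - w.sum()
            phi0 = barrier_phi(fs, x, z, mu)
            gn2 = gx.dot(gx) + gz * gz
            t = 1.0                           # Armijo backtracking
            while barrier_phi(fs, x - t * gx, z - t * gz, mu) > phi0 - 1e-4 * t * gn2:
                t *= 0.5
                if t < 1e-14:
                    break
            x, z = x - t * gx, z - t * gz
        mu *= 0.5
    return x, z
```

Note that the vector w of ratios mu/s_i plays the role of the explicitly computed Lagrange multiplier approximations mentioned in the abstract of the generalized minimax paper.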
In this report, we propose a new recursive matrix formulation of limited memory variable metric methods. This approach can be used for an arbitrary update from the Broyden class (and some other updates), and for approximating both the Hessian matrix and its inverse. The number of multiplications and additions the new recursive formulation requires per iteration is comparable with that of other efficient limited memory variable metric methods. Numerical experiments concerning Algorithm...
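For orientation, the best-known recursive limited memory scheme of this kind is the BFGS two-loop recursion, which applies the inverse-Hessian approximation defined by the m most recent pairs (s_k, y_k) without ever forming a matrix; the report's formulation generalizes this idea to the whole Broyden class. A minimal sketch of the classic BFGS case (not the report's algorithm):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: apply the inverse-Hessian approximation
    implicitly defined by the stored pairs (s_k, y_k) to the gradient g.
    Cost is O(m n) per call; no dense n-by-n matrix is ever formed."""
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    q = g.copy()
    alphas = []
    # first loop: newest pair to oldest
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    # initial Hessian guess gamma * I (the usual Shanno-Phua scaling)
    gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
    r = gamma * q
    # second loop: oldest pair to newest
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * y.dot(r)
        r += (a - b) * s
    return -r  # quasi-Newton descent direction
```

When the stored pairs satisfy y_k = s_k (consistent with an identity Hessian), the recursion reproduces the steepest-descent direction -g exactly.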
We propose a new projection method for nonsmooth convex minimization problems. We present a method of subgradient selection based on the so-called residual selection model, which generalizes the so-called obtuse cone model. We also present numerical results for some test problems and compare them with those of other convex nonsmooth minimization methods. The numerical results show that the presented selection strategies ensure long steps and lead to an essential acceleration...
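The baseline such selection strategies aim to accelerate is the generic projected subgradient method with diminishing steps, whose short steps near the solution motivate the abstract's emphasis on "long steps". A minimal sketch of that baseline (not the residual-selection method itself; names and step rule t_k = 1/sqrt(k+1) are my choices):

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, steps=500):
    """Generic projected subgradient method with diminishing steps.
    Tracks the best iterate seen, since f(x_k) need not decrease
    monotonically when f is nonsmooth."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        g = subgrad(x)                       # any subgradient of f at x
        x = project(x - g / np.sqrt(k + 1.0))  # step back onto the set
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

For example, minimizing the l1-norm over the box [1, 2] x [-1, 1] (projection by componentwise clipping) yields the corner (1, 0) with value 1.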
An interior-point algorithm is presented for solving symmetric, positive definite quadratic problems by transforming them into equivalent separable problems (that is, the matrix of quadratic coefficients is diagonal and there are no cross terms). The algorithm differs from existing ones (such as the one implemented in the LoQo system) in that it solves the so-called "normal equations in primal form" (LoQo solves the so-called "augmented system") and in...
In this article, we propose a new purification method for monotone linear complementarity problems. This method associates with each iterate of the sequence generated by an interior-point method a basis that is not necessarily feasible. We show that, under the assumptions of strict complementarity and nondegeneracy, the sequence of bases converges in a finite number of iterations to an optimal basis that yields an exact solution of the problem. The adopted procedure...