
A conjugate gradient method with quasi-Newton approximation

Jonas Koko (2000)

Applicationes Mathematicae

The conjugate gradient method of Liu and Storey is an efficient minimization algorithm which uses second-derivative information, obtained by finite-difference approximation, without storing any matrices. It is shown that the finite-difference scheme can be removed by using a quasi-Newton approximation to compute the search direction, without loss of convergence. A conjugate gradient method based on the BFGS approximation is proposed and compared with existing methods of the same class.
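As a rough illustration of the idea (a sketch, not the paper's exact algorithm), a memoryless BFGS update of the identity can supply a search direction from the last step s and gradient difference y alone, with no matrix ever stored. The driver below assumes a simple Armijo backtracking line search; all names are illustrative:

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """Direction d = -H g, where H is one BFGS update of the identity
    built from s = x_k - x_{k-1} and y = g_k - g_{k-1}; no matrix is
    ever formed or stored."""
    rho = s @ y
    if rho <= 1e-12:                  # safeguard: fall back to steepest descent
        return -g
    sg, yg, yy = s @ g, y @ g, y @ y
    return -g + (sg / rho) * y + (yg / rho - (1.0 + yy / rho) * sg / rho) * s

def minimize(f, grad, x, iters=200):
    g = grad(x)
    d = -g
    for _ in range(iters):
        t = 1.0                       # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        d = memoryless_bfgs_direction(g_new, x_new - x, g_new - g)
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-10:
            break
    return x
```

Since rho > 0 guarantees that the implicit H is positive definite, every direction produced is a descent direction, so the backtracking loop terminates.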

A modified limited-memory BNS method for unconstrained minimization derived from the conjugate directions idea

Vlček, Jan, Lukšan, Ladislav (2015)

Programs and Algorithms of Numerical Mathematics

A modification of the limited-memory variable metric BNS method for large-scale unconstrained optimization of a differentiable function f : ℝ^N → ℝ is considered, which consists of corrections (based on the idea of conjugate directions) of difference vectors for better satisfaction of the previous quasi-Newton conditions. In comparison with [11], more previous iterations can be utilized here. For quadratic objective functions, the improvement of convergence is the best one in some sense, all stored corrected...
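For background, the standard limited-memory machinery that methods of this family modify is the L-BFGS two-loop recursion, which applies the inverse-Hessian approximation to a vector using only the stored difference pairs (s_i, y_i). A minimal sketch (the BNS corrections of the paper are not reproduced here):

```python
import numpy as np

def two_loop(g, pairs, gamma=1.0):
    """Compute H*g, where H is the L-BFGS inverse-Hessian approximation
    built from the stored pairs (s_i, y_i), without forming any matrix."""
    q = g.astype(float).copy()
    alphas = []
    for s, y in reversed(pairs):      # first loop: newest pair first
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q = q - a * y
    r = gamma * q                     # initial scaling H0 = gamma * I
    for (s, y), a in zip(pairs, reversed(alphas)):
        b = (y @ r) / (y @ s)         # second loop: oldest pair first
        r = r + (a - b) * s
    return r
```

A quick sanity check of the recursion is the secant condition: with a single stored pair, the result of applying H to y must be exactly s, independently of the scaling gamma.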

A new one-step smoothing Newton method for second-order cone programming

Jingyong Tang, Guoping He, Li Dong, Liang Fang (2012)

Applications of Mathematics

In this paper, we present a new one-step smoothing Newton method for solving the second-order cone programming (SOCP). Based on a new smoothing function of the well-known Fischer-Burmeister function, the SOCP is approximated by a family of parameterized smooth equations. Our algorithm solves only one system of linear equations and performs only one Armijo-type line search at each iteration. It can start from an arbitrary initial point and does not require the iterative points to be in the sets...
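The paper's smoothing function acts on second-order cones; its scalar analogue is easy to state. The Fischer-Burmeister function vanishes exactly at complementary pairs but is nondifferentiable at the origin; adding a perturbation controlled by mu under the square root smooths it. A sketch of the scalar case (the paper's actual vector-valued smoothing function differs):

```python
import math

def fb(a, b):
    """Scalar Fischer-Burmeister function: fb(a, b) = 0 if and only if
    a >= 0, b >= 0 and a*b = 0 (complementarity)."""
    return math.sqrt(a * a + b * b) - a - b

def fb_smooth(a, b, mu):
    """Smoothed variant: the extra 2*mu**2 under the root makes the
    function differentiable everywhere, including at (a, b) = (0, 0)."""
    return math.sqrt(a * a + b * b + 2.0 * mu * mu) - a - b
```

As mu tends to zero, fb_smooth approaches fb, which is why solving the parameterized smooth equations recovers a solution of the original problem.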

A nonmonotone line search for the LBFGS method in parabolic optimal control problems

Omid Solaymani Fard, Farhad Sarani, Akbar Hashemi Borzabadi, Hadi Nosratipour (2019)

Kybernetika

In this paper a nonmonotone limited memory BFGS (NLBFGS) method is applied to approximately solve optimal control problems (OCPs) governed by one-dimensional parabolic partial differential equations. A discretized optimal control problem is obtained by using piecewise linear finite elements and the well-known backward Euler method. Afterwards, by means of the implicit function theorem, the optimal control problem is transformed into an unconstrained nonlinear optimization problem (UNOP). Finally the...
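The nonmonotone ingredient is typically a Grippo-Lampariello-Lucidi-type rule: a step is accepted if it improves on the maximum of the last M objective values, not necessarily the most recent one. A minimal sketch under that assumption (the paper's exact rule may differ), driven here by plain steepest descent for brevity:

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, x, d, g, history, sigma=1e-4):
    """Accept step length t once
       f(x + t*d) <= max(last M objective values) + sigma*t*g'd."""
    fmax = max(history)
    t = 1.0
    while f(x + t * d) > fmax + sigma * t * (g @ d):
        t *= 0.5
    return t

def descend(f, grad, x, iters=100, M=5):
    history = deque([f(x)], maxlen=M)   # keeps only the last M values
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        d = -g                           # descent direction (illustrative)
        t = nonmonotone_armijo(f, x, d, g, history)
        x = x + t * d
        history.append(f(x))
    return x
```

Allowing a temporary increase of f often lets full quasi-Newton steps through that a strict monotone Armijo rule would reject.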

A nonsmooth version of the univariate optimization algorithm for locating the nearest extremum (locating extremum in nonsmooth univariate optimization)

Marek Smietanski (2008)

Open Mathematics

An algorithm for univariate optimization using a linear lower bounding function is extended to the nonsmooth case by using the generalized gradient instead of the derivative. A convergence theorem is proved under the condition of semismoothness. This approach yields global superlinear convergence of the algorithm, which is a generalized Newton-type method.
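The core substitution can be seen in a tiny semismooth Newton iteration: where the classical method divides by g'(x), one divides by any element of the generalized gradient, e.g. the slope of an active piece of a max-type function. An illustrative sketch (not the paper's lower-bounding algorithm):

```python
def semismooth_newton(pieces, x, iters=50, tol=1e-12):
    """Newton-type iteration for g(x) = max_i p_i(x) = 0.  The slope of
    an active (maximizing) piece serves as the generalized-gradient
    element replacing the classical derivative g'(x)."""
    for _ in range(iters):
        vals = [(p(x), dp(x)) for p, dp in pieces]
        g, v = max(vals)          # value and slope of an active piece
        if abs(g) < tol:
            break
        x = x - g / v             # generalized Newton step
    return x

# g(x) = max(x**2 - 1, 2*x - 2): semismooth, with a kink at the root x = 1
pieces = [(lambda x: x * x - 1.0, lambda x: 2.0 * x),
          (lambda x: 2.0 * x - 2.0, lambda x: 2.0)]
root = semismooth_newton(pieces, 3.0)
```

Although g is not differentiable at x = 1, the iteration still converges superlinearly there, which is precisely the behavior that semismoothness guarantees.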

A smoothing Newton method for the second-order cone complementarity problem

Jingyong Tang, Guoping He, Li Dong, Liang Fang, Jinchuan Zhou (2013)

Applications of Mathematics

In this paper we introduce a new smoothing function and show that it is coercive under suitable assumptions. Based on this new function, we propose a smoothing Newton method for solving the second-order cone complementarity problem (SOCCP). The proposed algorithm solves only one linear system of equations and performs only one line search at each iteration. It is shown that any accumulation point of the iteration sequence generated by the proposed algorithm is a solution to the SOCCP. Furthermore,...
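The mechanics of a smoothing Newton method are easiest to see in one dimension. The sketch below solves a scalar complementarity problem by applying Newton's method to a smoothed Fischer-Burmeister equation while driving the smoothing parameter to zero; it is an illustration of the general technique, not the paper's SOCCP algorithm or its smoothing function:

```python
import math

def smoothing_newton_ncp(F, dF, x=1.0):
    """Solve the scalar complementarity problem
         x >= 0,  F(x) >= 0,  x * F(x) = 0
       via Newton steps on phi_mu(x, F(x)) = 0 with mu -> 0, where
       phi_mu(a, b) = sqrt(a*a + b*b + 2*mu*mu) - a - b."""
    mu = 0.1
    for _ in range(8):                 # outer loop: shrink mu
        for _ in range(20):            # inner loop: Newton steps
            a, b = x, F(x)
            r = math.sqrt(a * a + b * b + 2.0 * mu * mu)
            phi = r - a - b
            if abs(phi) < 1e-12:
                break
            dphi = (a + b * dF(x)) / r - 1.0 - dF(x)
            x = x - phi / dphi
        mu *= 0.1
    return x

# F(x) = x - 2: the unique solution is x = 2 (there F(x) = 0)
sol = smoothing_newton_ncp(lambda x: x - 2.0, lambda x: 1.0)
```

Because mu > 0 keeps the square root strictly positive, each inner problem is smooth and Newton's method is well defined at every iterate.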

An accurate active set Newton algorithm for large scale bound constrained optimization

Li Sun, Guoping He, Yongli Wang, Changyin Zhou (2011)

Applications of Mathematics

A new algorithm for solving large scale bound constrained minimization problems is proposed. The algorithm is based on an accurate identification technique of the active set proposed by Facchinei, Fischer and Kanzow in 1998. A further division of the active set yields the global convergence of the new algorithm. In particular, the convergence rate is superlinear without requiring the strict complementarity assumption. Numerical tests demonstrate the efficiency and performance of the present strategy...
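The flavor of such methods can be sketched for a bound-constrained quadratic: estimate the active bounds, take a Newton step in the free variables only, and project back. The identification rule below is a deliberately simplified stand-in, not the Facchinei-Fischer-Kanzow technique the paper builds on:

```python
import numpy as np

def active_set_newton(A, b, x, eps=1e-8, iters=20):
    """Minimize 0.5*x'Ax - b'x subject to x >= 0, for A symmetric
    positive definite.  A bound is estimated active when x_i sits at
    the bound and the gradient pushes into it (simplified rule)."""
    n = len(b)
    for _ in range(iters):
        g = A @ x - b
        active = (x <= eps) & (g >= 0)
        free = ~active
        x_new = np.zeros(n)
        if free.any():
            # Newton step restricted to the free variables
            x_new[free] = np.linalg.solve(A[np.ix_(free, free)], b[free])
        x_new = np.maximum(x_new, 0.0)   # project back onto the bounds
        if np.allclose(x_new, x):
            break
        x = x_new
    return x
```

Once the correct active set is identified, the method reduces to an unconstrained Newton step on the free variables, which is where the superlinear rate comes from.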

An interior point algorithm for convex quadratic programming with strict equilibrium constraints

Rachid Benouahboun, Abdelatif Mansouri (2005)

RAIRO - Operations Research - Recherche Opérationnelle

We describe an interior point algorithm for a convex quadratic problem with strict complementarity constraints. We show that under some assumptions the approach requires a total of O(nL) iterations, where L is the input size of the problem. The algorithm generates a sequence of problems, each of which is approximately solved by Newton's method.
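The "sequence of problems solved by Newton's method" structure is that of a barrier path-following scheme. A minimal sketch for a nonnegatively constrained convex QP (a toy stand-in for the paper's setting with equilibrium constraints, with no complexity guarantee implied):

```python
import numpy as np

def barrier_qp(Q, c, x, t=1.0, mu=10.0, outer=8, inner=25):
    """Minimize 0.5*x'Qx + c'x subject to x >= 0 via a log barrier:
    each outer stage approximately solves, by Newton's method,
        min  t*(0.5*x'Qx + c'x) - sum(log x_i),
    then the barrier weight t is increased."""
    for _ in range(outer):
        for _ in range(inner):
            g = t * (Q @ x + c) - 1.0 / x
            H = t * Q + np.diag(1.0 / x**2)
            dx = np.linalg.solve(H, -g)
            step = 1.0
            while np.any(x + step * dx <= 0):   # stay strictly feasible
                step *= 0.5
            x = x + step * dx
            if np.linalg.norm(g) < 1e-9 * t:
                break
        t *= mu
    return x
```

Each inner loop is an approximate Newton solve of one barrier subproblem; the iterates track the central path toward the constrained optimum as t grows.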

An interior point algorithm for convex quadratic programming with strict equilibrium constraints

Rachid Benouahboun, Abdelatif Mansouri (2010)

RAIRO - Operations Research

We describe an interior point algorithm for a convex quadratic problem with strict complementarity constraints. We show that under some assumptions the approach requires a total of O(nL) iterations, where L is the input size of the problem. The algorithm generates a sequence of problems, each of which is approximately solved by Newton's method.
