Displaying similar documents to “Application of the infinitely many times repeated BNS update and conjugate directions to limited-memory optimization methods”

An improved nonmonotone adaptive trust region method

Yanqin Xue, Hongwei Liu, Zexian Liu (2019)

Applications of Mathematics

Similarity:

Trust region methods are a class of effective iterative schemes in numerical optimization. In this paper, a new improved nonmonotone adaptive trust region method for solving unconstrained optimization problems is proposed. We construct an approximate model in which the approximation to the Hessian matrix is updated by the scaled memoryless BFGS update formula, and incorporate a nonmonotone technique with the newly proposed adaptive trust region radius. The new ratio for adjusting the next trust...

A modified Fletcher-Reeves conjugate gradient method for unconstrained optimization with applications in image restoration

Zainab Hassan Ahmed, Mohamed Hbaib, Khalil K. Abbo (2024)

Applications of Mathematics

Similarity:

The Fletcher-Reeves (FR) method is widely recognized for its drawbacks, such as generating unfavorable directions and taking small steps, which can lead to subsequent poor directions and steps. To address these issues, we propose a modification to the FR method, and then we develop it into a three-term conjugate gradient method in this paper. The suggested methods, named ``HZF'' and ``THZF'', preserve the descent property of the FR method while mitigating the drawbacks. The algorithms...
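As background for this entry, the following is a minimal sketch of the standard Fletcher-Reeves method it modifies, not of the paper's HZF/THZF variants. The restart-on-nondescent safeguard and the Armijo line search are illustrative choices, not taken from the paper.

```python
import numpy as np

def fr_cg(f, grad, x0, max_iter=200, tol=1e-6):
    """Standard Fletcher-Reeves nonlinear CG with Armijo backtracking.
    beta_FR = ||g_{k+1}||^2 / ||g_k||^2; restart to steepest descent
    whenever the FR direction fails to be a descent direction."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along d (slope = g^T d < 0)
        t, fx, slope = 1.0, f(x), np.dot(g, d)
        while f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = np.dot(g_new, g_new) / np.dot(g, g)  # Fletcher-Reeves beta
        d = -g_new + beta * d
        if np.dot(g_new, d) >= 0:  # lost the descent property: restart
            d = -g_new
        x, g = x_new, g_new
    return x
```

The restart line is exactly where the FR drawback cited in the abstract shows up: without exact line searches the FR direction can stop being a descent direction.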

Adjustment of the scaling parameter of Dai-Kou type conjugate gradient methods with application to motion control

Mahbube Akbari, Saeed Nezhadhosein, Aghile Heydari (2024)

Applications of Mathematics

Similarity:

We introduce a new scaling parameter for the Dai-Kou family of conjugate gradient algorithms (2013), which is one of the most numerically efficient methods for unconstrained optimization. The suggested parameter is based on eigenvalue analysis of the search direction matrix and on minimizing the measure function defined by Dennis and Wolkowicz (1993). The corresponding search direction of the conjugate gradient method has the sufficient descent property and satisfies the extended conjugacy condition....

A generalized limited-memory BNS method based on the block BFGS update

Vlček, Jan, Lukšan, Ladislav

Similarity:

A block version of the BFGS variable metric update formula is investigated. It satisfies the quasi-Newton conditions with all used difference vectors and gives the best improvement of convergence in some sense for quadratic objective functions, but it does not guarantee that the direction vectors are descent for general functions. To overcome this difficulty and utilize the advantageous properties of the block BFGS update, a block version of the limited-memory BNS method for large-scale...

A new nonmonotone adaptive trust region algorithm

Ahmad Kamandi, Keyvan Amini (2022)

Applications of Mathematics

Similarity:

We propose a new and efficient nonmonotone adaptive trust region algorithm to solve unconstrained optimization problems. This algorithm incorporates two novelties: it benefits from a radius dependent shrinkage parameter for adjusting the trust region radius that avoids undesirable directions and exploits a new strategy to prevent sudden increments of objective function values in nonmonotone trust region techniques. Global convergence of this algorithm is investigated under some mild...

Modifications of the limited-memory BFGS method based on the idea of conjugate directions

Vlček, Jan, Lukšan, Ladislav

Similarity:

Simple modifications of the limited-memory BFGS method (L-BFGS) for large-scale unconstrained optimization are considered, which consist in corrections of the used difference vectors (derived from the idea of conjugate directions), utilizing information from the preceding iteration. For quadratic objective functions, the improvement of convergence is the best one in some sense and all stored difference vectors are conjugate for unit stepsizes. The algorithm is globally convergent for...
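For context, the baseline that this entry modifies is the standard L-BFGS two-loop recursion, which applies the implicit inverse-Hessian approximation built from the stored difference pairs without forming any matrix. This is a generic sketch of that baseline (with the common Shanno-Phua initial scaling), not of the paper's corrected difference vectors; it assumes at least one stored pair with positive curvature s·y > 0.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: compute d = -H_k @ grad, where H_k is the
    implicit L-BFGS inverse-Hessian approximation built from stored
    pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i (oldest first)."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    # Initial scaling H_0 = gamma * I (Shanno-Phua, from the newest pair)
    s, y = s_list[-1], y_list[-1]
    q *= np.dot(s, y) / np.dot(y, y)
    # Second loop: oldest pair to newest
    for s, y, rho, a in zip(s_list, y_list, rhos, reversed(alphas)):
        b = rho * np.dot(y, q)
        q += (a - b) * s
    return -q
```

On a quadratic with A-conjugate stored directions, the recursion reproduces the exact Newton direction -A^{-1}g, which is the "best improvement" property the abstract generalizes.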

Multi-agent solver for non-negative matrix factorization based on optimization

Zhipeng Tu, Weijian Li (2021)

Kybernetika

Similarity:

This paper investigates a distributed solver for non-negative matrix factorization (NMF) over a multi-agent network. After reformulating the problem into the standard distributed optimization form, we design our distributed algorithm (DisNMF) based on the primal-dual method and in the form of multiplicative update rule. With the help of auxiliary functions, we provide monotonic convergence analysis. Furthermore, we show by computational complexity analysis and numerical examples that...
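The paper's DisNMF is a distributed primal-dual algorithm; as background for the "multiplicative update rule" it builds on, here is a minimal sketch of the classic centralized Lee-Seung multiplicative updates for the Frobenius-norm NMF objective. The rank, iteration count, and epsilon safeguard are illustrative choices.

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=200, eps=1e-9, seed=0):
    """Classic Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.
    Non-negativity is preserved automatically because every update multiplies
    the current factor entrywise by a ratio of non-negative quantities."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H
```

The monotonic decrease of the objective under these updates is exactly the kind of auxiliary-function argument the abstract's convergence analysis refers to.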

A self-scaling memoryless BFGS based conjugate gradient method using multi-step secant condition for unconstrained minimization

Yongjin Kim, Yunchol Jong, Yong Kim (2024)

Applications of Mathematics

Similarity:

Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems, because they do not need the storage of matrices. Based on the self-scaling memoryless Broyden-Fletcher-Goldfarb-Shanno (SSML-BFGS) method, new conjugate gradient algorithms CG-DESCENT and CGOPT have been proposed by W. Hager, H. Zhang (2005) and Y. Dai, C. Kou (2013), respectively. It is noted that the two conjugate gradient methods perform more efficiently than the SSML-BFGS method....
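To make the SSML-BFGS baseline named here concrete, the sketch below computes the search direction -H+ g, where H+ is the BFGS inverse update of a scaled identity tau*I using the single latest pair (s, y); only dot products are needed, no stored matrix. This is one common form of the memoryless update (with Oren-Spedicato scaling tau = s·y / y·y as an illustrative default), not the paper's multi-step secant variant.

```python
import numpy as np

def ssml_bfgs_direction(g, s, y, tau=None):
    """Self-scaling memoryless BFGS direction d = -H+ g, with
    H+ = tau*(I - rho*s*y^T)(I - rho*y*s^T) + rho*s*s^T, rho = 1/(s^T y).
    Requires the curvature condition s^T y > 0."""
    rho = 1.0 / np.dot(s, y)
    if tau is None:
        tau = np.dot(s, y) / np.dot(y, y)  # Oren-Spedicato scaling
    v = g - rho * np.dot(s, g) * y         # (I - rho*y*s^T) g
    Hv = tau * (v - rho * np.dot(y, v) * s)  # tau*(I - rho*s*y^T) v
    return -(Hv + rho * np.dot(s, g) * s)
```

Since H+ is positive definite whenever s^T y > 0 and tau > 0, the resulting direction is always a descent direction.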

Random perturbation of the projected variable metric method for nonsmooth nonconvex optimization problems with linear constraints

Abdelkrim El Mouatasim, Rachid Ellaia, Eduardo Souza de Cursi (2011)

International Journal of Applied Mathematics and Computer Science

Similarity:

We present a random perturbation of the projected variable metric method for solving linearly constrained nonsmooth (i.e., nondifferentiable) nonconvex optimization problems, and we establish the convergence to a global minimum for a locally Lipschitz continuous objective function which may be nondifferentiable on a countable set of points. Numerical results show the effectiveness of the proposed approach.

The classic differential evolution algorithm and its convergence properties

Roman Knobloch, Jaroslav Mlýnek, Radek Srb (2017)

Applications of Mathematics

Similarity:

Differential evolution algorithms represent an up-to-date and efficient way of solving complicated optimization tasks. In this article we concentrate on the ability of differential evolution algorithms to attain the global minimum of the cost function. We demonstrate that, although it is often declared a global optimizer, the classic differential evolution algorithm does not in general guarantee convergence to the global minimum. To remedy this weakness we design a simple modification...
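The "classic" algorithm the abstract analyzes is the standard DE/rand/1/bin scheme, sketched below for reference; the population size, mutation factor F, and crossover rate CR are conventional illustrative defaults, not values from the article.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Classic DE/rand/1/bin: mutate with the scaled difference of two random
    population members, apply binomial crossover, then keep the trial point
    only if it does not worsen the cost (greedy selection)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct members, all different from i
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            # binomial crossover; force at least one coordinate from mutant
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

The greedy selection step never accepts a worse point, which is why the population can stagnate at a local minimum; this is the convergence gap the article's modification targets.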