Similar documents to “A self-scaling memoryless BFGS based conjugate gradient method using multi-step secant condition for unconstrained minimization”

A modified Fletcher-Reeves conjugate gradient method for unconstrained optimization with applications in image restoration

Zainab Hassan Ahmed, Mohamed Hbaib, Khalil K. Abbo (2024)

Applications of Mathematics

Similarity:

The Fletcher-Reeves (FR) method is widely recognized for its drawbacks, such as generating unfavorable directions and taking small steps, which can lead to subsequent poor directions and steps. To address this issue, we propose a modification of the FR method, which we then develop into a three-term conjugate gradient method. The suggested methods, named "HZF" and "THZF", preserve the descent property of the FR method while mitigating its drawbacks. The algorithms...
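
The abstract truncates before the HZF/THZF formulas are stated, so as a point of reference here is a minimal NumPy sketch of the classical FR iteration that the paper modifies. The function names and the Armijo backtracking line search are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def armijo_backtracking(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4):
    # Shrink the step until the Armijo sufficient-decrease condition holds.
    fx, slope = f(x), grad(x) @ d
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=1000):
    # Classical FR conjugate gradient: beta_k = ||g_{k+1}||^2 / ||g_k||^2.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = armijo_backtracking(f, grad, x, d)
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves parameter
        d = -g_new + beta * d
        g = g_new
    return x
```

The jamming behavior the abstract mentions (tiny steps followed by poor directions) occurs when beta stays near 1 while the step is small; this is the weakness the HZF/THZF variants target.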

Adjustment of the scaling parameter of Dai-Kou type conjugate gradient methods with application to motion control

Mahbube Akbari, Saeed Nezhadhosein, Aghile Heydari (2024)

Applications of Mathematics

Similarity:

We introduce a new scaling parameter for the Dai-Kou (2013) family of conjugate gradient algorithms, one of the most numerically efficient classes of methods for unconstrained optimization. The suggested parameter is based on an eigenvalue analysis of the search direction matrix and on minimizing the measure function defined by Dennis and Wolkowicz (1993). The corresponding search direction of the conjugate gradient method has the sufficient descent property and satisfies the extended conjugacy condition....
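
The eigenvalue-based scaling parameter itself is not given in the truncated abstract. For orientation, here is a sketch of the Dai-Kou (2013) family of search directions with the scaling parameter tau left as an input; the beta formula follows a common statement of the Dai-Kou family and should be treated as an assumption, not the authors' exact implementation.

```python
import numpy as np

def dai_kou_direction(g_new, d, s, y, tau):
    """Search direction of the Dai-Kou (2013) CG family:
        d_+ = -g_new + beta(tau) * d,
    where s = x_k - x_{k-1}, y = g_k - g_{k-1}, and tau is the
    self-scaling parameter (the paper's new eigenvalue-based choice
    of tau is not reproduced here).
    """
    dy = d @ y
    beta = (g_new @ y) / dy \
        - (tau + (y @ y) / (s @ y) - (s @ y) / (s @ s)) * (g_new @ s) / dy
    return -g_new + beta * d

# A common baseline choice (again, not the paper's new parameter):
#   tau = (s @ y) / (s @ s)
```

Different choices of tau trade off the conditioning of the implicit memoryless BFGS matrix; the sufficient descent property claimed in the abstract refers to the resulting d_+ satisfying a bound of the form d_+ᵀ g_new ≤ -c‖g_new‖².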

Modifications of the limited-memory BFGS method based on the idea of conjugate directions

Jan Vlček, Ladislav Lukšan

Similarity:

Simple modifications of the limited-memory BFGS method (L-BFGS) for large-scale unconstrained optimization are considered. They consist in correcting the stored difference vectors (an idea derived from conjugate directions), utilizing information from the preceding iteration. For quadratic objective functions the improvement in convergence is, in a certain sense, the best possible, and all stored difference vectors are conjugate for unit stepsizes. The algorithm is globally convergent for...
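
The correction formulas for the difference vectors are not included in the truncated abstract. As context, here is the standard L-BFGS two-loop recursion that such corrections feed into; the corrected pairs would simply replace the entries of s_list and y_list below. A minimal sketch, not the authors' code.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    # Two-loop recursion computing d = -H_k g from the m most recent
    # curvature pairs (s_i, y_i), stored oldest first. Corrected
    # difference vectors, as proposed in the paper, would replace
    # the entries of s_list / y_list before calling this routine.
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    q, alphas = g.copy(), []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    s, y = s_list[-1], y_list[-1]
    r = (s @ y) / (y @ y) * q  # standard initial scaling H_0 = gamma * I
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return -r
```

For a quadratic objective with unit stepsizes, mutual conjugacy of the stored difference vectors is exactly the property the abstract says the modification achieves.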