Displaying 441 – 460 of 839


Modifications of the limited-memory BFGS method based on the idea of conjugate directions

Vlček, Jan, Lukšan, Ladislav (2013)

Programs and Algorithms of Numerical Mathematics

Simple modifications of the limited-memory BFGS method (L-BFGS) for large-scale unconstrained optimization are considered, which consist of corrections to the stored difference vectors (derived from the idea of conjugate directions), utilizing information from the preceding iteration. For quadratic objective functions, the improvement of convergence is in some sense the best possible, and all stored difference vectors are conjugate for unit stepsizes. The algorithm is globally convergent for convex sufficiently...
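The corrections described here act on the difference vectors stored by L-BFGS. As context, the baseline they modify is the standard L-BFGS two-loop recursion, sketched below (the function name and the common $H_0$ scaling heuristic are illustrative choices, not taken from the paper):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H_k @ grad, where H_k is
    the implicit inverse-Hessian approximation built from the stored
    difference vectors s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i."""
    q = grad.copy()
    alphas = []
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    # First loop: newest pair to oldest
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append(alpha)
    # Initial scaling H_0 = (s^T y / y^T y) I, a common heuristic choice
    s, y = s_list[-1], y_list[-1]
    r = q * (np.dot(s, y) / np.dot(y, y))
    # Second loop: oldest pair to newest
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return -r
```

Since all stored pairs satisfy $s_i^T y_i > 0$ for convex problems, the implicit $H_k$ is positive definite and the returned direction is a descent direction.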

Modified golden ratio algorithms for pseudomonotone equilibrium problems and variational inequalities

Lulu Yin, Hongwei Liu, Jun Yang (2022)

Applications of Mathematics

We propose a modification of the golden ratio algorithm for solving pseudomonotone equilibrium problems with a Lipschitz-type condition in Hilbert spaces. A new non-monotone stepsize rule is used in the method. A weak convergence theorem is proved without any additional condition. Furthermore, under a strong pseudomonotonicity condition, the $R$-linear convergence rate of the method is established. The results obtained are applied to a variational inequality problem, and the convergence rate...
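The paper modifies the stepsize rule; the baseline being modified is, presumably, the basic golden ratio algorithm with a fixed stepsize. A minimal sketch of that baseline for a variational inequality (function names and the constant-stepsize choice are assumptions for illustration):

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def golden_ratio_vi(F, proj, x0, lam, iters=2000):
    """Basic golden ratio algorithm for the variational inequality:
    find x* in C with <F(x*), x - x*> >= 0 for all x in C.
    `proj` projects onto C; `lam` is a fixed stepsize (for an
    L-Lipschitz monotone F, lam <= PHI/(2L) is the usual choice)."""
    x, xbar = x0.copy(), x0.copy()
    for _ in range(iters):
        xbar = ((PHI - 1) * x + xbar) / PHI   # golden-ratio averaging step
        x = proj(xbar - lam * F(x))           # projected forward step
    return x
```

For example, with $F(z) = z - p$ and $C$ the unit box, the iterates approach the projection of $p$ onto $C$, which solves this variational inequality.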

New hybrid conjugate gradient method for nonlinear optimization with application to image restoration problems

Youcef Elhamam Hemici, Samia Khelladi, Djamel Benterki (2024)

Kybernetika

The conjugate gradient method is one of the most effective algorithms for unconstrained nonlinear optimization problems, since it needs little storage memory and has a simple structure; this motivates us to propose a new hybrid conjugate gradient method through a convex combination of $\beta_k^{RMIL}$ and $\beta_k^{HS}$. We compute the convex parameter $\theta_k$ using the Newton direction. Global convergence is established through the strong Wolfe conditions. Numerical experiments show the...
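The hybridization idea can be sketched as follows. This is an illustrative skeleton only: it uses a fixed $\theta$ (the paper derives $\theta_k$ from the Newton direction), an Armijo backtracking line search standing in for the strong Wolfe conditions, and the RMIL and HS beta formulas as commonly stated in the literature:

```python
import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, iters=200, tol=1e-8):
    """Nonlinear CG with a hybrid beta = theta*beta_RMIL + (1-theta)*beta_HS,
    where beta_RMIL = g_{k+1}^T y_k / ||d_k||^2 and
          beta_HS   = g_{k+1}^T y_k / (d_k^T y_k)."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0                              # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * np.dot(g, d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        beta_rmil = np.dot(g_new, y) / np.dot(d, d)
        beta_hs = np.dot(g_new, y) / np.dot(d, y)
        beta = theta * beta_rmil + (1 - theta) * beta_hs
        d = -g_new + beta * d
        if np.dot(g_new, d) >= 0:            # safeguard: restart if not descent
            d = -g_new
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic $f(x) = \frac{1}{2}x^T A x - b^T x$ the iterates converge to $A^{-1}b$.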

New quasi-Newton method for solving systems of nonlinear equations

Ladislav Lukšan, Jan Vlček (2017)

Applications of Mathematics

We propose a new Broyden method for solving systems of nonlinear equations, which uses first derivatives but is more efficient than the Newton method (measured by computational time) for larger dense systems. The new method updates QR or LU decompositions of nonsymmetric approximations of the Jacobian matrix, so it requires $O(n^2)$ arithmetic operations per iteration, in contrast with the Newton method, which requires $O(n^3)$ operations per iteration. Computational experiments confirm the high efficiency...
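The $O(n^2)$-per-iteration cost comes from rank-one updating. The paper updates QR/LU factorizations; the same cost can be illustrated more simply by maintaining the approximate inverse Jacobian with the Sherman-Morrison formula, as in the classical "good" Broyden method sketched below (this is the textbook variant, not the paper's new method):

```python
import numpy as np

def broyden_solve(F, x0, J0, iters=100, tol=1e-10):
    """Broyden's 'good' method: maintains an approximate inverse Jacobian H
    and applies the Sherman-Morrison rank-one update, so each iteration
    costs O(n^2) after the initial O(n^3) inversion."""
    x = x0.copy()
    H = np.linalg.inv(J0)
    Fx = F(x)
    for _ in range(iters):
        if np.linalg.norm(Fx) < tol:
            break
        s = -H @ Fx                # quasi-Newton step
        x = x + s
        Fx_new = F(x)
        y = Fx_new - Fx
        Hy = H @ y
        # Sherman-Morrison update of the inverse Jacobian approximation
        H += np.outer(s - Hy, s @ H) / (s @ Hy)
        Fx = Fx_new
    return x
```

Each iteration performs only matrix-vector products and one outer-product update, hence the quadratic per-step cost.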

Newton and conjugate gradient for harmonic maps from the disc into the sphere

Morgan Pierre (2004)

ESAIM: Control, Optimisation and Calculus of Variations

We compute numerically the minimizers of the Dirichlet energy $E(u) = \frac{1}{2} \int_{B^2} |\nabla u|^2 \, dx$ among maps $u \colon B^2 \to S^2$ from the unit disc into the unit sphere that satisfy a boundary condition and a degree condition. We use a Sobolev gradient algorithm for the minimization and we prove that its continuous version preserves the degree. For the discretization of the problem we use continuous $P^1$ finite elements. We propose an original mesh-refining strategy needed to preserve the degree with the discrete version of the algorithm (which is a preconditioned...

Newton's methods for variational inclusions under conditioned Fréchet derivative

Ioannis K. Argyros, Saïd Hilout (2007)

Applicationes Mathematicae

Estimates of the radius of convergence of Newton's methods for variational inclusions in Banach spaces are investigated under a weak Lipschitz condition on the first Fréchet derivative. We establish the linear convergence of Newton's method and of a variant of it, using the concepts of pseudo-Lipschitz set-valued maps and ω-conditioned Fréchet derivatives, or the center-Lipschitz condition introduced by the first author.

Nonlinear conjugate gradient methods

Lukšan, Ladislav, Vlček, Jan (2015)

Programs and Algorithms of Numerical Mathematics

Modifications of the nonlinear conjugate gradient method are described and tested.

Nonlinear Rescaling Method and Self-concordant Functions

Richard Andrášik (2013)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

Nonlinear rescaling is a tool for solving large-scale nonlinear programming problems. The primal-dual nonlinear rescaling method was used to solve two quadratic programming problems with quadratic constraints. Based on the performance of the primal-dual nonlinear rescaling method on the test problems, conclusions about parameter settings are drawn. Next, the connection between nonlinear rescaling methods and self-concordant functions is discussed, and a modified logarithmic barrier function is...
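The nonlinear rescaling idea can be seen in miniature with Polyak-style modified (shifted) logarithmic barriers. Below is a 1-D toy illustration, not the paper's primal-dual method: for $\min x^2$ s.t. $x \ge 1$, the rescaled Lagrangian is $L(x) = x^2 - \mu\lambda \log(1 + (x-1)/\mu)$, minimized in $x$ by Newton's method, followed by the classical multiplier update $\lambda \leftarrow \lambda / (1 + (x-1)/\mu)$ (all names and parameter values here are illustrative assumptions):

```python
def modified_barrier_1d(lam=1.0, mu=0.5, outer=30, inner=30):
    """Toy modified log-barrier / multiplier method for
    min x^2 s.t. x >= 1 (solution x* = 1, multiplier lam* = 2)."""
    x = 1.0
    for _ in range(outer):
        for _ in range(inner):
            # Newton step on L(x) = x^2 - mu*lam*log(1 + (x-1)/mu)
            t = 1.0 + (x - 1.0) / mu
            grad = 2 * x - lam / t
            hess = 2 + lam / (mu * t * t)
            x -= grad / hess
            x = max(x, 1.0 - mu + 1e-9)      # stay in the barrier's domain
        lam = lam / (1.0 + (x - 1.0) / mu)   # multiplier update
    return x, lam
```

Unlike the classical log barrier, the shifted barrier is finite at the boundary $x = 1$, so the penalty parameter $\mu$ can stay fixed while the multiplier sequence converges.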
