Displaying similar documents to “On an iterative method for unconstrained optimization”

Local convergence for a family of iterative methods based on decomposition techniques

Ioannis K. Argyros, Santhosh George, Shobha Monnanda Erappa (2016)

Applicationes Mathematicae

Similarity:

We present a local convergence analysis for a family of iterative methods obtained by using decomposition techniques. The convergence of these methods was shown before using hypotheses on up to the seventh derivative although only the first derivative appears in these methods. In the present study we expand the applicability of these methods by showing convergence using only the first derivative. Moreover we present a radius of convergence and computable error bounds based only on Lipschitz...

Local convergence comparison between two novel sixth order methods for solving equations

Santhosh George, Ioannis K. Argyros (2019)

Annales Universitatis Paedagogicae Cracoviensis. Studia Mathematica

Similarity:

The aim of this article is to provide a local convergence analysis of two novel competing sixth-order methods for solving equations involving Banach space valued operators. Earlier studies have used hypotheses reaching up to the sixth derivative, although only the first derivative appears in these methods. These hypotheses limit the applicability of the methods. That is why we are motivated to present a convergence analysis based only on the first derivative. Numerical examples...

Local-global convergence, an analytic and structural approach

Jaroslav Nešetřil, Patrice Ossona de Mendez (2019)

Commentationes Mathematicae Universitatis Carolinae

Similarity:

Based on methods of structural convergence we provide a unifying view of local-global convergence, fitting to model theory and analysis. The general approach outlined here provides a possibility to extend the theory of local-global convergence to graphs with unbounded degrees. As an application, we extend previous results on continuous clustering of local convergent sequences and prove the existence of modeling quasi-limits for local-global convergent sequences of nowhere dense graphs. ...

An improved nonmonotone adaptive trust region method

Yanqin Xue, Hongwei Liu, Zexian Liu (2019)

Applications of Mathematics

Similarity:

Trust region methods are a class of effective iterative schemes in numerical optimization. In this paper, a new improved nonmonotone adaptive trust region method for solving unconstrained optimization problems is proposed. We construct an approximate model in which the approximation to the Hessian matrix is updated by the scaled memoryless BFGS update formula, and we incorporate a nonmonotone technique with the newly proposed adaptive trust region radius. The new ratio for adjusting the next trust...
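To illustrate the general trust-region mechanism this abstract refers to, here is a minimal sketch using a Cauchy-point step on a quadratic model with an identity Hessian approximation. It is illustrative only: the paper's method instead uses a scaled memoryless BFGS model, a nonmonotone acceptance rule, and an adaptive radius formula.

```python
import numpy as np

def trust_region(f, grad, x0, delta0=1.0, delta_max=10.0, eta=0.15,
                 tol=1e-8, max_iter=200):
    """Generic trust-region sketch (Cauchy-point step, B = I model).
    Not the authors' method: no BFGS update, no nonmonotone rule."""
    x, delta = np.asarray(x0, float), delta0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Cauchy step: steepest descent clipped to the trust region
        p = -min(delta / np.linalg.norm(g), 1.0) * g
        pred = -(g @ p + 0.5 * p @ p)   # model decrease with B = I
        ared = f(x) - f(x + p)          # actual decrease
        rho = ared / pred if pred > 0 else 0.0
        if rho > eta:                   # accept the step
            x = x + p
        # adjust the radius from the agreement ratio rho
        delta = min(2 * delta, delta_max) if rho > 0.75 else \
                (0.5 * delta if rho < 0.25 else delta)
    return x
```

The ratio `rho` of actual to predicted reduction is what the abstract's "new ratio" refines for adjusting the next trust region radius.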

Local convergence of a multi-step high order method with divided differences under hypotheses on the first derivative

Ioannis K. Argyros, Santhosh George (2017)

Annales Universitatis Paedagogicae Cracoviensis. Studia Mathematica

Similarity:

This paper is devoted to the study of a multi-step method with divided differences for solving nonlinear equations in Banach spaces. In earlier studies, hypotheses on the Fréchet derivative of the operator under consideration up to the sixth order are used to prove the convergence of the method. That restricts the applicability of the method. In this paper we extend the applicability of the sixth-order multi-step method by using only hypotheses on the first derivative of the operator...

Expanding the applicability of two-point Newton-like methods under generalized conditions

Ioannis K. Argyros, Saïd Hilout (2013)

Applicationes Mathematicae

Similarity:

We use a two-point Newton-like method to approximate a locally unique solution of a nonlinear equation containing a non-differentiable term in a Banach space setting. Using more precise majorizing sequences than in earlier studies, we present a tighter semi-local and local convergence analysis and weaker convergence criteria. This way we expand the applicability of these methods. Numerical examples are provided where the old convergence criteria do not hold but the new convergence criteria...

État de l'art des méthodes “d'optimisation globale”

Gérard Berthiau, Patrick Siarry (2010)

RAIRO - Operations Research

Similarity:

We present a review of the main “global optimization” methods. The paper comprises an introduction and two parts. In the introduction, we recall some generalities about nonlinear unconstrained optimization and we list some classifications that have been proposed for global optimization methods. In the first part, we then describe various “classical” global optimization methods, most of which were available long before the appearance of Simulated Annealing (a key event in this...

New technique for solving univariate global optimization

Djamel Aaid, Amel Noui, Mohand Ouanes (2017)

Archivum Mathematicum

Similarity:

In this paper, a new global optimization method is proposed for an optimization problem with a twice differentiable objective function of a single variable with a box constraint. The method employs the difference between a linear interpolant of the objective and a concave function, the resulting function being a continuous piecewise convex quadratic underestimator. The main objective of this research is to determine the value of the lower bound without the need for an iterative local optimizer. The...
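The underestimator idea behind this abstract can be sketched as follows: on an interval, the linear interpolant of f minus a concave correction gives a convex quadratic lower bound when a constant K bounds f'' from above, and branch-and-bound on these bounds yields the global minimum. This is a generic illustration of that idea, not the paper's exact construction.

```python
import heapq

def global_min_1d(f, a, b, K, tol=1e-8, max_nodes=10000):
    """Branch-and-bound using the quadratic underestimator
    q(x) = linear interpolant of f - (K/2)(x-a)(b-x), valid if f'' <= K.
    Illustrative sketch; the paper's underestimator is piecewise."""
    def lower(lo, hi, flo, fhi):
        h = hi - lo
        s = (fhi - flo) / h
        t = min(max(h / 2 - s / K, 0.0), h)  # minimizer of q on [0, h]
        return flo + s * t - 0.5 * K * t * (h - t)

    fa, fb = f(a), f(b)
    ub = min(fa, fb)                          # best value found so far
    heap = [(lower(a, b, fa, fb), a, b, fa, fb)]
    while heap and max_nodes > 0:
        lb, lo, hi, flo, fhi = heapq.heappop(heap)
        if ub - lb < tol:                     # gap closed: done
            return ub
        m = 0.5 * (lo + hi)
        fm = f(m)
        ub = min(ub, fm)
        for l, h_, fl, fh in ((lo, m, flo, fm), (m, hi, fm, fhi)):
            nlb = lower(l, h_, fl, fh)
            if nlb < ub - tol:                # keep only promising boxes
                heapq.heappush(heap, (nlb, l, h_, fl, fh))
        max_nodes -= 1
    return ub
```

Note that the lower bound on each subinterval needs only two function values and the constant K, so no iterative local optimizer is required, which matches the objective stated in the abstract.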

Improved local convergence analysis of inexact Newton-like methods under the majorant condition

Ioannis K. Argyros, Santhosh George (2015)

Applicationes Mathematicae

Similarity:

We present a local convergence analysis of inexact Newton-like methods for solving nonlinear equations. Using more precise majorant conditions than in earlier studies, we provide: a larger radius of convergence; tighter error estimates on the distances involved; and a clearer relationship between the majorant function and the associated least squares problem. Moreover, these advantages are obtained under the same computational cost.
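The class of methods this abstract analyzes can be sketched as follows: an inexact Newton-like iteration solves the Newton system only approximately, subject to the classical forcing-term residual condition. The sketch below emulates the inner solver's inexactness with a controlled perturbation; it illustrates the class, not the paper's specific majorant analysis.

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-10, max_iter=100, seed=0):
    """Inexact Newton sketch: take a step s with residual
    ||J(x) s + F(x)|| <= eta * ||F(x)|| (forcing-term condition).
    Inexactness is emulated here by perturbing the exact step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        Fx = F(x)
        nF = np.linalg.norm(Fx)
        if nF < tol:
            break
        Jx = J(x)
        s = np.linalg.solve(Jx, -Fx)          # exact Newton step
        # emulate an inner solver stopped early: add a residual r with
        # ||r|| = 0.5 * eta * ||F(x)||, keeping the forcing condition
        r = rng.standard_normal(s.shape)
        r *= 0.5 * eta * nF / np.linalg.norm(r)
        x = x + s + np.linalg.solve(Jx, r)
    return x
```

Locally, the iteration converges linearly with a factor governed by the forcing term eta; the paper's contribution is a sharper majorant condition giving a larger convergence radius and tighter error bounds for such iterations.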

Distributed optimization with inexact oracle

Kui Zhu, Yichen Zhang, Yutao Tang (2022)

Kybernetika

Similarity:

In this paper, we study the distributed optimization problem using approximate first-order information. We suppose that each agent can repeatedly call an inexact first-order oracle of its individual objective function and exchange information with its time-varying neighbors. We revisit the distributed subgradient method in this setting and show its suboptimality under square summable but not summable step sizes. We also present several conditions on the inexactness of the local oracles...
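The distributed subgradient scheme the abstract revisits can be sketched as follows: each agent mixes its iterate with its neighbors' and then steps along a noisy subgradient of its local objective, with diminishing step sizes 1/k (square summable but not summable). This sketch uses a fixed doubly stochastic mixing matrix W for simplicity; the paper treats time-varying neighbor graphs.

```python
import numpy as np

def distributed_subgradient(subgrads, W, x0, noise=0.0, iters=500, seed=0):
    """Distributed subgradient sketch with an inexact first-order oracle.
    subgrads: list of per-agent subgradient callables; W: mixing matrix
    (doubly stochastic); X holds one row of decision variables per agent."""
    rng = np.random.default_rng(seed)
    X = np.array(x0, float)
    for k in range(1, iters + 1):
        X = W @ X                                 # consensus (mixing) step
        G = np.array([g(x) for g, x in zip(subgrads, X)])
        G = G + noise * rng.standard_normal(G.shape)  # inexact oracle
        X = X - (1.0 / k) * G                     # diminishing step size
    return X.mean(axis=0)                         # network average estimate
```

With exact oracles (`noise=0`) this converges to the minimizer of the sum of the local objectives; with persistent oracle noise one only obtains the kind of suboptimality bound the abstract describes.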

Local convergence analysis of a modified Newton-Jarratt's composition under weak conditions

Ioannis K. Argyros, Santhosh George (2019)

Commentationes Mathematicae Universitatis Carolinae

Similarity:

A. Cordero et al. (2010) considered a modified Newton-Jarratt composition to solve nonlinear equations. In this study, using a decomposition technique under weaker assumptions, we extend the applicability of this method. Numerical examples are also given where earlier results cannot be applied to solve equations but our results can.
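For background, the classical fourth-order Jarratt scheme underlying Newton-Jarratt compositions can be sketched for a scalar equation f(x) = 0: a two-thirds Newton substep y is used to correct the full Newton step. This is the classical Jarratt iteration, not the modified composition of Cordero et al. analyzed in the paper.

```python
def jarratt(f, fp, x0, tol=1e-12, max_iter=20):
    """Classical fourth-order Jarratt method for scalar f(x) = 0.
    Uses only f and its first derivative fp, evaluated at x and at
    the auxiliary point y = x - (2/3) f(x)/fp(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d = fx / fp(x)                     # Newton step length
        y = x - (2.0 / 3.0) * d            # auxiliary substep
        num = 3.0 * fp(y) + fp(x)
        den = 6.0 * fp(y) - 2.0 * fp(x)
        x = x - (num / den) * d            # Jarratt correction
    return x
```

Like the methods in the paper, each iteration evaluates only first derivatives, which is why convergence analyses based solely on the first derivative are the natural goal.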