Newton's Method for Gradient Equations Based upon the Fixed Point Map: Convergence and Complexity Study.
Joseph W. Jerome (1987)
Numerische Mathematik
Similarity:
K.M. Brown, J.E. Dennis Jr. (1968)
Numerische Mathematik
Similarity:
J.E. Dennis Jr. (1968)
Numerische Mathematik
Similarity:
J.C.P. Bus (1976/1977)
Numerische Mathematik
Similarity:
T.J. Ypma (1984)
Numerische Mathematik
Similarity:
Serge Kruk, Henry Wolkowicz (2003)
Journal of Applied Mathematics
Similarity:
W.M. Häußler (1986)
Numerische Mathematik
Similarity:
L.B. Rall (1966/67)
Numerische Mathematik
Similarity:
Ioannis K. Argyros, Santhosh George (2019)
Commentationes Mathematicae Universitatis Carolinae
Similarity:
A. Cordero et al. (2010) considered a modified Newton-Jarratt composition for solving nonlinear equations. In this study, using a decomposition technique under weaker assumptions, we extend the applicability of this method. Numerical examples are also given in which earlier results cannot be applied to solve the equations but our results can.
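As a rough illustration of the kind of scheme involved, here is a minimal sketch of the classical fourth-order Jarratt iteration on which such Newton-Jarratt compositions are built; the exact modified composition of Cordero et al. differs, and the function names, test system, and starting point below are hypothetical:

```python
import numpy as np

def jarratt_step(F, J, x):
    """One step of the classical fourth-order Jarratt method for F(x) = 0.
    Sketch only: the modified composition of Cordero et al. (2010) adds
    further substeps on top of an iteration of this kind."""
    Fx = F(x)
    Jx = J(x)
    newton = np.linalg.solve(Jx, Fx)   # F'(x)^{-1} F(x)
    y = x - (2.0 / 3.0) * newton       # auxiliary point
    Jy = J(y)
    # x_new = x - (1/2) [3F'(y) - F'(x)]^{-1} [3F'(y) + F'(x)] F'(x)^{-1} F(x)
    return x - 0.5 * np.linalg.solve(3.0 * Jy - Jx, (3.0 * Jy + Jx) @ newton)

# Hypothetical test system with root (1, 2): x0^2 + x1 = 3, x0 + x1^2 = 5
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])

x = np.array([1.5, 1.5])
for _ in range(5):
    x = jarratt_step(F, J, x)
print(x, np.linalg.norm(F(x)))         # converges to (1, 2)
```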
Ioannis K. Argyros (1998)
Southwest Journal of Pure and Applied Mathematics [electronic only]
Similarity:
O. Knoth (1989/90)
Numerische Mathematik
Similarity:
Ioannis K. Argyros, Santhosh George (2013)
Applicationes Mathematicae
Similarity:
We present new semilocal convergence conditions for a two-step Newton-like projection method of Lavrentiev regularization for solving ill-posed equations in a Hilbert space setting. The new conditions are weaker than those in earlier studies. Examples are presented in which the older convergence conditions fail while the new conditions are satisfied.
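A minimal finite-dimensional sketch of a two-step Newton-Lavrentiev iteration of the kind the abstract refers to, with the derivative frozen at the initial guess; the projection onto a finite-dimensional subspace is omitted, and the test problem, names, and parameter values are assumptions for illustration:

```python
import numpy as np

def two_step_newton_lavrentiev(F, J, x0, y_delta, alpha, iters=30):
    """Two Newton-like substeps per iteration for the Lavrentiev-regularized
    equation F(x) + alpha*(x - x0) = y_delta, assuming F monotone; the
    linearization R is frozen at x0 and reused by both substeps."""
    R = J(x0) + alpha * np.eye(x0.size)
    x = x0.copy()
    for _ in range(iters):
        y = x - np.linalg.solve(R, F(x) + alpha * (x - x0) - y_delta)
        x = y - np.linalg.solve(R, F(y) + alpha * (y - x0) - y_delta)
    return x

# Hypothetical mildly nonlinear monotone problem: F(x) = x + 0.1*sin(x)
F = lambda x: x + 0.1 * np.sin(x)
J = lambda x: np.diag(1.0 + 0.1 * np.cos(x))
x0 = np.zeros(2)
y_delta = np.array([1.08, 2.09])   # noisy data for the exact solution (1, 2)
print(two_step_newton_lavrentiev(F, J, x0, y_delta, alpha=1e-2))
```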
B.T. Polyak (2004)
Journal of Mathematical Sciences (New York)
Similarity:
Ioannis K. Argyros, Santhosh George (2015)
Applicationes Mathematicae
Similarity:
We present a local convergence analysis of inexact Newton-like methods for solving nonlinear equations. Using majorant conditions more precise than those in earlier studies, we obtain a larger convergence radius, tighter error estimates on the distances involved, and a clearer relationship between the majorant function and the associated least squares problem. Moreover, these advantages are achieved at the same computational cost.
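For concreteness, a small sketch of an inexact Newton iteration in which each linearized system is solved only to a relative residual eta (the forcing term) by conjugate gradients on the normal equations, matching the least-squares flavor of the analysis; the inner solver, test problem, and parameter choices below are illustrative assumptions:

```python
import numpy as np

def cgnr(A, b, rel_tol, max_iter=50):
    """Conjugate gradients on the normal equations A^T A s = A^T b,
    stopped once ||A s - b|| <= rel_tol * ||b|| (the forcing condition)."""
    s = np.zeros_like(b)
    r = b.copy()            # linear residual b - A s, with s = 0 initially
    z = A.T @ r
    p = z.copy()
    zz = z @ z
    for _ in range(max_iter):
        if np.linalg.norm(r) <= rel_tol * np.linalg.norm(b):
            break
        Ap = A @ p
        a = zz / (Ap @ Ap)
        s += a * p
        r -= a * Ap
        z = A.T @ r
        zz_new = z @ z
        p = z + (zz_new / zz) * p
        zz = zz_new
    return s

def inexact_newton(F, J, x, eta=0.1, tol=1e-10, max_iter=50):
    """Inexact Newton sketch: each step solves J(x) s = -F(x) only up to
    relative accuracy eta, then updates x += s."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + cgnr(J(x), -Fx, rel_tol=eta)
    return x

# Hypothetical test system with root (1, 1): circle x0^2 + x1^2 = 2, line x0 = x1
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(inexact_newton(F, J, np.array([2.0, 0.5])))
```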