The Newton-Kantorovich Method under Mild Differentiability Conditions and the Pták Error Estimates.
The Newton-Mysovskikh theorem provides sufficient conditions for the semilocal convergence of Newton's method to a locally unique solution of an equation in a Banach space setting. It turns out that under weaker hypotheses and a more precise error analysis than before, weaker sufficient conditions can be obtained for the local as well as semilocal convergence of Newton's method. Error bounds on the distances involved as well as a larger radius of convergence are obtained. Some numerical examples...
We provide local and semilocal convergence results for Newton's method when used to solve generalized equations. Using Lipschitz as well as center-Lipschitz conditions on the operators involved instead of just Lipschitz conditions, we show that our Newton-Kantorovich hypotheses are weaker than earlier sufficient conditions for the convergence of Newton's method. In the semilocal case we provide finer error bounds and better information on the location of the solution. In the local case we can provide...
We provide new sufficient convergence conditions for the local and semilocal convergence of Stirling's method to a locally unique solution of a nonlinear operator equation in a Banach space setting. In contrast to earlier results we do not make use of the basic restrictive assumption in [8] that the norm of the Fréchet derivative of the operator involved is strictly bounded above by 1. The study concludes with a numerical example where our results compare favorably with earlier ones.
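For intuition, here is a minimal scalar sketch of a Stirling-type fixed-point iteration; the Banach-space setting of the abstract replaces the scalar reciprocal with an operator inverse, and the choice `F = cos` is purely illustrative. Note that this test function does satisfy |F'| < 1 near the fixed point, i.e. the restrictive assumption of [8] that the abstract's results dispense with:

```python
import math

def stirling(F, dF, x0, tol=1e-12, max_iter=50):
    # Stirling-type iteration for a fixed point x* = F(x*):
    #   x_{n+1} = x_n - (1 - F'(x_n))^{-1} (x_n - F(x_n)).
    # Scalar illustration only; in Banach space (1 - F'(x_n)) is an
    # invertible linear operator rather than a number.
    x = x0
    for _ in range(max_iter):
        step = (x - F(x)) / (1.0 - dF(x))
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative example: the fixed point of cos on the real line.
fixed = stirling(math.cos, lambda x: -math.sin(x), 0.5)
```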
The Newton-Kantorovich hypothesis (15) has been used for a long time as a sufficient condition for convergence of Newton's method to a locally unique solution of a nonlinear equation in a Banach space setting. Recently in [3], [4] we showed that this hypothesis can always be replaced by a condition weaker in general (see (18), (19) or (20)) whose verification requires the same computational cost. Moreover, finer error bounds and at least as precise information on the location of the solution can...
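The numbered hypotheses (15)-(20) refer to the paper itself and are not reproduced here, but the flavor of a Newton-Kantorovich-type check can be shown with a scalar sketch: compute the first Newton step length eta and a scaled Lipschitz constant for f', and verify the classical sufficient condition h = L*eta <= 1/2 before iterating. The example function is illustrative, not from the paper:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    # Plain Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n).
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def kantorovich_h(f, df, x0, lip):
    # h = L * eta, with eta = |f(x0)/f'(x0)| the first Newton step and
    # L = lip / |f'(x0)| a Lipschitz constant for f' scaled by the
    # inverse derivative at x0.  The classical sufficient condition
    # for convergence is h <= 1/2.
    eta = abs(f(x0) / df(x0))
    return (lip / abs(df(x0))) * eta

# Illustration: f(x) = x^2 - 2, whose derivative is Lipschitz with constant 2.
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
x0 = 1.5
h = kantorovich_h(f, df, x0, lip=2.0)   # well below 1/2 here
root = newton(f, df, x0)
```

Weakening h <= 1/2 to a less stringent condition, as in [3], [4], admits starting points x0 that the classical hypothesis rejects while keeping the same computational cost for the check.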
We provide new local and semilocal convergence results for Newton's method. We introduce Lipschitz-type hypotheses on the mth Fréchet derivative. This way we manage to enlarge the radius of convergence of Newton's method. Numerical examples are also provided to show that our results guarantee convergence where others do not.
We present a local and a semilocal analysis for Newton-like methods in a Banach space. Our hypotheses on the operators involved are very general. It turns out that by choosing special cases for the "majorizing" functions we obtain all previous results in the literature, but not vice versa. Since our results give a deeper insight into the structure of the functions involved, we can obtain semilocal convergence under weaker conditions and in the case of local convergence a larger convergence radius....
We provide local convergence theorems for the convergence of Newton's method to a solution of an equation in a Banach space utilizing only information at one point. It turns out that for analytic operators the convergence radius for Newton's method is enlarged compared with earlier results. A numerical example is also provided that compares our results favorably with earlier ones.
We answer a question posed by Cianciaruso and De Pascale: What is the exact size of the gap between the semilocal convergence domains of the Newton and the modified Newton method? In particular, is it possible to close it? Our answer is yes in some cases. Using some ideas of ours and more precise error estimates we provide a semilocal convergence analysis for both methods with the following advantages over earlier approaches: weaker hypotheses; finer error bounds on the distances involved, and at...
The Newton-Kantorovich approach and the majorant principle are used to provide new local and semilocal convergence results for Newton-like methods using outer or generalized inverses in a Banach space setting. Using the same conditions as before, we provide more precise information on the location of the solution and on the error bounds on the distances involved. Moreover since our Newton-Kantorovich-type hypothesis is weaker than before, we can cover cases where the original Newton-Kantorovich...
We present a local and a semi-local convergence analysis of an iterative method for approximating zeros of derivatives for solving univariate and unconstrained optimization problems. In the local case, the radius of convergence is obtained, whereas in the semi-local case, sufficient convergence criteria are presented. Numerical examples are also provided.
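The abstract does not specify the iterative method, but the standard way to approximate zeros of derivatives in univariate unconstrained optimization is to apply a Newton-type step to g' itself; the following scalar sketch and the objective g(x) = x^4/4 - x are illustrative assumptions:

```python
def newton_on_derivative(dg, d2g, x0, tol=1e-12, max_iter=50):
    # Find a critical point of g (a zero of g') via
    #   x_{n+1} = x_n - g'(x_n) / g''(x_n).
    x = x0
    for _ in range(max_iter):
        step = dg(x) / d2g(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustration: g(x) = x^4/4 - x, so g'(x) = x^3 - 1 and g''(x) = 3x^2,
# with unique minimizer x* = 1.
xstar = newton_on_derivative(lambda x: x ** 3 - 1.0,
                             lambda x: 3.0 * x * x, 2.0)
```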
Using a weaker version of the Newton-Kantorovich theorem, we provide a discretization result to find finite element solutions of elliptic boundary value problems. Our hypotheses are weaker and under the same computational cost lead to finer estimates on the distances involved and more precise information on the location of the solution than before.
We provide a local as well as a semilocal convergence analysis for Newton's method to approximate a locally unique solution of an equation in a Banach space setting. Using a combination of center-gamma with a gamma-condition, we obtain an upper bound on the inverses of the operators involved which can be more precise than those given in the elegant works by Smale, Wang, and Zhao and Wang. This observation leads (under the same or less computational cost) to a convergence analysis with the following...
We provide new sufficient conditions for the convergence of the secant method to a locally unique solution of a nonlinear equation in a Banach space. Our new idea uses “Lipschitz-type” and center-“Lipschitz-type” instead of just “Lipschitz-type” conditions on the divided difference of the operator involved. It turns out that this way our error bounds are more precise than the earlier ones and under our convergence hypotheses we can cover cases where the earlier conditions are violated.
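The divided difference the abstract's conditions are imposed on is the quantity that replaces the derivative in the secant iteration. A minimal scalar sketch (the test function is illustrative; the paper works with operators between Banach spaces):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=60):
    # Secant iteration using the divided difference
    #   [x_{n-1}, x_n; f] = (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1})
    # in place of the derivative:
    #   x_{n+1} = x_n - f(x_n) / [x_{n-1}, x_n; f].
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        dd = (f1 - f0) / (x1 - x0)   # first-order divided difference
        x2 = x1 - f1 / dd
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# Illustration: the positive root of f(x) = x^2 - 2.
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```

Lipschitz-type and center-Lipschitz-type conditions are assumptions on how this divided difference varies with its two arguments; measuring the variation from a fixed center, rather than between arbitrary pairs, is what yields the sharper constants.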
We provide local convergence theorems for Newton’s method in Banach space using outer or generalized inverses. In contrast to earlier results we use hypotheses on the second instead of the first Fréchet derivative. This way our convergence balls differ from earlier ones. In fact we show with a simple numerical example that our convergence ball contains earlier ones. This way we have a wider choice of initial guesses than before. Our results can be used to solve underdetermined systems, nonlinear...