A new convergence theorem for Steffensen's method on Banach spaces and applications.
A new Kantorovich-type convergence theorem for Newton's method is established for approximating a locally unique solution of an equation F(x)=0 defined on a Banach space. It is assumed that the operator F is twice Fréchet differentiable and that F', F'' satisfy Lipschitz conditions. Our convergence condition differs from earlier ones and therefore has theoretical and practical value.
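As a hedged, purely illustrative sketch (a finite-dimensional analogue of the Banach-space setting; the test function, its Lipschitz constant, and the classical Kantorovich test shown for comparison are assumptions chosen for the example, not the paper's new condition), Newton's method together with a convergence check of this type can look as follows:

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration x_{k+1} = x_k - F'(x_k)^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

def kantorovich_check(F, J, x0, K):
    """Classical Kantorovich test h = beta*K*eta <= 1/2, shown only for
    comparison; the paper's new condition (built on F' and F'') differs."""
    x0 = np.asarray(x0, dtype=float)
    J0_inv = np.linalg.inv(J(x0))
    beta = np.linalg.norm(J0_inv)              # upper bound on ||F'(x0)^{-1}||
    eta = np.linalg.norm(J0_inv @ F(x0))       # ||F'(x0)^{-1} F(x0)||
    return beta * K * eta <= 0.5

# Toy example: F(x) = x^2 - 2, so F' is Lipschitz with constant K = 2.
F = lambda x: np.array([x[0] ** 2 - 2.0])
J = lambda x: np.array([[2.0 * x[0]]])
print(kantorovich_check(F, J, [1.5], K=2.0), newton(F, J, [1.5]))
```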
We give a derivation of an a posteriori strategy for choosing the regularization parameter in Tikhonov regularization for solving nonlinear ill-posed problems, which leads to optimal convergence rates. This strategy requires a special stability estimate for the regularized solutions. A new proof for this stability estimate is given.
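A hedged, purely illustrative sketch of the setting (the forward operator, the noise model, and the simple discrepancy-type rule below are assumptions for the example, not the strategy derived in the paper):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical nonlinear forward operator F(x) = exp(A x) (componentwise);
# everything in this example is illustrative and not taken from the paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) / 5.0
F = lambda x: np.exp(A @ x)

x_true = np.linspace(0.5, 1.0, 5)
delta = 1e-2                                   # assumed noise level
noise = rng.standard_normal(20)
y_delta = F(x_true) + delta * noise / np.linalg.norm(noise)

def tikhonov(alpha, x_prior):
    """Minimize ||F(x) - y_delta||^2 + alpha * ||x - x_prior||^2."""
    residual = lambda x: np.concatenate([F(x) - y_delta,
                                         np.sqrt(alpha) * (x - x_prior)])
    return least_squares(residual, x_prior).x

# Generic discrepancy-type a posteriori rule (a common stand-in, not the
# paper's rule): decrease alpha until ||F(x_alpha) - y_delta|| <= tau * delta.
tau, alpha, x_prior = 1.5, 1.0, np.zeros(5)
for _ in range(40):                            # safeguard on the search
    x_alpha = tikhonov(alpha, x_prior)
    if np.linalg.norm(F(x_alpha) - y_delta) <= tau * delta:
        break
    alpha *= 0.5
print("chosen alpha:", alpha, " reconstruction:", x_alpha)
```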
We propose a penalty approach for a box constrained variational inequality problem. The problem is replaced by a sequence of nonlinear equations containing a penalty term. We show that, as the penalty parameter tends to infinity, the solutions of this sequence converge to the solution of the original variational inequality, provided the function involved is continuous and strongly monotone and the box contains the origin. We develop the algorithmic aspects and establish the supporting theoretical arguments. The numerical results tested on...
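A minimal sketch of the penalty idea under assumptions made only for this example (a hypothetical strongly monotone affine map and the box [0,1]^2; the paper's solver and test problems are not reproduced here):

```python
import numpy as np
from scipy.optimize import fsolve

# The box constrained variational inequality VI(l, u, F) is replaced by the
# penalized nonlinear equation
#     F(x) + rho * ( (x - u)_+ - (l - x)_+ ) = 0,
# whose solutions approach the VI solution as the penalty parameter rho grows.
B = np.array([[4.0, 1.0], [1.0, 3.0]])         # symmetric positive definite
b = np.array([-6.0, -5.0])
F = lambda x: B @ x + b                        # continuous, strongly monotone
l, u = np.zeros(2), np.ones(2)                 # box [0, 1]^2, containing the origin

def penalized(x, rho):
    return F(x) + rho * (np.maximum(x - u, 0.0) - np.maximum(l - x, 0.0))

x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:         # penalty parameter driven upwards
    x = fsolve(penalized, x, args=(rho,))      # previous solution warm-starts the next solve
    print(f"rho = {rho:7.1f}   x = {x}")       # iterates approach the VI solution (here [1, 1])
```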
The time-dependent Stokes equations in two- or three-dimensional bounded domains are discretized by the backward Euler scheme in time and finite elements in space. The error of this discretization is bounded globally from above and locally from below by the sum of two types of computable error indicators, the first one being linked to the time discretization and the second one to the space discretization.
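As a hedged sketch of the structure of such results (generic notation, not the paper's precise indicators), the global upper bound typically takes the form

```latex
\| u - u_{h\tau} \|_{X} \;\lesssim\;
\Big( \sum_{n=1}^{N} \big( (\eta_{\tau}^{n})^{2} + (\eta_{h}^{n})^{2} \big) \Big)^{1/2},
```

where the time indicator on the n-th step is built from consecutive discrete solutions, the space indicator is a residual-based quantity summed over the elements of the mesh, and a matching local lower bound holds up to data oscillation.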
Shape optimization consists in finding the geometry of a structure that is optimal in the sense of minimizing a cost function subject to certain constraints. A mesh independence principle for Newton's method was used very efficiently to solve a certain class of optimal design problems in [6]. Here, motivated by optimization considerations, we show that, at the same computational cost, an even finer mesh independence principle can be given.
In order to save CPU time when solving large systems of equations in function spaces, we decompose the large system into subsystems and solve the subsystems by an appropriate method. We give a sufficient condition for the convergence of the corresponding procedure and apply the approach to differential algebraic systems.
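A minimal sketch of the decomposition idea (a generic Gauss-Seidel-type splitting of a hypothetical, weakly coupled system into two subsystems; the paper's sufficient condition and its application to differential algebraic systems are not reproduced here):

```python
import numpy as np
from scipy.optimize import fsolve

# The coupled system F1(x, y) = 0, F2(x, y) = 0 is decomposed into two
# subsystems that are solved alternately, each for its own unknown while the
# other unknown is frozen at its latest value.  Convergence of such sweeps is
# typically guaranteed by a contraction (weak-coupling) condition.
F1 = lambda x, y: x**3 + x - 1.0 + 0.1 * y     # subsystem 1 (hypothetical)
F2 = lambda x, y: np.tanh(y) + y - 0.2 * x     # subsystem 2 (hypothetical)

x, y = 0.0, 0.0
for sweep in range(20):                        # outer iteration over the subsystems
    x = fsolve(lambda s: F1(s, y), x)[0]       # solve subsystem 1 for x
    y = fsolve(lambda s: F2(x, s), y)[0]       # solve subsystem 2 for y
print(x, y, F1(x, y), F2(x, y))                # residuals should be near zero
```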
We provide a local as well as a semilocal convergence analysis for Newton's method using unifying hypotheses on twice Fréchet-differentiable operators in a Banach space setting. Our approach extends the applicability of Newton's method. Numerical examples are also provided.
The Newton-Mysovskikh theorem provides sufficient conditions for the semilocal convergence of Newton's method to a locally unique solution of an equation in a Banach space setting. It turns out that under weaker hypotheses and a more precise error analysis than before, weaker sufficient conditions can be obtained for the local as well as semilocal convergence of Newton's method. Error bounds on the distances involved as well as a larger radius of convergence are obtained. Some numerical examples...