Displaying similar documents to “Descent methods for convex optimization problems in Banach spaces.”

A nonsmooth version of the univariate optimization algorithm for locating the nearest extremum (locating extremum in nonsmooth univariate optimization)

Marek Smietanski (2008)

Open Mathematics


An algorithm for univariate optimization using a linear lower bounding function is extended to the nonsmooth case by using the generalized gradient instead of the derivative. A convergence theorem is proved under the condition of semismoothness. This approach yields global superlinear convergence of the algorithm, which is a generalized Newton-type method.
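The flavor of such a method can be illustrated with a minimal sketch (not the paper's algorithm): a generalized-Newton iteration for minimizing a convex nonsmooth univariate function, where an element of the generalized gradient replaces the classical derivative. The objective f(x) = |x - 1| + x², its subgradient selection, and the constant second-derivative element are all illustrative assumptions.

```python
# Illustrative sketch: generalized-Newton iteration for the convex
# nonsmooth function f(x) = |x - 1| + x**2 (nondifferentiable at x = 1).
# The minimizer is x = 0.5, where 0 lies in the generalized gradient.

def gen_grad(x):
    # one element of the Clarke generalized gradient of f at x
    sub = 1.0 if x > 1 else (-1.0 if x < 1 else 0.0)  # selection from d|x-1|
    return 2.0 * x + sub

def nonsmooth_newton(x, tol=1e-10, max_iter=50):
    # Newton-type step: divide by an element (here 2.0, from the x**2 term)
    # of the generalized Jacobian of gen_grad
    for _ in range(max_iter):
        g = gen_grad(x)
        if abs(g) <= tol:
            return x
        x = x - g / 2.0
    return x

x_star = nonsmooth_newton(3.0)  # converges to 0.5 in a few steps
```

The iteration reaches the exact minimizer here because the smooth part of f is quadratic; in general, semismoothness of the generalized gradient is what underwrites superlinear convergence.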

Averaging approach to distributed convex optimization for continuous-time multi-agent systems

Wei Ni, Xiaoli Wang (2016)



Recently, distributed convex optimization has received much attention from many researchers. Current research on this problem mainly focuses on fixed network topologies, with little attention to switching ones. This paper establishes a new technique, called the averaging-based approach, to design a continuous-time distributed algorithm for convex optimization problems under switching topologies. The idea of using averaging was proposed in our earlier works for the consensus problem...
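To give a feel for continuous-time distributed optimization (this is a generic consensus-plus-gradient flow on a fixed graph, not the paper's averaging-based design), here is a toy sketch. The quadratic local costs f_i(x) = (x - c_i)²/2, the ring topology, the coupling gain k, and the Euler discretization are all illustrative assumptions.

```python
# Toy sketch: each agent i follows the continuous-time flow
#   dx_i/dt = -(x_i - c_i) + k * sum_{j in N(i)} (x_j - x_i),
# simulated by forward Euler. The global optimum of sum_i f_i is mean(c).

c = [0.0, 3.0, 6.0]                      # local minimizers; global optimum = 3
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
k, dt, steps = 50.0, 0.005, 5000

x = [10.0, -4.0, 1.0]                    # arbitrary initial states
for _ in range(steps):
    grads = [x[i] - c[i] for i in range(3)]                         # grad f_i
    coupling = [sum(x[j] - x[i] for j in neighbors[i]) for i in range(3)]
    x = [x[i] + dt * (-grads[i] + k * coupling[i]) for i in range(3)]

avg = sum(x) / 3.0                       # network average tracks the optimum
```

Because the coupling terms cancel when summed over agents, the network average obeys d/dt avg = -(avg - mean(c)) and converges to the global minimizer exactly, while a large gain k keeps the disagreement between agents small.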

Prox-regularization and solution of ill-posed elliptic variational inequalities

Alexander Kaplan, Rainer Tichatschke (1997)

Applications of Mathematics


In this paper new methods for solving elliptic variational inequalities with weakly coercive operators are considered. The use of the iterative prox-regularization coupled with a successive discretization of the variational inequality by means of a finite element method ensures well-posedness of the auxiliary problems and strong convergence of their approximate solutions to a solution of the original problem. In particular, regularization on the kernel of the differential operator and...
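The core mechanism, regularizing an ill-posed problem so each auxiliary problem is uniquely solvable, can be sketched in finite dimensions (a stand-in for the elliptic setting, not the paper's method). The objective f(x, y) = (x + y - 2)², whose minimizers form a whole line, the prox parameter chi, and the inner gradient-descent solver are all illustrative assumptions.

```python
# Sketch of iterative prox-regularization: the weakly coercive objective
# f(x, y) = (x + y - 2)**2 has a whole line of minimizers (ill-posed).
# Adding (chi/2) * ||z - z_k||^2 makes each subproblem strongly convex,
# hence uniquely solvable, and the iterates converge to one minimizer.

def prox_step(xk, yk, chi=1.0, inner_iters=200, lr=0.1):
    # solve min_z f(z) + (chi/2)*||z - z_k||^2 by gradient descent
    x, y = xk, yk
    for _ in range(inner_iters):
        gx = 2.0 * (x + y - 2.0) + chi * (x - xk)
        gy = 2.0 * (x + y - 2.0) + chi * (y - yk)
        x, y = x - lr * gx, y - lr * gy
    return x, y

x, y = 5.0, -1.0                 # starting point off the solution line
for _ in range(30):              # outer prox-regularization loop
    x, y = prox_step(x, y)
```

For this quadratic example the proximal iterates converge to the orthogonal projection of the starting point onto the solution line x + y = 2, here (4, -2); the regularization selects one well-defined solution out of the unbounded solution set.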