Descent methods for convex optimization problems in Banach spaces.
2000 Mathematics Subject Classification: 90C48, 49N15, 90C25

In this paper we reconsider a nonconvex duality theory established by B. Lemaire and M. Volle (see [4]), related to a primal problem of minimizing the difference of two convex functions subject to a DC constraint. The purpose of this note is to present a new method based on the Toland-Singer duality principle. Applications to the case of vector-valued constraints are provided.
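For orientation, the Toland-Singer duality principle invoked above states, in its classical unconstrained form (the constrained DC setting of the paper refines this), that for a proper, convex, lower semicontinuous function \(h\) and a proper function \(g\),
\[
\inf_{x \in X} \bigl( g(x) - h(x) \bigr) \;=\; \inf_{x^* \in X^*} \bigl( h^*(x^*) - g^*(x^*) \bigr),
\]
where \(f^*(x^*) = \sup_{x \in X} \{ \langle x^*, x \rangle - f(x) \}\) denotes the Fenchel conjugate.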
Supervised learning methods are powerful techniques for learning a function from a given set of labeled data, the so-called training data. In this paper the support vector machines approach is applied to an image classification task. Starting with the corresponding Tikhonov regularization problem, reformulated as a convex optimization problem, we introduce its conjugate dual problem and prove that, whenever strong duality holds, the function to be learned can be expressed via the dual optimal solutions....
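As a point of reference (a standard kernel-based formulation; the exact problem treated in the paper may differ), the Tikhonov regularization problem for a loss \(v\) over training data \((x_i, y_i)_{i=1}^{n}\) in a reproducing kernel Hilbert space \(\mathcal{H}_k\) reads
\[
\min_{f \in \mathcal{H}_k} \; \frac{1}{n} \sum_{i=1}^{n} v\bigl(y_i, f(x_i)\bigr) + \lambda \, \|f\|_{\mathcal{H}_k}^2 ,
\]
and duality-based representations of the learned function typically take the kernel-expansion form \(f(x) = \sum_{i=1}^{n} c_i \, k(x_i, x)\), with the coefficients \(c_i\) determined by the dual optimal solutions.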
In this paper, a distributed optimal consensus problem is investigated, the goal being to minimize the sum of the local cost functions of a group of agents whose dynamics are in Euler-Lagrange (EL) form. We assume that the local cost function of each agent is known only to itself and cannot be shared with others, which makes this distributed optimization problem challenging. A novel gradient-based distributed continuous-time algorithm involving the parameters of the EL system is proposed, which takes the distributed...
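For context, a generic first-order sketch over an undirected, connected communication graph with adjacency weights \(a_{ij}\) (this is the standard distributed PI-type gradient flow, not the authors' EL-specific algorithm) combines a local gradient term with consensus and integral couplings:
\[
\dot{x}_i = -\nabla f_i(x_i) - \sum_{j=1}^{N} a_{ij}\,(x_i - x_j) - v_i, \qquad
\dot{v}_i = \sum_{j=1}^{N} a_{ij}\,(x_i - x_j), \qquad i = 1, \dots, N.
\]
With the initialization \(\sum_{i} v_i(0) = 0\), any equilibrium satisfies consensus \(x_1 = \dots = x_N = \bar{x}\) together with the stationarity condition \(\sum_{i} \nabla f_i(\bar{x}) = 0\) for the sum of the local cost functions.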
We prove that, under some topological assumptions (e.g. if M has nonempty interior in X), a convex cone M in a linear topological space X is a linear subspace if and only if every convex functional on M has a convex extension to the whole space X.
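A one-dimensional example illustrates the nontrivial direction: on the convex cone \(M = [0, \infty) \subset X = \mathbb{R}\), which is not a linear subspace, the functional \(f(x) = -\sqrt{x}\) is convex on \(M\), yet it admits no convex extension to \(\mathbb{R}\), since its right difference quotients at \(0\) tend to \(-\infty\), while a convex extension would have to bound them below by a finite slope from the left of \(0\).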