Necessary and sufficient optimality conditions for two-stage stochastic programming problems
In this paper we present second-order necessary conditions for constrained vector optimization problems in infinite-dimensional spaces, thereby generalizing some corresponding results obtained earlier.
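The abstract does not state the conditions themselves; as an illustrative sketch, the classical scalar analogue that such vector-valued results generalize reads as follows (our notation, not the paper's):

```latex
% At a local minimizer x^* of a smooth f over a feasible set,
% for every critical direction d (feasible with \nabla f(x^*)^T d = 0):
\nabla f(x^*)^T d \ge 0, \qquad d^T \nabla^2 f(x^*)\, d \ge 0 .
```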
The supervised learning process of multilayer feedforward neural networks can be considered a class of multi-objective, multi-stage optimal control problems. An iterative parametric minimax method is proposed in which the original optimization problem is embedded into a weighted minimax formulation. The resulting auxiliary parametric optimization problems at the lower level have a simple structure and are readily tackled by efficient solution methods, such as dynamic programming or the error...
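As a hedged sketch of what a weighted minimax embedding of a multi-objective problem can look like (the weights w_i and objectives f_i are generic placeholders, not the paper's notation):

```latex
% Generic weighted minimax embedding of objectives f_1, ..., f_m:
\min_{x} \; \max_{1 \le i \le m} \; w_i \, f_i(x),
\qquad w_i > 0, \quad \sum_{i=1}^{m} w_i = 1,
% iterated over the weight parameters w at the upper level, with each
% fixed-w auxiliary problem solved at the lower level.
```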
The coupled tasks scheduling problem is a class of scheduling problems introduced for the beam steering software of sophisticated radar devices called phased arrays. Due to the increasing popularity of such radars, the importance of coupled tasks scheduling is growing steadily. Unfortunately, most coupled tasks problems are NP-hard, and only a few practically usable algorithms for them have been found. This paper provides a survey of known complexity results for various variants of...
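For orientation, the standard single-machine model underlying this literature is sketched below (the notation is ours, not the survey's): each task j consists of two operations of lengths a_j and b_j that must be separated by an exact gap L_j.

```latex
% Coupled task j = (a_j, L_j, b_j): if its first operation starts at s_j,
% its second operation must start exactly at s_j + a_j + L_j, and
% operations of distinct tasks may not overlap on the machine.
C_j = s_j + a_j + L_j + b_j, \qquad \min \; C_{\max} = \max_j C_j .
```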
A full Nesterov-Todd step infeasible interior-point algorithm is proposed for solving linear programming problems over symmetric cones by using the Euclidean Jordan algebra. Using a new approach, we also provide a search direction and show that the iteration bound coincides with the best known bound for infeasible interior-point methods.
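As a brief reference, a symmetric cone linear program and the perturbed central-path conditions that full Nesterov-Todd step methods typically target can be written as follows (generic notation, not necessarily the paper's):

```latex
% Primal-dual pair over a symmetric cone K with identity element e:
\min \; \langle c, x \rangle \;\; \text{s.t.} \;\; Ax = b, \; x \in K,
\qquad
\max \; b^T y \;\; \text{s.t.} \;\; A^* y + s = c, \; s \in K;
% perturbed complementarity defining the central path
% (with \circ the Jordan product of the Euclidean Jordan algebra):
x \circ s = \mu e, \qquad \mu > 0 .
```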
This paper provides KKT and saddle point optimality conditions, duality theorems and stability theorems for consistent convex optimization problems posed in locally convex topological vector spaces. The feasible sets of these optimization problems are formed by those elements of a given closed convex set which satisfy a (possibly infinite) convex system. Moreover, all the involved functions are assumed to be convex, lower semicontinuous and proper (but not necessarily real-valued). The key result...
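As a hedged sketch of the kind of KKT condition involved, consider min f(x) subject to x in C and g_t(x) ≤ 0 for t in a possibly infinite index set T; the generic form with finitely supported multipliers reads (this is the usual statement in this setting, not a quotation from the paper):

```latex
% \lambda \in \mathbb{R}^{(T)}_+ has only finitely many nonzero entries;
% N_C denotes the normal cone to the closed convex set C:
0 \in \partial f(\bar{x}) + \sum_{t \in T} \lambda_t \, \partial g_t(\bar{x})
      + N_C(\bar{x}),
\qquad \lambda_t \, g_t(\bar{x}) = 0 \;\; \forall t \in T .
```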
The conjugate gradient method is one of the most effective algorithms for unconstrained nonlinear optimization problems, owing to its modest storage requirements and simple structure. These properties motivate us to propose a new hybrid conjugate gradient method defined through a convex combination of two existing conjugate gradient methods. We compute the convex parameter using the Newton direction. Global convergence is established under the strong Wolfe conditions. Numerical experiments show the...
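As a minimal sketch of a hybrid direction built from a convex combination of two classical update parameters (Fletcher-Reeves and Polak-Ribière serve purely as stand-ins here, since the abstract's two methods are not named; `theta` plays the role of the convex parameter, and a constant step replaces the strong Wolfe line search for brevity):

```python
import numpy as np

def hybrid_cg_direction(g, g_prev, d_prev, theta):
    """Hybrid CG direction d = -g + beta * d_prev, with beta a convex
    combination of two classical parameters (FR and PR as stand-ins)."""
    beta_fr = g @ g / (g_prev @ g_prev)             # Fletcher-Reeves
    beta_pr = g @ (g - g_prev) / (g_prev @ g_prev)  # Polak-Ribiere
    beta = (1.0 - theta) * beta_fr + theta * beta_pr
    return -g + beta * d_prev

# Demo: minimize f(x) = 0.5 x^T A x - b^T x with a fixed step size
# (in practice a strong Wolfe line search would choose the step).
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
g = A @ x - b
d = -g
for _ in range(50):
    x = x + 0.1 * d
    g_new = A @ x - b
    d = hybrid_cg_direction(g_new, g, d, theta=0.5)  # theta in [0, 1]
    g = g_new
print(x, np.linalg.solve(A, b))  # iterate vs. exact minimizer
```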
Using point-to-set mappings, we identify two new regions of stability in input optimization. We then extend various results from the literature on optimality conditions, continuity of Lagrange multipliers, and the marginal value formula over the new and some old regions of stability.
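For context, the marginal value formula in question is the classical sensitivity result for a parametric program, stated here in its smooth finite-dimensional form as a hedged illustration (the paper works in a more general point-to-set setting):

```latex
% Parametric program: \varphi(\theta) = \min_x f(x,\theta)
% s.t. g_i(x,\theta) \le 0, \; i = 1, \dots, m, with Lagrangian
% L(x,\lambda,\theta) = f(x,\theta) + \sum_i \lambda_i g_i(x,\theta);
% on a region of stability the optimal value varies as
\nabla_\theta \varphi(\theta)
  = \nabla_\theta L\bigl(x^*(\theta), \lambda^*(\theta), \theta\bigr).
```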
In this paper, we show that the direct semidefinite programming (SDP) bound for the nonconvex quadratic optimization problem over the ℓ1 unit ball (QPL1) is equivalent to the optimal d.c. (difference of convex) bound for the standard quadratic programming reformulation of QPL1. Then we disprove a conjecture about the tightness of the direct SDP bound. Finally, as an extension of QPL1, we study the relaxation of the sparse principal component analysis problem, denoted by QPL2L1. We show that the...
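For reference, QPL1 and the usual form of its direct SDP relaxation via the lifting X = xx^T can be written as follows (our sketch of the standard formulation, not a quotation from the paper):

```latex
% \|x\|_1 \le 1 implies \sum_{i,j} |x_i x_j| = \|x\|_1^2 \le 1,
% which motivates the lifted constraint on X:
\text{(QPL1)} \quad \min \; x^T Q x \;\; \text{s.t.} \;\; \|x\|_1 \le 1;
\qquad
\text{(SDP)} \quad \min \; \langle Q, X \rangle \;\; \text{s.t.} \;\;
\textstyle\sum_{i,j} |X_{ij}| \le 1, \;\; X \succeq 0 .
```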