Necessary and sufficient optimality conditions for two-stage stochastic programming problems
Vlasta Kaňková (1989)
Kybernetika
Jaroslav Doležal (1981)
Kybernetika
Marie Dvorská, Karel Pastor (2015)
Kybernetika
In the paper we present second-order necessary conditions for constrained vector optimization problems in infinite-dimensional spaces. In this way we generalize some corresponding results obtained earlier.
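For orientation, a standard finite-dimensional scalar instance of such a second-order necessary condition (a hedged illustration only; the paper treats vector problems in infinite-dimensional spaces) reads as follows: if x* is a local minimizer of f subject to g(x) <= 0 and (x*, lambda*) is a Karush-Kuhn-Tucker pair obtained under a suitable constraint qualification, then
\[
d^{\top}\,\nabla_{xx}^{2} L(x^{*},\lambda^{*})\, d \;\ge\; 0
\qquad \text{for every critical direction } d,
\]
where \(L(x,\lambda) = f(x) + \lambda^{\top} g(x)\) denotes the Lagrangian.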
Jaroslav Doležal (1975)
Kybernetika
A. Friedlander, J. M. Martinez (1992)
RAIRO - Operations Research - Recherche Opérationnelle
J-B. Hiriart-Urruty (1979)
Mémoires de la Société Mathématique de France
José Mario Martínez, Sandra Augusta Santos (1997)
RAIRO - Operations Research - Recherche Opérationnelle
Youcef Elhamam Hemici, Samia Khelladi, Djamel Benterki (2024)
Kybernetika
The conjugate gradient method is one of the most effective algorithms for unconstrained nonlinear optimization problems, owing to its modest storage requirements and simple structure. This motivates us to propose a new hybrid conjugate gradient method through a convex combination of and . The convex parameter is computed using the Newton direction. Global convergence is established under the strong Wolfe conditions. Numerical experiments show the...
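A minimal sketch of such a hybrid conjugate gradient iteration is given below. The two component formulas (Hestenes-Stiefel and Dai-Yuan here) and the fixed mixing parameter theta are purely hypothetical stand-ins: the excerpt does not name the combined formulas, and the paper computes the convex parameter from the Newton direction rather than fixing it. The strong Wolfe conditions are enforced via SciPy's line_search.

```python
# Hedged sketch of a hybrid nonlinear conjugate gradient method: the search
# direction mixes two classical beta formulas by a convex combination.
import numpy as np
from scipy.optimize import line_search

def hybrid_cg(f, grad, x0, theta=0.5, tol=1e-6, max_iter=200):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]   # strong Wolfe line search
        if alpha is None:                    # line search failed; restart
            alpha, d = 1e-4, -g
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta_hs = g_new @ y / max(d @ y, 1e-12)           # Hestenes-Stiefel
        beta_dy = g_new @ g_new / max(d @ y, 1e-12)       # Dai-Yuan
        beta = theta * beta_hs + (1.0 - theta) * beta_dy  # convex combination
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# usage: minimize the Rosenbrock function
from scipy.optimize import rosen, rosen_der
print(hybrid_cg(rosen, rosen_der, np.array([-1.2, 1.0])))
```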
Sheng Huang, Sanjo Zlobec (1988)
Aplikace matematiky
Using point-to-set mappings, we identify two new regions of stability in input optimization. We then extend various results from the literature on optimality conditions, continuity of Lagrange multipliers, and the marginal value formula over the new and some previously known regions of stability.
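As a point of reference, the classical marginal value (envelope) formula for a smooth, finite-dimensional parametric program min_x f(x, theta) subject to g(x, theta) <= 0 can be stated as follows (a hedged illustration; the paper works with point-to-set mappings over regions of stability, where such results are extended). Under suitable regularity,
\[
\frac{d}{d\theta}\, f^{*}(\theta) \;=\; \frac{\partial L}{\partial \theta}\bigl(x^{*}(\theta),\lambda^{*}(\theta),\theta\bigr),
\]
where \(f^{*}(\theta)\) is the optimal value, \(L(x,\lambda,\theta) = f(x,\theta) + \lambda^{\top} g(x,\theta)\) is the Lagrangian, and \((x^{*}(\theta),\lambda^{*}(\theta))\) is a corresponding Karush-Kuhn-Tucker pair.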
Djamel Aaid, Amel Noui, Mohand Ouanes (2017)
Archivum Mathematicum
In this paper, a new global optimization method is proposed for an optimization problem with a twice differentiable objective function of a single variable subject to a box constraint. The method employs the difference between a linear interpolant of the objective and a concave function, which forms a continuous piecewise convex quadratic underestimator. The main objective of this research is to determine the value of the lower bound without requiring an iterative local optimizer. The proposed method...
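The sketch below illustrates the general branch-and-bound pattern behind such a method, assuming only a user-supplied curvature bound m <= f'' on the box. The simple quadratic underestimator used here is a stand-in and not the authors' piecewise construction; its minimum over each subinterval is available in closed form, so no iterative local optimizer is needed for the bound.

```python
# Hedged branch-and-bound sketch for global minimization of a twice
# differentiable function of one variable on [a, b], with lower bounds from a
# quadratic underestimator built from a curvature bound m <= f'' on [a, b].
import heapq
import math

def bb_minimize(f, df, a, b, m, tol=1e-6, max_nodes=10_000):
    def lower_bound(lo, hi):
        # q(x) = f(lo) + f'(lo)(x-lo) + m/2 (x-lo)^2 underestimates f on [lo, hi];
        # its minimum over the interval is attained at an endpoint or the vertex.
        flo, dlo = f(lo), df(lo)
        candidates = [lo, hi]
        if m > 0:
            candidates.append(min(max(lo - dlo / m, lo), hi))  # clipped vertex
        return min(flo + dlo * (x - lo) + 0.5 * m * (x - lo) ** 2 for x in candidates)

    best_x, best_f = (a, f(a)) if f(a) <= f(b) else (b, f(b))
    heap = [(lower_bound(a, b), a, b)]
    nodes = 0
    while heap and nodes < max_nodes:
        lb, lo, hi = heapq.heappop(heap)
        nodes += 1
        if lb > best_f - tol:
            continue                          # prune: cannot improve the incumbent
        mid = 0.5 * (lo + hi)
        if f(mid) < best_f:
            best_x, best_f = mid, f(mid)      # update incumbent
        for left, right in ((lo, mid), (mid, hi)):
            heapq.heappush(heap, (lower_bound(left, right), left, right))
    return best_x, best_f

# usage: a multimodal test function on [0, 10]; f'' = -sin(x) + 0.2 >= -1
f = lambda x: math.sin(x) + 0.1 * (x - 5) ** 2
df = lambda x: math.cos(x) + 0.2 * (x - 5)
print(bb_minimize(f, df, 0.0, 10.0, m=-1.0))
```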
Yan Gao (2001)
Applications of Mathematics
The paper is devoted to two systems of nonsmooth equations. One is the system of equations of max-type functions and the other is the system of equations of smooth compositions of max-type functions. The Newton and approximate Newton methods for these two systems are proposed. The Q-superlinear convergence of the Newton methods and the Q-linear convergence of the approximate Newton methods are established. The present methods can be more easily implemented than the previous ones, since they do not...
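A minimal sketch of a generalized Newton step for a system of max-type equations F_i(x) = max_j f_ij(x) = 0 is shown below. At each iterate one active function per row is selected and its gradient serves as the corresponding row of the generalized Jacobian; this illustrates the idea only and omits the safeguards and convergence analysis of the paper.

```python
# Generalized Newton method for equations of max-type functions (sketch).
import numpy as np

def max_type_newton(funcs, grads, x0, tol=1e-10, max_iter=50):
    """funcs[i] is a list of smooth f_ij; grads[i] the matching gradients."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        F = np.empty(len(funcs))
        J = np.empty((len(funcs), x.size))
        for i, (fi, gi) in enumerate(zip(funcs, grads)):
            vals = [f(x) for f in fi]
            j = int(np.argmax(vals))      # an index attaining the max (active piece)
            F[i] = vals[j]
            J[i] = gi[j](x)               # gradient of the active piece
        if np.linalg.norm(F) < tol:
            break
        x = x - np.linalg.solve(J, F)     # plain Newton step with the chosen element
    return x

# usage: F1(x) = max(x0 + x1 - 1, x0 - x1), F2(x) = max(x0**2 - x1, -x1);
# the solution is the origin.
funcs = [[lambda x: x[0] + x[1] - 1, lambda x: x[0] - x[1]],
         [lambda x: x[0] ** 2 - x[1], lambda x: -x[1]]]
grads = [[lambda x: np.array([1.0, 1.0]), lambda x: np.array([1.0, -1.0])],
         [lambda x: np.array([2 * x[0], -1.0]), lambda x: np.array([0.0, -1.0])]]
print(max_type_newton(funcs, grads, np.array([2.0, 2.0])))
```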
James W. Daniel (1973)
Numerische Mathematik
Lukšan, Ladislav, Vlček, Jan (2015)
Programs and Algorithms of Numerical Mathematics
Modifications of the nonlinear conjugate gradient method are described and tested.
Zhang, Jianguo, Xiao, Yunhai, Wei, Zengxin (2009)
Mathematical Problems in Engineering
Alfonso G. Azpeitia (1987)
Revista colombiana de matematicas
Fridrich Sloboda (1976)
Aplikace matematiky
Leon S. Lasdon, Richard L. Fox, Margery W. Ratner (1974)
RAIRO - Operations Research - Recherche Opérationnelle
T. L. Magnanti (1975)
RAIRO - Operations Research - Recherche Opérationnelle
Richard Andrášik (2013)
Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica
Nonlinear rescaling is a tool for solving large-scale nonlinear programming problems. The primal-dual nonlinear rescaling method was used to solve two quadratic programming problems with quadratic constraints. Based on the performance of the primal-dual nonlinear rescaling method on these test problems, conclusions about setting the parameters are drawn. Next, the connection between nonlinear rescaling methods and self-concordant functions is discussed, and a modified logarithmic barrier function is...
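A hedged, textbook-style sketch of a primal-dual nonlinear rescaling iteration for min f(x) subject to g_i(x) >= 0 follows, using the modified logarithmic barrier psi(t) = log(1 + t). The parameter choices and the inner solver are illustrative assumptions, not the authors' exact algorithm.

```python
# Primal-dual nonlinear rescaling with a modified logarithmic barrier (sketch).
import numpy as np
from scipy.optimize import minimize

def nonlinear_rescaling(f, g, x0, n_con, k=10.0, outer_iters=20, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    lam = np.ones(n_con)                          # initial Lagrange multipliers

    def rescaled_lagrangian(x):
        gx = np.asarray(g(x))
        # f(x) - (1/k) * sum_i lam_i * log(1 + k * g_i(x))
        if np.any(1.0 + k * gx <= 0.0):
            return np.inf                         # outside the domain of psi
        return f(x) - (lam / k) @ np.log(1.0 + k * gx)

    for _ in range(outer_iters):
        x = minimize(rescaled_lagrangian, x, method="Nelder-Mead").x  # primal step
        gx = np.asarray(g(x))
        new_lam = lam / (1.0 + k * gx)            # dual update: lam_i * psi'(k g_i)
        if np.linalg.norm(new_lam - lam) < tol:
            lam = new_lam
            break
        lam = new_lam
    return x, lam

# usage: min (x0-2)^2 + (x1-1)^2  s.t.  x0 + x1 <= 2,  x0 >= 0  (written as g >= 0);
# the solution is approximately (1.5, 0.5).
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
g = lambda x: np.array([2 - x[0] - x[1], x[0]])
print(nonlinear_rescaling(f, g, np.array([0.5, 0.5]), n_con=2))
```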
Lukšan, Ladislav, Matonoha, Ctirad, Vlček, Jan (2025)
Programs and Algorithms of Numerical Mathematics
The contribution describes two nonsmooth equation methods for inequality-constrained mathematical programming problems. Three algorithms are presented and their efficiency is demonstrated by numerical experiments.
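One common way to recast such a problem as a system of nonsmooth equations (a hedged illustration, not necessarily the formulation used in the contribution) applies the minimum complementarity function to the Karush-Kuhn-Tucker conditions of min f(x) subject to c_i(x) <= 0:
\[
\Phi(x,\lambda)\;=\;
\begin{pmatrix}
\nabla f(x) + \sum_{i=1}^{m} \lambda_i \nabla c_i(x)\\[4pt]
\min\{\lambda_i,\,-c_i(x)\},\quad i=1,\dots,m
\end{pmatrix}
\;=\;0,
\]
whose solutions are exactly the Karush-Kuhn-Tucker pairs; the resulting system is of max/min type and can be attacked with generalized Newton methods.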