
New regions of stability in input optimization

Sheng Huang, Sanjo Zlobec (1988)

Aplikace matematiky

Using point-to-set mappings we identify two new regions of stability in input optimization. Then we extend various results from the literature on optimality conditions, continuity of Lagrange multipliers, and the marginal value formula over the new and some old regions of stability.

New technique for solving univariate global optimization

Djamel Aaid, Amel Noui, Mohand Ouanes (2017)

Archivum Mathematicum

In this paper, a new global optimization method is proposed for an optimization problem with a twice differentiable objective function of a single variable subject to a box constraint. The method employs the difference between a linear interpolant of the objective and a concave function, which yields a continuous piecewise convex quadratic underestimator. The main objective of this research is to determine the value of the lower bound without requiring an iterative local optimizer. The proposed method...
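
The closed-form lower bound mentioned above can be illustrated with a generic interval branch-and-bound that minimizes a convex quadratic underestimator analytically on each subinterval. The sketch below is not the authors' algorithm; the names (global_minimize, fpp_bound, lower_bound) and the assumption that a curvature bound K with |f''| <= K on [a, b] is known are introduced here purely for illustration.

import heapq

def global_minimize(f, fpp_bound, a, b, tol=1e-6, max_iter=10000):
    # Branch-and-bound sketch for min f(x) on [a, b], f twice differentiable.
    # fpp_bound is an assumed known constant K with |f''(x)| <= K on [a, b].
    K = max(float(fpp_bound), 1e-12)

    def lower_bound(l, u, fl, fu):
        # Convex quadratic underestimator: linear interpolant of f on [l, u]
        # minus (K/2)(x - l)(u - x); its minimum over [l, u] has a closed form,
        # so no iterative local optimizer is needed for the bound.
        w = u - l
        s = (fu - fl) / w                      # slope of the interpolant
        t = min(max(w / 2.0 - s / K, 0.0), w)  # minimizer, clipped to [0, w]
        return fl + s * t - 0.5 * K * t * (w - t)

    fa, fb = f(a), f(b)
    best_x, best_f = (a, fa) if fa <= fb else (b, fb)
    heap = [(lower_bound(a, b, fa, fb), a, b, fa, fb)]
    for _ in range(max_iter):
        if not heap:
            break
        lb, l, u, fl, fu = heapq.heappop(heap)
        if lb > best_f - tol:
            break                              # no remaining box can improve
        m = 0.5 * (l + u)
        fm = f(m)
        if fm < best_f:
            best_x, best_f = m, fm
        for ll, uu, fll, fuu in ((l, m, fl, fm), (m, u, fm, fu)):
            blb = lower_bound(ll, uu, fll, fuu)
            if blb < best_f - tol:
                heapq.heappush(heap, (blb, ll, uu, fll, fuu))
    return best_x, best_f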

Newton methods for solving two classes of nonsmooth equations

Yan Gao (2001)

Applications of Mathematics

The paper is devoted to two systems of nonsmooth equations. One is the system of equations of max-type functions and the other is the system of equations of smooth compositions of max-type functions. The Newton and approximate Newton methods for these two systems are proposed. The Q-superlinear convergence of the Newton methods and the Q-linear convergence of the approximate Newton methods are established. The present methods can be more easily implemented than the previous ones, since they do not...
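As a rough illustration of the first class (systems of equations of max-type functions), a Newton step can be built from the gradients of the pieces that attain the maximum at the current iterate. The sketch below is only an interpretation of that idea for a square system; the interface (newton_max_equations, funcs, grads) is invented here and is not the paper's.

import numpy as np

def newton_max_equations(funcs, grads, x0, tol=1e-10, max_iter=50):
    # Solve F_i(x) = max_j f_ij(x) = 0, i = 1..n, for x in R^n.
    # funcs[i]: list of smooth pieces f_ij; grads[i]: their gradients.
    x = np.asarray(x0, dtype=float)
    n = len(funcs)
    for _ in range(max_iter):
        F = np.empty(n)
        J = np.empty((n, x.size))
        for i, (fs, gs) in enumerate(zip(funcs, grads)):
            vals = [fij(x) for fij in fs]
            j = int(np.argmax(vals))   # index of an active (maximizing) piece
            F[i] = vals[j]
            J[i] = gs[j](x)            # gradient of that active smooth piece
        if np.linalg.norm(F) < tol:
            break
        x = x - np.linalg.solve(J, F)  # ordinary Newton step with that matrix
    return x

Only values and gradients of the active smooth pieces are used, which is one reason such schemes are easy to implement.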

Nonlinear conjugate gradient methods

Ladislav Lukšan, Jan Vlček (2015)

Programs and Algorithms of Numerical Mathematics

Modifications of the nonlinear conjugate gradient method are described and tested.
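
For context, a baseline (unmodified) nonlinear conjugate gradient iteration looks roughly as follows; this is the textbook Polak-Ribière+ variant with a simple Armijo backtracking line search, not one of the modifications tested in the paper.

import numpy as np

def ncg_pr_plus(f, grad, x0, tol=1e-6, max_iter=500):
    # Textbook nonlinear CG sketch: direction d_k = -g_k + beta_k * d_{k-1}.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # safeguard: restart with steepest descent
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5                   # backtracking Armijo line search
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+ formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x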

Nonlinear Rescaling Method and Self-concordant Functions

Richard Andrášik (2013)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

Nonlinear rescaling is a tool for solving large-scale nonlinear programming problems. The primal-dual nonlinear rescaling method was used to solve two quadratic programming problems with quadratic constraints. Based on the performance of the primal-dual nonlinear rescaling method on these test problems, conclusions about the choice of its parameters are drawn. Next, the connection between nonlinear rescaling methods and self-concordant functions is discussed, and a modified logarithmic barrier function is...
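
The logarithmic barrier idea that the abstract refers to can be sketched as follows. This is the classical barrier scheme, not the primal-dual nonlinear rescaling method itself; the helper log_barrier_qcqp and its parameters are illustrative assumptions only, and a strictly feasible starting point is presumed.

import numpy as np
from scipy.optimize import minimize

def log_barrier_qcqp(f, cons, x0, mu0=1.0, shrink=0.2, outer=15):
    # Minimize f(x) subject to g_i(x) <= 0 (e.g. a quadratic objective with
    # quadratic constraints) by repeatedly minimizing the barrier function
    #     f(x) - mu * sum_i log(-g_i(x))
    # and driving mu -> 0.  Assumes g_i(x0) < 0 for all i.
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(outer):
        def barrier(z):
            g = np.array([gi(z) for gi in cons])
            if np.any(g >= 0):
                return np.inf          # outside the interior: reject the point
            return f(z) - mu * np.sum(np.log(-g))
        x = minimize(barrier, x, method="Nelder-Mead").x  # inner minimization
        mu *= shrink                                      # tighten the barrier
    return x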

Nonsmooth Problems of Calculus of Variations via Codifferentiation

Maxim Dolgopolik (2014)

ESAIM: Control, Optimisation and Calculus of Variations

In this paper, multidimensional nonsmooth, nonconvex problems of the calculus of variations with a codifferentiable integrand are studied. Special classes of codifferentiable functions, which play an important role in the calculus of variations, are introduced and studied. The codifferentiability of the main functional of the calculus of variations is derived. Necessary conditions for an extremum of a codifferentiable function on a closed convex set and their applications to the nonsmooth problems of...
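
For orientation, a codifferentiable function in the usual Demyanov-Rubinov sense (the standard definition from the literature, not quoted from this paper, which may use a refinement of it) is one admitting an expansion of the form

f(x + \Delta) = f(x)
  + \max_{(a,v) \in \underline{d}f(x)} \bigl( a + \langle v, \Delta \rangle \bigr)
  + \min_{(b,w) \in \overline{d}f(x)} \bigl( b + \langle w, \Delta \rangle \bigr)
  + o(\|\Delta\|),

where \underline{d}f(x) and \overline{d}f(x) are convex compact subsets of \mathbb{R} \times \mathbb{R}^n. Setting \Delta = 0 shows that \max a + \min b = 0, so the expansion is exact at the point itself.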
