Displaying similar documents to “Optimality conditions for an interval-valued vector problem”

Exact $l_1$ penalty function for nonsmooth multiobjective interval-valued problems

Julie Khatri, Ashish Kumar Prasad (2024)

Kybernetika

Similarity:

Our objective in this article is to explore the idea of an unconstrained problem using the exact $l_1$ penalty function for the nonsmooth multiobjective interval-valued problem (MIVP) with inequality and equality constraints. First, we derive the KKT-type optimality conditions for the problem (MIVP). Next, we establish the equivalence between the set of weak LU-efficient solutions to the problem (MIVP) and the penalized problem (MIVP$_\rho$) with the exact $l_1$ penalty function. The...
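
For orientation, a standard exact $l_1$ penalty construction (a generic sketch, not necessarily the precise formulation used in the paper) replaces the constrained problem by the unconstrained one

$$ \min_x \; f(x) + \rho \Big( \sum_i \max\{0,\, g_i(x)\} + \sum_j |h_j(x)| \Big), $$

where $g_i(x) \le 0$ are the inequality constraints, $h_j(x) = 0$ the equality constraints, and $\rho > 0$ the penalty parameter; exactness means that the penalized problem recovers the (here: weak LU-efficient) solutions of the constrained one already for all sufficiently large finite $\rho$.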

Locally Lipschitz vector optimization with inequality and equality constraints

Ivan Ginchev, Angelo Guerraggio, Matteo Rocca (2010)

Applications of Mathematics

Similarity:

The present paper studies the following constrained vector optimization problem: $\min_C f(x)$, $g(x) \in -K$, $h(x) = 0$, where $f\colon \mathbb{R}^n \to \mathbb{R}^m$ and $g\colon \mathbb{R}^n \to \mathbb{R}^p$ are locally Lipschitz functions, $h\colon \mathbb{R}^n \to \mathbb{R}^q$ is a $C^1$ function, and $C \subset \mathbb{R}^m$ and $K \subset \mathbb{R}^p$ are closed convex cones. Two types of solutions are important for the consideration, namely $w$-minimizers (weakly efficient points) and $i$-minimizers (isolated minimizers of order 1). In terms of the Dini directional derivative, first-order necessary conditions for a point $x^0$ to be a $w$-minimizer and first-order sufficient conditions...
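
For reference, the lower Dini directional derivative of a locally Lipschitz function $f$ at $x^0$ in direction $u$ is commonly defined as

$$ f'_-(x^0; u) = \liminf_{t \downarrow 0} \frac{f(x^0 + t u) - f(x^0)}{t}, $$

and first-order conditions of the kind mentioned above are formulated with derivatives of this type (the exact variant used in the paper may differ in detail).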

Distributed dual averaging algorithm for multi-agent optimization with coupled constraints

Zhipeng Tu, Shu Liang (2024)

Kybernetika

Similarity:

This paper investigates a distributed algorithm for the multi-agent constrained optimization problem of minimizing a global objective function, formed as a sum of local convex (possibly nonsmooth) functions, under both coupled inequality and affine equality constraints. By introducing auxiliary variables, we decouple the constraints and transform the multi-agent optimization problem into a variational inequality problem with a set-valued monotone mapping. We propose a distributed...
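
As background, here is a minimal sketch of classical consensus-based distributed dual averaging (in the style of Duchi, Agarwal and Wainwright); the paper's algorithm additionally handles coupled constraints via auxiliary variables, which this sketch omits, and the `subgrads` interface and ball projection are illustrative assumptions:

```python
import numpy as np

def distributed_dual_averaging(subgrads, W, n_agents, dim, n_iters, radius=1.0):
    """Consensus-based dual averaging sketch.

    subgrads : list of callables, subgrads[i](x) -> subgradient of agent i's
               local convex objective at x (hypothetical interface).
    W        : doubly stochastic mixing matrix encoding the network.
    Projection onto a Euclidean ball of the given radius stands in for the
    local constraint set.
    """
    z = np.zeros((n_agents, dim))          # dual (gradient-accumulator) states
    x = np.zeros((n_agents, dim))          # primal iterates
    for t in range(1, n_iters + 1):
        step = 1.0 / np.sqrt(t)            # standard diminishing step size
        # each agent mixes neighbours' dual states, then adds its own subgradient
        z = W @ z + np.array([subgrads[i](x[i]) for i in range(n_agents)])
        # Euclidean prox step followed by projection onto the ball
        x = -step * z
        norms = np.linalg.norm(x, axis=1, keepdims=True)
        x = x * np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    return x.mean(axis=0)                  # average of the agents' iterates
```

The doubly stochastic matrix W is what lets the dual states reach consensus across the network while each agent only ever sees its own subgradients.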

Derivatives of Hadamard type in scalar constrained optimization

Karel Pastor (2017)

Kybernetika

Similarity:

Vsevolod I. Ivanov stated (Nonlinear Analysis 125 (2015), 270-289) a general second-order optimality condition for the constrained vector problem in terms of Hadamard derivatives. We consider its special case for a scalar problem and derive some corollaries, for example for functions that are $\ell$-stable at a feasible point. We then show the advantages of the obtained results over previously known ones.
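
For context, the lower Hadamard directional derivative perturbs the direction together with the step,

$$ d_H f(x^0; u) = \liminf_{t \downarrow 0,\; u' \to u} \frac{f(x^0 + t u') - f(x^0)}{t}, $$

which is what distinguishes it from the Dini derivative; second-order conditions of the kind discussed here are built from analogous second-order difference quotients.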

A primal-dual integral method in global optimization

Jens Hichert, Armin Hoffmann, Hoang Xuan Phú, Rüdiger Reinhardt (2000)

Discussiones Mathematicae, Differential Inclusions, Control and Optimization

Similarity:

Using the Fenchel conjugate $F^c$ of Phú's volume function $F$ of a given essentially bounded measurable function $f$ defined on the bounded box $D \subset \mathbb{R}^n$, the integral method of Chew and Zheng for global optimization is modified into a superlinearly convergent method with respect to the level sequence. Numerical results are given for low-dimensional functions with a strict global essential supremum.
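
For reference, the Fenchel conjugate of $F$ is defined by

$$ F^c(y) = \sup_x \,\{ \langle x, y \rangle - F(x) \}, $$

while the volume function measures level sets of $f$ on $D$, e.g. of the form $F(a) = \mu(\{x \in D : f(x) \ge a\})$; this shape is given here only as an indication, and the paper's precise definition and normalization may differ.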

New hybrid conjugate gradient method for nonlinear optimization with application to image restoration problems

Youcef Elhamam Hemici, Samia Khelladi, Djamel Benterki (2024)

Kybernetika

Similarity:

The conjugate gradient method is one of the most effective algorithms for unconstrained nonlinear optimization problems, owing to its modest memory requirements and simple structure. This motivates us to propose a new hybrid conjugate gradient method through a convex combination of $\beta_k^{RMIL}$ and $\beta_k^{HS}$. We compute the convex parameter $\theta_k$ using the Newton direction. Global convergence is established under the strong Wolfe conditions. Numerical experiments...
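
A minimal sketch of a hybrid nonlinear conjugate gradient iteration with a convex combination of the standard HS and RMIL update parameters; note that the fixed `theta` here is a placeholder, whereas the paper computes $\theta_k$ from the Newton direction, and a simple Armijo backtracking line search stands in for the strong Wolfe conditions:

```python
import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, tol=1e-6, max_iter=500):
    """Hybrid nonlinear CG sketch: beta = (1-theta)*beta_RMIL + theta*beta_HS."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                                   # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search (placeholder for strong Wolfe)
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                        # gradient difference
        beta_hs = (g_new @ y) / (d @ y)      # Hestenes-Stiefel parameter
        beta_rmil = (g_new @ y) / (d @ d)    # RMIL-type parameter
        beta = (1 - theta) * beta_rmil + theta * beta_hs
        d = -g_new + beta * d                # new conjugate direction
        x, g = x_new, g_new
    return x

# usage on a simple quadratic with minimum at the origin
f = lambda x: 0.5 * x @ x
grad = lambda x: x
print(hybrid_cg(f, grad, np.array([3.0, -4.0])))
```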

Saddle point criteria for second order $\eta$-approximated vector optimization problems

Anurag Jayswal, Shalini Jha, Sarita Choudhury (2016)

Kybernetika

Similarity:

The purpose of this paper is to apply the second order $\eta$-approximation method, introduced to optimization theory by Antczak [2], to obtain new second order $\eta$-saddle point criteria for vector optimization problems involving second order invex functions. To this end, a second order $\eta$-saddle point and a second order $\eta$-Lagrange function are defined for the second order $\eta$-approximated vector optimization problem constructed in this approach. Then, the equivalence between a (weak) efficient solution...
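
For orientation, a saddle point criterion in this setting has the familiar generic shape

$$ L(\bar{x}, \lambda) \;\le\; L(\bar{x}, \bar{\lambda}) \;\le\; L(x, \bar{\lambda}) \quad \text{for all feasible } x \text{ and all } \lambda \ge 0, $$

where $L$ is the Lagrange function and $(\bar{x}, \bar{\lambda})$ the saddle point; the paper's version replaces $L$ by the second order $\eta$-Lagrange function built from the $\eta$-approximated problem, so the inequality above is only a generic template, not the paper's exact statement.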