Displaying similar documents to “Locally Lipschitz vector optimization with inequality and equality constraints”

On weak sharp minima for a special class of nonsmooth functions

Marcin Studniarski (2000)

Discussiones Mathematicae, Differential Inclusions, Control and Optimization

Similarity:

We present a characterization of weak sharp local minimizers of order one for a function f: ℝⁿ → ℝ defined by f(x) := max{fᵢ(x) | i = 1,...,p}, where the functions fᵢ are strictly differentiable. It is given in terms of the gradients of fᵢ and the Mordukhovich normal cone to a given set on which f is constant. Then we apply this result to a smooth nonlinear programming problem with constraints.
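
As a toy illustration (our own sketch, not taken from the paper), a max-type function of the form f(x) = max{fᵢ(x) | i = 1,...,p} and the gradients of its active pieces at a point can be written as follows; the particular pieces fᵢ below are invented for the example.

```python
import numpy as np

# Hypothetical strictly differentiable pieces f_i : R^2 -> R and their gradients
fs = [lambda x: x[0]**2 + x[1],
      lambda x: -x[0] + x[1]**2,
      lambda x: x[0] + x[1]]
grads = [lambda x: np.array([2 * x[0], 1.0]),
         lambda x: np.array([-1.0, 2 * x[1]]),
         lambda x: np.array([1.0, 1.0])]

def f(x):
    # f(x) = max{f_i(x) | i = 1,...,p}
    return max(fi(x) for fi in fs)

def active_gradients(x, tol=1e-9):
    # Gradients of the pieces attaining the max at x; characterizations of
    # weak sharp minima of order one are stated in terms of these gradients.
    vals = [fi(x) for fi in fs]
    m = max(vals)
    return [g(x) for v, g in zip(vals, grads) if m - v <= tol]
```

At x = (1, 1) the first and third pieces are both active, so two gradients enter the characterization.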

Saddle point criteria for second order η -approximated vector optimization problems

Anurag Jayswal, Shalini Jha, Sarita Choudhury (2016)

Kybernetika

Similarity:

The purpose of this paper is to apply the second order η-approximation method, introduced to optimization theory by Antczak [2], to obtain a new second order η-saddle point criterion for vector optimization problems involving second order invex functions. To this end, a second order η-saddle point and the second order η-Lagrange function are defined for the second order η-approximated vector optimization problem constructed in this approach. Then, the equivalence between a (weak) efficient solution...

Exact l₁ penalty function for nonsmooth multiobjective interval-valued problems

Julie Khatri, Ashish Kumar Prasad (2024)

Kybernetika

Similarity:

Our objective in this article is to explore the idea of an unconstrained problem using the exact l₁ penalty function for the nonsmooth multiobjective interval-valued problem (MIVP) having inequality and equality constraints. First of all, we figure out the KKT-type optimality conditions for the problem (MIVP). Next, we establish the equivalence between the set of weak LU-efficient solutions to the problem (MIVP) and the penalized problem (MIVPρ) with the exact l₁ penalty function. The...
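
A scalar-valued sketch of the exact l₁ penalty construction (our own illustration, not the interval-valued setting of the paper): for min f(x) subject to gᵢ(x) ≤ 0 and hⱼ(x) = 0, the penalized objective is P_ρ(x) = f(x) + ρ(Σ max(0, gᵢ(x)) + Σ |hⱼ(x)|). The toy problem below is an assumption made for the example.

```python
def exact_l1_penalty(f, gs, hs, rho):
    # Build P_rho(x) = f(x) + rho * (sum of constraint violations in l1 norm)
    def P(x):
        return (f(x)
                + rho * sum(max(0.0, g(x)) for g in gs)
                + rho * sum(abs(h(x)) for h in hs))
    return P

# Toy problem: min x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
# For rho large enough the penalty is exact: the unconstrained minimizer
# of P coincides with the constrained minimizer x = 1.
P = exact_l1_penalty(lambda x: x**2, [lambda x: 1.0 - x], [], rho=4.0)
```

Note the kink of P at the constraint boundary: exactness relies on the nonsmoothness of the l₁ term, which is why ρ can stay finite.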

New hybrid conjugate gradient method for nonlinear optimization with application to image restoration problems

Youcef Elhamam Hemici, Samia Khelladi, Djamel Benterki (2024)

Kybernetika

Similarity:

The conjugate gradient method is one of the most effective algorithms for unconstrained nonlinear optimization problems, since it requires little storage and has a simple structure; this motivates us to propose a new hybrid conjugate gradient method based on a convex combination of β_k^RMIL and β_k^HS. We compute the convex parameter θ_k using the Newton direction. Global convergence is established through the strong Wolfe conditions. Numerical experiments...
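
A hedged sketch of how such a hybrid step can look (not the paper's implementation): β_k is taken as a convex combination θ·β^RMIL + (1-θ)·β^HS. The paper computes θ_k from the Newton direction and uses a strong Wolfe line search; here θ is fixed and a simple Armijo backtracking with a restart safeguard stands in, purely for illustration on a toy quadratic.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # toy strictly convex quadratic
b = np.array([1.0, 1.0])                 # exact minimizer: A x = b

def f_obj(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

def hybrid_cg(x, theta=0.5, iters=100):
    g = grad(x)
    d = -g
    for _ in range(iters):
        if g @ d >= 0:                   # safeguard: restart on non-descent
            d = -g
        t = 1.0                          # Armijo backtracking line search
        while f_obj(x + t * d) > f_obj(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < 1e-10:
            return x_new
        y = g_new - g
        beta_rmil = (g_new @ y) / (d @ d)   # RMIL coefficient
        beta_hs = (g_new @ y) / (d @ y)     # Hestenes-Stiefel coefficient
        beta = theta * beta_rmil + (1 - theta) * beta_hs
        x, g = x_new, g_new
        d = -g + beta * d
    return x

x_star = hybrid_cg(np.array([0.0, 0.0]))
```

The restart safeguard and fixed θ are simplifications; the point of the convex combination is to blend the robustness of β^RMIL with the efficiency of β^HS.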

Combination of t-norms and their conorms

Karel Zimmermann (2023)

Kybernetika

Similarity:

Non-negative linear combinations of t_min-norms and their conorms are used to formulate decision-making problems by means of systems of max-separable equations and inequalities, and optimization problems with constraints described by such systems. The left-hand sides of the systems are maxima of increasing functions of one variable, and the right-hand sides are constants. Properties of the systems are studied, as well as optimization problems with constraints given by the systems...
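
A minimal max-plus instance of a max-separable system (our own assumption for illustration: the increasing one-variable functions are f_ij(x_j) = a_ij + x_j) shows how solvability can be checked via the greatest subsolution of the corresponding inequality system:

```python
import numpy as np

# Solve max_j (a_ij + x_j) = b_i. The greatest solution of the <= system is
# x_j = min_i (b_i - a_ij); the equation system is solvable iff this x
# attains every right-hand side b_i.
A = np.array([[0.0, 2.0], [1.0, 0.0]])   # illustrative coefficients a_ij
b = np.array([3.0, 2.0])                 # right-hand-side constants

x = (b[:, None] - A).min(axis=0)          # greatest subsolution
residual = (A + x[None, :]).max(axis=1)   # left-hand sides evaluated at x
solvable = np.allclose(residual, b)
```

Here x = (1, 1) attains both right-hand sides, so the equation system is solvable and x is its greatest solution.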

Derivatives of Hadamard type in scalar constrained optimization

Karel Pastor (2017)

Kybernetika

Similarity:

Vsevolod I. Ivanov stated (Nonlinear Analysis 125 (2015), 270-289) a general second-order optimality condition for the constrained vector problem in terms of Hadamard derivatives. We consider its special case for a scalar problem and present some corollaries, for example for functions which are ℓ-stable at a feasible point. Then we show the advantages of the obtained results with respect to previously obtained results.

Distributed dual averaging algorithm for multi-agent optimization with coupled constraints

Zhipeng Tu, Shu Liang (2024)

Kybernetika

Similarity:

This paper investigates a distributed algorithm for the multi-agent constrained optimization problem, which is to minimize a global objective function formed by a sum of local convex (possibly nonsmooth) functions under both coupled inequality and affine equality constraints. By introducing auxiliary variables, we decouple the constraints and transform the multi-agent optimization problem into a variational inequality problem with a set-valued monotone mapping. We propose a distributed...

Lipschitz extensions of convex-valued maps

Alberto Bressan, Agostino Cortesi (1986)

Atti della Accademia Nazionale dei Lincei. Classe di Scienze Fisiche, Matematiche e Naturali. Rendiconti Lincei. Matematica e Applicazioni

Similarity:

It is shown that every Lipschitz multivalued map with Lipschitz constant M, defined on a subset of a Hilbert space H and taking compact convex values in ℝⁿ, can be extended to all of H as a Lipschitz multivalued map with constant less than 7nM. In general, however, no extensions with the same Lipschitz constant M exist.

Linearization techniques for 𝕃∞-control problems and dynamic programming principles in classical and 𝕃∞-control problems

Dan Goreac, Oana-Silvia Serea (2012)

ESAIM: Control, Optimisation and Calculus of Variations

Similarity:

The aim of the paper is to provide a linearization approach to 𝕃∞-control problems. We begin by proving a semigroup-type behaviour of the set of constraints appearing in the linearized formulation of (standard) control problems. As a byproduct, we obtain a linear formulation of the dynamic programming principle. Then, we use the 𝕃p approach and the associated linear formulations. This seems to be the most appropriate tool for treating 𝕃∞ problems in continuous and...