Displaying similar documents to “A novel kernel function bridging iteration bounds in interior-point algorithms for linear programming”

Complexity of primal-dual interior-point algorithm for linear programming based on a new class of kernel functions

Safa Guerdouh, Wided Chikouche, Imene Touil, Adnan Yassine (2023)

Kybernetika

Similarity:

In this paper, we first present a polynomial-time primal-dual interior-point method (IPM) for solving linear programming (LP) problems, based on a new kernel function (KF) with a hyperbolic-logarithmic barrier term. To improve the iteration bound, we propose a parameterized version of this function. We show that the complexity result meets the currently best iteration bound for large-update methods by choosing a special value of the parameter. Numerical experiments reveal that the new...

Combination of t-norms and their conorms

Karel Zimmermann (2023)

Kybernetika

Similarity:

Non-negative linear combinations of t_min-norms and their conorms are used to formulate decision-making problems via systems of max-separable equations and inequalities, and optimization problems under constraints described by such systems. The left-hand sides of the systems are maxima of increasing functions of one variable, while the right-hand sides are constants. Properties of the systems are studied, as well as optimization problems with constraints given by the systems...

A new parameterized logarithmic kernel function for linear optimization with a double barrier term yielding the best known iteration bound

Benhadid Ayache, Saoudi Khaled (2020)

Communications in Mathematics

Similarity:

In this paper, we propose a large-update primal-dual interior point algorithm for linear optimization. The method is based on a new class of kernel functions which differs from the existing kernel functions in that it has a double barrier term. The resulting analysis yields the best known iteration bound O(√n log(n) log(n/ε)) for large-update algorithms with a special choice of its parameter m, and thus improves the iteration bound obtained in Bai et al. [] for large-update algorithms. ...

Distributed dual averaging algorithm for multi-agent optimization with coupled constraints

Zhipeng Tu, Shu Liang (2024)

Kybernetika

Similarity:

This paper investigates a distributed algorithm for the multi-agent constrained optimization problem, which is to minimize a global objective function formed by a sum of local convex (possibly nonsmooth) functions under both coupled inequality and affine equality constraints. By introducing auxiliary variables, we decouple the constraints and transform the multi-agent optimization problem into a variational inequality problem with a set-valued monotone mapping. We propose a distributed...

Exact l_1 penalty function for nonsmooth multiobjective interval-valued problems

Julie Khatri, Ashish Kumar Prasad (2024)

Kybernetika

Similarity:

Our objective in this article is to explore the idea of an unconstrained problem using the exact l_1 penalty function for the nonsmooth multiobjective interval-valued problem (MIVP) having inequality and equality constraints. First of all, we figure out the KKT-type optimality conditions for the problem (MIVP). Next, we establish the equivalence between the set of weak LU-efficient solutions to the problem (MIVP) and the penalized problem (MIVP_ρ) with the exact l_1 penalty function. The...

New hybrid conjugate gradient method for nonlinear optimization with application to image restoration problems

Youcef Elhamam Hemici, Samia Khelladi, Djamel Benterki (2024)

Kybernetika

Similarity:

The conjugate gradient method is one of the most effective algorithms for unconstrained nonlinear optimization problems. This is due to the fact that it does not need much storage memory and has a simple structure, which motivates us to propose a new hybrid conjugate gradient method through a convex combination of β_k^RMIL and β_k^HS. We compute the convex parameter θ_k using the Newton direction. Global convergence is established through the strong Wolfe conditions. Numerical experiments...
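As a rough illustration of the hybrid scheme described in this abstract (this is not the authors' method: the paper computes θ_k from the Newton direction, whereas this sketch fixes θ = 0.5 and uses an exact line search on a small made-up convex quadratic):

```python
import numpy as np

# Minimal sketch of a hybrid conjugate gradient iteration on the convex
# quadratic f(x) = 0.5 x^T Q x - c^T x, whose unique minimizer solves Qx = c.
# The hybrid parameter is beta = theta * beta_RMIL + (1 - theta) * beta_HS.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])

def grad(x):
    return Q @ x - c

x = np.zeros(2)
g = grad(x)
d = -g
theta = 0.5  # fixed here for illustration only
for _ in range(500):
    alpha = -(g @ d) / (d @ Q @ d)  # exact line search along d for a quadratic
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-10:
        break
    y = g_new - g
    beta_rmil = (g_new @ y) / (d @ d)  # Rivaie et al. (RMIL) formula
    beta_hs = (g_new @ y) / (d @ y)    # Hestenes-Stiefel formula
    beta = theta * beta_rmil + (1 - theta) * beta_hs
    d = -g_new + beta * d
    g = g_new

print(np.linalg.norm(Q @ x - c))  # residual at the computed point
```

With exact line search the hybrid direction remains a descent direction, so the iteration decreases f monotonically on this toy problem.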

On sparsity of approximate solutions to max-plus linear systems

Pingke Li (2024)

Kybernetika

Similarity:

When a system of one-sided max-plus linear equations is inconsistent, approximate solutions within an admissible error bound may be desired instead, particularly ones with some sparsity property. It is demonstrated in this paper that obtaining the sparsest approximate solution within a given L_∞ error bound may be transformed in polynomial time into the set covering problem, which is known to be NP-hard. Besides, the problem of obtaining the sparsest approximate solution within a given...
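For background on the systems discussed above: a one-sided max-plus linear system max_j (a_ij + x_j) = b_i always has a greatest candidate solution (the principal solution x_j = min_i (b_i − a_ij)), and the system is consistent iff substituting it back attains equality in every row. A minimal NumPy sketch of this standard fact, on made-up data (the paper's set-covering reduction is not reproduced here):

```python
import numpy as np

# One-sided max-plus system: max_j (A[i, j] + x[j]) = b[i] for every row i.
A = np.array([[0.0, 2.0],
              [1.0, 0.0]])
b = np.array([3.0, 2.0])

# Principal solution: the greatest x satisfying A (x) <= b in max-plus
# arithmetic; the system is consistent iff it attains equality in every row.
x_hat = np.min(b[:, None] - A, axis=0)
lhs = np.max(A + x_hat[None, :], axis=1)
print(x_hat, np.allclose(lhs, b))
```

On this example x_hat = (1, 1) and both equations hold exactly, so the system is consistent; for an inconsistent system some rows would fall strictly below b.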

Fourier analysis, linear programming, and densities of distance avoiding sets in ℝⁿ

Fernando Mário de Oliveira Filho, Frank Vallentin (2010)

Journal of the European Mathematical Society

Similarity:

We derive new upper bounds for the densities of measurable sets in ℝⁿ which avoid a finite set of prescribed distances. The new bounds come from the solution of a linear programming problem. We apply this method to obtain new upper bounds for measurable sets which avoid the unit distance in dimensions 2, ..., 24. This gives new lower bounds for the measurable chromatic number in dimensions 3, ..., 24. We apply it to get a short proof of a variant of a recent result of Bukh which in turn generalizes theorems...

New results on semidefinite bounds for l_1-constrained nonconvex quadratic optimization

Yong Xia (2013)

RAIRO - Operations Research - Recherche Opérationnelle

Similarity:

In this paper, we show that the direct semidefinite programming (SDP) bound for the nonconvex quadratic optimization problem over the l_1 unit ball (QPL1) is equivalent to the optimal d.c. (difference of convex) bound for the standard quadratic programming reformulation of QPL1. Then we disprove a conjecture about the tightness of the direct SDP bound. Finally, as an extension of QPL1, we study the relaxation problem of the sparse principal component analysis, denoted by...

A dual-parameter double-step splitting iteration method for solving complex symmetric linear equations

Beibei Li, Jingjing Cui, Zhengge Huang, Xiaofeng Xie (2024)

Applications of Mathematics

Similarity:

We multiply both sides of the complex symmetric linear system Ax = b by (1 − iω) to obtain a new equivalent linear system, for which a dual-parameter double-step splitting (DDSS) method is then established. In addition, we present an upper bound for the spectral radius of the iteration matrix of the DDSS method and obtain its quasi-optimal parameter. Theoretical analyses demonstrate that the new method is convergent when certain conditions are satisfied. Some tested examples are...
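The preprocessing step mentioned above is easy to illustrate: multiplying both sides of Ax = b by the nonzero scalar (1 − iω) leaves the solution set unchanged. A minimal NumPy sketch on made-up data (the transformed system is solved by a direct method here, not by the paper's DDSS iteration):

```python
import numpy as np

# A complex symmetric system A x = b with A = W + i T (W, T real symmetric).
rng = np.random.default_rng(0)
n = 5
W = rng.standard_normal((n, n))
W = W + W.T + n * np.eye(n)   # symmetric, diagonally shifted real part
T = rng.standard_normal((n, n))
T = T + T.T                   # symmetric imaginary part
A = W + 1j * T
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = A @ x_true

# Multiplying both sides by the nonzero scalar (1 - i*omega) yields an
# equivalent system with exactly the same solution.
omega = 0.7
A_new = (1 - 1j * omega) * A
b_new = (1 - 1j * omega) * b

x = np.linalg.solve(A_new, b_new)
print(np.allclose(x, x_true))
```

The scaling changes the spectral properties of the splitting (which is the point of the DDSS construction) without altering the solution.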

Stochastic optimization problems with second order stochastic dominance constraints via Wasserstein metric

Vlasta Kaňková, Vadim Omelčenko (2018)

Kybernetika

Similarity:

Optimization problems with stochastic dominance constraints are helpful in many real-life applications; we can recall, e.g., problems of portfolio selection or problems connected with energy production. The above mentioned constraints are very suitable because they guarantee a solution fulfilling a partial order between utility functions in a given subsystem 𝒰 of the utility functions. Especially, considering 𝒰 := 𝒰_1 (where 𝒰_1 is a system of nondecreasing concave nonnegative utility functions)...
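For orientation, the second order stochastic dominance relation induced by a system such as 𝒰_1 admits the standard equivalent characterizations (a textbook fact, not a claim of this paper): X dominates Y in the second order iff

    E[u(X)] ≥ E[u(Y)]   for all u ∈ 𝒰_1,

which is equivalent to the integrated distribution-function condition

    ∫_{−∞}^{t} F_X(s) ds ≤ ∫_{−∞}^{t} F_Y(s) ds   for all t ∈ ℝ.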

Linearization techniques for 𝕃^∞-control problems and dynamic programming principles in classical and 𝕃^∞-control problems

Dan Goreac, Oana-Silvia Serea (2012)

ESAIM: Control, Optimisation and Calculus of Variations

Similarity:

The aim of the paper is to provide a linearization approach to 𝕃^∞-control problems. We begin by proving a semigroup-type behaviour of the set of constraints appearing in the linearized formulation of (standard) control problems. As a byproduct we obtain a linear formulation of the dynamic programming principle. Then, we use the 𝕃^p approach and the associated linear formulations. This seems to be the most appropriate tool for treating 𝕃^∞ problems in continuous and...

A primal-dual integral method in global optimization

Jens Hichert, Armin Hoffmann, Hoang Xuan Phú, Rüdiger Reinhardt (2000)

Discussiones Mathematicae, Differential Inclusions, Control and Optimization

Similarity:

Using the Fenchel conjugate F^c of Phú's Volume function F of a given essentially bounded measurable function f defined on the bounded box D ⊂ ℝⁿ, the integral method of Chew and Zheng for global optimization is modified into a superlinearly convergent method with respect to the level sequence. Numerical results are given for low-dimensional functions with a strict global essential supremum.