Displaying similar documents to “A viscosity-proximal gradient method with inertial extrapolation for solving certain minimization problems in Hilbert space”

On the weighted Euclidean matching problem in $\mathbb{R}^d$

Birgit Anthes, Ludger Rüschendorf (2001)

Applicationes Mathematicae

Similarity:

A partitioning algorithm for the Euclidean matching problem in $\mathbb{R}^d$ is introduced and analyzed in a probabilistic model. The algorithm uses elements from the fixed dissection algorithm of Karp and Steele (1985) and the Zig-Zag algorithm of Halton and Terada (1982) for the traveling salesman problem. The algorithm runs in expected time $n(\log n)^{p-1}$ and approximates the optimal matching in the probabilistic sense.
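
For intuition only, here is a minimal Python/NumPy sketch of a fixed-dissection style matching heuristic in the unit square; the grid size and the greedy pairing rule are illustrative choices and do not reproduce the paper's algorithm or its $n(\log n)^{p-1}$ analysis.

import numpy as np

def grid_matching(points, cells_per_side=8):
    """Greedily match 2D points cell by cell; stragglers are paired at the end."""
    cell = np.clip(np.floor(points * cells_per_side).astype(int), 0, cells_per_side - 1)
    buckets = {}
    for i, key in enumerate(map(tuple, cell)):
        buckets.setdefault(key, []).append(i)
    pairs, leftovers = [], []
    for idx in buckets.values():
        while len(idx) >= 2:                 # pair points inside one cell
            pairs.append((idx.pop(), idx.pop()))
        leftovers.extend(idx)
    while len(leftovers) >= 2:               # pair leftover points across cells
        pairs.append((leftovers.pop(), leftovers.pop()))
    cost = sum(np.linalg.norm(points[i] - points[j]) for i, j in pairs)
    return pairs, cost

rng = np.random.default_rng(0)
pts = rng.random((1000, 2))                  # uniform random points in [0,1]^2
matching, total_length = grid_matching(pts)
print(len(matching), "pairs, total edge length:", round(total_length, 3))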

Distributed accelerated Nash equilibrium learning for two-subnetwork zero-sum game with bilinear coupling

Xianlin Zeng, Lihua Dou, Jinqiang Cui (2023)

Kybernetika

Similarity:

This paper proposes a distributed accelerated first-order continuous-time algorithm for $O(1/t^2)$ convergence to Nash equilibria in a class of two-subnetwork zero-sum games with bilinear couplings. First-order methods, which only use subgradients of functions, are frequently used in distributed/parallel algorithms for solving large-scale and big-data problems due to their simple structures. However, in the worst cases, first-order methods for two-subnetwork zero-sum games often have an asymptotic...
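
As a point of comparison only, the following Python sketch runs a centralized extragradient iteration on a small bilinear saddle-point problem $\min_x \max_y \, x^T A y + b^T x - c^T y$; it is not the paper's distributed accelerated $O(1/t^2)$ method, and the problem data are invented for the example.

import numpy as np

rng = np.random.default_rng(1)
n = 5
A = np.eye(n) + 0.5 * rng.standard_normal((n, n))   # bilinear coupling matrix
b = rng.standard_normal(n)
c = rng.standard_normal(n)

x, y = np.zeros(n), np.zeros(n)
eta = 0.5 / np.linalg.norm(A, 2)                     # step size below 1/||A||_2
for _ in range(20000):
    # predictor step with the current gradients of the bilinear objective
    gx, gy = A @ y + b, A.T @ x - c
    xh, yh = x - eta * gx, y + eta * gy
    # corrector step with gradients evaluated at the predictor point
    gx, gy = A @ yh + b, A.T @ xh - c
    x, y = x - eta * gx, y + eta * gy

# at a saddle point A y + b = 0 and A^T x - c = 0; both residuals shrink toward zero
print(np.linalg.norm(A @ y + b), np.linalg.norm(A.T @ x - c))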

An interior-point algorithm for semidefinite least-squares problems

Chafia Daili, Mohamed Achache (2022)

Applications of Mathematics

Similarity:

We propose a feasible primal-dual path-following interior-point algorithm for semidefinite least squares problems (SDLS). At each iteration, the algorithm uses only full Nesterov-Todd steps with the advantage that no line search is required. Under new appropriate choices of the parameter $\beta$ which defines the size of the neighborhood of the central path and of the parameter $\theta$ which determines the rate of decrease of the barrier parameter, we show that the proposed algorithm is well defined...
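
The simplest semidefinite least-squares instance, projecting a symmetric matrix onto the positive semidefinite cone in the Frobenius norm, has a closed form obtained by clipping negative eigenvalues; the Python sketch below shows only that special case and does not implement the paper's primal-dual interior-point method with Nesterov-Todd steps.

import numpy as np

def psd_projection(C):
    """Nearest PSD matrix to C in the Frobenius norm (after symmetrizing C)."""
    C = 0.5 * (C + C.T)
    w, V = np.linalg.eigh(C)
    return (V * np.clip(w, 0.0, None)) @ V.T         # drop negative eigenvalues

rng = np.random.default_rng(2)
C = rng.standard_normal((6, 6))
X = psd_projection(C)
print("smallest eigenvalue of X:", np.linalg.eigvalsh(X).min())
print("distance to symmetrized C:", np.linalg.norm(X - 0.5 * (C + C.T)))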

A stochastic mirror-descent algorithm for solving $AXB = C$ over a multi-agent system

Yinghui Wang, Songsong Cheng (2021)

Kybernetika

Similarity:

In this paper, we consider a distributed stochastic computation of $AXB = C$ with local set constraints over a multi-agent system, where each agent over the network only knows a few rows or columns of the matrices. Through formulating an equivalent distributed optimization problem for seeking least-squares solutions of $AXB = C$, we propose a distributed stochastic mirror-descent algorithm for solving the equivalent distributed problem. Then, we provide the sublinear convergence of the proposed algorithm...
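
A minimal centralized sketch of the underlying least-squares problem: plain gradient descent (mirror descent with the Euclidean mirror map) on $f(X) = \frac{1}{2}\|AXB - C\|_F^2$, whose gradient is $A^T(AXB - C)B^T$. The distributed stochastic structure of the paper, with agents holding only a few rows or columns, is not modeled; the sizes and data are illustrative.

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 5))
B = rng.standard_normal((4, 7))
X_true = rng.standard_normal((5, 4))
C = A @ X_true @ B                                   # consistent right-hand side

X = np.zeros((5, 4))
step = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
for _ in range(20000):
    grad = A.T @ (A @ X @ B - C) @ B.T               # gradient of 1/2 ||AXB - C||_F^2
    X -= step * grad

print("residual ||AXB - C||_F:", np.linalg.norm(A @ X @ B - C))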

The adaptation of the $k$-means algorithm to solving the multiple ellipses detection problem by using an initial approximation obtained by the DIRECT global optimization algorithm

Rudolf Scitovski, Kristian Sabo (2019)

Applications of Mathematics

Similarity:

We consider the multiple ellipses detection problem on the basis of a set of data points coming from a number of ellipses in the plane that is not known in advance, whereby an ellipse $E$ is viewed as a Mahalanobis circle with center $S$, radius $r$, and some positive definite matrix $\Sigma$. A very efficient method for solving this problem is proposed. The method uses a modification of the $k$-means algorithm for Mahalanobis-circle centers. The initial approximation consists of the set of circles whose centers...
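
A bare-bones Python sketch of a "Mahalanobis $k$-means" iteration is given below: points are assigned to the nearest cluster in Mahalanobis distance and each cluster's center and covariance are re-estimated. The number of clusters is assumed known and the initialization is random, whereas the paper obtains the initial approximation with the DIRECT global optimization algorithm and also treats the unknown number of ellipses.

import numpy as np

def mahalanobis_kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    covs = np.array([np.eye(X.shape[1]) for _ in range(k)])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # squared Mahalanobis distance of every point to every cluster
        d2 = np.stack([np.einsum('ij,jk,ik->i', X - c, np.linalg.inv(S), X - c)
                       for c, S in zip(centers, covs)], axis=1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts) > X.shape[1]:        # enough points to estimate the covariance
                centers[j] = pts.mean(axis=0)
                covs[j] = np.cov(pts.T) + 1e-6 * np.eye(X.shape[1])
    return centers, covs, labels

# synthetic data: two noisy ellipses in the plane
rng = np.random.default_rng(4)
t = rng.uniform(0, 2 * np.pi, 300)
e1 = np.c_[3 * np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((300, 2))
e2 = np.c_[np.cos(t), 2 * np.sin(t)] + [6.0, 0.0] + 0.05 * rng.standard_normal((300, 2))
centers, covs, labels = mahalanobis_kmeans(np.vstack([e1, e2]), k=2)
print(np.round(centers, 2))                  # roughly (0, 0) and (6, 0)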

New hybrid conjugate gradient method for nonlinear optimization with application to image restoration problems

Youcef Elhamam Hemici, Samia Khelladi, Djamel Benterki (2024)

Kybernetika

Similarity:

The conjugate gradient method is one of the most effective algorithms for unconstrained nonlinear optimization problems, since it requires little storage and has a simple structure. This motivates us to propose a new hybrid conjugate gradient method through a convex combination of $\beta_k^{RMIL}$ and $\beta_k^{HS}$. We compute the convex parameter $\theta_k$ using the Newton direction. Global convergence is established under the strong Wolfe conditions. Numerical experiments...
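
A rough Python sketch of one possible hybrid conjugate gradient iteration follows. The mixing parameter is fixed here and the line search is plain Armijo backtracking with a steepest-descent safeguard; both are simplifications of the paper, which computes $\theta_k$ from the Newton direction and uses the strong Wolfe conditions.

import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, iters=500, tol=1e-8):
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along the descent direction d
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        beta_rmil = (g_new @ y) / (d @ d)            # RMIL formula
        beta_hs = (g_new @ y) / (d @ y)              # Hestenes-Stiefel formula
        beta = (1.0 - theta) * beta_rmil + theta * beta_hs
        d = -g_new + beta * d
        if g_new @ d >= 0:                           # safeguard: keep d a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# strongly convex quadratic test problem f(x) = 1/2 x^T Q x - b^T x
rng = np.random.default_rng(5)
M = rng.standard_normal((10, 10))
Q = M @ M.T + 10.0 * np.eye(10)
b = rng.standard_normal(10)
x = hybrid_cg(lambda z: 0.5 * z @ Q @ z - b @ z, lambda z: Q @ z - b, np.zeros(10))
print("gradient norm at the computed minimizer:", np.linalg.norm(Q @ x - b))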

Implicitization of Parametric Hypersurfaces via Points

Ferruccio Orecchia, Isabella Ramella (2018)

Rendiconto dell’Accademia delle Scienze Fisiche e Matematiche

Similarity:

Given a parametric polynomial representation of an algebraic hypersurface 𝐒 in projective space we give a new algorithm for finding the implicit Cartesian equation of 𝐒. The algorithm is based on finding a suitable finite number of points on 𝐒 and computing, by linear algebra, the equation of the hypersurface of least degree that passes through the points. In particular, the algorithm works for plane curves and surfaces in the ordinary three-dimensional space. Using C++ the algorithm...
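
A small numerical Python sketch of the point-based idea for a plane curve: sample the parametrization $x = t^2$, $y = t^3$, evaluate all monomials of degree at most 3 at the sampled points, and take a null vector of the evaluation matrix as the coefficients of the implicit equation (here $y^2 - x^3 = 0$). The degree bound, the parametrization, and the sampling are assumptions of this toy example, not choices made in the paper.

import numpy as np

def monomial_matrix(points, degree):
    """Evaluate all monomials x^i y^j with i + j <= degree at the given points."""
    x, y = points[:, 0], points[:, 1]
    cols, labels = [], []
    for d in range(degree + 1):
        for i in range(d + 1):
            cols.append(x ** i * y ** (d - i))
            labels.append(f"x^{i} y^{d - i}")
    return np.column_stack(cols), labels

t = np.linspace(-2, 2, 40)
curve = np.column_stack([t ** 2, t ** 3])        # points on the parametric curve
M, labels = monomial_matrix(curve, degree=3)

_, s, Vt = np.linalg.svd(M)                      # null vector = implicit equation
coeffs = Vt[-1] / np.max(np.abs(Vt[-1]))
for c, lab in zip(coeffs, labels):
    if abs(c) > 1e-8:
        print(f"{c:+.3f} * {lab}")               # expect x^3 and y^2 with opposite signs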

An improvement of Euclid's algorithm

Jan Zítko, Jan Kuřátko

Similarity:

The paper introduces the calculation of the greatest common divisor of two univariate polynomials. Euclid’s algorithm can be easily simulated by the reduction of the Sylvester matrix to an upper triangular form. This is performed by using $c$-$s$ transformation and $QR$-factorization methods. Both procedures are described and numerically compared. Computations are performed in the floating point environment.
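
A rough numerical sketch of the $QR$ route in Python, on exact integer data with a clear rank gap: build the Sylvester matrix, triangularize it, and read an (unnormalized) GCD off the last nonzero row of the triangular factor; orthogonal row operations preserve the row space, so that row is a scalar multiple of the GCD. The $c$-$s$ (Givens) variant compared in the paper is not shown.

import numpy as np

def sylvester(f, g):
    """Sylvester matrix of two polynomials given by coefficients, highest degree first."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = f
    for i in range(m):
        S[n + i, i:i + n + 1] = g
    return S

# f = (x-1)(x-2)(x+3), g = (x-1)(x-2)(x-5), so gcd(f, g) = (x-1)(x-2) = x^2 - 3x + 2
f = np.array([1.0, 0.0, -7.0, 6.0])
g = np.array([1.0, -8.0, 17.0, -10.0])

R = np.linalg.qr(sylvester(f, g), mode='r')
nonzero_rows = [r for r in R if np.linalg.norm(r) > 1e-10 * np.linalg.norm(R)]
gcd = nonzero_rows[-1]
lead = np.argmax(np.abs(gcd) > 1e-8 * np.abs(gcd).max())   # first significant entry
gcd = gcd[lead:] / gcd[lead]                               # make the GCD monic
print("gcd coefficients:", np.round(gcd, 6))               # expect [1, -3, 2]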

On the Kaczmarz algorithm of approximation in infinite-dimensional spaces

Stanisław Kwapień, Jan Mycielski (2001)

Studia Mathematica

Similarity:

The Kaczmarz algorithm of successive projections suggests the following concept. A sequence $(e_k)$ of unit vectors in a Hilbert space is said to be effective if for each vector $x$ in the space the sequence $(x_n)$ converges to $x$, where $(x_n)$ is defined inductively by $x_0 = 0$ and $x_n = x_{n-1} + \alpha_n e_n$, where $\alpha_n = \langle x - x_{n-1}, e_n \rangle$. We prove the effectivity of some sequences in Hilbert spaces. We generalize the concept of effectivity to sequences of vectors in Banach spaces and we prove some results for this more general concept.
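
A finite-dimensional Python transcription of the recursion above, cycling repeatedly through one fixed spanning family of unit vectors (a particular effective sequence); the unknown vector and the family are synthetic.

import numpy as np

rng = np.random.default_rng(6)
dim = 8
x = rng.standard_normal(dim)                     # the vector to be recovered

E = rng.standard_normal((dim, dim))              # a spanning family of directions
E /= np.linalg.norm(E, axis=1, keepdims=True)    # normalize to unit vectors e_k

xn = np.zeros(dim)                               # x_0 = 0
for _ in range(500):                             # repeated sweeps through e_1, ..., e_dim
    for e in E:
        xn = xn + ((x - xn) @ e) * e             # x_n = x_{n-1} + <x - x_{n-1}, e_n> e_n

print("error ||x - x_n||:", np.linalg.norm(x - xn))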

Greedy approximation and the multivariate Haar system

A. Kamont, V. N. Temlyakov (2004)

Studia Mathematica

Similarity:

We study nonlinear $m$-term approximation in a Banach space with regard to a basis. It is known that in the case of a greedy basis (like the Haar basis in $L_p([0,1])$, $1 < p < \infty$) a greedy type algorithm realizes nearly best $m$-term approximation for any individual function. In this paper we generalize this result in two directions. First, instead of a greedy algorithm we consider a weak greedy algorithm. Second, we study in detail unconditional nongreedy bases (like the multivariate Haar basis...
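
In the simplest Hilbert-space case, a greedy $m$-term approximation just keeps the $m$ largest coefficients of an orthonormal expansion. The Python sketch below does this with a discrete Haar basis on $\mathbb{R}^N$; the weak greedy algorithm and the general (non-greedy) Banach-space bases studied in the paper are not touched.

import numpy as np

def haar_matrix(n):
    """Orthonormal discrete Haar basis on R^n (rows are basis vectors), n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, [1.0, 1.0])
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])
    M = np.vstack([top, bottom])
    return M / np.linalg.norm(M, axis=1, keepdims=True)

N, m = 64, 8
H = haar_matrix(N)
t = np.linspace(0, 1, N, endpoint=False)
f = np.sin(2 * np.pi * t) + (t > 0.7)            # smooth part plus a jump

c = H @ f                                        # Haar coefficients of f
c_m = np.zeros_like(c)
largest = np.argsort(np.abs(c))[-m:]             # indices of the m largest coefficients
c_m[largest] = c[largest]
approx = H.T @ c_m                               # greedy m-term approximant

print("relative m-term error:", np.linalg.norm(f - approx) / np.linalg.norm(f))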

Uniform convergence of the greedy algorithm with respect to the Walsh system

Martin Grigoryan (2010)

Studia Mathematica

Similarity:

For any $0 < \epsilon < 1$, $p \geq 1$ and each function $f \in L^p[0,1]$ one can find a function $g \in L^\infty[0,1)$ with $\mathrm{mes}\{x \in [0,1): g \neq f\} < \epsilon$ such that its greedy algorithm with respect to the Walsh system converges uniformly on $[0,1)$ and the sequence $\{|c_k(g)| : k \in \mathrm{spec}(g)\}$ is decreasing, where $\{c_k(g)\}$ is the sequence of Fourier coefficients of $g$ with respect to the Walsh system.
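
A discrete Python illustration of greedy partial sums with respect to a Walsh(-Hadamard) basis: the sampled coefficients are summed in order of decreasing magnitude. The Hadamard ordering used here differs from the sequency-ordered Walsh system only by a permutation, which does not affect the greedy selection; nothing about the paper's uniform-convergence statement is reproduced, and the sampled function is arbitrary.

import numpy as np
from scipy.linalg import hadamard

N = 128
W = hadamard(N) / np.sqrt(N)                     # orthonormal Walsh-Hadamard rows
t = np.linspace(0, 1, N, endpoint=False)
g = np.minimum(t, 0.6) - 0.3 * (t > 0.25)        # a sampled test function

c = W @ g                                        # coefficients c_k(g)
order = np.argsort(np.abs(c))[::-1]              # greedy (decreasing |c_k|) ordering
partial = np.zeros(N)
for m, k in enumerate(order[:16], start=1):
    partial += c[k] * W[k]
    if m in (1, 4, 16):
        print(f"m = {m:2d}, sup-norm error = {np.abs(g - partial).max():.4f}")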

An adaptive $s$-step conjugate gradient algorithm with dynamic basis updating

Erin Claire Carson (2020)

Applications of Mathematics

Similarity:

The adaptive $s$-step CG algorithm is a solver for sparse symmetric positive definite linear systems designed to reduce the synchronization cost per iteration while still achieving a user-specified accuracy requirement. In this work, we improve the adaptive $s$-step conjugate gradient algorithm by the use of iteratively updated estimates of the largest and smallest Ritz values, which give approximations of the largest and smallest eigenvalues of $A$, using a technique due to G. Meurant and...
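
For reference, a plain single-step conjugate gradient solver for a symmetric positive definite system is sketched below in Python; the $s$-step reorganization, the dynamic basis updating, and the Ritz-value-based adaptivity described in the paper are not modeled here.

import numpy as np

def cg(A, b, tol=1e-10, maxit=1000):
    """Classical conjugate gradient for SPD A; one matrix-vector product per iteration."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(7)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)                  # symmetric positive definite test matrix
b = rng.standard_normal(50)
x = cg(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))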