Currently displaying 1 – 20 of 37


Conjugate gradient algorithms for conic functions

Ladislav Lukšan — 1986

Aplikace matematiky

The paper contains a description and an analysis of two modifications of the conjugate gradient method for unconstrained minimization which find a minimum of the conic function after a finite number of steps. Moreover, further extension of the conjugate gradient method is given which is based on a more general class of the model functions.
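The paper's modifications extend the finite-termination property of the classic conjugate gradient method from quadratic to conic model functions. For orientation, here is a minimal sketch of the standard linear CG method (the quadratic baseline, not the paper's conic variants), which minimizes a strictly convex quadratic in at most n steps in exact arithmetic:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-12):
    """Classic linear CG minimizing (1/2) x^T A x - b^T x for SPD A,
    equivalently solving A x = b. Terminates in at most n steps in
    exact arithmetic; the paper's methods extend this finite-step
    property to conic model functions."""
    x = x0.copy()
    r = b - A @ x          # residual = negative gradient
    p = r.copy()
    for _ in range(len(b)):
        rr = r.dot(r)
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / p.dot(Ap)   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        beta = r.dot(r) / rr     # Fletcher-Reeves-type update
        p = r + beta * p
    return x
```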

Dual method for solving a special problem of quadratic programming as a subproblem at nonlinear minimax approximation

Ladislav Lukšan — 1986

Aplikace matematiky

The paper describes the dual method for solving a special problem of quadratic programming as a subproblem at nonlinear minimax approximation. Two cases are analyzed in detail, differing in the linear dependence of the gradients of the active functions. The complete algorithm of the dual method is presented and its finite step convergence is proved.

Recursive form of general limited memory variable metric methods

Ladislav Lukšan, Jan Vlček — 2013

Kybernetika

In this report we propose a new recursive matrix formulation of limited memory variable metric methods. This approach can be used for an arbitrary update from the Broyden class (and some other updates) and also for the approximation of both the Hessian matrix and its inverse. The new recursive formulation requires approximately 4mn multiplications and additions per iteration, so it is comparable with other efficient limited memory variable metric methods. Numerical experiments concerning Algorithm...
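The 4mn per-iteration cost quoted above matches the classic two-loop recursion of L-BFGS, the best-known member of this limited-memory family. As a point of comparison (this is the standard textbook recursion, not the paper's more general recursive formulation), a minimal sketch:

```python
import numpy as np

def two_loop_direction(g, s_list, y_list):
    """Classic L-BFGS two-loop recursion: computes H_k @ g from the
    m stored pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i at O(mn)
    cost, without ever forming the n-by-n matrix H_k."""
    q = g.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # first loop: newest pair to oldest
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    # initial inverse-Hessian scaling gamma * I
    if y_list:
        gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # second loop: oldest pair to newest
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * y.dot(r)
        r += (a - b) * s
    return r
```

With a single stored pair (s, y), the result satisfies the secant equation H y = s, which is a quick correctness check for any variable metric update.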

New quasi-Newton method for solving systems of nonlinear equations

Ladislav Lukšan, Jan Vlček — 2017

Applications of Mathematics

We propose a new Broyden method for solving systems of nonlinear equations, which uses the first derivatives, but is more efficient than the Newton method (measured by the computational time) for larger dense systems. The new method updates QR or LU decompositions of nonsymmetric approximations of the Jacobian matrix, so it requires O(n²) arithmetic operations per iteration in contrast with the Newton method, which requires O(n³) operations per iteration. Computational experiments confirm the high efficiency...
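The key saving is that a rank-one Broyden update can be pushed into an existing QR or LU factorization in O(n²) work. A naive sketch of Broyden's "good" method is shown below; it refactorizes the Jacobian approximation at every step (an O(n³) solve), which is exactly the cost the paper's factorization-updating scheme avoids. This is the standard textbook method, not the paper's algorithm:

```python
import numpy as np

def broyden_solve(f, x0, B0, tol=1e-10, max_iter=50):
    """Broyden's 'good' method for f(x) = 0 with rank-one updates
    B_{k+1} = B_k + ((f_{k+1} - f_k - B_k s) s^T) / (s^T s).
    This naive version calls np.linalg.solve each step (O(n^3));
    updating a QR/LU factorization instead reduces this to O(n^2)."""
    x = x0.astype(float).copy()
    B = B0.astype(float).copy()
    fx = f(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(B, -fx)   # quasi-Newton step
        x_new = x + s
        f_new = f(x_new)
        df = f_new - fx
        B += np.outer(df - B @ s, s) / s.dot(s)   # rank-one update
        x, fx = x_new, f_new
    return x
```

On a linear system with B0 equal to the true matrix, the first step is exact, which makes for a deterministic sanity check.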

Primal interior point method for minimization of generalized minimax functions

Ladislav Lukšan, Ctirad Matonoha, Jan Vlček — 2010

Kybernetika

In this paper, we propose a primal interior-point method for large sparse generalized minimax optimization. After a short introduction, where the problem is stated, we introduce the basic equations of the Newton method applied to the KKT conditions and propose a primal interior-point method, i.e. an interior-point method that uses explicitly computed approximations of Lagrange multipliers instead of their updates. Next we describe the basic algorithm and give more details concerning its implementation...

Primal interior-point method for large sparse minimax optimization

Ladislav Lukšan, Ctirad Matonoha, Jan Vlček — 2009

Kybernetika

In this paper, we propose a primal interior-point method for large sparse minimax optimization. After a short introduction, the complete algorithm is introduced and important implementation details are given. We prove that this algorithm is globally convergent under standard mild assumptions. Thus large sparse nonconvex minimax optimization problems can be solved successfully. The results of extensive computational experiments given in this paper confirm the efficiency and robustness of the proposed...
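Both interior-point papers above target problems of the form minimize max_i f_i(x), which is nonsmooth. One standard way to see why smooth (Newton-type) machinery applies at all is log-sum-exp smoothing, which is related in spirit to barrier/interior-point treatments of minimax but is not the papers' algorithm. A minimal sketch with plain gradient descent (the function and step names here are illustrative assumptions):

```python
import numpy as np

def smoothed_minimax(fs, grads, x0, mu=0.1, step=None, iters=500):
    """Approximately minimize max_i f_i(x) via the smooth surrogate
    F_mu(x) = mu * log(sum_i exp(f_i(x) / mu)),
    whose gradient is a softmax-weighted combination of the f_i
    gradients. Smaller mu tightens the approximation; step ~ mu
    keeps plain gradient descent stable."""
    x = np.asarray(x0, dtype=float)
    if step is None:
        step = mu
    for _ in range(iters):
        vals = np.array([f(x) for f in fs]) / mu
        w = np.exp(vals - vals.max())
        w /= w.sum()                                   # softmax weights
        g = sum(wi * gi(x) for wi, gi in zip(w, grads))
        x = x - step * g
    return x
```

For example, max(x - 1, -x - 1) in one dimension attains its minimum at x = 0, and the smoothed iteration converges there from nearby starts.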
