
Conjugate gradient algorithms for conic functions

Ladislav Lukšan — 1986

Aplikace matematiky

The paper contains a description and an analysis of two modifications of the conjugate gradient method for unconstrained minimization which find the minimum of a conic function after a finite number of steps. Moreover, a further extension of the conjugate gradient method is given, based on a more general class of model functions.
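
As background, the classical nonlinear conjugate gradient iteration that such modifications build on can be sketched as follows. The Fletcher-Reeves coefficient, the backtracking line search, and the quadratic test function are illustrative assumptions, not the paper's conic-function algorithm.

```python
# Minimal sketch of the classical nonlinear conjugate gradient method
# (Fletcher-Reeves variant). The paper's conic-function modifications
# replace the direction and step-length rules sketched here.
import numpy as np

def conjugate_gradient(f, grad, x, iters=100, tol=1e-8):
    g = grad(x)
    d = -g                                  # start with steepest descent
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:                   # safeguard: restart if not descent
            d = -g
        t = 1.0                             # backtracking (Armijo) line search
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d               # conjugate direction update
        g = g_new
    return x

# Example: minimize a small convex quadratic.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = conjugate_gradient(lambda x: 0.5 * x @ A @ x - b @ x,
                           lambda x: A @ x - b,
                           np.zeros(2))
```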

Dual method for solving a special problem of quadratic programming as a subproblem at nonlinear minimax approximation

Ladislav Lukšan — 1986

Aplikace matematiky

The paper describes the dual method for solving a special problem of quadratic programming as a subproblem in nonlinear minimax approximation. Two cases are analyzed in detail, differing in the linear dependence of the gradients of the active functions. The complete algorithm of the dual method is presented and its finite-step convergence is proved.
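
For orientation, the quadratic programming subproblem that typically arises in nonlinear minimax approximation, i.e. when minimizing F(x) = max_i f_i(x), has the standard form below; this is the usual textbook construction and notation, not necessarily the paper's exact formulation.

```latex
% Standard QP subproblem of minimax approximation: d is the search
% direction, z majorizes the linearized functions, H approximates a
% Hessian, and g_i = \nabla f_i(x). Illustrative notation only.
\min_{d \in \mathbb{R}^n,\; z \in \mathbb{R}}
  \; z + \tfrac{1}{2}\, d^{T} H d
\quad \text{subject to} \quad
  f_i(x) + g_i^{T} d \le z, \qquad i = 1, \dots, m .
```

The dual of such a QP has only nonnegativity and normalization constraints on the multipliers, which is what makes a dual method attractive here.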

New quasi-Newton method for solving systems of nonlinear equations

Ladislav Lukšan, Jan Vlček — 2017

Applications of Mathematics

We propose a new Broyden method for solving systems of nonlinear equations which uses first derivatives but is more efficient than the Newton method (measured by computational time) for larger dense systems. The new method updates QR or LU decompositions of nonsymmetric approximations of the Jacobian matrix, so it requires O(n²) arithmetic operations per iteration, in contrast with the Newton method, which requires O(n³) operations per iteration. Computational experiments confirm the high efficiency...
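
A minimal sketch of the underlying "good" Broyden update may help. The plain version below re-solves a dense linear system at every step, which is O(n³); the method in the paper instead updates QR or LU factors of B to bring the per-iteration cost down to O(n²). The test system and starting point are illustrative assumptions.

```python
# Sketch of the classical "good" Broyden quasi-Newton iteration for
# F(x) = 0. Here B d = -F(x) is solved from scratch (O(n^3) per step);
# the paper updates QR/LU factors of B instead, giving O(n^2) per step.
import numpy as np

def broyden(F, x, B, iters=50, tol=1e-10):
    Fx = F(x)
    for _ in range(iters):
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(B, -Fx)          # quasi-Newton step
        x = x + d
        F_new = F(x)
        y = F_new - Fx
        # Rank-one update enforcing the secant condition B_{k+1} d = y.
        B = B + np.outer(y - B @ d, d) / d.dot(d)
        Fx = F_new
    return x

# Example system with root (1, 2); B is initialized with the exact
# Jacobian at the starting point, consistent with using first derivatives.
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
x0 = np.array([1.5, 1.5])
J0 = np.array([[2 * x0[0], 1.0], [1.0, 2 * x0[1]]])
root = broyden(F, x0, J0)
```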

Recursive form of general limited memory variable metric methods

Ladislav Lukšan, Jan Vlček — 2013

Kybernetika

In this report we propose a new recursive matrix formulation of limited memory variable metric methods. This approach can be used for an arbitrary update from the Broyden class (and some other updates) and also for the approximation of both the Hessian matrix and its inverse. The new recursive formulation requires approximately 4mn multiplications and additions per iteration, so it is comparable with other efficient limited memory variable metric methods. Numerical experiments concerning Algorithm...
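
For comparison, the best-known scheme in this cost class is the BFGS two-loop recursion, which likewise needs about 4mn multiplications and additions to apply the implicit inverse Hessian; the paper's recursive formulation generalizes this kind of computation to an arbitrary Broyden-class update. The sketch below is the standard two-loop recursion, not the paper's formulation.

```python
# Standard L-BFGS two-loop recursion: computes the search direction
# -H g from the m most recent pairs (s_i, y_i) without ever forming
# the n-by-n matrix H, in roughly 4mn multiplications and additions.
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Return -H @ g, H being the implicit L-BFGS inverse Hessian."""
    q = g.copy()
    alphas = []
    # First loop: newest to oldest stored pair.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        alpha = s.dot(q) / y.dot(s)
        alphas.append(alpha)
        q -= alpha * y
    # Scaled initial matrix H_0 = gamma * I.
    if s_list:
        q *= s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
    # Second loop: oldest to newest stored pair.
    for s, y, alpha in zip(s_list, y_list, reversed(alphas)):
        beta = y.dot(q) / y.dot(s)
        q += (alpha - beta) * s
    return -q
```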

Primal interior point method for minimization of generalized minimax functions

Ladislav Lukšan, Ctirad Matonoha, Jan Vlček — 2010

Kybernetika

In this paper, we propose a primal interior-point method for large sparse generalized minimax optimization. After a short introduction, where the problem is stated, we introduce the basic equations of the Newton method applied to the KKT conditions and propose a primal interior-point method (i.e., an interior-point method that uses explicitly computed approximations of Lagrange multipliers instead of their updates). Next we describe the basic algorithm and give more details concerning its implementation...
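
A primal interior-point treatment of the plain minimax problem min_x max_i f_i(x) typically replaces it with a sequence of smooth log-barrier problems of the form below; this is the standard construction for the non-generalized case and an illustration only, not the paper's exact formulation.

```latex
% Log-barrier reformulation of \min_x \max_i f_i(x): the variable z
% majorizes all f_i, the barrier keeps the slacks z - f_i(x) positive,
% and driving \mu \to 0 recovers the original minimax problem.
\min_{x \in \mathbb{R}^n,\; z \in \mathbb{R}}
  \; B_\mu(x, z) = z - \mu \sum_{i=1}^{m} \log\bigl(z - f_i(x)\bigr),
\qquad \mu > 0 .
```

Applying the Newton method to the stationarity conditions of B_μ and decreasing μ between iterations yields the basic primal interior-point iteration.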

Automatic differentiation and its program realization

Jan Hartman, Ladislav Lukšan, Jan Zítko — 2009

Kybernetika

Automatic differentiation is an effective method for evaluating the derivatives of a function defined by a formula or a program. Automatic differentiation transforms a program that evaluates the value of a function into a program that also evaluates the values of its derivatives. The computed values are exact up to machine precision and their evaluation is very fast. In this article, we describe a program realization of automatic differentiation. This implementation is prepared in the UFO system, but its...
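
The core idea can be illustrated with forward-mode automatic differentiation via dual numbers: each overloaded arithmetic operation propagates a derivative value alongside the function value, which is one common way such a program transformation is realized. The sketch below is a generic illustration, not the UFO implementation.

```python
# Forward-mode automatic differentiation with dual numbers: each value
# carries (val, dot), and every overloaded operation also applies the
# corresponding differentiation rule, so evaluating f(Dual(x, 1.0))
# yields both f(x) and f'(x), exact up to machine precision.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,                 # product rule
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(x):
    # Chain rule for an elementary function.
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# f(x) = x*sin(x) + 2x; seeding dot = 1.0 differentiates with respect to x.
x = Dual(1.5, 1.0)
y = x * sin(x) + 2 * x
print(y.val, y.dot)   # f(1.5) and f'(1.5) = sin(1.5) + 1.5*cos(1.5) + 2
```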
