
A globally convergent non-interior point algorithm with full Newton step for second-order cone programming

Liang Fang, Guoping He, Li Sun (2009)

Applications of Mathematics

A non-interior point algorithm based on projection for second-order cone programming problems is proposed and analyzed. The main idea of the algorithm is that we cast the complementarity equation in the primal-dual optimality conditions as a projection equation. By using this reformulation, we only need to solve a system of linear equations with the same coefficient matrix and compute two simple projections at each iteration, without performing any line search. This algorithm can start from an arbitrary...
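
The projection this reformulation rests on has a closed form: a point $z = (z_0, \bar z)$ is projected onto the second-order (Lorentz) cone $\{(x_0, \bar x) : \|\bar x\| \le x_0\}$ by a simple spectral formula. A minimal NumPy sketch (the function name is ours, not the paper's):

```python
import numpy as np

def project_soc(z):
    """Euclidean projection of z = (z0, z_bar) onto the second-order
    (Lorentz) cone K = {(x0, x_bar) : ||x_bar|| <= x0}."""
    z0, z_bar = z[0], z[1:]
    r = np.linalg.norm(z_bar)
    if r <= z0:           # already inside the cone
        return z.copy()
    if r <= -z0:          # inside the polar cone: projection is the origin
        return np.zeros_like(z)
    # remaining case: closed-form projection onto the cone's boundary
    t = (z0 + r) / 2.0
    return np.concatenate(([t], (t / r) * z_bar))
```

An iteration of a projection-based method of this kind evaluates two such projections plus one linear solve, which is what keeps the per-iteration cost low.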

A modified limited-memory BNS method for unconstrained minimization derived from the conjugate directions idea

Jan Vlček, Ladislav Lukšan (2015)

Programs and Algorithms of Numerical Mathematics

A modification of the limited-memory variable metric BNS method for large scale unconstrained optimization of the differentiable function $f : \mathbb{R}^N \to \mathbb{R}$ is considered, which consists in corrections (based on the idea of conjugate directions) of difference vectors for better satisfaction of the previous quasi-Newton conditions. In comparison with [11], more previous iterations can be utilized here. For quadratic objective functions, the improvement of convergence is the best one in some sense, all stored corrected...
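
For orientation, the role of the stored difference vectors $s_j = x_{j+1} - x_j$, $y_j = g_{j+1} - g_j$ can be seen in the classical two-loop recursion below. This is a plain L-BFGS sketch, not the BNS compact representation nor the paper's correction step, which modifies the stored vectors so that earlier quasi-Newton conditions are satisfied better:

```python
import numpy as np

def lbfgs_direction(grad, S, Y):
    """Apply the limited-memory inverse-Hessian approximation built
    from difference pairs (s_j, y_j), oldest first, to the current
    gradient, returning a descent direction (classical two-loop
    recursion; shown only to illustrate how the stored vectors
    define the variable metric)."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(S, Y)]
    alphas = []
    for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    # initial scaling H0 = (s'y / y'y) I  (Shanno-Phua)
    gamma = np.dot(S[-1], Y[-1]) / np.dot(Y[-1], Y[-1])
    r = gamma * q
    for s, y, rho, a in zip(S, Y, rhos, reversed(alphas)):
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return -r
```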

A new non-interior continuation method for $P_0$-NCP based on a SSPM-function

Liang Fang (2011)

Applications of Mathematics

In this paper, we consider a new non-interior continuation method for the solution of the nonlinear complementarity problem with a $P_0$-function ($P_0$-NCP). The proposed algorithm is based on a smoothing symmetric perturbed minimum function (SSPM-function), and one only needs to solve one system of linear equations and perform one Armijo-type line search at each iteration. The method is proved to possess global and local convergence under weaker conditions. Preliminary numerical results indicate that...
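
Smoothed minimum functions of this kind replace the nonsmooth complementarity reformulation $\min(a, b) = 0$ (equivalent to $a \ge 0$, $b \ge 0$, $ab = 0$) by a differentiable approximation. The sketch below uses the standard smoothed minimum as a stand-in; the paper's SSPM-function is of this family but not necessarily this exact formula:

```python
import numpy as np

def smoothed_min(a, b, mu):
    """Differentiable approximation of min(a, b); tends to min(a, b)
    as the smoothing parameter mu -> 0.  Zeros of min(a, b) encode
    the complementarity condition a >= 0, b >= 0, a*b = 0."""
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2))

# min(2, 3) = 2; the smoothed value converges as mu shrinks
for mu in (1.0, 0.1, 0.001):
    print(mu, smoothed_min(2.0, 3.0, mu))
```

A continuation method of this type drives $\mu$ to zero while taking Newton steps on the smoothed system, which is where the single linear solve per iteration comes from.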

A new one-step smoothing Newton method for second-order cone programming

Jingyong Tang, Guoping He, Li Dong, Liang Fang (2012)

Applications of Mathematics

In this paper, we present a new one-step smoothing Newton method for solving the second-order cone programming problem (SOCP). Based on a new smoothing function of the well-known Fischer-Burmeister function, the SOCP is approximated by a family of parameterized smooth equations. Our algorithm solves only one system of linear equations and performs only one Armijo-type line search at each iteration. It can start from an arbitrary initial point and does not require the iterative points to be in the sets...
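
For intuition, the scalar Fischer-Burmeister function and one common way to smooth it are sketched below. The paper works with the second-order-cone (vector-valued) analogue and a new smoothing function, so this is illustration only:

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister: fb(a, b) = 0  iff  a >= 0, b >= 0, a*b = 0."""
    return a + b - np.hypot(a, b)

def fb_smoothed(a, b, mu):
    """A common smoothing: adding 2*mu^2 under the root makes the
    function differentiable everywhere; fb_smoothed -> fb as mu -> 0.
    (Scalar stand-in, not necessarily the paper's smoothing function.)"""
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)
```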

A new simultaneous subgradient projection algorithm for solving a multiple-sets split feasibility problem

Yazheng Dang, Yan Gao (2014)

Applications of Mathematics

In this paper, we present a simultaneous subgradient algorithm for solving the multiple-sets split feasibility problem. The algorithm employs two extrapolated factors in each iteration, which not only improves feasibility by eliminating the need to compute the Lipschitz constant, but also enhances flexibility by applying a variable step size. The convergence of the algorithm is proved under suitable conditions. Numerical results illustrate that the new algorithm has better convergence than the...
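
A generic CQ-type projection step with a residual-based variable step size, which avoids estimating $\|A\|^2$ or a Lipschitz constant, looks as follows. This single-set sketch only shows the mechanism; it is not the paper's simultaneous multiple-sets scheme with two extrapolated factors:

```python
import numpy as np

def cq_step(x, A, proj_C, proj_Q):
    """One projection step for the split feasibility problem:
    find x in C with A @ x in Q.  The step size is computed from the
    current residual (a common self-adaptive rule), so no Lipschitz
    constant is needed."""
    Ax = A @ x
    resid = Ax - proj_Q(Ax)            # violation of "A x in Q"
    grad = A.T @ resid                 # gradient of 0.5 * ||resid||^2
    if np.allclose(grad, 0.0):
        return proj_C(x)               # already (almost) feasible
    gamma = np.dot(resid, resid) / np.dot(grad, grad)   # variable step
    return proj_C(x - gamma * grad)
```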

A nonsmooth version of the univariate optimization algorithm for locating the nearest extremum (locating extremum in nonsmooth univariate optimization)

Marek Smietanski (2008)

Open Mathematics

An algorithm for univariate optimization using a linear lower bounding function is extended to the nonsmooth case by using the generalized gradient instead of the derivative. A convergence theorem is proved under the condition of semismoothness. This approach gives global superlinear convergence of the algorithm, which is a generalized Newton-type method.
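
The generalized-Newton idea is easy to state in one dimension: iterate on the first-order condition $g(t) = 0$ (with $g = f'$), replacing $g'$ by an element of the generalized gradient. A hypothetical sketch, with illustrative names that are not the paper's notation:

```python
def generalized_newton(g, subgrad, t0, tol=1e-10, max_iter=100):
    """Newton-type iteration for a zero of a possibly nonsmooth
    univariate function g; `subgrad(t)` returns some element of the
    generalized gradient of g at t, standing in for g'(t)."""
    t = t0
    for _ in range(max_iter):
        gt = g(t)
        if abs(gt) < tol:
            return t
        t = t - gt / subgrad(t)
    return t

# example: g(t) = |t| - 1 has zeros at +-1 (semismooth at t = 0)
root = generalized_newton(lambda t: abs(t) - 1.0,
                          lambda t: 1.0 if t >= 0 else -1.0,
                          t0=5.0)
print(root)   # 1.0
```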

A note on direct methods for approximations of sparse Hessian matrices

Miroslav Tůma (1988)

Aplikace matematiky

The need to compute large sparse Hessian matrices has given rise to many methods for their effective approximation by differences of gradients. We adopt the so-called direct methods for this problem, which we faced when developing programs for nonlinear optimization. A new approach used in the framework of symmetric sequential coloring is described. Numerical results illustrate the differences between this method and the popular Powell-Toint method.
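
The idea behind such direct methods is that columns of the Hessian with non-overlapping sparsity patterns can be estimated from a single gradient difference. A toy sketch for a tridiagonal Hessian using the basic column-grouping (CPR) idea; the paper's symmetric sequential coloring is a refinement that additionally exploits symmetry:

```python
import numpy as np

def tridiag_hessian_by_differences(grad, x, h=1e-6):
    """Estimate a Hessian known to be tridiagonal from only 3 gradient
    differences: columns i, i+3, i+6, ... have disjoint row patterns,
    so one perturbation direction serves a whole color group."""
    n = x.size
    g0 = grad(x)
    H = np.zeros((n, n))
    for color in range(3):
        d = np.zeros(n)
        d[color::3] = h                    # perturb one color group
        dg = (grad(x + d) - g0) / h        # approximately H @ (d / h)
        for i in range(color, n, 3):
            for r in range(max(0, i - 1), min(n, i + 2)):
                H[r, i] = dg[r]            # column i lives in rows i-1..i+1
    return H

# quick check on f(x) = sum(x**3): gradient 3*x**2, Hessian diag(6*x)
x = np.arange(1.0, 6.0)
print(np.round(tridiag_hessian_by_differences(lambda v: 3 * v ** 2, x), 3))
```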

A numerical method of fitting a multiparameter nonlinear function to experimental data in the $L_1$ norm

Jaromír Jakeš (1988)

Aplikace matematiky

A numerical method of fitting a multiparameter function, non-linear in the parameters which are to be estimated, to the experimental data in the $L_1$ norm (i.e., by minimizing the sum of absolute values of errors of the experimental data) has been developed. This method starts with the least squares solution for the function and then minimizes the expression $\sum_i (x_i^2 + a^2)^{1/2}$, where $x_i$ is the error of the $i$-th experimental datum, starting with an $a$ comparable with the root-mean-square error of the least squares solution...
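
A continuation of this kind is easy to reproduce: minimize $\sum_i (x_i^2 + a^2)^{1/2}$ for a decreasing sequence of $a$, starting from the least squares fit. The schedule and helper names below are our assumptions, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

def l1_fit(model, p0, t, y, a_factors=(1.0, 0.1, 0.01, 0.001)):
    """Approximate L1 fitting: minimize sum_i sqrt(r_i(p)^2 + a^2) for
    a shrinking smoothing parameter a, starting from the least squares
    solution.  `model(p, t)` returns the fitted values."""
    # least squares starting point
    p = minimize(lambda q: np.sum((y - model(q, t)) ** 2), p0).x
    rms = np.sqrt(np.mean((y - model(p, t)) ** 2))
    for f in a_factors:                # drive a toward 0
        a = f * rms
        p = minimize(lambda q: np.sum(np.sqrt((y - model(q, t)) ** 2
                                              + a ** 2)), p).x
    return p

# fit y = p0 * exp(p1 * t) with one outlier, which L1 fitting resists
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
y[5] += 1.0
p = l1_fit(lambda q, t: q[0] * np.exp(q[1] * t), np.array([1.0, -1.0]), t, y)
```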

A numerically stable least squares solution to the quadratic programming problem

E. Übi (2008)

Open Mathematics

The strictly convex quadratic programming problem is transformed to the least distance problem: finding the minimum-norm solution to a system of linear inequalities. This problem is equivalent to a linear least squares problem on the positive orthant. It is solved using orthogonal transformations, which are stored as products. As in the revised simplex method, an auxiliary matrix is used for the computations. Compared to modified-simplex type methods, the presented dual algorithm QPLS...
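
The chain QP → least distance problem → least squares that the abstract describes can be illustrated with the classical Lawson-Hanson reduction of the least distance problem to nonnegative least squares; the paper's QPLS algorithm itself is a different, dual method:

```python
import numpy as np
from scipy.optimize import nnls

def ldp(G, h):
    """Least distance programming: minimize ||x|| subject to G x >= h,
    solved via one nonnegative least squares problem (Lawson-Hanson)."""
    m, n = G.shape
    E = np.vstack([G.T, h.reshape(1, -1)])   # (n+1) x m
    f = np.zeros(n + 1)
    f[-1] = 1.0
    u, _ = nnls(E, f)                        # min ||E u - f||, u >= 0
    r = E @ u - f
    if np.linalg.norm(r) < 1e-12:
        raise ValueError("constraints G x >= h are incompatible")
    return -r[:n] / r[-1]

# nearest point to the origin with x0 + x1 >= 2: x = (1, 1)
print(ldp(np.array([[1.0, 1.0]]), np.array([2.0])))
```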
