Displaying 1 – 20 of 34

A conjugate gradient method with quasi-Newton approximation

Jonas Koko (2000)

Applicationes Mathematicae

The conjugate gradient method of Liu and Storey is an efficient minimization algorithm which uses second-derivative information, without storing matrices, via a finite-difference approximation. It is shown that the finite-difference scheme can be removed by using a quasi-Newton approximation to compute the search direction, without loss of convergence. A conjugate gradient method based on the BFGS approximation is proposed and compared with existing methods of the same class.
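The Liu-Storey search direction referred to in this abstract can be sketched in a few lines. The test problem, backtracking Armijo line search, and restart safeguard below are illustrative assumptions, not details of the paper:

```python
import numpy as np

def liu_storey_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear conjugate gradient with the Liu-Storey coefficient
    beta_k = g_{k+1}^T (g_{k+1} - g_k) / (-d_k^T g_k)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking Armijo line search (illustrative choice)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (-(d @ g))  # Liu-Storey formula
        d = -g_new + beta * d
        if d @ g_new >= 0:
            d = -g_new  # safeguard: restart if not a descent direction
        x, g = x_new, g_new
    return x

# Example: minimize the convex quadratic f(x) = x^T A x / 2 - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = liu_storey_cg(f, grad, np.zeros(2))  # converges to A^{-1} b
```

For a convex quadratic the minimizer is the solution of A x = b, which the iteration recovers to the gradient tolerance.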

A modified limited-memory BNS method for unconstrained minimization derived from the conjugate directions idea

Vlček, Jan, Lukšan, Ladislav (2015)

Programs and Algorithms of Numerical Mathematics

A modification of the limited-memory variable metric BNS method for large-scale unconstrained optimization of a differentiable function f : ℝ^N → ℝ is considered, which consists in corrections (based on the idea of conjugate directions) of the difference vectors for better satisfaction of the previous quasi-Newton conditions. In comparison with [11], more previous iterations can be utilized here. For quadratic objective functions, the improvement of convergence is the best one in some sense, all stored corrected...

A new one-step smoothing Newton method for second-order cone programming

Jingyong Tang, Guoping He, Li Dong, Liang Fang (2012)

Applications of Mathematics

In this paper, we present a new one-step smoothing Newton method for solving the second-order cone programming (SOCP). Based on a new smoothing function of the well-known Fischer-Burmeister function, the SOCP is approximated by a family of parameterized smooth equations. Our algorithm solves only one system of linear equations and performs only one Armijo-type line search at each iteration. It can start from an arbitrary initial point and does not require the iterative points to be in the sets...
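For context, the scalar Fischer-Burmeister function that such smoothing methods build on, together with one common smoothing, can be written down directly. The cone-valued version used in the paper, and the paper's own new smoothing function, differ from this scalar sketch:

```python
import numpy as np

def fischer_burmeister(a, b):
    """Scalar Fischer-Burmeister function phi(a, b) = a + b - sqrt(a^2 + b^2).
    phi(a, b) = 0 exactly when a >= 0, b >= 0 and a*b = 0."""
    return a + b - np.sqrt(a**2 + b**2)

def smoothed_fb(a, b, mu):
    """One common smoothing (illustrative; not necessarily the paper's):
    phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).
    Differentiable everywhere for mu > 0, and phi_mu -> phi as mu -> 0."""
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

# Complementary pairs are exactly the roots of phi:
root = fischer_burmeister(0.0, 3.0)      # 0.0: (0, 3) is complementary
nonroot = fischer_burmeister(1.0, 1.0)   # > 0: (1, 1) is not
```

Replacing phi by phi_mu turns the nonsmooth complementarity system into a family of smooth equations parameterized by mu, which a Newton method can then solve while driving mu to zero.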

A nonsmooth version of the univariate optimization algorithm for locating the nearest extremum (locating extremum in nonsmooth univariate optimization)

Marek Smietanski (2008)

Open Mathematics

An algorithm for univariate optimization using a linear lower bounding function is extended to the nonsmooth case by using the generalized gradient instead of the derivative. A convergence theorem is proved under a semismoothness condition. This approach yields global superlinear convergence of the algorithm, which is a generalized Newton-type method.

A smoothing Newton method for the second-order cone complementarity problem

Jingyong Tang, Guoping He, Li Dong, Liang Fang, Jinchuan Zhou (2013)

Applications of Mathematics

In this paper we introduce a new smoothing function and show that it is coercive under suitable assumptions. Based on this new function, we propose a smoothing Newton method for solving the second-order cone complementarity problem (SOCCP). The proposed algorithm solves only one linear system of equations and performs only one line search at each iteration. It is shown that any accumulation point of the iteration sequence generated by the proposed algorithm is a solution to the SOCCP. Furthermore,...

An accurate active set Newton algorithm for large scale bound constrained optimization

Li Sun, Guoping He, Yongli Wang, Changyin Zhou (2011)

Applications of Mathematics

A new algorithm for solving large scale bound constrained minimization problems is proposed. The algorithm is based on an accurate identification technique of the active set proposed by Facchinei, Fischer and Kanzow in 1998. A further division of the active set yields the global convergence of the new algorithm. In particular, the convergence rate is superlinear without requiring the strict complementarity assumption. Numerical tests demonstrate the efficiency and performance of the present strategy...
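As a rough illustration of the active-set idea, a single projected-gradient step on a bound-constrained problem with a naive active-set estimate might look as follows. This is only a sketch: the paper relies on the sharper identification technique of Facchinei, Fischer and Kanzow rather than this simple estimate:

```python
import numpy as np

def projected_gradient_step(x, g, lower, upper, t=1.0):
    """One projected-gradient step for min f(x) s.t. lower <= x <= upper,
    followed by a naive active-set estimate: the indices of variables
    that land exactly on a bound after the projection."""
    x_new = np.clip(x - t * g, lower, upper)
    active = np.flatnonzero((x_new == lower) | (x_new == upper))
    return x_new, active

# Both variables hit a bound here, so both are estimated as active:
x_new, active = projected_gradient_step(
    np.array([0.5, 0.5]), np.array([2.0, -2.0]), lower=0.0, upper=1.0)
```

Once the active set is (correctly) identified, the method can restrict a Newton-type step to the remaining free variables, which is what makes superlinear convergence possible without strict complementarity.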

An interior point algorithm for convex quadratic programming with strict equilibrium constraints

Rachid Benouahboun, Abdelatif Mansouri (2010)

RAIRO - Operations Research

We describe an interior point algorithm for convex quadratic problems with strict complementarity constraints. We show that under some assumptions the approach requires a total of O(nL) iterations, where L is the input size of the problem. The algorithm generates a sequence of problems, each of which is approximately solved by Newton's method.

An interior point algorithm for convex quadratic programming with strict equilibrium constraints

Rachid Benouahboun, Abdelatif Mansouri (2005)

RAIRO - Operations Research - Recherche Opérationnelle

We describe an interior point algorithm for convex quadratic problems with strict complementarity constraints. We show that under some assumptions the approach requires a total of O(nL) iterations, where L is the input size of the problem. The algorithm generates a sequence of problems, each of which is approximately solved by Newton's method.

Computing minimum norm solution of a specific constrained convex nonlinear problem

Saeed Ketabchi, Hossein Moosaei (2012)

Kybernetika

Characterizing the solution set of a constrained convex problem is a well-studied subject. In this paper, we focus on the minimum norm solution of a specific constrained convex nonlinear problem and reformulate it as an unconstrained minimization problem by using the alternative theorem. The objective function of this problem is piecewise quadratic, convex, and once differentiable. To minimize this function, we provide a new Newton-type method with global convergence properties....

Convergence of prox-regularization methods for generalized fractional programming

Ahmed Roubi (2002)

RAIRO - Operations Research - Recherche Opérationnelle

We analyze the convergence of the prox-regularization algorithms introduced in [1] for solving generalized fractional programs, without assuming that the optimal solution set of the considered problem is nonempty. Since the objective functions vary with the iterations in the auxiliary problems generated by the Dinkelbach-type algorithms DT1 and DT2, we let the regularizing parameter vary as well. We also study the convergence when the iterates are only...

Convergence of Prox-Regularization Methods for Generalized Fractional Programming

Ahmed Roubi (2010)

RAIRO - Operations Research

We analyze the convergence of the prox-regularization algorithms introduced in [1] for solving generalized fractional programs, without assuming that the optimal solution set of the considered problem is nonempty. Since the objective functions vary with the iterations in the auxiliary problems generated by the Dinkelbach-type algorithms DT1 and DT2, we let the regularizing parameter vary as well. We also study the convergence when the iterates are only ηk-minimizers...

Globalization of SQP-methods in control of the instationary Navier-Stokes equations

Michael Hintermüller, Michael Hinze (2002)

ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique

A numerically inexpensive globalization strategy for sequential quadratic programming methods (SQP-methods) in the control of the instationary Navier-Stokes equations is investigated. Based on the proper functional analytic setting, a convergence analysis for the globalized method is given. It is argued that the a priori formidable SQP-step can be decomposed into linear primal and linear adjoint systems, which is amenable to existing CFL-software. A report on a numerical test demonstrates the feasibility...

Globalization of SQP-Methods in Control of the Instationary Navier-Stokes Equations

Michael Hintermüller, Michael Hinze (2010)

ESAIM: Mathematical Modelling and Numerical Analysis

A numerically inexpensive globalization strategy for sequential quadratic programming methods (SQP-methods) in the control of the instationary Navier-Stokes equations is investigated. Based on the proper functional analytic setting, a convergence analysis for the globalized method is given. It is argued that the a priori formidable SQP-step can be decomposed into linear primal and linear adjoint systems, which is amenable to existing CFL-software. A report on a numerical test demonstrates the feasibility...

How much do approximate derivatives hurt filter methods?

Caroline Sainvitu (2009)

RAIRO - Operations Research

In this paper, we examine the influence of approximate first and/or second derivatives on the filter-trust-region algorithm designed for solving unconstrained nonlinear optimization problems and proposed by Gould, Sainvitu and Toint in [12]. Numerical experiments carried out on small-scale unconstrained problems from the CUTEr collection describe the effect of the use of approximate derivatives on the robustness and the efficiency of the filter-trust-region method.
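The approximate derivatives studied in this entry are typically obtained by finite differences. A minimal sketch of forward and central difference gradients (the step size h and the test function are illustrative choices):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6, scheme="central"):
    """Finite-difference gradient approximation.
    Forward differences: n extra evaluations, O(h) error.
    Central differences: 2n evaluations, O(h^2) error."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        if scheme == "central":
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        else:  # forward
            g[i] = (f(x + e) - f(x)) / h
    return g

# Example: f(x) = x0^2 + 3*x1 has exact gradient (2, 3) at (1, 2)
f = lambda x: x[0]**2 + 3.0 * x[1]
approx = fd_gradient(f, [1.0, 2.0])
```

The trade-off between the two schemes (accuracy versus number of function evaluations) is exactly the kind of perturbation whose effect on robustness the paper measures.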

Inverse modelling of image-based patient-specific blood vessels: zero-pressure geometry and in vivo stress incorporation

Joris Bols, Joris Degroote, Bram Trachet, Benedict Verhegghe, Patrick Segers, Jan Vierendeels (2013)

ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique

In vivo visualization of cardiovascular structures is possible using medical images. However, one has to realize that the resulting 3D geometries correspond to in vivo conditions. This entails that an internal stress state is present in the in vivo measured geometry of e.g. a blood vessel, due to the blood pressure. In order to correct for this in vivo stress, this paper presents an inverse method to restore the original zero-pressure geometry of a structure, and to recover the in vivo...

Modifications of the limited-memory BFGS method based on the idea of conjugate directions

Vlček, Jan, Lukšan, Ladislav (2013)

Programs and Algorithms of Numerical Mathematics

Simple modifications of the limited-memory BFGS method (L-BFGS) for large-scale unconstrained optimization are considered, which consist in corrections (derived from the idea of conjugate directions) of the difference vectors used, utilizing information from the preceding iteration. For quadratic objective functions, the improvement of convergence is the best one in some sense and all stored difference vectors are conjugate for unit stepsizes. The algorithm is globally convergent for convex sufficiently...
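Both limited-memory entries in this list build on the classical L-BFGS machinery. The standard two-loop recursion that computes a search direction from the stored difference pairs can be sketched as follows (this is the unmodified textbook recursion, not the corrected variant proposed in the paper):

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Classical L-BFGS two-loop recursion: returns -H g, where H is the
    implicit inverse-Hessian approximation built from the stored pairs
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i."""
    q = np.asarray(g, dtype=float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        alphas.append(alpha)
        q -= alpha * y
    if s_list:
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)  # standard initial scaling H0 = gamma * I
    for (s, y), alpha in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return -q

# The update satisfies the secant equation for the most recent pair,
# i.e. H y_last = s_last, so the direction for g = y_last is -s_last:
s_list = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
y_list = [np.array([2.0, 0.0]), np.array([0.0, 3.0])]
d = lbfgs_direction(np.array([0.0, 3.0]), s_list, y_list)
```

The corrections studied in the papers above act on the stored s and y vectors before they enter this recursion, aiming to satisfy more of the previous quasi-Newton (secant) conditions at once.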

Newton and conjugate gradient for harmonic maps from the disc into the sphere

Morgan Pierre (2004)

ESAIM: Control, Optimisation and Calculus of Variations

We compute numerically the minimizers of the Dirichlet energy E(u) = (1/2) ∫_{B²} |∇u|² dx among maps u : B² → S² from the unit disc into the unit sphere that satisfy a boundary condition and a degree condition. We use a Sobolev gradient algorithm for the minimization and we prove that its continuous version preserves the degree. For the discretization of the problem we use continuous P1 finite elements. We propose an original mesh-refining strategy needed to preserve the degree with the discrete version of the algorithm (which is a preconditioned...

Newton and conjugate gradient for harmonic maps from the disc into the sphere

Morgan Pierre (2010)

ESAIM: Control, Optimisation and Calculus of Variations

We compute numerically the minimizers of the Dirichlet energy E(u) = (1/2) ∫_{B²} |∇u|² dx among maps u : B² → S² from the unit disc into the unit sphere that satisfy a boundary condition and a degree condition. We use a Sobolev gradient algorithm for the minimization and we prove that its continuous version preserves the degree. For the discretization of the problem we use continuous P1 finite elements. We propose an original mesh-refining strategy needed to preserve the degree with the discrete version of the algorithm (which...
