The conjugate gradient method of Liu and Storey is an efficient minimization algorithm that uses second-derivative information, obtained by finite-difference approximation without storing matrices. It is shown that the finite-difference scheme can be removed by using a quasi-Newton approximation to compute the search direction, without loss of convergence. A conjugate gradient method based on the BFGS approximation is proposed and compared with existing methods of the same class.
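To fix ideas, a generic nonlinear conjugate gradient iteration with an Armijo backtracking line search can be sketched as follows. This is only an illustration of the method class; the Liu-Storey and BFGS-based variants discussed above use different formulas for the direction update.

```python
import numpy as np

def conjugate_gradient(f, grad, x0, tol=1e-8, max_iter=200):
    """Nonlinear CG with a PR+ beta and Armijo backtracking.

    Illustrative sketch only; the Liu-Storey / BFGS-based variants in
    the abstract compute the search direction differently.
    """
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search along d
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))  # Polak-Ribiere+
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

On a simple quadratic such as f(x) = sum((x - 1)^2) the iteration converges to the minimizer x = (1, 1) in a few steps.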
A modification of the limited-memory variable metric BNS method for large-scale unconstrained optimization of differentiable functions is considered. It consists in corrections (based on the idea of conjugate directions) of the difference vectors, for better satisfaction of the previous quasi-Newton conditions. In comparison with [11], more previous iterations can be utilized here. For quadratic objective functions, the improvement of convergence is the best one in some sense, and all stored corrected...
In this paper, we present a new one-step smoothing Newton method for solving the second-order cone programming (SOCP). Based on a new smoothing function of the well-known Fischer-Burmeister function, the SOCP is approximated by a family of parameterized smooth equations. Our algorithm solves only one system of linear equations and performs only one Armijo-type line search at each iteration. It can start from an arbitrary initial point and does not require the iterative points to be in the sets...
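For readers unfamiliar with the construction, the scalar Fischer-Burmeister function and one commonly used smoothing of it (a Kanzow-type perturbation, not the new function proposed in the paper) can be written down in a few lines:

```python
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = 0  iff  a >= 0, b >= 0 and a*b = 0 (complementarity)."""
    return a + b - np.sqrt(a * a + b * b)

def smoothed_fb(a, b, mu):
    """A commonly used smoothing; the paper's new function differs.

    For mu > 0 this is smooth everywhere, and as mu -> 0 it recovers
    the nonsmooth Fischer-Burmeister function.
    """
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)
```

Replacing phi by its smoothed version turns the complementarity system into the family of parameterized smooth equations that a smoothing Newton method then solves.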
In this paper a nonmonotone limited memory BFGS (NLBFGS) method is applied to approximately solve optimal control problems (OCPs) governed by one-dimensional parabolic partial differential equations. A discretized optimal control problem is obtained by using piecewise linear finite elements and the well-known backward Euler method. Then, using the implicit function theorem, the optimal control problem is transformed into an unconstrained nonlinear optimization problem (UNOP). Finally the...
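The nonmonotone ingredient of such a method can be sketched with a Grippo-Lampariello-Lucidi-type line search: a step is accepted if it decreases the objective relative to the worst of the last few values rather than the current one. This sketch is generic and not tied to the specific NLBFGS implementation of the paper.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, f_hist, c1=1e-4, tau=0.5, max_back=50):
    """GLL-type nonmonotone Armijo backtracking.

    f_hist : list of the last few objective values (nonmonotone memory).
    Accepts t when f(x + t*d) <= max(f_hist) + c1 * t * g.dot(d).
    """
    f_ref = max(f_hist)          # reference: worst recent value
    slope = g.dot(d)             # directional derivative (should be < 0)
    t = 1.0
    for _ in range(max_back):
        if f(x + t * d) <= f_ref + c1 * t * slope:
            return t
        t *= tau
    return t
```

Compared with a monotone Armijo rule, occasional increases of the objective are tolerated, which often helps limited-memory methods escape narrow curved valleys.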
An algorithm for univariate optimization using a linear lower bounding function is extended to the nonsmooth case by using the generalized gradient instead of the derivative. A convergence theorem is proved under the condition of semismoothness. This approach yields global superlinear convergence of the algorithm, which is a generalized Newton-type method.
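The core idea, replacing the derivative by an element of the generalized gradient in the Newton update, can be illustrated on a piecewise linear equation. The example function below is chosen purely for illustration and is not from the paper.

```python
def semismooth_newton(h, gen_grad, x0, tol=1e-10, max_iter=50):
    """Generalized Newton iteration x <- x - h(x)/g, with g an element
    of the generalized gradient of h at x (sketch of the idea only)."""
    x = x0
    for _ in range(max_iter):
        hx = h(x)
        if abs(hx) < tol:
            break
        x -= hx / gen_grad(x)
    return x

# Example: h(x) = |x| + x/2 - 1 is nonsmooth at 0; its unique root is 2/3.
root = semismooth_newton(
    lambda x: abs(x) + 0.5 * x - 1.0,
    lambda x: (1.0 if x >= 0 else -1.0) + 0.5,  # element of the gen. gradient
    x0=5.0,
)
```

For this piecewise linear h the iteration lands on the root after a single step from any starting point on the positive branch.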
In this paper we introduce a new smoothing function and show that it is coercive under suitable assumptions. Based on this new function, we propose a smoothing Newton method for solving the second-order cone complementarity problem (SOCCP). The proposed algorithm solves only one linear system of equations and performs only one line search at each iteration. It is shown that any accumulation point of the iteration sequence generated by the proposed algorithm is a solution to the SOCCP. Furthermore,...
A new algorithm for solving large scale bound constrained minimization problems is proposed. The algorithm is based on an accurate identification technique of the active set proposed by Facchinei, Fischer and Kanzow in 1998. A further division of the active set yields the global convergence of the new algorithm. In particular, the convergence rate is superlinear without requiring the strict complementarity assumption. Numerical tests demonstrate the efficiency and performance of the present strategy...
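The flavor of such an identification technique can be conveyed by a simplified estimate: a bound is flagged active when the iterate is within a residual-dependent distance of it. This is only a caricature of the idea; the actual Facchinei-Fischer-Kanzow test uses a more refined, provably exact threshold.

```python
import numpy as np

def estimate_active_set(x, g, lower, c=1.0):
    """Simplified active-set estimate for min f(x) s.t. x >= lower.

    residual is a KKT-type measure that vanishes at a stationary point;
    indices within c*residual of their bound are estimated active.
    Sketch only; not the exact Facchinei-Fischer-Kanzow criterion.
    """
    residual = np.linalg.norm(np.minimum(x - lower, g))
    return {i for i in range(len(x)) if x[i] - lower[i] <= c * residual}
```

At a KKT point the residual is zero, so exactly the bounds that hold with equality are identified.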
We describe an interior point algorithm for convex quadratic problems with strict complementarity constraints. We show that under some assumptions the approach requires a total of … iterations, where L is the input size of the problem. The algorithm generates a sequence of problems, each of which is approximately solved by Newton's method.
The characterization of the solution set of a convex constrained problem is a well-studied topic. In this paper, we focus on the minimum-norm solution of a specific constrained convex nonlinear problem and reformulate it as an unconstrained minimization problem by using the alternative theorem. The objective function of this problem is piecewise quadratic, convex, and once differentiable. To minimize this function, we provide a new Newton-type method with global convergence properties....
We analyze the convergence of the prox-regularization algorithms introduced in [1] for solving generalized fractional programs, without assuming that the optimal solution set of the considered problem is nonempty. Since the objective functions vary with the iterations in the auxiliary problems generated by the Dinkelbach-type algorithms DT1 and DT2, we allow the regularizing parameter to vary as well. On the other hand, we study the convergence when the iterates are only ηk-minimizers...
A numerically inexpensive globalization strategy for sequential quadratic programming methods (SQP methods) for control of the instationary Navier-Stokes equations is investigated. Based on the proper functional-analytic setting, a convergence analysis for the globalized method is given. It is argued that the a priori formidable SQP step can be decomposed into linear primal and linear adjoint systems, which is amenable to existing CFL-software. A report on a numerical test demonstrates the feasibility...
In this paper, we examine the influence of approximate first and/or second derivatives on the filter-trust-region algorithm designed for solving unconstrained nonlinear optimization problems, proposed by Gould, Sainvitu and Toint in [12]. Numerical experiments carried out on small-scale unconstrained problems from the CUTEr collection describe the effect of the use of approximate derivatives on the robustness and the efficiency of the filter-trust-region method.
In vivo visualization of cardiovascular structures is possible using medical images. However, one has to realize that the resulting 3D geometries correspond to in vivo conditions. This entails an internal stress state to be present in the in vivo measured geometry of e.g. a blood vessel due to the presence of the blood pressure. In order to correct for this in vivo stress, this paper presents an inverse method to restore the original zero-pressure geometry of a structure, and to recover the in vivo...
Simple modifications of the limited-memory BFGS method (L-BFGS) for large-scale unconstrained optimization are considered, which consist in corrections of the used difference vectors (derived from the idea of conjugate directions), utilizing information from the preceding iteration. For quadratic objective functions, the improvement of convergence is the best one in some sense and all stored difference vectors are conjugate for unit stepsizes. The algorithm is globally convergent for convex sufficiently...
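The stored difference vectors that such corrections act on are the pairs (s_k, y_k) consumed by the standard L-BFGS two-loop recursion, sketched below. The corrections described in the abstract would modify the pairs before this step; the recursion itself is the textbook form.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H*g, where H is the
    implicit inverse-Hessian approximation built from the stored pairs
    s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k."""
    q = g.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    if s_list:  # initial scaling H0 = gamma * I, gamma = s.y / y.y
        s, y = s_list[-1], y_list[-1]
        q *= s.dot(y) / y.dot(y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / y.dot(s)
        b = rho * y.dot(q)
        q += (a - b) * s
    return -q
```

By construction the implicit H satisfies the most recent quasi-Newton (secant) condition H y = s, which is exactly the condition the corrections above aim to satisfy for older pairs as well.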
We numerically compute the minimizers of the Dirichlet energy among maps from the unit disc into the unit sphere that satisfy a boundary condition and a degree condition. We use a Sobolev gradient algorithm for the minimization and we prove that its continuous version preserves the degree. For the discretization of the problem we use continuous finite elements. We propose an original mesh-refining strategy needed to preserve the degree with the discrete version of the algorithm (which is a preconditioned...
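A Sobolev gradient amounts to preconditioning the L2 gradient by an elliptic solve. The one-dimensional finite-difference sketch below, with zero Dirichlet boundary conditions, is only meant to convey that idea; the paper itself works on the disc with continuous finite elements.

```python
import numpy as np

def sobolev_gradient(grad_e, n, h):
    """Return the Sobolev gradient g solving (I - Laplacian_h) g = grad_e
    on a uniform 1D grid of n interior nodes with spacing h and zero
    Dirichlet boundary values (illustrative sketch only)."""
    main = np.full(n, 1.0 + 2.0 / h**2)        # diagonal of I - Laplacian_h
    off = np.full(n - 1, -1.0 / h**2)          # off-diagonals
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, grad_e)
```

Because I - Laplacian_h has eigenvalues bounded below by 1, the Sobolev gradient is a smoothed, shorter version of the raw gradient, which is what makes the resulting descent flow better behaved.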