New algorithm for polynomial spectral factorization with quadratic convergence. I
The conjugate gradient method is one of the most effective algorithms for unconstrained nonlinear optimization problems, owing to its low storage requirements and simple structure. This motivates us to propose a new hybrid conjugate gradient method through a convex combination of two classical conjugate gradient methods. We compute the convex parameter using the Newton direction. Global convergence is established under the strong Wolfe conditions. Numerical experiments show the...
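As an illustration of the hybrid scheme described above, here is a minimal sketch in which the direction parameter is taken as a convex combination theta*beta_FR + (1 - theta)*beta_PRP of the Fletcher-Reeves and Polak-Ribiere-Polyak formulas. The actual pair of methods combined in the paper, and its Newton-based choice of theta, are not recoverable from the abstract, so both are placeholder assumptions here, and a simple Armijo backtracking stands in for the strong Wolfe line search.

```python
# Minimal sketch of a hybrid conjugate gradient step; the FR/PRP pair and the
# fixed theta are illustrative assumptions, not the paper's actual choices.
import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, tol=1e-8, max_iter=500):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # simple backtracking (Armijo) line search; the paper uses strong Wolfe
        alpha, c = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)              # Fletcher-Reeves
        beta_prp = (g_new @ (g_new - g)) / (g @ g)       # Polak-Ribiere-Polyak
        beta = theta * beta_fr + (1.0 - theta) * beta_prp  # convex combination
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# usage on a convex quadratic
f = lambda x: 0.5 * x @ np.diag([1.0, 10.0]) @ x
grad = lambda x: np.diag([1.0, 10.0]) @ x
print(hybrid_cg(f, grad, np.array([3.0, -2.0])))
```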
We propose a new Broyden method for solving systems of nonlinear equations, which uses the first derivatives, but is more efficient than the Newton method (measured by the computational time) for larger dense systems. The new method updates QR or LU decompositions of nonsymmetric approximations of the Jacobian matrix, so it requires O(n²) arithmetic operations per iteration, in contrast with the Newton method, which requires O(n³) operations per iteration. Computational experiments confirm the high efficiency...
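A minimal sketch of the classical Broyden ("good") update that underlies such a method. For clarity it re-solves the linear system from scratch at O(n³) cost each iteration; the paper's point is to update a QR or LU factorization of the approximation instead, bringing the per-iteration cost down to O(n²). The test system is illustrative.

```python
# Minimal sketch of Broyden's "good" method for F(x) = 0.
import numpy as np

def broyden(F, x0, B0, tol=1e-10, max_iter=100):
    x = x0.astype(float)
    B = B0.astype(float)          # approximation of the Jacobian
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(B, -Fx)   # O(n^3) here; O(n^2) with QR/LU updates
        x_new = x + dx
        F_new = F(x_new)
        # rank-one Broyden update: B <- B + (y - B s) s^T / (s^T s)
        y = F_new - Fx
        B += np.outer(y - B @ dx, dx) / (dx @ dx)
        x, Fx = x_new, F_new
    return x

# usage: a small nonlinear system with solution (1, 2)
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
print(broyden(F, np.array([1.0, 1.0]), np.eye(2)))
```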
We compute numerically the minimizers of the Dirichlet energy E(u) = (1/2)∫_D |∇u|² among maps u from the unit disc D into the unit sphere that satisfy a boundary condition and a degree condition. We use a Sobolev gradient algorithm for the minimization and we prove that its continuous version preserves the degree. For the discretization of the problem we use continuous finite elements. We propose an original mesh-refining strategy needed to preserve the degree with the discrete version of the algorithm (which is a preconditioned...
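The Sobolev gradient idea can be illustrated in one dimension: precondition the L² gradient of the Dirichlet energy by the H¹ inner product (here I − Δ), then renormalize pointwise to keep the map on the sphere. This is only a finite-difference analogue under assumed parameters; the paper works on the disc with continuous finite elements and a degree-preserving mesh-refining strategy.

```python
# Minimal 1D finite-difference analogue of a Sobolev gradient step for the
# Dirichlet energy of a sphere-valued map; grid size and step are illustrative.
import numpy as np

n, h, tau = 64, 1.0 / 64, 0.1
# map u : [0,1] -> S^2, stored as (n+1) x 3 array, endpoints held fixed
t = np.linspace(0.0, 1.0, n + 1)
u = np.stack([np.cos(np.pi * t), np.sin(np.pi * t), np.zeros_like(t)], axis=1)

L = (np.diag(-2.0 * np.ones(n - 1)) + np.diag(np.ones(n - 2), 1)
     + np.diag(np.ones(n - 2), -1)) / h**2            # interior Laplacian
A = np.eye(n - 1) - L                                 # H^1 inner product matrix

for _ in range(200):
    g_l2 = -(u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2   # L^2 gradient (-u'')
    g_sob = np.linalg.solve(A, g_l2)                  # Sobolev (H^1) gradient
    u[1:-1] -= tau * g_sob
    u[1:-1] /= np.linalg.norm(u[1:-1], axis=1, keepdims=True)  # back to sphere

dirichlet = 0.5 * np.sum(np.diff(u, axis=0)**2) / h
print("Dirichlet energy:", dirichlet)
```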
Estimates of the radius of convergence of Newton's method for variational inclusions in Banach spaces are investigated under a weak Lipschitz condition on the first Fréchet derivative. We establish the linear convergence of Newton's method and of a variant of it using the concepts of a pseudo-Lipschitz set-valued map and an ω-conditioned Fréchet derivative, or the center-Lipschitz condition introduced by the first author.
Modifications of the nonlinear conjugate gradient method are described and tested.
Nonlinear rescaling is a tool for solving large-scale nonlinear programming problems. The primal-dual nonlinear rescaling method was used to solve two quadratic programming problems with quadratic constraints. Based on the performance of the primal-dual nonlinear rescaling method on these test problems, conclusions are drawn about suitable parameter settings. Next, the connection between nonlinear rescaling methods and self-concordant functions is discussed, and a modified logarithmic barrier function is...
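As a hedged illustration of the modified logarithmic barrier idea (in Polyak's standard form, which may differ in detail from the function studied in the paper), the sketch below minimizes f(x) - (1/k) Σ λᵢ log(1 + k cᵢ(x)) for constraints cᵢ(x) ≥ 0 and updates the multipliers by λᵢ ← λᵢ / (1 + k cᵢ(x)). The problem data, solver, and parameters are all illustrative.

```python
# Minimal sketch of a modified logarithmic barrier (nonlinear rescaling) loop;
# the quadratic problem and all parameters are illustrative assumptions.
import numpy as np

def f(x):  return x[0]**2 + 2.0 * x[1]**2          # objective
def gf(x): return np.array([2.0 * x[0], 4.0 * x[1]])
def c(x):  return x[0] + x[1] - 1.0                # constraint c(x) >= 0
gc = np.array([1.0, 1.0])                          # gradient of c

k, lam = 10.0, 1.0
x = np.array([1.0, 1.0])
for outer in range(20):
    for inner in range(2000):                      # crude inner gradient descent
        grad = gf(x) - lam * gc / (1.0 + k * c(x)) # gradient of barrier function
        x -= 0.02 * grad
    lam = lam / (1.0 + k * c(x))                   # multiplier update
print("x =", x, "lambda =", lam)                   # expect x ~ (2/3, 1/3), lam ~ 4/3
```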
An algorithm for quadratic minimization with simple bounds is introduced, combining, as many well-known methods do, active-set strategies and projection steps. The novelty is that here the criterion for acceptance of a projected trial point is weaker than the usual ones, which are based on monotone decrease of the objective function. It is proved that convergence follows as in the monotone case. Numerical experiments with bound-constrained quadratic problems from the CUTE collection show that the modified...
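A minimal sketch of the weaker, nonmonotone acceptance idea: a projected trial point is accepted if it improves on the maximum of the last M objective values rather than on the current one. The active-set component of the paper's algorithm is omitted, and the quadratic problem data are illustrative.

```python
# Projected gradient with a nonmonotone (max of last M values) acceptance test.
import numpy as np

def proj(x, lo, hi):
    return np.clip(x, lo, hi)

def nonmonotone_pg(Q, b, lo, hi, x0, M=5, tol=1e-8, max_iter=1000):
    q = lambda x: 0.5 * x @ Q @ x + b @ x
    x = proj(x0, lo, hi)
    hist = [q(x)]                                  # recent objective values
    alpha = 1.0
    for _ in range(max_iter):
        g = Q @ x + b
        if np.linalg.norm(proj(x - g, lo, hi) - x) < tol:
            break
        while True:                                # backtrack on alpha
            trial = proj(x - alpha * g, lo, hi)
            d = trial - x
            if q(trial) <= max(hist) + 1e-4 * (g @ d):  # nonmonotone Armijo
                break
            alpha *= 0.5
        x = trial
        hist = (hist + [q(x)])[-M:]                # keep last M values only
        alpha = min(2.0 * alpha, 1.0)              # tentative step recovery
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
print(nonmonotone_pg(Q, b, np.zeros(2), np.ones(2), np.array([0.9, 0.9])))
```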
The method of projections onto convex sets for finding a point in the intersection of a finite number of closed convex sets in a Euclidean space sometimes leads to slow convergence of the constructed sequence. Such slow convergence depends both on the choice of the starting point and on the monotone behaviour of the usual algorithms. As there is normally no indication of how to choose the starting point in order to avoid slow convergence, we present in this paper a nonmonotone parallel algorithm...
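A minimal sketch of the basic parallel projection step on which such an algorithm builds: project the iterate onto all sets simultaneously and average the results. The paper's nonmonotone modification is not reproduced here; the two sets (a halfspace and a ball) are illustrative.

```python
# Averaged parallel projections for a point in the intersection of convex sets.
import numpy as np

def proj_halfspace(x, a, beta):                    # {y : a.y <= beta}
    v = a @ x - beta
    return x - max(v, 0.0) * a / (a @ a)

def proj_ball(x, center, r):                       # {y : |y - center| <= r}
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + r * d / n

x = np.array([4.0, 4.0])
a, beta = np.array([1.0, 1.0]), 3.0                # halfspace x + y <= 3
center, r = np.zeros(2), 2.0                       # ball of radius 2 at origin
for _ in range(100):
    p1 = proj_halfspace(x, a, beta)
    p2 = proj_ball(x, center, r)
    x = 0.5 * (p1 + p2)                            # averaged parallel step
print(x)                                           # a point near the intersection
```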