We prove that, within the framework of smoothed prolongations, rapid coarsening between the first two levels can be compensated by massive prolongation smoothing and by pre- and post-smoothing derived from the prolongator smoother.
In this paper a black-box solver based on combining aggregation of the unknowns with smoothing is suggested. Convergence is improved by overcorrection. Numerical experiments demonstrate the efficiency of the approach.
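A minimal sketch of the overcorrection idea, under assumptions not taken from the abstract: the coarse-grid correction obtained from an aggregation-based coarse space is rescaled so that it is optimal in the energy norm. The damped-Jacobi smoother, the aggregate layout, and the 1D model problem are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def aggregation_prolongator(n, agg_size):
    """0/1 tentative prolongator: unknown i belongs to aggregate i // agg_size."""
    rows = np.arange(n)
    cols = rows // agg_size
    vals = np.ones(n)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, cols.max() + 1))

def two_level_step_with_overcorrection(A, b, x, P, omega=0.7):
    """One aggregation-based two-level iteration whose coarse-grid correction
    is rescaled (overcorrected) to be optimal in the energy norm."""
    Ac = sp.csc_matrix(P.T @ A @ P)              # Galerkin coarse matrix
    D = A.diagonal()
    x = x + omega * (b - A @ x) / D              # pre-smoothing (damped Jacobi)
    r = b - A @ x
    d = P @ spla.spsolve(Ac, P.T @ r)            # standard coarse correction direction
    tau = (d @ r) / (d @ (A @ d))                # overcorrection factor: A-norm line search
    x = x + tau * d
    x = x + omega * (b - A @ x) / D              # post-smoothing
    return x

# assumed 1D Poisson model problem with aggregates of four consecutive unknowns
n = 64
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
P = aggregation_prolongator(n, agg_size=4)
x = np.zeros(n)
for _ in range(20):
    x = two_level_step_with_overcorrection(A, b, x, P)
print(np.linalg.norm(b - A @ x))
```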
A technique for accelerating the convergence of the algebraic multigrid method is proposed.
A method for computing the least eigenvalue of a positive definite matrix of order n is described.
An algorithm of the preconditioned conjugate gradient method in which the solution of an auxiliary system is replaced by multiplication with a suitably chosen matrix is presented.
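The abstract does not specify the matrix, so the following sketch only illustrates the general idea: the preconditioning step inside PCG is a single sparse matrix-vector product with a fixed matrix approximating the inverse. The concrete choice of a truncated Neumann series (a low-degree polynomial in A) and the 1D model problem are assumptions made for the example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def pcg(A, b, M, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradients in which the preconditioning step is a
    plain multiplication z = M r with a fixed sparse matrix M approximating A^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M @ r
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        z = M @ r                                # preconditioning = one mat-vec
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# assumed illustration: M is a truncated Neumann series, omega*(I + B + B^2) with
# B = I - omega*A, so that applying M is an ordinary sparse multiplication
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
I = sp.identity(n, format="csr")
omega = 1.0 / spla.norm(A, np.inf)               # scaling so that rho(I - omega*A) < 1
B = I - omega * A
M = omega * (I + B + B @ B)
b = np.ones(n)
x, iters = pcg(A, b, M)
print(iters, np.linalg.norm(b - A @ x))
```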
An algorithm for using the preconditioned conjugate gradient method to solve a coarse-level problem is presented.
In this paper we analyse an algorithm which is a modification of the so-called two-level algorithm with overcorrection, published in [2]. We illustrate the efficiency of this algorithm by a model example.
A two-level algebraic algorithm is introduced and its convergence is proved. The restriction as well as the prolongation operators are defined with the help of aggregation classes. Moreover, a particular smoothing operator is defined in an analogous way to accelerate the convergence of the algorithm. A model example is presented in the conclusion.
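A minimal sketch of how the transfer operators can be built from aggregation classes, i.e. from a partition of the unknowns into aggregates: the prolongation is piecewise constant over the aggregates and the restriction is its transpose. The concrete aggregates and the 1D model matrix are illustrative assumptions, and the smoothing operator mentioned in the abstract is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp

def prolongator_from_aggregates(aggregates, n_fine):
    """Piecewise-constant prolongator: column j has ones on the unknowns of
    aggregate j; the restriction is the transpose of this matrix."""
    rows = np.concatenate([np.asarray(agg) for agg in aggregates])
    cols = np.concatenate([np.full(len(agg), j) for j, agg in enumerate(aggregates)])
    vals = np.ones(len(rows))
    return sp.csr_matrix((vals, (rows, cols)), shape=(n_fine, len(aggregates)))

# assumed example: 12 unknowns grouped into aggregates of 3 consecutive unknowns
n = 12
aggregates = [list(range(i, i + 3)) for i in range(0, n, 3)]
P = prolongator_from_aggregates(aggregates, n)   # prolongation
R = P.T                                          # restriction
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
Ac = R @ A @ P                                   # coarse-level (Galerkin) matrix
print(P.toarray().astype(int))
print(Ac.toarray())
```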
Solving a system of linear algebraic equations by the preconditioned conjugate gradient method requires solving an auxiliary system of linear algebraic equations in each step. In this paper, instead of solving the auxiliary system, one iteration of the two-level method for the original system is performed.
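A sketch of this idea under assumed ingredients (damped-Jacobi smoothing, an aggregation-based coarse space, a direct coarse solve): the preconditioning step z ≈ A⁻¹r inside CG is realized by one two-level iteration for the original system started from the zero vector.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_two_level_preconditioner(A, P, omega=0.7, nu=1):
    """Return a function r -> z performing one two-level iteration
    (pre-smoothing, coarse correction, post-smoothing) for A z = r with z0 = 0."""
    Ac = sp.csc_matrix(P.T @ A @ P)
    solve_coarse = spla.factorized(Ac)
    D = A.diagonal()

    def apply(r):
        z = np.zeros_like(r)
        for _ in range(nu):                      # pre-smoothing (damped Jacobi)
            z += omega * (r - A @ z) / D
        res = r - A @ z
        z += P @ solve_coarse(P.T @ res)         # coarse-level correction
        for _ in range(nu):                      # post-smoothing
            z += omega * (r - A @ z) / D
        return z

    return apply

# usage with SciPy's CG on an assumed 1D model problem: the two-level iteration
# replaces the exact solve of the auxiliary (preconditioner) system
n = 64
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
P = sp.csr_matrix((np.ones(n), (np.arange(n), np.arange(n) // 4)), shape=(n, n // 4))
M = spla.LinearOperator((n, n), matvec=make_two_level_preconditioner(A, P))
x, info = spla.cg(A, np.ones(n), M=M)
print(info, np.linalg.norm(np.ones(n) - A @ x))
```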
We analyze a general multigrid method with aggressive coarsening and polynomial smoothing. We use a special polynomial smoother that originates in the context of the smoothed aggregation method. Assuming that on each level the degree of the smoothing polynomial is at least proportional to the coarsening ratio, we prove a convergence result independent of this ratio. The suggested smoother is cheaper than the overlapping Schwarz method that allows one to prove the same result. Moreover, unlike in the case of the overlapping Schwarz method, analysis...
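In generic notation (the symbols below are illustrative, since the abstract's own notation is not reproduced here: A_k and b_k denote the level-k matrix and right-hand side, q_k the smoothing polynomial of degree d_k, and h_k the characteristic resolution of level k), the assumption can be written roughly as follows.

```latex
% Richardson-type polynomial smoothing step on level k (hypothetical notation)
x \leftarrow x + q_k(A_k)\,\bigl(b_k - A_k x\bigr),
\qquad
\deg q_k = d_k \;\gtrsim\; \frac{h_{k+1}}{h_k},
% i.e. the polynomial degree grows proportionally to the coarsening ratio,
% and the resulting convergence bound does not depend on this ratio.
```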
We extend the analysis of the recently proposed nonlinear EIS scheme applied to the partial eigenvalue problem. We address the case where the Rayleigh quotient iteration is used as the smoother on the fine level. Unlike in our previous theoretical results, where the smoother given by the linear inverse power method is assumed, we prove a nonlinear speed-up when the approximation becomes close to the exact solution. The speed-up is cubic. Unlike existing convergence estimates for the Rayleigh quotient...
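For reference, a minimal Rayleigh quotient iteration for a symmetric matrix, i.e. the smoother considered above on the fine level; its local convergence for symmetric problems is cubic, which matches the cubic speed-up mentioned in the abstract. The model matrix, the starting vector, and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x, iters=5):
    """Rayleigh quotient iteration for a symmetric matrix A,
    starting from an approximate eigenvector x."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        rho = x @ A @ x                                        # Rayleigh quotient
        y = np.linalg.solve(A - rho * np.eye(A.shape[0]), x)   # shifted solve
        x = y / np.linalg.norm(y)                              # normalize the update
    return rho, x

# assumed example: converges to the eigenpair selected by the starting vector
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50.0 * np.eye(50)
lam, v = rayleigh_quotient_iteration(A, rng.standard_normal(50))
print(lam, np.linalg.norm(A @ v - lam * v))
```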
A variational two-level method in the class of methods with an aggressive coarsening and a massive polynomial smoothing is proposed. The method is a modification of the method of Section 5 of Tezaur, Vaněk (2018). Compared to that method, a significantly sharper estimate is proved while requiring only slightly more computational work.
We prove nearly uniform convergence bounds for the BPX preconditioner based on smoothed aggregation under the assumption that the mesh is regular. The analysis is based on the fact that under the assumption of regular geometry, the coarse-space basis functions form a system of macroelements. This property tends to be satisfied by the smoothed aggregation bases formed for unstructured meshes.
The smoothed aggregation method has become a widely used tool for solving the linear systems arising from the discretization of elliptic partial differential equations and their singular perturbations. The smoothed aggregation method is an algebraic multigrid technique in which the prolongators are constructed in two steps. First, the tentative prolongator is constructed by the aggregation (or generalized aggregation) method. Then, the range of the tentative prolongator is smoothed by a sparse linear...
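A sketch of the second step under common assumptions: the tentative prolongator is smoothed by one damped-Jacobi-type step, P = (I - omega D^{-1} A) P_tent. The damping 4/3 divided by an estimate of lambda_max(D^{-1}A) is a frequently used choice rather than the paper's specific prescription, and the 1D model problem is assumed for the example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smoothed_prolongator(A, P_tent, damping=4.0 / 3.0):
    """Smooth the range of the tentative prolongator by one damped Jacobi step:
    P = (I - omega * D^{-1} A) P_tent, omega = damping / lambda_max(D^{-1} A)."""
    d = A.diagonal()
    D_inv = sp.diags(1.0 / d)
    # lambda_max(D^{-1} A) via the similar symmetric matrix D^{-1/2} A D^{-1/2}
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
    lam = spla.eigsh(D_inv_sqrt @ A @ D_inv_sqrt, k=1, which="LA",
                     return_eigenvectors=False)[0]
    omega = damping / lam
    I = sp.identity(A.shape[0], format="csr")
    return (I - omega * (D_inv @ A)) @ P_tent

# assumed 1D model problem; tentative prolongator from aggregates of three unknowns
n = 12
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
P_tent = sp.csr_matrix((np.ones(n), (np.arange(n), np.arange(n) // 3)), shape=(n, n // 3))
P = smoothed_prolongator(A, P_tent)
Ac = P.T @ A @ P                                 # Galerkin coarse-level matrix
print(Ac.toarray())
```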
We derive the smoothed aggregation two-level method from the variational objective of minimizing the final error after finishing the entire iteration. This contrasts with a standard variational two-level method, where the coarse-grid correction vector is chosen to minimize the error after the coarse-grid correction procedure, which represents merely an intermediate stage of the computation. Thus, we enforce global minimization of the error. The method with the smoothed prolongator is thus interpreted as a qualitatively...
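In symbols (the notation here is assumed for illustration: e is the error entering the correction step, P the prolongator, S the final post-smoother, and the norm is the energy norm), the two objectives contrasted above can be written as follows.

```latex
% standard variational two-level method: the coarse-grid correction vector
% minimizes the error immediately after the coarse-grid correction
v_{\mathrm{std}} \;=\; \arg\min_{v}\; \| e - P v \|_{A},
% global objective: minimize the error after the final post-smoothing S,
% which associates the correction with the smoothed prolongator S P
v_{\mathrm{glob}} \;=\; \arg\min_{v}\; \| S\,( e - P v ) \|_{A}.
```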