### A New View on the Numerical Computation of the Greatest Common Divisor of Two Polynomials


The preconditioned conjugate gradient method for solving a system of linear algebraic equations with a positive definite matrix is investigated. The initial approximation for the conjugate gradient method is constructed as the result of $m$ steps of a matrix iteration method. The behaviour of the error vector for this combined method is studied, and numerical tests and conclusions are presented.
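The combined scheme can be sketched as follows. This is a minimal illustration, assuming the pre-iteration is a simple Richardson splitting $x_{l+1}=(I-\tau A)x_l+\tau b$ with an ad-hoc parameter $\tau$; the abstract does not specify which matrix iteration is actually used.

```python
import numpy as np

def pre_iterate(T, c, y0, m):
    # m steps of the matrix iteration y_{l+1} = T y_l + c
    y = y0.copy()
    for _ in range(m):
        y = T @ y + c
    return y

def conjugate_gradient(A, b, x0, tol=1e-12, maxiter=200):
    # plain CG for a symmetric positive definite A, started from x0
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # example SPD matrix
b = np.array([1.0, 2.0])
tau = 0.2                                # Richardson parameter, chosen ad hoc
T = np.eye(2) - tau * A                  # iteration matrix of the splitting
y_m = pre_iterate(T, tau * b, np.zeros(2), m=5)
x = conjugate_gradient(A, b, y_m)        # CG started from the pre-iterated vector
```

The pre-iterations are cheap matrix-vector products that already contract the error before the more expensive CG recurrence takes over.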

In this paper, our attention is concentrated on the GMRES method for the solution of the system $(I-T)x=b$ of linear algebraic equations with a nonsymmetric matrix. We perform $m$ pre-iterations ${y}_{l+1}=T{y}_{l}+b$ before starting GMRES and take ${y}_{m}$ as the initial approximation in GMRES. We derive an upper estimate for the norm of the error vector in dependence on the $m$th powers of the eigenvalues of the matrix $T$. Further, we study for which layouts of the eigenvalues this upper estimate is the best. The estimate shows and numerical...
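The strategy can be sketched with a textbook full GMRES (Arnoldi basis plus a small least-squares problem); the matrix $T$ below is my own toy example, not one from the paper:

```python
import numpy as np

def gmres(A, b, x0, maxiter=50, tol=1e-12):
    # textbook full GMRES: build an Arnoldi basis of the Krylov space and
    # minimize the residual over it via a small least-squares problem
    n = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((n, maxiter + 1))
    H = np.zeros((maxiter + 1, maxiter))
    Q[:, 0] = r0 / beta
    for k in range(maxiter):
        v = A @ Q[:, k]
        for j in range(k + 1):                 # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        res = np.linalg.norm(H[:k + 2, :k + 1] @ y - e1)
        if H[k + 1, k] < 1e-14 or res < tol:   # happy breakdown or converged
            return x0 + Q[:, :k + 1] @ y
        Q[:, k + 1] = v / H[k + 1, k]
    return x0 + Q[:, :maxiter] @ y

T = np.array([[0.1, 0.3], [0.0, 0.2]])   # example iteration matrix, rho(T) < 1
b = np.array([1.0, 1.0])
y = np.zeros(2)
for _ in range(3):                        # m = 3 pre-iterations y_{l+1} = T y_l + b
    y = T @ y + b
x = gmres(np.eye(2) - T, b, y)            # GMRES on (I - T)x = b, started at y_m
```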

Let ${x}_{k+1}=T{x}_{k}+b$ be an iterative process for solving the operator equation $x=Tx+b$ in a Hilbert space $X$. Let the sequence ${\left\{{x}_{k}\right\}}_{k=0}^{\infty}$ formed by this iterative process be convergent for some initial approximation ${x}_{0}$, with limit ${x}^{*}=T{x}^{*}+b$. For given $l>1$ and ${m}_{0},{m}_{1},\cdots ,{m}_{l}$, define a new sequence ${\left\{{y}_{k}\right\}}_{k={m}_{1}}^{\infty}$ by the formula ${y}_{k}={\alpha}_{0}^{\left(k\right)}{x}_{k}+{\alpha}_{1}^{\left(k\right)}{x}_{k-{m}_{1}}+\cdots +{\alpha}_{l}^{\left(k\right)}{x}_{k-{m}_{l}}$, where the ${\alpha}_{i}^{\left(k\right)}$ are obtained by solving a minimization problem for a given functional. In this paper the convergence properties of the ${\alpha}_{i}^{\left(k\right)}$ are investigated, and on the basis of these results it is proved that $\lim_{k\to \infty}\Vert {x}^{*}-{y}_{k}\Vert /{\Vert {x}^{*}-{x}_{k}\Vert}^{p}=0$ for some $p\ge 1$.
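For the simplest case $l=1$, one plausible choice of functional is the norm of the linearized residual of $y_k$. The sketch below uses that choice (my own illustrative assumption, not necessarily the functional of the paper) to accelerate a linear fixed-point iteration:

```python
import numpy as np

def extrapolate(x_pprev, x_prev, x_curr):
    # y = x_pprev + a * (x_prev - x_pprev), with the coefficient a chosen to
    # minimize the norm of the linearized residual a*d1 + (1 - a)*d0, where
    # d0, d1 are successive differences (residuals) of the iterates
    d0 = x_prev - x_pprev
    d1 = x_curr - x_prev
    a = -d0 @ (d1 - d0) / ((d1 - d0) @ (d1 - d0))
    return x_pprev + a * d0

T = np.array([[0.5, 0.1], [0.0, 0.3]])   # example contraction, rho(T) < 1
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(np.eye(2) - T, b)

xs = [np.zeros(2)]
for _ in range(6):                        # iterates x_{k+1} = T x_k + b
    xs.append(T @ xs[-1] + b)
y = extrapolate(xs[4], xs[5], xs[6])      # extrapolate from x_4, x_5, x_6
```

In the scalar case this coefficient annihilates the error exactly; for a matrix with one dominant eigenvalue it removes the dominant error mode, which is the superlinear effect ($p\ge 1$) described in the abstract.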

The author considers the operator equation $x=Tx+b$. Methods for accelerating the convergence of the iterative process ${x}_{n+1}=T{x}_{n}+b$ are investigated.

The limits of the extrapolation coefficients are rational functions of the poles of largest moduli of the resolvent operator $R(\lambda ,T)={(\lambda I-T)}^{-1}$, and therefore good estimates of these poles can be calculated from the coefficients. The calculation is very easy in the case of two coefficients, and its practical effect in a finite-dimensional space is considerable. The results are used for the acceleration of the S.O.R. method.
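A simplified sketch of the idea: the pole of largest modulus of the resolvent (the dominant eigenvalue of $T$) can be estimated from the iteration differences, and for a consistently ordered problem the estimate then fixes the optimal SOR factor. The norm-ratio estimator and the example matrix below are my own simplifications, not the paper's two-coefficient formula:

```python
import numpy as np

def dominant_pole_estimate(T, b, x0, k):
    # run k iterations x_{l+1} = T x_l + b and estimate the pole of largest
    # modulus of (lambda*I - T)^{-1} from the ratio of successive differences
    x = x0.copy()
    d_prev = None
    est = None
    for _ in range(k):
        x_new = T @ x + b
        d = x_new - x
        if d_prev is not None:
            est = np.linalg.norm(d) / np.linalg.norm(d_prev)
        x, d_prev = x_new, d
    return est

T = np.diag([0.9, 0.5])                  # example Jacobi iteration matrix
rho = dominant_pole_estimate(T, np.ones(2), np.zeros(2), k=40)

# classical optimal SOR relaxation factor for a consistently ordered problem,
# computed from the estimated Jacobi spectral radius
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho**2))
```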

Let $B$ be a non-negative irreducible $n\times n$ cyclic matrix of index 2, and let $I$ be the identity matrix. In this paper the calculation of the spectral radius of the matrix $B+\alpha I$ by a minimax method, as well as the dependence of the rate of convergence on the number $\alpha$, is studied.
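The minimax method itself is not reproduced here, but the following sketch illustrates why the shift matters: for a cyclic matrix of index 2 the eigenvalues $\pm\rho(B)$ tie in modulus and plain power iteration stalls, while $B+\alpha I$ (with $\alpha>0$) has the single dominant eigenvalue $\rho(B)+\alpha$, so $\rho(B)$ can be recovered from it:

```python
import numpy as np

def power_iteration(M, v0, k):
    # estimates the dominant eigenvalue of M; the norm ratio is a valid
    # estimate here because the dominant eigenvalue is real and positive
    v = v0 / np.linalg.norm(v0)
    lam = 0.0
    for _ in range(k):
        w = M @ v
        lam = np.linalg.norm(w)
        v = w / lam
    return lam

B = np.array([[0.0, 2.0], [3.0, 0.0]])   # non-negative, irreducible, cyclic of index 2
alpha = 0.5
lam = power_iteration(B + alpha * np.eye(2), np.array([1.0, 0.5]), k=200)
rho_B = lam - alpha                       # spectral radius of B, undoing the shift
```

Here $\rho(B)=\sqrt{6}$, and the shifted iteration converges geometrically with ratio $|{\alpha-\sqrt 6}|/(\alpha+\sqrt 6)$, which is exactly the $\alpha$-dependence of the convergence rate the abstract refers to.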

Lanczos’ method for solving the system of linear algebraic equations $Ax=b$ consists in constructing a sequence of vectors ${x}_{k}$ in such a way that ${r}_{k}=b-A{x}_{k}\in {r}_{0}+A{\mathcal{K}}_{k}(A,{r}_{0})$ and ${r}_{k}\perp {\mathcal{K}}_{k}({A}^{T},{\tilde{r}}_{0})$. This sequence of vectors can be computed by the BiCG (BiOMin) algorithm. This paper shows how to obtain the recurrences of BiCG (BiOMin) directly from these conditions.
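The recurrences in question are those of the standard BiCG (BiOMin) algorithm; a compact sketch, with an arbitrary small example system and the common choice $\tilde r_0 = r_0$:

```python
import numpy as np

def bicg(A, b, x0, tol=1e-12, maxiter=100):
    # BiCG (BiOMin): r_k is kept orthogonal to the shadow space K_k(A^T, rt0)
    x = x0.copy()
    r = b - A @ x
    rt = r.copy()                 # shadow residual, here simply rt0 = r0
    p, pt = r.copy(), rt.copy()
    rho = rt @ r
    for _ in range(maxiter):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = rho / (pt @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rt = rt - alpha * (A.T @ pt)
        rho_new = rt @ r
        beta = rho_new / rho
        p = r + beta * p
        pt = rt + beta * pt
        rho = rho_new
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # nonsymmetric example
b = np.array([1.0, 2.0])
x = bicg(A, b, np.zeros(2))
```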

The paper investigates the dependence of the number of iterations on the change of the relaxation factor in the successive over-relaxation (SOR) method. This dependence is formulated in two theorems and illustrated in the tables and graphs included in the paper.
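The dependence is easy to observe experimentally. A minimal sketch, using the 1D Poisson model problem (my own choice of test matrix) and comparing $\omega=1$ (Gauss-Seidel) with the classical optimal factor for this model:

```python
import numpy as np

def sor_iteration_count(A, b, omega, tol=1e-8, maxiter=10000):
    # count SOR sweeps until the residual norm drops below tol
    n = len(b)
    x = np.zeros(n)
    for it in range(1, maxiter + 1):
        for i in range(n):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
            x[i] = (1.0 - omega) * x[i] + omega * s / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            return it
    return maxiter

# 1D Poisson model problem: tridiagonal(-1, 2, -1), n = 20
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
omega_opt = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))   # classical optimum for this model
it_gs = sor_iteration_count(A, b, 1.0)        # omega = 1 is Gauss-Seidel
it_opt = sor_iteration_count(A, b, omega_opt)
```

For this matrix the optimal factor reduces the iteration count by roughly an order of magnitude relative to $\omega=1$.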

Automatic differentiation is an effective method for evaluating the derivatives of a function defined by a formula or a program. By automatic differentiation, a program that evaluates the value of a function is transformed into a program that also evaluates the values of its derivatives. The computed values are exact up to computer precision, and their evaluation is very fast. In this article, we describe a program realization of automatic differentiation. This implementation is prepared in the system UFO, but its...
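The UFO implementation itself is not shown here, but the underlying principle of forward-mode automatic differentiation can be illustrated with dual numbers: every arithmetic operation is overloaded to propagate the derivative alongside the value, which is exactly how the modified program stays exact up to machine precision.

```python
class Dual:
    """Dual number val + dot*eps with eps^2 = 0; dot carries d/dx exactly."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._lift(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __sub__(self, other):
        o = self._lift(other)
        return Dual(self.val - o.val, self.dot - o.dot)

    def __rsub__(self, other):
        return Dual(other) - self

    def __mul__(self, other):            # the product rule lives here
        o = self._lift(other)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def derivative(f, x):
    # seed the derivative part with 1.0 and read it off after evaluation
    return f(Dual(x, 1.0)).dot

f = lambda x: x * x * x + 2 * x          # f'(x) = 3x^2 + 2
```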

The coefficients of the greatest common divisor of two polynomials $f$ and $g$ (GCD$(f,g)$) can be obtained from the Sylvester subresultant matrix ${S}_{j}(f,g)$ transformed to lower triangular form, where $1\le j\le d$ and $d=$ deg(GCD$(f,g)$) needs to be computed. First, it is assumed that the coefficients of the polynomials are given exactly. Transformations of ${S}_{j}(f,g)$ for an arbitrary allowable $j$ are described in detail, and an algorithm for the calculation of GCD$(f,g)$ is formulated. If inexact polynomials are given, then an approximate greatest...
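A minimal sketch of the exact-coefficient case, using plain Gaussian elimination on the full Sylvester matrix rather than the paper's structured transformations of $S_j(f,g)$: the last nonzero row of the row echelon form carries the coefficients of GCD$(f,g)$.

```python
import numpy as np

def sylvester(f, g):
    # Sylvester matrix of f (degree m) and g (degree n);
    # coefficient lists are given highest degree first
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = f
    for i in range(m):
        S[n + i, i:i + n + 1] = g
    return S

def gcd_exact(f, g, tol=1e-10):
    A = sylvester(f, g)
    rows, cols = A.shape
    r = 0
    for c in range(cols):                # row echelon form, partial pivoting
        if r >= rows:
            break
        p = r + np.argmax(np.abs(A[r:, c]))
        if abs(A[p, c]) < tol:
            continue
        A[[r, p]] = A[[p, r]]
        for i in range(r + 1, rows):
            A[i] = A[i] - (A[i, c] / A[r, c]) * A[r]
        r += 1
    for i in range(rows - 1, -1, -1):    # last nonzero row -> GCD coefficients
        if np.max(np.abs(A[i])) > tol:
            first = np.nonzero(np.abs(A[i]) > tol)[0][0]
            coeffs = A[i, first:]
            return coeffs / coeffs[0]    # normalize to a monic GCD
    return np.array([1.0])

# f = (x - 1)(x - 2), g = (x - 1)(x + 3): the GCD should be x - 1
d = gcd_exact([1.0, -3.0, 2.0], [1.0, 2.0, -3.0])
```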

The computation of the polynomial greatest common divisor (GCD) ranks among the basic algebraic problems, with many applications, for example, in image processing and control theory. The problem of computing the GCD of two exact polynomials is well defined and can be solved symbolically, for example by the oldest and most commonly used method, Euclid’s algorithm. However, this is an ill-posed problem, particularly when unknown noise is applied to the polynomial coefficients. Hence, new methods for the GCD computation...
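Euclid's algorithm for polynomials, the classical exact method mentioned above, fits in a few lines (pure Python, coefficients highest degree first; the tolerance is an ad-hoc guard, and its necessity hints at why noisy coefficients make the problem ill-posed):

```python
def poly_rem(f, g, tol=1e-12):
    # remainder of polynomial division f mod g (synthetic division)
    f = list(f)
    while len(f) >= len(g):
        q = f[0] / g[0]              # eliminate the current leading coefficient
        for i in range(1, len(g)):
            f[i] -= q * g[i]
        f.pop(0)
    while f and abs(f[0]) < tol:     # strip numerically-zero leading coefficients
        f.pop(0)
    return f

def poly_gcd(f, g, tol=1e-12):
    # Euclid's algorithm: gcd(f, g) = gcd(g, f mod g)
    f, g = list(f), list(g)
    while g:
        f, g = g, poly_rem(f, g, tol)
    return [c / f[0] for c in f]     # monic normalization
```

For example, `poly_gcd([1, -3, 2], [1, 2, -3])` recovers the common factor $x-1$ of $(x-1)(x-2)$ and $(x-1)(x+3)$.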

The paper presents the calculation of the greatest common divisor of two univariate polynomials. Euclid’s algorithm can be easily simulated by reducing the Sylvester matrix to upper triangular form. This is performed using $c$-$s$ transformation and $QR$-factorization methods. Both procedures are described and numerically compared. Computations are performed in a floating-point environment.
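The $QR$ route can be sketched with a stock Householder $QR$ (numpy) standing in for the paper's $c$-$s$ (Givens) transformations: the last nonzero row of the triangular factor $R$ of the Sylvester matrix again yields the GCD coefficients.

```python
import numpy as np

def sylvester(f, g):
    # Sylvester matrix; coefficient lists are given highest degree first
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = f
    for i in range(m):
        S[n + i, i:i + n + 1] = g
    return S

def gcd_via_qr(f, g, tol=1e-10):
    # reduce the Sylvester matrix to upper triangular form by QR factorization
    _, R = np.linalg.qr(sylvester(f, g))
    for i in range(R.shape[0] - 1, -1, -1):
        if np.max(np.abs(R[i])) > tol:       # last nonzero row of R
            first = np.nonzero(np.abs(R[i]) > tol)[0][0]
            c = R[i, first:]
            return c / c[0]                   # monic normalization
    return np.array([1.0])

# f = (x - 1)(x - 2), g = (x - 1)(x + 3): the GCD should be x - 1
d = gcd_via_qr([1.0, -3.0, 2.0], [1.0, 2.0, -3.0])
```

Orthogonal transformations leave the row space of the Sylvester matrix unchanged, which is why the GCD can be read off $R$ just as from an echelon form, with better floating-point behaviour.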

The computation of the greatest common divisor (GCD) has many applications in several disciplines, including computer graphics, the image deblurring problem, and the computation of multiple roots of inexact polynomials. In this paper, Sylvester and Bézout matrices are considered for this purpose. The computation is divided into three stages. A rank-revealing method is briefly mentioned in the first stage, and then the algorithms for calculating an approximation of the GCD are formulated. In the final stage the coefficients...
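One way to sketch the first (rank-revealing) stage with the Bézout matrix: build it from the Cayley quotient $(f(x)g(y)-f(y)g(x))/(x-y)$ and read off $\deg\mathrm{GCD} = n - \operatorname{rank}$. The SVD rank test below is my own simple choice; the paper's rank-revealing method may differ.

```python
import numpy as np

def bezout_matrix(f, g):
    # Bezout matrix from the Cayley quotient (f(x)g(y) - f(y)g(x)) / (x - y);
    # f, g are coefficient lists, highest degree first
    n = max(len(f), len(g)) - 1
    a = np.zeros(n + 1)
    b = np.zeros(n + 1)
    a[:len(f)] = f[::-1]                 # ascending coefficients, zero-padded
    b[:len(g)] = g[::-1]
    P = np.outer(a, b) - np.outer(b, a)  # P[i, j] = coeff of x^i y^j in the numerator
    Q = np.zeros((n, n))
    for i in range(n):                   # divide by (x - y) via the recurrence
        for j in range(n):               # P[i, j+1] = Q[i-1, j+1] - Q[i, j]
            up = Q[i - 1, j + 1] if (i >= 1 and j + 1 <= n - 1) else 0.0
            Q[i, j] = up - P[i, j + 1]
    return Q

def gcd_degree(f, g, tol=1e-8):
    # stage one: the GCD degree is revealed by the rank of the Bezout matrix
    B = bezout_matrix(f, g)
    rank = int(np.sum(np.linalg.svd(B, compute_uv=False) > tol))
    return B.shape[0] - rank
```

For $f=(x-1)(x-2)$ and $g=(x-1)(x+3)$ the $2\times 2$ Bézout matrix has rank 1, so the GCD degree is 1.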

We consider the solution of a linear system with a block-structured matrix of saddle-point type. The solution technique is based on the idea of the classical alternating-direction implicit iterative method, where a symmetric-antisymmetric splitting of the coefficient matrix is used. To find an optimal parameter for solving the system with the symmetric matrix, polynomial filters are considered. The CGW method is used for systems with a skew-symmetric matrix. The numerical tests compare the results obtained...
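The splitting iteration can be sketched in its Hermitian/skew-Hermitian (HSS) form on a small dense example; the saddle-point structure, the polynomial filters, and the CGW inner solver of the paper are replaced here by direct solves purely for illustration:

```python
import numpy as np

def hss_iteration(A, b, alpha, x0, iters):
    # symmetric-antisymmetric splitting A = H + S, alternated ADI-style:
    #   (alpha*I + H) x_half = (alpha*I - S) x      + b
    #   (alpha*I + S) x_new  = (alpha*I - H) x_half + b
    n = len(b)
    H = 0.5 * (A + A.T)                  # symmetric part
    S = 0.5 * (A - A.T)                  # antisymmetric (skew-symmetric) part
    I = np.eye(n)
    x = x0.copy()
    for _ in range(iters):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
    return x

A = np.array([[2.0, 1.0], [-1.0, 2.0]])  # positive definite, nonsymmetric example
b = np.array([1.0, 1.0])
x = hss_iteration(A, b, alpha=1.0, x0=np.zeros(2), iters=80)
```

The skew-symmetric half-step is a norm-preserving (Cayley-type) transformation, so the contraction comes entirely from the symmetric half-step, which is what motivates optimizing the parameter for the symmetric part.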
