A General Numerical Approach to Sensitivity Analysis and Error Analysis with Adjoint Systems
The need to compute large sparse Hessian matrices has given rise to many methods for approximating them efficiently by differences of gradients. We adopted the so-called direct methods for this problem, which we faced when developing programs for nonlinear optimization. A new approach, used in the framework of symmetric sequential coloring, is described. Numerical results illustrate the differences between this method and the popular Powell-Toint method.
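The core idea of approximating a Hessian by differences of gradients can be sketched as follows. This is a minimal dense illustration, not the coloring scheme from the abstract (coloring groups structurally orthogonal columns of a sparse Hessian so that several columns are recovered from one extra gradient evaluation); the test function and step size `h` are assumptions for the example.

```python
import numpy as np

def hessian_from_gradients(grad, x, h=1e-6):
    """Approximate the Hessian of f at x column by column with forward
    differences of the gradient, then symmetrize the result."""
    n = x.size
    g0 = grad(x)
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (grad(x + e) - g0) / h   # j-th Hessian column
    return 0.5 * (H + H.T)                 # enforce symmetry

# example: f(x, y) = x^2 y + y^3, so grad f = (2xy, x^2 + 3y^2)
grad = lambda v: np.array([2 * v[0] * v[1], v[0] ** 2 + 3 * v[1] ** 2])
H = hessian_from_gradients(grad, np.array([1.0, 2.0]))
# analytic Hessian at (1, 2): [[4, 2], [2, 12]]
```

For a sparse Hessian, a direct (coloring) method would perturb along sums of unit vectors chosen so that the nonzeros of the grouped columns do not overlap, reducing the number of gradient calls well below n.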
A straightforward generalization of a classical method of averaging is presented and its essential characteristics are discussed. The method constructs high-order approximations of the l-th partial derivatives of smooth functions u in inner vertices a of conformal simplicial triangulations T of bounded polytopic domains in ℝ^d for arbitrary d ≥ 2. For any k ≥ l ≥ 1, it uses the interpolants of u in the polynomial Lagrange finite element spaces of degree k on the simplices with vertex a only. The...
Automatic differentiation is an effective method for evaluating the derivatives of a function defined by a formula or a program. The program that evaluates the value of the function is modified by automatic differentiation into a program that also evaluates the values of its derivatives. The computed values are exact up to machine precision, and their evaluation is very fast. In this article, we describe a program realization of automatic differentiation. This implementation is prepared in the UFO system, but its...
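The transformation described here, where the same program that computes a value also computes derivatives, is commonly realized with operator overloading. A minimal forward-mode sketch (not the UFO implementation; the dual-number class and test polynomial are illustrative assumptions):

```python
class Dual:
    """Forward-mode AD value: val carries f(x), dot carries f'(x)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule propagates the derivative alongside the value
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x):
    # the unmodified formula: evaluated on Duals, it yields value and derivative
    return 3 * x * x + 2 * x + 1

d = f(Dual(2.0, 1.0))   # seed dx/dx = 1
# d.val = f(2) = 17, d.dot = f'(2) = 6*2 + 2 = 14, exact to machine precision
```

The derivative 14 is computed exactly (up to rounding), with no step-size parameter, which is the key contrast with divided differences.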
Automatic differentiation (AD) has proven its value in many fields of applied mathematics, but it is still not widely used. Furthermore, existing numerical methods have been developed under the hypothesis that computing program derivatives is not affordable for real-size problems. Exact derivatives have therefore been avoided, or replaced by approximations computed by divided differences. This hypothesis is no longer true, owing to the maturity of AD combined with the rapid growth of machine capacity....
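The accuracy limitation of divided differences mentioned above is easy to demonstrate: the step size trades truncation error against floating-point cancellation, so there is a floor on the achievable accuracy that exact AD derivatives do not have. A small sketch (the test function and step sizes are assumptions for illustration):

```python
import math

def fwd_diff(f, x, h):
    """One-sided divided difference (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)                                  # d/dx sin(x) at x = 1
err_good = abs(fwd_diff(math.sin, 1.0, 1e-8) - exact)  # near-optimal step
err_bad  = abs(fwd_diff(math.sin, 1.0, 1e-14) - exact) # cancellation dominates
```

Shrinking `h` past the optimum makes the approximation worse, not better, because the subtraction f(x+h) - f(x) loses significant digits; an AD derivative has no such parameter to tune.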
An averaging method for the second-order approximation of the values of the gradient of an arbitrary smooth function u = u(x_1, x_2) at the vertices of a regular triangulation T_h composed of both rectangles and triangles is presented. The method assumes that only the interpolant Π_h[u] of u in the finite element space of the linear triangular and bilinear rectangular finite elements on T_h is known. A complete analysis of this method is an extension of the complete analysis concerning the finite...
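The averaging principle behind such methods can be seen already in one dimension, where averaging the constant slopes of the piecewise-linear interpolant on the two mesh intervals meeting at a node reproduces the central difference and gains an order of accuracy. This 1D analogue is an illustrative assumption, not the paper's 2D rectangle/triangle construction:

```python
import math

def averaged_gradient(f, x, h):
    """Average the slopes of the piecewise-linear interpolant on the two
    intervals [x-h, x] and [x, x+h]; each slope alone is only O(h),
    but their average is the O(h^2) central difference."""
    left  = (f(x) - f(x - h)) / h
    right = (f(x + h) - f(x)) / h
    return 0.5 * (left + right)

approx = averaged_gradient(math.exp, 1.0, 1e-4)
# exact derivative of exp at 1 is e; the averaged slope is accurate to O(h^2)
```

In 2D the same idea averages the (constant or bilinear) gradients of the interpolant over the elements surrounding a vertex, with weights chosen so the error terms cancel.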
Some basic theorems and formulae (equations and inequalities) of several areas of mathematics that hold in Bernstein spaces are no longer valid in larger spaces. However, when a function f is in some sense close to a Bernstein space, the corresponding relation holds with a remainder or error term. This paper presents a new, unified approach to these errors in terms of the distance of f from the Bernstein space. The difficult case of derivative-free error estimates is also covered.
Initial-boundary value problems of Dirichlet type for parabolic functional differential equations are considered. Explicit difference schemes of Euler type and implicit difference methods are investigated. The following theoretical aspects of the methods are presented. Sufficient conditions for the convergence of approximate solutions are given and comparisons of the methods are presented. It is proved that the assumptions on the regularity of the given functions are the same for both methods. It...
The general method of averaging for the superapproximation of an arbitrary partial derivative of a smooth function in a vertex of a simplicial triangulation of a bounded polytopic domain in ℝ^d for any d ≥ 2 is described and its complexity is analysed.
We consider the classical Interpolating Moving Least Squares (IMLS) interpolant as defined by Lancaster and Šalkauskas [Math. Comp. 37 (1981) 141–158] and compute the first and second derivative of this interpolant at the nodes of a given grid with the help of a basic lemma on Shepard interpolants. We compare the difference formulae with those defining optimal finite difference methods and discuss their deviation from optimality.
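The IMLS interpolant builds on the classical Shepard construction, an inverse-distance-weighted average that reproduces nodal values exactly. A minimal 1D Shepard sketch (the nodes, values, and weight exponent `p` are assumptions for illustration; IMLS generalizes this by fitting local polynomials rather than constants):

```python
def shepard(nodes, values, x, p=2):
    """Classical Shepard interpolant: s(x) = sum(w_i f_i) / sum(w_i)
    with weights w_i = |x - x_i|^(-p). At a node the weight of that
    node dominates, so s(x_i) = f_i exactly."""
    num, den = 0.0, 0.0
    for xi, fi in zip(nodes, values):
        d = abs(x - xi)
        if d == 0.0:
            return fi          # interpolation condition s(x_i) = f_i
        w = d ** (-p)
        num += w * fi
        den += w
    return num / den

nodes  = [0.0, 1.0, 2.0]
values = [1.0, 3.0, 2.0]
s_mid  = shepard(nodes, values, 0.5)   # a convex combination of the f_i
```

Differentiating such interpolants at the nodes, as the paper does via a lemma on Shepard interpolants, yields finite-difference-like formulae whose coefficients can then be compared with optimal difference stencils.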
Sensitivity information is required by numerous applications, such as optimization algorithms, parameter estimation, or real-time control. Sensitivities can be computed with working accuracy using the forward mode of automatic differentiation (AD). ADOL-C is an AD tool for programs written in C or C++. Originally, when applying ADOL-C, tapes for values, operations and locations are written during the function evaluation to generate an internal function representation....
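The taping idea described here, recording the operations of one function evaluation so that derivatives can later be computed from the recording, can be sketched in miniature. This is a hedged illustration of the concept only, not ADOL-C's actual tape format or API; all class and function names are assumptions:

```python
class Tape:
    """Records operations and intermediate values of one evaluation."""
    def __init__(self, inputs):
        self.ops = []            # (op, left index, right index)
        self.vals = list(inputs) # inputs followed by intermediates

class Var:
    """Overloaded variable: every arithmetic operation is written to the tape."""
    def __init__(self, tape, idx):
        self.tape, self.idx = tape, idx
    def _record(self, op, a, b):
        t = self.tape
        t.ops.append((op, a, b))
        t.vals.append(t.vals[a] + t.vals[b] if op == 'add'
                      else t.vals[a] * t.vals[b])
        return Var(t, len(t.vals) - 1)
    def __add__(self, other):
        return self._record('add', self.idx, other.idx)
    def __mul__(self, other):
        return self._record('mul', self.idx, other.idx)

def forward_derivs(tape, n_inputs, seed):
    """Replay the tape in forward mode with input tangents `seed`."""
    dot = list(seed) + [0.0] * len(tape.ops)
    for k, (op, a, b) in enumerate(tape.ops):
        i = n_inputs + k
        dot[i] = (dot[a] + dot[b] if op == 'add'
                  else dot[a] * tape.vals[b] + tape.vals[a] * dot[b])
    return dot[-1]

# trace f(x, y) = x*y + x once at (3, 4)...
tape = Tape([3.0, 4.0])
x, y = Var(tape, 0), Var(tape, 1)
f = x * y + x
# ...then reuse the tape for any seed direction without re-evaluating f
dfdx = forward_derivs(tape, 2, [1.0, 0.0])   # df/dx = y + 1 = 5
dfdy = forward_derivs(tape, 2, [0.0, 1.0])   # df/dy = x = 3
```

The benefit mirrors the abstract: the function is traced once, and the internal representation can then be replayed repeatedly, for several directions or higher derivatives, without re-executing the original program.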