
Displaying 1 – 20 of 53


A note on direct methods for approximations of sparse Hessian matrices

Miroslav Tůma (1988)

Aplikace matematiky

The necessity of computing large sparse Hessian matrices gave birth to many methods for their effective approximation by differences of gradients. We adopt the so-called direct methods for this problem, which we faced when developing programs for nonlinear optimization. A new approach used in the framework of symmetric sequential coloring is described. Numerical results illustrate the differences between this method and the popular Powell-Toint method.
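The idea behind these gradient-difference approximations can be sketched in a few lines: column j of the Hessian is recovered from the change of the gradient along the j-th coordinate direction. The NumPy sketch below is illustrative only (the function name and step size are assumptions, and no coloring is performed, so it spends n gradient calls rather than the few that direct methods achieve by grouping structurally independent columns):

```python
import numpy as np

def hessian_by_gradient_differences(grad, x, h=1e-6):
    """Approximate the Hessian of f at x by forward differences of its
    gradient: column j ~ (grad(x + h*e_j) - grad(x)) / h.  Direct
    methods cut the number of gradient calls by grouping columns whose
    sparsity patterns do not overlap; this sketch takes every column
    separately."""
    n = x.size
    g0 = grad(x)
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        H[:, j] = (grad(x + e) - g0) / h
    return 0.5 * (H + H.T)          # symmetrize the approximation

# Example: f(x) = x0**2 + 3*x0*x1 with gradient (2*x0 + 3*x1, 3*x0)
grad = lambda x: np.array([2 * x[0] + 3 * x[1], 3 * x[0]])
H = hessian_by_gradient_differences(grad, np.array([1.0, 2.0]))
```

For a sparse Hessian, columns whose nonzero patterns do not intersect can share a single difference of gradients, which is exactly what the coloring-based direct methods exploit.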

Approximations of the partial derivatives by averaging

Josef Dalík (2012)

Open Mathematics

A straightforward generalization of a classical method of averaging is presented and its essential characteristics are discussed. The method constructs high-order approximations of the l-th partial derivatives of smooth functions u in inner vertices a of conformal simplicial triangulations T of bounded polytopic domains in ℝ^d for arbitrary d ≥ 2. For any k ≥ l ≥ 1, it uses the interpolants of u in the polynomial Lagrange finite element spaces of degree k on the simplices with vertex a only. The...

Automatic differentiation and its program realization

Jan Hartman, Ladislav Lukšan, Jan Zítko (2009)

Kybernetika

Automatic differentiation is an effective method for evaluating the derivatives of a function defined by a formula or a program. Automatic differentiation modifies a program that evaluates the value of a function into a program that also evaluates the values of its derivatives. The computed values are exact up to computer precision and their evaluation is very quick. In this article, we describe a program realization of automatic differentiation. This implementation is prepared in the system UFO, but its...
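The mechanism described above can be illustrated with a minimal forward-mode sketch (the class and function names are hypothetical; the actual realization in UFO is of course far more elaborate): each value carries its derivative along with it, and the arithmetic operators propagate both by the chain rule, so the result is exact up to machine precision rather than a divided-difference approximation.

```python
class Dual:
    """Minimal forward-mode AD value: carries f(x) and f'(x) together.
    Arithmetic on Dual numbers applies the chain rule exactly."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f and f' at x in one pass by seeding der = 1."""
    return f(Dual(x, 1.0)).der

# d/dx (x*x + 3*x) at x = 2 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0
```

A full tool also handles subtraction, division, elementary functions, and many independent variables, but every extension follows the same pattern: overload the operation and attach its derivative rule.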

Automatic differentiation platform : design

Christèle Faure (2002)

ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique

Automatic differentiation (AD) has proven its interest in many fields of applied mathematics, but it is still not widely used. Furthermore, existing numerical methods have been developed under the hypothesis that computing program derivatives is not affordable for real-size problems. Exact derivatives have therefore been avoided, or replaced by approximations computed by divided differences. This hypothesis is no longer true, due to the maturity of AD combined with the quick evolution of machine capacity....

Automatic Differentiation Platform: Design

Christèle Faure (2010)

ESAIM: Mathematical Modelling and Numerical Analysis

Automatic differentiation (AD) has proven its interest in many fields of applied mathematics, but it is still not widely used. Furthermore, existing numerical methods have been developed under the hypothesis that computing program derivatives is not affordable for real-size problems. Exact derivatives have therefore been avoided, or replaced by approximations computed by divided differences. This hypothesis is no longer true, due to the maturity of AD combined with the quick evolution of machine capacity....

Averaging of gradient in the space of linear triangular and bilinear rectangular finite elements

Josef Dalík, Václav Valenta (2013)

Open Mathematics

An averaging method for the second-order approximation of the values of the gradient of an arbitrary smooth function u = u(x_1, x_2) at the vertices of a regular triangulation T_h composed both of rectangles and triangles is presented. The method assumes that only the interpolant Π_h[u] of u in the finite element space of the linear triangular and bilinear rectangular finite elements from T_h is known. A complete analysis of this method is an extension of the complete analysis concerning the finite...
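A much simpler relative of such recovery operators, shown here only to fix ideas, is the area-weighted average of the constant gradients of the linear interpolant on the triangles meeting at a vertex. The NumPy sketch below is a generic illustration (names are assumptions); this plain average is only first-order accurate in general, unlike the second-order method of the paper, and it ignores the rectangular elements:

```python
import numpy as np

def element_gradient(tri, vals):
    """Constant gradient of the linear interpolant on one triangle.
    tri: 3x2 array of vertex coordinates, vals: the 3 nodal values."""
    p0, p1, p2 = tri
    A = np.array([p1 - p0, p2 - p0])              # 2x2 edge matrix
    b = np.array([vals[1] - vals[0], vals[2] - vals[0]])
    return np.linalg.solve(A, b)                  # since A @ grad = b

def averaged_gradient(triangles, values):
    """Area-weighted average of the element gradients of the triangles
    sharing a vertex -- the plain first-order cousin of the recovery
    operators analysed in the paper."""
    grads, areas = [], []
    for tri, vals in zip(triangles, values):
        p0, p1, p2 = tri
        area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                         - (p2[0] - p0[0]) * (p1[1] - p0[1]))
        grads.append(element_gradient(tri, vals))
        areas.append(area)
    return np.average(grads, axis=0, weights=np.array(areas))

# u(x, y) = 2x + y on two triangles covering the unit square:
# every element gradient is (2, 1), hence so is the average.
tris = [np.array([[0., 0.], [1., 0.], [0., 1.]]),
        np.array([[1., 0.], [1., 1.], [0., 1.]])]
vals = [np.array([0., 2., 1.]), np.array([2., 3., 1.])]
g = averaged_gradient(tris, vals)
```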

Basic relations valid for the Bernstein spaces B_σ^2 and their extensions to larger function spaces via a unified distance concept

P. L. Butzer, R. L. Stens, G. Schmeisser (2014)

Banach Center Publications

Some basic theorems and formulae (equations and inequalities) of several areas of mathematics that hold in Bernstein spaces B_σ^p are no longer valid in larger spaces. However, when a function f is in some sense close to a Bernstein space, then the corresponding relation holds with a remainder or error term. This paper presents a new, unified approach to these errors in terms of the distance of f from B_σ^p. The difficult situation of derivative-free error estimates is also covered.

Comparison of explicit and implicit difference schemes for parabolic functional differential equations

Zdzisław Kamont, Karolina Kropielnicka (2012)

Annales Polonici Mathematici

Initial-boundary value problems of Dirichlet type for parabolic functional differential equations are considered. Explicit difference schemes of Euler type and implicit difference methods are investigated. The following theoretical aspects of the methods are presented. Sufficient conditions for the convergence of approximate solutions are given and comparisons of the methods are presented. It is proved that the assumptions on the regularity of the given functions are the same for both methods. It...
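For orientation, the simplest member of the explicit family compared above is the Euler scheme for the pure heat equation u_t = u_xx with no functional term. The sketch below (illustrative names and parameters, plain NumPy) shows the scheme together with the stability restriction that implicit methods avoid at the price of solving linear systems:

```python
import numpy as np

def explicit_heat(u0, T, dx, dt):
    """Explicit Euler difference scheme for u_t = u_xx on (0, 1) with
    homogeneous Dirichlet boundary values, advanced from u0 to time T.
    Convergence requires the stability restriction dt <= dx**2 / 2."""
    u = u0.astype(float).copy()
    r = dt / dx**2
    for _ in range(int(round(T / dt))):
        # new interior values from the old ones (RHS is evaluated first)
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        u[0] = u[-1] = 0.0                       # Dirichlet boundary
    return u

# Initial profile sin(pi*x); the exact solution decays like exp(-pi^2 t)
x = np.linspace(0.0, 1.0, 51)
u = explicit_heat(np.sin(np.pi * x), T=0.1, dx=x[1] - x[0], dt=1e-4)
```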

Complexity of the method of averaging

Dalík, Josef (2010)

Programs and Algorithms of Numerical Mathematics

The general method of averaging for the superapproximation of an arbitrary partial derivative of a smooth function in a vertex a of a simplicial triangulation 𝒯 of a bounded polytopic domain in ℝ^d for any d ≥ 2 is described and its complexity is analysed.

Difference operators from interpolating moving least squares and their deviation from optimality

Thomas Sonar (2005)

ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique

We consider the classical Interpolating Moving Least Squares (IMLS) interpolant as defined by Lancaster and Šalkauskas [Math. Comp. 37 (1981) 141–158] and compute the first and second derivative of this interpolant at the nodes of a given grid with the help of a basic lemma on Shepard interpolants. We compare the difference formulae with those defining optimal finite difference methods and discuss their deviation from optimality.
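The Shepard interpolant that the basic lemma refers to is the inverse-distance-weighted average of the nodal values. A one-dimensional sketch (the names and the weight exponent p = 2 are illustrative assumptions, not the paper's setting):

```python
import numpy as np

def shepard(x, nodes, values, p=2):
    """Shepard interpolant in 1D: an inverse-distance-weighted average
    of the nodal values.  The weight is singular at a node, where the
    interpolant simply takes the nodal value."""
    d = np.abs(x - nodes)
    if np.any(d == 0.0):
        return float(values[np.argmin(d)])
    w = d ** (-p)
    return float(np.dot(w, values) / w.sum())

nodes = np.array([0.0, 1.0, 2.0])
values = np.array([0.0, 1.0, 4.0])
```

The IMLS interpolant of Lancaster and Šalkauskas generalizes this by fitting a local polynomial with the same singular weights, which is what makes its derivatives at the nodes well defined.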

Difference operators from interpolating moving least squares and their deviation from optimality

Thomas Sonar (2010)

ESAIM: Mathematical Modelling and Numerical Analysis

We consider the classical Interpolating Moving Least Squares (IMLS) interpolant as defined by Lancaster and Šalkauskas [Math. Comp. 37 (1981) 141–158] and compute the first and second derivative of this interpolant at the nodes of a given grid with the help of a basic lemma on Shepard interpolants. We compare the difference formulae with those defining optimal finite difference methods and discuss their deviation from optimality.

Efficient calculation of sensitivities for optimization problems

Andreas Kowarz, Andrea Walther (2007)

Discussiones Mathematicae, Differential Inclusions, Control and Optimization

Sensitivity information is required by numerous applications such as, for example, optimization algorithms, parameter estimations or real time control. Sensitivities can be computed with working accuracy using the forward mode of automatic differentiation (AD). ADOL-C is an AD-tool for programs written in C or C++. Originally, when applying ADOL-C, tapes for values, operations and locations are written during the function evaluation to generate an internal function representation....

