Semiautomatic differentiation for efficient gradient computations. (English) Zbl 1270.65012
Bücker, Martin (ed.) et al., Automatic differentiation: Applications, theory, and implementations. Selected papers based on the presentations at the 4th international conference on automatic differentiation (AD), Chicago, IL, USA, July 20–23, 2004. Berlin: Springer (ISBN 3-540-28403-6/pbk). Lecture Notes in Computational Science and Engineering 50, 147-158 (2006).
Summary: Many large-scale computations involve a mesh and first (or sometimes higher) partial derivatives of functions of mesh elements. In principle, automatic differentiation (AD) can provide the requisite partials more efficiently and accurately than conventional finite-difference approximations. AD requires source-code modifications, which may be little more than changes to declarations. Such simple changes can easily give improved results, e.g., when Jacobian-vector products are used iteratively to solve nonlinear equations. When gradients are required (say, for optimization) and the problem involves many variables, “backward AD” is in theory very efficient, but when carried out automatically and straightforwardly, it may use a prohibitive amount of memory. In this case, applying AD separately to each element function and manually assembling the gradient pieces – semiautomatic differentiation – can deliver gradients efficiently and accurately. This paper concerns ongoing work; it compares several implementations of backward AD, describes a simple operator-overloading implementation specialized for gradient computations, and compares them on some mesh-optimization examples. Ideas from the specialized implementation could be used in fully general source-to-source translators for C and C++.
For the entire collection see [Zbl 1084.65002].
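
The gradient-assembly idea admits a compact illustration. The sketch below is hypothetical and not the paper's code: a minimal tape-based reverse-mode class (operator overloading in C++) is applied to each element of a toy one-dimensional mesh energy, the sum of squared edge lengths, and the per-element adjoints are scattered by hand into the global gradient.

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// Minimal reverse-mode tape, recording one element function at a time.
// Hypothetical sketch; the paper's specialized implementation differs.
struct Tape {
    struct Node { double da, db; int ia, ib; };  // local partials, parent indices
    std::vector<Node> nodes;
    int newvar(double da = 0, int ia = -1, double db = 0, int ib = -1) {
        nodes.push_back({da, db, ia, ib});
        return (int)nodes.size() - 1;
    }
    void clear() { nodes.clear(); }
};

struct Var { double val; int idx; Tape* t; };

Var mkvar(Tape& t, double v) { return {v, t.newvar(), &t}; }
// Each overload records its local partial derivatives on the tape.
Var operator+(Var a, Var b) { return {a.val + b.val, a.t->newvar(1.0, a.idx,  1.0, b.idx), a.t}; }
Var operator-(Var a, Var b) { return {a.val - b.val, a.t->newvar(1.0, a.idx, -1.0, b.idx), a.t}; }
Var operator*(Var a, Var b) { return {a.val * b.val, a.t->newvar(b.val, a.idx, a.val, b.idx), a.t}; }

// Reverse sweep: adjoints of every tape variable w.r.t. output `out`.
std::vector<double> adjoints(const Tape& t, int out) {
    std::vector<double> adj(t.nodes.size(), 0.0);
    adj[out] = 1.0;
    // A node's parents have smaller indices, so one backward pass suffices.
    for (int i = out; i >= 0; --i) {
        const Tape::Node& n = t.nodes[i];
        if (n.ia >= 0) adj[n.ia] += n.da * adj[i];
        if (n.ib >= 0) adj[n.ib] += n.db * adj[i];
    }
    return adj;
}

int main() {
    // Toy "mesh": three nodes on a line; the coordinates are the unknowns.
    std::vector<double> x = {0.0, 0.4, 1.0};
    std::vector<std::pair<int, int>> edges = {{0, 1}, {1, 2}};
    std::vector<double> grad(x.size(), 0.0);

    Tape tape;
    for (auto [i, j] : edges) {
        tape.clear();                          // a fresh, tiny tape per element
        Var xi = mkvar(tape, x[i]), xj = mkvar(tape, x[j]);
        Var d = xj - xi;
        Var phi = d * d;                       // element energy (x_j - x_i)^2
        std::vector<double> adj = adjoints(tape, phi.idx);
        grad[i] += adj[xi.idx];                // manual assembly of the
        grad[j] += adj[xj.idx];                // global gradient
    }
    for (std::size_t k = 0; k < x.size(); ++k)
        std::printf("grad[%zu] = %g\n", k, grad[k]);   // -0.8, -0.4, 1.2
}
```

Because each element function touches only a handful of unknowns, the tape stays tiny no matter how large the global problem grows, which is how semiautomatic differentiation sidesteps the memory cost of taping the entire computation.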

MSC:
65D25 Numerical differentiation
65Y99 Computer aspects of numerical algorithms
Software:
ADOL-C; CompAD; NAGWare; TAF; TFad