## Evaluating derivatives. Principles and techniques of algorithmic differentiation.
*(English)*
Zbl 0958.65028

Frontiers in Applied Mathematics. 19. Philadelphia, PA: SIAM, Society for Industrial and Applied Mathematics. xxiv, 369 p. (2000).

Frequently, numerical values of a composite function are calculated by a computer program. Algorithmic, or automatic, differentiation (AD) is concerned with the efficient and accurate evaluation of derivatives of such functions. The resulting derivatives can be used in numerous algorithms for nonlinear problems.
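To make the idea concrete (this sketch is the reviewer's illustration, not material from the book), the forward mode of AD can be implemented with a minimal dual-number class that propagates a derivative alongside each value through every arithmetic operation; the class `Dual` and the example function are hypothetical:

```python
import math

class Dual:
    """Forward-mode AD: carry (value, derivative) through every operation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def exp(d):
    # chain rule for an elemental function: (e^u)' = e^u * u'
    return Dual(math.exp(d.val), math.exp(d.val) * d.dot)

# f(x) = x * exp(x) + 3, differentiated at x = 1 by seeding dot = 1
x = Dual(1.0, 1.0)
f = x * exp(x) + 3.0
print(f.dot)  # f'(x) = e^x * (1 + x), so 2e at x = 1
```

The derivative emerges exactly (to machine precision), with no truncation error of the kind divided differences would introduce.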

The new book is written by a specialist in AD and presents a comprehensive treatment of the subject. Earlier textbooks on AD [cf. L. Rall, Automatic differentiation: Techniques and applications (1981; Zbl 0473.68025) and H. Kagiwada, R. Kalaba, N. Rasakhoo and K. Spingarn, Numerical derivatives and nonlinear analysis (1986; Zbl 0665.65017)] cover only the forward mode of AD. The book under review describes all chain-rule-based techniques for evaluating derivatives of composite functions, with emphasis on the reverse mode. Gradients can always be computed cheaply by the reverse mode, while the cost of evaluating Jacobian and Hessian matrices depends on the problem structure and its efficient exploitation. Attempts to minimize operation count and memory requirements lead to hard combinatorial optimization problems.
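The reverse mode can be sketched, in a highly simplified form, as one backward sweep over the recorded computational graph; the class and function names below are the reviewer's illustration, not the book's notation. A single sweep yields the whole gradient, regardless of the number of inputs:

```python
import math

class Var:
    """Minimal reverse-mode AD node: value plus (parent, local partial) links."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs (parent node, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def sin(v):
    return Var(math.sin(v.value), [(v, math.cos(v.value))])

def backward(output):
    """Reverse sweep: propagate d(output)/d(node) to every node."""
    output.grad = 1.0
    order, seen = [], set()
    def visit(v):                      # topological order by depth-first search
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                visit(p)
            order.append(v)
    visit(output)
    for v in reversed(order):
        for p, local in v.parents:
            p.grad += local * v.grad   # chain rule, accumulated backwards

# f(x, y) = x*y + sin(x); both partials from one reverse sweep
x, y = Var(2.0), Var(3.0)
f = x * y + sin(x)
backward(f)
print(x.grad)  # df/dx = y + cos(x) = 3 + cos(2)
print(y.grad)  # df/dy = x = 2
```

The "cheap gradient" observation the review mentions is visible here: the cost of `backward` is proportional to the number of recorded operations, not to the number of independent variables.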

This book is divided into three parts.

Part I (with Chapters 2-5) presents an introduction to the fundamentals of AD and its software. In general, Jacobian and Hessian matrices cannot be obtained cheaply.

Part II (with Chapters 6-9) examines the typical situation where Jacobians and Hessians are sparse or otherwise structured, so that they can be computed cheaply by matrix compression.
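Matrix compression can be illustrated with a small sketch (the reviewer's example, not taken from the book): for a hypothetical function with a tridiagonal Jacobian, columns whose nonzero rows are disjoint can share a single seed direction, so three directional derivatives recover all six columns. Finite differences stand in here for the forward-mode AD products that would normally supply the compressed columns:

```python
# Hypothetical residual function with a tridiagonal Jacobian:
# F_i depends only on x_{i-1}, x_i, x_{i+1}.
def F(x):
    n = len(x)
    return [x[i] ** 2
            + (x[i - 1] if i > 0 else 0.0)
            - (x[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

n, h = 6, 1e-7
x = [1.0 + 0.2 * i for i in range(n)]
Fx = F(x)

# Group columns j with equal j % 3: their nonzero rows are disjoint,
# so 3 seed directions suffice instead of n = 6 (compressed evaluation).
J = [[0.0] * n for _ in range(n)]
for g in range(3):
    seed = [1.0 if j % 3 == g else 0.0 for j in range(n)]
    xs = [xi + h * si for xi, si in zip(x, seed)]
    Fv = [(a - b) / h for a, b in zip(F(xs), Fx)]  # one directional derivative
    for j in range(g, n, 3):
        for i in range(max(0, j - 1), min(n, j + 2)):  # rows where column j is nonzero
            J[i][j] = Fv[i]

# Analytic Jacobian: diagonal 2*x_i, subdiagonal +1, superdiagonal -1
print([round(v, 4) for v in J[1]])  # [1.0, 2.4, -1.0, 0.0, 0.0, 0.0]
```

With AD products instead of divided differences, the recovered entries would be exact; the grouping itself is the hard combinatorial part that the review alludes to.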

Part III (with Chapters 10-12) contains more advanced material (very large problems, higher derivatives, generalized gradients, differentiation of codes with nondifferentiabilities), which will be of interest mostly to researchers.

Each chapter concludes with many examples and exercises that are also suitable for students with a basic knowledge of calculus, procedural programming, and numerical linear algebra.

This well-written book will be very useful for graduate students, mathematicians, and engineers who are interested in efficient algorithms for nonlinear problems.

Reviewer: Manfred Tasche (Rostock)

### MSC:

| Code | Classification |
| --- | --- |
| 65D25 | Numerical differentiation |
| 65-02 | Research exposition (monographs, survey articles) pertaining to numerical analysis |
| 65F50 | Computational methods for sparse matrices |
| 68-02 | Research exposition (monographs, survey articles) pertaining to computer science |
| 68W30 | Symbolic computation and algebraic computation |