## Numerical analysis. Theory and experiments.

*(English)* Zbl 1442.65001

The market for undergraduate numerical analysis textbooks is very crowded. There are many reasons for this, as discussed in N. J. Higham’s lovely column https://sinews.siam.org/Details-Page/in-search-of-the-perfect-numerical-analysis-textbook, where he says “Many of us have at some time struggled to find a completely satisfying textbook for a course we need to teach”, and then goes on both to ask why this is so and to offer his answers. Among those answers is the observation that “numerical analysis continues to evolve”, and this is certainly true.

Brian Sutton’s 2019 book stands out in this crowded market, in part because it is remarkably up to date with the evolution of numerical analysis, even though it is aimed directly at beginning numerical analysis students. I believe that this will make an exceptionally good textbook for a beginning course. It has 34 chapters and four appendices, arranged in seven sections. The topics covered are, in order: I. Computation, II. Interpolation, III. Integration (with a chapter on differentiation first), IV. Systems of Linear Equations, V. Linear Differential Equations, VI. Zero Finding, and VII. Nonlinear Differential Equations. The four appendices cover interpolation with repeated nodes, complex functions, interpolation in the complex plane, and additional proofs for Newton-Cotes quadrature. Each chapter has about twenty student-level exercises.

You can already see that the choice of topics is not quite standard for a first course, and the ordering of topics is unusual as well. Further differences appear as one looks more closely. There is nothing really out of the ordinary about the first section, except perhaps its unusual clarity and detail: in my opinion this is exactly the right level of detail for beginning students. I admire his concise but complete explanation of precision versus accuracy, for instance. The next section, on interpolation, is unusual in that it gets to the barycentric form almost immediately, as a modern treatment should: this is the right approach, because the barycentric form is numerically stable. Even beginning students should learn it.
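To give a flavour of what the barycentric form looks like in practice, here is a minimal sketch in Python (my own illustration, not code from the book): the weights are computed once, and each evaluation then uses the second (true) barycentric formula.

```python
# Second (true) barycentric form of Lagrange interpolation.
# Illustration only -- not code from the book under review.

def barycentric_weights(nodes):
    """w_j = 1 / prod_{k != j} (x_j - x_k)."""
    w = []
    for j, xj in enumerate(nodes):
        p = 1.0
        for k, xk in enumerate(nodes):
            if k != j:
                p *= xj - xk
        w.append(1.0 / p)
    return w

def barycentric_eval(x, nodes, values, w):
    """Evaluate the interpolant at x via the second barycentric form."""
    # At a node, return the data value directly (avoids 0/0).
    for xj, fj in zip(nodes, values):
        if x == xj:
            return fj
    num = sum(wj * fj / (x - xj) for wj, fj, xj in zip(w, values, nodes))
    den = sum(wj / (x - xj) for wj, xj in zip(w, nodes))
    return num / den

nodes = [0.0, 1.0, 2.0]
values = [0.0, 1.0, 4.0]          # samples of f(x) = x^2
w = barycentric_weights(nodes)
print(barycentric_eval(1.5, nodes, values, w))  # 2.25
```

Note that the weights cost O(n^2) once, after which each evaluation is O(n); this efficiency, together with the numerical stability mentioned above, is why the barycentric form is the modern method of choice.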

His chapter on differentiation in the Integration section is also different, concentrating as it does on differentiation matrices (again I believe this to be both modern and correct, though perhaps I am biased, having just published a paper on differentiation matrices myself). The rest of the Integration section departs even further from classical treatments, using integration matrices. To appreciate the way he does it, I believe one must adopt similar pedagogical goals. I infer from the style and content that the author believes the purpose of the course is to teach students to solve problems by writing their own special-purpose codes, not simply to use prepackaged software. In spirit, then, this text is quite old-fashioned (and this time that is not a bad thing). Many topics are left out of this section – indeed out of the whole book – but in the limited time one has to teach this course, usually a single term, one absolutely has to leave things out. One can argue about which topics are inessential enough to omit, but the consequences of finite teaching time are inescapable.
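For readers unfamiliar with the idea: a differentiation matrix maps samples of a function at the nodes to samples of the derivative of its interpolant. A minimal sketch, using the standard barycentric-weight formulas rather than anything from the book:

```python
import numpy as np

def diff_matrix(nodes):
    """Differentiation matrix for polynomial interpolation at the nodes.

    Off-diagonal: D[j, k] = (w_k / w_j) / (x_j - x_k); each diagonal
    entry makes its row sum to zero (the derivative of a constant is
    zero). Illustration only -- not code from the book under review.
    """
    x = np.asarray(nodes, dtype=float)
    n = len(x)
    # Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)
    w = np.array([1.0 / np.prod([x[j] - x[k] for k in range(n) if k != j])
                  for j in range(n)])
    D = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if j != k:
                D[j, k] = (w[k] / w[j]) / (x[j] - x[k])
        D[j, j] = -D[j].sum()
    return D

x = [0.0, 1.0, 2.0]
D = diff_matrix(x)
f = np.array([0.0, 1.0, 4.0])   # f(x) = x^2 at the nodes
print(D @ f)                    # exact here: f'(x) = 2x gives [0, 2, 4]
```

The matrix is exact for polynomials up to the degree of the interpolant, which is why a quadratic is differentiated exactly by three nodes.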

As a benefit of aggressively leaving inessential things out, the author is able to put other essential pedagogical things in, for instance on linear differential equations. The section on linear differential equations is, I believe, necessary if the students have not yet had a course in differential equations, and this seems a good way to do it. Again the treatment leaves out many details, but it should allow students to get up to speed with collocation very quickly. Collocation is a good choice of numerical topic if one wants to teach a course in finite elements later, for instance, or even just boundary value problems for ODEs, so I agree with the author’s choice of numerical method for this section.
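To illustrate collocation in this spirit (my own sketch, not the author's code): solve the boundary value problem u'' = 2 on [0, 1] with u(0) = 0 and u(1) = 1, whose exact solution is u(x) = x^2, by enforcing the ODE at the interior node and the boundary conditions at the endpoints.

```python
import numpy as np

def diff_matrix(nodes):
    """Differentiation matrix via barycentric weights (illustration only)."""
    x = np.asarray(nodes, dtype=float)
    n = len(x)
    w = np.array([1.0 / np.prod([x[j] - x[k] for k in range(n) if k != j])
                  for j in range(n)])
    D = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if j != k:
                D[j, k] = (w[k] / w[j]) / (x[j] - x[k])
        D[j, j] = -D[j].sum()
    return D

x = np.array([0.0, 0.5, 1.0])
D = diff_matrix(x)
D2 = D @ D                       # second-derivative matrix
A = D2.copy()
rhs = np.full(len(x), 2.0)       # collocate u'' = 2 at every node
# Replace the first and last collocation equations by the boundary
# conditions u(0) = 0 and u(1) = 1.
A[0, :] = 0.0;  A[0, 0] = 1.0;   rhs[0] = 0.0
A[-1, :] = 0.0; A[-1, -1] = 1.0; rhs[-1] = 1.0
u = np.linalg.solve(A, rhs)
print(u)                         # matches u(x) = x^2 at the nodes
```

The same pattern — build a differentiation matrix, impose the equation at collocation nodes, overwrite boundary rows — scales directly to more nodes and to variable-coefficient linear ODEs, which is what makes collocation such an efficient entry point pedagogically.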

For the zero-finding section, I also agree with the author about what he and I believe to be a very good companion matrix pencil method for finding the roots of polynomials expressed in the Lagrange basis. You will see why I agree so readily when you read the chapter, which pleasantly surprised me. The companion pencil method discussed here really works because it takes advantage of the exceptionally good conditioning of polynomials expressed in this manner (see [R. M. Corless and S. M. Watt, “Bernstein bases are optimal, but, sometimes, Lagrange bases are better”, SYNASC Proceedings, Timisoara (2004)]). The subsequent chapter on Newton’s method is perhaps less modern, but still very useful.
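To sketch the flavour of such a method (my own minimal arrowhead-pencil construction in the spirit of the Lagrange-basis approach cited above, not the book's code): given values p_j at nodes x_j with barycentric weights w_j, the finite generalized eigenvalues of the pencil (A, B) below are the roots of the interpolating polynomial; the remaining eigenvalues are infinite.

```python
import numpy as np
from scipy.linalg import eig

def lagrange_companion_roots(nodes, values):
    """Roots of the polynomial taking the given values at the given nodes.

    Builds an arrowhead pencil one row and column larger than the number
    of nodes; its finite generalized eigenvalues are the roots, and two
    eigenvalues are infinite. Illustration only.
    """
    x = np.asarray(nodes, dtype=float)
    p = np.asarray(values, dtype=float)
    n = len(x)
    # Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)
    w = np.array([1.0 / np.prod([x[j] - x[k] for k in range(n) if k != j])
                  for j in range(n)])
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = -p                # arrow shaft: the sampled values
    A[1:, 0] = w                 # arrow shaft: the barycentric weights
    A[1:, 1:] = np.diag(x)       # arrowhead: the nodes on the diagonal
    B = np.eye(n + 1)
    B[0, 0] = 0.0                # makes det(A - zB) proportional to p(z)
    lam, _ = eig(A, B)           # generalized eigenvalues (some infinite)
    return np.sort_complex(lam[np.isfinite(lam)])

# p(z) = z^2 - 1 sampled at the nodes 0, 1, 2
roots = lagrange_companion_roots([0.0, 1.0, 2.0], [-1.0, 0.0, 3.0])
print(roots.real)                # roots -1 and 1, up to rounding
```

The determinant of A - zB reduces, by a Schur complement on the diagonal block, to the first barycentric form of p(z), which is why the finite eigenvalues are exactly the roots; working directly from the sampled values is what preserves the good conditioning.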

The final chapters on nonlinear ODE will be difficult to get to in a single-term course. Still, if one has good students, one could do so.

All in all, a very welcome standout addition to the crowded market.

Reviewer: Rob Corless (London)

### MSC:

| Code | Classification |
|---|---|
| 65-01 | Introductory exposition (textbooks, tutorial papers, etc.) pertaining to numerical analysis |