*(English)* Zbl 1026.15004

In numerous areas of applied mathematics the need has long been felt for some kind of partial inverse of a matrix that is singular or even rectangular. Generalized inverses of matrices were first noted by E. H. Moore (1920), who defined a unique inverse for every constant matrix, although generalized inverses of differential and integral operators had already been mentioned in print by Fredholm (1903), Hilbert (1904) and others. A summary of Moore's work is given in the Appendix of the book. In 1955 Penrose showed that Moore's inverse of a given matrix $A$ is the unique matrix $X$ satisfying the four equations

$$AXA=A \;(1),\qquad XAX=X \;(2),\qquad (AX)^{*}=AX \;(3),\qquad (XA)^{*}=XA \;(4),$$

where the symbol $*$ denotes the conjugate transpose. In view of this later rediscovery and its importance, this unique inverse is now called the Moore-Penrose inverse.
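As a quick numerical illustration (not taken from the book), the four Penrose equations can be verified with NumPy's `pinv`, which computes the Moore-Penrose inverse of an arbitrary rectangular matrix:

```python
import numpy as np

# A rectangular (hence non-invertible) matrix.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])

X = np.linalg.pinv(A)  # Moore-Penrose inverse of A

# The four Penrose equations characterize X uniquely.
assert np.allclose(A @ X @ A, A)              # (1) AXA = A
assert np.allclose(X @ A @ X, X)              # (2) XAX = X
assert np.allclose((A @ X).conj().T, A @ X)   # (3) (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)   # (4) (XA)* = XA
```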

In the Introduction, the authors describe the transition from the familiar inverse of square nonsingular matrices to the generalized inverse of rectangular matrices. A historical note on the discovery of the generalized inverse, first for integral and differential operators (1903-1931) and then for constant matrices (1920-1955), is also given there.

Chapter 0 contains preliminary results from linear algebra that are used in subsequent chapters, such as scalars, vectors, linear transformations and matrices, elementary operations and permutations, Hermite normal forms, Jordan and Smith normal forms, etc. This chapter can be skipped on a first reading.

Chapter 1 introduces the $\{i,j,\dots,k\}$-inverse as the inverse satisfying equations $(i),(j),\dots,(k)$ among equations (1)–(4). It then studies the existence and construction of the various inverses, i.e. $\{1\}$-inverses (known as pseudoinverses or generalized inverses), $\{1,2\}$-inverses (semi-inverses or reciprocal inverses), $\{1,2,3\}$-inverses, $\{1,2,4\}$-inverses, and the $\{1,2,3,4\}$-inverse (the Moore-Penrose inverse, also called the general reciprocal or generalized inverse).

In Chapter 2, a characterization of the various generalized inverses is given in terms of solutions of specific linear systems. Other results presented in this chapter are the following: a) generalized inverses with prescribed range are constructed, b) restricted generalized inverses are defined and used in the solution of "constrained" linear equations, c) the Bott-Duffin inverse is defined and used in the solution of electrical network problems, and d) applications of {1}- and {1,2}-inverses to interval linear programming and to the integral solution of linear equations, respectively, are given.

In Chapter 3 various generalized inverses are characterized and studied in terms of their minimization properties with respect to the class of ellipsoidal (or weighted Euclidean) norms and the more general class of essentially strictly convex norms. An extremal property of the Bott-Duffin inverse, with application to electrical networks, is also given.

Chapter 4 studies generalized inverses having some of the spectral properties, i.e. properties related to eigenvalues and eigenvectors, possessed by the inverse of a nonsingular matrix. Only square matrices are considered, since only they have eigenvalues and eigenvectors. More specifically, the chapter deals with the inverse $X$ satisfying $A^{k}XA=A^{k}$, $XAX=X$, $AX=XA$, where $k$ is the index of $A$; this inverse is called the Drazin inverse. The spectral properties of the Drazin inverse are established, and a particular case, the group inverse, is also studied. Finally, the quasi-commuting inverse and the strong spectral inverse are defined.
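As a small numerical sketch (not the book's construction), the Drazin inverse of a matrix of known index $k$ can be computed from the well-known identity $A^{D}=A^{k}\,(A^{2k+1})^{\dagger}\,A^{k}$, and its defining equations then checked directly:

```python
import numpy as np

def drazin(A, k):
    """Drazin inverse of A with index k, via the identity
    A^D = A^k (A^(2k+1))^+ A^k (an assumed closed form for
    this sketch; the book develops other constructions)."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[2.0, 0.0],
              [0.0, 0.0]])   # index k = 1
k = 1
X = drazin(A, k)

Ak = np.linalg.matrix_power(A, k)
assert np.allclose(Ak @ X @ A, Ak)   # A^k X A = A^k
assert np.allclose(X @ A @ X, X)     # X A X = X
assert np.allclose(A @ X, X @ A)     # A X = X A
```

For this matrix the index is 1, so the Drazin inverse coincides with the group inverse.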

In computing a generalized or ordinary inverse of a matrix, the difficulty of the problem may be reduced if the matrix is partitioned into submatrices. Chapter 5 studies generalized inverses of partitioned matrices and their application to the solution of linear equations. Intersections of linear manifolds are also studied, in order to obtain common solutions of pairs of linear equations and to invert matrices partitioned by rows or columns.

Chapter 6 studies the spectral theory of rectangular matrices. The authors approach the singular value decomposition (SVD) of rectangular matrices following *C. Eckart* and *G. Young* [Bull. Am. Math. Soc. 45, 118-121 (1939; Zbl 0020.19802)]. Applications of the SVD are given and concern: a) the Schmidt approximation theorem, which approximates the original matrix by matrices of lower rank, provided the error of approximation is acceptable, b) the polar decomposition theorem, c) the study of principal angles between subspaces, d) the study of the behavior of the Moore-Penrose inverse of a perturbed matrix $A+E$ and its dependence on $A^{\dagger}$ and on the "error" $E$, and e) Penrose's generalization of the classical spectral theorem for normal matrices. Finally, a generalization of the SVD following *C. F. Van Loan* [SIAM J. Numer. Anal. 13, 76-83 (1976; Zbl 0338.65022)] is described; it concerns the simultaneous diagonalization of two matrices with $n$ columns.
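The two central facts of this chapter, the Schmidt (Eckart-Young) low-rank approximation and the expression of the Moore-Penrose inverse through the SVD, can be sketched numerically as follows (an illustration only; the matrix here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Thin SVD of a rectangular matrix: A = U diag(s) V*
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Schmidt/Eckart-Young approximation: keeping the r largest
# singular values gives the best rank-r approximation of A
# (in both the spectral and Frobenius norms).
r = 1
A_r = U[:, :r] * s[:r] @ Vt[:r, :]
assert np.linalg.matrix_rank(A_r) == r

# The Moore-Penrose inverse falls out of the same decomposition:
# A^+ = V diag(1/s) U*  (A has full column rank here, so every
# singular value is nonzero).
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T
assert np.allclose(A_pinv, np.linalg.pinv(A))
```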

Chapter 7 presents computational methods for the unrestricted {1}- and {1,2}-inverses, {2}-inverses and the Moore-Penrose inverse. Two iterative methods are given for the computation of the Moore-Penrose inverse: a) Greville's method, which is a finite iterative method, and b) an iterative method producing a sequence of matrices $\{X_k,\ k=1,2,\dots\}$ that converges to the Moore-Penrose inverse $A^{\dagger}$ as $k\to\infty$, under certain initial approximations.
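A classical scheme of the second kind is the Newton-Schulz-type iteration $X_{k+1}=X_k(2I-AX_k)$, which converges to $A^{\dagger}$ when started from $X_0=\alpha A^{*}$ with $0<\alpha<2/\sigma_{\max}(A)^2$ (a sketch of one such method; the book's specific iteration may differ in detail):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Admissible starting value: X_0 = alpha * A^T with
# 0 < alpha < 2 / sigma_max(A)^2.
alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # spectral norm = sigma_max
X = alpha * A.T

# X_{k+1} = X_k (2I - A X_k): quadratic convergence to A^+.
for _ in range(60):
    X = X @ (2 * np.eye(A.shape[0]) - A @ X)

assert np.allclose(X, np.linalg.pinv(A), atol=1e-8)
```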

Chapter 8 presents a selection of applications illustrating the richness and potential of generalized inverses. The list includes: a) the important operation of parallel sums, with applications to electrical networks etc., b) the linear statistical model, c) Newton's method for the solution of nonlinear equations, without requiring nonsingularity of the Jacobian matrix, d) the solution of continuous-time autoregressive (AR) representations, e) the properties of the transition matrix of a finite Markov chain, and f) the solution of singular linear difference equations. Finally, the last two sections deal with the matrix volume and its application to surface integrals and probability distributions.
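The parallel sum mentioned in a) is the operation $A:B = A(A+B)^{\dagger}B$; for positive scalars it reduces to the familiar formula for two resistors in parallel, which makes a convenient sanity check (a hedged sketch, not the book's development):

```python
import numpy as np

def parallel_sum(A, B):
    """Parallel sum A : B = A (A + B)^+ B, the matrix analogue
    of combining resistances in parallel."""
    return A @ np.linalg.pinv(A + B) @ B

# Scalar check: resistances 2 and 3 in parallel give 2*3/(2+3) = 6/5.
A = np.array([[2.0]])
B = np.array([[3.0]])
assert np.allclose(parallel_sum(A, B), 6.0 / 5.0)
```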

Chapter 9 is a brief and selective introduction to generalized inverses of linear operators between Hilbert spaces, with special emphasis on the similarities to the finite-dimensional case. The results are applied to integral and differential operators. Integral and series representations of generalized inverses, as well as iterative methods for their computation, are given. Minimal properties of generalized inverses of operators between Hilbert spaces, analogous to the matrix case, are also studied.

The new material added in this second edition (the first edition appeared in 1974; Zbl 0305.15001) comprises the preliminary chapter (Chapter 0), the chapter on applications (Chapter 8), an Appendix on the work of E. H. Moore, and new exercises and applications.

Each chapter is accompanied by suggestions for further reading, and the bibliography contains 901 references. The bibliography has also been posted by the authors on the Web page of the International Linear Algebra Society http://www.math.technion.ac.il//iic/research.html and is updated from time to time. The book contains more than 450 exercises at various levels of difficulty, many of which are solved in detail. This feature makes it suitable both for reference and self-study and for use as a classroom text. It can be used profitably by graduate or advanced undergraduate students, only elementary knowledge of linear algebra being assumed.

##### MSC:

- 15A09 Matrix inversion, generalized inverses
- 15-02 Research monographs (linear algebra)
- 15-03 Historical (linear algebra)
- 65F20 Overdetermined systems, pseudoinverses (numerical linear algebra)
- 15A06 Linear equations (linear algebra)
- 90C05 Linear programming
- 15-00 Reference works (linear algebra)
- 47A05 General theory of linear operators
- 62J05 Linear regression
- 65H10 Systems of nonlinear equations (numerical methods)
- 39A10 Additive difference equations
- 60J10 Markov chains (discrete-time Markov processes on discrete state spaces)
- 65F10 Iterative methods for linear systems