
Designing efficient algorithms for parallel computers. (English) Zbl 0634.68020

McGraw-Hill Series in Supercomputing and Artificial Intelligence. New York etc.: McGraw-Hill Book Company. XVI, 288 p.; DM 101.55 (1987).
In recent years many new parallel algorithms have been constructed that bear no relation to classical sequential algorithms. They have been published only in research reports, journals, or proceedings scattered all over the world. There are only a few books devoted to the design of efficient parallel algorithms, and this is one of the best of them. It has two goals: to familiarize the reader with “classical” parallel algorithms and to provide practical insight into how efficient algorithms are constructed for pipelined and array processors, multiprocessors, and multicomputers.
The book has 11 chapters, 8 of which are devoted to parallel algorithms. Each chapter includes a summary, bibliographic notes, and many exercises without answers (it is a pity that the solutions manual is available only to instructors).
In the introduction, after a short motivation of the need for high-performance parallel computers, various software and hardware methods for achieving high-performance computation are described. Flynn’s and Händler’s taxonomies for classifying architectures are explained, and arguments against the merits of high-level parallelism are discussed. The first two chapters present a number of basic processor organizations and briefly describe the architectures of array processors, associative processors, multiprocessors, and multicomputers. Superlinear speedup is discussed and possible constraints are formulated.
Chapter 3 treats the design of parallel algorithms both by exploiting the parallelism inherent in an existing sequential algorithm and by inventing or adapting new parallel algorithms. The development of algorithms for processor arrays is demonstrated by summing n values on SIMD cube-connected, 2-D mesh-connected, and perfect-shuffle-connected models. Then the problems of developing algorithms for MIMD machines are discussed: communication and synchronization, deadlock, and task scheduling.
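The cube-connected (hypercube) summation mentioned above follows the usual recursive-doubling pattern: in step k every processor combines its partial sum with that of the neighbour whose index differs in bit k. The following minimal C sketch simulates this on a single machine; the processor count P = 8, the array local, and the sample data are illustrative assumptions, not material from the book.

#include <stdio.h>

#define P 8  /* number of simulated processors, a power of two (assumption) */

int main(void) {
    /* local values held by each of the P processors (illustrative data) */
    int local[P] = {3, 1, 4, 1, 5, 9, 2, 6};

    /* Recursive doubling on a log2(P)-dimensional hypercube: in each step
       every processor adds the partial sum of the neighbour whose index
       differs in one bit.  After log2(P) steps every processor holds the
       global sum. */
    for (int bit = 1; bit < P; bit <<= 1) {
        int next[P];
        for (int i = 0; i < P; i++)
            next[i] = local[i] + local[i ^ bit];  /* simulated exchange + add */
        for (int i = 0; i < P; i++)
            local[i] = next[i];
    }

    printf("sum = %d\n", local[0]);  /* prints 31 */
    return 0;
}

On a real cube-connected machine the inner exchange is a single nearest-neighbour communication, so the whole sum takes log2(P) communication steps instead of P - 1 sequential additions.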
Chapters 4 to 9 give a systematic survey of various classes of parallel algorithms: internal sorting and the fast Fourier transform on various parallel models, parallel search procedures on tightly coupled multiprocessors, linear recurrence relations and partial differential equations on processor arrays, Gaussian elimination on multiprocessors, algorithms for problems in graph theory, and combinatorial algorithms based on the divide-and-conquer and branch-and-bound problem-solving methodologies. All of these chapters present new, readily understandable parallel algorithms for the given architectures, which is a very positive feature of the book.
Logic programming is described in Chapter 10. The basic ideas of PROLOG and Concurrent Prolog are explained, and BAGEL, a parallel architecture for executing Concurrent Prolog on a systolic array, is described. The last chapter of the book is devoted to pipelined vector processors. A vector computer is defined and concrete machines are briefly evaluated: the CRAY-1, CRAY X-MP, and CYBER-205. Using the examples of matrix multiplication and one-dimensional table lookup, the development of such algorithms for the CRAY-1 is explained. At the end of the book, a glossary with basic definitions, an up-to-date 21-page bibliography, and an index are given.
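The point behind the matrix-multiplication example for a pipelined vector processor is the choice of a loop ordering whose innermost loop sweeps long, stride-1 vectors that the machine can stream. The C sketch below shows the row-major analogue of this column-sweep (SAXPY-like) form; the matrix size N, the function name matmul, and the test data are illustrative assumptions and not the book's code.

#include <stdio.h>

#define N 4  /* illustrative matrix size */

/* C = A*B arranged so the innermost loop runs over contiguous, stride-1
   elements of C and B, i.e. the long vector operations a pipelined vector
   processor such as the CRAY-1 can keep in its pipelines.  This is the
   row-major analogue of the column-sweep form used on column-major
   Fortran machines. */
static void matmul(const double A[N][N], const double B[N][N], double C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = 0.0;
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++) {
            double a = A[i][k];              /* scalar reused across the vector loop */
            for (int j = 0; j < N; j++)      /* vectorizable, stride-1 loop */
                C[i][j] += a * B[k][j];
        }
}

int main(void) {
    double A[N][N] = {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}};
    double B[N][N] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};  /* identity */
    double C[N][N];
    matmul(A, B, C);
    printf("C[2][3] = %g\n", C[2][3]);  /* 12, since B is the identity */
    return 0;
}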
Because it emphasizes both the algorithmic and the implementation viewpoints of the results described, this book is strongly recommended for research workers in computer centres, for university students and staff, and for academic research institutes applying parallel computers to practical computational problems. The reviewed book is an important source of information on this new and fascinating topic.
Reviewer: J. Mikloško

MSC:

68W99 Algorithms in computer science
68Q25 Analysis of algorithms and problem complexity
68-02 Research exposition (monographs, survey articles) pertaining to computer science
65Y05 Parallel numerical computation
68N25 Theory of operating systems