Computer architecture and parallel processing.

*(English)*Zbl 0534.68006
McGraw-Hill Series in Computer Organization and Architecture. New York etc.: McGraw-Hill Book Company. XVIII, 846 p. DM 141.05 (1984).

The monograph provides a comprehensive treatment of pipelined vector computers, array processors, multiprocessors, dataflow computers and VLSI computers. It is devoted to advanced computer architectures, theories of parallel computing, system resource optimization, fast algorithms, efficient programming languages and the application requirements of cost-effective computer systems. These problems are investigated not in isolation but in the full complexity of their interrelations. Many results in the book are original with the authors; the rest are gathered from the work of many researchers, designers and users as reported in various journals and technical reports. The book consists of 10 chapters. Each chapter contains sections, bibliographic notes and problems. Sections marked with an asterisk are research-oriented topics. The bibliographic notes help the reader find additional references for extended study. A solutions manual for all problems is available from McGraw-Hill, but only to instructors who use the book as a text for a course in computer architecture and parallel processing. Chapter 1 introduces the basic concepts of parallel processing and computer structures and paves the way for studying the details of theories of parallel computing, machine architectures, system controls, fast algorithms, and parallel programming.

Chapter 2 describes hierarchical memory organizations and input-output subsystems. Some memory allocation and management schemes and various organizations of cache memories are presented. The structures of pipeline computers and vector processing are studied in Chapter 3. Instruction pipelines and arithmetic pipelines are investigated. Vector processing requirements are introduced with illustrative examples. Various pipelined vector supercomputer systems (STAR-100, TI-ASC, CRAY-1, CYBER-205, VP-200) and attached scientific processors (AP-120B (FPS-164), IBM 3838, MATP) are described in Chapter 4. Vectorizing compilation techniques, optimization methods and performance evaluation issues are also studied.
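The kind of loop a vectorizing compiler transforms for a pipelined vector unit can be sketched as follows; the Python function below is an illustration added for this discussion, not an example from the book (whose vector examples are in FORTRAN):

```python
# SAXPY (y := a*x + y), the canonical vectorizable loop: every iteration
# is independent, so a pipelined vector machine such as the CRAY-1 can
# stream the operands through its arithmetic pipeline.
def saxpy(a, x, y):
    result = []
    for xi, yi in zip(x, y):        # no loop-carried dependence:
        result.append(a * xi + yi)  # iterations may run in any order
    return result
```

A vectorizer recognizes precisely this dependence-free structure and replaces the scalar loop with vector instructions.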

Chapter 5 deals with interconnection structures and parallel algorithms for SIMD array processors and associative processors. The control mechanisms of array processors and their interconnection networks are investigated. The structure of associative memory in associative processors is also studied. SIMD algorithms are presented for matrix manipulation, parallel sorting, the Fast Fourier Transform, and associative search and retrieval operations. Chapter 6 gives an overview of array-structured SIMD machines: ILLIAC-IV, BSP, MPP, STARAN and PEPE. Performance enhancement methods are also provided for synchronous array processors.
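A classic example of the SIMD sorting algorithms surveyed here is odd-even transposition sort on a linear array of processing elements; in the minimal Python sketch below, the sequential inner loop stands in for compare-exchange steps that the array performs in parallel (the sketch is an assumption for illustration, not code from the book):

```python
# Odd-even transposition sort: in each of n phases, all even-indexed
# (then all odd-indexed) neighbour pairs compare and exchange at once.
# On an n-element linear SIMD array, n parallel phases suffice to sort.
def odd_even_transposition_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2                # 0: even pairs, 1: odd pairs
        for i in range(start, n - 1, 2):  # these pairs act "in parallel"
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```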

Chapters 7-9 are devoted to the hardware system architecture, operating system controls, parallel algorithms and performance evaluation of multiprocessor systems. Design experiences with three research multiprocessors \((C.mmp, S-1, Cm^*)\) are presented, and commercial multiprocessors (IBM 370/168 MP, 3033 and 3081, UNIVAC 1100/80 and 1100/90, CRAY X-MP, Tandem/16, HEP) are studied.
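Multiprocessor performance evaluation of this kind typically rests on speedup formulas; the sketch below implements the standard Amdahl's-law bound (a textbook formula given here for orientation, not quoted from the book under review):

```python
# Amdahl's-law speedup: if a fraction f of the work is inherently serial,
# n processors yield at most S(n) = 1 / (f + (1 - f) / n).
def amdahl_speedup(serial_fraction, n_processors):
    f = serial_fraction
    return 1.0 / (f + (1.0 - f) / n_processors)
```

Even a small serial fraction bounds the achievable speedup regardless of the number of processors, which is why the operating system controls and synchronization costs studied in these chapters matter so much.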

Chapter 10 investigates new computing concepts and their realization in data flow computers and VLSI computations. The requirements of data-driven computation, functional programming languages and various data flow architectures are studied. Techniques for directly mapping parallel algorithms into hardware structures are investigated, and VLSI architectures for large-scale matrix arithmetic solvers are presented. Applications of some of these VLSI structures to real-time image processing are demonstrated. The book is designed to be used by seniors and graduate students in computer science, electrical and industrial engineering, and any other field demanding the use of high-performance parallel supercomputers to solve application problems.
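The direct mapping of a matrix algorithm onto a regular VLSI structure can be conveyed by simulating a systolic array for matrix multiplication; the Python model below is a software sketch of the general idea (timing scheme and names are assumptions for illustration, not a design from the book):

```python
# Toy simulation of an n-by-n systolic array for matrix multiplication:
# each cell (i, j) holds a running sum; rows of A flow in from the left
# and columns of B from the top, skewed in time so that at step t cell
# (i, j) sees the operands with index k = t - i - j.
def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for t in range(3 * n - 2):          # the wavefront needs 3n - 2 steps
        for i in range(n):
            for j in range(n):
                k = t - i - j           # operand index arriving at (i, j)
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C
```

Over all time steps each cell accumulates exactly the n products of one inner product, so the array computes C = A·B with purely local data movement, which is what makes such structures attractive for VLSI.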

The prerequisite for reading this monograph is an introductory undergraduate course in computer organization and programming. FORTRAN is used for the vector supercomputers and Concurrent Pascal for the multiprocessors. The topics studied in the book are at present rapidly developing open areas for research and development; results are changing so quickly that no book can cover every new advance being made.

However, in the reviewer's opinion this book is one of the best currently devoted to parallel processing. It offers a broad survey of results in the field up to 1983, it is very easy to understand, and it also describes many results in the newest frontier areas: data flow computers and VLSI computation. I recommend it to all seniors and graduate students and to everyone who uses (or wants to use) high-performance computers for the solution of large-scale problems.

Reviewer: J.Miklosko

##### MSC:

68-02 | Research exposition (monographs, survey articles) pertaining to computer science |

68N99 | Theory of software |

68N25 | Theory of operating systems |