Parallel Programming and Compilers
Springer-Verlag New York Inc.
978-1-4612-8416-1 (ISBN)
The second half of the 1970s was marked by impressive advances in array/vector architectures and in vectorization techniques and compilers. This progress continued, with a particular focus on vector machines, until the middle of the 1980s. The majority of supercomputers during this period were register-to-register (Cray 1) or memory-to-memory (CDC Cyber 205) vector (pipelined) machines. However, the increasing demand for higher computational rates led naturally to parallel computers and software. Through the replication of autonomous processors in a coordinated system, one can overcome performance barriers due to technology limitations. In principle, parallelism offers unlimited performance potential; nevertheless, it is very difficult to realize this potential in practice. So far, we have seen only the tip of the iceberg called "parallel machines and parallel programming". Parallel programming in particular is a rapidly evolving art and, at present, highly empirical.

In this book we discuss several aspects of parallel programming and parallelizing compilers. Instead of trying to develop parallel programming methodologies and paradigms, we often focus on more advanced topics, assuming that the reader has an adequate background in parallel processing. The book is organized in three main parts. The first part (Chapters 1 and 2) sets the stage and focuses on program transformations and parallelizing compilers. The second part (Chapters 3 and 4) discusses scheduling for parallel machines from the practical point of view (i.e., macro- and microtasking and supporting environments). Finally, the last part (Chapters 5 through 8) covers run-time overhead, static program partitioning, static task scheduling, and speedup bounds for parallel programs.
1 Parallel Architectures and Compilers. 1.1 Introduction. 1.2 Book Overview. 1.3 Vector and Parallel Machines. 1.4 Parallelism in Programs. 1.5 Basic Concepts and Definitions.
2 Program Restructuring for Parallel Execution. 2.1 Data Dependences. 2.2 Common Optimizations. 2.3 Transformations for Vector/Parallel Loops. 2.4 Cycle Shrinking. 2.5 Loop Spreading. 2.6 Loop Coalescing. 2.7 Run-Time Dependence Testing. 2.8 Subscript Blocking. 2.9 Future Directions.
3 A Comprehensive Environment for Automatic Packaging and Scheduling of Parallelism. 3.1 Introduction. 3.2 A Comprehensive Approach to Scheduling. 3.3 Auto-Scheduling Compilers.
4 Static and Dynamic Loop Scheduling. 4.1 Introduction. 4.2 The Guided Self-Scheduling (GSS(k)) Algorithm. 4.3 Simulation Results. 4.4 Static Loop Scheduling.
5 Run-Time Overhead. 5.1 Introduction. 5.2 Bounds for Dynamic Loop Scheduling. 5.3 Overhead of Parallel Tasks. 5.4 Two Run-Time Overhead Models. 5.5 Deciding the Minimum Unit of Allocation.
6 Static Program Partitioning. 6.1 Introduction. 6.2 Methods for Program Partitioning. 6.3 Optimal Task Composition for Chains. 6.4 Details of Interprocessor Communication.
7 Static Task Scheduling. 7.1 Introduction. 7.2 Optimal Allocations for High Level Spreading. 7.3 Scheduling Independent Serial Tasks. 7.4 High Level Spreading for Complete Task Graphs. 7.5 Bounds for Static Scheduling.
8 Speedup Bounds for Parallel Programs. 8.1 Introduction. 8.2 General Bounds on Speedup. 8.3 Speedup Measures for Task Graphs. 8.4 Speedup Measures for Doacross Loops. 8.5 Multiprocessors vs. Vector/Array Machines.
References.
| Series | The Springer International Series in Engineering and Computer Science; 59 |
| --- | --- |
| Additional information | 258 p. |
| Place of publication | New York, NY |
| Language | English |
| Dimensions | 152 x 229 mm |
| Subject area | Non-fiction / Guides ► Nature / Technology ► Gardening |
| | Mathematics / Computer Science ► Computer Science ► Operating Systems / Servers |
| | Mathematics / Computer Science ► Computer Science ► Programming Languages / Tools |
| | Computer Science ► Theory / Studies ► Compiler Construction |
| | Computer Science ► Further Topics ► Hardware |
| ISBN-10 | 1-4612-8416-3 / 1461284163 |
| ISBN-13 | 978-1-4612-8416-1 / 9781461284161 |
| Condition | New |