Compilers

Principles, Techniques, and Tools: International Edition

Alfred V. Aho, Monica S. Lam, Ravi Sethi, Jeffrey D. Ullman
Book | Softcover
1000 pages
2007 | 2nd edition
Pearson (publisher)
978-0-321-49169-5 (ISBN)
CHF 239.95 incl. VAT
A newer edition of this title is available.
Compilers: Principles, Techniques, and Tools, known to professors, students, and developers worldwide as the "Dragon Book," is available in a new edition. Every chapter has been completely revised to reflect developments in software engineering, programming languages, and computer architecture that have occurred since 1986, when the last edition was published. The authors, recognizing that few readers will ever go on to construct a compiler, retain their focus on the broader set of problems faced in software design and software development.

Alfred V. Aho is the Lawrence Gussman Professor of Computer Science at Columbia University. Professor Aho has won several awards, including the Great Teacher Award for 2003 from the Society of Columbia Graduates and the IEEE John von Neumann Medal. He is a member of the National Academy of Engineering and a fellow of the ACM and IEEE.

Monica S. Lam is a Professor of Computer Science at Stanford University. She was the Chief Scientist at Tensilica and the founding CEO of moka5. She led the SUIF project, which produced one of the most popular research compilers, and pioneered numerous compiler techniques used in industry.

Ravi Sethi launched the research organization at Avaya and is president of Avaya Labs. Previously, he was a senior vice president at Bell Labs in Murray Hill and chief technical officer for communications software at Lucent Technologies. He has held teaching positions at the Pennsylvania State University and the University of Arizona, and has taught at Princeton University and Rutgers. He is a fellow of the ACM.

Jeffrey Ullman is CEO of Gradiance and the Stanford W. Ascherman Professor of Computer Science at Stanford University. His research interests include database theory, database integration, data mining, and education using the information infrastructure. He is a member of the National Academy of Engineering, a fellow of the ACM, and winner of the Karlstrom Award and the Knuth Prize.

1  Introduction

1.1 Language Processors

1.2 The Structure of a Compiler

1.3 The Evolution of Programming Languages

1.4 The Science of Building a Compiler

1.5 Applications of Compiler Technology

1.6 Programming Language Basics

1.7 Summary of Chapter 1

1.8 References for Chapter 1

 

2 A Simple Syntax-Directed Translator  

2.1 Introduction

2.2 Syntax Definition  

2.3 Syntax-Directed Translation

2.4 Parsing

2.5 A Translator for Simple Expressions  

2.6 Lexical Analysis  

2.7 Symbol Tables

2.8 Intermediate Code Generation

2.9 Summary of Chapter 2

 

3 Lexical Analysis

3.1 The Role of the Lexical Analyzer

3.2 Input Buffering  

3.3 Specification of Tokens

3.4 Recognition of Tokens

3.5 The Lexical-Analyzer Generator Lex

3.6 Finite Automata

3.7 From Regular Expressions to Automata

3.8 Design of a Lexical-Analyzer Generator

3.9 Optimization of DFA-Based Pattern Matchers  

3.10 Summary of Chapter 3

3.11 References for Chapter 3

 

4 Syntax Analysis

4.1 Introduction

4.2 Context-Free Grammars

4.3 Writing a Grammar

4.4 Top-Down Parsing  

4.5 Bottom-Up Parsing

4.6 Introduction to LR Parsing: Simple LR

4.7 More Powerful LR Parsers

4.8 Using Ambiguous Grammars

4.9 Parser Generators

4.10 Summary of Chapter 4  

4.11 References for Chapter 4

 

5 Syntax-Directed Translation

5.1 Syntax-Directed Definitions

5.2 Evaluation Orders for SDD's  

5.3 Applications of Syntax-Directed Translation

5.4 Syntax-Directed Translation Schemes

5.5 Implementing L-Attributed SDD's  

5.6 Summary of Chapter 5

5.7 References for Chapter 5  

 

6 Intermediate-Code Generation

6.1 Variants of Syntax Trees  

6.2 Three-Address Code

6.3 Types and Declarations

6.4 Translation of Expressions  

6.5 Type Checking  

6.6 Control Flow

6.7 Backpatching

6.8 Switch-Statements

6.9 Intermediate Code for Procedures  

6.10 Summary of Chapter 6

6.11 References for Chapter 6

 

7 Run-Time Environments

7.1 Storage Organization  

7.2 Stack Allocation of Space

7.3 Access to Nonlocal Data on the Stack  

7.4 Heap Management  

7.5 Introduction to Garbage Collection  

7.6 Introduction to Trace-Based Collection

7.7 Short-Pause Garbage Collection

7.8 Advanced Topics in Garbage Collection

7.9 Summary of Chapter 7

7.10 References for Chapter 7

 

8 Code Generation

8.1 Issues in the Design of a Code Generator

8.2 The Target Language

8.3 Addresses in the Target Code  

8.4 Basic Blocks and Flow Graphs

8.5 Optimization of Basic Blocks

8.6 A Simple Code Generator

8.7 Peephole Optimization  

8.8 Register Allocation and Assignment  

8.9 Instruction Selection by Tree Rewriting  

8.10 Optimal Code Generation for Expressions

8.11 Dynamic Programming Code-Generation

8.12 Summary of Chapter 8  

8.13 References for Chapter 8  

 

9 Machine-Independent Optimizations

9.1 The Principal Sources of Optimization

9.2 Introduction to Data-Flow Analysis

9.3 Foundations of Data-Flow Analysis  

9.4 Constant Propagation  

9.5 Partial-Redundancy Elimination

9.6 Loops in Flow Graphs

9.7 Region-Based Analysis

9.8 Symbolic Analysis

9.9 Summary of Chapter 9

9.10 References for Chapter 9

 

10 Instruction-Level Parallelism

10.1 Processor Architectures

10.2 Code-Scheduling Constraints

10.3 Basic-Block Scheduling

10.4 Global Code Scheduling

10.5 Software Pipelining

10.6 Summary of Chapter 10  

10.7 References for Chapter 10

 

11 Optimizing for Parallelism and Locality  

11.1 Basic Concepts

11.2 Matrix Multiply: An In-Depth Example

11.3 Iteration Spaces

11.4 Affine Array Indexes  

11.5 Data Reuse

11.6 Array Data-Dependence Analysis

11.7 Finding Synchronization-Free Parallelism

11.8 Synchronization Between Parallel Loops

11.9 Pipelining

11.10 Locality Optimizations

11.11 Other Uses of Affine Transforms  

11.12 Summary of Chapter 11

11.13 References for Chapter 11

 

12 Interprocedural Analysis  

12.1 Basic Concepts  

12.2 Why Interprocedural Analysis?

12.3 A Logical Representation of Data Flow  

12.4 A Simple Pointer-Analysis Algorithm

12.5 Context-Insensitive Interprocedural Analysis

12.6 Context-Sensitive Pointer Analysis

12.7 Datalog Implementation by BDD's

12.8 Summary of Chapter 12

12.9 References for Chapter 12

 

A  A Complete Front End

A.1 The Source Language

A.2 Main

A.3 Lexical Analyzer

A.4 Symbol Tables and Types

A.5 Intermediate Code for Expressions  

A.6 Jumping Code for Boolean Expressions

A.7 Intermediate Code for Statements

A.8 Parser  

A.9 Creating the Front End

 

B Finding Linearly Independent Solutions

Index  

 

 

Publication date (per publisher): 16 February 2007
Series: Pearson International Edition
Language: English
Dimensions: 163 x 234 mm
Weight: 1360 g
Subject area: Mathematics / Computer Science > Computer Science > Programming Languages / Tools
Computer Science > Theory / Studies > Compiler Construction
ISBN-10: 0-321-49169-6 / 0321491696
ISBN-13: 978-0-321-49169-5 / 9780321491695
Condition: New