Network Algorithmics - George Varghese

Network Algorithmics (eBook)

An Interdisciplinary Approach to Designing Fast Networked Devices
eBook Download: EPUB
2004 | 1st edition
496 pages
Elsevier Science (publisher)
978-0-08-047964-4 (ISBN)
€ 66.88 incl. VAT
(CHF 65.30)
eBook sales are handled by Lehmanns Media GmbH (Berlin) at the price in euros incl. VAT.
  • Download available immediately
In designing a network device, you make dozens of decisions that affect the speed with which it will perform, sometimes for better, but sometimes for worse. Network Algorithmics provides a complete, coherent methodology for maximizing speed while meeting your other design goals.

Author George Varghese begins by laying out the implementation bottlenecks that are most often encountered at four disparate levels of implementation: protocol, OS, hardware, and architecture. He then derives 15 solid principles, ranging from the commonly recognized to the groundbreaking, that are key to breaking these bottlenecks.

The rest of the book is devoted to a systematic application of these principles to bottlenecks found specifically in endnodes, interconnect devices, and specialty functions such as security and measurement that can be located anywhere along the network. This immensely practical, clearly presented information will benefit anyone involved with network implementation, as well as students who have made this work their goal.

FOR INSTRUCTORS: To obtain access to the solutions manual for this title, simply register on our textbook website (textbooks.elsevier.com) and request access to the Computer Science subject area. Once approved (usually within one business day), you will be able to access all of the instructor-only materials through the 'Instructor Manual' link on this book's academic web page at textbooks.elsevier.com.

· Addresses the bottlenecks found in all kinds of network devices (data copying, control transfer, demultiplexing, timers, and more) and offers ways to break them.
· Presents techniques suitable specifically for endnodes, including Web servers.
· Presents techniques suitable specifically for interconnect devices, including routers, bridges, and gateways.
· Written as a practical guide for implementers but full of valuable insights for students, teachers, and researchers.
· Includes end-of-chapter summaries and exercises.

Front cover 1
Title page 4
Copyright page 5
Table of contents 8
Preface 20
Audience 21
What This Book is About 21
Organization of the Book 22
Features 23
Usage 24
Why This Book Was Written 24
Acknowledgements 25
PART I The Rules of the Game 26
CHAPTER 1 Introducing Network Algorithmics 28
1.1 THE PROBLEM: NETWORK BOTTLENECKS 28
1.1.1 Endnode Bottlenecks 29
1.1.2 Router Bottlenecks 30
1.2 THE TECHNIQUES: NETWORK ALGORITHMICS 32
1.2.1 Warm-up Example: Scenting an Evil Packet 33
1.2.2 Strawman Solution 34
1.2.3 Thinking Algorithmically 34
1.2.4 Refining the Algorithm: Exploiting Hardware 35
1.2.5 Cleaning Up 36
1.2.6 Characteristics of Network Algorithmics 38
1.3 EXERCISE 40
CHAPTER 2 Network Implementation Models 41
2.1 PROTOCOLS 42
2.1.1 Transport and Routing Protocols 42
2.1.2 Abstract Protocol Model 42
2.1.3 Performance Environment and Measures 44
2.2 HARDWARE 46
2.2.1 Combinatorial Logic 46
2.2.2 Timing and Power 47
2.2.3 Raising the Abstraction Level of Hardware Design 48
2.2.4 Memories 50
2.2.5 Memory Subsystem Design Techniques 54
2.2.6 Component-Level Design 55
2.2.7 Final Hardware Lessons 56
2.3 NETWORK DEVICE ARCHITECTURES 57
2.3.1 Endnode Architecture 57
2.3.2 Router Architecture 59
2.4 OPERATING SYSTEMS 64
2.4.1 Uninterrupted Computation via Processes 64
2.4.2 Infinite Memory via Virtual Memory 66
2.4.3 Simple I/O via System Calls 68
2.5 SUMMARY 69
2.6 EXERCISES 69
CHAPTER 3 Fifteen Implementation Principles 75
3.1 MOTIVATING THE USE OF PRINCIPLES — UPDATING TERNARY CONTENT-ADDRESSABLE MEMORIES 75
3.2 ALGORITHMS VERSUS ALGORITHMICS 79
3.3 FIFTEEN IMPLEMENTATION PRINCIPLES — CATEGORIZATION AND DESCRIPTION 81
3.3.1 Systems Principles 81
3.3.2 Principles for Modularity with Efficiency 86
3.3.3 Principles for Speeding Up Routines 88
3.4 DESIGN VERSUS IMPLEMENTATION PRINCIPLES 90
3.5 CAVEATS 91
3.5.1 Eight Cautionary Questions 93
3.6 SUMMARY 95
3.7 EXERCISES 95
CHAPTER 4 Principles in Action 98
4.1 BUFFER VALIDATION OF APPLICATION DEVICE CHANNELS 99
4.2 SCHEDULER FOR ASYNCHRONOUS TRANSFER MODE FLOW CONTROL 101
4.3 ROUTE COMPUTATION USING DIJKSTRA’S ALGORITHM 102
4.4 ETHERNET MONITOR USING BRIDGE HARDWARE 105
4.5 DEMULTIPLEXING IN THE X-KERNEL 106
4.6 TRIES WITH NODE COMPRESSION 108
4.7 PACKET FILTERING IN ROUTERS 110
4.8 AVOIDING FRAGMENTATION OF LINK STATE PACKETS 112
4.9 POLICING TRAFFIC PATTERNS 115
4.10 IDENTIFYING A RESOURCE HOG 117
4.11 GETTING RID OF THE TCP OPEN CONNECTION LIST 118
4.12 ACKNOWLEDGMENT WITHHOLDING 121
4.13 INCREMENTALLY READING A LARGE DATABASE 123
4.14 BINARY SEARCH OF LONG IDENTIFIERS 125
4.15 VIDEO CONFERENCING VIA ASYNCHRONOUS TRANSFER MODE 127
PART II Playing with Endnodes 130
CHAPTER 5 Copying Data 132
5.1 WHY DATA COPIES 134
5.2 REDUCING COPYING VIA LOCAL RESTRUCTURING 136
5.2.1 Exploiting Adaptor Memory 136
5.2.2 Using Copy-on-Write 138
5.2.3 Fbufs: Optimizing Page Remapping 140
5.2.4 Transparently Emulating Copy Semantics 144
5.3 AVOIDING COPYING USING REMOTE DMA 146
5.3.1 Avoiding Copying in a Cluster 147
5.3.2 Modern-Day Incarnations of RDMA 148
5.4 BROADENING TO FILE SYSTEMS 150
5.4.1 Shared Memory 150
5.4.2 IO-Lite: A Unified View of Buffering 151
5.4.3 Avoiding File System Copies via I/O Splicing 153
5.5 BROADENING BEYOND COPIES 154
5.6 BROADENING BEYOND DATA MANIPULATIONS 156
5.6.1 Using Caches Effectively 156
5.6.2 Direct Memory Access versus Programmed I/O 160
5.7 CONCLUSIONS 160
5.8 EXERCISES 162
CHAPTER 6 Transferring Control 164
6.1 WHY CONTROL OVERHEAD? 166
6.2 AVOIDING SCHEDULING OVERHEAD IN NETWORKING CODE 168
6.2.1 Making User-Level Protocol Implementations Real 169
6.3 AVOIDING CONTEXT-SWITCHING OVERHEAD IN APPLICATIONS 171
6.3.1 Process per Client 172
6.3.2 Thread per Client 173
6.3.3 Event-Driven Scheduler 175
6.3.4 Event-Driven Server with Helper Processes 175
6.3.5 Task-Based Structuring 176
6.4 FAST SELECT 178
6.4.1 A Server Mystery 178
6.4.2 Existing Use and Implementation of select() 179
6.4.3 Analysis of Select() 180
6.4.4 Speeding Up Select() without Changing the API 182
6.4.5 Speeding Up Select() by Changing the API 183
6.5 AVOIDING SYSTEM CALLS 184
6.5.1 The Virtual Interface Architecture (VIA) Proposal 187
6.6 REDUCING INTERRUPTS 188
6.6.1 Avoiding Receiver Livelock 189
6.7 CONCLUSIONS 190
6.8 EXERCISES 191
CHAPTER 7 Maintaining Timers 194
7.1 WHY TIMERS? 194
7.2 MODEL AND PERFORMANCE MEASURES 196
7.3 SIMPLEST TIMER SCHEMES 197
7.4 TIMING WHEELS 198
7.5 HASHED WHEELS 200
7.6 HIERARCHICAL WHEELS 201
7.7 BSD IMPLEMENTATION 203
7.8 OBTAINING FINE-GRANULARITY TIMERS 204
7.9 CONCLUSIONS 205
7.10 EXERCISES 206
CHAPTER 8 Demultiplexing 207
8.1 OPPORTUNITIES AND CHALLENGES OF EARLY DEMULTIPLEXING 209
8.2 GOALS 209
8.3 CMU/STANFORD PACKET FILTER: PIONEERING PACKET FILTERS 210
8.4 BERKELEY PACKET FILTER: ENABLING HIGH-PERFORMANCE MONITORING 211
8.5 PATHFINDER: FACTORING OUT COMMON CHECKS 214
8.6 DYNAMIC PACKET FILTER: COMPILERS TO THE RESCUE 217
8.7 CONCLUSIONS 220
8.8 EXERCISES 220
CHAPTER 9 Protocol Processing 222
9.1 BUFFER MANAGEMENT 223
9.1.1 Buffer Allocation 224
9.1.2 Sharing Buffers 226
9.2 CYCLIC REDUNDANCY CHECKS AND CHECKSUMS 228
9.2.1 Cyclic Redundancy Checks 229
9.2.2 Internet Checksums 232
9.2.3 Finessing Checksums 234
9.3 GENERIC PROTOCOL PROCESSING 234
9.3.1 UDP Processing 237
9.4 REASSEMBLY 238
9.4.1 Efficient Reassembly 239
9.5 CONCLUSIONS 241
9.6 EXERCISES 242
PART III Playing with Routers 244
CHAPTER 10 Exact-Match Lookups 246
10.1 CHALLENGE 1: ETHERNET UNDER FIRE 247
10.2 CHALLENGE 2: WIRE SPEED FORWARDING 249
10.3 CHALLENGE 3: SCALING LOOKUPS TO HIGHER SPEEDS 253
10.3.1 Scaling via Hashing 253
10.3.2 Using Hardware Parallelism 255
10.4 SUMMARY 256
10.5 EXERCISE 257
CHAPTER 11 Prefix-Match Lookups 258
11.1 INTRODUCTION TO PREFIX LOOKUPS 259
11.1.1 Prefix Notation 259
11.1.2 Why Variable-Length Prefixes? 260
11.1.3 Lookup Model 261
11.2 FINESSING LOOKUPS 263
11.2.1 Threaded Indices and Tag Switching 263
11.2.2 Flow Switching 265
11.2.3 Status of Tag Switching, Flow Switching, and Multiprotocol Label Switching 266
11.3 NONALGORITHMIC TECHNIQUES FOR PREFIX MATCHING 267
11.3.1 Caching 267
11.3.2 Ternary Content-Addressable Memories 267
11.4 UNIBIT TRIES 268
11.5 MULTIBIT TRIES 270
11.5.1 Fixed-Stride Tries 271
11.5.2 Variable-Stride Tries 272
11.5.3 Incremental Update 275
11.6 LEVEL-COMPRESSED (LC) TRIES 275
11.7 LULEA-COMPRESSED TRIES 277
11.8 TREE BITMAP 280
11.8.1 Tree Bitmap Ideas 280
11.8.2 Tree Bitmap Search Algorithm 281
11.9 BINARY SEARCH ON RANGES 282
11.10 BINARY SEARCH ON PREFIX LENGTHS 284
11.11 MEMORY ALLOCATION IN COMPRESSED SCHEMES 286
11.11.1 Frame-Based Compaction 287
11.12 LOOKUP-CHIP MODEL 288
11.13 CONCLUSIONS 290
11.14 EXERCISES 291
CHAPTER 12 Packet Classification 295
12.1 WHY PACKET CLASSIFICATION? 296
12.2 PACKET-CLASSIFICATION PROBLEM 298
12.3 REQUIREMENTS AND METRICS 300
12.4 SIMPLE SOLUTIONS 301
12.4.1 Linear Search 301
12.4.2 Caching 301
12.4.3 Demultiplexing Algorithms 302
12.4.4 Passing Labels 302
12.4.5 Content-Addressable Memories 303
12.5 TWO-DIMENSIONAL SCHEMES 303
12.5.1 Fast Searching Using Set-Pruning Tries 303
12.5.2 Reducing Memory Using Backtracking 306
12.5.3 The Best of Both Worlds: Grid of Tries 306
12.6 APPROACHES TO GENERAL RULE SETS 309
12.6.1 Geometric View of Classification 309
12.6.2 Beyond Two Dimensions: The Bad News 311
12.6.3 Beyond Two Dimensions: The Good News 311
12.7 EXTENDING TWO-DIMENSIONAL SCHEMES 312
12.8 USING DIVIDE-AND-CONQUER 313
12.9 BIT VECTOR LINEAR SEARCH 314
12.10 CROSS-PRODUCTING 317
12.11 EQUIVALENCED CROSS-PRODUCTING 318
12.12 DECISION TREE APPROACHES 321
12.13 CONCLUSIONS 324
12.14 EXERCISES 325
CHAPTER 13 Switching 327
13.1 ROUTER VERSUS TELEPHONE SWITCHES 329
13.2 SHARED-MEMORY SWITCHES 330
13.3 ROUTER HISTORY: FROM BUSES TO CROSSBARS 330
13.4 THE TAKE-A-TICKET CROSSBAR SCHEDULER 332
13.5 HEAD-OF-LINE BLOCKING 336
13.6 AVOIDING HEAD-OF-LINE BLOCKING VIA OUTPUT QUEUING 337
13.7 AVOIDING HEAD-OF-LINE BLOCKING BY USING PARALLEL ITERATIVE MATCHING 339
13.8 AVOIDING RANDOMIZATION WITH iSLIP 341
13.8.1 Extending iSLIP to Multicast and Priority 345
13.8.2 iSLIP Implementation Notes 347
13.9 SCALING TO LARGER SWITCHES 348
13.9.1 Measuring Switch Cost 349
13.9.2 Clos Networks for Medium-Size Routers 349
13.9.3 Benes Networks for Larger Routers 353
13.10 SCALING TO FASTER SWITCHES 358
13.10.1 Using Bit Slicing for Higher-Speed Fabrics 358
13.10.2 Using Short Links for Higher-Speed Fabrics 359
13.10.3 Memory Scaling Using Randomization 360
13.11 CONCLUSIONS 361
13.12 EXERCISES 362
CHAPTER 14 Scheduling Packets 364
14.1 MOTIVATION FOR QUALITY OF SERVICE 365
14.2 RANDOM EARLY DETECTION 367
14.3 TOKEN BUCKET POLICING 370
14.4 MULTIPLE OUTBOUND QUEUES AND PRIORITY 371
14.5 A QUICK DETOUR INTO RESERVATION PROTOCOLS 372
14.6 PROVIDING BANDWIDTH GUARANTEES 373
14.6.1 The Parochial Parcel Service 373
14.6.2 Deficit Round-Robin 375
14.6.3 Implementation and Extensions of Deficit Round-Robin 376
14.7 SCHEDULERS THAT PROVIDE DELAY GUARANTEES 379
14.8 SCALABLE FAIR QUEUING 383
14.8.1 Random Aggregation 384
14.8.2 Edge Aggregation 384
14.8.3 Edge Aggregation with Policing 385
14.9 SUMMARY 386
14.10 EXERCISES 386
CHAPTER 15 Routers as Distributed Systems 387
15.1 INTERNAL FLOW CONTROL 388
15.1.1 Improving Performance 389
15.1.2 Rescuing Reliability 390
15.2 INTERNAL STRIPING 393
15.2.1 Improving Performance 393
15.2.2 Rescuing Reliability 394
15.3 ASYNCHRONOUS UPDATES 396
15.3.1 Improving Performance 397
15.3.2 Rescuing Reliability 398
15.4 CONCLUSIONS 398
15.5 EXERCISES 399
PART IV Endgame 402
CHAPTER 16 Measuring Network Traffic 404
16.1 WHY MEASUREMENT IS HARD 406
16.1.1 Why Counting Is Hard 406
16.2 REDUCING SRAM WIDTH USING DRAM BACKING STORE 407
16.3 REDUCING COUNTER WIDTH USING RANDOMIZED COUNTING 409
16.4 REDUCING COUNTERS USING THRESHOLD AGGREGATION 410
16.5 REDUCING COUNTERS USING FLOW COUNTING 412
16.6 REDUCING PROCESSING USING SAMPLED NETFLOW 413
16.7 REDUCING REPORTING USING SAMPLED CHARGING 414
16.8 CORRELATING MEASUREMENTS USING TRAJECTORY SAMPLING 415
16.9 A CONCERTED APPROACH TO ACCOUNTING 417
16.10 COMPUTING TRAFFIC MATRICES 418
16.10.1 Approach 1: Internet Tomography 419
16.10.2 Approach 2: Per-Prefix Counters 419
16.10.3 Approach 3: Class Counters 420
16.11 STING AS AN EXAMPLE OF PASSIVE MEASUREMENT 420
16.12 CONCLUSION 421
16.13 EXERCISES 422
CHAPTER 17 Network Security 424
17.1 SEARCHING FOR MULTIPLE STRINGS IN PACKET PAYLOADS 426
17.1.1 Integrated String Matching Using Aho–Corasick 427
17.1.2 Integrated String Matching Using Boyer–Moore 428
17.2 APPROXIMATE STRING MATCHING 430
17.3 IP TRACEBACK VIA PROBABILISTIC MARKING 431
17.4 IP TRACEBACK VIA LOGGING 434
17.4.1 Bloom Filters 435
17.4.2 Bloom Filter Implementation of Packet Logging 437
17.5 DETECTING WORMS 438
17.6 CONCLUSION 440
17.7 EXERCISES 440
CHAPTER 18 Conclusions 442
18.1 WHAT THIS BOOK HAS BEEN ABOUT 443
18.1.1 Endnode Algorithmics 443
18.1.2 Router Algorithmics 444
18.1.3 Toward a Synthesis 445
18.2 WHAT NETWORK ALGORITHMICS IS ABOUT 448
18.2.1 Interdisciplinary Thinking 448
18.2.2 Systems Thinking 449
18.2.3 Algorithmic Thinking 450
18.3 NETWORK ALGORITHMICS AND REAL PRODUCTS 452
18.4 NETWORK ALGORITHMICS: BACK TO THE FUTURE 454
18.4.1 New Abstractions 454
18.4.2 New Connecting Disciplines 455
18.4.3 New Requirements 456
18.5 THE INNER LIFE OF A NETWORKING DEVICE 456
APPENDIX Detailed Models 458
A.1 TCP AND IP 458
A.1.1 Transport Protocols 458
A.1.2 Routing Protocols 461
A.2 HARDWARE MODELS 462
A.2.1 From Transistors to Logic Gates 462
A.2.2 Timing Delays 464
A.2.3 Hardware Design Building Blocks 464
A.2.4 Memories: The Inside Scoop 465
A.2.5 Chip Design 466
A.3 SWITCHING THEORY 467
A.3.1 Matching Algorithms for Clos Networks with k = n 467
A.4 THE INTERCONNECTION NETWORK ZOO 468
BIBLIOGRAPHY 470
INDEX 482
15 Principles Used to Overcome Network Bottlenecks 491

Chapter 1

Introducing Network Algorithmics


What really makes it an invention is that someone decides not to change the solution to a known problem, but to change the question.

—DEAN KAMEN

Just as the objective of chess is to checkmate the opponent and the objective of tennis is to win matches, the objective of the network algorithmics game is to battle networking implementation bottlenecks.

Beyond specific techniques, this book distills a fundamental way of crafting solutions to Internet bottlenecks that we call network algorithmics. It provides the reader with tools to design different implementations for specific contexts and to deal with new bottlenecks that will undoubtedly arise in the changing world of networks.

So what is network algorithmics? Network algorithmics goes beyond the design of efficient algorithms for networking tasks, though this has an important place. In particular, network algorithmics recognizes the primary importance of taking an interdisciplinary systems approach to streamlining network implementations.

Network algorithmics is an interdisciplinary approach because it encompasses such fields as architecture and operating systems (for speeding up servers), hardware design (for speeding up network devices such as routers), and algorithm design (for designing scalable algorithms). Network algorithmics is also a systems approach, because it is described in this book using a set of 15 principles that exploit the fact that routers and servers are systems, in which efficiencies can be gained by moving functions in time and space between subsystems.

The problems addressed by network algorithmics are fundamental networking performance bottlenecks. The solutions advocated by network algorithmics are a set of fundamental techniques to address these bottlenecks. Next, we provide a quick preview of both the bottlenecks and the methods.

1.1 THE PROBLEM: NETWORK BOTTLENECKS


The main problem considered in this book is how to make networks easy to use while at the same time realizing the performance of the raw hardware. Ease of use comes from the use of powerful network abstractions, such as socket interfaces and prefix-based forwarding. Unfortunately, without care such abstractions exact a large performance penalty when compared to the capacity of raw transmission links such as optical fiber. To study this performance gap in more detail we examine two fundamental categories of networking devices, endnodes and routers.

1.1.1 Endnode Bottlenecks


Endnodes are the endpoints of the network. They include personal computers and workstations as well as large servers that provide services. Endnodes are specialized toward computation, as opposed to networking, and are typically designed to support general-purpose computation. Thus endnode bottlenecks are typically the result of two forces: structure and scale.

 Structure: To be able to run arbitrary code, personal computers and large servers typically have an operating system that mediates between applications and the hardware. To ease software development, most large operating systems are carefully structured as layered software; to protect the operating system from other applications, operating systems implement a set of protection mechanisms; finally, core operating systems routines, such as schedulers and allocators, are written using general mechanisms that target as wide a class of applications as possible. Unfortunately, the combination of layered software, protection mechanisms, and excessive generality can slow down networking software greatly, even with the fastest processors.

 Scale: The emergence of large servers providing Web and other services causes further performance problems. In particular, a large server such as a Web server will typically have thousands of concurrent clients. Many operating systems use inefficient data structures and algorithms that were designed for an era when the number of connections was small.

Figure 1.1 previews the main endnode bottlenecks covered in this book, together with causes and solutions. The first bottleneck occurs because conventional operating system structures cause packet data copying across protection domains; the situation is further complicated in Web servers by similar copying with respect to the file system and by other manipulations, such as checksums, that examine all the packet data. Chapter 5 describes a number of techniques to reduce these overheads while preserving the goals of system abstractions, such as protection and structure. The second major overhead is the control overhead caused by switching between threads of control (or protection domains) while processing a packet; this is addressed in Chapter 6.

Figure 1.1 Preview of endnode bottlenecks, solutions to which are described in Part II of the book.
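To make the copying overhead concrete, here is a minimal C sketch in the spirit of the I/O splicing idea listed in Section 5.4.3: a file-serving loop that copies every block through a user-space buffer, next to one that uses Linux's sendfile() to keep the data inside the kernel. The function names, buffer size, and already-open descriptors are illustrative assumptions, not code from the book.

/* Sketch: sending a file over a connected socket.
 * serve_with_copies() moves every byte kernel -> user -> kernel;
 * serve_with_splicing() asks the kernel to move the data directly
 * (Linux sendfile()), illustrating the copy-avoidance theme of
 * Chapter 5. The descriptors are assumed to be open already. */
#include <sys/types.h>
#include <sys/sendfile.h>
#include <unistd.h>

/* Two copies per block: read() into buf, then write() out of buf. */
ssize_t serve_with_copies(int sock_fd, int file_fd)
{
    char buf[8192];
    ssize_t n, total = 0;

    while ((n = read(file_fd, buf, sizeof buf)) > 0) {
        if (write(sock_fd, buf, (size_t)n) != n)
            return -1;
        total += n;
    }
    return n < 0 ? -1 : total;
}

/* No user-level copies: the kernel splices file data to the socket. */
ssize_t serve_with_splicing(int sock_fd, int file_fd, size_t len)
{
    off_t off = 0;
    ssize_t sent, total = 0;

    while ((size_t)total < len) {
        sent = sendfile(sock_fd, file_fd, &off, len - (size_t)total);
        if (sent <= 0)
            return sent < 0 ? -1 : total;
        total += sent;
    }
    return total;
}

Chapter 5 examines when this kind of restructuring is possible and what it costs with respect to the protection and modularity goals mentioned above.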

Networking applications use timers to deal with failure. With a large number of connections the timer overhead at a server can become large; this overhead is addressed in Chapter 7. Similarly, network messages must be demultiplexed (i.e., steered) on receipt to the right end application; techniques to address this bottleneck are addressed in Chapter 8. Finally, there are several other common protocol processing tasks, such as buffer allocation and checksums, which are addressed in Chapter 9.
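As a taste of the timer techniques in Chapter 7, the sketch below is a single-level timing wheel (Section 7.4): starting a timer and advancing the clock each cost a constant amount of work plus the expiries that are actually due. The slot count, callback signature, and structure names are illustrative choices, not the book's data structures.

/* Minimal single-level timing wheel (cf. Section 7.4).
 * A timer due in 'delay' ticks (delay < WHEEL_SLOTS) is linked into
 * slot (current + delay) mod WHEEL_SLOTS; each clock tick advances
 * 'current' by one slot and fires everything found there. */
#include <stddef.h>

#define WHEEL_SLOTS 256

struct wtimer {
    void (*expire)(void *arg);    /* callback run on expiry */
    void *arg;
    struct wtimer *next;          /* singly linked per-slot list */
};

struct wheel {
    struct wtimer *slot[WHEEL_SLOTS];
    unsigned current;             /* slot index of the current tick */
};

/* Schedule t to fire 'delay' ticks from now (0 < delay < WHEEL_SLOTS). */
void wheel_start(struct wheel *w, struct wtimer *t, unsigned delay)
{
    unsigned s = (w->current + delay) % WHEEL_SLOTS;

    t->next = w->slot[s];
    w->slot[s] = t;
}

/* Called once per clock tick: fire every timer in the new current slot. */
void wheel_tick(struct wheel *w)
{
    struct wtimer *t;

    w->current = (w->current + 1) % WHEEL_SLOTS;
    t = w->slot[w->current];
    w->slot[w->current] = NULL;

    while (t != NULL) {
        struct wtimer *next = t->next;
        t->expire(t->arg);
        t = next;
    }
}

Hashed and hierarchical wheels (Sections 7.5 and 7.6) extend the same idea to timer durations that exceed the span of a single wheel.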

1.1.2 Router Bottlenecks


Though we concentrate on Internet routers, almost all the techniques described in this book apply equally well to any other network devices, such as bridges, switches, gateways, monitors, and security appliances, and to protocols other than IP, such as FiberChannel.

Thus throughout the rest of the book, it is often useful to think of a router as a “generic network interconnection device.” Unlike endnodes, these are special-purpose devices devoted to networking. Thus there is very little structural overhead within a router, with only the use of a very lightweight operating system and a clearly separated forwarding path that often is completely implemented in hardware. Instead of structure, the fundamental problems faced by routers are caused by scale and services.

 Scale: Network devices face two areas of scaling: bandwidth scaling and population scaling. Bandwidth scaling occurs because optical links keep getting faster, as the progress from 1-Gbps to 40-Gbps links shows, and because Internet traffic keeps growing due to a diverse set of new applications. Population scaling occurs because more endpoints get added to the Internet as more enterprises go online.

 Services: The need for speed and scale drove much of the networking industry in the 1980s and 1990s as more businesses went online (e.g., Amazon.com) and whole new online services were created (e.g., eBay). But the very success of the Internet requires careful attention in the next decade to make it more effective by providing guarantees in terms of performance, security, and reliability. After all, if manufacturers (e.g., Dell) sell more online than by other channels, it is important to provide network guarantees — delay in times of congestion, protection during attacks, and availability when failures occur. Finding ways to implement these new services at high speeds will be a major challenge for router vendors in the next decade.

Figure 1.2 previews the main router (bridge/gateway) bottlenecks covered in this book, together with causes and solutions.

Figure 1.2 Preview of router bottlenecks, solutions to which are described in Parts III and IV of the book.

First, all networking devices forward packets to their destination by looking up a forwarding table. The simplest forwarding table lookup does an exact match with a destination address, as exemplified by bridges. Chapter 10 describes fast and scalable exact-match lookup schemes. Unfortunately, population scaling has made lookups far more complex for routers.
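For a concrete baseline of exact matching, the sketch below maps a 48-bit destination MAC address to an output port with a chained hash table, roughly in the spirit of the hashing discussion in Section 10.3.1. The table size, hash function, and entry layout are illustrative assumptions, not the schemes analyzed in Chapter 10.

/* Sketch: exact-match bridge lookup via a chained hash table
 * (cf. Section 10.3.1). The destination MAC hashes to a bucket;
 * entries in that bucket are compared byte for byte. */
#include <stdint.h>
#include <string.h>

#define FWD_BUCKETS 4096            /* power of two for cheap masking */

struct fwd_entry {
    uint8_t mac[6];                 /* 48-bit MAC address */
    int port;                       /* output port */
    struct fwd_entry *next;         /* chaining for collisions */
};

static struct fwd_entry *fwd_table[FWD_BUCKETS];

static unsigned hash_mac(const uint8_t mac[6])
{
    unsigned h = 0;
    int i;

    for (i = 0; i < 6; i++)         /* any reasonable mixing works */
        h = h * 31 + mac[i];
    return h & (FWD_BUCKETS - 1);
}

/* Returns the output port, or -1 if the address is unknown
 * (a bridge would then flood the frame). */
int fwd_lookup(const uint8_t dst[6])
{
    struct fwd_entry *e;

    for (e = fwd_table[hash_mac(dst)]; e != NULL; e = e->next)
        if (memcmp(e->mac, dst, 6) == 0)
            return e->port;
    return -1;
}

The difficulty Chapter 10 dwells on is doing this at wire speed with bounded worst-case memory accesses, which is where careful hashing and hardware parallelism (Section 10.3.2) come in.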

To deal with large Internet populations, routers keep a single entry called a prefix (analogous to a telephone area code) for a large group of stations. Thus routers must do a more complex longest-prefix-match lookup. Chapter 11 describes solutions to this problem that scale to increasing speeds and table sizes.
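To pin down what longest-prefix match means, here is a minimal sketch of a lookup in a unibit trie, the baseline structure of Section 11.4: the search walks the destination address one bit at a time and remembers the last stored next hop, so the deepest matching prefix wins. The node layout is an illustrative simplification of the structures that Chapter 11 refines.

/* Sketch: longest-prefix match on a unibit trie (cf. Section 11.4).
 * Each node branches on one bit of the 32-bit IPv4 destination; a
 * node where a stored prefix ends carries its next hop. The lookup
 * keeps the most recent next hop seen on the path, so the longest
 * matching prefix is the one finally returned. */
#include <stdint.h>
#include <stddef.h>

struct trie_node {
    struct trie_node *child[2];   /* child[0]: next bit 0, child[1]: bit 1 */
    int next_hop;                 /* -1 if no prefix ends at this node */
};

/* Returns the next hop of the longest matching prefix, or -1 if nothing
 * (not even a default route) matches the destination address. */
int lpm_lookup(const struct trie_node *root, uint32_t dst_addr)
{
    const struct trie_node *n = root;
    int best = -1;
    int bit;

    for (bit = 31; bit >= 0 && n != NULL; bit--) {
        if (n->next_hop != -1)
            best = n->next_hop;              /* longest match so far */
        n = n->child[(dst_addr >> bit) & 1]; /* follow the next address bit */
    }
    if (n != NULL && n->next_hop != -1)      /* a full /32 entry matched */
        best = n->next_hop;
    return best;
}

This costs up to 32 memory accesses per lookup; multibit tries (Section 11.5) and the compressed variants later in Chapter 11 reduce that by consuming several bits per memory access.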

Many routers today offer what is sometimes called service differentiation, where different packets can be treated differently in order to provide service and security guarantees. Unfortunately, this requires an even more complex form of lookup called packet classification, in which the lookup is based on the destination, source, and even the services that a packet is providing. This challenging issue is tackled in Chapter 12.
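As a baseline for packet classification, the sketch below does the simplest thing Chapter 12 starts from, a linear search over a priority-ordered rule list (Section 12.4.1): the first rule whose source and destination prefixes, port range, and protocol all match determines the action. The five-field rule layout and the field names are illustrative assumptions.

/* Sketch: packet classification by linear search (cf. Section 12.4.1).
 * Rules are stored in priority order; prefixes are stored pre-masked
 * (rule.src == address & rule.src_mask). */
#include <stdint.h>
#include <stddef.h>

struct class_rule {
    uint32_t src, src_mask;       /* source prefix */
    uint32_t dst, dst_mask;       /* destination prefix */
    uint16_t dport_lo, dport_hi;  /* destination port range */
    uint8_t  proto;               /* 0 means "any protocol" */
    int action;                   /* e.g., permit, deny, queue id */
};

struct pkt_fields {
    uint32_t src, dst;
    uint16_t dport;
    uint8_t  proto;
};

/* Returns the action of the first (highest-priority) matching rule,
 * or -1 if no rule matches. */
int classify(const struct class_rule *rules, size_t nrules,
             const struct pkt_fields *p)
{
    size_t i;

    for (i = 0; i < nrules; i++) {
        const struct class_rule *r = &rules[i];

        if ((p->src & r->src_mask) == r->src &&
            (p->dst & r->dst_mask) == r->dst &&
            p->dport >= r->dport_lo && p->dport <= r->dport_hi &&
            (r->proto == 0 || r->proto == p->proto))
            return r->action;
    }
    return -1;
}

Linear search is fine for small rule sets but scales poorly; much of Chapter 12 is devoted to beating it, for example with the grid of tries (Section 12.5.3) or decision-tree approaches (Section 12.12).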

Next, all networking devices can be abstractly considered as switches that shunt packets coming in from a set of input links to a set of output links. Thus a fundamental issue is that of building a high-speed switch. This is hard, especially in the face of the growing gap between optical and electronic speeds. The standard solution is to use parallelism via a...

