Book contents
- Frontmatter
- Contents
- Contributors
- Preface
- Part I Enabling technologies
- Part II Network architectures
- Part III Protocols and practice
- Part IV Theory and models
- 15 Theories for buffering and scheduling in Internet switches
- 16 Stochastic network utility maximization and wireless scheduling
- 17 Network coding in bi-directed and peer-to-peer networks
- 18 Network economics: neutrality, competition, and service differentiation
- About the editors
- Index
- References
15 - Theories for buffering and scheduling in Internet switches
from Part IV - Theory and models
Published online by Cambridge University Press: 05 October 2012
Summary
Introduction
In this chapter we argue that future high-speed switches should have buffers that are much smaller than those used today. We present recent work in queueing theory that will be needed for the design of such switches.
There are two main benefits of small buffers. First, small buffers mean very little queueing delay or jitter, which means better quality of service for interactive traffic. Second, small buffers make it possible to design new and faster types of switches. One example is a switch-on-a-chip, in which a single piece of silicon handles both switching and buffering, such as that proposed in [7]; this alleviates the communication bottleneck between the two functions. Another example is an all-optical packet switch, in which optical delay lines are used to emulate a buffer. Neither of these examples is practicable with large buffers.
Buffers cannot be made arbitrarily small. The reason we have buffers in the first place is to absorb fluctuations in traffic without dropping packets. There are two types of fluctuation to consider: fluctuations due to end-to-end congestion control mechanisms, most notably TCP; and fluctuations due to the inherent randomness of packet arrivals, such as chance alignments of packets at a switch's inputs.
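The second type of fluctuation can be illustrated with a minimal simulation (not taken from the chapter; all parameter values here are illustrative assumptions). Several input lines each carry light, independent traffic, but packets occasionally align by chance, so a queue builds up at the shared output; a larger buffer absorbs more of these chance alignments and so drops fewer packets.

```python
import random

def simulate_queue(buffer_size, n_inputs=8, p=0.11, num_slots=200_000, seed=1):
    """Slotted output queue fed by n_inputs independent lines.

    In each time slot, every input independently delivers a packet
    with probability p (so the offered load is n_inputs * p), and the
    output serves at most one packet per slot. Packets that arrive
    when the buffer is full are dropped. Returns the drop fraction.
    """
    rng = random.Random(seed)
    queue = 0
    drops = 0
    arrivals = 0
    for _ in range(num_slots):
        # Chance alignments: several inputs may deliver in the same slot.
        batch = sum(rng.random() < p for _ in range(n_inputs))
        arrivals += batch
        for _ in range(batch):
            if queue < buffer_size:
                queue += 1
            else:
                drops += 1
        if queue > 0:       # serve one packet per slot
            queue -= 1
    return drops / max(arrivals, 1)

for b in (1, 2, 4, 8, 16, 32):
    print(f"buffer={b:2d}  drop fraction={simulate_queue(b):.5f}")
```

At a load of 0.88 the drop fraction falls steeply as the buffer grows, which is the trade-off the chapter quantifies: how small the buffer can be made before random fluctuations cause unacceptable loss.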
In Section 15.2 we describe queueing theory that takes into account the interaction between a queue and TCP's end-to-end congestion control. TCP tries to take up all available capacity on a path, and in particular it tries to fill the bottleneck buffer.
- Type: Chapter
- Information: Next-Generation Internet Architectures and Protocols, pp. 303–323
- Publisher: Cambridge University Press
- Print publication year: 2011
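TCP's tendency to fill the bottleneck buffer can be seen in an idealized fluid model of a single long-lived flow (a sketch under assumed parameters, not a model from the chapter): each round trip the congestion window grows by one packet, packets beyond the bandwidth-delay product sit in the bottleneck buffer, and a buffer overflow triggers a loss that halves the window.

```python
def aimd_queue(bdp=100, buffer_size=50, rounds=400):
    """Idealized single-flow AIMD model.

    bdp is the bandwidth-delay product in packets: the window can reach
    bdp without queueing. Any excess (w - bdp) occupies the bottleneck
    buffer. When that excess would exceed buffer_size, a packet is lost
    and the window is halved (multiplicative decrease); otherwise the
    window grows by one packet per round trip (additive increase).
    Returns the queue-occupancy trace, one sample per round trip.
    """
    w = bdp                       # start with the pipe exactly full
    trace = []
    for _ in range(rounds):
        q = max(0, w - bdp)
        if q > buffer_size:       # buffer overflow -> loss -> halve window
            w = w // 2
            q = max(0, w - bdp)
        trace.append(q)
        w += 1                    # additive increase
    return trace

trace = aimd_queue()
print("peak queue:", max(trace))           # TCP drives the queue to the buffer limit
print("trough queue:", min(trace[200:]))   # then drains it after each loss
```

The sawtooth trace shows the queue repeatedly sweeping from empty up to the full buffer: TCP probes for capacity until the buffer overflows, backs off, and probes again. This is why the buffer-sizing question cannot ignore congestion control.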