Book contents
- Frontmatter
- Contents
- Contributors
- Preface
- Part I Enabling technologies
- Part II Network architectures
- Part III Protocols and practice
- 11 Separating routing policy from mechanism in the network layer
- 12 Multi-path BGP: motivations and solutions
- 13 Explicit congestion control: charging, fairness, and admission management
- 14 KanseiGenie: software infrastructure for resource management and programmability of wireless sensor network fabrics
- Part IV Theory and models
- About the editors
- Index
- References
13 - Explicit congestion control: charging, fairness, and admission management
from Part III - Protocols and practice
Published online by Cambridge University Press: 05 October 2012
Summary
In the design of large-scale communication networks, a major practical concern is the extent to which control can be decentralized. A decentralized approach to flow control has been very successful as the Internet has evolved from a small-scale research network to today's interconnection of hundreds of millions of hosts; but it is beginning to show signs of strain. In developing new end-to-end protocols, the challenge is to understand just which aspects of decentralized flow control are important. One may start by asking: how should capacity be shared among users? How should flows through a network be organized so that the network responds sensibly to failures and overloads? And how can routing, flow control, and connection acceptance algorithms be designed to work well in uncertain and random environments?
One of the more fruitful theoretical approaches has been based on a framework that allows a congestion control algorithm to be interpreted as a distributed mechanism solving a global optimization problem; for some overviews see [1, 2, 3]. Primal algorithms, such as the Transmission Control Protocol (TCP), broadly correspond with congestion control mechanisms where noisy feedback from the network is averaged at endpoints, using increase and decrease rules of the form first developed by Jacobson. Dual algorithms broadly correspond with more explicit congestion control protocols where averaging at resources precedes the feedback of relatively precise information on congestion to endpoints.
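The contrast between the two algorithm families can be sketched in a few lines of code: a primal, Jacobson-style endpoint rule reacts to noisy binary feedback with additive increase and multiplicative decrease, while a dual rule has the resource itself maintain an explicit congestion price. The parameter values (alpha, beta, kappa) below are illustrative assumptions, not values from the chapter.

```python
def aimd_update(cwnd, congested, alpha=1.0, beta=0.5):
    """Primal (TCP-like) rule: additive increase, multiplicative decrease.

    The endpoint averages noisy binary congestion feedback over time by
    growing its window by `alpha` per round and cutting it by factor
    `beta` when congestion is signalled. Parameters are illustrative.
    """
    if congested:
        return max(1.0, cwnd * beta)  # multiplicative decrease on congestion
    return cwnd + alpha               # additive increase otherwise


def dual_price_update(price, load, capacity, kappa=0.1):
    """Dual rule: the resource adjusts an explicit congestion price.

    The price rises when offered load exceeds capacity and falls (toward
    zero) when the link is underused; this relatively precise signal is
    then fed back to endpoints. `kappa` is an illustrative gain.
    """
    return max(0.0, price + kappa * (load - capacity))


# Example: a window evolving under the primal rule.
cwnd = 10.0
for congested in [False, False, True, False]:
    cwnd = aimd_update(cwnd, congested)
print(cwnd)  # 10 -> 11 -> 12 -> 6.0 -> 7.0
```

In the optimization framework surveyed in [1, 2, 3], the dual price plays the role of a Lagrange multiplier on the link's capacity constraint, which is what allows these distributed updates to be read as steps toward a global optimum.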
- Next-Generation Internet Architectures and Protocols, pp. 257-274. Publisher: Cambridge University Press. Print publication year: 2011.