Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Acknowledgments
- Notation
- 1 Introduction
- Part I Preliminaries
- Part II Single-Hop Networks
- Part III Multihop Networks
- Part IV Extensions
- 21 Communication for Computing
- 22 Information Theoretic Secrecy
- 23 Wireless Fading Channels
- 24 Networking and Information Theory
- Appendices
- Bibliography
- Common Symbols
- Author Index
- Subject Index
21 - Communication for Computing
from Part IV - Extensions
Published online by Cambridge University Press: 05 June 2012
Summary
In the first three parts of the book we investigated the limits on information flow in networks whose task is to communicate (or store) distributed information. In many real-world distributed systems, such as multiprocessors, peer-to-peer networks, networked mobile agents, and sensor networks, the task of the network is instead to compute a function, make a decision, or coordinate an action based on distributed information. Can the communication rate needed to perform such a task at some node be reduced relative to communicating all the sources to this node?
This question has been formulated and studied in computer science under communication complexity and gossip algorithms, in control and optimization under distributed consensus, and in information theory under coding for computing and the μ-sum problem, among other topics. In this chapter, we study information theoretic models for distributed computing over networks. In some cases, we find that the total communication rate can be significantly reduced when the task of the network is to compute a function of the sources rather than to communicate the sources themselves, while in other cases no such reduction is possible.
We first show that the Wyner–Ziv theorem in Chapter 11 extends naturally to the case in which the decoder wishes to compute a function of the source and the side information. We provide a refined characterization of the lossless special case of this result in terms of conditional graph entropy. We then discuss distributed coding for computing. Although the rate–distortion region for this case is not known in general (even when the goal is to reconstruct the sources themselves), we show through examples that the total communication rate needed for computing can be significantly lower than that needed for communicating the sources themselves. The first example is the μ-sum problem, in which the decoder wishes to reconstruct a weighted sum of two separately encoded Gaussian sources with a prescribed quadratic distortion. We establish the rate–distortion region for this setting by reducing the problem to the CEO problem discussed in Chapter 12. The second example is lossless computing of the modulo-2 sum of a doubly symmetric binary source (DSBS). Surprisingly, using the same linear code at both encoders can outperform Slepian–Wolf coding, as the sketch below illustrates.
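To see why the second example is surprising: for a DSBS(p), Slepian–Wolf coding of both sources requires sum rate H(X, Y) = 1 + H(p) bits per symbol, while the Körner–Marton scheme, in which both encoders apply the same linear map and the decoder recovers the modulo-2 sum from the modulo-2 sum of the two encoder outputs, needs only 2H(p), which is strictly smaller whenever p ≠ 1/2. The following Python sketch (ours, not from the book) illustrates the mechanics, with the (7,4) Hamming code as a toy stand-in for the long linear code used in the actual achievability proof; each encoder sends 3 bits per 7 source bits, and decoding succeeds whenever the block of Z = X ⊕ Y contains at most one 1.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: a toy stand-in for the
# long random linear code used in the actual Koerner-Marton argument.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def syndrome(v):
    """The common linear map v -> Hv over GF(2); all an encoder sends."""
    return (H @ v) % 2

# Coset-leader table: each of the 8 syndromes maps to its minimum-weight
# error pattern (weight 0 or 1 for the Hamming code).
leaders = {(0, 0, 0): np.zeros(7, dtype=np.uint8)}
for i in range(7):
    e = np.zeros(7, dtype=np.uint8)
    e[i] = 1
    leaders[tuple(int(b) for b in syndrome(e))] = e

rng = np.random.default_rng(0)
p = 0.02                                      # DSBS crossover probability
n_blocks, failures = 10_000, 0
for _ in range(n_blocks):
    x = rng.integers(0, 2, 7, dtype=np.uint8)  # X ~ Bern(1/2)
    z = (rng.random(7) < p).astype(np.uint8)   # Z ~ Bern(p), independent
    y = x ^ z                                  # Y = X xor Z: (X, Y) is a DSBS(p)
    # Both encoders apply the SAME linear code: 3 bits each per 7 source
    # bits (rate 3/7 < 1), rather than describing the sources themselves.
    s = syndrome(x) ^ syndrome(y)              # by linearity, s = Hz
    z_hat = leaders[tuple(int(b) for b in s)]  # decode z as the coset leader
    failures += not np.array_equal(z, z_hat)   # fails iff weight(z) >= 2
print(f"fraction of blocks where X xor Y was recovered: {1 - failures/n_blocks:.4f}")
```

With p = 0.02, a block is decoded incorrectly only when Z has two or more ones among its 7 positions, which happens with probability about 0.8%, so the printed fraction is roughly 0.99. Driving this error probability to zero while keeping the per-encoder rate near H(p) requires letting the blocklength grow, which is what the random linear coding proof does.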
- Type: Chapter
- Information: Network Information Theory, pp. 529–548
- Publisher: Cambridge University Press
- Print publication year: 2011