Book contents
- Frontmatter
- Contents
- Preface to the first edition
- Preface to the second edition
- Basic notation and conventions
- Introduction
- Part I Information measures in simple coding problems
- Part II Two-terminal systems
- 6 The noisy channel coding problem
- 7 Rate-distortion trade-off in source coding and the source–channel transmission problem
- 8 Computation of channel capacity and Δ-distortion rates
- 9 A covering lemma and the error exponent in source coding
- 10 A packing lemma and the error exponent in channel coding
- 11 The compound channel revisited: zero-error information theory and extremal combinatorics
- 12 Arbitrarily varying channels
- Part III Multi-terminal systems
- References
- Name index
- Index of symbols and abbreviations
- Subject index
11 - The compound channel revisited: zero-error information theory and extremal combinatorics
Published online by Cambridge University Press: 05 August 2012
Summary
A basic common characteristic of almost all channel coding problems treated in this book is that an asymptotically vanishing probability of error in transmission is tolerated. This permits us to exploit global knowledge of the statistics of sources and channels in order to enhance transmission speed. We see again and again that, when the parameters are tuned correctly, most codes perform in the same manner; in particular, optimal codes, far from being rare, abound. This ceases to be true if we are dealing with codes that are error-free.
The zero-error capacity of a DMC or compound DMC has been defined in Chapters 6 and 10 as the special case ε = 0 of ε-capacity. To keep this chapter self-contained, we give an independent (of course, equivalent) definition below.
A zero-error code of block length n for a DMC will be defined by a (codeword) set C ⊂ X^n, rather than by an encoder–decoder pair (f, φ), understanding that the message set coincides with the codeword set and the encoder is the identity mapping. This definition makes sense because if a decoder φ : Y^n → C with probability of error equal to zero exists for a codeword set C, this decoder is essentially unique.
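The definition above can be illustrated concretely: a zero-error decoder exists for C exactly when no two distinct codewords are confusable, i.e., when no output sequence can arise with positive probability from both. Since the channel is memoryless, two codewords are confusable iff their letters are confusable in every position. The following sketch checks this condition for a toy channel of my own construction (the matrix W and the example codeword sets are illustrative assumptions, not from the book):

```python
def confusable_letters(W, x1, x2):
    """Input letters x1, x2 are confusable if some output y can be
    produced by both with positive probability."""
    return any(W[x1][y] > 0 and W[x2][y] > 0 for y in range(len(W[x1])))

def is_zero_error_code(W, C):
    """C is a zero-error code for the DMC with transition matrix W iff
    no two distinct codewords are confusable; for a memoryless channel,
    codewords are confusable iff confusable in every position."""
    for i in range(len(C)):
        for j in range(i + 1, len(C)):
            if all(confusable_letters(W, a, b) for a, b in zip(C[i], C[j])):
                return False  # some output sequence decodes ambiguously
    return True

# Toy channel: 3 inputs, 2 outputs (rows = inputs, columns = outputs).
# Input 0 always yields output 0, input 2 always yields output 1,
# and input 1 may yield either, so it is confusable with both.
W = [[1.0, 0.0],
     [0.5, 0.5],
     [0.0, 1.0]]

print(is_zero_error_code(W, [(0, 0), (2, 2)]))  # True: letters 0, 2 never confusable
print(is_zero_error_code(W, [(0, 0), (1, 1)]))  # False: confusable in both positions
```

On such a channel the decoder, when it exists, is indeed forced: each received sequence is compatible with at most one codeword, which is the essential uniqueness noted above.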
In: Information Theory: Coding Theorems for Discrete Memoryless Systems, pp. 184–208. Publisher: Cambridge University Press. Print publication year: 2011.