Book contents
- Frontmatter
- Contents
- Preface to the first edition
- Preface to the second edition
- Basic notation and conventions
- Introduction
- Part I Information measures in simple coding problems
- Part II Two-terminal systems
- 6 The noisy channel coding problem
- 7 Rate-distortion trade-off in source coding and the source–channel transmission problem
- 8 Computation of channel capacity and Δ-distortion rates
- 9 A covering lemma and the error exponent in source coding
- 10 A packing lemma and the error exponent in channel coding
- 11 The compound channel revisited: zero-error information theory and extremal combinatorics
- 12 Arbitrarily varying channels
- Part III Multi-terminal systems
- References
- Name index
- Index of symbols and abbreviations
- Subject index
10 - A packing lemma and the error exponent in channel coding
Published online by Cambridge University Press: 05 August 2012
Summary
In this chapter we revisit the coding theorem for a DMC. By definition, for any R > 0 below capacity, there exists a sequence of n-length block codes (fn, φn) with rates converging to R and maximum probability of error converging to zero as n → ∞. On the other hand, by Theorem 6.5, for codes of rate converging to a number above capacity, the maximum probability of error converges to unity. Now we look at the speed of these convergences. This problem is far more complex than its source coding analog, and it has not yet been fully settled.
We saw in Chapter 6 that the capacity of a DMC can be achieved by codes all of whose codewords have approximately the same type. In this chapter we concentrate attention on constant composition codes, i.e., codes whose codewords all have exactly the same type. We shall investigate the asymptotics of the error probability for codes from this special class; the general problem reduces to this one in a simple manner.
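As a concrete illustration of the definition above, the following sketch (not from the book; the helper `type_of` and the sample codewords are hypothetical) computes the type, i.e., the empirical distribution, of each codeword and checks that a small set of binary codewords forms a constant composition code:

```python
from collections import Counter
from fractions import Fraction

def type_of(seq):
    """Return the type (empirical distribution) of a sequence as exact fractions."""
    n = len(seq)
    return {sym: Fraction(c, n) for sym, c in sorted(Counter(seq).items())}

# Hypothetical 4-length binary codewords: each contains two 0s and two 1s,
# so every codeword has the same type (1/2, 1/2) -- a constant composition code.
codewords = ["0011", "0101", "1010", "1100"]
types = [type_of(w) for w in codewords]
assert all(t == types[0] for t in types)
print(types[0])  # {'0': Fraction(1, 2), '1': Fraction(1, 2)}
```

Since the number of distinct types of n-length sequences grows only polynomially in n, restricting to a single type costs nothing asymptotically in rate, which is why the general problem reduces to the constant composition case.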
Our present approach will differ from that in Chapter 6. In that chapter channel codes were constructed by defining the encoder and the decoder simultaneously, in a successive manner. Here, attention will be focused on finding suitable encoders; the decoder will be determined by the encoder in a way to be specified later.
- Information Theory: Coding Theorems for Discrete Memoryless Systems, pp. 144–183. Publisher: Cambridge University Press. Print publication year: 2011.