Book contents
- Frontmatter
- Contents
- Preface
- Reading this book quickly
- Part I Introduction
- Part II Fitting the timetable
- Chapter 4 The order of the Markov chain
- Chapter 5 Stationarity of the Markov chain
- Chapter 6 Homogeneity
- Chapter 7 Everyday computations of stationarity, order and homogeneity
- Chapter 8 Sampling distributions
- Chapter 9 Lag sequential analysis
- Part III The timetable and the contextual design
- References
- Index
Chapter 4 - The order of the Markov chain
from Part II - Fitting the timetable
Published online by Cambridge University Press: 10 November 2009
Summary
Originally, information theory was devised in response to a need to conceptualize the amount of information in a message that might be transmitted through a noisy channel. In such a channel errors will inevitably occur in transmitting the message, and they will be more or less serious depending on the amount of redundancy in the language system used. In English, for example, if we received the word “Q_ICK” we could guess with complete confidence that the missing letter was “U.” We can make these guesses because languages have a temporal structure. What does this temporal structure mean? It means simply that we gain information in prediction by knowing the past.
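The point can be made concrete with a small sketch (an illustration, not from the book): estimate, from any sample of English text, the conditional distribution of the next symbol given the current one, and note how sharply that distribution concentrates after a “Q.” The sample string and names below are assumptions chosen for illustration.

```python
# Illustrative sketch: how much the immediately preceding letter tells us
# about the next one in English text.
from collections import Counter, defaultdict

sample = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG AND QUIETLY QUITS THE QUEUE"

# Count transitions between successive symbols (letters and spaces).
transitions = defaultdict(Counter)
for prev, nxt in zip(sample, sample[1:]):
    transitions[prev][nxt] += 1

# Conditional distribution of the next symbol given that the current one is "Q".
after_q = transitions["Q"]
total = sum(after_q.values())
for symbol, count in after_q.most_common():
    print(f"P({symbol!r} | 'Q') = {count / total:.2f}")
# In any reasonable sample of English, virtually all of the probability mass
# falls on "U", which is why the missing letter in "Q_ICK" is easy to guess.
```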
There was also a great deal of concern during World War II about breaking secret codes that the Axis powers used to transmit information. Conceptually the code-breaking problem is very similar to the problems we have in analyzing a sequence of codes. We inductively seek to determine the sequential structures that are present in the data.
Shannon's Approximations to English
In his 1949 paper, Claude Shannon generated a series of approximations to English. He used a 27-symbol alphabet: 26 letters and a space. The first approximation assumed that all symbols were independent and equally probable; these assumptions, which are obviously not very good ones, produced
XFMOL_RXKHRJFFJUJ_ZLPWCFWKCYJ_FFJEYVKCQSGHYD_QPAAMKBZAACIBZLHJQD
which looks very little like a typical sample of English text. Shannon's next approximation took into account the unequal frequencies with which the symbols actually occur.
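These first two approximations are easy to mimic. The sketch below is an illustration, not Shannon's procedure or his actual frequency tables; the relative frequencies shown are rough assumptions, and “_” marks the space symbol as in the quoted string. It draws from the 27-symbol alphabet either uniformly or in proportion to assumed frequencies.

```python
import random
import string

ALPHABET = list(string.ascii_uppercase) + ["_"]

def zeroth_order(n, rng):
    """Zero-order approximation: all 27 symbols independent and equally probable."""
    return "".join(rng.choice(ALPHABET) for _ in range(n))

def first_order(n, freqs, rng):
    """First-order approximation: symbols still independent, but drawn with
    probabilities proportional to their (assumed) relative frequencies."""
    symbols, weights = zip(*freqs.items())
    return "".join(rng.choices(symbols, weights=weights, k=n))

# Illustrative relative frequencies; any large English corpus could supply estimates.
freqs = {s: 1.0 for s in ALPHABET}
freqs.update({"_": 18.0, "E": 10.0, "T": 7.0, "A": 6.5, "O": 6.0, "N": 5.5, "I": 5.5})

rng = random.Random(0)
print(zeroth_order(65, rng))          # resembles the random string quoted above
print(first_order(65, freqs, rng))    # letter mix already looks a bit more English-like
```

Weighting the draws by frequency does not yet capture any temporal structure, since each symbol is still chosen independently of the one before it; that is what the higher-order approximations add.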
Sequential Analysis: A Guide for Behavioral Researchers, pp. 35-59. Cambridge University Press, 1990.