Book contents
- Frontmatter
- Contents
- Preface
- 1 Recurrence
- 2 Existence of invariant measures
- 3 Ergodic theorems
- 4 Ergodicity
- 5 Ergodic decomposition
- 6 Unique ergodicity
- 7 Correlations
- 8 Equivalent systems
- 9 Entropy
- 10 Variational principle
- 11 Expanding maps
- 12 Thermodynamic formalism
- Appendix A Topics in measure theory, topology and analysis
- Hints or solutions for selected exercises
- References
- Index of notation
- Index
7 - Correlations
Published online by Cambridge University Press: 05 February 2016
Summary
The models of dynamical systems that interest us the most, transformations and flows, are deterministic: the state of the system at any time determines the whole future trajectory; when the system is invertible, the past trajectory is equally determined. However, these systems may also present stochastic (that is, “partly random”) behavior: at some level coarser than that of individual trajectories, information about the past is gradually lost as the system is iterated. That is the subject of the present chapter.
In probability theory, the correlation between two random variables X and Y is the number
C(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y].
Note that the expression (X − E[X])(Y − E[Y]) is positive if X and Y are on the same side (either larger or smaller) of their respective means, E[X] and E[Y], and it is negative otherwise. Therefore, the sign of C(X,Y) indicates whether the two variables exhibit, predominantly, the same behavior or opposite behaviors, relative to their means. Furthermore, correlation close to zero indicates that the two behaviors are little, if at all, related to each other.
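This behavior of C(X, Y) can be checked numerically. The sketch below (an illustrative simulation, not from the text; the choice Y = X + noise is an assumption made for the example) estimates the correlation from samples, once for a pair of variables that move together and once for an independent pair.

```python
import numpy as np

# Numerical check of C(X, Y) = E[XY] - E[X]E[Y] on simulated data.
# The construction Y = X + noise is an illustrative assumption, not from the text.
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)             # X, with E[X] = 0
y = x + 0.5 * rng.normal(size=100_000)   # Y tends to lie on the same side of its mean as X
z = rng.normal(size=100_000)             # Z, independent of X

# Empirical correlations via E[XY] - E[X]E[Y]
c_xy = np.mean(x * y) - np.mean(x) * np.mean(y)  # clearly positive
c_xz = np.mean(x * z) - np.mean(x) * np.mean(z)  # close to zero
```

As expected, `c_xy` is markedly positive (X and Y predominantly exhibit the same behavior relative to their means), while `c_xz` is near zero (the two behaviors are essentially unrelated).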
Given an invariant probability measure μ of a dynamical system f : M → M and given measurable functions φ,ψ : M → ℝ, we want to analyze the evolution of the correlations
Cₙ(φ, ψ) = C(φ ∘ fⁿ, ψ)
when time n goes to infinity. We may think of φ and ψ as quantities that are measured in the system, such as temperature, acidity (pH), kinetic energy, and so forth. Then Cₙ(φ,ψ) measures how much the value of φ at time n is correlated with the value of ψ at time zero; that is, to what extent one value “influences” the other.
For example, if φ = 𝒳_A and ψ = 𝒳_B are characteristic functions, then ψ(x) provides information on the position of the initial point x, whereas φ(fⁿ(x)) informs on the position of its n-th iterate fⁿ(x). If the correlation Cₙ(φ,ψ) is small, then the first piece of information is of little use for making predictions about the second. That kind of behavior, where correlations approach zero as time n increases, is quite common in important models, as we are going to see.
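Decay of correlations can be observed experimentally on a concrete system. The sketch below (an illustration under assumptions of my own choosing, not an example taken from the text) uses the doubling map f(x) = 2x mod 1, which preserves Lebesgue measure on [0, 1), with the observables φ = ψ = identity, and estimates Cₙ(φ,ψ) by Monte Carlo sampling from the invariant measure.

```python
import numpy as np

# Empirical decay of correlations for the doubling map f(x) = 2x mod 1,
# which preserves Lebesgue measure on [0, 1).  The observables
# phi = psi = identity are an illustrative choice, not from the text.
rng = np.random.default_rng(1)
x0 = rng.random(1_000_000)  # i.i.d. samples from the invariant (Lebesgue) measure

def correlation(n, x=x0):
    """Monte Carlo estimate of C_n(phi, psi) with phi = psi = identity."""
    xn = (2.0 ** n * x) % 1.0  # n-th iterate of the doubling map
    return np.mean(xn * x) - np.mean(xn) * np.mean(x)

cors = [correlation(n) for n in range(9)]
```

A short computation (splitting [0,1] into 2ⁿ intervals on which {2ⁿx} is affine) gives Cₙ = 1/(12 · 2ⁿ) exactly for these observables, so the estimates in `cors` should shrink geometrically toward zero, from C₀ = 1/12 ≈ 0.083 down to essentially zero by n = 8.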
Foundations of Ergodic Theory, pp. 181–212. Publisher: Cambridge University Press. Print publication year: 2016.