Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- 2 The basics
- 3 Numerical methods
- 4 Lyapunov vectors
- 5 Fluctuations, finite-time and generalised exponents
- 6 Dimensions and dynamical entropies
- 7 Finite-amplitude exponents
- 8 Random systems
- 9 Coupled systems
- 10 High-dimensional systems: general
- 11 High-dimensional systems: Lyapunov vectors and finite-size effects
- 12 Applications
- Appendix A Reference models
- Appendix B Pseudocodes
- Appendix C Random matrices: some general formulas
- Appendix D Symbolic encoding
- Bibliography
- Index
4 - Lyapunov vectors
Published online by Cambridge University Press: 05 February 2016
Summary
The linear stability analysis of fixed points involves the computation not only of eigenvalues but also of eigenvectors. Analogously, a complete characterisation of chaotic or stochastic dynamical systems requires going beyond the knowledge of the LEs to include the identification of the (local) orientation of the stable and unstable manifolds.
An eigenvector of a given linear transformation can be identified as a direction that is mapped onto itself. This definition cannot be straightforwardly extended to contexts where different transformations are applied at different times, as no invariant direction is expected to exist. It is, however, possible to rephrase the definition in a way that allows a generalisation. Eigenvectors can, in fact, be viewed as the only directions which, when iterated forwards and backwards in time, are accompanied by an expansion rate that coincides with one of the eigenvalues of the given matrix (here, for the sake of simplicity, we assume that no complex eigenvalues exist). In this formulation, the invariance of the direction itself becomes a secondary property. Accordingly, the definition can be extended to any sequence of matrices, by requiring that the observed average expansion rate coincide with one of the LEs of the given system. Such directions, often referred to as covariant Lyapunov vectors in the physics literature, are nothing but the vectors Ek introduced in Section 2.3.2 in connection with the Oseledets splitting. They were introduced already by Oseledets (1968) and later formalised as tangent directions of invariant manifolds (Ruelle, 1979), but for many years they escaped the attention of researchers, probably because of the lack of effective algorithms to determine them. The first computation of covariant vectors was performed in the context of time-series analysis (see Section 3.7) (Bryant et al., 1990; Brown et al., 1991). Since then, covariant vectors have occasionally been used as a tool to determine Lyapunov exponents via a transfer-matrix approach (Politi et al., 1998) or to characterise spatio-temporal chaos (Kockelkoren, 2002). Only after the development of two effective computational methods (Ginelli et al., 2007; Wolfe and Samelson, 2007) was their usefulness eventually recognised.
Here, we provide a heuristic discussion of the subject, while a more formal introduction is given in Section 4.1. For simplicity, we assume that all of the LEs of an N-dimensional system are different.
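To make the idea concrete, the following minimal sketch illustrates, in Python/NumPy, the two-pass scheme underlying the Ginelli et al. (2007) method: a forward QR (Gram-Schmidt) iteration yields the LEs and the orthonormal frames, and a backward iteration of upper-triangular coefficient matrices singles out the covariant directions, whose average expansion rates again reproduce the LEs. This is our own illustrative sketch, not code from the chapter or its pseudocode appendix; the Hénon map, its parameter values and all names are assumptions made for the example.

```python
import numpy as np

# Illustrative system: the Henon map (a standard reference model chosen here
# for the sketch; the chapter's own examples may differ).
a, b = 1.4, 0.3

def henon(x, y):
    return 1.0 - a * x * x + y, b * x

def jacobian(x):
    # Jacobian of the Henon map; it depends on x only
    return np.array([[-2.0 * a * x, 1.0],
                     [b,            0.0]])

rng = np.random.default_rng(0)
n_steps, n_skip = 50_000, 1_000

# Relax onto the attractor
x, y = 0.1, 0.1
for _ in range(n_skip):
    x, y = henon(x, y)

# --- forward pass: QR iteration ------------------------------------------
# J_n Q_{n-1} = Q_n R_n ; the time-averaged log of diag(R_n) gives the LEs.
Q = np.eye(2)
Rs = []
log_fwd = np.zeros(2)
for _ in range(n_steps):
    Q, R = np.linalg.qr(jacobian(x) @ Q)
    s = np.where(np.diag(R) < 0.0, -1.0, 1.0)   # enforce positive diagonal
    Q, R = Q * s, s[:, None] * R
    Rs.append(R)
    log_fwd += np.log(np.diag(R))
    x, y = henon(x, y)

print("LEs from forward QR    :", log_fwd / n_steps)   # ~ (0.42, -1.62)

# --- backward pass (idea of Ginelli et al., 2007) -------------------------
# Covariant vectors are written as V_n = Q_n A_n with A_n upper triangular;
# the coefficients obey A_{n-1} = R_n^{-1} A_n, up to column normalisation.
# Each column contracts backwards at rate -lambda_k, so minus the average
# log-norm of the columns reproduces the Lyapunov exponents.
A = np.triu(rng.random((2, 2)) + np.eye(2))   # generic upper-triangular seed
log_bwd = np.zeros(2)
for R in reversed(Rs):
    A = np.linalg.solve(R, A)
    norms = np.linalg.norm(A, axis=0)
    log_bwd += np.log(norms)
    A /= norms

print("LEs from covariant dirs:", -log_bwd / n_steps)  # should match the above
```

To extract the covariant vectors themselves at individual trajectory points, one would additionally discard a transient at the start of the backward pass (and store the forward frames Q_n); for the average rates computed here that transient contribution is negligible over a long run.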
- Type: Chapter
- Information: Lyapunov Exponents: A Tool to Explore Complex Dynamics, pp. 54-69. Publisher: Cambridge University Press. Print publication year: 2016.