Book contents
- Frontmatter
- Dedication
- Contents
- List of Algorithms
- Notation
- Preface
- I Classical Methods
- II Factors and Groupings
- III Non-Gaussian Analysis
- 9 Towards Non-Gaussianity
- 10 Independent Component Analysis
- 11 Projection Pursuit
- 12 Kernel and More Independent Component Methods
- 13 Feature Selection and Principal Component Analysis Revisited
- Problems for Part III
- References
- Author Index
- Subject Index
- Data Index
13 - Feature Selection and Principal Component Analysis Revisited
from III - Non-Gaussian Analysis
Published online by Cambridge University Press: 05 June 2014
Summary
Den Samen legen wir in ihre Hände! Ob Glück, ob Unglück aufgeht, lehrt das Ende (Friedrich von Schiller, Wallensteins Tod, 1799). We put the seed in your hands! Whether it develops into fortune or misfortune, only the end can teach us.
Introduction
In the beginning – in 1901 – there was Principal Component Analysis. On our journey through this book we have encountered many different methods for analysing multidimensional data, and many times on this journey, Principal Component Analysis reared its – some might say, ugly – head. About a hundred years after its birth, a renaissance of Principal Component Analysis (PCA) has led to new theoretical and practical advances for high-dimensional data and to SPCA, where S variously refers to simple, supervised and sparse. It seems appropriate, at the end of our journey, to return to where we started and take a fresh look at the developments which have revitalised Principal Component Analysis. These include the availability of high-dimensional and functional data, the necessity for dimension reduction and feature selection, and new and sparse ways of representing data.
Exciting developments in the analysis of high-dimensional data have been interacting with similar ones in Statistical Learning. It is not clear where analysis of data stops and learning from data starts. An essential part of both is the selection of ‘important’ and ‘relevant’ features or variables.
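As a point of reference for the classical method the chapter revisits, the following is a minimal sketch of PCA computed via the singular value decomposition; the helper name `pca` and the toy data are illustrative and not taken from the book.

```python
import numpy as np

def pca(X, k):
    """Return the first k principal component scores, loadings,
    and the fraction of variance each component explains."""
    Xc = X - X.mean(axis=0)                  # centre each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                   # projections onto the first k PCs
    loadings = Vt[:k]                        # rows are the principal directions
    var_explained = s[:k] ** 2 / np.sum(s ** 2)
    return scores, loadings, var_explained

# toy example: 50 observations on 5 variables
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
scores, loadings, ve = pca(X, 2)
```

Sparse and supervised variants of PCA, as discussed in the chapter, modify this decomposition — for example by penalising the loadings so that most entries are exactly zero, which ties the method to feature selection.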
Chapter in: Analysis of Multivariate and High-Dimensional Data, pp. 421–475. Publisher: Cambridge University Press. Print publication year: 2013.