Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Executive Summary
- 1 Rudiments of Statistical Learning Theory
- 2 Vapnik–Chervonenkis Dimension
- 3 Learnability for Binary Classification
- 4 Support Vector Machines
- 5 Reproducing Kernel Hilbert Spaces
- 6 Regression and Regularization
- 7 Clustering
- 8 Dimension Reduction
- Part Two Optimal Recovery
- Part Three Compressive Sensing
- Part Four Optimization
- Part Five Neural Networks
- Appendices
- References
- Index
5 - Reproducing Kernel Hilbert Spaces
from Part One - Machine Learning
Published online by Cambridge University Press: 21 April 2022
Summary
This chapter provides a theoretical analysis of reproducing kernel Hilbert spaces. It starts by showing that every Hilbert space of functions in which point evaluations are continuous linear functionals possesses a reproducing kernel. It proceeds by showing that every positive semidefinite kernel gives rise to a reproducing kernel Hilbert space—this is the Moore–Aronszajn theorem. Finally, the Mercer theorem offers an explicit representation of this reproducing kernel Hilbert space under additional conditions on the kernel.
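For orientation, here is a minimal sketch of the three results in standard notation (the symbols below are generic and not taken from the chapter itself). For a reproducing kernel Hilbert space H of functions on a set X with kernel K, the reproducing property reads
\[
f(x) = \langle f, K(\cdot, x) \rangle_H \quad \text{for all } f \in H,\ x \in X .
\]
The Moore–Aronszajn theorem states that a symmetric kernel K is the reproducing kernel of a unique such H precisely when it is positive semidefinite, i.e.
\[
\sum_{i=1}^{n} \sum_{j=1}^{n} c_i c_j K(x_i, x_j) \ge 0 \quad \text{for all } n,\ x_1,\dots,x_n \in X,\ c_1,\dots,c_n \in \mathbb{R}.
\]
Under additional assumptions (e.g. K continuous on a compact domain), the Mercer theorem gives the explicit expansion
\[
K(x, y) = \sum_{n \ge 1} \lambda_n \, \varphi_n(x) \, \varphi_n(y),
\]
where \((\lambda_n, \varphi_n)\) are the eigenpairs of the integral operator \(f \mapsto \int K(\cdot, y) f(y)\, d\mu(y)\).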
- Type: Chapter
- Information: Mathematical Pictures at a Data Science Exhibition, pp. 31-40
- Publisher: Cambridge University Press
- Print publication year: 2022