Book contents
- Frontmatter
- Contents
- Acknowledgements
- List of contributors
- Foreword
- 1 Introduction
- 2 On-line Learning and Stochastic Approximations
- 3 Exact and Perturbation Solutions for the Ensemble Dynamics
- 4 A Statistical Study of On-line Learning
- 5 On-line Learning in Switching and Drifting Environments with Application to Blind Source Separation
- 6 Parameter Adaptation in Stochastic Optimization
- 7 Optimal On-line Learning in Multilayer Neural Networks
- 8 Universal Asymptotics in Committee Machines with Tree Architecture
- 9 Incorporating Curvature Information into On-line Learning
- 10 Annealed On-line Learning in Multilayer Neural Networks
- 11 On-line Learning of Prototypes and Principal Components
- 12 On-line Learning with Time-Correlated Examples
- 13 On-line Learning from Finite Training Sets
- 14 Dynamics of Supervised Learning with Restricted Training Sets
- 15 On-line Learning of a Decision Boundary with and without Queries
- 16 A Bayesian Approach to On-line Learning
- 17 Optimal Perceptron Learning: an On-line Bayesian Approach
Foreword
July 1997 saw the start of a six-month international research programme entitled Neural Networks and Machine Learning, hosted by the Isaac Newton Institute for Mathematical Sciences in Cambridge. During the programme many of the world's leading researchers in the field visited the Institute for periods ranging from a few weeks up to six months, and numerous younger scientists also benefited from a wide range of conferences and tutorials.
Amongst the many successful workshops held during the six-month Newton Institute programme, the one-week workshop on the theme of On-line Learning in Neural Networks, organized by David Saad, was particularly notable. He assembled an impressive list of speakers whose talks spanned essentially all of the major research issues in on-line learning. The workshop took place from 17 to 21 November 1997, with the Newton Institute's purpose-designed building providing a superb setting.
This book resulted directly from the workshop and comprises invited chapters written by each of the workshop speakers. It is the first book to focus exclusively on the important topic of on-line learning in neural networks. On-line algorithms, in which training patterns are treated sequentially and model parameters are updated after each presentation, have traditionally played a central role in many neural network models. Indeed, on-line gradient descent formed the basis of the first effective technique for training multilayer networks through error back-propagation. It remains of great practical significance for training large networks on data sets comprising several million examples, such as those routinely used for optical character recognition.
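By way of illustration (this sketch is not from the book itself), the on-line scheme described above can be written as a few lines of Python: each training example is presented once, and the weights are updated immediately after that presentation. The linear model, squared-error loss, learning rate eta and synthetic "teacher" below are illustrative assumptions.

```python
# Minimal sketch of on-line gradient descent (the LMS/delta rule) for a
# linear model: one example at a time, one weight update per example.
# eta, n_features and the synthetic teacher are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

n_features = 5
w = np.zeros(n_features)              # model parameters being learned
eta = 0.1                             # learning rate

# Synthetic teacher: targets are produced by a fixed weight vector.
w_teacher = rng.standard_normal(n_features)

for t in range(1000):                 # stream of training examples
    x = rng.standard_normal(n_features)   # one input pattern
    y = w_teacher @ x                     # its target output
    error = w @ x - y                     # prediction error on this example
    w -= eta * error * x                  # on-line gradient step on squared error

print("distance to teacher:", np.linalg.norm(w - w_teacher))
```

The defining feature, in contrast to batch learning, is that the gradient step uses only the current example, so no training set needs to be stored; this is what makes the approach attractive for the very large data sets mentioned above.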
On-Line Learning in Neural Networks, pp. 1-2. Cambridge University Press, 1999.