Book contents
- Frontmatter
- Contents
- Acknowledgements
- List of contributors
- Foreword
- 1 Introduction
- 2 On-line Learning and Stochastic Approximations
- 3 Exact and Perturbation Solutions for the Ensemble Dynamics
- 4 A Statistical Study of On-line Learning
- 5 On-line Learning in Switching and Drifting Environments with Application to Blind Source Separation
- 6 Parameter Adaptation in Stochastic Optimization
- 7 Optimal On-line Learning in Multilayer Neural Networks
- 8 Universal Asymptotics in Committee Machines with Tree Architecture
- 9 Incorporating Curvature Information into On-line Learning
- 10 Annealed On-line Learning in Multilayer Neural Networks
- 11 On-line Learning of Prototypes and Principal Components
- 12 On-line Learning with Time-Correlated Examples
- 13 On-line Learning from Finite Training Sets
- 14 Dynamics of Supervised Learning with Restricted Training Sets
- 15 On-line Learning of a Decision Boundary with and without Queries
- 16 A Bayesian Approach to On-line Learning
- 17 Optimal Perceptron Learning: an On-line Bayesian Approach
1 - Introduction
Published online by Cambridge University Press: 28 January 2010
Summary
Artificial neural networks (ANNs) form a field of research aimed at using complex systems, built from simple, identical, non-linear elements operating in parallel, to perform a variety of tasks; for reviews see Hertz et al. (1990), Bishop (1995) and Ripley (1996). Over the years neural networks have been successfully applied to regression, classification, control and prediction tasks in a variety of scenarios and architectures. The most popular and useful ANN architecture is the layered feed-forward neural network, in which the non-linear elements (neurons) are arranged in successive layers and information flows unidirectionally; this is in contrast to the other main generic architecture, the recurrent network, where feed-back connections are also permitted. Layered networks with an arbitrary number of hidden units have been shown to be universal approximators of continuous maps (Cybenko 1989; Hornik et al. 1989) and can therefore be used to implement any function defined in these terms.
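To make the layered feed-forward picture concrete, the following is a minimal NumPy sketch (not from the book; all names and sizes are illustrative) of a single-hidden-layer network of the kind to which the universal approximation results apply:

```python
import numpy as np

def init_params(n_in, n_hidden, n_out, rng):
    """Random weights for a single-hidden-layer feed-forward network."""
    return {
        "W1": rng.normal(0, 1.0 / np.sqrt(n_in), (n_hidden, n_in)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 1.0 / np.sqrt(n_hidden), (n_out, n_hidden)),
        "b2": np.zeros(n_out),
    }

def forward(params, x):
    """Information flows strictly forward: input -> hidden -> output."""
    h = np.tanh(params["W1"] @ x + params["b1"])  # hidden layer of non-linear units
    return params["W2"] @ h + params["b2"]        # linear output layer

rng = np.random.default_rng(0)
params = init_params(n_in=3, n_hidden=20, n_out=1, rng=rng)
y = forward(params, np.array([0.5, -1.0, 2.0]))
```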
Learning in layered neural networks refers to the modification of internal network parameters so as to bring the map implemented by the network as close as possible to a desired map. Learning may thus be viewed as optimization of the parameter set with respect to a set of training examples that instantiate the underlying rule. Two main training paradigms have emerged: batch learning, in which optimization is carried out with respect to the entire training set simultaneously, and on-line learning, where network parameters are updated after the presentation of each training example (which may be sampled with or without repetition).
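The contrast between the two paradigms can be sketched as follows for a linear model trained under squared error; this is a toy illustration under assumed conventions (constant learning rate, examples sampled with repetition in the on-line case), not the book's notation:

```python
import numpy as np

def batch_train(w, X, Y, eta, epochs):
    """Batch learning: one update per pass, using the gradient of the
    squared error averaged over the entire training set."""
    for _ in range(epochs):
        err = X @ w - Y                      # residuals on all P examples at once
        w = w - eta * (X.T @ err) / len(Y)   # single step on the averaged gradient
    return w

def online_train(w, X, Y, eta, steps, rng):
    """On-line learning: an update after each presented example,
    here sampled with repetition from the training set."""
    for _ in range(steps):
        i = rng.integers(len(Y))             # draw one example at random
        err = X[i] @ w - Y[i]
        w = w - eta * err * X[i]             # step on the single-example gradient
    return w

# Toy usage: recover a noisy linear rule w* from P = 200 examples in N = 5 dims.
rng = np.random.default_rng(0)
N, P = 5, 200
w_star = rng.normal(size=N)
X = rng.normal(size=(P, N))
Y = X @ w_star + 0.01 * rng.normal(size=P)

w_batch = batch_train(np.zeros(N), X, Y, eta=0.1, epochs=200)
w_online = online_train(np.zeros(N), X, Y, eta=0.05, steps=200 * P, rng=rng)
print(np.allclose(w_batch, w_star, atol=0.1), np.allclose(w_online, w_star, atol=0.1))
```

Note that the on-line rule never stores the training set as a whole; each parameter update uses only the current example, which is what makes the stochastic-approximation analyses of the following chapters applicable.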
On-Line Learning in Neural Networks, pp. 3-8. Publisher: Cambridge University Press. Print publication year: 1999