Book contents
- Frontmatter
- Contents
- Contributors
- Preface
- 1 Scaling Up Machine Learning: Introduction
- Part One Frameworks for Scaling Up Machine Learning
- 2 MapReduce and Its Application to Massively Parallel Learning of Decision Tree Ensembles
- 3 Large-Scale Machine Learning Using DryadLINQ
- 4 IBM Parallel Machine Learning Toolbox
- 5 Uniformly Fine-Grained Data-Parallel Computing for Machine Learning Algorithms
- Part Two Supervised and Unsupervised Learning Algorithms
- Part Three Alternative Learning Settings
- Part Four Applications
- Subject Index
- References
4 - IBM Parallel Machine Learning Toolbox
from Part One - Frameworks for Scaling Up Machine Learning
Published online by Cambridge University Press: 05 February 2012
Summary
In many ways, the objective of the IBM Parallel Machine Learning Toolbox (PML) is similar to that of Google's MapReduce programming model (Dean and Ghemawat, 2004) and the open-source Hadoop system: to provide Application Programming Interfaces (APIs) that enable programmers with no prior experience in parallel and distributed systems to implement parallel algorithms with relative ease. Like MapReduce and Hadoop, PML supports associative-commutative computations as its primary parallelization mechanism. Unlike MapReduce and Hadoop, PML fundamentally assumes that learning algorithms can be iterative in nature, requiring multiple passes over the data. It also extends the associative-commutative computational model in several respects, the most important of which are:
- The ability to maintain the state of each worker node between iterations, making it possible, for example, to partition and distribute data structures across workers
- Efficient distribution of data, including the ability for each worker to read a subset of the data, to sample the data, or to scan the entire dataset
- Access to both sparse and dense datasets
- Parallel merge operations using tree structures for efficient collection of worker results on very large clusters
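To make the associative-commutative model underlying these extensions concrete, the following minimal sketch (not PML code; the type and function names are illustrative assumptions) shows a partial result that each worker updates locally over its share of the data and that can then be merged pairwise in any order, for example up a tree of worker nodes:

```cpp
// Hypothetical illustration (not the actual PML API): an associative-
// commutative partial result computed locally by each worker and then merged.
#include <cstddef>
#include <vector>

struct SumCount {                 // partial result for computing a global mean
    double sum = 0.0;
    std::size_t count = 0;

    void update(double x) {       // local pass over one worker's data shard
        sum += x;
        count += 1;
    }

    // Merging is associative and commutative, so partial results can be
    // combined in any order, e.g. up a tree of nodes on a large cluster.
    void merge(const SumCount& other) {
        sum += other.sum;
        count += other.count;
    }
};

double global_mean(const std::vector<SumCount>& partials) {
    SumCount total;
    for (const auto& p : partials) total.merge(p);
    return total.count ? total.sum / total.count : 0.0;
}
```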
In order to make these extensions to the computational model and still address ease of use, PML provides an object-oriented API in which algorithms are objects that implement a predefined set of interface methods. The PML infrastructure then uses these interface methods to distribute algorithm objects and their computations across multiple compute nodes.
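As a rough sketch of the algorithm-object pattern described above (the interface and method names below are illustrative assumptions, not PML's actual API), an algorithm might expose methods that the infrastructure calls to initialize per-worker state, process local records, merge partial results, and decide whether another pass over the data is needed:

```cpp
// Hypothetical sketch of an algorithm-object interface; names are
// illustrative only and do not reflect the real PML API.
#include <vector>

class ParallelAlgorithm {
public:
    virtual ~ParallelAlgorithm() = default;

    // Called once on each worker before the first pass; because the object
    // keeps its state between iterations, data structures can be partitioned
    // and distributed across workers here.
    virtual void init(int workerId, int numWorkers) = 0;

    // Called on each worker for every record in its portion of the data.
    virtual void processRecord(const std::vector<double>& record) = 0;

    // Associative-commutative merge of another worker's partial state,
    // allowing results to be collected up a tree of compute nodes.
    virtual void merge(const ParallelAlgorithm& other) = 0;

    // Returning true requests another pass over the data (iterative learning).
    virtual bool needsAnotherIteration() const = 0;
};
```

Under this hypothetical pattern, the infrastructure would instantiate one algorithm object per worker, feed each object its share of the data, merge the partial states pairwise, and repeat the pass as long as another iteration is requested; the algorithm author writes only the object's methods, not the distribution logic.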
- Type: Chapter
- Information: Scaling up Machine Learning: Parallel and Distributed Approaches, pp. 69-88
- Publisher: Cambridge University Press
- Print publication year: 2011