7 - Random Forests – trees everywhere
Summary
The clearest path into the Universe is through a forest wilderness.
John Muir

The world's a forest, in which all lose their way; though by a different path each goes astray.
George Villiers

Having discussed single decision trees, we move on to large collections of trees: Random Forests (RF). As a single scheme, RF began as a collection of methods smartly pulled together by Leo Breiman and Adele Cutler. They built on earlier work, for example bootstrap draws from the data upon which to train single trees, and also helped promote the novel idea of randomly sampling from the list of features; see Note 1.
RF continues to demonstrate excellent performance on a wide range of data. It is currently our preferred learning machine, for its speed, convenience, and consistently good results.
Random Forests in less than five minutes
Here is an outline of the Random Forests procedure.
(1) Sample the data, using bootstrap draws; keep track of the data selected, and call the data not selected the out-of-bag (OOB) data; recall that bootstrap sampling means sampling with replacement.
(2) Sample the list of features; here, unlike the bootstrap draws, we do not sample with replacement, choosing in each draw only a small number of features from the complete list (this small number is essentially the main user-specified parameter of the whole RF scheme).
(3) Grow a single decision tree using the data and features selected; don't bother with pruning or finding the optimal size or complexity of the tree (a code sketch of these steps follows the outline).
[…]
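To make steps (1)–(3) concrete, here is a minimal sketch in Python; it is not from the book, and names such as `fit_forest`, `predict_forest`, `n_trees`, and `mtry` are illustrative assumptions. It uses scikit-learn's `DecisionTreeClassifier` for each single unpruned tree, with `max_features` re-drawing a small feature subset at each split, one common way of implementing step (2).

```python
# A minimal sketch of steps (1)-(3), assuming a generic classification
# task with integer class labels.  fit_forest, predict_forest, n_trees,
# and mtry are illustrative names, not from the text.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def fit_forest(X, y, n_trees=100, mtry=None):
    n, p = X.shape
    # A common default for the "small number" of features: sqrt(p).
    mtry = mtry or max(1, int(np.sqrt(p)))
    trees, oob_sets = [], []
    for _ in range(n_trees):
        # (1) Bootstrap draw: n rows sampled with replacement; the rows
        #     never drawn form this tree's out-of-bag (OOB) set.
        boot = rng.integers(0, n, size=n)
        oob = np.setdiff1d(np.arange(n), boot)
        # (2) + (3) Grow one full, unpruned tree; max_features=mtry
        #     means only mtry randomly chosen features are considered
        #     at each split.
        tree = DecisionTreeClassifier(
            max_features=mtry,
            random_state=int(rng.integers(2**31)))
        tree.fit(X[boot], y[boot])
        trees.append(tree)
        oob_sets.append(oob)
    return trees, oob_sets

def predict_forest(trees, X):
    # The forest predicts by majority vote over its trees.
    votes = np.stack([t.predict(X) for t in trees]).astype(int)
    return np.apply_along_axis(
        lambda v: np.bincount(v).argmax(), 0, votes)

# Tiny synthetic example.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
trees, oob_sets = fit_forest(X, y, n_trees=50)
print("training accuracy:", (predict_forest(trees, X) == y).mean())
```

Keeping `oob_sets` alongside the trees is what later permits honest error estimates without a separate test set, since each tree can be evaluated on the data it never saw during training.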