3 - A mangle of machines
Published online by Cambridge University Press: 05 June 2012
Summary
Avoid strange and unfamiliar words as a sailor avoids rocks at sea.
From On Analogy, by Julius Caesar (ca. 54 BCE)

Barring that natural expression of villainy which we all have, the machine looked honest enough.

(With apologies to Mark Twain)

Introduction
Our survey covers learning machines that have been studied intensively and applied widely. We do not discuss the numerous variants of each in detail, and we only touch on the full spectrum of learning machines now available; see Note 1. The goal of this chapter is to display the set of core ideas that propel each method. As before, we do not linger over the mathematics and make an effort to sharply limit any viewing of the mathematical details; see Note 2.
Linear regression
A simple classification or prediction method can often be obtained by fitting a linear regression model. It is a very classical and still very important method.
Let the outcome be y and the single predictor be x. Then linear regression of y on x is written as:
y = a + bx,
where the constants a and b have to be estimated from the data. The next step up in generality is to allow for multiple predictors:
x1, x2,…,xk,
where the xi can be any collection of discrete or continuous predictors.
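As a minimal sketch of the fitting step described above, the constants a and b1, …, bk can be estimated by ordinary least squares. The data, the true coefficient values, and the variable names below are hypothetical, used only for illustration:

```python
import numpy as np

# Hypothetical toy data: n observations on k = 2 continuous predictors.
rng = np.random.default_rng(0)
n, k = 50, 2
X = rng.normal(size=(n, k))

# Assumed true model (for illustration only): a = 1.0, b = (2.0, -0.5),
# plus a small amount of noise.
y = 1.0 + X @ np.array([2.0, -0.5]) + rng.normal(scale=0.1, size=n)

# Prepend a column of ones so the intercept a is estimated
# alongside the slope coefficients b1, ..., bk.
X1 = np.column_stack([np.ones(n), X])

# Least-squares solution to X1 @ coef ~= y.
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
a_hat, b_hat = coef[0], coef[1:]
print(a_hat, b_hat)
```

With enough data and modest noise, the estimates a_hat and b_hat land close to the values used to generate y; discrete predictors would enter X as numerically coded columns.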
- Statistical Learning for Biomedical Data, pp. 41–56
- Publisher: Cambridge University Press
- Print publication year: 2011