Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- 2 The Basic Bootstraps
- 3 Further Ideas
- 4 Tests
- 5 Confidence Intervals
- 6 Linear Regression
- 7 Further Topics in Regression
- 8 Complex Dependence
- 9 Improved Calculation
- 10 Semiparametric Likelihood Inference
- 11 Computer Implementation
- Appendix A Cumulant Calculations
- Bibliography
- Name Index
- Example index
- Subject index
2 - The Basic Bootstraps
Published online by Cambridge University Press: 05 June 2013
Summary
Introduction
In this chapter we discuss techniques which are applicable to a single, homogeneous sample of data, denoted by y1, …, yn. The sample values are thought of as the outcomes of independent and identically distributed random variables Y1, …, Yn whose probability density function (PDF) and cumulative distribution function (CDF) we shall denote by f and F, respectively. The sample is to be used to make inferences about a population characteristic, generically denoted by θ, using a statistic T whose value in the sample is t. We assume for the moment that the choice of T has been made and that it is an estimate for θ, which we take to be a scalar.
Our attention is focused on questions concerning the probability distribution of T. For example, what are its bias, its standard error, or its quantiles? What are likely values under a certain null hypothesis of interest? How do we calculate confidence limits for θ using T?
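As a concrete illustration of these questions, the bias, standard error, and quantiles of T can be approximated by resampling. The sketch below is not from the text: the data, the choice of the sample mean as T, and the number of replicates R are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample y_1, ..., y_n (illustrative data, not from the text).
y = rng.exponential(scale=2.0, size=30)

def statistic(sample):
    # The statistic T; here the sample mean, taken as the estimate t of theta.
    return sample.mean()

t = statistic(y)
R = 2000  # number of bootstrap replicates (an arbitrary illustrative choice)

# Resample n values with replacement from y, and recompute T each time.
t_star = np.array([statistic(rng.choice(y, size=y.size, replace=True))
                   for _ in range(R)])

bias = t_star.mean() - t                          # estimated bias of T
se = t_star.std(ddof=1)                           # estimated standard error of T
q_lo, q_hi = np.quantile(t_star, [0.025, 0.975])  # estimated quantiles of T
```

The replicate values `t_star` stand in for the unknown distribution of T, so its spread and quantiles answer the questions above empirically.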
There are two situations to distinguish, the parametric and the nonparametric. When there is a particular mathematical model, with adjustable constants or parameters ψ that fully determine f, such a model is called parametric and statistical methods based on this model are parametric methods. In this case the parameter of interest θ is a component of or function of ψ. When no such mathematical model is used, the statistical analysis is nonparametric, and uses only the fact that the random variables Yj are independent and identically distributed.
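The two situations lead to two different resampling schemes, which can be contrasted in a short sketch. Everything here is an illustrative assumption: the exponential model for f with parameter ψ, the data, and the use of the sample mean as T.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=30)  # hypothetical sample y_1, ..., y_n
n = y.size
R = 1000  # number of simulated samples (illustrative)

# Parametric: assume a model f with parameter psi (here exponential with
# mean psi), estimate psi from the data, then simulate from the fitted model.
psi_hat = y.mean()
t_param = np.array([rng.exponential(scale=psi_hat, size=n).mean()
                    for _ in range(R)])

# Nonparametric: use only the fact that the Y_j are independent and
# identically distributed; resample with replacement from the data itself.
t_nonpar = np.array([rng.choice(y, size=n, replace=True).mean()
                     for _ in range(R)])
```

Both schemes yield replicate values of T; they differ only in whether new samples are drawn from a fitted parametric model or from the observed data.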
- Bootstrap Methods and their Application, pp. 11-69. Publisher: Cambridge University Press. Print publication year: 1997