
Optimal choice between parametric and non-parametric bootstrap estimates

Published online by Cambridge University Press: 24 October 2008

Stephen Man Sing Lee
Affiliation:
Statistical Laboratory, University of Cambridge, 16 Mill Lane, Cambridge CB2 1SB

Abstract

A parametric bootstrap estimate (PB) may be more accurate than its non-parametric version (NB) if the parametric model upon which it is based is, at least approximately, correct. Construction of an optimal estimator based on both PB and NB is pursued with the aim of minimizing the mean squared error. Our approach is to pick an empirical estimate of the optimal tuning parameter ε ∈ [0, 1] which minimizes the mean squared error of εNB + (1 − ε)PB. The resulting hybrid estimator is shown to be more reliable than either PB or NB uniformly over a rich class of distributions. Theoretical asymptotic results show that the asymptotic error of this hybrid estimator is quite close in distribution to the smaller of the errors of PB and NB, and that all these errors typically converge at the same rate. A particular example is also presented to illustrate that the hybrid estimate can indeed be strictly better than either of the pure bootstrap estimates in terms of mean squared error. Two simulation studies verify the theoretical results and demonstrate the good practical performance of the hybrid method.
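The mechanics of the hybrid can be illustrated with a small simulation. The optimal weight follows from minimizing E[(εa + (1 − ε)b)²] over ε, where a and b denote the NB and PB errors: setting the derivative to zero gives ε* = (E[b²] − E[ab]) / E[(a − b)²], truncated to [0, 1]. The Python sketch below is a hypothetical illustration rather than the paper's exact algorithm: the target functional (the standard error of the sample mean), the normal working model, and the use of a second-level bootstrap with the plug-in value as a proxy for the unknown truth are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def nb_se(x, B=500):
    """Non-parametric bootstrap (NB): resample the data with replacement
    and take the spread of the resampled means."""
    means = rng.choice(x, size=(B, x.size), replace=True).mean(axis=1)
    return means.std(ddof=1)

def pb_se(x, B=500):
    """Parametric bootstrap (PB): simulate from a normal model fitted by
    maximum likelihood (an assumed working model)."""
    mu, sigma = x.mean(), x.std()  # MLE of the normal parameters
    means = rng.normal(mu, sigma, size=(B, x.size)).mean(axis=1)
    return means.std(ddof=1)

def hybrid_se(x, B=500, J=200):
    """Hybrid estimate eps*NB + (1-eps)*PB with eps chosen empirically.

    The moments in eps* = (E[b^2] - E[ab]) / E[(a-b)^2] are approximated
    by a second-level bootstrap that treats the full-sample plug-in value
    as a proxy for the unknown truth."""
    theta_hat = x.std(ddof=1) / np.sqrt(x.size)  # plug-in proxy for truth
    a = np.empty(J)
    b = np.empty(J)
    for j in range(J):
        xstar = rng.choice(x, size=x.size, replace=True)
        a[j] = nb_se(xstar, B) - theta_hat  # NB error on the resample
        b[j] = pb_se(xstar, B) - theta_hat  # PB error on the resample
    eps = (np.mean(b * b) - np.mean(a * b)) / np.mean((a - b) ** 2)
    eps = float(np.clip(eps, 0.0, 1.0))  # restrict to [0, 1] as in the paper
    return eps * nb_se(x, B) + (1 - eps) * pb_se(x, B), eps

x = rng.standard_t(df=5, size=40)  # data mildly at odds with the normal model
est, eps = hybrid_se(x)
print(f"hybrid SE estimate = {est:.4f} with eps = {eps:.2f}")

Truncating ε to [0, 1] keeps the hybrid inside the convex hull of the two pure estimates, so its worst-case behaviour is bounded by that of PB and NB; when the working model is badly wrong the empirical ε is pushed towards 1 and the hybrid falls back on NB.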

Type
Research Article
Copyright
Copyright © Cambridge Philosophical Society 1994
