Book contents
- Frontmatter
- Contents
- 14 Introduction to Tests of Hypotheses
- 15 Uniformly Most Powerful Tests
- 16 Unbiased Tests and Invariant Tests
- 17 Likelihood Based Tests
- 18 General Asymptotic Tests
- 19 Multiple Tests
- 20 Set Estimation and Confidence Regions
- 21 Inequality Constraints: Estimation and Testing
- 22 Nonnested Models
- 23 Asymptotic Efficiency
- 24 Asymptotic Theory
- Review of Linear Algebra and Matrix Calculus
- Review of Probability
- Index
24 - Asymptotic Theory
Published online by Cambridge University Press: 04 August 2010
Summary
In the preceding chapters, the main asymptotic properties of estimators and test statistics have often been derived heuristically. In this chapter we give a more rigorous proof of the consistency of these estimators, as well as a more detailed derivation of their asymptotic distributions. As a prerequisite, we provide some useful tools that underlie the Taylor series expansions used to establish, for instance, the asymptotic equivalences among the classical testing procedures.
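To make the role of those Taylor expansions concrete, the following is a generic sketch of the mean-value argument that typically underlies such derivations; the criterion $Q_n$, the true value $\theta_0$, and the intermediate point $\bar\theta_n$ are illustrative notation, not the chapter's own.

```latex
% First-order condition for an extremum estimator \hat\theta_n, expanded
% around the true value \theta_0 by the mean value theorem (applied row by
% row), with \bar\theta_n lying between \hat\theta_n and \theta_0:
\[
0 = \frac{\partial Q_n}{\partial \theta}(\hat\theta_n)
  = \frac{\partial Q_n}{\partial \theta}(\theta_0)
    + \frac{\partial^2 Q_n}{\partial \theta\,\partial \theta'}(\bar\theta_n)\,
      (\hat\theta_n - \theta_0).
\]
% Solving for the normalized estimation error:
\[
\sqrt{n}\,(\hat\theta_n - \theta_0)
  = -\left[\frac{1}{n}\,
       \frac{\partial^2 Q_n}{\partial \theta\,\partial \theta'}(\bar\theta_n)\right]^{-1}
    \frac{1}{\sqrt{n}}\,
    \frac{\partial Q_n}{\partial \theta}(\theta_0).
\]
```

Under suitable regularity conditions, a law of large numbers applied to the bracketed Hessian term and a central limit theorem applied to the normalized score then yield the limiting normal distribution.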
There are several methods of proof for establishing the consistency of an "extremum" estimator, i.e., an estimator obtained by maximizing or minimizing a statistical criterion. We have chosen to present those methods that appear to apply to the largest number of situations and for which the regularity conditions are the easiest to verify. Note that the class of extremum estimators contains the M-estimators (the maximum likelihood estimator, pseudo maximum likelihood estimators, etc.) as well as moment-type estimators (asymptotic least squares estimators, the generalized method of moments, etc.).
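As an illustration of this class, a generic extremum estimator can be written as follows; the particular criterion functions shown (a log-likelihood and a GMM quadratic form) are standard textbook examples rather than the chapter's specific definitions.

```latex
% Generic extremum estimator: a maximizer of a sample criterion Q_n over
% the parameter space \Theta.
\[
\hat\theta_n \in \arg\max_{\theta \in \Theta}
  Q_n(y_1, \dots, y_n; \theta).
\]
% M-estimation example (maximum likelihood):
\[
Q_n(\theta) = \sum_{i=1}^{n} \log f(y_i; \theta).
\]
% Moment-type example (generalized method of moments), with sample moments
% \bar g_n(\theta) = \frac{1}{n}\sum_{i=1}^{n} g(y_i;\theta) and a positive
% definite weighting matrix W_n:
\[
Q_n(\theta) = -\,\bar g_n(\theta)'\, W_n\, \bar g_n(\theta).
\]
```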
Existence of an Extremum Estimator
Before developing such an asymptotic theory, it is necessary to reconsider the definition of an estimator. In Chapter 2 an estimator was defined as a mapping from the set of observations (the sample space) to the set of possible values of the parameter (the parameter space). Since we are interested in the distribution of an estimator, in its expectation, its variance, etc., this probability distribution must be well defined and, for this reason, the mapping must be measurable. In particular, the sample space and the parameter space must be endowed with σ-algebras.
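In symbols, the measurability requirement reads as follows; the notation for the sample space $\mathcal{Y}^{n}$, its σ-algebra $\mathcal{A}^{\otimes n}$, and the Borel σ-algebra $\mathcal{B}(\Theta)$ is generic and not necessarily that of Chapter 2.

```latex
% An estimator is a measurable mapping from the sample space (with its
% \sigma-algebra) to the parameter space (with its Borel \sigma-algebra):
\[
\hat\theta_n :
  \bigl(\mathcal{Y}^{n}, \mathcal{A}^{\otimes n}\bigr)
  \longrightarrow
  \bigl(\Theta, \mathcal{B}(\Theta)\bigr),
\qquad
\hat\theta_n^{-1}(B) \in \mathcal{A}^{\otimes n}
  \quad \text{for every } B \in \mathcal{B}(\Theta).
\]
```

Measurability is what makes probabilities such as $P(\hat\theta_n \in B)$, and hence quantities such as $E(\hat\theta_n)$ and $V(\hat\theta_n)$, well defined.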
- Statistics and Econometric Models, pp. 383-404. Publisher: Cambridge University Press. Print publication year: 1995.