A number of introductory statistics books published during recent years have presented the basic theory of point estimation (see, for example, [1]–[4]). The time now seems ripe for a few cautionary tales to be told, or retold, for there is more to the estimation of a parameter than searching for an unbiased estimator or maximising the likelihood of a random sample. Having said that, I need at once to emphasise the distinction between methods of estimation on the one hand, and properties of estimators on the other. For a particular problem, the method of maximum likelihood, for example, provides a possible set of estimators for the parameters involved. Other methods such as least squares or minimum-χ² might provide alternatives; but no estimator should ever be put to use until its properties have been investigated in some depth. And here lies the rub; for, as I intend to show in the following paragraphs, the choice of criteria by which one estimator is judged superior to another rests, ultimately, with the individual statistician or researcher responsible for the estimate produced. In other words, no completely objective solution to the estimation problem is tenable.
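
To make the contrast between a method and the properties of its estimators concrete, consider the variance σ² of a normal population. The method of maximum likelihood yields the estimator with divisor n, which is biased; dividing by n − 1 instead gives the familiar unbiased estimator. Neither dominates the other, since the biased estimator generally has the smaller mean squared error (with n = 10 and σ² = 4, the two mean squared errors are 3.04 and about 3.56 respectively). The simulation sketch below, my own illustration rather than one drawn from [1]–[4], exhibits the trade-off numerically; the seed, sample size and number of trials are arbitrary choices.

    import numpy as np

    # Two estimators of a normal variance sigma^2, compared under two
    # different criteria.  The maximum-likelihood estimator divides the
    # sum of squared deviations by n and is biased downwards; dividing
    # by n - 1 instead gives the unbiased estimator.
    rng = np.random.default_rng(0)          # seed chosen arbitrarily
    n, sigma2, trials = 10, 4.0, 100_000    # illustrative values only

    samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
    xbar = samples.mean(axis=1, keepdims=True)
    ss = ((samples - xbar) ** 2).sum(axis=1)  # sum of squared deviations

    estimators = {"MLE (divisor n)": ss / n,
                  "unbiased (divisor n-1)": ss / (n - 1)}
    for name, est in estimators.items():
        bias = est.mean() - sigma2            # estimated bias
        mse = ((est - sigma2) ** 2).mean()    # estimated mean squared error
        print(f"{name:24s}  bias = {bias:+.3f}  MSE = {mse:.3f}")

Which of the two estimators one then prefers turns on whether unbiasedness or mean squared error is taken as the criterion; and that choice of criterion is precisely the point at issue in what follows.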