Consider a field experiment laid out in a randomized complete block design in which you study three types of fertilizer for two winter wheat cultivars. One year, one location – the experiment is not repeated. You design it and then spend a lot of time and money conducting it. You cultivate the soil and take care of the plants; you worry about them; you never know what can happen, so you can hardly wait for the crop to be harvested. And that day finally comes. The crop is harvested, and everything is fine. So here you are: all went well, you have the data in hand, and now only a simple thing remains to be done – analyse them. Well, yes, the experiment was conducted in one year, and you are aware that you cannot be sure the outcome would be the same next year or elsewhere, but never mind – it suffices to treat the conclusions as preliminary and get on with the interpretation. Why should you not? It was a properly designed experiment that took samples from the underlying infinite populations of the two winter wheat cultivars under the three fertilizer treatments studied. Statistics is here to help you out, is it not? Well, it is not. Statistics will not help you out if the experiment was poorly designed.

The agricultural science literature seldom explains what populations are studied and what types of samples are taken in designed experiments. To fill this gap, we discuss various aspects of the sampling process in designed experiments. In doing so, we draw on survey sampling methodology, the statistical framework for studying finite populations; we do this because survey sampling has developed an advanced theory of sampling processes, and this background can help us understand the intrinsic aspects of sampling in designed experiments.
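To make the design in the opening example concrete, the following minimal Python sketch randomizes one possible field layout for such a trial. The fertilizer and cultivar labels, the number of blocks and the random seed are illustrative assumptions introduced here, not values taken from the experiment discussed above.

```python
# Illustrative sketch only: randomizing a complete block layout for a
# 3 fertilizer x 2 cultivar factorial, as in the opening example.
# The number of blocks (4) and all labels are assumptions for the example.
import itertools
import random

fertilizers = ["F1", "F2", "F3"]   # hypothetical fertilizer labels
cultivars = ["C1", "C2"]           # hypothetical winter wheat cultivar labels
n_blocks = 4                       # assumed number of complete blocks

# All six fertilizer-by-cultivar treatment combinations
treatments = list(itertools.product(fertilizers, cultivars))

random.seed(42)  # fixed seed so the layout is reproducible
layout = []
for block in range(1, n_blocks + 1):
    plots = treatments[:]      # every block contains every treatment once
    random.shuffle(plots)      # independent randomization within each block
    for plot, (fert, cult) in enumerate(plots, start=1):
        layout.append((block, plot, fert, cult))

for row in layout:
    print("block %d, plot %d: fertilizer %s, cultivar %s" % row)
```

Such a layout randomizes treatments within blocks at one site in one year; it says nothing about how that site and year were sampled from the wider population of environments, which is the issue taken up in the rest of the paper.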