Published online by Cambridge University Press: 01 April 1999
Characterizing the rate and effects of deleterious genomic mutations is important. Most of the few current estimates come from the mutation–accumulation (M-A) approach, which has been extremely time- and labour-intensive. There is resurgent interest in implementing this approach. However, its estimation properties under different experimental designs are poorly understood. We investigate these issues in detail by simulation. We found that many previous M-A experiments could have been implemented far more efficiently, with much less time and expense, while still achieving the same estimation accuracy. If more than 100 lines are employed in M-A, and if each line is replicated at least 10 times during each assay, an experiment of 10 M-A generations with two assays (at the beginning and at the end of M-A) may achieve at least the same estimation quality as a typical M-A experiment. The number of replicates per M-A line needed for each assay depends largely on the magnitude of the environmental variance. While 10 replicates are reasonable for assaying most fitness traits, many more are needed for viability, which has an exceptionally large environmental variance. The investigation is carried out mainly with the Bateman–Mukai method of moments for estimation. Estimation using Keightley's maximum likelihood method is also investigated and discussed. These results should not only be useful for planning efficient M-A experiments, but may also help empiricists decide whether to adopt the M-A approach with manageable labour, time and resources.
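For readers unfamiliar with the Bateman–Mukai method of moments mentioned above, a minimal sketch follows. It assumes the standard formulation: given the per-generation decline in the trait mean (ΔM) and the per-generation increase in among-line variance (ΔV) observed over mutation accumulation, the method yields a lower bound on the genomic deleterious mutation rate, U ≥ ΔM²/ΔV, and an upper bound on the mean effect per mutation, s ≤ ΔV/ΔM. The function name and the numerical values are illustrative, not taken from the paper.

```python
def bateman_mukai(delta_m, delta_v):
    """Bateman-Mukai moment estimators (standard formulation; a sketch).

    delta_m : per-generation decline in the trait mean (taken positive)
    delta_v : per-generation increase in among-line variance
    Returns (u_min, s_max): a lower bound on the genomic deleterious
    mutation rate and an upper bound on the mean mutational effect.
    """
    if delta_m <= 0 or delta_v <= 0:
        raise ValueError("delta_m and delta_v must be positive")
    u_min = delta_m ** 2 / delta_v  # lower bound on mutation rate U
    s_max = delta_v / delta_m       # upper bound on mean effect s
    return u_min, s_max

# Hypothetical example: mean fitness declines by 0.001 per generation,
# among-line variance grows by 0.0001 per generation.
u, s = bateman_mukai(0.001, 0.0001)
# -> u = 0.01 (at least ~0.01 new deleterious mutations per genome per
#    generation), s = 0.1 (mean effect of at most ~10%)
```

The bounds are exact equalities only when all mutations have equal effects; variation in effect sizes makes U an underestimate and s an overestimate, which is one reason the abstract also considers Keightley's maximum-likelihood approach.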