We thank Haddaway & Pullin (2013) for their thoughtful response to our article, and for taking it in the constructive spirit in which it was intended. We have three thoughts in the light of our original article and their response.
Firstly, we are delighted that Haddaway & Pullin agree with us that evidence-informed conservation is a useful term in the context of policy making. As they observe, the distinction between evidence-informed and evidence-based approaches is widely recognized in other areas of policy, although it is not yet commonly drawn in conservation. We are also glad that Haddaway & Pullin share our concern about the use of evidence-based conservation thinking to address complex policy questions, particularly those with socio-economic dimensions. It is reassuring to hear that experts in the field recognize the limitations of evidence-based conservation for tackling such questions.
Secondly, we believe more thought is needed about whether evidence-based conservation should be seen as separate from policy. Haddaway & Pullin suggest that evidence-based conservation should be understood as a form of science, and that it ‘does not have a view on how policy works or favour one model over another’. We don't think this can be right. Evidence-based conservation surely, by definition, describes and endorses a particular form of policy: one based on evidence. If its practitioners have no view on the relationship between evidence and policy, then a better name for their work would be something like ‘conservation evidence synthesis’, which would in turn support ‘evidence-informed conservation’. We remain convinced that those practising evidence-based conservation do seek to make deliberate interventions in policy, even if only in the policies of conservation organizations. They see better information as a prerequisite for better policy, and they set out to make it available to decision makers. In doing so, they become embroiled as actors in the policy-making process, promoting the evidence they have synthesized over other forms of knowledge.
Thirdly, we remain nervous about the power of academic researchers and scientists, and the risk that they will gather only the research that is easily available to them (on the web and in journals, for example) and pronounce it, or allow it to be taken, as the sum of all evidence. Many conservation problems need to be understood from the field, not the library and computer room, and many people far from centres of calculation have valuable knowledge. The failure of many synthetic reviews to capture evidence from outside formal academic sources is often a function of practical constraints on access to that evidence: in Haddaway & Pullin's terms, it is not ‘available evidence’. This is not the fault of the reviewer, but it does limit the usefulness of the end product.
We have learned a great deal from the literature on evidence-based conservation, and have been impressed by much of the best practice available. Evidence-based conservation has begun to cast light on what is and is not known in many areas of conservation work, and on how certain we should be about it. There is much to be learned from stored formal knowledge, and potentially even more from oral and experiential knowledge. There is also much to be learned from other evidence-based policy practitioners, and we would urge those interested in distilling knowledge relevant to conservation to read widely about the role of evidence in policy outside their own discipline.