
Chapter Seven - Informing conservation decisions through evidence synthesis and communication

from Part I - Identifying priorities and collating the evidence

Published online by Cambridge University Press:  18 April 2020

William J. Sutherland, University of Cambridge
Peter N. M. Brotherton, Natural England
Zoe G. Davies, Durrell Institute of Conservation and Ecology (DICE), University of Kent
Nancy Ockendon, University of Cambridge
Nathalie Pettorelli, Zoological Society of London
Juliet A. Vickery, Royal Society for the Protection of Birds, Bedfordshire

Summary

The volume of evidence from scientific research and wider observation is greater than ever before, but much is inconsistent and scattered in fragments over increasingly diverse sources, making it hard for decision-makers to find, access and interpret all the relevant information on a particular topic, resolve seemingly contradictory results or simply identify where there is a lack of evidence. Evidence synthesis is the process of searching for and summarising a body of research on a specific topic in order to inform decisions, but is often poorly conducted and susceptible to bias. In response to these problems, more rigorous methodologies have been developed and subsequently made available to the conservation and environmental management community by the Collaboration for Environmental Evidence. We explain when and why these methods are appropriate, and how evidence can be synthesised, shared, used as a public good and benefit wider society. We discuss new developments with potential to address barriers to evidence synthesis and communication and how these practices might be mainstreamed in the process of decision-making in conservation.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2020
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/).

7.1 Introduction

The volume of evidence from scientific research and wider observation is greater than ever. Approximately 2.5 million articles are published annually (Plume & van Weijen, 2014) and this rate is increasing at around 3–3.5% per year (Ware & Mabe, 2015). Conservation is no exception to this trend and the result is a rapidly expanding body of potentially useful information for decision-makers (Li & Zhao, 2015). While the expansion of research represents an important increase in knowledge generation, much of this information is scattered in fragments over increasingly diverse sources. This, along with the sheer volume, makes it harder for decision-makers to find, access and digest all of the relevant information on a particular topic, resolve seemingly contradictory results or simply identify a lack of evidence. Evidence synthesis is the process of searching for, and summarising, a body of research on a specific topic in order to inform decisions. The extent of relevant research may range from nothing, or one or two primary studies, to many hundreds. Despite the obvious potential value of synthesising findings from multiple studies (where two studies may be all that is needed to add value through synthesis), methods of rigorous evidence synthesis have been largely neglected until recently. We argue that it is time to place evidence synthesis as a central pillar of evidence-informed decision-making in conservation and environmental management.
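The compounding implied by the cited growth rates can be made concrete with a quick calculation (a sketch using only the figures quoted above): at 3–3.5% annual growth, the volume of published articles doubles roughly every 20–23 years.

```python
# Doubling time implied by a constant compound growth rate in publications.
import math

def doubling_time(annual_rate):
    """Years for output to double at a constant compound growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

for rate in (0.03, 0.035):
    print(f"{rate:.1%} growth -> doubles in {doubling_time(rate):.1f} years")
```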

As an enterprise, evidence synthesis is very broad and includes many and diverse methodologies, some more rigorous than others. For example, syntheses labelled as ‘literature reviews’ often lack standardised methodology, fail to report their methods and therefore lack transparency or the potential for repeatability (O’Leary et al., 2016). Additionally, these literature reviews do not deal with the risk of bias in either the primary research (e.g. poor-quality experimental design and conclusions that may not be supported by a given study) or the synthesis process (e.g. selective use of information). Meta-analysis approaches have become popular where significant amounts of quantitative data are available, but they are often biased in the way they select and include studies in their analysis (Koricheva & Gurevitch, 2014). In response to these problems, more rigorous methodologies, such as systematic reviews, have been developed. These were first used in the health sector through the work of the Cochrane Collaboration (Higgins & Green, 2011), and have subsequently been applied to conservation and environmental management by the Collaboration for Environmental Evidence (Pullin & Knight, 2009; Collaboration for Environmental Evidence, 2018).
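The sensitivity of meta-analysis to study selection can be illustrated with a minimal inverse-variance (fixed-effect) pooling sketch. All effect sizes and variances here are hypothetical, not drawn from any cited study; the point is only that excluding the null and negative results roughly doubles the pooled estimate.

```python
# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# Effect sizes and variances are hypothetical, for illustration only.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect and its variance."""
    weights = [1.0 / v for v in variances]
    total_w = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total_w
    return mean, 1.0 / total_w

# Five hypothetical studies of the same intervention (e.g. log response ratios).
effects   = [0.40, 0.10, -0.05, 0.35, 0.02]
variances = [0.04, 0.02,  0.03, 0.05, 0.02]

full_mean, _ = pooled_effect(effects, variances)

# 'Selective' synthesis: include only the studies with clearly positive effects.
sel = [(e, v) for e, v in zip(effects, variances) if e > 0.05]
sel_mean, _ = pooled_effect([e for e, _ in sel], [v for _, v in sel])

print(f"All studies:      pooled effect = {full_mean:+.3f}")
print(f"Selective subset: pooled effect = {sel_mean:+.3f}")
```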

In this chapter we make a case for rigorous evidence synthesis: we explain why these methods are appropriate, how they can benefit wider society and how evidence can be synthesised, shared and used as a public good. Although evidence synthesis can inform a broad range of decision-making contexts, we focus here on two major aspects of conservation where evidence might be useful: first, measuring the direct and indirect impacts of human activity on the natural world; and second, evaluating the effectiveness of conservation efforts to mitigate those impacts.

7.2 The central role of evidence synthesis in informing decisions in conservation policy and practice

Many factors can contribute to making a decision. In contexts where social and political stakes are high, as is common for conservation policy, scientific evidence will likely only inform decisions, rather than act as the primary driving force behind them. Although evidence is sometimes crucial, it may equally be ignored or overruled by other factors, such as political context, infrastructure and capacity. Ideally, evidence synthesis should play a central role in providing reliable evidence and enabling wider society to understand or challenge decisions that might affect them. Making decisions without considering all available evidence might perpetuate biases, increase the likelihood of taking a wrong or costly action, or lead to missed opportunities to achieve faster or more cost-efficient outcomes. In a democratic society, comprehensive and rigorous evidence synthesis and open communication make it difficult for authorities to ‘sideline’ (i.e. deliberately ignore) evidence and/or make biased (i.e. selective) use of it without challenge and transparent justification.

Unfortunately, evidence synthesis is itself often ‘bypassed’ completely or manipulated to get the answer required (i.e. policy-based evidence) (Dicks et al., 2014). There may be significant resistance to the use of transparent evidence synthesis in the face of vested interests, and this may partly explain why organised and independent evidence synthesis receives so little attention or funding. Rigorous scientific evidence could also be seen as a threat to those with entrenched beliefs. Beyond outright opposition, complacency or inaccessibility of evidence might inhibit adoption of synthesis findings even when good intentions towards informed decision-making exist.

Fortunately, most decision-makers in conservation want practical advice that is grounded in the best available evidence (Cook et al., 2013). Leveraging syntheses and integrating their findings into decision-making processes requires an understanding of how and when evidence is necessary, and what level of confidence is needed to inform a decision. Such considerations will determine the choice of synthesis method(s), which should reflect practical needs to guide management decisions or future research. Syntheses can be used either to generate a new theory, conceptual framework or hypothesis (e.g. applying existing theory to a different context) or to test an existing hypothesis (e.g. evaluating the effectiveness of an intervention). In the context of effectiveness of interventions, evidence syntheses are relevant to decisions at several critical stages in the life cycle of a programme or initiative: (1) initial scoping of a new topic early in strategic planning (e.g. informing a new strategy on land use for a philanthropic foundation (Snilstviet et al., 2016)); (2) identification or validation of specific intervention designs (e.g. understanding how gender composition affects outcomes of resource management groups (Leisher et al., 2016)); (3) benchmarking of institutional outcomes against other programmes (e.g. investments in community forest management by the Global Environment Facility (Bowler et al., 2010)); (4) evaluation of overall effectiveness of an intervention across multiple contexts or applications (e.g. effects of property regimes in different biomes (Ojanen et al., 2017)).
Understanding the purpose of the syntheses for informing the different stages of decision-making will ensure selection of a suitable method, appropriate engagement of stakeholders and relevant communication of findings.

Some evidence synthesis methods, such as systematic review, have been described as following the ‘information deficit model’ (Owens, 2000); that is to say, they follow the assumption that the simple production and push delivery of evidence that fills a gap will be sufficient to achieve uptake. However, this perception misrepresents the full process behind the methodology. Systematic reviews can be socially inclusive, with extensive stakeholder contribution to formulating a question and approach, including setting the scope of the topic. This engagement attempts to ensure the findings of a review will fill a real and important synthesis gap (a knowledge need where sufficient primary research exists to allow synthesis) and respond to stakeholder demand. When engaging with stakeholders, a balance needs to be struck between involving them in the design of the review and independence from undue vested interest (Haddaway & Crowe, 2018). In the field of conservation, this balance is very much dependent on the nature of the question and the extent of vested interests (Kløcker Larsen & Nilsson, 2017). Many aspects of evidence synthesis are collective, with stakeholders having shared motivation to benefit from the findings. In other cases, evidence synthesis is conducted in contested areas, with stakeholders that hold opposing views and may be hostile to the process and its findings. In the latter case, it is important to have a process that allows consultation when appropriate but also provides independence when necessary. For example, for some key steps, such as initial formulation of the question, engagement with stakeholders is usually essential (Land et al., 2017), while other steps may need to be conducted free of such vested interests. To date, systematic reviews have engaged with a spectrum of stakeholders at different levels. Some reviews, for example those that are more academic or have specific commissioners (e.g. private goods reviews (Oliver & Dickson, 2016)), have only passively engaged stakeholders by informing or consulting them (typically only at the beginning of the review process), while others have employed more in-depth engagement, extending to co-design of review methods and scope (Land et al., 2017).

Alongside the purpose of syntheses, the level of confidence required to make a decision determines their method and scope. In some instances, where evidence of effectiveness is key, uncertainty in the evidence base hampers decision-making. In such instances one might ask ‘How much evidence is enough?’ or ‘How much uncertainty is acceptable?’ (Salafsky & Redford, 2013). The need for evidence synthesis in the conservation sector may also vary depending on aspects of spatial scale, complexity and controversy. For example, decisions regarding inexpensive and low-risk local-scale interventions (e.g. applied to improve biodiversity or habitat conditions in nature reserves) may benefit most from locally generated, rigorous evidence, or more commonly from primary research studies conducted in similar contexts. This evidence could be provided by a single, self-generated study (as in adaptive management), be internally generated by the relevant organisation, or come from collating evidence from similar case studies. In contrast, decisions regarding expensive, often large-scale, high-risk programmes (e.g. to eradicate poaching and illegal trade in wildlife), where stakeholders are likely to be global and might hold conflicting views, may benefit from an independent global-scale, multi-context evidence synthesis. This might require a rigorous analysis of what works, where, when and for whom, involving analysis of heterogeneity in outcome and identification of effect modifiers. Often within conservation, a broader set of evidence types (e.g. controlled trials, case studies, quantitative and qualitative research) is needed to fully capture the complexity of conservation contexts.

7.3 Key aspects of rigorous evidence synthesis and why they are needed

To be reliable, evidence syntheses should consider all available evidence and attempt to provide the most accurate and precise estimation of the truth. A suite of methodologies has been developed that maximises transparency and repeatability while minimising subjectivity, susceptibility to bias or influence of vested interest. The most widespread of these, systematic reviews and systematic maps, are well-documented secondary research methods that follow detailed guidance (e.g. Collaboration for Environmental Evidence, 2018) and use step-wise processes set out in an a-priori protocol to comprehensively identify and collate all available evidence (Table 7.1).

Table 7.1 Overview of systematic evidence synthesis stages and the issues they address. For an explanation of bias see Collaboration for Environmental Evidence (2018) or Bayliss and Beyer (2015)

Systematic review stage | Description | Defining features | Type of issue addressed
Review question identification and formulation (with stakeholder engagement) | Question is carefully identified and formulated with help of stakeholders | — | Social acceptance, relevance and legitimacy of the review process
Protocol | Protocol outlines the intended method in detail; it is peer-reviewed and published on an open-access platform | Public acceptance, peer review | Review bias, question creep
Searching for relevant literature | Comprehensive searches for grey and commercially published literature from a variety of sources | Comprehensiveness, repeatability (through transparency) | Publication bias
Eligibility screening | Careful screening of all identified articles according to pre-determined inclusion criteria | Consistency | Selection bias, review bias
Critical appraisal of study validity (optional for systematic maps) | A detailed assessment of the susceptibility to bias and generalisability of each study | Account for variability in internal validity and power of individual studies | Susceptibility to bias in individual studies and in study weighting by reviewers
Data coding and extraction | Transparent coding and, in the case of systematic reviews, extraction of study findings | Consistency, repeatability (through transparency), minimising subjectivity | Selection bias
Quantitative and/or qualitative data synthesis (not required for systematic maps) | Well-documented and comprehensive synthesis of qualitative and/or quantitative study findings | Comprehensiveness, repeatability (through transparency) | Selection bias, vote-counting, publication bias
Reporting and communication of review findings | Transparent reporting of the review results with extensive supplementary information | Repeatability (through transparency), avoiding overreach | Discussion bias

Systematic reviews in conservation and environmental management have most commonly aimed to answer specific cause-and-effect type questions, for example relating to the effect of a management intervention or exposure on a subject of concern (e.g. ‘What is the impact of a specific factor x on a subject z?’). In contrast, systematic maps collate and catalogue available evidence on a relatively broad subject, describing the nature of the evidence base and highlighting evidence clusters and gaps, along with methodological patterns in primary research (Collaboration for Environmental Evidence, 2018). Systematic maps can be used as an initial step of an evidence synthesis pathway to identify subtopics suitable for a systematic review and subtopics where there is insufficient evidence to make synthesis of primary data worthwhile. In such latter cases, which are common in conservation, the map may identify individual primary studies that provide useful evidence (for an example of a systematic review question generated from a map, see www.eviem.se/en/projects/SR15-Prescribed-forest-burning/).

Systematic reviews were originally developed in response to an absence of easily accessible and rigorous syntheses of available evidence. However, recent assessments have shown that non-systematic reviews aiming to inform environmental policy and practice are still prevalent but have low methodological reliability, suffering from a lack of transparency and rigour, and are consequently highly susceptible to bias (Woodcock et al., 2014, 2017; O’Leary et al., 2016). Moreover, the term ‘systematic review’ is often used by authors (and not challenged by editors or peer reviewers) when the reviews are in no way systematic. The production of substandard and ‘fake’ systematic reviews is increasing in all fields, from public health to environmental management and education (Haddaway et al., 2016; Ioannidis, 2016; Haddaway, 2017; Pussegoda et al., 2017); they are ‘fake’ in the sense that they lack the necessary comprehensiveness, transparency and reliability (Haddaway, 2017). This further confuses the issue for potential readers, with only a handful of environmental journals requiring authors to follow accepted standards of conduct and reporting (see Collaboration for Environmental Evidence, 2018). A potential evidence user can search with keywords like ‘systematic review’ and retrieve documents that claim to be such when in fact they are not. The misuse of the term ‘systematic review’ can undermine efforts towards effective decision-making and is a key reason for establishing independent standards.

Stakeholders, including scientists, rarely have the time or training to differentiate between a ‘true’ systematic review and one that misses critical components of the method (resulting in increased risk of bias and lack of transparency), especially when published in an outlet such as a peer-reviewed journal. To enhance the uptake of more rigorous and reliable synthesis methodologies and maximise the potential of evidence to inform decisions, independent coordinating bodies have been founded in different sectors of society to provide guidelines and standards for evidence synthesis. In the field of medicine this process began in the 1990s with the establishment of the Cochrane Collaboration, which aimed to conduct systematic reviews in order to provide healthcare professionals with the best available evidence on the effectiveness of clinical interventions (Higgins & Green, 2011). The methods were transferred to the field of conservation and environmental management in the early 2000s (Pullin & Stewart, 2006) and are now coordinated by the Collaboration for Environmental Evidence. These independent coordinating bodies provide guidelines for and training in the conduct of systematic reviews and systematic maps, as well as registering, endorsing and publishing such evidence syntheses. Syntheses registered through the coordinating bodies are scrutinised by methodology experts, guaranteeing a level of reliability and rigour (Collaboration for Environmental Evidence, 2018).

In circumstances where vested interests might potentially influence the outcome of an evidence synthesis, these independent organisations provide a framework and platform to assist the review team to achieve and demonstrate independence of the synthesis process. The framework allows for full engagement of commissioners and other stakeholders in formulation of the review question and planning of the review protocol, followed by independent peer review and publication of the protocol prior to the conduct of the review. In cases where conflict or the risk of undue influence from particular stakeholders is high, the review process should be conducted by an independent review team and the report submitted for independent peer review. Following this process, the review findings may be endorsed by the independent organisation.

7.4 New developments that address barriers to evidence synthesis and communication

There are persistent barriers to the conduct of environmental evidence syntheses and communication of their findings. First, the high resource costs required have been a major disincentive to producing high-quality syntheses, despite their critical value for effective conservation. Second, efficient and effective means of communicating results and facilitating their use for real-life decision-making scenarios are haphazardly applied. These barriers limit the ability of evidence synthesis to dynamically and adaptively respond to conservation challenges. However, new developments in big data and deep learning approaches are offering exciting opportunities to harness evidence syntheses and promote them to broader audiences.

Conducting rigorous evidence syntheses, such as systematic reviews, can carry both significant monetary and human resource costs (Dicks et al., 2014). These costs are particularly prohibitive for organisations with critical needs for evidence, but who have limited time and resources to engage in such synthesis efforts or even to glean needed information from lengthy synthesis reports (Elliott et al., 2014). Moreover, high costs make updating syntheses to create a dynamic evidence base with the most up-to-date knowledge effectively impossible using current technology (Garritty et al., 2010). Additionally, the window of opportunity for decision-making may be shorter than the time in which a credible synthesis can be completed. Thus, to be useful to conservation, evidence syntheses must be optimised to efficiently find, collate and communicate existing evidence (Boyack & Klavans, 2014).

In a policy space where decision-making timelines are short and demands for rigorous, reliable evidence are high, methods assisted by advances in computing can support rapid evidence collation as well as increase cost efficiency (Shemilt et al., 2016). Computer-assisted approaches range from tools that manage data and streamline the synthesis process to tools powered by machine learning algorithms that allow rapid screening and extraction of evidence with reduced human intervention (Kohl et al., 2018). Promising computer-assisted approaches, including automatic term recognition, document clustering, automatic document classification and document summarisation (Frantzi et al., 2000; O’Mara-Eves et al., 2015), have been trialled in medical and health topics (Ananiadou et al., 2009) and are beginning to be tested in ecological topics (Westgate et al., 2015; Grubert & Siders, 2016; Roll et al., 2018).
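To make the automatic document classification idea concrete, the toy screener below trains a Naive Bayes classifier on a handful of labelled titles and predicts whether a new record is relevant. Everything here, including the training titles and the review topic, is invented for illustration; real screening platforms use far richer features, models and human-in-the-loop workflows.

```python
# Toy Naive Bayes title screener: a minimal sketch of automatic document
# classification for eligibility screening, not a production tool.
import math
from collections import Counter

def tokens(text):
    return text.lower().split()

class NaiveBayesScreener:
    def fit(self, records, labels):
        self.counts = {"relevant": Counter(), "irrelevant": Counter()}
        self.docs = Counter(labels)
        for text, label in zip(records, labels):
            self.counts[label].update(tokens(text))
        self.vocab = set(self.counts["relevant"]) | set(self.counts["irrelevant"])
        return self

    def score(self, text, label):
        # Log prior plus Laplace-smoothed log likelihood of each token.
        total = sum(self.counts[label].values())
        s = math.log(self.docs[label] / sum(self.docs.values()))
        for tok in tokens(text):
            s += math.log((self.counts[label][tok] + 1) / (total + len(self.vocab)))
        return s

    def predict(self, text):
        return max(("relevant", "irrelevant"), key=lambda lab: self.score(text, lab))

# Hypothetical training titles for a review on hedgerow management and birds.
train = [
    ("effects of hedgerow management on farmland bird abundance", "relevant"),
    ("hedgerow cutting regimes and bird nesting success", "relevant"),
    ("urban traffic noise modelling in city centres", "irrelevant"),
    ("consumer preferences for dairy products", "irrelevant"),
]
screener = NaiveBayesScreener().fit([t for t, _ in train], [l for _, l in train])
print(screener.predict("bird abundance under different hedgerow cutting regimes"))
```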

These developments promise greater efficiency in the synthesis process and could enable dynamic syntheses that are continuously updated as new evidence becomes available. However, certain caveats and limitations must be considered before widespread deployment of computer-assisted tools. First, unlike medicine and fields such as economics, the semantics of conservation are highly heterogeneous and non-standardised (Westgate & Lindenmayer, 2017), posing difficulties both for efficient and comprehensive searching and for reliable application of machine learning algorithms to sort and mine text for desired patterns. Second, the performance of these approaches remains largely untested empirically, particularly for conservation and environmental topics. As the value of evidence synthesis methods lies in their transparency and credibility, reliable data on the efficacy of different computer-assisted approaches are important for uptake and expansion. Third, many existing computer-assisted platforms are fee-based or require programming skills, limiting their utility to a broader field of users. To improve global ability to address pervasive environmental threats, we need to democratise access to the tools that can help decision-making worldwide, not solely in countries or among researchers with means.

7.5 Mainstreaming evidence synthesis for decision support

Bringing open science and collaborative practice to the interface of conservation and technology will require forming partnerships and fostering conversation between evidence producers, evidence users and data scientists, building a cohesive and engaged community of practice with open channels of communication to all users (Joppa, 2015). This will allow the broader community to use existing efforts as a starting point, avoiding reinventing the wheel and wasting already limited resources (Lowndes et al., 2017). Furthermore, collaborative partnerships and creative funding can foster the long-term sustainability of tools that can live on to serve users. Too often, tools and platforms are created in good faith but lack the ongoing funding and personnel needed to maintain and update them. This is particularly important because tools are most useful when they can dynamically respond to user needs and emerging technologies. Such support is a critical stepping stone for breaking down barriers to understanding and using evidence synthesis methodologies; without a dynamic toolbox, synthesis methods will remain aloof from the needs of a diversifying and widening audience.

Evidence synthesis conducted to Collaboration for Environmental Evidence standards generates systematic reviews and systematic maps that are theoretically accessible to all. Yet, simply because something is available does not mean that the potential user is aware of it, knows where to find it, or even how to make sense of it. This is particularly the case for those new to the concept of evidence synthesis. Indeed, many practitioners and policy-makers rely on past experience or consult colleagues, rather than make use of the full suite of evidence (Pullin et al., 2004; Young et al., 2016). These issues create a number of inherent challenges for those decision-makers seeking to be evidence-informed and also broader potential audiences, such as stakeholders and wider society.

One of the mantras of science communication is ‘know your audience’ (Wilson et al., 2016; Cooke et al., 2017) and to have impact, the findings of an evidence synthesis need to be effectively tailored and communicated to different groups of people in different ways and through different media. Communication efforts should, for example, be sensitive to the fact that different groups vary in their ‘trust’ of the science they encounter from different sources (e.g. academic journals, colleagues, social media) (Wilson et al., 2016; Cooke et al., 2017).

A study that surveyed the willingness of practitioners to use a synopsis of relevant literature on bird conservation found that participants were more likely to use the evidence to inform decisions if it was easily accessible and in a clearly summarised format (Walsh et al., 2014). Similar summaries are needed to complement evidence syntheses. These summaries may then need to be further refined and transformed into policy briefs. Policy briefs are often written through the cultural lens of a given organisation and a given issue, meaning that they are unlikely to be useful if prepared in a generic format. Sundin et al. (2018) recently proposed the use of storytelling as a tool to effectively communicate the results of evidence syntheses. This method could give meaning to the evidence and can be communicated through videos (e.g. see https://youtu.be/4uPowxn2skg), presentations or public forums (e.g. newspapers, magazines). Nevertheless, uptake of these methods in science communication is generally slow, and they may still rely on poorly conducted syntheses (McKinnon et al., 2018).

There has also been a rise in knowledge management platforms and data-visualisation tools for exploring the underlying data that support evidence syntheses (e.g. www.3ieimpact.org/en/evaluation/evidence-gap-maps/, or www.cedar.iph.cam.ac.uk/resources/evidence/). These platforms present data from synthesis projects using interactive features and intuitive visualisations. For example, the Evidence for Nature and People Data Portal (www.natureandpeopleevidence.org) allows users to filter data according to desired parameters – such as a specific intervention, outcome or geographic region – and to visualise the resulting trends. Syntheses, and in particular systematic maps, can be multi-layered and complex, precipitating a need for a graphical and intuitive interface that a broader audience can use (Figure 7.1).

Figure 7.1 An example of an evidence ‘heat map’ linking conservation interventions with human well-being outcomes. The map allows the user to assess the evidence base for gaps and gluts, and to click on each box to examine the relevant studies.
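The structure behind such a map is simple: a matrix of study counts indexed by intervention and outcome, where zeros mark evidence gaps and large counts mark gluts. A minimal text rendering is sketched below; the intervention names, outcome names and counts are all hypothetical and are not taken from Figure 7.1.

```python
# Minimal text rendering of an intervention-by-outcome evidence map.
# Study counts are hypothetical; a real map is built from coded studies.
interventions = ["Protected areas", "Community forestry", "Payment schemes"]
outcomes = ["Income", "Health", "Education"]

# counts[i][j] = number of studies linking intervention i to outcome j;
# a zero indicates an evidence gap for that combination.
counts = [
    [12, 3, 0],   # Protected areas
    [8,  1, 2],   # Community forestry
    [5,  0, 0],   # Payment schemes
]

def render_map(interventions, outcomes, counts):
    """Return the evidence map as lines of text."""
    width = max(len(i) for i in interventions)
    lines = [" " * width + "  " + "  ".join(f"{o:>9}" for o in outcomes)]
    for name, row in zip(interventions, counts):
        cells = "  ".join(f"{c:>9}" for c in row)
        lines.append(f"{name:<{width}}  {cells}")
    return lines

for line in render_map(interventions, outcomes, counts):
    print(line)
```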

If reported responsibly, these platforms and visualisations can play an important role in how stakeholders access evidence. A challenge for these approaches is to communicate that evidence syntheses are only estimates of the truth, which depend on the reliability of the evidence with which they were made. There is potential for evidence to be misinterpreted if the relative weight or reliability of a given element is misconstrued when visualised. Regardless of the output, it is important that authors of evidence syntheses communicate any uncertainty in the evidence and the risks associated with relying on studies that have high risk of bias.

Although it is laudable to communicate the findings of a topical evidence synthesis, additional efforts are also needed to communicate to practitioners the value of systematic reviews or maps, how they differ from other evidence synthesis methods and how they can be integrated with existing science advice and decision-making processes within different regions or institutions. Writing academic papers and delivering presentations at scientific conferences are unlikely to reach the typical practitioner, so creative approaches to outreach are needed to reach and inform them.

Without rigorous evidence synthesis, policies and practice claiming to be ‘evidence-informed’ can be meaningless. For conservation and the environmental sector in general, the value of evidence synthesis has yet to be fully realised, and we believe its time is yet to come. However, recent methodological developments, awareness-raising and capacity development, together with new technologies for conducting syntheses faster and more efficiently, suggest this time is not far away. Conservation is an interdisciplinary field and cannot long remain in a state of relative evidence synthesis deficit compared with the other sectors to which it seeks to be relevant. Although still marginalised, the methodology and infrastructure to build conservation’s evidence base through rigorous synthesis now exist at a global level. A commitment to evidence-informed decision-making that recognises the central role of rigorous evidence synthesis is required from key actors in the sector if these potential benefits are to be achieved.

References

Ananiadou, S., Rea, B., Okazaki, N., et al. 2009. Supporting systematic reviews using text mining. Social Science Computer Review, 27, 509–523. https://doi.org/10.1177/0894439309332293
Bayliss, H. R. & Beyer, F. R. 2015. Information retrieval for ecological syntheses. Research Synthesis Methods, 6, 136–148.
Bowler, D., Buyung-Ali, L., Healey, J. R., et al. 2010. The Evidence Base for Community Forest Management as a Mechanism for Supplying Global Environmental Benefits and Improving Local Welfare: A STAP Advisory Document. Washington, DC: Scientific and Technical Advisory Panel, Global Environment Facility.
Boyack, K. W. & Klavans, R. 2014. Creation of a highly detailed, dynamic, global model and map of science. Journal of the Association for Information Science and Technology, 65, 670–685. https://doi.org/10.1002/asi.22990
Collaboration for Environmental Evidence. 2018. Guidelines and Standards for Evidence Synthesis in Environmental Management. Version 5.0. Available from www.environmentalevidence.org/information-for-authors (accessed 8 March 2018).
Cook, C. N., Mascia, M. B., Schwartz, M. W., et al. 2013. Achieving conservation science that bridges the knowledge–action boundary. Conservation Biology, 27, 669–678. https://doi.org/10.1111/cobi.12050
Cooke, S. J., Gallagher, A. J., Sopinka, N. M., et al. 2017. Considerations for effective science communication. FACETS, 2, 233–248. https://doi.org/10.1139/facets-2016-0055
Dicks, L. V., Walsh, J. C. & Sutherland, W. J. 2014. Organising evidence for environmental management decisions: a ‘4S’ hierarchy. Trends in Ecology and Evolution, 29, 607–613. https://doi.org/10.1016/j.tree.2014.09.004
Elliott, J. H., Turner, T., Clavisi, O., et al. 2014. Living systematic reviews: an emerging opportunity to narrow the evidence–practice gap. PLoS Medicine, 11, 1–6. https://doi.org/10.1371/journal.pmed.1001603
Frantzi, K., Ananiadou, S. & Mima, H. 2000. Automatic recognition of multi-word terms: the C-value/NC-value method. International Journal on Digital Libraries, 3, 115–130. https://doi.org/10.1007/s007999900023
Garritty, C., Tsertsvadze, A., Tricco, A. C., et al. 2010. Updating systematic reviews: an international survey. PLoS ONE, 5, e9914. https://doi.org/10.1371/journal.pone.0009914
Grubert, E. & Siders, A. 2016. Benefits and applications of interdisciplinary digital tools for environmental meta-reviews and analyses. Environmental Research Letters, 11, 093001. https://doi.org/10.1088/1748-9326/11/9/093001
Haddaway, N. R., Land, M. & Macura, B. 2016. ‘A little learning is a dangerous thing’: a call for better understanding of the term ‘systematic review’. Environment International, 99, 356–360. https://doi.org/10.1016/j.envint.2016.12.020
Haddaway, N. R. 2017. Response to ‘Collating science-based evidence to inform public opinion on the environmental effects of marine drilling platforms in the Mediterranean Sea’. Journal of Environmental Management, 203, 612–614. https://doi.org/10.1016/j.jenvman.2017.03.043
Haddaway, N. R. & Crowe, S. 2018. Experiences and lessons in stakeholder engagement in environmental evidence synthesis: a truly special series. Environmental Evidence, 7, art11.
Higgins, J. & Green, S. 2011. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1. Cochrane Collaboration. Available from http://handbook-5-1.cochrane.org (accessed 8 March 2018).
Ioannidis, J. P. A. 2016. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. The Milbank Quarterly, 94, 485–514. https://doi.org/10.1111/1468-0009.12210
Joppa, L. N. 2015. Technology for nature conservation: an industry perspective. Ambio, 44, 522–526. https://doi.org/10.1007/s13280-015-0702-4
Kløcker Larsen, R. & Nilsson, A. E. 2017. Knowledge production and environmental conflict: managing systematic reviews and maps for constructive outcome. Environmental Evidence, 6. https://doi.org/10.1186/s13750-017-0095-x
Kohl, C., McIntosh, E. J., Unger, S., et al. 2018. Online tools supporting the conduct and reporting of systematic reviews and systematic maps: a case study on CADIMA and review of existing tools. Environmental Evidence, 7(8). https://doi.org/10.1186/s13750-018-0115-5
Koricheva, J. & Gurevitch, J. 2014. Uses and misuses of meta-analysis in plant ecology. Journal of Ecology, 102, 828–844. https://doi.org/10.1111/1365-2745.12224
Land, M., Macura, B., Bernes, C., et al. 2017. A five-step approach for stakeholder engagement in prioritisation and planning of environmental evidence syntheses. Environmental Evidence, 6(25). https://doi.org/10.1186/s13750-017-0104-0
Leisher, C., Temsah, G., Booker, F., et al. 2016. Does the gender composition of forest and fishery management groups affect resource governance and conservation outcomes? A systematic map. Environmental Evidence, 5(6). https://doi.org/10.1186/s13750-016-0057-8
Li, W. & Zhao, Y. 2015. Bibliometric analysis of global environmental assessment research in a 20-year period. Environmental Impact Assessment Review, 50, 158–166. https://doi.org/10.1016/j.eiar.2014.09.012
Lowndes, J. S. S., Best, B. D., Scarborough, C., et al. 2017. Our path to better science in less time using open data science tools. Nature Ecology and Evolution, 1, 160. https://doi.org/10.1038/s41559-017-0160
McKinnon, M. C., Cheng, S. H., Dupre, S., et al. 2016. What are the effects of nature conservation on human well-being? A systematic map of empirical evidence from developing countries. Environmental Evidence, 5(8).
McKinnon, M. C., Cheng, S., Dicks, L., et al. 2018. Seek higher standards to honestly assess conservation effectiveness. Mongabay. Published online 17 April 2018.
O’Leary, B. C., Kvist, K., Bayliss, H. R., et al. 2016. The reliability of evidence reviews in environmental science and conservation. Environmental Science & Policy, 64, 75–82. https://doi.org/10.1016/j.envsci.2016.06.012
O’Mara-Eves, A., Thomas, J., McNaught, J., et al. 2015. Using text mining for study identification in systematic reviews: a systematic review of current approaches. Systematic Reviews, 4(5). https://doi.org/10.1186/2046-4053-4-5
Ojanen, M., Zhou, W., Miller, D. C., et al. 2017. What are the environmental impacts of property rights regimes in forests, fisheries and rangelands? Environmental Evidence, 6(12). https://doi.org/10.1186/s13750-017-0090-2
Oliver, S. & Dickson, K. 2016. Policy-relevant systematic reviews to strengthen health systems: models and mechanisms to support their production. Evidence and Policy, 12(2), 235–259. https://doi.org/10.1332/174426415X14399963605641
Owens, S. 2000. ‘Engaging the public’: information and deliberation in environmental policy. Environment and Planning A, 32, 1141–1148. https://doi.org/10.1068/a3330
Plume, A. & van Weijen, D. 2014. Publish or perish? The rise of the fractional author. Research Trends, 38. www.researchtrends.com/issue-38-september-2014/publish-or-perish-the-rise-of-the-fractional-author/ (accessed 12 December 2019).
Pullin, A. S., Knight, T. M., Stone, D. A., et al. 2004. Do conservation managers use scientific evidence to support their decision-making? Biological Conservation, 119, 245–252. https://doi.org/10.1016/j.biocon.2003.11.007
Pullin, A. S. & Stewart, G. B. 2006. Guidelines for systematic review in conservation and environmental management. Conservation Biology, 20, 1647–1656. https://doi.org/10.1111/j.1523-1739.2006.00485.x
Pullin, A. S. & Knight, T. M. 2009. Doing more good than harm: building an evidence-base for conservation and environmental management. Biological Conservation, 142, 931–934. https://doi.org/10.1016/j.biocon.2009.01.010
Pussegoda, K., Turner, L., Garritty, C., et al. 2017. Systematic review adherence to methodological or reporting quality. Systematic Reviews, 6(131). https://doi.org/10.1186/s13643-017-0527-2
Roll, U., Correia, R. A. & Berger-Tal, O. 2018. Using machine learning to disentangle homonyms in large text corpora. Conservation Biology, 32, 716–724. https://doi.org/10.1111/cobi.13044
Salafsky, N. & Redford, K. 2013. Defining the burden of proof in conservation. Biological Conservation, 166, 247–253. https://doi.org/10.1016/j.biocon.2013.07.002
Shemilt, I., Khan, N., Park, S., et al. 2016. Use of cost-effectiveness analysis to compare the efficiency of study identification methods in systematic reviews. Systematic Reviews, 5, 140. https://doi.org/10.1186/s13643-016-0315-4
Snilstveit, B., Stevenson, J., Villar, P. F., et al. 2016. Land-use change and forestry programmes: evidence on the effects on greenhouse gas emissions and food security. www.3ieimpact.org/media/filer_public/2016/11/17/egm3-landuse-forest.pdf (accessed 8 March 2018).
Sundin, A., Andersson, K. & Watt, R. 2018. Rethinking communication: integrating storytelling for increased stakeholder engagement in environmental evidence synthesis. Environmental Evidence, 7(6). https://doi.org/10.1186/s13750-018-0116-4
Walsh, J. C., Dicks, L. V. & Sutherland, W. J. 2014. The effect of scientific evidence on conservation practitioners’ management decisions. Conservation Biology, 29, 88–98. https://doi.org/10.1111/cobi.12370
Ware, M. & Mabe, M. 2015. An Overview of Scientific and Scholarly Journal Publishing. International Association of Scientific, Technical and Medical Publishers. www.stm-assoc.org/2015_02_20_STM_Report_2015.pdf (accessed 8 March 2018).
Westgate, M. J., Barton, P. S., Pierson, J. C., et al. 2015. Text analysis tools for identification of emerging topics and research gaps in conservation science. Conservation Biology, 29, 1606–1614. https://doi.org/10.1111/cobi.12605
Westgate, M. J. & Lindenmayer, D. B. 2017. The difficulties of systematic reviews. Conservation Biology, 31, 1002–1007. https://doi.org/10.1111/cobi.12890
Wilson, M. J., Ramey, T. L., Donaldson, M. R., et al. 2016. Communicating science: sending the right message to the right audience. FACETS, 1(1), 127–137. https://doi.org/10.1139/facets-2016-0015
Woodcock, P., Pullin, A. S. & Kaiser, M. J. 2014. Evaluating and improving the reliability of evidence in conservation and environmental science: a methodology. Biological Conservation, 176, 54–62. https://doi.org/10.1016/j.biocon.2014.04.020
Woodcock, P., O’Leary, B. C., Kaiser, M. J., et al. 2017. Your evidence or mine? Systematic evaluation of reviews of marine protected area effectiveness. Fish and Fisheries, 18, 668–681. https://doi.org/10.1111/faf.12196
Young, N., Corriveau, M., Nguyen, V. M., et al. 2016. How do potential knowledge users evaluate new claims about a contested resource? Problems of power and politics in knowledge exchange and mobilization. Journal of Environmental Management, 184, 380–388. https://doi.org/10.1016/j.jenvman.2016.10.006

Table 7.1 Overview of systematic evidence synthesis stages and the issues they address. For an explanation of bias see Collaboration for Environmental Evidence (2018) or Bayliss and Beyer (2015)

