Introduction
Historically, emerging and re-emerging infectious disease (EID) events have had devastating health impacts, particularly on human populations. Records suggest that influenza pandemics have occurred regularly for at least 500 years (an estimated 50 million people died during the 1918 pandemic), and at least 35 million people might currently be infected with HIV [Reference Morens1, Reference Fauci and Folkers2]. The majority of EIDs are caused by pathogens of animal origin [Reference Jones3-Reference Woolhouse and Gowtage-Sequeria5]. For example, recently emerged viral diseases of animal origin that have caused high case-fatality rates in humans include coronaviruses (SARS and MERS-CoV), influenza A (H5N1 and H7N9), henipaviruses (Nipah and Hendra), and Ebola haemorrhagic disease (World Health Organization Global Outbreak and Response Disease Outbreak News, www.who.int/csr/don/en/), and debate continues about the animal origins of antimicrobial resistance, for example meticillin-resistant Staphylococcus aureus [Reference Heller6, Reference Butaye7]. The emergence of antimicrobial resistance is considered to be one of the greatest current threats to global human health [8]. Many drivers of EID events have been proposed, including interconnected economic, social and environmental changes that allow microbial adaptation through mutation, geographical spread, and altered host range [Reference Morse9, Reference Louria10]. Risk factors for EIDs include climate change, ecological changes such as intensification of agriculture and deforestation, changes in human demographics such as population growth and migration, and globalization of trade and travel [Reference Morse9, Reference Jones11, Reference Black and Nunn12]. In a review of the occurrence of EIDs between 1940 and 2004, the number of reports was found to be increasing [Reference Jones3].
This trend is expected to continue, particularly with an increasing global human population (www.census.gov/population/international/data/idb) and increasing trade and travel. Therefore, preparedness is essential to mitigate the potential risk of high-impact EIDs [Reference Brownlie4].
EID preparedness encompasses a range of activities to enhance prevention and control of high-impact EID events, in which the benefits of preventing or reducing the impact of the event far outweigh the investment required in such activities. Traditionally, these activities have been focused around tactical (immediate and short-term) planning. Surveillance has been the mainstay of EID preparedness, both for early identification of spatial, temporal and demographic clusters of adverse health events indicative of an EID, and for prevention of re-emergence of known infectious diseases via the early application of control measures. Consequently, preparedness is currently targeted at known emerging and re-emerging infectious diseases and the responses required should they be detected. However, with increasing recognition of the greater occurrence of EID events and the broad range of risk factors associated with this phenomenon, the scope of preparedness has widened. Through foresight programmes, information is now collected by environmental scanning to detect and assess events and trends that are not specific to health events – but are related to these known risk factors – allowing anticipation of future needs for EID prevention and mitigation [Reference Brownlie4]. Therefore, directing activities for EID preparedness now encompasses strategic (long-term) as well as tactical planning.
This review describes tools currently available for detection, prioritization and investigation of EIDs and threats to human, animal and environmental health (One Health), and how these tools might be integrated to form a systematic approach for directing EID preparedness. A range of tools that have application in this field have been developed during the past century: mathematical models of infectious diseases were first created in the early 1900s (for example, the malaria models of Ross) to explore how infectious diseases persist in populations and how they might be controlled, and were extended to simulation models in the 1980s and 1990s with the increasing power of modern computers and availability of appropriate data; risk assessment methods were developed in the 1970s and 1980s in response to disease risks associated with hazards in the environment and food, and in the 1990s as import risk assessment in response to increasing travel and trade and the risk of global spread of infectious diseases; formal disease prioritization tools were first developed in the 1990s and have been extended during the past decade; and more recently, environmental and horizon scanning methods have been developed as broad risk mitigation tools. Some of these tools have been developed specifically for infectious diseases (disease modelling) or more broadly within other disciplines and fields of study (risk assessment and prioritization) and adapted for use in infectious disease research. Overarching these tools is disease surveillance, which has been undertaken in one form or another since the beginnings of recorded history. Given this history of tool development and adaptation, the integration of such tools to address emerging and re-emerging infectious diseases requires a framework so that the sum of efforts is effective.
Current approaches which rely on the application of just a single tool can be successful; but with increasingly complex, multifactorial health problems, typified by EIDs, such approaches can be inadequate. For example, the emergence of West Nile virus (WNV) in the United States in 1999 was unanticipated despite data on air traffic movements (‘globalization’), niche modelling and risk assessment [Reference Garmendia, Van Kruiningen and French13]. Since then, risk assessment modelling has been applied retrospectively to determine the likelihood of disease emergence elsewhere [Reference Brown14, Reference Hernández-Jover, Roche and Ward15]. Similarly, the emergence of pathogenic bluetongue and Schmallenberg viruses in northern Europe was unanticipated, even though the spread of bluetongue viruses in southern Europe as a result of climate change was a well-established phenomenon [Reference Baylis16, Reference Guis17]. Pandemic influenza A(H1N1) 2009, of swine origin, is thought to have emerged in Mexico, a country that was not predicted by various tools to be a ‘hotspot’ for zoonosis emergence [Reference Jones3]. In this review, we first discuss methods to collect and assess information for identification of EIDs and their drivers, including environmental and horizon scanning and surveillance. This is followed by a review of methods used to prioritize and investigate requirements for EID preparedness, including disease prioritization, risk assessment and simulation modelling. We then discuss current uses of these methods – individually and as integrated pathways – as well as methodological and external constraints that limit identification, prioritization and investigation of EIDs and future human and animal health threats.
Information collection and assessment
Information is a fundamental requirement to detect the presence of EIDs in a timely manner and anticipate future potential human, animal and environmental health risks. Characteristics of the information – its volume, scope (both geographical and disciplinary), disease specificity and degree of certainty – are related to how information is collected and assessed (using surveillance, or horizon and environmental scanning) and ultimately to the projected time-frame for the use of the information to direct activities for EID preparedness (Fig. 1). For example, surveillance applications tend to have a narrow geographical and temporal scope, whereas environmental scanning is broad.
Environmental scanning is the process of collecting and assessing information to identify events and trends in the global environment (for example, demographic, social, technological, behavioural and economic changes). This type of scanning is an input component to a group of activities known as ‘strategic foresight’, in which a vision of plausible future scenarios can be developed for the purpose of long-term (strategic) planning in organizations [Reference Horton18]. The basic steps of a foresight programme are shown in Figure 2. Foresight programmes, and hence environmental scanning, are not techniques that are specific to disease identification and investigation; they are used by many organizations to improve or secure their future positions in the global environment. However, during the last decade environmental scanning has been used to collect and investigate information about the drivers of infectious disease emergence with a view to enhancing long-term preparedness for EIDs through foresight programmes [Reference Brownlie4, Reference Willis19, Reference King and Thomas20]. The definition of ‘long-term’ is subjective; for example, a projection of 10–25 years was selected within the UK foresight programme for the detection and identification of infectious diseases (www.gov.uk/government/publications/infectious-diseases-preparing-for-the-future). Although ‘horizon scanning’ is a term often used synonymously with ‘environmental scanning’, in the context of identification and investigation of EIDs horizon scanning is also used to describe information collection that is targeted at health-specific sources. Therefore, for the purposes of this review we separate the two terms: environmental scanning refers to information collection about the global drivers of infectious disease emergence, and horizon scanning refers to information collection about adverse health events.
Systematic methods for environmental scanning to enhance EID preparedness are not well established, in part because environmental scanning is developed on an ad hoc basis to meet current and anticipated needs specific to the organization [Reference Slaughter21]. In addition, the drivers for emergence of infectious diseases include economic, social, environmental and pathogen-associated factors that interconnect to form a continuously evolving global milieu. Therefore, environmental scanning collects information with little or no disease specificity, very broad geographical and disciplinary scope, and usually a high degree of uncertainty (Fig. 1). Consequently, a wealth of information is available at any point in time, and uncertainty forces collection over a relatively long period in order to recognize topics of interest and detect trends; selecting the relevant information and assessing its quality is therefore challenging, and a potential limitation of environmental scanning [Reference Frishammar22].
Information sources that are scanned in this process include a wide range of literature (peer-reviewed and grey literature, government reports and web-based information, for example blogs, list-servers and other information networks) and informal data sources such as public opinion polls and media reports, as well as expert opinion elicitation and industry workshops. Environmental scanning can be organized, or supplemented, by commercial services such as ‘Shaping Tomorrow’ (www.shapingtomorrow.com), which also provide database systems for storage of scanning hits (relevant information). Tangible outputs include information about disease drivers and emerging global issues, as well as intangible benefits through increased collaboration within and between disciplines and organizations. Information collected from environmental scanning is used in foresight activities such as scenario planning, causal layered analysis, and backcasting [Reference Horton18, Reference King and Thomas20, Reference Voros23]. These activities aim to enhance strategy planning by developing future scenarios that are plausible given current information, then assessing the requirements to achieve those scenarios or mitigate the chances of reaching them. Strategy plans can include policy changes required today or research to develop systems and technology to meet future requirements. Established foresight programmes include the Australian Department of Agriculture's strategic foresight programme (http://www.daff.gov.au/animal-plant-health/animal/strategy), Foresight for Canadian Animal Health [Reference Willis19] and the UK foresight project for detection and identification of infectious diseases (www.gov.uk/government/publications/infectious-diseases-preparing-for-the-future).
In a report published in 2006, the UK foresight programme identified future scenarios in which the threat of emerging infectious diseases in the UK, China and sub-Saharan Africa increased over the next 10–25 years; drivers that were consistently considered to be important were increased travel, migration and trade, increased exposure to exotic plants and animals, and adulterated or incorrectly used drugs leading to drug-resistant organisms [Reference Brownlie4]. Climate change was expected to influence disease distribution in both the UK and Africa. Factors that were particularly important in Africa were poverty, conflicts, systems of governance, urbanization, intensification of agriculture, and lack of a disease prevention and control capacity. Drivers of future risk of disease in China also included increased amounts of animal waste, changing sexual lifestyles, changing public attitude to risk perception, loss of genetic diversity in agriculture, and increased levels of wealth and education. Improved cross-disciplinary collaboration and threat detection, identification and monitoring systems were required to meet the challenges posed by these future threats. Since this report was published, activities for EID preparedness have included research to develop surveillance systems and improve the diagnosis of infectious diseases, an example of which is the development of a bio-security chip that can identify 1132 different viruses. This biochip was used in the diagnosis of equine encephalosis virus in Israel, a virus previously unreported north of Southern Africa [Reference Mildenberg24].
In the context of detection and identification of EIDs, horizon scanning is used to describe surveillance activities that collect and assess a broad range of data associated with adverse health events to complement traditional disease surveillance for early warning of EIDs [Reference Walsh and Morgan25–Reference Morgan27]. Information collected for this type of horizon scanning is not necessarily disease-specific, but still has broad geographical scope and comes from a large range of sources; it can be used in short-term activities such as improving time to disease outbreak detection and identification, as well as enhancing medium- to long-term strategy planning as an input to foresight programmes (Fig. 1).
Horizon scanning has been facilitated by advances in technology and the development of internet-based disease outbreak reporting systems such as the International Society for Infectious Diseases' Program for Monitoring Emerging Diseases (ProMED-mail, www.promedmail.org), the Global Public Health Intelligence Network (GPHIN, Centre for Emergency Preparedness and Response, Canada), HealthMap (www.healthmap.org), BioCaster (www.biocaster.nii.ac.jp/_dev/), EMPRES-i (www.empres-i.fao.org/eipws3g/), and Aquatic Animal Health (www.aquatic.animalhealth.org/home). Other sources of information for horizon scanning include reports of disease outbreaks from the World Health Organization (WHO) and the World Organisation for Animal Health (OIE), peer-reviewed and grey literature, media reports and surveillance reports (such as laboratory data). Formal horizon scanning programmes include the Global Disease Detection (GDD) Program (Centers for Disease Control and Prevention; www.cdc.gov/globalhealth/gdder/gdd/default.htm), the Threat Tracking Tool used by the European Centre for Disease Prevention and Control (www.ecdc.europa.eu) and the risk analysis framework used by the Human Animal Infections and Risk Surveillance (HAIRS) group in the UK [Reference Walsh and Morgan25–Reference Morgan27]. The GDD Operations Center is a centralized electronic reporting system. Data from horizon scanning is collected and aggregated with information from GDD partners worldwide, and analysed to identify requirements to provide operational and financial support to strengthen global public health surveillance and response capacity via the Global Outbreak Alert and Response Network (GOARN) and the WHO [Reference Hitchcock28].
By contrast, the HAIRS group is an inter-departmental and cross-disciplinary group of people that meet on a monthly basis to assess the zoonotic and EID risk to the UK population of hazards identified through horizon scanning by organizations such as the Department for the Environment, Food and Rural Affairs (Defra) and Public Health England. Using qualitative algorithms to estimate risk, potential hazards are classified according to the level of response required. This systematic process ensures that hazards are consistently assessed so that actions are justified and therefore defensible. Importantly, knowledge gaps are identified and lack of evidence of risk is differentiated from evidence of no risk. Recent reports from the HAIRS group include a qualitative assessment of the risk presented to human health by cats infected with Mycobacterium bovis, and an assessment of the zoonotic potential of Brucella species in marine mammals (www.hpa.org.uk/webw/HPAweb&HPAwebStandard/HPAweb_C/1317138638591).
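As an illustration of how a qualitative algorithm can make such assessments consistent and auditable, the sketch below maps answers to a few screening questions onto response levels while explicitly recording knowledge gaps, so that lack of evidence of risk is not conflated with evidence of no risk. The questions, categories and mapping are hypothetical, invented for this sketch; they are not the actual HAIRS algorithm.

```python
# Illustrative sketch of a qualitative risk-classification step of the
# kind used by groups such as HAIRS. The questions, categories and
# mapping below are hypothetical examples, not the actual HAIRS algorithm.

def classify_hazard(present_in_country, human_infection_reported,
                    evidence_quality):
    """Map qualitative answers (True/False, or None for unknown) to a
    response level, recording knowledge gaps so that lack of evidence of
    risk is not mistaken for evidence of no risk."""
    gaps = [question for question, answer in
            [("presence in country", present_in_country),
             ("human infection", human_infection_reported)]
            if answer is None]
    if present_in_country is False:
        level = "no further action"
    elif human_infection_reported:
        level = "detailed risk assessment"
    elif present_in_country:
        level = "monitor"
    else:  # presence unknown
        level = "gather information"
    if evidence_quality == "poor":
        gaps.append("evidence quality")
    return level, gaps

# Example: pathogen present in-country, zoonotic potential unknown,
# supporting evidence of poor quality.
level, gaps = classify_hazard(present_in_country=True,
                              human_infection_reported=None,
                              evidence_quality="poor")
print(level, gaps)
```

Because every hazard passes through the same questions, the output (and the recorded gaps) is repeatable and can be audited later, which is the property that makes such classifications defensible.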
Surveillance methods for the systematic collection of information about specific diseases or syndromes – for example, data from laboratory submissions, surveys and health records – are well established and described in both human and animal health contexts [Reference Dufour, Hendrikx, Dufour and Hendrikx29–Reference German31]. Surveillance data can be collected locally or regionally – The European Surveillance System (TESSy; http://www.ecdc.europa.eu/en/activities/surveillance/TESSy/Pages/TESSy.aspx) is an example of a regional system in which data from about 50 communicable diseases is collected from multiple surveillance sources. This type of data is likely to have a relatively high level of certainty and be applicable to disease-specific control measures in the short term, such as outbreak response and tactical planning (Fig. 1). In the context of EID detection it is recognized that traditional surveillance based on collection and analysis of disease- or syndrome-specific data has limitations, due to the logistics and funding required to systematically collect and report this type of data in a timely manner or over sufficient time periods to detect trends. This is even more difficult in countries constrained by limited public or animal health systems and transport infrastructure, or political and cultural constraints that limit reporting [Reference Halliday32]. While some systems have been developed at very low cost and with wide coverage, routine analysis remains problematic and a barrier to application for EID detection [Reference Ward and Kelman33]. There is also a spatial mismatch between surveillance systems and the areas in which infectious diseases emerge. Jones et al. [Reference Jones3] suggest that the risk of an EID event is greater in South and East Asia, sub-Saharan Africa and South and Central America, for all pathogen types except zoonotic pathogens from wildlife and drug-resistant pathogens (which are as likely to occur in Europe and some areas of North America) [Reference Jones11]. This is supported by a more recent study that found that over 50% of WHO-confirmed infectious disease outbreaks between 1996 and 2009 occurred in Africa [Reference Chan34]. However, a review of surveillance systems for emerging zoonoses (212 peer-reviewed articles describing 221 surveillance or monitoring systems for emerging zoonoses) found that nearly 70% of these systems were based in Europe and North America [Reference Vrbova35]. Although most EIDs are of animal origin, more than 50% of systems evaluated data solely from humans and 70% targeted known pathogens. Moreover, despite the existence of guidelines for evaluation of surveillance systems [Reference German31, Reference Buehler36], only 8% of the articles reported evaluation of the systems; this is a critical requirement to ensure accuracy of reports.
Syndromic surveillance has been facilitated by advances in technology, and can supplement traditional surveillance data to reduce time to EID detection and identification. These methods include ‘infoveillance’ – collection of data via web-based sources and crowdsourcing – and mobile phone reporting (reviewed by Walker [Reference Walker37]). Current infoveillance examples include Google Flu Trends, which uses aggregated Google search term data as an indicator of influenza-like illness (www.google.org/flutrends/), FluTracking.net, which invites people to complete a weekly online survey for influenza surveillance (www.flutracking.net/Info), and Flu Detector, which infers the incidence of influenza-like illness for England and Wales from Twitter feeds (geopatterns.enm.bris.ac.uk/epidemics/). Examples of mobile phone reporting systems include syndromic surveillance for adverse health events in humans in Papua New Guinea, and veterinary syndromic surveillance in Sri Lanka [Reference Robertson38, Reference Rosewell39]. The authors reported similar advantages and limitations. Mobile phone-based surveillance appeared to be acceptable and feasible in low-resource settings, and reporting time for events was reduced in some instances compared to existing traditional surveillance systems. However, validity was difficult to assess and sustainability was potentially limited through technical, geographical, political and social barriers. Recently, a mobile phone reporting system has been developed in Indonesia, primarily as a means to assist stakeholders involved in animal health in the field (farmers, veterinarians, veterinary technicians), and not as a tool to gain information for regional or national disease surveillance [Reference Syibli40]. Initial reports indicate that this system is likely to be a comprehensive and sustained animal health information system (www.wiki.isikhnas.com).
The ease of implementation of the system indicates that designing animal and human health information systems for the benefit of those who submit data might be an effective way to design syndromic surveillance systems to provide early warning of adverse health events.
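Screening syndromic data streams like these for early warning typically relies on simple aberration-detection rules. The sketch below implements a simplified moving-baseline rule in the spirit of algorithms such as the CDC EARS C-family: a day's count is flagged when it exceeds the mean of the preceding baseline window by more than three standard deviations. The counts, window length and threshold guard are illustrative assumptions, not taken from a specific deployed system.

```python
import statistics

def flag_aberrations(counts, baseline=7, z=3.0):
    """Flag days whose count exceeds the mean of the preceding `baseline`
    days by more than `z` standard deviations (a simplified moving-baseline
    rule in the spirit of the EARS C-family of algorithms)."""
    flags = []
    for t in range(baseline, len(counts)):
        window = counts[t - baseline:t]
        mean = statistics.fmean(window)
        sd = statistics.pstdev(window)
        # Guard against a flat baseline collapsing the threshold to the mean.
        threshold = mean + z * max(sd, 1.0)
        if counts[t] > threshold:
            flags.append(t)
    return flags

# Invented daily syndrome counts: a stable baseline, then a spike on day 10.
daily_counts = [4, 5, 3, 6, 4, 5, 4, 5, 4, 6, 22, 25, 5]
print(flag_aberrations(daily_counts))
```

Note that once the spike enters the baseline window it inflates the mean and standard deviation, so subsequent high counts are no longer flagged; production systems handle this with outlier-resistant baselines.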
The divisions between environmental scanning, horizon scanning and surveillance are not distinct – together they form a continuum of information collection methods spanning a range of information types, and information from one area inherently supplements and influences collection of the others. These methods have been developed in response to the need to respond rapidly to emerging infectious diseases as well as to understand and anticipate drivers of emergence so as to mitigate the impact of future EID events. The time to detection and public communication of EID events has improved within the last decade [Reference Chan34], but it is unknown whether this is the result of improvements in scanning and surveillance, or of the requirements of the International Health Regulations that came into force in 2007, under which member countries must immediately notify the WHO of any event that might constitute a public health emergency of international concern [41]. Evaluation of scanning and surveillance systems is essential to develop accuracy and assess benefit. A recent study suggested that information from Google Flu Trends could be unreliable as surveillance for influenza pandemics [Reference Olson42]. Although infoveillance is currently considered supplementary to traditional surveillance, this study highlights the need to develop guidelines and methods to evaluate electronic information collecting and reporting systems as well as traditional surveillance systems. In the face of an increasing rate of emergence of infectious diseases and scarce resources for information collection and assessment, it is likely that reliance on electronic reporting systems – either formal or informal – will increase across both scanning and surveillance.
Collection and assessment of information is just the first stage in preparation for EIDs. As threats emerge or EID events unfold, prioritization is required to allocate resources, and further investigation using risk analysis and simulation modelling is needed to design the most appropriate prevention and control strategies. The following sections discuss these tools and their links to EID preparedness.
Prioritization
Understanding the importance to those affected of the range of potential impacts of emerging threats and EIDs is essential to develop tactical and strategic plans appropriate to the social, cultural, economic and environmental context in which prevention and control activities take place. Resources (capital items and consumables, and the time and expertise needed to deliver effective prevention and control) are also limited, and this is compounded by the increasing occurrence of EIDs, which compete for resource allocation. Therefore, following detection and identification of emerging threats and EIDs, prioritization is required to direct resources for prevention and control, taking this complex background – against which the success of prevention and control is judged – into consideration. Defining the highest-priority emerging threats and EIDs is problematic. Diseases cause a variety of tangible and intangible economic, social and environmental impacts, and it is recognized that the perception of the importance of these impacts varies between stakeholders [Reference Wilson, Ward and Garner43]. For example, it has been suggested that the general public's perception of EIDs is disproportionately large compared to their actual impact, and that the opportunity cost associated with focusing on EIDs exceeds the benefit achieved in their control [Reference Butler44]. Focusing on tangible economic impacts and neglecting the many intangible social impacts of disease might explain this mismatch between priorities and impacts. Therefore, prioritization of EIDs and human, animal and environmental health threats must account for both the scale of disease impacts and the importance of those impacts to decision-makers. Further, the prioritization method must be rapid, transparent and give consistent and repeatable results, so that resource allocation is timely and justified.
The main purpose of disease prioritization in the context of EIDs has been to direct surveillance. These studies have prioritized either EIDs alone [Reference Cox, Sanchez and Revie45, Reference Havelaar46], or together with zoonotic [Reference Ng and Sargeant47–Reference McKenzie, Simpson and Langstaff49] or communicable diseases in general [Reference Economopoulou50–Reference Carter53]. More recently, prioritization has been used as a tool to direct resources for a broader range of activities to improve EID preparedness, including assessment for immediate response and research (such as risk assessment and disease spread modelling), as well as surveillance [Reference Humblet54–Reference Brookes56]. Most prioritization studies have been undertaken in North America, Europe and Australasia. Until recently, disease prioritization used methods developed on an ad hoc basis. However, driven by requirements for transparency and repeatability [Reference Giesecke57], the methodology for prioritization has evolved to follow decision-science methodology using multi-criteria decision analysis (MCDA).
The steps for disease prioritization using MCDA are shown in Figure 3, and are briefly described as follows. Once the purpose of the prioritization is established, the relevant stakeholders and decision-makers are defined and the diseases to be prioritized are selected. A set of criteria is chosen that describes the disease impacts on which the prioritization decision is based, and objective measurements for each disease are collected according to the criteria. Stakeholder or decision-maker preferences are evaluated to weight the criteria to reflect their importance to the stakeholders. Separation of objective disease measurements from the subjective criterion weights is a key point in ensuring transparency of the prioritization process, because it removes bias due to decision-makers' opinions and level of knowledge about named diseases. It is important that preferences are evaluated using mechanisms that force stakeholders to make trade-offs between criteria presented within the scale and context of the prioritization. This ensures that criterion weights validly reflect opinion about the importance of disease impacts; Keeney [Reference Keeney58] and Steele et al. [Reference Steele59] provide further information regarding this, and Dodgson et al. [Reference Dodgson60] describe different methods for evaluating stakeholder preferences. Aggregation of disease measurements with criterion weights produces an overall score for each disease, and diseases can be ranked according to median or mean score. Prioritization is an iterative process; as new information becomes available (regarding either new threats or changes in stakeholders' values), or as the understanding of impacts is refined through simulation modelling, prioritization should be repeated to ensure that resources are justifiably allocated.
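The aggregation and ranking steps above can be sketched as a simple weighted sum. The diseases, criteria, measurements and weights below are all hypothetical, chosen only to illustrate how objective measurements are kept separate from, and then combined with, subjective criterion weights.

```python
# Illustrative sketch of the aggregation step in MCDA-based disease
# prioritization. The diseases, criteria and all numbers below are
# hypothetical, for demonstration only.

diseases = ["Disease A", "Disease B", "Disease C"]
criteria = ["case fatality", "spread potential", "economic impact"]

# Objective measurements per disease, normalized to a common 0-1 scale
# (e.g. after min-max scaling of the raw data for each criterion).
measurements = {
    "Disease A": [0.9, 0.2, 0.4],
    "Disease B": [0.3, 0.8, 0.6],
    "Disease C": [0.5, 0.5, 0.9],
}

# Criterion weights elicited from stakeholders; they sum to 1 so that
# subjective importance stays separate from the objective measurements.
weights = [0.5, 0.3, 0.2]
assert abs(sum(weights) - 1.0) < 1e-9

# Weighted-sum aggregation: one overall score per disease, then rank.
scores = {d: sum(m * w for m, w in zip(measurements[d], weights))
          for d in diseases}
ranking = sorted(diseases, key=scores.get, reverse=True)
print(scores)
print("Priority order:", ranking)
```

Because the measurements and the weights are stored separately, the same measurements can be re-scored under different stakeholder groups' weights, which is what makes the process transparent and repeatable.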
Two distinct methods of evaluating stakeholder preferences have developed within MCDA for disease prioritization: traditional MCDA, in which criteria are weighted directly, and MCDA in which criteria are weighted indirectly.
An example of prioritization using traditional MCDA is the decision-support tool known as e-THiR, developed for Defra's Veterinary Risk Group in the UK [Reference Del Rio Vilas55]. This tool prioritizes emerging animal health threats identified by horizon scanning or surveillance, and uses criteria that reflect public opinion, potential impacts of the threat, and capability for response as part of a decision support framework for the management of emerging and existing animal health threats. Del Rio Vilas et al. [Reference Del Rio Vilas55] describe the use of this tool with real case examples. The benefits of e-THiR – and MCDA in general – include the ability to systematically and consistently evaluate threats weighted according to the values of decision-makers. Therefore, the process provides auditable output that can be used as a decision aid to justifiably direct tactical and strategic planning. A particular advantage of e-THiR – and other traditional MCDA methods for disease prioritization – is that threats can be rapidly assessed, both at initial implementation and during on-going use of the tool. A general limitation of traditional MCDA is that evaluation of the opinion of large groups of stakeholders is difficult to implement, making these methods more suitable for use with small groups of experts. Del Rio Vilas et al. also noted that limitations of e-THiR included potential lack of comprehensiveness of criteria (a trade-off for simplicity, to increase acceptability of the tool within the organization), subjectivity of criterion measurements due to scarce or poor quality data, and over-estimation of priority due to biased reporting of some threats.
However, these limitations are not specific to this tool; balancing the complexity required to achieve useful information outputs against the simplicity needed to ensure that the process does not become intractable, as well as dealing with insufficient or uncertain data and biases in data availability, are challenges common to all forms of disease evaluation.
Disease prioritization using indirect weighting follows the same steps as traditional MCDA (Fig. 3). However, instead of asking stakeholders to directly evaluate criteria, stakeholders are asked to evaluate realistic disease scenarios. Mathematical techniques are then used to infer weights for the criteria; techniques for this include probabilistic inversion and conjoint analysis, both recently used in prioritization of EIDs in Canada, The Netherlands and Australia [Reference Havelaar46–48, Reference Brookes56]. Although these techniques are complex and slower to implement than traditional MCDA, disease prioritization using indirect weighting allows web-based survey administration in which non-technical terminology can be used to describe scenarios. This makes the prioritization process accessible to a wider range of stakeholders, including people who are not disease experts – such as the general public and farmers [Reference Ng and Sargeant47, Reference Brookes61]. Once implemented, prioritization of newly detected threats and EIDs is as rapid in MCDA frameworks that use indirect weighting as in those that use direct weighting methods such as e-THiR.
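The inference step in indirect weighting can be illustrated with a least-squares sketch: stakeholders rate hypothetical disease scenarios, and criterion weights consistent with those ratings are recovered by regression. This is a simplified stand-in for probabilistic inversion or conjoint analysis; the scenario profiles and ratings are invented, and the ratings are constructed here to be exactly consistent with one set of underlying weights.

```python
# Sketch of the inference step in indirect weighting: stakeholders rate
# hypothetical scenarios and criterion weights are recovered by least
# squares. A simplified stand-in for probabilistic inversion or conjoint
# analysis; all scenario profiles and ratings are invented.

# Each scenario is described to stakeholders in non-technical terms;
# here it is coded on three criteria (0-1 scale).
scenarios = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [0.2, 0.3, 0.9],
]
# Mean stakeholder rating of each scenario's overall seriousness (0-1).
ratings = [0.70, 0.40, 0.20, 0.55, 0.44]

def least_squares(X, y):
    """Solve the normal equations (X'X)w = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # forward elimination
        pivot = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for row in range(col + 1, k):
            f = A[row][col] / A[col][col]
            A[row] = [a - f * c for a, c in zip(A[row], A[col])]
            b[row] -= f * b[col]
    w = [0.0] * k                              # back substitution
    for row in range(k - 1, -1, -1):
        w[row] = (b[row] - sum(A[row][c] * w[c]
                               for c in range(row + 1, k))) / A[row][row]
    return w

raw = least_squares(scenarios, ratings)
weights = [r / sum(raw) for r in raw]          # normalize to sum to 1
print("Inferred weights:", [round(w, 3) for w in weights])
```

Real studies use many more scenarios and respondents, and probabilistic inversion additionally yields a distribution over weights rather than a point estimate; the sketch only shows why rating scenarios, rather than criteria, still pins down the criterion weights.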
The greatest value in using MCDA for disease prioritization comes from its ability to quantify the importance of disease impacts. In particular, ‘public perception’ – the value that the public places on disease impacts – is recognized as an important driver of policy in animal and public health [Reference Ng and Sargeant62]. However, what constitutes ‘public perception’ is poorly understood and has previously been considered intangible [Reference Döring and Nerlich63]. MCDA, especially using indirect weighting of criteria, enables quantification of public perception.
Although this section has discussed the use of MCDA solely for disease prioritization, methods from decision science (such as MCDA) are used extensively as decision aids in other fields including environmental science and homeland security [Reference Bragge, Ehrgott, Naujoks, Stewart and Wallenius64, Reference Linkov65], and currently also have limited use in directing resource allocation in health settings [Reference Del Rio Vilas66, Reference Mintiens and Vose67]. These diverse applications of decision science demonstrate that there is potential for further extension of these methods to enhance the development of tactical and strategic plans for emerging risks and EIDs that are acceptable according to current social, cultural, economic and environmental values.
Risk assessment
Risk analysis methods have been used in animal and public health in recent decades to investigate how likely an undesirable event is, the broad-scale potential consequences if it occurs, and the mitigation strategies that could reduce its occurrence. These methods provide objective, transparent and repeatable assessments. As MacDiarmid & Pharo [Reference MacDiarmid and Pharo68] described, risk analysis methods are used to help decision-makers answer the questions: ‘What can go wrong?’, ‘How likely is it to go wrong?’, ‘What would be the consequences of it going wrong?’, and ‘What can be done to reduce the likelihood or the consequences of it going wrong?’. However, accurate assessments of the potential risk associated with specific health events or diseases usually require a substantial amount of high-quality data. Often these essential data are lacking, in which case justified assumptions are needed. Threats and EIDs (‘What can go wrong?’) need to be identified initially as part of the risk analysis process and, depending on the aim of the risk analysis, identification of these threats and EIDs will follow different methodologies. The decision-maker will generally have a well-defined objective which will drive identification of these threats and EIDs [Reference Vose69].
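The likelihood and consequence questions are often combined qualitatively in a risk matrix that maps likelihood and consequence categories to an overall risk rating. A minimal sketch follows; the category labels and matrix entries are purely illustrative, not those of any particular agency's framework.

```python
# Illustrative qualitative risk matrix combining likelihood and consequence
# categories into an overall rating. Labels and matrix entries are
# hypothetical, not drawn from any published risk analysis standard.

LIKELIHOOD = ["negligible", "low", "medium", "high"]
CONSEQUENCE = ["minor", "moderate", "severe"]

# Rows follow LIKELIHOOD, columns follow CONSEQUENCE.
MATRIX = [
    ["negligible", "negligible", "low"],      # negligible likelihood
    ["negligible", "low",        "medium"],   # low likelihood
    ["low",        "medium",     "high"],     # medium likelihood
    ["medium",     "high",       "extreme"],  # high likelihood
]

def risk_rating(likelihood, consequence):
    """Look up the overall qualitative risk rating for a hazard."""
    return MATRIX[LIKELIHOOD.index(likelihood)][CONSEQUENCE.index(consequence)]
```

The value of making the matrix explicit is repeatability: two assessors who agree on the likelihood and consequence categories will always reach the same overall rating.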
One of the main applications of risk analysis in animal health is the assessment of the potential risks linked with the international trade of animals or animal products. Since the creation of the World Trade Organization (WTO) in 1995, trade in live animals and food of animal origin between different countries has substantially increased, which delivers benefits to both importing and exporting countries. The Agreement on the Application of Sanitary and Phytosanitary Measures (SPS Agreement), which came into force with the creation of the WTO, sets out the legal framework for all international trade to protect human, animal and plant life or health, while guaranteeing that these measures are not more restrictive than those applied at a national level. The agreement establishes that measures applied must be based on international standards and recommendations; however, when these do not exist, a science-based risk assessment must be conducted to set the trade measures. Risk analysis has since facilitated international trade, as well as protecting human and animal health in importing countries, through the assessment of the risk posed by potential hazards associated with a specific commodity and the measures that could be applied to reduce this risk to an acceptable level [70]. The World Organisation for Animal Health (OIE) sets the standards for risk analysis in relation to animal health [71]. Import risk analyses, which are conducted by government agencies, are an important tool for biosecurity protection. The initial phase of an import risk analysis is the hazard identification process, during which the pathogenic agents that could be present in the imported commodity and are exotic to the importing country are identified for further investigation during the subsequent risk assessment.
Some examples of recent import risk analyses conducted by the Australian Government Department of Agriculture – Biosecurity Risk Analysis are the import risk analyses (IRAs) for freshwater ornamental finfish (with respect to gourami iridovirus and related viruses) and for prawns and prawn products. These IRAs are conducted to classify potential quarantine risks and develop policies to manage them (http://www.daff.gov.au/ba/ira/final-animal).
Although increased international trade has proven to benefit the economies of trading partners, a consequence of this increased trade is an expanded potential for the spread of pathogens affecting animals and humans between countries. According to Brown [Reference Brown72], in the last two decades at least one new emerging disease has been identified every year. An example was the introduction and establishment of West Nile virus (WNV) into the United States in 1999 and its subsequent spread across North America, Central and South America and the Caribbean, causing severe neurological disease and many fatalities in humans, horses and birds [Reference Murray, Mertens and Despres73]. A WNV-infected mosquito on an intercontinental plane landing at New York airport was considered to be the most likely entry route into the United States [Reference Wilkins and Del Piero74, Reference Pollock75]. Risk assessment can also be used to understand why infectious diseases emerge. For example, since the introduction of WNV into the Western Hemisphere, risk assessments have been used to investigate the potential introduction of WNV into several locations, such as the Galapagos [Reference Kilpatrick76], Hawaii [Reference Kilpatrick77], Barbados [Reference Douglas78] and Australia [Reference Hernández-Jover, Roche and Ward15]. The main aim of these assessments was to estimate the likelihood of introduction of the virus through different pathways, thus providing guidance for directing resources towards the prevention of this introduction. Hernández-Jover et al. [Reference Hernández-Jover, Roche and Ward15] also investigated the potential spatio-temporal spread of WNV to susceptible species and the impact of the resulting outbreak on human and animal health. This study developed a generic framework that could be applied to assess the potential introduction of other mosquito-borne diseases via international aircraft movements.
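Release assessments of this kind typically estimate a probability of introduction for each pathway and then combine them into an overall likelihood. A minimal sketch is given below; the pathway names and annual probabilities are hypothetical (not taken from the cited WNV assessments), and the pathways are assumed to be independent.

```python
# Sketch of combining entry pathways in a release assessment. Pathway names
# and annual probabilities are hypothetical and the pathways are assumed
# independent; values are not taken from the cited WNV assessments.

def prob_any_introduction(pathway_probs):
    """P(at least one introduction per year) over independent pathways."""
    p_none = 1.0
    for p in pathway_probs.values():
        p_none *= 1.0 - p          # probability this pathway delivers nothing
    return 1.0 - p_none

pathways = {
    "infected_mosquito_on_aircraft": 0.02,
    "viraemic_migratory_bird": 0.01,
    "imported_infected_animal": 0.005,
}

annual_risk = prob_any_introduction(pathways)   # about 0.035 per year
```

Decomposing the overall risk by pathway in this way is what lets an assessment direct prevention resources to the pathway contributing most to the total.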
Risk analysis methods are also being applied to investigate situations involving wildlife disease. The International Union for Conservation of Nature and the OIE have recently published specific guidelines for wildlife disease risk analysis (DRA) [79]. These guidelines aim to provide decision-makers (such as wildlife managers, government and industry representatives) with information on how to incorporate the wildlife DRA process into their day-to-day activities, supporting the identification of risk mitigation strategies. Overall, the DRA process provides a framework for investigating how to reduce the potential disease risks associated with wildlife that affect species conservation, animal and human health, agriculture and ecosystems.
Another recent example of using a risk analysis framework to investigate the emergence of EIDs is that developed by Ward & Hernández-Jover [Reference Ward and Hernández-Jover80]. This framework was used to understand the emergence of rabies in the eastern islands of Indonesia, so that scarce resources could be targeted to surveillance activities and the sensitivity of surveillance systems increased. By integrating information on the historical spread of rabies, anthropological studies, and the opinions of local animal health experts, eight critical parameters defining the potential disease spread pathways were identified. Focusing on these key components can allow the identification of the areas (islands) most at risk of an emerging rabies event, a form of spatial risk mapping.
Risk assessment supports EID preparedness by providing tools to accurately assess the likelihood of introduction and spread of previously identified EIDs, to guide resource allocation, and to identify mitigation strategies.
Disease simulation modelling
Disease simulation models aim to represent reality in a simplified form so that the behaviour of a disease system can be better understood. Although based on mathematical models, disease simulation models tend to focus more on estimating the impact of a disease on a population and therefore have a natural application when combined with risk assessments to define infectious disease impact at a finer and more dynamic scale [Reference Hagerman81]. Such models – if developed correctly and appropriately validated – can then be used to guide policy development and decision-making with a view to reducing the impact of a disease event, such as the spread of an emerging infectious disease [often, in veterinary medicine, exotic and transboundary diseases such as foot-and-mouth disease (FMD), highly pathogenic avian influenza and classical swine fever (CSF)]. Key requirements for developing such simulation models are a description of the population at risk – the structure of herds and flocks, their geographical distribution and how they come into contact through networks and spatially – and the factors that influence disease transmission events. In livestock systems, complex models have been developed for FMD [Reference Ward82] and CSF [Reference Cowled83]. Impact is often measured as the number of herds infected, animals culled, vaccine used, and time to control an outbreak. Epidemiological simulation models have been linked with economic models to measure the impact of disease outbreaks and associated control efforts, including vaccination and traceability systems [Reference Hagerman81]. Such models have also incorporated capacity and resource constraints to allow realistic evaluation of control strategies [Reference Hagerman81, Reference Ward82]. An advantage of simulation models when used to explore EIDs is the inclusion of a broad range of drivers of disease spread.
These can include population immunity, population turnover, and environmental, economic and behavioural drivers. The effect of modifying one or more of these drivers on disease emergence can be investigated.
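The transmission dynamics at the heart of such simulators can be illustrated with a simple stochastic (chain-binomial) SIR model. The sketch below assumes homogeneous mixing in a single closed population; the parameter values are illustrative only and the model is far simpler than the livestock models cited above.

```python
# A minimal stochastic (chain-binomial) SIR model, illustrating the kind of
# state-transition logic inside disease spread simulators. Parameters and the
# homogeneous-mixing assumption are illustrative only.
import random

def simulate_sir(n=1000, beta=0.3, gamma=0.1, initial_infected=5,
                 max_days=365, seed=42):
    rng = random.Random(seed)
    s, i, r = n - initial_infected, initial_infected, 0
    peak = i
    for _ in range(max_days):
        # Each susceptible escapes infection with probability (1 - beta/n)
        # per infectious individual per day; each infected individual
        # recovers with probability gamma per day.
        p_inf = 1.0 - (1.0 - beta / n) ** i
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
        if i == 0:          # outbreak has ended
            break
    return {"S": s, "I": i, "R": r, "peak_I": peak}

outcome = simulate_sir()
```

Running the model repeatedly while varying `beta`, `gamma` or the population structure is the basic mechanism by which the effect of modifying a driver on outbreak size and peak can be explored.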
Traditionally, disease spread models have been developed for emerging (transboundary and epidemic) diseases that have a priori (even if qualitatively) been determined to have high impact. In these situations, the focus is on determining the most effective (generally the most cost-effective) approach to minimizing impact. Thus, within a framework for emerging infectious diseases, disease spread modelling is generally seen as the ‘final’ step in a linear process. However, disease modelling – if approached as a generic tool – can be used to investigate which scenarios might have the greatest impact. For example, a scenario in which an EID affects only one animal species with high morbidity/mortality versus another in which a similar disease affects many species but with lower morbidity/mortality could be explored with respect to impact, and thus guide the process of scanning, prioritization and risk assessment. If certain scenarios are predicted to have substantially larger impacts, then these should be the focus of future scanning/prioritization/risk assessment efforts. Furthermore, there has been little focus on modelling the effect of the presence of several pathogens within a population on the emergence of one of them as a disease event. Integrating disease modelling tools with the other components of emerging infectious disease surveillance and response is a critical need in order to effectively manage these risks.
Discussion
Developing appropriate tactical and strategic plans for the prevention and control of emerging threats and EIDs requires balancing control and prevention measures with the potential and actual risks of an EID against a complex social, cultural and economic background in a global environment. This balance can be difficult to achieve. For example, in the absence of an EID outbreak, mitigation measures can be criticized as too stringent if they limit trade and the travel of people who perceive the risk or impact on them to be low or negligible. By contrast, in the event of an EID outbreak, measures can quickly be criticized as inadequate, particularly by those who are directly affected. For example, the UK government was widely criticized for its handling of both the bovine spongiform encephalopathy (BSE) epidemic in the 1980s and 1990s, and the 2001 FMD outbreak. Responses to both were considered inadequate, and subsequent inquiries found that the lack of a systematic, science-based mechanism for assessing and effectively managing risk resulted in insufficient mitigation measures for BSE, and that inadequate preparation, caused by the prioritization of resources towards mitigating the impacts of BSE, was a factor in the under-resourced response to FMD [Reference Anderson84, Reference Phillips, Bridgeman and Ferguson-Smith85]. There can also be unexpected events following the detection of emerging risks that complicate effective control. Panic due to the perceived risk of a suspected pneumonic plague outbreak in India in 1994 caused mass migration of people, potentially spreading the disease and hampering control efforts [Reference Deodhar, Yemul and Banerjee86]. Moreover, mitigation strategies can prove simplistic and inadequate once instigated.
For example, complex social and cultural constraints are major barriers to control of the current outbreak of Ebola virus disease in West Africa, requiring increased collaboration between anthropologists, politicians and health professionals and their organizations [Reference Check87, 88]. Mechanisms must be in place to rapidly update tactical plans in the event of unexpected challenges. In addition to achieving an appropriate level of prevention and control and meeting unexpected challenges, tactical and strategic plans must be continuously updated as the global risk landscape changes and new information arises.
The examples above illustrate some of the difficulties encountered when implementing prevention and control measures for emerging threats and EIDs. As already discussed, prioritization, risk assessment and disease modelling can be used individually to assist tactical and strategic planning following information collection and assessment. However, when used together, these tools can provide a comprehensive understanding of emerging threats and EIDs – not only their potential impact and risk, but also the importance of the emerging threat or EID according to the current values of decision-makers and relative to the myriad health concerns that compete for limited resources. Figure 4 illustrates how prioritization, risk assessment and simulation modelling naturally integrate based on the flow of information from surveillance and scanning, and the cycle of information as it is refined by these tools into knowledge useful for tactical and strategic planning. Prioritization assesses information from surveillance and scanning according to the values of decision-makers, and research on high-priority emerging threats and EIDs using risk assessment and disease modelling refines knowledge and provides a more detailed understanding of the impacts and their probability of occurrence. This knowledge in turn refines prioritization, and as new information continues to arise, it can be assessed in the context of a more thorough understanding of existing health concerns. At each stage, information is systematically processed to deliver knowledge relevant to tactical and strategic planning. An example of a framework in current use is the Risk Management Cycle used by Defra [Reference Del Rio Vilas55]. Following the BSE and FMD crises in the UK, it was recognized that a systematic process to identify and prioritize emerging threats and EIDs was essential to underpin EID preparedness through improved surveillance and contingency planning [Reference Scudamore89].
In the resulting framework developed to achieve these aims, information from horizon scanning and surveillance is used to identify emerging threats and EIDs. Depending on priority (assessed using the prioritization tool e-THiR [Reference Del Rio Vilas55]) and the current state of knowledge, recommendations can be made regarding allocation of resources or changes to policy, and further research such as risk assessment and disease modelling can be instigated. Knowledge gained from this process is used to update D2R2 (Disease briefing, Decision support, Ranking and Risk assessment database), which is used as a resource to refine the prioritization process (Rupert Hine, personal communication). In this framework, systematic assessment of information characterizes emerging threats and EIDs in the context of existing health concerns according to the values of decision-makers who represent stakeholders. This enables response and contingency planning to be well-directed, and as new information arises (from research as well as scanning and surveillance) plans can be rapidly updated. The framework is science-based and transparent; therefore, activities can be justified and are defensible.
Collaboration is essential both to collect and to make maximum use of information gained from scanning and surveillance. At the level of surveillance, a ‘One Health’ approach is the minimum requirement [Reference Stärk90], but as the scope of information collection widens through horizon scanning and environmental scanning, increased cross-disciplinary collaboration is required that might include economists, social and environmental scientists, decision analysts, and experts in information technology, politics and logistics. Collection of information must also be global, both geographically and in disciplinary scope, covering everything that affects the interactions between host species, their pathogens and their environment. However, geographical, cultural and political barriers can all limit global collaboration and effective information gathering. Drawing on a greater diversity of data sources and types is one way to improve information collection and increase the likelihood that EIDs are identified and correctly assessed. It is also likely that the current trend of increasingly scarce resources for traditional surveillance will continue, placing greater dependence on novel data sources and ways of collecting information. One such new data source that has been proposed is crowdsourcing [Reference Chunara, Smolinski and Brownstein91]. Basing EID preparedness and response on a single data source is likely to result in EIDs going undetected or taking longer to be detected. Greater diversity in collection techniques and different foci will lead to greater diversity in information. This is more a policy and political issue than a technical one. As a consequence, programmes tend to focus on what emerges – not on preventing what might emerge.
Currently, preparedness focuses on horizon scanning and surveillance for rapid detection and identification of EIDs, and control measures focus on mitigating the impact of EIDs after they have emerged. Anticipation of specific EIDs is not possible; the nature of information from environmental scanning is non-specific and highly uncertain, and relates to the risk factors that drive EID events, which are not sufficiently understood to permit prediction. However, in terms of surveillance, there is value in focusing on areas undergoing rapid change in animal populations, production systems or marketing systems, as well as areas undergoing rapid socio-ecological change – for example, land use change in combination with other factors. Human population shifts – either due to civil conflict or economic drivers rapidly attracting or dispersing human populations – are also likely to be important targets for surveillance systems. Therefore, at a minimum, the use of information from environmental scanning through foresight programmes can develop capabilities for early detection in the most likely areas. In this way, preparation for EIDs can be diversified across both anticipatory activities – such as where to focus research and surveillance – and prevention and control of known EIDs.
An example is the case of preparation for pandemic influenza A; the emergence of pandemic strains of influenza virus has received much attention during the past decade. We know that more influenza viruses will emerge, and we now know that some of these will be highly pathogenic (HPAI) in poultry while some will have low pathogenicity (LPAI). Both H7N9 and H5N6 influenza viruses have recently emerged from poultry production systems [Reference Qi92, Reference Lam93] that earlier led to the emergence of H5N1. Based on recent history, we also now know that some of these influenza viruses will cause fatal disease in humans and might be spread globally via human-to-human transmission. Rather than simply detecting new cases of disease in humans quickly (humans as sentinels), within a framework for EID preparedness and response we should also be allocating resources to prevention – that is, to addressing some of the drivers – in addition to allocating much-needed resources to public health and veterinary services for disease surveillance and response activities. Knowing that the milieu that supports virus evolution and spread still exists will not prevent new viruses from emerging – it requires action. However, taking preventive action within such animal production systems still presents many challenges that span the broad spectrum of economic, social, technological and behavioural drivers.
Integration of the tools described in this review aims to ensure that both the drivers of EIDs and EID events are recognized and reported in a timely manner, that resources are prioritized effectively, and that maximum information is gained from risk assessment and simulation modelling to direct comprehensive tactical and strategic plans. We propose that an integrated approach to EID preparedness through the coordinated application of available tools should provide a greater overall benefit than individual tools applied in an ad hoc manner. Ultimately, the foundation of EID prevention lies in anticipating, recognizing and taking action to alter the course of the drivers of EIDs. Addressing these drivers is a global challenge required to achieve sustainable human development, and health security is only one part of this. Until these drivers are addressed, the focus must be preparedness for EID events; horizon scanning and surveillance are the foundation for this, without which tactical and strategic plans fail. Although there is rapid development of electronic reporting methods and novel methods for information collection and collation, reducing traditional surveillance should be questioned unless the validity of these new methods can be assessed. Paradoxically, anthropogenic drivers of EIDs – for example, advances in technology and communication that have facilitated increased trade and travel – have allowed the development of these information collection and assessment techniques. Until the drivers of EID events are addressed, will we get ahead of the curve that we create, or will we just chase it? This is an intriguing question that is beyond the scope of this review.
Declaration of Interest
None.