
Information gain as a tool for assessing biosignature missions

Published online by Cambridge University Press:  04 August 2023

Benjamin Fields
Affiliation:
Blue Marble Space Institute of Science, Seattle, WA 98104, USA; Wheaton College, Wheaton, IL 60187, USA
Sohom Gupta
Affiliation:
Blue Marble Space Institute of Science, Seattle, WA 98104, USA; Indian Institute of Science Education and Research Kolkata, Mohanpur, 741246, West Bengal, India
McCullen Sandora*
Affiliation:
Blue Marble Space Institute of Science, Seattle, WA 98104, USA
Corresponding author: McCullen Sandora; Email: [email protected]

Abstract

We propose the mathematical notion of information gain as a way of quantitatively assessing the value of biosignature missions. This makes it simple to determine how mission value depends on design parameters, prior knowledge and input assumptions. We demonstrate the utility of this framework by applying it to a plethora of case examples: the minimal number of samples needed to determine a trend in the occurrence rate of a signal as a function of an environmental variable, and how much cost should be allocated to each class of object; the relative impact of false positives and false negatives, with applications to Enceladus data and how best to combine two signals; the optimum tradeoff between resolution and coverage in the search for lurkers or other spatially restricted signals, with application to our current state of knowledge for solar system bodies; the best way to deduce a habitability boundary; the optimal amount of money to spend on different mission aspects; when to include an additional instrument on a mission; the optimal mission lifetime; and when to follow/challenge the predictions of a habitability model. In each case, we generate concrete, quantitative recommendations for optimizing mission design, mission selection and/or target selection.

Type
Research Article
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Introduction

The overarching goal of the field of astrobiology is to search for signs of life elsewhere in the universe. As it is a highly interdisciplinary endeavour, the strategies proposed to achieve this are multifaceted, spanning a range of solar system and exoplanet targets and detection techniques. As such, it is sometimes challenging even for seasoned practitioners to assess the value of upcoming missions. Recently, there have been efforts to address this. The Ladder of Life Detection (Neveu et al., 2018), Confidence of Life Detection score (Green et al., 2021) and Standards of Evidence framework (Meadows et al., 2022) aim to provide a language for communicating the evidential strength of any observed biosignature, both to the public and amongst scientists. The nine axes of merit have been proposed as a framework for describing the utility of any technosignature detection mission in Sheikh (2020). Lorenz (2019) provides a risk versus payoff framework for life detection missions, which even factors in the mission success probability.

While many of these frameworks are fantastic at isolating the relevant mission aspects necessary for judging the merits of a mission, they are fundamentally descriptive processes, and leave it to the practitioner's judgement how to prioritize each mission dimension. As such, they are not suited for any final decision making process, whether it be selecting between missions or optimizing mission design.

In this note we propose a simple method of assessing mission value in terms of information gain, and demonstrate the utility of this formalism through considering several idealized mission designs. This framework allows us to distil the specifics of a mission down to a single number, which is useful for many purposes. At the outset, we should acknowledge the shortcomings of this procedure: clearly, there is no unique way of ranking missions; our approach is capable of providing one such quantification, but other factors are surely important. While our approach encompasses many mission aspects, it specifically disregards, for instance, any ancillary benefits a mission would provide apart from biosignature detection. Secondly, we wholeheartedly disavow mistaking performance metrics for the indefinite concept of mission value. Any practitioner interested in using a method like this should be well aware of the dangers inherent in using performance metrics for decision making: this is exemplified in Goodhart's law, whereby when a quantitative proxy is used to drive decision making, there is a tendency for people to optimize for the proxy itself, sometimes to the detriment of the original intended goal (Chrystal et al., 2003). As such, we do not endorse a plan to ‘maximize information gain’, nor do we believe it would be particularly useful to very precisely calculate the information of any particular mission. The intent of our framework is to provide a rough heuristic that is easily calculable in a manner that makes the dependence on mission parameters highly apparent, so that these can be quickly and straightforwardly varied to determine their effect on the overall mission value.

In the remainder of this section we go into more detail on the process of calculating information gain for a given mission setup. In the following sections, we demonstrate the utility and flexibility of this approach by applying it to several disparate scenarios, and deriving concrete, quantitative recommendations for optimizing mission design. In section ‘Ratio of 2 signals’ we apply this formalism to the challenge of determining whether the occurrence rate of a biosignature depends on some system variable. In section ‘The tradeoff between false positives and false negatives’ we consider the effects false positives and false negatives have on information. In section ‘Lurkers’ we consider a tradeoff between resolution and coverage in searching for a biosignature. Section ‘Further applications’ contains more applications, including: how to best determine the limiting value of some system parameter for life, how to apportion budget to various aspects of a mission, when to include an instrument on a mission, how to determine the optimal mission lifetime and when to follow a habitability model, and when to challenge it.

Bayes estimation and information gain

In this work we propose to operationalize the dictum of many proposed missions, to ‘maximize science return’, in the following way: by equating science return with information gain, in the information theoretic sense, we may write the science return as

(1)$$S = \int {\rm d}f\, p( f) \, \log p( f) $$

This explicitly incorporates a value for how to reduce raw mission data, in the form of a bit stream, down to the distilled representation relevant for our immediate purposes. As such, this framework is more general, and can suitably be applied to a variety of scientific goals across disciplines.

As a simple application of the above framework, let us imagine the simplest setup, with a single biosignature, in the absence of false positives and negatives, and drawing from a uniform population. We wish to estimate the fraction of locations possessing this biosignature f, given a total number of systems surveyed by a mission $N_{\rm{tot}}$, which returns $N_{\rm{det}}$ detections. If the occurrence rate were known, the number of detections would follow a binomial distribution, and so when the number of detections is known instead, the signal occurrence rate f follows the conjugate distribution $f\sim \beta( N_{\rm{tot}},\; N_{\rm{det}},\; f)$, where β refers to the beta distribution (Sandora and Silk, 2020).

(2)$$\eqalign{\beta( N_{\rm{tot}},\; N_{{\rm{det}}},\; f) & = B( N_{\rm{tot}},\; N_{\rm{det}}) \, f^{N_{\rm{det}}}( 1-f) ^{N_{\rm{tot}}-N_{\rm{det}}}\cr B( N_{\rm{tot}},\; N_{\rm{det}}) & = {( N_{\rm{tot}} + 1) !\over N_{\rm{det}}!( N_{\rm{tot}}-N_{\rm{det}}) !}}$$

This distribution has the following mean and variance:

(3)$$\mu = {N_{\rm{det}} + 1\over N_{\rm{tot}} + 2},\; \quad \sigma^2 = {( N_{\rm{det}} + 1) ( N_{\rm{tot}}-N_{\rm{det}} + 1) \over ( N_{\rm{tot}} + 2) ^2( N_{\rm{tot}} + 3) }$$

If the mean is well separated from 0, this has $\mu \rightarrow f_o = N_{\rm{det}}/N_{\rm{tot}}$ and $\sigma^2\rightarrow f_o( 1-f_o) /N_{\rm{tot}}$. If the mean is consistent with 0 to within the observed uncertainty, $\mu \rightarrow 1/N_{\rm{tot}}$ and $\sigma ^2\rightarrow 1/N_{\rm {tot}}^2$. While simple, this setup already indicates how precision depends on survey size, in multiple different regimes. This can be related to science yield (information gain) through equation (1).

For a broad class of distributions, including the setup above, information reduces to S ≈ log σ, so maximizing information gain becomes equivalent to minimizing variance, which sets the measurement uncertainty. In this limit, 1 bit of information is gained if the error bars for the measured quantity are decreased by a factor of 2.
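As an illustration of these scalings, the following minimal sketch (not taken from the paper; the survey sizes and variable names are our own choices) evaluates the posterior mean and standard deviation of equation (3) and the information gained, in bits, when a survey is enlarged:

```python
# Minimal sketch of equation (3): posterior mean and variance of the occurrence
# rate, and the information gain (in bits) from shrinking the error bar.
import numpy as np

def posterior_stats(n_tot, n_det):
    """Mean and variance of the beta posterior in equation (3)."""
    mu = (n_det + 1) / (n_tot + 2)
    var = (n_det + 1) * (n_tot - n_det + 1) / ((n_tot + 2) ** 2 * (n_tot + 3))
    return mu, var

# Hypothetical survey: 100 systems, 10 detections
mu, var = posterior_stats(100, 10)
print(f"f = {mu:.3f} +/- {np.sqrt(var):.3f}")

# Quadrupling the survey at the same occurrence rate roughly halves the error
# bar, i.e. about 1 bit of information gain in the sense described above.
mu4, var4 = posterior_stats(400, 40)
print(f"{np.log2(np.sqrt(var / var4)):.2f} bits gained")
```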

The above expression assumes initial ignorance. For the case where we are updating knowledge obtained from a previous mission, we can instead use the Kullback–Leibler (KL) divergence as the generalization:

(4)$$\Delta S = KL( p\Vert q) = \int {\rm d}f\, p( f) \, \log\left({\,p( f) \over q( f) }\right)$$

This is a measure of the information gained as one moves from an initial distribution q to a final one p. This has the property that KL(p‖q) ≥ 0, with equality only if p = q, guaranteeing that information is always gained with new missions. This reduces to the prior case if we take the initial distribution q to be uniform.

In the case where we are updating our knowledge of the biosignature occurrence rate from a previous mission, both missions subject to the conditions outlined above, we have $p\sim \beta( N_{\rm{tot}},\; N_{\rm{det}},\; f)$ and $q\sim \beta ( N_{\rm {tot}}^{( 0) },\; \, N_{\rm {det}}^{( 0) },\; \, f)$, and we can find the exact formula for information gain from Rauber et al. (2008):

(5)$$\eqalign{\Delta S& = \log{B\left(N_{\rm{tot}},\; N_{\rm{det}}\right)\over B\left(N_{\rm{tot}}^{( 0) },\; N_{\rm{det}}^{( 0) }\right)}\cr & \quad + \Delta ( N_{\rm{tot}}-N_{\rm{det}} ) \psi( N_{\rm{tot}}-N_{\rm{det}} + 1) \cr & \quad + \Delta N_{\rm{det}}\psi( N_{\rm{det}} + 1) -\Delta N_{\rm{tot}}\psi( N_{\rm{tot}} + 2) }$$

where ψ(x) = dΓ(x)/dx/Γ(x) is the digamma function. Using the approximation ψ(x) ~ log x − 1/2x, we can see that this is a generalization of the formula ΔS ≈ log σ to situations where we start with some information. Additionally, in the limit where the sample size is incremented by 1 in an updated experiment, we find $\Delta S\approx 1/N_{\rm{tot}}$.
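Equation (5) is straightforward to evaluate numerically. The sketch below (our own illustration, with hypothetical survey sizes) implements it with standard special functions and cross-checks it against a direct numerical integration of equation (4):

```python
# Sketch of equation (5): information gain between an earlier survey
# (N_tot0, N_det0) and an updated one (N_tot, N_det), with a numerical
# cross-check of the KL divergence in equation (4).
import numpy as np
from scipy.special import gammaln, digamma
from scipy.integrate import quad
from scipy.stats import beta as beta_dist

def log_B(n_tot, n_det):
    """log of B(N_tot, N_det) = (N_tot+1)! / (N_det! (N_tot-N_det)!) from equation (2)."""
    return gammaln(n_tot + 2) - gammaln(n_det + 1) - gammaln(n_tot - n_det + 1)

def delta_S(n_tot, n_det, n_tot0, n_det0):
    """Equation (5), in nats."""
    dS = log_B(n_tot, n_det) - log_B(n_tot0, n_det0)
    dS += ((n_tot - n_det) - (n_tot0 - n_det0)) * digamma(n_tot - n_det + 1)
    dS += (n_det - n_det0) * digamma(n_det + 1)
    dS -= (n_tot - n_tot0) * digamma(n_tot + 2)
    return dS

def delta_S_numeric(n_tot, n_det, n_tot0, n_det0):
    """Direct integration of equation (4), avoiding the interval endpoints."""
    p = beta_dist(n_det + 1, n_tot - n_det + 1)     # updated posterior
    q = beta_dist(n_det0 + 1, n_tot0 - n_det0 + 1)  # earlier posterior
    integrand = lambda f: p.pdf(f) * np.log(p.pdf(f) / q.pdf(f))
    return quad(integrand, 1e-6, 1 - 1e-6)[0]

print(delta_S(200, 20, 50, 5), delta_S_numeric(200, 20, 50, 5))  # the two should agree
```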

This framework can be compared to other, more rigorous statistical methods of estimating survey efficacy. If one had data from a set of observations, the usual procedure would be to perform a battery of statistical tests (e.g. Neyman–Pearson, binomial, kernel tests) aimed at determining the precise degree to which the tested hypothesis is (dis)favoured compared to some null hypothesis. While rejecting the null hypothesis to some specified confidence level (0.05 is a standard choice) is suitable for large data sets, this procedure must be treated with care when the data samples are small and difficult to obtain (Levine et al., 2008). A well-known way to estimate parameters is the method of moments (Casella and Berger, 2001), which yields potentially biased but consistent estimators. Another method is bootstrapping from existing data to estimate the parameters of the parent distribution via the plug-in principle (Mukherjee and Sen, 2019). However, these are often of little help in determining the amount of data needed to reach a specified level of significance, or which targets we should favour when collecting these data. Modelling the output of future missions can help to proactively determine the amount of data needed, and the authors regard this as a necessary step of the mission planning process, but it is laborious, must be redone from scratch for each new mission and, in our opinion, does not always lead to an enlightening understanding of why a certain amount of data is required. Our framework aims to augment these existing procedures and guide intuition by showing how back-of-the-envelope calculations can lead us to concrete, actionable recommendations for mission design.

We now demonstrate the wider applicability of our framework by applying it to extensions of these simple setups.

Ratio of 2 signals

As a first application of our framework, we turn our attention to the case where we want to determine how the occurrence rate of a biosignature depends on some system parameter. This is a generalization of the analysis done in Sandora and Silk (2020), which made the idealization that all systems being measured were equally likely to host life. Here, we are interested in determining the minimum number of systems required to be observed to detect a population trend, and how best to allocate mission time towards systems of varying characteristics to most effectively distinguish whether a trend exists.

This analysis has many applications. Indeed, we may expect the probability of biosignatures to depend on planet mass, stellar mass, incident irradiation, planet temperature, age, metallicity, etc. For illustrative purposes, we focus our attention on a particular case, the system's position within our galaxy, though the analysis will be general enough to easily adapt to any of the other quantities mentioned.

There are numerous schools of thought on how habitability may depend on galactic radius (for a review, see Jiménez-Torres et al., 2013). Metallicity (the abundance of elements heavier than helium) is observed to decrease with galactic radius (Daflon and Cunha, 2004), which may either be a boon or bane for life. On the one hand, a minimum amount of material is required for planet formation (Johnson and Li, 2012), and planets are expected to be more abundant around more metal-rich stars (Balbi et al., 2020). On the other hand, the prevalence of hot Jupiters also has been shown to increase with metallicity and, depending on the formation mechanism, could preclude the presence of terrestrial planets in the habitable zone (Fischer and Valenti, 2005). Additionally, galactic centres (usually) host an active galactic nucleus (AGN), which spews high energy radiation that can strip planetary atmospheres (Jiménez-Torres et al., 2013). These two competing factors have led to the hypothesis that there is a galactic habitable zone (GHZ) (Lineweaver et al., 2004), analogous to the circumstellar habitable zone, comprised of a possibly thin annular region, only within which habitable planets can exist. This hypothesis has drawn some criticism, not least because stars are known to migrate radially throughout the course of their evolution, defying simple attempts at placing their orbit (Prantzos, 2008).

Here, we treat the GHZ as a hypothesis, and ask what sort of data we would need to either verify or falsify it. We may divide the alternatives into four scenarios: (i) null hypothesis: the biosignature occurrence rate does not depend on galactic radius; (ii) Z: the occurrence rate is greater towards the centre; (iii) HJ: the occurrence rate is greater towards the edge; and (iv) AGN: the occurrence rate is 0 inside some critical radius.

A fully statistically rigorous approach would do something along the lines of fitting a curve for occurrence rate versus galactic radius, then determining to what degree of certainty the slope is away from 0. Here, we opt for a much simpler method of bucketing our population into two bins, then comparing the occurrence rates of each bin and determining if they are different from each other to the statistical power afforded by the mission. While this simplified analysis is not a replacement for the full approach, it allows us to easily track the dependence on mission parameters, and so can be useful for planning purposes.

We therefore assume we have two populations, either from the same or different missions. If mission 1 surveys $M_{\rm{tot}}$ planets and detects $M_{\rm{det}}$ signals, and mission 2 surveys $N_{\rm{tot}}$ planets and detects $N_{\rm{det}}$ signals, the signal occurrence rate in each population will be given by beta distributions $X_M\sim \beta( M_{\rm{tot}},\; M_{\rm{det}},\; f)$ and $X_N\sim \beta( N_{\rm{tot}},\; N_{\rm{det}},\; f)$ respectively. Then the inferred ratio of these two occurrence rates, $X = X_M/X_N$, is given by the following distribution (Pham-Gia, 2000):

(6)$$X\sim \left\{\matrix{ {\displaystyle B( \alpha,\; \beta_N) \, { } _2F_1( \alpha,\; 1-\beta_M;\; \alpha + \beta_N;\; x) \over \displaystyle B( \alpha_M,\; \beta_M) B( \alpha_N,\; \beta_N) }x^{\alpha_M-1}, \; & x\leq 1 \cr \cr \displaystyle {B( \alpha,\; \beta_M) \, { } _2F_1\left(\alpha,\; 1-\beta_N;\; \alpha + \beta_M;\; \frac{\scriptstyle1}{x}\right)\over \displaystyle B( \alpha_M,\; \beta_M) B( \alpha_N,\; \beta_N) }x^{-\alpha_N-1},\; & x> 1 }\right.$$

where $\alpha_M = M_{\rm{det}} + 1$, $\beta_M = M_{\rm{tot}}-M_{\rm{det}} + 1$, $\alpha_N = N_{\rm{det}} + 1$, $\beta_N = N_{\rm{tot}}-N_{\rm{det}} + 1$ and $\alpha = \alpha_M + \alpha_N$. The hypergeometric function is $_2F_1( a,\; \, b;\; c;\; x) = \sum _{n = 0}^\infty ( a) _n( b) _n/( c) _nx^n/n!$, with $( a) _k = \Gamma( a + k) /\Gamma( a)$ the k-th order rising Pochhammer symbol.

The moments of this distribution can be written as

(7)$$E\left[X^k\right] = {( M_{\rm{det}} + 1) _k\over ( M_{\rm{tot}} + 2) _k}{( N_{\rm{tot}} + 2-k) _k\over ( N_{\rm{det}} + 1-k) _k}$$

From this we can derive the mean

(8)$$\mu = {M_{\rm{det}} + 1\over M_{\rm{tot}} + 2}{N_{\rm{tot}} + 1\over N_{\rm{det}}} = {\,f_M\over f_N} + {\cal O}( 1/N) $$

where $f_M = M_{\rm {det}}/M_{\rm {tot}}$ and $f_N = N_{\rm {det}}/N_{\rm {tot}}$

Similarly, the variance can be written

(9)$$\eqalign{\sigma^2& = {( M_{\rm{det}} + 1) ( M_{\rm{det}} + 2) N_{\rm{tot}} ( N_{\rm{tot}} + 1) \over ( M_{\rm{tot}} + 2) ( M_{\rm{tot}} + 3) ( N_{{\rm{det}}}-1) N_{\rm{det}}}-\mu^2\cr & = {\,f_M^2\over f_N^2}\left({1-f_N\over f_NN_{\rm{tot}}} + {1-f_M\over f_MM_{\rm{tot}}}\right) + {\cal O}\left(1/N^2\right)}$$

The variance is not symmetric in the two ratios $f_M$ and $f_N$, since the associated distribution $X_M/X_N$ is not symmetric. However, the quantity $\sigma^2/\mu^2$ is symmetric, and has a characteristic 1/N behaviour, in this case dependent on the number of detected signals $N_{\rm{det}} = f_N N_{\rm{tot}}$ and $M_{\rm{det}} = f_M M_{\rm{tot}}$, rather than the total number of systems observed. Note also that this variance contains two additive contributions from the uncertainties in both surveys.

If one occurrence rate is smaller than the other, a correspondingly larger number of total systems needs to be surveyed to adequately measure any trend with system parameters, rather than just the overall occurrence rate. For rare signals, $f_{M, N}\ll 1$, the number of detections in each survey should be roughly equal. This implies that surveys should collect more signals from regions where the signal is expected to be rarer.

To use this to distinguish between two competing hypotheses predicting different values of the ratio f M/f N, we would want the measurement error $e = \sqrt {\sigma ^2}$ to be smaller than the difference in predictions. So if we are trying to distinguish between the different galactic habitability scenarios outlined above, we would want $\sigma \lesssim \mu$.
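The leading-order expressions above are easy to verify by simulation. The following sketch (hypothetical survey sizes, our own variable names) draws samples from the two beta posteriors and compares the Monte Carlo mean and variance of the ratio with equations (8) and (9):

```python
# Sketch: Monte Carlo check of the mean and variance of the occurrence-rate
# ratio X = X_M / X_N (equations (8)-(9)), for illustrative survey sizes.
import numpy as np

rng = np.random.default_rng(0)
M_tot, M_det = 200, 30    # hypothetical survey of population M
N_tot, N_det = 400, 20    # hypothetical survey of population N

X_M = rng.beta(M_det + 1, M_tot - M_det + 1, size=1_000_000)
X_N = rng.beta(N_det + 1, N_tot - N_det + 1, size=1_000_000)
X = X_M / X_N

f_M, f_N = M_det / M_tot, N_det / N_tot
var_approx = (f_M / f_N) ** 2 * ((1 - f_N) / (f_N * N_tot) + (1 - f_M) / (f_M * M_tot))

print("mean:", X.mean(), "vs leading order f_M/f_N =", f_M / f_N)
print("variance:", X.var(), "vs leading-order estimate", var_approx)
```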

Cost analysis

We now use this analysis to determine the optimal way of apportioning funds to the two populations. We denote the fraction of funding given to mission 2 as c, and as an example assume the cost functions for these surveys are given by the same power law relation $M = M_0( 1-c) ^q$ and $N = N_0 c^q$, where $M_0$ and $N_0$ are the yields that would be obtained if all resources were put into mission M and N, respectively. As we discuss in section ‘What is the optimal amount of money to spend on different mission aspects?’, we expect q > 1 if the two populations come from two different missions, as increasing budget allows multiple mission aspects to be improved simultaneously. For apportioning observing time within the same mission, we instead expect q < 1, as increasing observing time yields diminishing returns, discussed further in Sandora and Silk (2020). We wish to minimize the variance in equation (9), corresponding to maximum information gain. The derivative of equation (9) yields

(10)$$\eqalign{{\partial\over \partial c }\sigma^2 & = {\,f_M^2\over f_N^2}\Bigg[{-q\over N_0}{1-f_N\over f_N}c^{-( q + 1) }\cr & \quad + {q\over M_0}{1-f_M\over f_M}( 1-c) ^{-( q + 1) }\Bigg]}$$

This makes use of the approximation that both $M_{\rm{det}}$ and $N_{\rm{det}}$ are substantially removed from 0. Setting this to zero and solving for c gives us the optimal cost

(11)$$c = {1\over 1 + s^{{1}/( q + 1) }},\; \quad s = {N_0\over M_0}{\,f_N\over f_M}{1-f_M\over 1-f_N}$$

Yielding survey sizes

(12)$$N = N_0{1\over \left(1 + s^{{1}/( q + 1) }\right)^q},\; \quad M = M_0{s^{{q}/( q + 1) }\over \left(1 + s^{{1}/( q + 1) }\right)^q}$$

From here, it can be seen that if $f_M = f_N$ and $M_0 = N_0$, then c = 1/2, i.e. both surveys should get equal priority. The relative cost is plotted for generic values of the observed fractions in Fig. 1.

Figure 1. Optimum cost fraction c to minimize variance. Unless specified, these plots take q = 1 and $M_0 = N_0$. Here, a low value of c corresponds to allocating most funds to mission M.

This allows us to explore how the optimal cost depends on various parameters. The recommendations are sensible: the better the observations in population M, the more cost should be invested in N, and vice versa, to gain maximum information. If q is large, the optimal cost will deviate more strongly from 1/2 compared to the q = 1 case, holding other parameters fixed. Similarly, if q is small, we have c ≈ 1/2 over a broad range of parameter values. In the lower right panel we see that more money should be spent on missions that have lower yields.

Lastly, note that from the above expression, we see that the optimal survey sizes depend on biosignature occurrence rates, which are not necessarily known beforehand. If these are unknown, a well designed survey would be able to adapt to information as it arrives, to alter upcoming target selection as current estimates of the occurrence rates dictate.
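A minimal sketch of this allocation rule is given below (our own illustration; the yields, occurrence rates and exponent are hypothetical). It evaluates equations (11) and (12) and shows, for the example chosen, that the population with the rarer signal receives the larger share of the budget:

```python
# Sketch of equations (11)-(12): optimal budget fraction c for mission N and
# the resulting survey sizes, given single-mission yields M0, N0, occurrence
# rates fM, fN and cost exponent q.
def optimal_split(M0, N0, fM, fN, q=1.0):
    s = (N0 / M0) * (fN / fM) * (1 - fM) / (1 - fN)   # equation (11)
    c = 1.0 / (1.0 + s ** (1.0 / (q + 1.0)))
    N = N0 * c ** q                                   # equation (12)
    M = M0 * (1 - c) ** q
    return c, M, N

# Identical missions and identical rates -> equal split (c = 0.5)
print(optimal_split(1000, 1000, 0.1, 0.1))
# The signal is rarer in population N -> mission N gets the larger budget share
print(optimal_split(1000, 1000, 0.2, 0.02))
```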

The tradeoff between false positives and false negatives

In this section, we apply our procedure to solar system planetary biosignature missions. Here, we address ambiguity in the interpretation of mission results. This ambiguity can arise from two significant contributing factors. Firstly, there could be biosignature experiments where a positive result may be a false positive, e.g. some geochemical trickery masquerading as life. Alternatively, there may be situations where a negative result could be a false negative, e.g. there may be entire communities of microbes present which the instrument is not sensitive enough to detect. Any realistic mission will have to deal with both. In such a scenario, and assuming finite resources/design constraints, one key decision which will have to be made is which of the two sources of ambiguity to prioritize minimizing to arrive at a higher overall certainty of whether or not life is present. As discussed in Foote et al. (2022), the need to mitigate these leads to the drive towards the most definitive biosignatures possible, and as exhaustive a determination of abiogenesis rate as possible.

Unlike large scale surveys of many exoplanet systems, solar system based biosignature search strategies are characterized by information depth rather than information breadth. In other words, a solar system life detection mission will survey the same planet over and over again, gaining more and more information and constraining our statistical parameters. For this reason, the ideal framework from which to approach the problem is a Bayesian one, which can successively adjust prior probabilities based on new information (Catling et al., 2018).

We define the false positive rate as the probability of a detection given nonlife, FP = P(D|NL) and the false negative rate as the probability of a nondetection given life, FN = P(ND|L). The quantities needed for the information can be derived via Bayes’ theorem:

(13)$$P( L \mid D,\; FP,\; FN,\; f) = {P( D \mid L,\; FP,\; FN,\; f) P( L \mid FP,\; FN,\; f) \over P( D \mid FP,\; FN,\; f) }$$

where

(14)$$P( L \mid FP,\; FN,\; f) = P( L) = f$$

is the probability of life existing in the location where the probe searched. The probability that the instrument registers a signal is

(15)$$P( D \mid FP,\; FN,\; f) = ( 1-\rm{FN}) {\it f} + \rm{FP}( 1- {\it f} ) $$

Therefore, the posterior probability of life given a detection is given by

(16)$$P( L \mid D,\; FP,\; FN,\; f) = {( 1-\rm{FN}) {\it f} \over ( 1-\rm{FN}) {\it f } + \rm{FP}( 1-{\it f } ) }$$

and

(17)$$P( L\vert ND,\; FP,\; FN,\; {\it f } ) = {\rm{FN} {\it f}\over \rm{FN} {\it f } + ( 1-\rm{FP}) ( 1- {\it f } ) }$$

To arrive at a decision making framework, we approach the problem in terms of information gain. Here, the Shannon entropy measures the uncertainty present in the data: the larger the entropy, the lower the certainty of the result and the higher the likelihood that it may be random. In this case the probabilities are conditional, so we use the Shannon entropy for conditional probabilities, given below in terms of the events of detection, nondetection and the presence of life. This is the information of the posterior probability distributions, weighted by their probability of occurrence.

(18)$$\eqalign{S( \rm{FP},\; \rm{FN}) & = -\int {\rm d}f p_f( f) \bigg[P( D) \, s\big(P( L\vert D) \big)\cr & \quad + P( ND) \, s\big(P( L\vert ND) \big)\bigg]}$$

where s(p) = plog p + (1 − p)log (1 − p).

Lastly, as the probability of life f is not known ahead of time, our expression requires that we integrate over all possible values. This requires a prior distribution $p_f( f)$, which incorporates our current estimates of the probability of life existing in a particular locale, given the current uncertainties in the probability of life emerging, as well as any geological history which may have impacted the site's habitability (Westall et al., 2015; Lorenz, 2019). This is an inescapable part of the calculation, and so to investigate the dependence of our results on any assumptions we make about this quantity we explore four different options: (i) the value of f is arbitrarily taken to be 0.001; (ii) the value of f is taken to be 0.5; (iii) the probability of any value of f is uniformly distributed; and (iv) the probability is log-uniform, $p_f( f) \sim 1/f$. This last option requires a minimum value for f, which we take to be $10^{-10}$ in our calculations. The choice of this value is arbitrary, but mainly influences the overall scale of the expected information gain through $S\sim 1/\log( p_{\rm{min}})$, rather than the dependence on the variables FN and FP. The expected information gain for these four choices is displayed in Fig. 2.
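The expected entropy of equation (18) is simple to evaluate for any of these priors. The sketch below (our own illustration) computes the posterior probabilities of equations (16) and (17) and the conditional entropy for a point prior and, by numerical integration, for the uniform prior:

```python
# Sketch of equations (16)-(18): posterior probability of life and expected
# conditional entropy S(FP, FN), for a point prior on f and for a uniform prior.
import numpy as np
from scipy.integrate import quad

def s(p):
    """s(p) = p log p + (1-p) log(1-p), with the endpoints regularized."""
    p = np.clip(p, 1e-300, 1 - 1e-15)
    return p * np.log(p) + (1 - p) * np.log(1 - p)

def entropy_given_f(FP, FN, f):
    PD = (1 - FN) * f + FP * (1 - f)                       # equation (15)
    P_L_D = (1 - FN) * f / PD                              # equation (16)
    P_L_ND = FN * f / (FN * f + (1 - FP) * (1 - f))        # equation (17)
    return -(PD * s(P_L_D) + (1 - PD) * s(P_L_ND))         # integrand of equation (18)

def entropy_uniform_prior(FP, FN):
    return quad(lambda f: entropy_given_f(FP, FN, f), 0, 1)[0]

print(entropy_given_f(0.1, 0.2, 0.001))   # hypothetical instrument, point prior f = 0.001
print(entropy_uniform_prior(0.76, 0.24))  # roughly the Enceladus flyby values quoted later
```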

Figure 2. Heat map of the conditional entropy in terms of FP and FN. The top two plots take particular values for the probability of life existing at the sampling location f. The bottom two integrate over a uniform and log-uniform distribution for f, respectively. The white dashed line in each separates regions where it is more beneficial to decrease FP (above the line) and FN (below the line).

In these plots, the blue regions throughout the midsection have the highest entropy, corresponding to maximum ambiguity in result interpretation. The red regions in the upper right and lower left corners are both places where the information entropy approaches zero, corresponding to certain result interpretation. The rightmost red region is a pathological point in parameter space where both FP and FN approach 1. In our parameterization, this region indicates that the signal (or lack thereof) is almost sure to convey the opposite of what the instrument was designed to do, and the experiment is certain to be wrong. This is what every biosignature mission should deliberately avoid.

In the bottom left corner, we approach our second region where S → 0, this time where FP and FN both approach zero, representing the ideal scenario. Such unambiguous results will be unobtainable for any realistic mission, but these plots do provide a metric that can be used to evaluate missions with different values of FP and FN. For a preexisting mission design that can be improved upon incrementally, the recommendation given by this analysis is to go in the direction of maximum gradient, which depends on the mission's position in the plot. In these plots, we have placed a white dashed line demarcating the region where it is more important to prioritize false positives (above the curve) from the region where it is more important to prioritize false negatives (below). This line is exactly equal to the diagonal FP = FN for the uniform and f = 0.5 cases, but is substantially different in the others. Notably, in the region where both FP and FN are small, it remains more important to prioritize false positives. This is a consequence of our assumption that f ≪ 1; the opposite conclusion would hold if we consider f ≈ 1.

This procedure is, of course, reliant on a method of estimating false positive and negative rates for a mission, both of which may be difficult in practice. FP can be estimated by characterizing the instrumental sensitivity, and accurately accounting for all abiotic signal sources with careful geochemical modelling. This is often fraught with controversy (for a recent review see Harman and Domagal-Goldman, 2018), as historical examples can attest (see e.g. Barber and Scott, 2002; Yung et al., 2018; Bains and Petkowski, 2021). FN, on the other hand, may be just as difficult to ascertain, as life may be present in far lower abundances than Earth based analogue environments indicate, yielding uncertainty in overall signal strength (Cockell et al., 2009). Additionally, extraterrestrial life may have entirely different metabolisms, and so may not produce a targeted biosignature (Johnson et al., 2019).

As further illustration of the application of this formalism, we next turn to two more examples: the combination of multiple instruments, and the recent detection of methane on Enceladus.

Combining signals from multiple instruments

Here we investigate the effect of combining signals to mitigate false positives and negatives, which can provide redundancy and ameliorate the difficulties in interpretation present for a single signal in isolation. Generally, this can be done in a variety of different ways, to either increase the coverage or certainty of a detection. We illustrate the various approaches in a simple setup, and show that our framework yields recommendations for which choice is preferable.

Let us restrict our attention to the combination of two signals. The first is fixed: the detection of a certain biosignature (say, methane), which carries with it a probability of false positive and false negative. We then have two options for including a second instrument. In option A, we are concerned with false positives, so we include a second instrument to detect a secondary signal which we expect to be present in the case of life (say, homochirality). We then claim detection of life only if both these signals register. While this increases our confidence in a detection, it comes at an increased risk of false negatives, since if either instrument fails to register, we would reject the signal of the other.

Alternatively, we may be more concerned with false negatives, and so may include an additional instrument to measure a third biosignature (say, a spectral red edge), which would be complementary to the first. This improves our original coverage, and so alleviates concern for false negatives. However, this also increases the chances of false positives, for if either instrument registers a signal, we would claim detection.

The question we wish to address is which of these two options is preferable, given the parameters of the setup. For simplicity, we assume that the false positive and false negative rates of all three instruments are identical, to avoid a proliferation of variables. We also assume that the signals which they measure are uncorrelated with each other (as a high degree of correlation would render additional instruments not worthwhile). In this case, we have

(19)$$\eqalign{P_A( \rm{FP}) = \rm{FP}^2,\; \quad P_{\it A}( \rm{FN}) = 1-( 1-\rm{FN}) ^2\cr P_{\it B} ( \rm{FP}) = 1-( 1-\rm{FP}) ^2,\; \quad P_{\it B}( \rm{FN}) = \rm{FN}^2}$$

With this, we display the difference in information gained from these two setups in Fig. 3, using the value f = 0.001. We notice that if FP > FN, then scenario A is preferred, where both signals must register to count as a detection. Also of note is that the difference in the two setups is greatest when either one of the two probabilities is either very small or very large.
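A sketch of this comparison is given below (our own illustration, with hypothetical instrument rates); it forms the combined rates of equation (19) and compares the expected entropies of the two strategies at fixed f:

```python
# Sketch of equation (19): effective FP/FN rates for the 'and' (option A) and
# 'or' (option B) combinations of two identical, independent instruments, and
# the resulting difference in expected entropy at a fixed prior f (cf. Fig. 3).
import numpy as np

def entropy(FP, FN, f):
    s = lambda p: p * np.log(p) + (1 - p) * np.log(1 - p)
    PD = (1 - FN) * f + FP * (1 - f)
    PLD = (1 - FN) * f / PD
    PLND = FN * f / (FN * f + (1 - FP) * (1 - f))
    return -(PD * s(PLD) + (1 - PD) * s(PLND))

FP, FN, f = 0.3, 0.1, 0.001                      # hypothetical single-instrument rates
FP_A, FN_A = FP ** 2, 1 - (1 - FN) ** 2          # 'and': both must register
FP_B, FN_B = 1 - (1 - FP) ** 2, FN ** 2          # 'or': either one suffices
diff = entropy(FP_B, FN_B, f) - entropy(FP_A, FN_A, f)
print("prefer 'and'" if diff > 0 else "prefer 'or'")   # here FP > FN, so 'and'
```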

Figure 3. Difference in two signal entropy for two different signal combination strategies. If this difference is positive, it is more beneficial to use the ‘and’ strategy, so that both signals must register to count as a detection. If the difference is negative, the ‘or’ strategy, where either signal registering would count as a detection, is preferable.

Direct application: the Enceladus data

The Cassini spacecraft, as part of its mission to study Saturn and its moons, passed through a plume erupting from the south pole of Enceladus to collect a sample. Instrumental analysis of the sample revealed a relatively high abundance of methane (CH4), molecular hydrogen (H2) and carbon dioxide (CO2) (Waite et al., 2017). Methane is considered a potential biosignature because on Earth, microbial communities around hydrothermal vents obtain energy through the chemical disequilibrium present in H2 emanating from the vents, releasing methane as a waste product, a process known as methanogenesis (Lyu et al., 2018). However, this is not an unambiguous signal, as methane can also be produced abiotically in hydrothermal vent systems via serpentinization (Mousis et al., 2016). One method of distinguishing between these two scenarios is then to compare the ratio of fluxes, as biogenic production would yield a higher relative methane rate than would occur abiotically.

This situation represents an ideal test case for our decision making tool: an ambiguous biosignature result, with potential for both false positives and false negatives in the data.

An analysis of the Cassini findings by Affholder et al. (2021) compared outgassing rates of simulated populations of archaea and abiotic serpentinization, and claimed moderate support for the biotic hypothesis on Bayesian grounds. The model also indicated that at the population sizes and rates of methanogenesis demanded by the data, the archaea would have a negligible effect on the abundance of H2; therefore, the high concentration of H2 present in the data did not count against the biotic methanogenesis hypothesis.

It should be noted that this model still carries considerable ambiguity, and a substantial portion of their model runs produce the observed signal abiotically. Regardless, they present quantitative estimates for the probabilities of false positives and false negatives in actual data. Therefore, it is well suited for an application of our decision making tool.

According to the biogeochemical models in Affholder et al. (2021), the Cassini plume flybys resulted in FP = 0.76 and FN = 0.24. Here FP and FN are determined by the limits of instrumental sensitivity; from Waite et al. (2017) the sensitivity threshold is estimated to be about 10× smaller than the observed values. This is illustrated in Fig. 4, where we place a white star to indicate where in the diagram our current knowledge of these biosignatures lies.

Figure 4. White star corresponds to the estimated FP and FN of the Enceladus Cassini flythrough mission. The underlying plot is the same as shown in Fig. 2, with uniform prior for f.

Lurkers

As a further application, we apply our approach to the spatial problem, where we are concerned with the tradeoff between coverage and resolution when looking for a biosignature. Much of the language we use in this section is couched in terms of technosignature ‘lurkers’, but our formalism is general and can equally be applied to microbial colonies, molecular biomarkers, etc.

Background and context

An ongoing area of technosignature research considers physical artefacts and probes which may be hiding in our solar system, referred to as ‘lurkers’ (Benford, 2019). The idea is that if extraterrestrial civilizations are present in our galaxy, they may have seeded the galaxy with probes, which, either defunct, dormant or active, could be present in our solar system (Haqq-Misra and Kopparapu, 2012). The plausibility of this was argued by Hart and Tipler, for the time it takes self-replicating von Neumann devices to cover the galaxy is modelled to be much shorter than the age of the galaxy itself (Gray, 2015). Therefore, in spite of the considerable energy requirements and engineering challenges inherent in interstellar travel, a reasonable case can be made for past extraterrestrial visitation of our solar system which may have left behind artefacts (Freitas, 1983). Hart and Tipler's argument frames the absence of such lurkers as justification for the Fermi Paradox and pessimism about the prevalence of intelligent life in general (Gray, 2015). However, as deduced by Haqq-Misra and Kopparapu (2012), only a small fraction of the solar system's total volume has been imaged at sufficient resolution to rule out the presence of lurkers. Therefore it remains an open question warranting further investigation and dedicated search programmes.

This motivates us to apply our formalism to this setup to determine how to effectively allocate resources in order to maximize our chances of finding signs of lurkers. While a survey with unlimited resources may be able to scour every square inch of a planet (or moon, asteroid, etc.), any realistic mission will have budgetary and lifetime constraints. The two extreme strategies would then be to perform a cursory inspection across the entire planet area, but potentially miss signals with insufficient strength, or to perform a high resolution search in a small region of the planet. Here we show that the ideal mission forms a compromise between these two extremes, and deduce the optimal design to maximize chances of success (which in our language corresponds to the mission yielding most information).

Mathematical framework

The setup we consider consists of a survey with two-dimensional spatial resolution R, covering a fraction $f_R$ of the planet's area. We assume that our survey results in no detection (as otherwise, our concern with interpretive statistics becomes lessened considerably). We would like to compute the probability that at least one object is present, given a null result, p(N > 0|null), and want to optimize mission parameters so that this quantity is as small as possible given some constraints. Of course, this is equivalent to the formulation where we try to maximize the quantity p(N = 0|null).

To compute this, first note that, in the scenario where there are N objects uniformly spaced on the planet surface, all of size D, we have

(20)$$p( {\rm null}\vert N,\; D) = ( 1-f_R) ^{N}\, \theta( D-R) + \theta( R-D) $$

Here θ(x) is the Heaviside function, which is 1 for positive values and 0 for negative values. From here, we can see that a null result is guaranteed if our mission resolution is above the object size. If the mission does have sufficient resolution, we see that the probability of a null signal diminishes with increasing survey coverage or number of objects.

Inverting this expression requires priors for both the number of objects and their size. If the prior distribution of object sizes is p D(D), this expression can be integrated to find

(21)$$\eqalign{\,p( \rm{null}\vert {\it N}) & = \int {\rm d}D\, p_D( D) \, p( \rm{null}\vert {\it N},\; {\it D}) \cr & = 1-\left(1-( 1-f_R) ^N\right)P_D( D> R) }$$

And if the prior distribution of number of objects is p N(N), the total probability of nondetection is

(22)$$p( {\rm null}) = \sum p( {\rm null}\vert N) \, p_N( N) .$$

Then by Bayes’ theorem,

(23)$$p( N\vert {\rm null}) = {\,p( {\rm null}\vert N) p( N) \over p( {\rm null}) }.$$

Note that here, in the limit R → ∞, the survey is completely uninformative, and we have p(N|null) → p(N).

Then, the probability of at least one object being present given a null result is

(24)$$p( N> 0\vert {\rm null}) = 1-{\,p_N( N = 0) \over p( {\rm null}) }.$$

In the limit of complete survey f R → 1, we find

(25)$$p( N> 0\vert {\rm null}) \rightarrow{P_D( D< R) \over \Theta + P_D( D< R) },\; \quad \Theta = {P_N( N = 0) \over P_N( N> 0) }$$

reproducing the results of Haqq-Misra and Kopparapu (2012).

To simplify this general analysis, we now specialize to the case where the number of objects is constrained to be either 0 or $N^\ast$, $p_N( N) = ( 1-p^\ast ) \delta _{N, 0} + p^\ast \delta _{N, N^\ast }$, where $\delta _{i, j}$ is the Kronecker delta. Then we have

(26)$$p( N> 0\vert {\rm null}) = {\,p^\ast \, \left(P_D( D< R) + ( 1-f_R) ^{N^\ast }\, P_D( D> R) \right)\over 1-p^\ast \, \left(1-( 1-f_R) ^{N^\ast }\right)\, P_D( D> R) }$$

We display this expression for various parameter values in Fig. 5, which plots the ratio of mean lurker size to resolution D/R versus the coverage fraction $f_R$. In the first three panes, we assume the distribution of lurker diameters follows a normal distribution (with mean 1 and standard deviation 0.5). The aim of a mission should be to get as close to the ‘red region’ of these plots as possible given mission constraints. We overlay a constraint curve $A\sim \tan ^{-1}( 1/R)$ to encapsulate a typical area/resolution tradeoff, but this form specifies a family of curves with differing base values. For a given constraint curve, this will single out a unique mission design that minimizes this quantity (thus maximizing science return). This point is denoted by the white star in each of these plots.

Figure 5. The probability that lurkers exist on a surface as a function of mean diameter over resolution D/R and fraction of surface covered f R. The vertical dashed line corresponds to mean lurker diameter, and the dashed curve corresponds to a constraint on mission design. The white star is the point of lowest p(N > 0|null) restricting to the constraint. The top two and bottom left plots take a normal distribution for lurker diameter, and the lower right plot takes a power law.

Comparing the top two plots, we can see that the optimum point does not depend on the lurker probability $p^\ast$ at all, a fact that can be derived from the form of equation (26). Comparing to the lower left plot, we can see that the optimum point is shifted towards lower $f_R$ and higher D/R for larger $N^\ast$, which is reasonable, as there would be a greater probability of spotting a lurker in a given area if more are present on the planet surface. The bottom right plot assumes a power law distribution (with $p( D) \propto D^{-2}$) of lurker sizes to highlight the effect this has on our recommendation; again, this shifts the optimum point towards lower $f_R$ and higher D/R.
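The sketch below (our own illustration) evaluates equation (26) for the normal diameter distribution used in Fig. 5 and scans along a hypothetical arctangent-shaped coverage/resolution constraint to locate the optimal design; the constraint normalization is ours and is not taken from the paper:

```python
# Sketch of equation (26): probability that lurkers are present despite a null
# survey, for normally distributed lurker diameters, with a scan along a
# hypothetical coverage/resolution constraint to find the optimum (cf. Fig. 5).
import numpy as np
from scipy.stats import norm

def p_lurkers_given_null(f_R, R, p_star=0.5, N_star=10, D_mean=1.0, D_sigma=0.5):
    P_small = norm.cdf(R, loc=D_mean, scale=D_sigma)   # P(D < R): unresolvable
    P_big = 1.0 - P_small                              # P(D > R): resolvable
    miss = (1.0 - f_R) ** N_star
    num = p_star * (P_small + miss * P_big)
    den = 1.0 - p_star * (1.0 - miss) * P_big
    return num / den

# Hypothetical tradeoff (ours): sharper resolution means less area can be covered.
R_grid = np.linspace(0.05, 3.0, 500)
f_grid = (2 / np.pi) * np.arctan(R_grid / 0.5)
p_grid = [p_lurkers_given_null(f, R) for f, R in zip(f_grid, R_grid)]
best = int(np.argmin(p_grid))
print(f"optimal design: R = {R_grid[best]:.2f}, f_R = {f_grid[best]:.2f}, "
      f"p(N>0|null) = {p_grid[best]:.3f}")
```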

Direct application to solar system bodies

Here, we will demonstrate a direct application of this statistical framework, by examining the recommendations we find for a series of different bodies in the solar system. We use the spatial resolutions of planetary missions and surveys to establish upper limits on the probability of lurkers which may have gone undetected up until now, allowing us to make a series of recommendations for each solar system body/region of space. For this, we display the resolution and fraction of area covered for multiple missions to bodies around our solar system in Table 1.

Table 1. Upper limit diameters for lurkers on the surfaces of solar system bodies and regions of surveyed space

Original concept for this table and all numbers adapted from Lazio (2022).

The estimates quoted are for illustrative purposes only, and there are some nuances which the table does not account for. Our model makes the simplifying assumption that lurkers will be unambiguously distinguishable from background geological features. This may not necessarily be true, especially if they are deliberately camouflaged. For the same reason, even excluding the possibility of deliberate camouflage or stealth, using diameter as our only proxy for detectability has its limitations, because a lurker's albedo relative to that of its surroundings is also a factor. A low albedo lurker on a dark planetary surface may blend in, appearing to be a shadow of another feature or a dark rock. Conversely, a lurker with an albedo that contrasts more with the surrounding geology may be easier to detect. Additionally, these are remote sensing missions, and so any subsurface/sub-ocean areas would be inaccessible and are not accounted for in this application.

The values in Table 1 are displayed alongside our estimate for p(N > 0|null) in Fig. 6. As fiducial values, we use $p^\ast = 0.5$, $N^\ast = 10$ and 〈D〉 = 10 m, which is a reasonable upper estimate for the size of an interstellar probe (Freitas, 1983). From this, we can draw a number of conclusions:

  • Of the bodies considered, we are most certain that no lurkers exist on the moon, followed by Earth. This is because although the Earth has been mapped to higher resolution on the continents, the oceans prevent coverage for the majority of the surface.

  • The gradient at each point dictates which aspect of a particular mission should be improved. In a nutshell, whichever is worse between the covered fraction and resolution is what should be improved, but this formalism allows for that comparison to be made quantitatively.

  • Several bodies have near-complete low resolution maps, with small patches of high resolution. This allows us to compare which are more valuable for confirming the absence of lurkers. For Mars, for example, the high resolution images give us most confidence that lurkers are absent.

  • Of the bodies considered, we are least certain about the absence of lurkers on the outer solar system moons. Areas even further out, such as the Kuiper belt and Oort cloud, are even less certain.

  • Despite mapping efforts, many of these bodies still have p(N > 0|null) ≈ 0.5, which is our reference probability that lurkers are present. Therefore we are nowhere close to being able to claim that no lurkers are present in the solar system.

Figure 6. Probability that lurkers exist on the various solar system bodies listed in Table 1. This assumes an a priori probability of 1/2, and considers the scenario where 10 lurkers would be present, with a mean diameter of 10 m.

We may concern ourselves with the overall probability that lurkers are present in the total solar system, which is given by the compound probability of each body separately, $p( N> 0\vert {\rm null}) _{\rm tot} = 1-\prod _{b\in {\rm bodies}}( 1-p( N> 0\vert {\rm null}) _b)$. This total probability is most sensitive to the body with the highest $p( N> 0\vert {\rm null}) _b$, suggesting that if our goal is to minimize this quantity, we ought to prioritize the bodies we are least certain about first.
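As a small illustration (the per-body values below are hypothetical placeholders, not the values of Fig. 6), the compound probability can be evaluated as:

```python
# Sketch of the compound probability quoted above, with hypothetical per-body
# values of p(N > 0 | null) standing in for the entries of Fig. 6.
p_body = {"Moon": 0.05, "Earth": 0.10, "Mars": 0.30, "Europa": 0.48, "Kuiper belt": 0.50}

p_none = 1.0
for p in p_body.values():
    p_none *= (1.0 - p)
p_total = 1.0 - p_none
print(f"p(N>0|null) for the whole set: {p_total:.2f}")
```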

Further applications

What is the best way to determine a habitability boundary?

Supposing we do begin to detect biosignatures, we will become interested in the problem of determining the range of conditions for which life can persist. Though there are a number of variables that certainly play a role (temperature, planet size, star size, atmospheric mass, water content, ellipticity, obliquity, etc.), in this section we focus on a single abstract quantity t. We wish to outline a procedure for determining the range of t which can sustain life. For simplicity we focus on determining the lower bound, denote the current range as $( 0,\; t_1)$, and treat the variable as uniformly distributed.

In the scenario where all systems with habitable characteristics are guaranteed to possess biosignatures, this problem reduces to the standard method of bisecting the remaining interval until the desired precision is reached. The subtlety here is that if a planet does not exhibit the biosignature, we would not be able to infer that life cannot exist within that range, because presumably life does not inhabit every potentially habitable environment. Let us discuss the certain case first as a warm up, to outline how information gain can be used to arrive at this classic result.

As stated, before our measurement we know that the lower bound for t lies somewhere in the range $( 0,\; t_1)$. We will perform a measurement of a system with $t_0$. A priori, the probability of a detection (the probability that the lower bound is less than $t_0$) is $P( {\rm det}) = t_0/t_1\equiv c_0$, and the probability of no detection is $P( {\rm miss}) = 1-c_0$. If a signal is detected, then the updated distribution of possible lower bounds is $P_{\rm det} = {\cal U}( 0,\; t_0)$, and if no signal is detected, the distribution becomes $P( t_{\rm lower}) = {\cal U}( t_0,\; t_1)$. The expected information gain from this measurement is

(27)$$\eqalign{\langle S\rangle & = S( P_{\rm{det}}) P_{\rm{det}} + S( P_{\rm{miss}}) P_{\rm{miss}}\cr & = -c_0\log c_0-\left(1-c_0\right)\log\left(1-c_0\right)}$$

This quantity is maximized at $t_0 = t_1/2$, recovering the classic bisection method.

Now, let us generalize this to the case where life is only present on a fraction of all habitable planets, denoted f (independent of t in this section). In this case, the probability of detection is given by $P_{\rm det} = f\, c_0$, which is the product of the probability of the lower bound being smaller than the observed system's value and the probability that life is present. The absence of a detection could be either because the lower bound is greater than the system's value, or because no life happens to be present. So, the probability of no detection is given by the sum of two terms, $P_{\rm miss} = P( t_0< t_l) + P( t_l< t_0) ( 1-f) = 1-f\, c_0$. The expected information gain is given by

(28)$$\eqalign{\langle S\rangle & = -f c_0\log c_0 - ( 1-c_0) \log( 1-fc_0) \cr & \quad - ( 1-f) c_0\log\left({1-fc_0\over 1-f}\right)}$$

After some manipulation, it is found that this quantity is maximized at

(29)$$c_0 = {( 1-f) ^{1/f-1}\over 1 + f( 1-f) ^{1/f-1}}$$

When f → 1, this expression goes to 1/2, reproducing the certain case. When f → 0 it goes to 1/e, reminiscent of the classic secretary problem, which aims to determine an optimal value in complete absence of information.

The expected information gained by an optimal measurement is

(30)$$\langle S\rangle\vert _{\rm{opt}} = \log\left(1 + f( 1-f) ^{1/f-1}\right)$$

This goes to log 2 ≈ 0.69 nats (1 bit) in the certain case, and to f/e for small f, which is 1 nat multiplied by the detection probability.

We can also determine the convergence rate towards the true lower bound by looking at the expected maximum possible value at a given time step. Recall that in the certain case, this is given by $t_0/2 + t_1/2 = 3t_1/4$, so that this converges as $( 3/4) ^N$ after N time steps. In the uncertain case, the formula instead yields $t_0\, f c_0 + t_1( 1-f c_0)$. We do not display the full expression after substituting the optimal value of $c_0$ here, but instead report its limit for small f, which becomes $( 1-( ( e-1) /e^2) f) \, t_1$. This series converges as approximately $\exp( -0.232\, fN)$, which is considerably slower than the certain case.
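The optimum of equation (29) and the gain of equation (30) can be checked numerically. The sketch below (our own illustration) maximizes the expected information gain, written with the same sign convention as equations (27) and (30), over the measurement position $c_0$:

```python
# Sketch checking equations (28)-(30): numerically maximize the expected
# information gain over c0 = t0/t1 and compare with the closed-form optimum.
import numpy as np

def expected_gain(c, f):
    """Expected information gain for a measurement at c = t0/t1 (cf. equation (28))."""
    return -(f * c * np.log(c)
             + (1 - c) * np.log(1 - f * c)
             + (1 - f) * c * np.log((1 - f * c) / (1 - f)))

def c_opt(f):
    K = (1 - f) ** (1 / f - 1)      # equation (29)
    return K / (1 + f * K)

for f in (0.9, 0.5, 0.1):
    cs = np.linspace(1e-4, 1 - 1e-4, 100_000)
    c_num = cs[np.argmax(expected_gain(cs, f))]
    gain = np.log(1 + f * (1 - f) ** (1 / f - 1))    # equation (30)
    print(f"f={f}: numerical optimum {c_num:.4f}, equation (29) {c_opt(f):.4f}, "
          f"gain {gain:.4f} nats")
```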

What is the optimal amount of money to spend on different mission aspects?

Another application is in determining how much money to spend on each aspect of a mission. In this section we abstractly define a number of mission aspects $\{ a_i\}$, with the total number of observed planets scaling as $N_{\rm {tot}} = N_0\prod _i a_i^{q_i}$. Here, we assume that the total number scales with each aspect to some power. For example, the aspects could correspond to telescope diameter $D_t$, mission lifetime T, spectral resolution Δλ, etc., as found in Sandora and Silk (2020), where $N_{\rm {tot}}\propto D_t^{12/7}T^{3/7}\Delta \lambda ^{3/7}$. In the next subsection we treat a step-wise increase as well. Now, we assume that a mission has a fixed total budget $c_{\rm tot}$ apportioned amongst the aspects as $c_{\rm {tot}} = \sum _ic_i$. We would like to determine the optimal way of assigning each $c_i$ so as to maximize the number of observed planets, as well as the resultant scaling of mission yield with cost.

This is a straightforward convex optimization problem, which can be solved by finding the point where the gradient of the total yield with respect to each component budget vanishes, subject to the overall budget constraint. The solution is

(31)$$c_i = {q_i\over q_{\rm{tot}}}c_{\rm{tot}},\; \quad N_{\rm{tot}} = N_0{\prod_iq_i^{q_i}\over q_{\rm{tot}}^{q_{\rm{tot}}}}\, c_{\rm{tot}}^{q_{\rm{tot}}}$$

Here, we have defined $q_{\rm {tot}} = \sum _iq_i$, and assume nothing other than that each exponent is positive.

This yields a concrete optimal fraction of the budget to allocate to each mission aspect. Particularly noteworthy is the fact that the total yield scales superlinearly with budget for all practical applications of this formalism. This implies, among other things, that the cost per planet decreases dramatically with increased scale. The cost per information gain, however, does not, but is instead minimized at the value

(32)$$c_{\rm b} = {e\over \left(( {\prod_iq_i^{q_i}}/{q_{\rm{tot}}^{q_{\rm{tot}}}}) \, N_0\right)^{1/q_{\rm{tot}}}}$$

This minimum occurs because the information return depends only logarithmically on the total number, and so these diminishing returns must be balanced against initial startup costs. For the reference scenario above, taking spectral resolution, telescope diameter and mission length to each scale linearly with cost gives $q_{\rm tot} = 18/7$, and an optimal cost of $10^9 when $N_0 = 121.8$.

Several things to note about this expression: optimal pricing decreases slowly for missions with higher yield potential, and is smaller for missions that scale faster. If followed, this gives the optimal number of planets for a mission to observe as $N_{\rm opt} = e^{q_{\rm tot}}$, which results in $q_{\rm tot}$ nats of information. However, minimizing cost per information gain in this manner neglects the cost associated with mission development, which can be significantly higher than the costs for the duration of the mission. This is included in section ‘The marginal value theorem and mission lifetimes’, where we find that the recommended number of systems to observe is substantially larger.
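
As a check of these statements, the following sketch evaluates equation (32) for the reference scenario; we make no attempt to fix the overall cost units here, so the printed value of $c_{\rm b}$ is in whatever units $N_0$ is defined:

```python
import numpy as np

def breakeven_cost(q, N0):
    """Cost minimizing the cost per information gain, equation (32)."""
    q = np.asarray(q, dtype=float)
    q_tot = q.sum()
    return np.e / (np.prod(q**q) / q_tot**q_tot * N0) ** (1.0 / q_tot)

q = [12/7, 3/7, 3/7]                                      # reference scenario exponents
print("c_b   =", round(breakeven_cost(q, N0=121.8), 3))   # ~1.0 in the cost units of N0
print("N_opt =", round(np.exp(sum(q)), 1))                # e^{q_tot}, ~13 planets
print("info  =", round(sum(q), 2), "nats")                # q_tot nats of information
```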

When should an additional instrument be included?

In the previous subsection, we focussed on mission aspects that affected the mission yield continuously and monotonically. Some aspects do not scale this way; a particular case is when an aspect is either included or not, and its effectiveness cannot be improved beyond simply being present. An example would be whether to include a starshade for a space mission, or whether to include an instrument capable of making a specific measurement that would lead to higher yields.

First, we will consider the case in which the types of biosignatures being measured are not affected by the inclusion of a binary aspect, but only the total yield is. In this case, we can say that the total yield is $N_{\rm tot}(b,\; c) = N_0(b)\, c^{q_{\rm tot}}$, where $b$ is the binary aspect, and $N_0(b) = N_0$ if the aspect is absent and $N_0(b) = N_1$ if it is present. If the cost of the binary aspect is $c_b$, then this instrument should be included when

(33)$$N_1\left(c_{\rm{tot}}-c_b\right)^{q_{\rm{tot}}}> N_0\, c_{\rm{tot}}^{q_{\rm{tot}}}$$
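
A minimal sketch of this decision rule follows; all numerical values are assumed, purely for illustration:

```python
def include_binary_aspect(N0, N1, c_tot, c_b, q_tot):
    """Equation (33): include an aspect boosting the yield prefactor from
    N0 to N1 at cost c_b, if the yield with it exceeds the yield without."""
    return N1 * (c_tot - c_b)**q_tot > N0 * c_tot**q_tot

# Illustrative: a 50% boost to the yield prefactor, costing 10% of the budget.
print(include_binary_aspect(N0=100, N1=150, c_tot=1.0, c_b=0.1, q_tot=18/7))
```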

It may also happen that a binary aspect does not affect the overall number of planets observed, but allows the measurement of a different class of biosignatures. This is the case, for example, when deciding whether to include an instrument that can detect oxygen. In this scenario, the yield without this instrument is given by $N_{\rm tot}$, whereas with the instrument it is given by $f_{\rm bio}\, N_{\rm tot}^2$. In this case, the instrument should be included when

(34)$$f_{\rm{bio}}\, N_0\, \left(c_{\rm{tot}}-c_b\right)^{2q_{\rm{tot}}}> c_{\rm{tot}}^{q_{\rm{tot}}}$$

From this, we can see that the number of planets observed should be at least as large as $1/f_{\rm bio}$, plus somewhat more to counteract the fact that fewer systems will be observed with the instrument included. If the occurrence rate is too low, the mission will not be expected to observe a signal anyway, and so the measurement of the second class of biosignatures would be a waste.
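
The corresponding sketch for equation (34), again with assumed illustrative values, is:

```python
def include_second_biosignature(f_bio, N0, c_tot, c_b, q_tot):
    """Equation (34): include an instrument enabling a second biosignature
    class with occurrence rate f_bio, at cost c_b."""
    return f_bio * N0 * (c_tot - c_b)**(2 * q_tot) > c_tot**q_tot

# Illustrative: f_bio = 0.1, with the instrument costing 10% of the budget.
print(include_second_biosignature(f_bio=0.1, N0=100, c_tot=1.0, c_b=0.1, q_tot=18/7))
```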

The marginal value theorem and mission lifetimes

In this subsection we address the question of how long a mission should be run. The yield scales sublinearly with mission lifetime, as the brightest, most promising stars are targeted first, leaving the stars that require longer integration times for later in the mission. This gives diminishing returns as the mission goes on.

The marginal value theorem from optimal foraging theory (Charnov, 1976) can be applied to determine the point at which a mission should be abandoned in favour of a new mission with greater yield potential. Indeed, collecting exoplanet data closely resembles the process of sparse patch foraging, wherein samples are collected from a finite resource in order of desirability until the yield of a patch is poor enough to warrant the time investment to travel to another. The analogy is completed when we consider that it takes considerable time to design and construct missions. Though different missions are always being run in parallel, they consume a fixed portion of the budget which could otherwise have gone to some other mission.

We will consider a mission at fixed cost that takes a development period $t_{\rm dev}$ before it begins collecting data. Then the yield as a function of time is $N(t) = N_0(t-t_{\rm dev})^{q_t}$, where it should be understood that when this quantity is not positive, it is 0. It is convenient to nondimensionalize the variables in this expression by measuring time in years, so that $N_0$ represents the number of planets returned in the first year of operation. To determine the optimal stopping time we use the marginal value theorem, which maximizes the average information gain per unit time; this sets $\dot S(t_0) = S(t_0)/t_0$. This can then be solved for $t_0$, yielding

(35)$$t_{\rm op} = {1\over W_0\left(N_0^{1/q_t}t_{\rm dev}/e\right)}\, t_{\rm dev}$$

Here, $W_0(x)$ is the Lambert W function, which is the solution to the equation $W_0e^{W_0} = x$, and asymptotes to a logarithm for large arguments. So, it can be seen that the operation time scales not quite linearly with development time, and decreases with increasing first-year yield. The total science return over the mission lifetime is given by

(36)$$S( t_0) = q_t\left[W_0\left(N_0^{1/q_t}t_{\rm dev}/e\right) + 1\right] = q_t\left[{t_{\rm dev}\over t_{\rm op}} + 1\right]$$
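
Equations (35) and (36) are straightforward to evaluate numerically, for instance with scipy's implementation of the Lambert W function; the first-year yield, exponent and development time below are assumed values, used only for illustration:

```python
import numpy as np
from scipy.special import lambertw

def optimal_operation_time(N0, q_t, t_dev):
    """Operation time from equation (35): t_op = t_dev / W0(N0^(1/q_t) t_dev / e)."""
    w = lambertw(N0**(1.0 / q_t) * t_dev / np.e).real
    return t_dev / w

# Assumed values: 10 planets in the first year, q_t = 3/7, 10 yr development.
N0, q_t, t_dev = 10.0, 3.0 / 7.0, 10.0
t_op = optimal_operation_time(N0, q_t, t_dev)
S_tot = q_t * (t_dev / t_op + 1.0)      # equation (36), in nats
print(f"optimal operation time ~ {t_op:.1f} yr, total return ~ {S_tot:.1f} nats")
```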

When is it best to follow a prediction, and when is it best to challenge it?

We now turn our attention to the scenario where, according to some leading model of habitability, a particular locale should be devoid of life. According to this model, any attempt at measuring a biosignature around such a location would be fruitless, and the resources would have been better spent collecting data from a safer bet. However, we are unlikely to ever be $100\%$ confident in any of our habitability models, and so it is possible that testing the model will result in it being overturned. Here, we show that our framework allows us to determine a criterion for when this habitability model should be obeyed, and when it should be challenged, based on the confidence one assigns to its truthfulness. A concrete example would be whether to check hot Jupiters for biosignatures, as by most reasonable accounts they do not fulfil the requirements for life (Fortney et al., 2021).

As a toy example, let us consider the case where there are two possibilities: the leading theory, which dictates that life is impossible on some type of system, and a second theory, which assigns a nonzero chance that life may exist. The confidence we assign to the standard theory is denoted $p$. The prior distribution for the fraction of systems harbouring a signal is then

(37)$$p_b( f) = p\, \delta( f) + ( 1-p) \, \delta( f-f_2) $$

Here δ is the delta function, which equals 0 when its argument is nonzero and integrates to 1. More generally, we could consider a range of values for $f_2$, but this significantly complicates the analysis and leads to little new insight.

Now we determine the expected amount of information gained from a measurement of a system. According to the above prior, the probability of measuring no signal is $P_0 = p + (1-p)(1-f_2)$, and the probability of measuring a signal is $P_1 = (1-p)f_2$. Then the expected information gain $\langle\Delta S\rangle = P_0\, S(p(f|{\rm no\ detection})) + P_1\, S(p(f|{\rm detection})) - S(p_b(f))$ is:

(38)$$\eqalign{\langle\Delta S\rangle& = ( 1-p) ( 1-f_2) \log\left({( 1-p) ( 1-f_2) \over p + ( 1-p) ( 1-f_2) }\right)\cr & \quad- p\log( p + ( 1-p) ( 1-f_2) ) -( 1-p) \log( 1-p) }$$

If we further take the limits $p \rightarrow 1$, $f_2 \rightarrow 0$, this expression simplifies somewhat:

(39)$$\langle\Delta S\rangle\rightarrow( 1-p) \Big[\,f_2( 1-\log( 1-p) ) + ( 1-f_2) \log( 1-f_2) \Big]$$

From this, it can be seen that the expected amount of information gain is proportional to $(1-p)$, the probability that the theory's recommendation is incorrect.

Equation (38) is displayed in Fig. 7, where it can be seen that the expected information is maximized when $f_2$ is large and $p$ is small. As example values, take $\langle\Delta S(p = 0.9,\, f_2 = 0.1)\rangle = 0.02$ and $\langle\Delta S(p = 0.99,\, f_2 = 0.1)\rangle = 0.005$.

Figure 7. Information gain for measurement of an unlikely signal as a function of possible signal occurrence rate $f_2$ and confidence in the theory proclaiming its impossibility $p$.

To determine whether this measurement should be made, we need to compare it to an alternative measurement yielding a different amount of expected information gain, and see which is larger. In the standard case of measuring a biosignature of frequency $f$ well localized from 0, the information is $S = \tfrac{1}{2}\log(f(1-f)/N)$ after $N$ measurements, and so the information gain is $\Delta S = 1/(2N)$. So, for the case $p = 0.99$, $f_2 = 0.1$, the expected value for the risky measurement exceeds the standard after the standard measurement has been made 107 times. This suggests a strategy of collecting measurements around likely places first, until the gains become small enough that measuring the unlikely signal becomes more interesting, on the off chance that it could invalidate the current theory of habitability.
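
These numbers are simple to reproduce; the sketch below evaluates equation (38) directly and prints the crossover number of standard measurements, $1/(2\langle\Delta S\rangle)$:

```python
import numpy as np

def expected_info_gain(p, f2):
    """Expected information gain from testing the 'forbidden' locale, equation (38)."""
    P0 = p + (1 - p) * (1 - f2)          # probability of a null result
    return ((1 - p) * (1 - f2) * np.log((1 - p) * (1 - f2) / P0)
            - p * np.log(P0)
            - (1 - p) * np.log(1 - p))

print(round(expected_info_gain(0.9, 0.1), 3))     # ~0.02 nats
dS = expected_info_gain(0.99, 0.1)
print(round(dS, 3))                               # ~0.005 nats
print(round(0.5 / dS))                            # ~107 standard measurements to break even
```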

The above assumed that no measurement of the unlikely signal had been attempted previously. We may wish to determine how many times to challenge a theory before accepting it. For this scenario, if $N$ previous measurements have been made, the prior will be

(40)$$p_b( f) \propto p\, \delta( f) + ( 1-p) ( 1-f_2) ^N\delta( f-f_2) $$

with the constant of proportionality chosen so that the prior is normalized to 1. The expected information gain is, in the limit of N → ∞,

(41)$$\langle\Delta S\rangle\rightarrow {1-p\over p}\, N\, f_2\, ( 1-f_2) ^N\, \log\left({1\over 1-f_2}\right)$$

The full expression is plotted in Fig. 8. From here, we can clearly see a steep dropoff at $N > 1/f_2$, beyond which empirical evidence strongly indicates the absence of a signal.
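
One way to evaluate the full expression is to renormalize the prior of equation (40) into an effective confidence and re-apply equation (38); the sketch below does this for assumed values $p = 0.5$ and $f_2 = 0.1$, showing the suppression setting in beyond $N \approx 1/f_2$:

```python
import numpy as np

def expected_info_gain(p, f2):
    """Expected information gain for the two-point prior, equation (38)."""
    P0 = p + (1 - p) * (1 - f2)
    return ((1 - p) * (1 - f2) * np.log((1 - p) * (1 - f2) / P0)
            - p * np.log(P0)
            - (1 - p) * np.log(1 - p))

def gain_after_failures(p, f2, N):
    """Renormalize the prior of equation (40) after N null results into an
    effective confidence, then apply equation (38) to the next measurement."""
    pN = p / (p + (1 - p) * (1 - f2)**N)
    return expected_info_gain(pN, f2)

p, f2 = 0.5, 0.1          # assumed illustrative values
for N in (0, 5, 10, 20, 40, 80):
    print(N, round(gain_after_failures(p, f2, N), 4))
```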

Figure 8. Information gain of an unlikely signal after a series of N unsuccessful measurements. Above $N\gtrsim 1/f_2$ the likelihood of measuring a signal becomes exponentially suppressed.

Discussion

In this work, we have utilized the concept of information gain as a method to assess the value of biosignature missions, and demonstrated that it is capable of generating nontrivial recommendations for mission design and target prioritization across various domains. Throughout, this has been less a first-principles approach, whereby any mission setup can automatically be loaded into a master equation that straightforwardly generates recommendations, than a style of thinking about missions in terms of the quality of the knowledge they generate as a function of mission parameters. The concept of information explicitly depends on the quantities that are abstracted out of the raw data a mission would return, and so at the outset any analysis of this sort relies on value judgements on the part of the practitioner. We have illustrated that this process does not always yield a unique encapsulation of the data, as for instance when we chose to detect population trends through the distribution of the occurrence rate ratio rather than some other measure. While there is no guarantee that the recommendations will be independent of the way we set up the problem, the advantage of this framework is that it makes the mission goals and assumptions explicit when deriving recommendations.

At this point we'd like to reiterate that we are ardently against using information gain as a quantity to maximize when designing missions, since attempts to substitute quantitative proxies for vague overarching goals often result in pathological maximization procedures, to the detriment of the original intent (Chrystal et al., 2003). Rather, we advocate the use of this framework for generating rules of thumb and an intuitive understanding of when certain mission aspects ought to be improved upon. Additionally, we note that this framework explicitly depends upon an honest and accurate assessment of mission capabilities. Even with rigorous design practices, there often remain uncertainties in instrumental sensitivities, background noise and the plausibility of modelling scenarios that prevent a perfect assessment of the relevant parameters. On top of this, human biases preclude even perfect experts from making impartial assessments of any new venture (Pohorille and Sokolowska, 2020). Nevertheless, our formalism is useful for determining the sensitivity of our recommendations to these unknowns.

We hope that, by demonstrating the practicality of this framework in a variety of cases, we have made it productive for the reader to apply to other, new problems. To this end, our approach has often been the following: first, we seek out a continuous quantity that can be measured with a particular mission setup. Next, we compute the distribution of values compatible with observations, given the model parameters and any other additional assumptions that must be made. Lastly, we vary the mission parameters, subject to any relevant constraints such as finite budget, lifetime or sample size, and determine the combination that equivalently maximizes information, minimizes uncertainty or maximizes the probability of the desired outcome. Our hope is that this may be useful as a tool to aid in determining the design parameters of future missions, providing transparency and impartiality in selecting among upcoming missions and ultimately maximizing our chances of successfully detecting biosignatures.

Data

Code and data used in this paper are located here: github.com/SennDegrare/Bio-signatures.

Acknowledgements

We would like to thank Jacob Haqq Misra, Joe Lazio and Joe Silk for fruitful discussions.

Competing interest

The authors report no conflict of interest.

Footnotes

* Equal contribution

1 This is dependent on the choice of the prior distribution for f, which we take here to be uniform. However, using other priors does not change the behaviour of the population mean and variance by much (Sandora and Silk, 2020).

2 If a quantity is not uniformly distributed, its cumulative distribution function can be used as t everywhere in this section.

References

Abelson, PH (1989) Voyager 2 at Neptune and Triton. Science 246, 1369.
Affholder, A, Guyot, F, Sauterey, B, Ferrière, R and Mazevet, S (2021) Bayesian analysis of Enceladus's plume data to assess methanogenesis. Nature Astronomy 5, 805–814.
Bains, W and Petkowski, JJ (2021) Astrobiologists are rational but not Bayesian. International Journal of Astrobiology 20, 312–318.
Balbi, A, Hami, M and Kovačević, A (2020) The habitability of the galactic bulge. Life 10.
Barber, DJ and Scott, ER (2002) Origin of supposedly biogenic magnetite in the Martian meteorite Allan Hills 84001. Proceedings of the National Academy of Sciences 99, 6556–6561.
Benford, J (2019) Looking for lurkers: co-orbiters as SETI observables. The Astronomical Journal 158, 150.
Casella, G and Berger, R (2001) Statistical Inference. Cambridge: Duxbury Resource Center.
Catling, DC, Krissansen-Totton, J, Kiang, NY, Crisp, D, Robinson, TD, DasSarma, S, Rushby, AJ, Del Genio, A, Bains, W and Domagal-Goldman, S (2018) Exoplanet biosignatures: a framework for their assessment. Astrobiology 18, 709–738.
Charnov, EL (1976) Optimal foraging, the marginal value theorem. Theoretical Population Biology 9, 129–136.
Chrystal, KA, Mizen, PD and Mizen, P (2003) Goodhart's law: its origins, meaning and implications for monetary policy. Central Banking, Monetary Theory and Practice: Essays in Honour of Charles Goodhart 1, 221–243.
Cockell, CS, Kaltenegger, L and Raven, JA (2009) Cryptic photosynthesis – extrasolar planetary oxygen without a surface biological signature. Astrobiology 9, 623–636.
Daflon, S and Cunha, K (2004) Galactic metallicity gradients derived from a sample of OB stars. The Astrophysical Journal 617, 1115.
Figueredo, PH and Greeley, R (2000) Geologic mapping of the northern leading hemisphere of Europa from Galileo solid-state imaging data. Journal of Geophysical Research: Planets 105, 22629–22646.
Fischer, DA and Valenti, J (2005) The planet–metallicity correlation. The Astrophysical Journal 622, 1102.
Foote, S, Sinha, P, Mathis, C and Walker, SI (2022) False positives and the challenge of testing the alien hypothesis. Preprint, arXiv:2207.00634.
Fortney, JJ, Dawson, RI and Komacek, TD (2021) Hot Jupiters: origins, structure, atmospheres. Journal of Geophysical Research: Planets 126, e2020JE006629.
Freitas, RA (1983) The search for extraterrestrial artifacts (SETA). Journal of the British Interplanetary Society 36, 501–506.
Freitas, RA Jr (1985) There is no Fermi paradox. Icarus 62, 518–520.
Gray, RH (2015) The Fermi paradox is neither Fermi's nor a paradox. Astrobiology 15, 195–199.
Green, J, Hoehler, T, Neveu, M, Domagal-Goldman, S, Scalice, D and Voytek, M (2021) Call for a framework for reporting evidence for life beyond Earth. Nature 598, 575–579.
Haqq-Misra, J and Kopparapu, RK (2012) On the likelihood of non-terrestrial artifacts in the solar system. Acta Astronautica 72, 15–20.
Harman, CE and Domagal-Goldman, S (2018) Biosignature false positives. Technical report, Springer International Publishing.
Hawkins, S, Boldt, J, Darlington, E, Espiritu, R, Gold, R, Gotwols, B, Grey, M, Hash, C, Hayes, J, Jaskulek, S, Kardian, C, Keller, M, Malaret, E, Murchie, S, Murphy, P, Peacock, K, Prockter, L, Reiter, R, Robinson, M, Schaefer, E, Shelton, R, Sterner, R, Taylor, H, Watters, T and Williams, B (2007) The Mercury Dual Imaging System on the MESSENGER spacecraft. Space Science Reviews 131, 247–338.
Jiménez-Torres, JJ, Pichardo, B, Lake, G and Segura, A (2013) Habitability in different Milky Way stellar environments: a stellar interaction dynamical approach. Astrobiology 13, 491–509.
Johnson, JL and Li, H (2012) The first planets: the critical metallicity for planet formation. The Astrophysical Journal 751, 81.
Johnson, S, Graham, H, Anslyn, E, Conrad, P, Cronin, L, Ellington, A, Elsila, J, Girguis, P, House, C and Kempes, C (2019) Agnostic approaches to extant life detection. Mars Extant Life: What's Next? 2108, 5026.
Keeter, B (2017) Pluto's icy plains in highest-resolution views from New Horizons. https://www.nasa.gov/image-feature/pluto-s-icy-plains-in-highest-resolution-views-from-new-horizons (Accessed 31 January 2023).
Kirk, RL, Howington-Kraus, E, Rosiek, MR, Anderson, JA, Archinal, BA, Becker, KJ, Cook, DA, Galuszka, DM, Geissler, PE, Hare, TM, Holmberg, IM, Keszthelyi, LP, Redding, BL, Delamere, WA, Gallagher, D, Chapel, JD, Eliason, EM, King, R and McEwen, AS (2008) Ultrahigh resolution topographic mapping of Mars with MRO HiRISE stereo images: meter-scale slopes of candidate Phoenix landing sites. Journal of Geophysical Research: Planets 113.
Lazio, J (2022) Technosignatures in the solar system. In The First Penn State SETI Symposium. Zenodo.
Levine, TR, Weber, R, Hullett, C, Park, HS and Lindsey, LLM (2008) A critical assessment of null hypothesis significance testing in quantitative communication research. Human Communication Research 34, 171–187.
Lineweaver, CH, Fenner, Y and Gibson, BK (2004) The galactic habitable zone and the age distribution of complex life in the Milky Way. Science 303, 59–62.
Lorenz, RD (2019) Calculating risk and payoff in planetary exploration and life detection missions. Advances in Space Research 64, 944–956.
Lyu, Z, Shao, N, Akinyemi, T and Whitman, WB (2018) Methanogenesis. Current Biology 28, R727–R732.
Meadows, V, Graham, H, Abrahamsson, V, Adam, Z, Amador-French, E, Arney, G, Barge, L, Barlow, E, Berea, A, Bose, M, Bower, D, Chan, M, Cleaves, J, Corpolongo, A, Currie, M, Domagal-Goldman, S, Dong, C, Eigenbrode, J, Enright, A, Fauchez, TJ, Fisk, M, Fricke, M, Fujii, Y, Gangidine, A, Gezer, E, Glavin, D, Grenfell, JL, Harman, S, Hatzenpichler, R, Hausrath, L, Henderson, B, Johnson, SS, Jones, A, Hamilton, T, Hickman-Lewis, K, Jahnke, L, Kacar, B, Kopparapu, R, Kempes, C, Kish, A, Krissansen-Totton, J, Leavitt, W, Komatsu, Y, Lichtenberg, T, Lindsay, M, Maggiori, C, Marais, D, Mathis, C, Morono, Y, Neveu, M, Ni, G, Nixon, C, Olson, S, Parenteau, N, Perl, S, Quinn, R, Raj, C, Rodriguez, L, Rutter, L, Sandora, M, Schmidt, B, Schwieterman, E, Segura, A, Şekerci, F, Seyler, L, Smith, H, Soares, G, Som, S, Suzuki, S, Teece, B, Weber, J, Simon, FW, Wong, M and Yano, H (2022) Community report from the Biosignatures Standards of Evidence Workshop. Preprint, arXiv:2210.14293.
Mousis, O, Lunine, J, Waite, J, Magee, B, Lewis, W, Mandt, K, Marquer, D and Cordier, D (2016) Formation conditions of Enceladus and origin of its methane reservoir. The Astrophysical Journal 701, 39–42.
Mukherjee, R and Sen, B (2019) On efficiency of the plug-in principle for estimating smooth integrated functionals of a nonincreasing density. Electronic Journal of Statistics 13, 4416–4448.
Neveu, M, Hays, LE, Voytek, MA, New, MH and Schulte, MD (2018) The ladder of life detection. Astrobiology 18, 1375–1402.
Olkin, CB, Spencer, JR, Grundy, WM, Parker, AH, Beyer, RA, Schenk, PM, Howett, CJ, Stern, SA, Reuter, DC, Weaver, HA, Young, LA, Ennico, K, Binzel, RP, Buie, MW, Cook, JC, Cruikshank, DP, Ore, CMD, Earle, AM, Jennings, DE, Singer, KN, Linscott, IE, Lunsford, AW, Protopapa, S, Schmitt, B, Weigle, E and the New Horizons Science Team (2017) The global color of Pluto from New Horizons. The Astronomical Journal 154, 258.
Park, RS, Vaughan, AT, Konopliv, AS, Ermakov, AI, Mastrodemos, N, Castillo-Rogez, JC, Joy, SP, Nathues, A, Polanskey, CA, Rayman, MD, Riedel, JE, Raymond, CA, Russell, CT and Zuber, MT (2019) High-resolution shape model of Ceres from stereophotoclinometry using Dawn imaging data. Icarus 319, 812–827.
Patterson, GW, Collins, GC, Head, JW, Pappalardo, RT, Prockter, LM, Lucchitta, BK and Kay, JP (2010) Global geological mapping of Ganymede. Icarus 207, 845–867.
Pham-Gia, T (2000) Distributions of the ratios of independent beta variables and applications. Communications in Statistics - Theory and Methods 29, 2693–2715.
Pohorille, A and Sokolowska, J (2020) Evaluating biosignatures for life detection. Astrobiology 20, 1236–1250.
Prantzos, N (2008) On the ‘galactic habitable zone’. Strategies of Life Detection, pp. 313–322.
Rauber, T, Braun, T and Berns, K (2008) Probabilistic distance measures of the Dirichlet and beta distributions. Pattern Recognition 41, 637–645.
Roatsch, Th, Waehlisch, M, Giese, B, Hoffmeister, A, Matz, K-D, Scholten, F, Kuhn, A, Wagner, R, Neukum, G, Helfenstein, P and Porco, C (2008) High-resolution Enceladus atlas derived from Cassini-ISS images. Planetary and Space Science 56, 109–116.
Robinson, MS, Brylow, SM, Tschimmel, M, Humm, D, Lawrence, SJ, Thomas, PC, Denevi, BW, Bowman-Cisneros, E, Zerr, J, Ravine, MA, Caplinger, MA, Ghaemi, FT, Schaffner, JA, Malin, MC, Mahanti, P, Bartels, A, Anderson, J, Tran, TN, Eliason, EM, McEwen, AS, Turtle, E, Jolliff, BL and Hiesinger, H (2010) Lunar Reconnaissance Orbiter Camera (LROC) instrument overview. Space Science Reviews 150, 81–124.
Sandora, M and Silk, J (2020) Biosignature surveys to exoplanet yields and beyond. Monthly Notices of the Royal Astronomical Society 495, 1000–1015.
Saunders, RS, Spear, AJ, Allin, PC, Austin, RS, Berman, AL, Chandlee, RC, Clark, J, Decharon, AV, De Jong, EM, Griffith, DG, Gunn, JM, Hensley, S, Johnson, WTK, Kirby, CE, Leung, KS, Lyons, DT, Michaels, GA, Miller, J, Morris, RB, Morrison, AD, Piereson, RG, Scott, JF, Shaffer, SJ, Slonski, JP, Stofan, ER, Thompson, TW and Wall, SD (1992) Magellan mission summary. Journal of Geophysical Research: Planets 97, 13067–13090.
Sheikh, SZ (2020) Nine axes of merit for technosignature searches. International Journal of Astrobiology 19, 237–243.
Waite, JH, Glein, CR, Perryman, RS, Teolis, BD, Magee, BA, Miller, G, Grimes, J, Perry, ME, Miller, KE, Bouquet, A, Lunine, JI, Brockwell, T and Bolton, SJ (2017) Cassini finds molecular hydrogen in the Enceladus plume: evidence for hydrothermal processes. Science 356, 155–159.
Westall, F, Foucher, F, Bost, N, Bertrand, M, Loizeau, D, Vago, JL, Kminek, G, Gaboyer, F, Campbell, KA, Bréhéret, J-G, Gautret, P and Cockell, CS (2015) Biosignatures on Mars: what, where, and how? Implications for the search for Martian life. Astrobiology 15, 998–1029.
Yung, YL, Chen, P, Nealson, K, Atreya, S, Beckett, P, Blank, JG, Ehlmann, B, Eiler, J, Etiope, G, Ferry, JG, Forget, F, Gao, P, Hu, R, Kleinböhl, A, Klusman, R, Lefèvre, F, Miller, C, Mischna, M, Mumma, M, Newman, S, Oehler, D, Okumura, M, Oremland, R, Orphan, V, Popa, R, Russell, M, Shen, L, Lollar, BS, Staehle, R, Stamenković, V, Stolper, D, Templeton, A, Vandaele, AC, Viscardy, S, Webster, CR, Wennberg, PO, Wong, ML and Worden, J (2018) Methane on Mars and habitability: challenges and responses. Astrobiology 18, 1221–1242.
Zurek, RW and Smrekar, SE (2007) An overview of the Mars Reconnaissance Orbiter (MRO) science mission. Journal of Geophysical Research: Planets 112.