
Dead rats, dopamine, performance metrics, and peacock tails: Proxy failure is an inherent risk in goal-oriented systems

Published online by Cambridge University Press:  26 June 2023

Yohan J. John
Affiliation:
Neural Systems Laboratory, Department of Health and Rehabilitation Sciences, Boston University, Boston, MA, USA [email protected]
Leigh Caldwell
Affiliation:
Irrational Agency, London, UK [email protected]
Dakota E. McCoy
Affiliation:
Department of Materials Science and Engineering, Stanford University, Stanford, CA, USA [email protected] Hopkins Marine Station, Stanford University, Pacific Grove, CA, USA Department of Biology, Duke University, Durham, NC, USA
Oliver Braganza*
Affiliation:
Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany Institute for Socioeconomics, University of Duisburg-Essen, Duisburg, Germany
*
Corresponding author: Oliver Braganza; Email: [email protected]

Abstract

When a measure becomes a target, it ceases to be a good measure. For example, when standardized test scores in education become targets, teachers may start “teaching to the test,” leading to a breakdown of the relationship between the measure – test performance – and the underlying goal – quality education. Similar phenomena have been named and described across a broad range of contexts, such as economics, academia, machine learning, and ecology. Yet it remains unclear whether these phenomena bear only superficial similarities, or whether they derive from some fundamental unifying mechanism. Here, we propose such a unifying mechanism, which we label proxy failure. We first review illustrative examples and their labels, such as the “cobra effect,” “Goodhart's law,” and “Campbell's law.” Second, we identify central prerequisites and constraints of proxy failure, noting that it is often only a partial failure or divergence. We argue that whenever incentivization or selection is based on an imperfect proxy measure of the underlying goal, a pressure arises that tends to make the proxy a worse approximation of the goal. Third, we develop this perspective for three concrete contexts, namely neuroscience, economics, and ecology, highlighting similarities and differences. Fourth, we outline consequences of proxy failure, suggesting it is key to understanding the structure and evolution of goal-oriented systems. Our account draws on a broad range of disciplines, but we can only scratch the surface within each. We thus hope the present account elicits a collaborative enterprise, entailing both critical discussion and extensions into contexts we have missed.

Type
Target Article
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

1. Introduction

In the spring of 1902, French colonial officials in Hanoi, fearful of the bubonic plague, declared war against a self-inflicted rat infestation (Vann, 2003). Officials incentivized rat catchers by offering a reward for every delivered corpse. In the following months the number of delivered rat corpses increased exponentially, yet the underground population seemed unaffected. As the heaps of corpses grew and became a nuisance, officials started rewarding the delivery of rat tails rather than whole animals. At the same time the city expanded its incentive scheme to the general population, promising a 1-cent bounty for every tail delivered. Residents quickly began to deliver thousands of tails. However, increasing numbers of rats without tails were soon observed scurrying through the city, perhaps left alive to breed and thereby supply the newly valuable tails. Even worse, enterprising individuals began to breed rats, farming the tails in order to collect rewards more efficiently.

This is an example of what is known in economics as Goodhart's law: “When a measure becomes a target, it ceases to be a good measure” (Goodhart, 1975; Strathern, 1997). The measure (rat corpses or tails) is an operational proxy for some goal or purpose (reduction of the rat population). However, when it becomes the target, its correlation with this goal decreases or, in extreme cases, disappears, leading to unintended and often adverse outcomes. In this case the rat population of Hanoi ended up surging when the programme was terminated: The now-worthless rats were set free in the city. The proxy failed both to approximate and to guide goal-oriented action. As we will demonstrate, “Goodhart-like” phenomena have been discovered and rediscovered across a broad range of contexts and scales, ranging from centralized governance to distributed social systems and from evolutionary competition to artificial intelligence (see Table 1). Although the physical mechanisms vary from case to case, several structural features recur throughout (see, e.g., Biagioli & Lippman, 2020; Braganza, 2022b; McCoy & Haig, 2020), indicating that the similarities are not superficial.

Table 1. Examples of proxy failure

Examples from different literatures and across scales in which Goodhart's law or one of its analogues has been explicitly or implicitly invoked. Note that many examples refer to reviews or theoretical integrations of primary literature within the respective fields. Instances that have led to the coining of “laws” (or similar) are highlighted in boldface; rows with no boldface entry generally still refer to at least one such law. Our examples fall into five domains, highlighted by colour. Yellow: social systems with a central regulator; blue: machine-learning or artificial-intelligence research; red: distributed social systems relying on metrics; green: ecological systems; grey: neuroscience.

Here, we will draw out these structural commonalities to reveal a single core phenomenon: When the pursuit of a goal depends on the optimization of an imperfect proxy in a complex system, a pressure emerges that pushes the proxy away from the goal. Optimization here means that some regulatory feedback mechanism promotes the increase of the proxy via incentivization, reinforcement, selection, or some other means. We propose that the disparate “Goodhart-like” phenomena can be unified under the more descriptive term proxy failure. Similar to the notion of “market failure,” the term is not meant to imply a complete failure, but only a divergence of the proxy from the underlying goal. The aim of this paper is twofold: (1) to set up a unified theoretical perspective that facilitates recognition of proxy failure across biological and social scales, and (2) to explore implications of this perspective. In particular, recognizing how the mechanisms and constraints of proxy failure overlap or differ across diverse systems should help to understand when and how it can be mitigated, or indeed harnessed.

The remainder of the paper is structured as follows. Section 2 briefly reviews examples of proxy divergence across contexts. Section 3 extracts and analyses the common pattern underlying these examples: Proxy-based optimization of regulatory goals. Section 4 elaborates these ideas in three concrete contexts: neuroscience (where we are, to the best of our knowledge, the first to apply the framework of proxy failure), economics, and ecology. Section 5 discusses consequences of proxy failure across biological and social systems, including how it can drive complexity and how it affects the structure and behaviour of nested hierarchical systems. Section 6 concludes.

2. A brief overview

The best-known label for proxy failure may be Goodhart's law, traced to the economist Charles Goodhart in the context of monetary policy (Goodhart, 1975). However, the law is closely related to a jumble of other laws, fallacies, and principles (Table 1), some of which predate Goodhart (Csiszar, 2020; Merton, 1940; Rodamar, 2018). They occur when a regulator or administrator with some goal incentivizes citizens or employees. The latter then identify weaknesses that allow them to “game” or “hack” the incentive system intentionally (Table 1, yellow area). For instance, the McNamara fallacy, the Lucas critique, and Goodhart's law all involve governmental incentive schemes that are “gamed” by soldiers or citizens. We write “gamed” in scare-quotes, because agents can also be understood to simply be responding rationally to incentives (Lucas, 1976). The economics literature abounds with analogous cases: A “principal's” incentive schemes are “gamed” by rational agents (Baker, 2002; Bénabou & Tirole, 2016; Bonner & Sprinkle, 2002; Kerr, 1975; we will discuss this in more detail in sect. 4.2).

In such social systems, it is clear that human intentions play a key role in proxy failure. But recent research in machine learning and artificial intelligence (AI) has shown that intentional gaming is not necessary for proxy failure to occur. Instead, it is a consequence of proxy-based optimization itself (Table 1, blue area; Amodei et al., 2016; Ashton, 2020; Beale, Battey, Davison, & MacKay, 2020; Manheim & Garrabrant, 2018). The AI literature is beginning to uncover the causal (Ashton, 2020; Everitt, Hutter, Kumar, & Krakovna, 2021) and statistical (Beale et al., 2020; Zhuang & Hadfield-Menell, 2021) foundations of the phenomenon. Indeed, AI appears to be an ideal microcosm in which to study Goodhart's law, its variants (Demski & Garrabrant, 2020; Manheim & Garrabrant, 2018), mitigation strategies (Amodei et al., 2016; Everitt et al., 2021; Thomas & Uminsky, 2022), and the potentially severe adverse social consequences (Bostrom, 2014; O'Neil, 2016; Pueyo, 2018; Zuboff, 2019). A full analysis of proxy failure in AI is beyond the scope of the present paper; see the citations above for excellent treatments.

An additional set of studies raises the question of whether a well-defined human regulator with a clear goal is required for proxy failure to occur. These studies describe proxy failure in distributed social systems, such as academia or markets, where the goal may be a dynamically changing social agreement (Table 1, red area; Biagioli & Lippman, 2020; Braganza, 2022b; Fire & Guestrin, 2019; Poku, 2016). For instance, the academic publication process (Biagioli & Lippman, 2020; Braganza, 2020; Smaldino & McElreath, 2016), education (Nichols & Berliner, 2005), healthcare (O'Mahony, 2018; Poku, 2016), and competitive markets (Braganza, 2022a, 2022b; Mirowski & Nik-Khah, 2017) appear to be subject to proxy failure, even though it is difficult to clearly define the respective goals. To the best of our knowledge, the phenomenon has not yet been explicitly explored in democratic elections, but practices such as “vote buying” (Finan & Schechter, 2012) or insincere election promises (Thomson et al., 2017) clearly fit the pattern. Building on these insights, Goodhart's law has recently been invoked in yet another context, namely ecology (Table 1, green area; McCoy & Haig, 2020). In the present paper, we add neuroscience to the list (Table 1, grey area; sect. 4.1).

Together, these examples raise the question of whether proxy failure is a fundamental risk in any complex, goal-oriented system.

3. A unified theoretical perspective

Despite the striking similarity between the diverse phenomena in Table 1, a unified explanation has not yet been offered. To this end, we first set up a consistent terminology (sect. 3.1). Briefly, we frame proxy failure in terms of a regulator with a goal, who uses a proxy to incentivize or select agents in service of this goal. Our account can be summarized in the following propositions and corollaries (Table 2), which we expand on in the remainder of the paper.

Table 2. Key propositions and their corollaries

3.1. A unifying terminology: Regulator, goal, agent, proxy

To facilitate comparisons across disparate instances of proxy failure, we define four inter-related terms: regulator, goal, agent, and proxy (Box 1). A regulator is any entity or system with an apparent goal. To pursue this goal, the regulator incentivizes or selects agents based on some proxy. The agents in turn pursue the proxy, or are selected on the basis of the proxy. We will now examine each term in sequence.

Box 1. Glossary of terms

A regulator is any entity with an apparent goal. To pursue this goal, the regulator incentivizes or selects agents based on some proxy.

Regulator: the part of the system that pursues a goal by influencing the agents' behaviour or properties using some form of feedback.

Goal: the state of the system that the regulator seeks to establish.

Agent: an entity, process, or abstract system that is the target of regulation.

Proxy: an output or property of each agent that the regulator uses to approximate the goal. This may be a cue or signal in biology, or a performance metric or indicator in social contexts.

A regulator is the part of the system that pursues a goal by influencing the agents' behaviour or properties using some form of feedback. The regulator predicates feedback on agents' performance in terms of the proxy. As we will explore below, the regulator may be not only a distributed system (e.g., a market) or an institution (a government) but also a person, a neural subsystem (the dopamine system), or an ecological subpopulation (peahens). Conscious awareness is not required for regulatory feedback, which can occur via selection or reinforcement (rewards and punishments). In the case of the Hanoi rat massacre, the regulator was the French colonial government, which pursued its goal of reducing the rat population through feedback in the form of monetary incentives. In the context of sexual selection, the regulator is the population of potential mating partners and the feedback is access to mating opportunities.

A goal refers to the state of the system that the regulator seeks to establish. In human social contexts, such goals are sometimes explicit and simple (reducing a rat population, increasing shareholder value) and sometimes difficult to precisely define (social welfare, improved education, or scientific progress). The degree to which goals are captured by their respective proxy measures may vary considerably (Braganza, 2022b), even though many proxies were formulated explicitly to quantify abstract goals (e.g., test scores in education or the journal impact factor in academia). In nonhuman contexts, such as in ecology, explicit goals cannot generally be assumed. A peahen may act as if to optimize her offspring's fitness, but she might not consciously represent or monitor this goal. In such cases, imputing goals nevertheless remains an efficient way to account for empirical findings, a perspective known as teleonomic reasoning (Mayr, 1961; Appendix 1).

An agent is an entity, process, or abstract system that is the target of regulation. Agents are subject to competition, selection, or reinforcement on the basis of some output or property. They need not be human or conscious (or even active in any sense), but they must possess multiple ways of producing or expressing a proxy, which can be influenced by feedback from the regulator. In the case of the Hanoi rat massacre, the agents were the people bringing in rat corpses and tails. In educational systems, the agents are teachers who end up “teaching to the test.” In the brain, agents are neural representations “competing” for neural resources (see sect. 4.1). In sexual selection, agents are peacocks trying to appear attractive to choosy peahens.

A proxy is an output or property of each agent that the regulator uses as an approximation of the goal, or the extent to which a goal is achieved. An agent's “performance” in terms of the proxy is the key factor determining the regulator's feedback. Examples of proxies (and their respective goals) include: The number of rat tails (approximating rat population size) in the Hanoi example; test scores (approximating educational quality); peacock plumage (approximating offspring quality); and dopamine signals (approximating value in decision making). The proxy can typically be expressed as a scalar metric (i.e., a magnitude). This scalar may be aggregated from multidimensional inputs or serve as a summary of multiple metrics. The regulator uses the proxy to adjust its feedback to the agents (reward, ranking, selection, or breeding opportunity). This induces agents to increase proxy production, either actively or via passive selection. The term proxy focuses attention on the fact that the metric should approximate a goal, and that the fidelity of approximation is key.

A regulator with respect to one system may be an agent when viewed as part of a different system. For instance, in many cases of sexual competition, males are regulated by female choices, but females are themselves competing against each other. Furthermore, agents and regulators may be organized in nested hierarchies (Ellis & Kopel, 2019; Haig, 2020; Kirchhoff, Parr, Palacios, Friston, & Kiverstein, 2018), where agents at one level are regulators to a lower level (consider a corporation where general management regulates departments, which in turn regulate subdepartments, etc.; more on this in sect. 5). In this context we should note that we will employ anthropomorphic language in the description of both human and nonhuman systems, simply because it is useful and concise (Ågren & Patten, 2022; Dennett, 2009; Appendix 1).

3.2. Why proxies are imperfect: Limits of legibility and prediction, and the necessity to choose

It is natural to wonder if a regulator could dispense with proxies and focus directly on the goal. We argue that this is often very difficult or impossible. Proxies are almost inevitable in complex goal-oriented systems. A regulator, even if it takes the form of a distributed process, faces three limitations: Restricted legibility; imperfect prediction; and the necessity to choose. These limitations also apply to agents.

First, regulators are limited by legibility (Scott, 2008). Many abstract goals cannot be observed directly, given sensory and computational limits; a proxy is an approximation of the goal that can feasibly be observed, rewarded, and pursued. Consider the abstract goal of reducing a rat population. A continuous rat census is simply not feasible in practice. Even if it were, it would not be possible to incentivize individual agents based on this information, for reasons such as the free-rider problem (Hardin & Cullity, 2020). Therefore, a regulator must operationalize goals based on observable signals that can be incentivized. The Hanoi government chose to measure the number of delivered rat corpses. This proxy is legible by both the agents and the regulator, it can be used as an incentive, and it is a plausible correlate of the underlying goal. The processing of such a signal constitutes a regulatory model (Conant & Ashby, 1970), which captures how proxies are related to agent actions (or traits), and how well these promote the regulatory goal.

Second, regulators are limited in their ability to predict the future. For a proxy to perfectly capture a complex goal, the regulatory model would often have to represent all relevant factors and their possible consequences. This may be possible in very simple systems, but in complex biological or social systems a perfect proxy implies a “Laplacian demon”: A model that can perfectly predict any future state of the world given any behaviour of an agent (Conant & Ashby, 1970). Absent the ability to perfectly predict the future, a proxy remains prone to unanticipated failure modes.

Third, regulators are limited by the need to choose. Decision systems across scales can be associated with scalar metrics such as utility, fitness, profit, or impact factor. The reason is that a system tasked with identifying the “better” of any two options in a consistent manner requires options to be rank-ordered. This implies that the options can be described using a scalar metric (see the revealed-preference approach in economics; Houthakker, 1950; Samuelson, 1938; von Neumann & Morgenstern, 1944). “Consistent” here means that decisions are coherent and not self-contradictory, for instance that they are transitive (if A > B and B > C, then A > C). Clearly, not all goals can be described in this way (e.g., in socioeconomic systems; Arrow, 1950). But to the degree that consistent regulatory decisions are desirable, a single scalar proxy is implied. It is important to reiterate that this proxy may summarize highly complex, multidimensional information (e.g., multiple contingent input metrics). It may be explicitly constructed (e.g., an objective function in AI, or an academic impact factor), or be implicit, requiring that researchers infer it from an observed set of choices (e.g., utility or fitness) or regulatory system (e.g., voting).
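The link between consistent choice and a scalar metric can be illustrated with a minimal sketch. Purely for illustration, we assume a hypothetical hidden score that generates the regulator's transitive pairwise choices (in practice only the choices would be observable, and the scalar would be inferred from them, as in the revealed-preference approach): any transitive pairwise preference can be linearized into a total order, and each option's position in that order then serves as the scalar summary.

```python
from functools import cmp_to_key

# Hypothetical hidden scores generating the regulator's pairwise choices.
# Only the choices (the prefer function) are assumed observable.
hidden_score = {"A": 3.0, "B": 1.0, "C": 2.0, "D": 4.0}

def prefer(x, y):
    """Pairwise choice: negative if x is chosen over y, positive if y is."""
    return hidden_score[y] - hidden_score[x]

options = ["A", "B", "C", "D"]

# Because the choices are transitive, they can be linearized into a
# total order; the position in that order is a scalar "proxy" metric.
ordered = sorted(options, key=cmp_to_key(prefer))  # best first
rank = {opt: i for i, opt in enumerate(ordered)}

# The scalar rank reproduces every pairwise choice.
for x in options:
    for y in options:
        if x != y:
            assert (prefer(x, y) < 0) == (rank[x] < rank[y])
```

The sketch shows why consistency implies a scalar: once every pairwise comparison agrees with a single ordering, the ordering index carries all the decision-relevant information, however multidimensional the inputs behind it may be.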

These three limitations shape all regulators, agents, and proxies, because all decision-making systems are practically realized by something physical – and, therefore, something finite. Machine-learning algorithms are limited by computational capacity and training data; peahens and human regulators are limited by the constraints of their eyes and brains. Any proxy reflects the idiosyncrasies, biases, and limitations inherited from the physical system that mediates it.

3.3. Factors that drive or constrain proxy failure

We propose that a pressure towards proxy failure arises whenever two conditions are met:

  • There is regulatory feedback based on the proxy, which has consequences for agents. Agents must be rewarded or selected based on the proxy, which induces optimization of the proxy, either through agent behaviour or through passive evolutionary dynamics. Many commentators emphasize the role of the agent's stakes with respect to proxy performance: The higher the stakes, the higher the pressure for proxy divergence (Muller, 2018; O'Neil, 2016; Smaldino & McElreath, 2016).

  • The system is sufficiently complex. What we mean here is that there must be many causal pathways leading to the proxy that are partially independent of the goal (or vice versa). A system may contain proxy-independent actions that lead to the goal, goal-independent actions that produce the proxy, and/or actions that affect both proxy and goal but unequally (Fig. 1).

Figure 1. Regulator, goal, agent, proxy, and their potential causal links. Proxy failure can occur when a regulator with a goal uses a proxy to incentivize/select agents. (A) In complex causal networks the causes (arrows) of proxy and goal generally will not perfectly overlap. There may be proxy-independent causes of the goal (c1), goal-independent causes of the proxy (c3), as well as causes of both proxy and goal (c2; note that this subsumes cases in which an additional direct causal link between proxy and goal exists). (B) The regulator makes the proxy a “target” for agents in order to foster (c2). Yet this will tend to induce a shift of actions/properties towards (c3), potentially at the cost of (c1). The causal effects of goals on regulators or proxies on agents are depicted as grey arrows, given that they reflect indirect teleonomic mechanisms such as incentivization or selection. Note that these diagrams are illustrative rather than comprehensive. For instance, the causal diagram of the Hanoi rat massacre would require an “inhibitory” arrow from proxy to goal, as breeding rats directly harmed the goal rather than just diverting resources from it.

The first condition leads to proxy optimization. Note that, although in many systems agent behaviour is the outcome of multiple agent-objectives, the present “generalized” account is best served by focusing on only the proxy-objective and considering competing agent-objectives as constraints on proxy-optimization. Regulatory feedback, which may operate through incentives or selection, will result in amplifying the proxy. The second condition implies that there are actions that optimize the proxy but do not necessarily optimize the goal. We posit that the greater the causal complexity of the system, the lower the probability that the two optimizations perfectly coincide. Although this hypothesis remains to be rigorously tested, a number of system-theoretic considerations support it. Firstly, more complex systems tend to have more “failure modes” (Manheim, 2018). The more complex a system is, the more difficult it will be for a regulator to map every possible agent action to the entirety of its consequences. Yet, once the proxy becomes a target, agents will begin a search for actions that efficiently maximize the proxy. In practice, actions that optimize only the proxy often seem to be cheaper for agents than actions that simultaneously optimize the goal (Smaldino & McElreath, 2016; Vann, 2003). Causal complexity may increase the probability that such actions exist. More complex systems may also tend to produce more unstable and nonlinear relations (Bardoscia, Battiston, Caccioli, & Caldarelli, 2017), leading to heavier tails in outcome distributions. This has also been linked to an increased probability of proxy failure (Beale et al., 2020).
Intuitively, we can think of an increased probability that some actions exist that very efficiently optimize the proxy, but have detrimental (and potentially catastrophic; Bostrom, 2014) effects on the goal. Even if agents are passive and exploration occurs through random sampling or mutation, the system will sooner or later “find” those actions or properties that optimize the proxy in the most efficient manner, irrespective of the consequences for the goal.
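This pressure can be made concrete with a toy simulation. It is a sketch under assumed payoffs, not a model of any specific system: each agent splits a fixed effort budget across the three causal channels of Figure 1, where c1 feeds only the goal, c2 feeds both, and c3 feeds only the proxy but is assumed to do so more cheaply. Truncation selection on the proxy alone then shifts effort into c3, so mean proxy output rises while mean goal attainment falls.

```python
import random

random.seed(42)

def normalize(w):
    """Clip to non-negative values and renormalize to a unit effort budget."""
    w = [max(x, 1e-9) for x in w]
    s = sum(w)
    return [x / s for x in w]

# w = (c1, c2, c3): effort on goal-only, shared, and proxy-only channels.
def proxy(w):
    return w[1] + 2.0 * w[2]   # proxy-only channel assumed twice as efficient

def goal(w):
    return w[0] + w[1]

def evolve(pop, generations=100, mutation=0.05):
    """Keep the top half by proxy each generation; refill with mutants."""
    for _ in range(generations):
        pop.sort(key=proxy, reverse=True)
        survivors = pop[: len(pop) // 2]
        offspring = [normalize([x + random.gauss(0.0, mutation) for x in w])
                     for w in survivors]
        pop = survivors + offspring
    return pop

def mean_of(f, pop):
    return sum(f(w) for w in pop) / len(pop)

pop = [normalize([random.random() for _ in range(3)]) for _ in range(200)]
g0, p0 = mean_of(goal, pop), mean_of(proxy, pop)
pop = evolve(pop)
gT, pT = mean_of(goal, pop), mean_of(proxy, pop)
assert pT > p0 and gT < g0   # the proxy inflates while the goal is eroded
```

Nothing in the selection rule "wants" to harm the goal; divergence emerges solely because selection acts on the proxy and the cheapest proxy-producing channel is goal-independent, mirroring the argument above.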

Another way to think about this is to note that opportunities for proxy failure are latent in the countless number of possible future environments. The regulatory model, and the proxy it gives rise to, can only reflect a finite set of environments. For organisms, this is the environment of evolutionary adaptation (Burnham, 2016); for AI it is the training environment (Hennessy & Goodhart, 2021). By contrast, there are infinite possible future environments (Kauffman, 2019), in which the correlation between proxy and goal may not hold. Because of this imbalance, even random change in a complex environment should be expected to promote proxy divergence. But we can go further: Proxy-based optimization actually biases exploration towards such environments. It was precisely the incentive for delivering rat tails that led to rats being bred. An insightful subcategorization of mechanisms underlying this bias has been outlined by Manheim and Garrabrant (2018; see Appendix 2).

Importantly, the presence of a pressure towards proxy failure does not guarantee failure: The extent to which a proxy diverges depends on a variety of constraints (which will be elaborated in more detail throughout sects. 4 and 5). For instance, if proxy production diverges sufficiently from goal attainment, the entire system may collapse, as it did in the case of the Hanoi rat massacre (Vann, 2003). In other cases, proxy pressure may lead to inflation, because agent “hacks” to produce more proxy are immediately met by an increase in the proxy level required by the regulator. This appears to have driven school grade inflation (McCoy & Haig, 2020), and it is in fact the natural consequence in any system in which proxy performance matters on a relative rather than an absolute scale (Frank, 2011). Some proxies may remain informative, even as they are undergoing inflation, such that the pressure to diverge never leads to actual proxy failure (Frank, 2011; Harris, Daon, & Nanjundiah, 2020). In other cases, divergence may be precluded because proxies are directly causally linked to the goal, as is assumed for honest (unfakeable) “index” signals in biology. Finally, some decrease (or noise) in the correlation between proxy and goal may be unproblematic for a regulator, depending on its tolerance for such noise (Dawkins & Guilford, 1991).
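The inflationary dynamic on a relative scale can likewise be sketched in a few lines. This is a toy model under assumed behaviour, not drawn from the cited studies: each agent's proxy output is a fixed underlying quality plus “hacked” effort, the regulator rewards whoever clears a relative cutoff, and unrewarded agents raise their effort to just beat the bar. The required proxy level then ratchets upward while underlying quality never changes, and the proxy's spread (its ability to discriminate quality) shrinks.

```python
import random
from statistics import mean, pvariance

random.seed(1)

N = 100
quality = [random.random() for _ in range(N)]  # fixed underlying quality
effort = [0.0] * N                             # proxy "hacking" effort

def run_round(quality, effort, top_frac=0.5, overshoot=0.05):
    """Reward the top fraction by proxy; losers raise effort past the bar."""
    prox = [q + e for q, e in zip(quality, effort)]
    cutoff = sorted(prox, reverse=True)[int(N * top_frac) - 1]
    return [e if p >= cutoff else e + (cutoff - p) + overshoot
            for p, e in zip(prox, effort)]

start = [q + e for q, e in zip(quality, effort)]
for _ in range(20):
    effort = run_round(quality, effort)
final = [q + e for q, e in zip(quality, effort)]

assert mean(final) > mean(start)            # the required proxy level inflates
assert pvariance(final) < pvariance(start)  # the proxy discriminates less
```

Because rewards are relative, each round's catch-up raises the next round's cutoff, so the proxy inflates without any change in what it was meant to measure; this is the grade-inflation pattern described above.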

We will argue that whenever there is some additional selection or optimization pressure that operates on the regulator itself, proxy divergence is constrained. Conversely, in situations where regulators face few constraints, such as in some human social institutions, proxy failure may be persistent and far-reaching (Muller, 2018).

4. Three examples: Neuroscience, economics, and ecology

In this section, we will explore the concepts introduced above in three concrete contexts: neuroscience, economics, and ecology. An overview is shown in Table 3.

Table 3. Proxy failure in neuroscience, economics, and ecology

A summary of the phenomena described in Section 4.

4.1. Proxy failure in neuroscience: A new perspective on habit formation and addiction

To pursue objectives, a person implicitly delegates tasks to different parts of their own body and mind. We propose that the dynamics of addiction and other maladaptive habits can be productively understood in terms of proxy failure. Although this framing is novel, its empirical basis is well established (Volkow, Wise, & Baler, 2017). The example reiterates that subpersonal agents and regulators can be susceptible to proxy failure even without intentional hacking or gaming by agents. In doing so, it sheds novel light on a key question in “normative behavioural economics” (Dold & Schubert, 2018), namely why behaviour may systematically diverge from preferences (Box 2).

Box 2. On revealed vs. normative preferences

Throughout this section, we assume that an addict's behaviours can conflict with their goals. Behavioural economists refer to this as a divergence of “revealed” and “normative” preferences (Beshears, Choi, Laibson, & Madrian, 2008). It is related to the biological concept of “evolutionary mismatch” (Burnham, 2016), which casts the “normative preference” (goal) as biological fitness. In “evolutionary mismatch,” an evolutionary goal – e.g., easily putting on weight to store energy for an uncertain future – lingers as a revealed preference that was once also normative. In other words, a change in environment or context led a once adaptive behaviour to become maladaptive. But the behavioural economics concept is broader, in that human goals are not defined as biological fitness but can be any arbitrary notion of, e.g., utility or wellbeing, which may change dynamically (Dold & Schubert, 2018). Importantly, the present account works for either definition of “normative preference,” requiring only that revealed and normative preferences are not automatically equated.

It should be noted that the distinction between revealed and normative preferences is not uncontroversial, particularly within economics (Sugden, 2017; Thaler, 2018). For instance, Becker and Murphy (1988) have famously argued that, because an addict takes drugs, this behaviour by definition reflects their preferences. This approach eliminates the possibility of neural proxy failure by assumption (Hodgson, 2003). The issue remains controversial today, as reflected in debates for (Thaler & Sunstein, 2008), or against (Cowen & Dold, 2021; Rizzo & Whitman, 2019), so-called “nudging” as a remedy for what we here cast as neural proxy failure.

The brain can be seen as serving the goal of promoting the organism's fitness, utility, or wellbeing (the present account works for each definition of the goal; see Box 2). The variability of the environment and the diversity of actions available to humans make such goals immensely complex to achieve. To manage this complexity, individual neural systems take up subtasks, such as evaluating the relevance of perceived signals or the consequences of possible actions. For this evaluation, the brain must allocate limited resources such as attention or time, and it must associate perceptual inputs and potential actions with some measure of value. This function is performed by a sophisticated “valuation system,” which involves several parts of the limbic system, including medial and orbital prefrontal cortices, the nucleus accumbens, the amygdala, and the hippocampus (Lebreton, Jorge, Michel, Thirion, & Pessiglione, 2009; Levy & Glimcher, 2012). These brain regions are strongly influenced by a neuromodulatory system that plays a key role in neural valuation: the dopamine system (Lewis & Sesack, 1997; Puig, Rose, Schmidt, & Freund, 2014).

Many addictive drugs affect the dopamine system (reviewed in Wise & Robble, 2020). Dopamine is often described as a pleasure chemical, but it is more accurately understood as one of the brain's proxies for “value,” broadly construed. The neuromodulator mediates behavioural “wanting” rather than hedonic “liking” (Berridge & Robinson, 2016). Berke (2018) argues that dopamine enables dynamic estimates of whether a limited internal resource such as energy, attention, or time should be expended on a particular stimulus or action. Coddington and Dudman (2019) suggest that dopamine signalling reflects a “scalar estimate of consensus” – that is, a proxy – aggregated from complex inputs from across the brain, to estimate “value.” Analogously, the dopamine signal has also been described as approximating “formal economic utility” (Schultz, 2022).

When the dopamine system is functioning appropriately, it contributes to the welfare of the organism. Bursts of dopamine accompany unexpected rewards and other salient events (Bromberg-Martin, Matsumoto, & Hikosaka, 2010), which enable synaptic credit assignment among neural representations of stimuli or courses of action (Coddington & Dudman, 2019; Glimcher, 2011). Over time, dopamine-modulated synaptic weights in motivation-related brain regions (Reynolds, Hyland, & Wickens, 2001; Shen, Flajolet, Greengard, & Surmeier, 2008), including the nucleus accumbens, come to serve as proxies for the average value of the corresponding representations. Representations compete for internal resources such as energy, attention, or time (Berke, 2018; Peters et al., 2004). This class of neural competition can be described as “behaviour prioritization,” rather than the more specific “decision making,” as it spans a spectrum that includes attentional control (Anderson et al., 2016; Beck & Kastner, 2009), exploration (DeYoung, 2013), deliberative weighing of options (Deserno et al., 2015), and unconscious habit formation (Yin & Knowlton, 2006).

Many drugs of addiction interfere with normal dopamine signalling, effectively hacking the neural proxy for value. This “hijacking” of the dopamine reward signal (Schultz, 2022) causes the brain to overvalue drug-related stimuli and actions (Franken, Booij, & van den Brink, 2005; Wise, 2004). Simultaneously, the dopaminergic action of drugs can decrease overall dopamine sensitivity through well-established biochemical habituation effects (Diana, 2011; Volkow et al., 2017). The resulting tolerance elicits the need to seek progressively increasing dosages, while any nondrug-related behaviours become less “valued.” The consequent neglect of basic social or bodily needs can lead to severe impairments in health and wellbeing. A similar dopamine-mediated phenomenon can occur in the case of any maladaptive habit, such as excessive unhealthy eating (Small & DiFeliceantonio, 2019; Volkow et al., 2017): The adaptive function of the dopamine system is undermined by the targeted pursuit of dopamine-triggering activities.

Note that the “regulator,” here, is the behaviour prioritization system of the addict's brain, and the “agents” are neural representations, which compete for a limited resource: Neuronal credit assignment (Cruz & Paton, 2021). The “regulator” consists of a neural circuit linking the basal ganglia, thalamus, and prefrontal cortex that mediates competition between synaptic weights and representations (Bullock, Tan, & John, 2009; Mink, 2018; Seo, Lee, & Averbeck, 2012). The brain's “agents” are subpersonal entities, which cannot “game” the system. Instead, the dopamine system modulates synaptic weights, which ultimately represent the “value” of different “representations” of actions or stimuli. Drug use is an action that directly amplifies these proxies independent of their usual relationship to wider goals. Therefore, use-related synaptic patterns are strengthened, leading to increased drug-seeking.
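These dynamics can be illustrated with a minimal simulation (a deliberately simplified sketch, not a biophysical model; all actions, values, and parameters are hypothetical). Cached action values stand in for dopamine-modulated synaptic weights and are updated by a reward-prediction-error rule; a “drug” action directly inflates the reward signal the valuation system observes, even though the welfare it delivers is negative:

```python
import random

random.seed(0)

# Hypothetical toy model: cached "values" (stand-ins for dopamine-modulated
# synaptic weights) are updated by a prediction-error rule. The "drug"
# action inflates the observed reward signal directly, while its true
# welfare contribution is negative.
ACTIONS = {"eat": 1.0, "socialize": 0.8, "drug": -0.5}   # true welfare
PROXY_BOOST = {"eat": 0.0, "socialize": 0.0, "drug": 2.0}  # direct proxy hit

values = {a: 0.0 for a in ACTIONS}  # learned proxies for value
alpha = 0.1                         # learning rate

def choose():
    # epsilon-greedy behaviour prioritization over cached proxy values
    if random.random() < 0.1:
        return random.choice(list(ACTIONS))
    return max(values, key=values.get)

welfare = 0.0
for t in range(2000):
    a = choose()
    # the reward the valuation system "sees" is welfare plus the proxy boost
    observed_reward = ACTIONS[a] + PROXY_BOOST[a]
    values[a] += alpha * (observed_reward - values[a])  # prediction error
    welfare += ACTIONS[a]  # the underlying goal, invisible to the update

print(sorted(values.items(), key=lambda kv: -kv[1]))
# The drug's cached value ends highest even though it reduces welfare.
```

The agent initially favours welfare-increasing actions, but once exploration samples the drug a few times, its inflated cached value dominates behaviour prioritization: the proxy has failed without any agent “gaming” the system.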

The key limitations necessitating a proxy system are legibility, prediction, and the need to choose (sect. 3.2). Dopaminergic drugs mediate changes to synapses that are “legible” to the brain's valuation system: They manipulate the “currency” that mediates ordinary decisions. Natural selection cannot possibly anticipate all possible experiences and chemicals that can trigger elevated dopamine, reflecting the limits of prediction in a complex system. Similarly, the pattern-recognition systems in the brain cannot anticipate all possible consequences of the organism's actions. The need to choose implies that the brain must rank-order possible actions at any given moment.

But neural proxy failure is nevertheless contingent on numerous additional factors. A propensity for addiction has been linked to factors in childhood and adolescence (Leyton & Vezina, 2014). More generally, most individuals consciously constrain behavioural impulses to use drugs or engage in other potentially maladaptive behaviours on a day-to-day basis. Indeed, the conscious shaping of habits and preferences appears crucial to a self-determined life (Dold & Schubert, 2018). Such individual efforts are embedded in legal and social constraints that can enable or discourage addiction and related behaviours (Braganza, 2022a).

In summary, addiction and maladaptive habit formation can be viewed as neural instances of proxy failure. To our knowledge, this is the first time that a neural phenomenon has been described in terms of Goodhart-like dynamics. The proxy failure perspective suggests that the risks of addiction and maladaptive habit formation may be natural consequences of pursuing goals via neural representations of (i.e., proxies for) value. If this is the case, a “brain disease” framing of addiction may contribute to excessive medicalization (Satel & Lilienfeld, 2014) at the expense of exploring structural changes at broader environmental, social, or economic levels (Heather, 2017). Later, we will explore a case in which humans appear to harness proxy divergence within themselves in order to support their conscious goals (sect. 5.3).

4.2. Proxy failure in economics: Performance metrics and indicatorism

The disciplines in which proxy failure may have been studied most intensely, albeit under different names, are management and economics (Holmström, 2017; Kerr, 1975; van der Kolk, 2022). Consider, for instance, the use of key performance indicators (KPIs). Price and Clark (2009) state in unequivocal terms that: “people who are measured on KPIs have an incentive to play games. […] The indicator ceases to indicate.” A prominent early example was Lincoln Electric's plan to incentivize typists in its secretarial pool through a piece-rate for each key stroked (The Lincoln Electric Company, 1975). This apparently led to typists continuously tapping the same key during lunch hours (Baker, 2002). Many additional entertaining examples are collected in Stephen Kerr's management classic: “On the folly of rewarding A, while hoping for B” (Kerr, 1975). More recent examples include the Wells Fargo scandal, where excessive incentives led to fraudulent “cross-selling” (opening additional accounts for customers without their knowledge and against their interest; Tayan, 2019). Many more empirical examples have been described in management and accounting research (Bonner & Sprinkle, 2002; Bryar & Carr, 2021; Franco-Santos & Otley, 2018) – for example, concerning CEO pay (Bénabou & Tirole, 2016; Edmans, Fang, & Lewellen, 2017).

Inspired by such empirical observations, economic theorists have developed a range of mathematical equilibrium models (Box 3), which describe proxy failure within what is known as the principal–agent framework (Aghion & Tirole, 1997; Baker, 2002; Bénabou & Tirole, 2016; Holmström, 2017). The task of the principal or “regulator” is cast as the design of an “optimal incentive contract,” given noise or the potential for active “distortion” (Baker, 2002; Hennessy & Goodhart, 2021). In other words, the principal must translate observable agent outputs into regulatory feedback (e.g., monetary incentives or promotions). The recurring result is that the potential for distortion implies a weaker optimal incentive strength (Baker, 2002). In other words, whenever proxies do not perfectly capture the goal, a reduction in the reward for the proxy is warranted. In fact, a particularly important question in the early literature was why pure fixed wage contracts (i.e., contracts without performance-contingent pay) are so common in real businesses (Holmström, 2017). Why do companies not generally, or at least more frequently, base wages on measured performance? The answer is: The risk of proxy failure.
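The logic of this result can be sketched numerically (an illustrative toy model in the spirit of, but not identical to, the principal–agent models cited above; the quadratic costs and linear distortion term are our assumptions). An agent splits effort between productive work e and proxy “gaming” g; the principal can only measure the proxy P = e + d·g and pays b·P. A grid search over incentive strength b shows that the principal's optimum falls as the distortion potential d rises:

```python
# Illustrative toy model (our assumptions): an agent facing incentive
# strength b on the measurable proxy P = e + d*g best-responds, under
# quadratic effort costs, with productive effort e* = b and gaming effort
# g* = b*d. The principal's net payoff is true output minus the payout.
def principal_net(b, d):
    e, g = b, b * d          # agent best responses to incentive b
    proxy = e + d * g        # what the principal can measure and pays for
    return e - b * proxy     # true output minus incentive payout

def optimal_incentive(d, steps=10000):
    grid = [i / steps for i in range(steps + 1)]
    return max(grid, key=lambda b: principal_net(b, d))

for d in [0.0, 0.5, 1.0, 2.0]:
    print(f"distortion d={d}: optimal incentive b* = {optimal_incentive(d):.2f}")
# Analytically b* = 1 / (2 * (1 + d**2)): the more the proxy can be
# distorted, the weaker the optimal incentive strength.
```

With no distortion potential (d = 0), a strong incentive is optimal; as d grows, the optimal contract approaches a fixed wage (b near zero), mirroring the literature's answer to why fixed wages are so common.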

Box 3. Proxy failure and equilibrium models

Formal economic as well as ecological models often rely on equilibrium analysis, implicitly assuming an unchanging world. This is partially owed to prevalent methodological traditions that are focused on analytic models. In such a framework, proxy failure is by definition static: any divergence, if present, has already progressed to its final level. This equilibrium assumption is intricately linked to ongoing controversies on proxy failure across disciplines (see, e.g., Box 5).

Disequilibrium research recognizes that many systems exist outside equilibrium (Wilson & Kirman, 2016). Examples of proxy failure may be observable within corporations because markets have not yet had time to offer corrective feedback to novel forms of “hacking.” More generally, a dynamic perspective raises the question of whether equilibrium analysis is sufficient to capture proxy failure, which may be best understood as an “out-of-equilibrium” phenomenon.

Another focus of the economic literature is the role of proxies in large hierarchical organizations. For instance, we can think of KPIs as the communication interface between general management and individual departments. Aghion and Tirole (1997) develop a theory of delegation of authority in hierarchical corporate contexts, exploring the complex roles of information and “incentive congruence” in determining how proxies can be gainfully used. More generally, proxies appear to reflect the division of labour in complex organizations or multistep processes, which may occur on different timescales. Individual employees, particularly in sales and marketing roles, are typically incentivized for achieving short-term objectives such as signing a contract with a customer, or making contact with a potential buyer. Whole departments can then be measured on the aggregate numbers achieved by their employees. Each department or subdivision must regulate its own subdivisions, necessitating further proxy measures. The use of proxies at each level of such a hierarchy naturally entails the risk of proxy failure, which indeed has been extensively documented. For instance, Tardieu, Daly, Esteban-Lauzán, Hall, and Miller (2020) explore proxy failure in the context of “parochial” KPIs: Those measured by individual departments that are imperfectly related to wider business outcomes. Ittner, Larcker, and Meyer (1997) provide an example from a large bank: “A system put in place to provide incentives for branch managers to increase customer satisfaction quickly began to reward some of the most unprofitable branches” (Baker, 2002).

In all these examples proxies diverge from the underlying goals of an organization. But proxies are nevertheless useful, indeed unavoidable, in practice. The informational factors necessitating proxy use by organizations are special cases of the three limits that necessitate proxies (sect. 3.2). Agents in a complex organization often have insufficient knowledge of the top-level organizational goal and how to promote it – the “legibility” limit. Proxies are expedient means of communicating the goal through department hierarchies to coordinate agent behaviour. The prediction limit reflects that neither employers nor employees can foresee all the consequences of a given behaviour for the organization. The choice limit is reflected in the evaluation of countless possible employee actions by single scalar metrics such as KPIs. In addition, psychology is at play: Without direct incentives, which are (i) close in time, (ii) relatively certain, and (iii) directly personally relevant, employees may have little motivation to contribute to an organizational goal (Jones & Rachlin, 2009; Kobayashi & Schultz, 2008; Strombach et al., 2015).

Another core insight of the economics and business literature concerns a fundamental constraint on proxy failure. If we posit a corporation's goal as competitive success in the market, proxy failure within businesses is naturally constrained by market selection (Braganza, 2022b). If proxies designed to regulate employees diverge too far from the corporation's goal, profit and share prices may plummet – causing behaviour change. In the worst-case scenario, the whole organization may go bankrupt. Market selection thus provides a hard constraint on proxy failure within a business. Whether implemented by savvy managers or market forces, the constraint controls regulatory models themselves. Because of this, the economic literature has tended to view the very existence of a given practice in real corporations (e.g., fixed wages) as evidence of its economic optimality (e.g., Baker, 2002; Holmström, 1979, 2017).

This insight also explains why proxy failure is likely even more relevant in nonmarket-based economies, where the hard constraint of market selection is missing. An apocryphal anecdote (Shaffer, 1963) tells of a Soviet factory rewarded by the Central Committee for the number of nails it produced, resulting in the output of millions of flimsy, useless nails. When the proxy was changed to instead reward the weight produced, the factory retooled to manufacture one single immense nail. Without the market constraint, proxy failure can remain unmitigated. Specifically, market proxies (e.g., profitability) ensure that consumers must actually find a product useful by measuring whether they buy it. However, the assumption that market proxies thus automatically serve our societal goals is also clearly problematic (Box 4).
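The anecdote can be made concrete with a short sketch (all quantities are hypothetical). Given a fixed steel budget, each scalar proxy is maximized by a degenerate allocation that yields zero usable nails, whereas the goal requires nails within a size range that neither proxy encodes:

```python
# Hypothetical numbers for the Soviet nail anecdote: a factory allocates a
# fixed mass of steel across identical nails. "Usefulness" (the goal)
# requires each nail to fall within a size range that neither scalar proxy
# - nail count or total weight - captures.
STEEL_KG = 100.0
USABLE_MIN_KG, USABLE_MAX_KG = 0.05, 0.5   # assumed usable size range

def usable_nails(mass_per_nail, n):        # the underlying goal
    return n if USABLE_MIN_KG <= mass_per_nail <= USABLE_MAX_KG else 0

# Proxy 1 (count): maximized by making nails as light as possible.
tiny_mass, tiny_n = 0.001, int(STEEL_KG / 0.001)
# Proxy 2 (weight): maximized by one immense nail using all the steel.
giant_mass, giant_n = STEEL_KG, 1
# An aligned allocation: mid-sized nails within the usable range.
good_mass, good_n = 0.1, int(STEEL_KG / 0.1)

print(tiny_n, usable_nails(tiny_mass, tiny_n))     # high count proxy, 0 usable
print(giant_mass * giant_n, usable_nails(giant_mass, giant_n))  # high weight proxy, 0 usable
print(usable_nails(good_mass, good_n))             # aligned: 1000 usable nails
```

Either scalar proxy can be driven arbitrarily high while the goal quantity stays at zero, which is precisely the failure mode that market feedback on usefulness would otherwise constrain.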

Box 4. On market selection and social welfare

Important proxies in economics are price and profit (Friedman & Oren, 1995; Mas-Colell, Whinston, & Green, 1995). In the standard economic view, markets are collections of players who regulate each other through price competition: Buyers (as regulators) seek out the lowest marginal price/utility ratio, using price as a proxy to decide the best supplier (agent); conversely, sellers take on the role of regulator and supply to whichever agent will pay the highest price. A secondary implication is that profit functions as a proxy for welfare increases, systematically directing economic resources to where they will most efficiently enhance welfare. In theory, this results in an equilibrium where all achieve their goals and total consumption directly tracks total welfare (Fellner & Goehmann, 2020).

In practice, many proxy failures occur. Economists think of such cases as “market failures.” A prominent example is when the interaction between buyers and sellers adversely affects third parties (i.e., entails a so-called “negative externality”), for instance because of environmental degradation or the risk of catastrophic climate change (Fremstad & Paul, 2022; Wiedmann, Lenzen, Keyßer, & Steinberger, 2020). These phenomena imply that some welfare costs are not accounted for in the price and profit proxies, leading to “decoupling” from the underlying goal (Kelly & Snower, 2021; Mayer, 2021; Raworth, 2018). One canonical way to address such proxy failure is by leveraging “Pigouvian taxes” (e.g., a tax on CO2 emissions), which can “internalize externalities” into market prices, thus realigning the market-level proxies with the underlying societal goal (Masur & Posner, 2015).

Another mechanism that potentially decouples economic proxies from their underlying goal is “persuasive advertising” (Box 8).

Finally, many scholars have pointed out that humans are to some degree purpose driven – that is, not solely interested in extrinsic (proxy) incentives – and that extrinsic incentives can indeed undermine worker morale and human wellbeing more generally (Muller, 2018; van der Kolk, 2022). This suggests an approach to mitigating proxy failure that is unique to human contexts: The promotion of intrinsic incentives towards the goal (Mayer, 2021; Pearce, 1982). Some firms advertise “social objectives,” such as Tesla's aim to “Accelerate the world's transition to sustainable energy,” to provide both guidance and motivation to employees, beyond their material incentives. This could mitigate proxy failure by directly changing agents' utility, as modelled in the multitasking literature (Braganza, 2022b; Holmström, 2017). It may also provide a second-order protective function by promoting employee vigilance towards the possibility of unanticipated avenues for proxy failure. Mayer (2021) suggests that an explicit corporate purpose, combining both profitability and social/ecological responsibility, can mitigate proxy failure both within a firm and in the larger societal context.

In summary, economics and business research reveals key insights into proxies and their potential for failure. From theories of contract structure and measurement, to the theory of prices as a signal for optimal resource allocation (Friedman & Oren, 1995), to political debate about the effectiveness of the profit motive – choosing and evaluating proxies is a core activity of economists. The present perspective places this literature within a larger framework, suggesting that the mechanisms explored therein may inform, and be informed by, mechanisms studied in other disciplines.

4.3. Proxy failure in ecology: Sexual selection, nested goals, and countervailing forces

Recently, McCoy and Haig (2020) have argued that proxy failure occurs in ecological systems in a manner analogous to many social systems. Here, we introduce and expand their argument and highlight the intriguing similarities to the case of economics.

Proxy failure provides a useful framework to understand evolution. For instance, peahens regulate peacocks via mate choice and mothers regulate offspring via embryo selection. The proxy in such cases is any trait or “signal” expressed by the regulated agent that affects regulation, such as a peacock's tail or an embryo's ability to produce hormones. Individual organisms can respond to such pressures as humans do, by deliberately increasing production of the proxy; for example, males of many species prefer to display near less attractive rivals, apparently so that they can themselves appear better (Gasparini, Serena, & Pilastro, 2013). But deliberation is not necessary – animals with greater proxy values may also simply be selected and pass on proxy-producing genes. A discrepancy between a signal and the information it was originally selected to convey, that is, proxy failure, is often termed deception (without implying intentionality). It seems indisputable that both honest and deceptive signalling are widespread in ecological systems (Dawkins & Guilford, 1991; Krebs & Dawkins, 1984; McCoy & Haig, 2020; Wickler, 1965). However, the question of whether stable ecological signals can safely be considered “honest” and “adaptive” has been a subject of controversy since Darwin (Box 5).

Box 5. On “honest” signalling or “adaptationism”

In “The Evolution of Beauty” Richard Prum (2017) reviews the contested history of “honest signalling” theories (also see Prum, 2012). Adaptationists, whom Prum and others (e.g., Haig, 2020; Lynch, 2007) identify as the mainstream of evolutionary biologists, assume that biological signals should be assumed “honest” or “reliable” indicators of underlying fitness unless proven otherwise (Zahavi et al., 1993). In this view, natural selection is the key force in evolution, and any trait or signal that exists can safely be assumed “adaptive” or “reliable” at equilibrium. The implication is that proxy failure and unreliable signalling more generally are of negligible importance, because (at equilibrium) they would have been eliminated by natural selection. One prominent account termed “costly signalling” theory (Spence, 1973; Zahavi, 1975) suggests that seemingly wasteful signals reliably communicate fitness because fitter individuals can afford more of them. Although many in ecology (and economics; Connelly et al., 2011) accept honest signalling theories such as costly signalling as a “scientific principle,” others view it as an “erroneous hypothesis” (Penn & Számadó, 2020; Számadó & Penn, 2018). Interestingly, conclusions often seem to hinge on which parts of a system are tacitly assumed to be at equilibrium (Penn & Számadó, 2020).

By contrast, Darwin's theory of sexual selection suggested that “aesthetic” traits could be selected and become stable independent of their implications for “fitness,” i.e., their “adaptive value.” Later scholars have coined the term “runaway selection” for such processes (Fisher, 1930; Prum, 2010; Rendell et al., 2011), suggesting they may underlie much of the diversity of display traits in the animal kingdom (Prum, 2017; also see sect. 5.1 and Box 6). From the “adaptationist” perspective, runaway signals must be viewed as proxy failure because signals do not reliably indicate fitness.

Sexual selection in colourful birds and fish is a useful case study for proxy failure (McCoy et al., 2021). Many female birds and fish choose a mate based on a proxy signal of quality. A famous proxy is carotenoid-based pigmentation: These red–orange–yellow colours are often considered “honest signals” of fitness, either because they incur a cost (Spence, 1973; Zahavi, 1975) or are causally linked to metabolic processes (Maynard-Smith & Harper, 2003; Weaver, Santos, Tucker, Wilson, & Hill, 2018). It is thought that female birds and fish prefer redder males because this trait reliably (i.e., honestly) communicates their fitness. However, some male fish and birds have evolved strategies to maximize the proxy without increasing their carotenoid pigments. Some male fish have evolved the ability to synthesize alternative (potentially cheaper) pigments (pteridines; Grether, Hudon, & Endler, 2001); some male birds have evolved light-harnessing structures that make their feathers appear more colourful without requiring more pigment (McCoy et al., 2021). Females have sensory limits; they cannot always tell what produces the red colour. In other words, the evidence suggests that carotenoid-based signalling is not always honest – male birds and fish can “game” the system, driving proxy failure.
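The decoupling at the heart of this example can be sketched statistically (a hypothetical illustration; the distributions and noise magnitudes are assumptions, not empirical estimates). When redness is produced only by quality-linked carotenoids, the proxy correlates tightly with quality; adding a cheap, quality-independent structural component weakens the correlation the choosing female can rely on:

```python
import random
random.seed(1)

def corr(xs, ys):
    # Pearson correlation, computed from scratch to keep the sketch
    # dependency-free
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

N = 10000
quality = [random.gauss(0, 1) for _ in range(N)]

# Honest world: redness reflects carotenoids, which track quality.
honest_red = [q + random.gauss(0, 0.2) for q in quality]

# Gamed world: a cheap structural trick adds quality-independent redness
# (e.g., light-harnessing feather structures; magnitudes are assumptions).
gamed_red = [q + abs(random.gauss(0, 1.5)) + random.gauss(0, 0.2)
             for q in quality]

honest_r = corr(quality, honest_red)
gamed_r = corr(quality, gamed_red)
print(f"honest signalling: r = {honest_r:.2f}")
print(f"gamed signalling:  r = {gamed_r:.2f}")
# The proxy the female observes is partially decoupled from the quality
# it was originally selected to indicate.
```

The sketch captures only the informational consequence of gaming, not the selection dynamics that produce it; it shows why a once reliable proxy yields poorer mate-choice information once cheap alternatives to the quality-linked pigment evolve.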

Other apparent instances of proxy failure appear in mother–embryo conflict. Babies are costly; therefore, mothers assess embryo quality early and automatically abort less-fit embryos based on proxy signals. Embryos are incentivized to “hack” this system and ensure their own survival. For instance, marsupial infants are born mostly helpless and must find and latch on to a maternal teat to survive. However, some marsupials birth more offspring than they have nipples available, so the infants must literally race to their mother's nipples. Slower individuals are doomed (Flynn, 1922; Hill & Hill, 1955). For example, Hartman (1920) observed 18 neonates of Didelphis virginiana reaching the pouch and competing for only 13 teats. Mothers seem to screen infants for quality through this race to the teat; speed is a proxy of quality. In response, infants are born with exceptionally strong forearms – they overinvest in performance at the maternal test. A weaker infant that invested its developmental energy into forearms can thus win against a healthier rival that spread its energy evenly across body parts.

Plants similarly show instances of proxy failure in embryo selection. In many plants, far more initial embryos are produced than ultimately survive, because of various stages of maternal screening and selective abortion, but embryos have evolved many strategies to boost their chances of being selected (Shaanker, Ganeshaiah, & Bawa, 1988). For example, the plant Caesalpinia pulcherrima produces seed pods that contain between 1 and 10 seeds each, and the mother selectively aborts pods with fewer seeds. However, some of these “less fit” pods evade abortion attempts – and then absorb their siblings, acting in a manner contrary to the evolutionary interests of the mother. Similarly, embryos in the woody tree Syzygium cuminii secrete factors to promote abortion of their peers – potentially a response to maternal selection for healthy embryos (Arathi, Ganeshaiah, Shaanker, & Hegde, 1996). For a thorough treatment, see Willson and Burley (1983). Another example of proxy failure in embryo selection in humans, other primates, and horses is presented in Section 5.1.

So what constrains proxy failure in ecology – which forces can keep signals honest (Andersson & Simmons, 2006; Connelly, Certo, Ireland, & Reutzel, 2011; Harris et al., 2020; Weaver et al., 2018)? We have already noted the controversial nature of questions regarding when or even if ecological signals are honest (Box 5). Here we sidestep such controversies by simply reviewing proposed mechanisms without attempting to claim a general validity (or falsity) of any specific mechanism. In general, the natural selection of regulators should promote “honest signals” (Maynard-Smith & Harper, 2003). Such selection should (i) constrain divergence where it occurs and (ii) systematically lead to the identification of proxies that are more resistant to failure. First, even though we have outlined cases in which presumed “honest signals” were gamed, some proxies may still be more resistant to gaming than others, for example because they are more tightly causally linked to the actual goal (so-called “index signals”; Maynard-Smith & Harper, 2003; Weaver et al., 2018). Another proposed mechanism to keep signals honest, which has been highly influential across ecology and economics (Connelly et al., 2011), is “costly signalling” or the “handicap principle” (Zahavi, 1975; Box 5): the idea that, if the cost of a signal is negatively correlated with fitness, then fitter individuals can afford to produce more signal. Regardless of mechanism, selection should favour those individuals that do not unnecessarily fall prey to deceptive signals. This in turn should lead to the selection of more honest signals whenever possible.

Another intriguing parallel to the business literature is that life is often organized in nested hierarchies (Ellis & Kopel, Reference Ellis and Kopel2019; Haig, Reference Haig2020; Kirchhoff et al., Reference Kirchhoff, Parr, Palacios, Friston and Kiverstein2018; Okasha, Reference Okasha2006). Goals at one level of natural selection can become proxies at another level (Farnsworth, Albantakis, & Caruso, Reference Farnsworth, Albantakis and Caruso2017). Species compete within ecosystems; individuals compete within species; cells compete within individuals; genes compete within genomes; and so on (Okasha, Reference Okasha2006). At the highest level, natural selection regulates all living creatures (agents) according to how well they pass their genetic material on to future generations (the goal – see Appendix 1 for discussion of this choice of word). This genetic goal is operationalized through many simpler behavioural proxies: Most creatures are “wired” to survive predation, find food, attract a mate, invest in healthy offspring, and more. These proxy tasks are not themselves important except insofar as they contribute to the goal of passing on genetic material. Each of these proxies has become a goal, itself operationalized by further proxies.

This nested hierarchy offers many future areas of research into proxy failure in evolution. Consider, for example, “selfish” genetic elements (Ågren & Clark, Reference Ågren and Clark2018; Haig, Reference Haig2020) that disproportionately insert themselves into the genome of an organism's offspring; they have “hacked” the organismal proxy of reproduction by jumping up a level in the hierarchy to the actual goal (transmitting genetic information). Certain weeds have evolved to mimic crop plants through a proxy divergence process known as Vavilovian mimicry (Vavilov, Reference Vavilov1922), requiring a “model” (the crop), “mimic” (weed), and “operator” (human or herbicide) (McElroy, Reference McElroy2014). Similarly, signalling between predators and prey, plants and animals, cleaner fish and their clients, and more, is likely all subject to proxy failure and its consequences.

In summary, proxies and proxy failure appear to be as common and well researched in ecological systems as in economic systems, albeit characterized using different terms, such as deceptive or “runaway” signalling. Although the regulatory feedback mechanism is often selection rather than incentivization, the similarities in the resulting dynamics are striking (McCoy & Haig, Reference McCoy and Haig2020). Yet it should also be emphasized that this does not negate the important differences between biological and human contexts: It seems neither accurate nor desirable to reduce individual and social human goals to biological fitness.

5. Consequences of proxy failure: The proxy treadmill, the proxy cascade, and proxy appropriation

In the previous sections we have defined proxy failure and given examples from diverse fields. Here, we describe three important consequences of the phenomenon. In each case, proxies, goals, regulators, or agents across multiple systems interact. An overview is shown in Table 4.

Table 4. Overview of consequences of proxy failure

The phenomena summarized here are described in Section 5.

5.1. The proxy treadmill: When agent and regulator race to arms

McCoy and Haig (Reference McCoy and Haig2020) describe a second-order dynamic arising from proxy failure, termed the “proxy treadmill” (Fig. 2). The proxy treadmill describes an arms race between agent and regulator, leading to ratcheting cycles of hacking and counter-hacking. This leads to the emergence of three hallmark phenomena, namely (i) instability, (ii) escalation, and (iii) elaboration. We argue below that these phenomena promote the seemingly inexorable growth of complexity in both biological and social systems.

  • Instability arises in a system when an arms race between regulator and agents prevents stable equilibria from being attained. Optimization implies that agents will never stop “searching” for new ways to hack a signal, and given the complexity of ecological systems, they are likely to find one eventually, triggering a new cycle of counter-hacking by the regulator.

  • Escalation describes a quantitative manifestation of instability, whereby agents find novel ways to produce more of the same proxy, and regulators respond by requiring higher proxy levels. It can account for the size of peacock tails, the inflation of embryonic hormone production, the inflation of test scores in education, or publication counts in academia.

  • Elaboration describes a qualitative manifestation of instability, whereby proxy failure continuously drives the modification of given proxies, or the addition of new proxies. It helps characterize the evolutionary elaboration of the peacock's tail into complex shapes, the chemical modification and addition of hormones secreted by embryos, and the addition of novel criteria in education (extra-curriculars) or academic competition (alt-metrics).

Figure 2. The proxy treadmill. Illustration of the proxy treadmill and its consequences. (A) An agent “hacks” a proxy by, say, finding cheaper ways to make it. The regulator responds by “counter-hacking,” for example, requiring higher levels of the proxy, or requiring new proxy attributes. Each pursues their own goal (which for the agent, here, is reduced to wanting to maximize the proxy). (B) This results in instability, escalation, and elaboration of the signalling system.
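The treadmill's core ratchet can be caricatured in a few lines of code. The following is a purely illustrative sketch, not a model from the literature: all quantities, names, and update rules are our own stylized inventions. Agents repeatedly find cheaper ways to inflate the proxy, and the regulator responds by demanding more of it.

```python
import random

def proxy_treadmill(rounds=20, seed=0):
    """Stylized hacking/counter-hacking cycle.

    'threshold' is the proxy level the regulator requires;
    'hack' is proxy output that carries no real goal content.
    """
    rng = random.Random(seed)
    threshold = 1.0   # proxy level currently required by the regulator
    hack = 0.0        # proxy produced without contributing to the goal
    history = []
    for _ in range(rounds):
        # Agents "search" for a new way to inflate the proxy cheaply.
        hack += rng.uniform(0.0, 0.5)
        # The regulator counter-hacks, raising the required proxy level
        # to compensate for the now-uninformative portion of the signal.
        threshold += hack * 0.5
        history.append((round(threshold, 2), round(hack, 2)))
    return history

history = proxy_treadmill()
```

Because neither side can stop (agents keep searching, the regulator keeps compensating), the required proxy level only ratchets upward – a minimal picture of instability and escalation. Elaboration would correspond to the regulator adding new proxy dimensions rather than merely raising the threshold.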

McCoy and Haig (Reference McCoy and Haig2020) introduce the proxy treadmill and its three hallmarks in the context of mother–embryo conflict, reviewing the traces it has left in the genomes of both primates and horses. A signal originally used by the mother as a proxy for embryo viability during early pregnancy, luteinizing hormone, was hijacked by the embryo to improve its own chances of survival: Embryos produce a hormone that mimics luteinizing hormone, called chorionic gonadotropin or CG (Casarini, Santi, Brigante, & Simoni, Reference Casarini, Santi, Brigante and Simoni2018). Several successive rounds of hacking and counter-hacking appear to have ensued, leading to escalation, such that at present “heroic” amounts of hormone are produced by embryos (Zeleznik, Reference Zeleznik1998) and required by mothers (McCoy & Haig, Reference McCoy and Haig2020). Simultaneously, elaboration of the signal occurred, as embryos accumulated mutations that changed the structure of the hormone, for instance increasing its half-life and thereby its concentration in circulation (Henke & Gromoll, Reference Henke and Gromoll2008). This illustrates how instability, escalation, and elaboration have shaped the present signalling system. Analogous processes occur when embryos evolve new “mimics” of proxy signals or mothers interpret existing signals in new ways (see Haig, Reference Haig1993; McCoy & Haig, Reference McCoy and Haig2020). The authors liken this to the addition of novel selection criteria in job interviews, after grades have inflated to a degree of relative uninformativeness.

A key tenet of the proxy treadmill is that uninformative proxies may sometimes linger for a variety of reasons, despite the pressures of natural selection on both the regulator and the agents. For instance, regulators may keep observing relatively uninformative proxies because of a first-mover disadvantage, which is closely related to Fisherian runaway signalling (Fisher, Reference Fisher1930; McCoy & Haig, Reference McCoy and Haig2020; Prum, Reference Prum2010; Box 6). Fisher argued that any peahen with a preference for peacocks with small tails will have sons who are unattractive to other peahens. In other words, large tail sizes could stabilize, not because they signal fitness, but because the male trait and the female preference for that trait are jointly inherited. By this mechanism, competitive selection can lock in a real selective advantage for an otherwise uninformative, or even a fitness-decreasing, proxy. McCoy and Haig note the similarity with Princeton University's unilateral attempt to halt grade inflation (Stroebe, Reference Stroebe2016): No other universities followed suit, students began ranking Princeton lower, and Princeton eventually abandoned the policy (Finefter-Rosenbluh & Levinson, Reference Finefter-Rosenbluh and Levinson2015). Uninformative proxies may also linger because of simple slack in the selection of regulators, or because it can be economical for regulators to employ even poor proxies when the cost of acquiring more accurate information is high (see Dawkins & Guilford, Reference Dawkins and Guilford1991).

Box 6. Relation between the proxy treadmill and Fisherian “runaway”

The proxy treadmill generalizes a prominent model of sexual selection, namely Fisherian runaway (Fisher, Reference Fisher1930; Lüpold et al., Reference Lüpold, Manier, Puniamoorthy, Schoff, Starmer, Luepold and Pitnick2016; Prum, Reference Prum2010). Fisherian runaway is one possible instance of a proxy treadmill in the domain of sexual selection, which rests on the genetic association between a trait and a preference because of coinheritance. A related mechanism is “runaway cultural niche construction,” where the association of trait and selection mechanism (the niche) occurs because of geographic colocalization (Rendell et al., Reference Rendell, Fogarty and Laland2011). Both models answer the question of why signals might escalate (i.e., run away), positing specific association mechanisms. Modelling papers have also shown that Fisherian runaway may promote trait elaboration (Iwasa & Pomiankowski, Reference Iwasa and Pomiankowski1995). The proxy treadmill is agnostic about the association mechanism and even whether one exists. Instead, all it requires is a conflict of interest between low-quality males and choosy females: In its minimal form the proxy treadmill reflects simply the continuous breakdown and replacement of signals.

The proxy treadmill bears striking similarities to what might be called the “academic proxy treadmill” (Biagioli & Lippman, Reference Biagioli and Lippman2020), and the “bureaucratic proxy treadmill” (Akerlof & Shiller, Reference Akerlof and Shiller2015). Biagioli and Lippman (Reference Biagioli and Lippman2020) describe how in academia, “new metrics and indicators are being constantly introduced or modified often in response to the perceived need to adjust or improve their accuracy and fairness. They carry the seed of their never-ending tuning and hacking, as each new metric or new tuning of an old one can be subsequently manipulated in different ways.” Similar mechanisms of continuous hacking and counter-hacking appear to occur when a government regulates corporations, which find ways to hack the regulations, triggering novel regulations (Akerlof & Shiller, Reference Akerlof and Shiller2015). The predictable results, analogous to biological cases, are continuously expanding and ever more complex bureaucracies (Muller, Reference Muller2018).

Together, this suggests that the proxy treadmill may act as a driver of complexity in both biological and social systems. Some proxies may rapidly inflate and then completely fail, as was the case for the Hanoi rat massacre. Other proxies may inflate and then remain stable, even though they have become less informative, such as inflated grades (McCoy & Haig, Reference McCoy and Haig2020), publication metrics (Biagioli & Lippman, Reference Biagioli and Lippman2020), or “aesthetic signals” (Prum, Reference Prum2012). Still other proxies may be impossible to inflate because they are causally linked to the goal (index signals) or may remain informative even while inflating (costly signals; Harris et al., Reference Harris, Daon and Nanjundiah2020; Zahavi, Butlin, Guilford, & Krebs, Reference Zahavi, Butlin, Guilford and Krebs1993). Crucially, any change in a proxy or incorporation of a new proxy by the regulator gives rise to novel affordances for the agent, which can lead to new rounds of hacking and counter measures. The result is an ecology of signals, constantly being renegotiated, but tending to increase in complexity and elaboration over time. Note that we do not mean to imply that the proxy treadmill is the only driver of complexity; in biological systems, for example, numerous additional forces have been studied (Box 7).

Box 7. Drivers of biological complexity

Importantly, the proxy treadmill is just one driver of biological complexity. Many additional forces, both adaptive and nonadaptive (Lynch, Reference Lynch2007), likely play key roles: The accumulation of neutral mutations (Gray, Lukeš, Archibald, Keeling, & Doolittle, Reference Gray, Lukeš, Archibald, Keeling and Doolittle2010), the evolution of mechanisms to suppress conflict at lower levels (Buss, Reference Buss1988), and other evolutionary arms races. The relation between the proxy treadmill and other evolutionary arms races (e.g., predator–prey or host–parasite) is particularly close, and so merits disambiguation: The proxy treadmill explicitly concerns a signal and its interpretation, which need not be present for other arms races.

In summary, proxy failure may lead to a second-order dynamic: The proxy treadmill, an arms race of hacking and counter-hacking that causes instability, escalation, and elaboration. This phenomenon again highlights the question of whether equilibrium analyses may systematically obscure some of the more interesting behaviours of biological and social systems: Signalling systems may remain far from equilibrium because of high levels of instability, and equilibria that are reached may be transient. Proxy failure and the proxy treadmill may be key drivers of complexity in both biological and social systems.

5.2. The proxy cascade: When high-level proxies constrain lower-level proxy failure

Throughout this paper, we have repeatedly noted that proxy-based decision systems are embedded in larger hierarchies, where high-level goals may constrain or shape low-level goals (Heylighen & Joslyn, Reference Heylighen, Joslyn and Meyers2001). For instance, markets contain corporations, which contain departments, which contain individuals (Aghion & Tirole, Reference Aghion and Tirole1997). Individuals are themselves nested hierarchical systems, containing organs such as the brain which in turn are organized in subsystems, down to the level of cells and organelles (Farnsworth et al., Reference Farnsworth, Albantakis and Caruso2017). How, then, might proxies and goals interact across multiple levels? Here, we argue that high-level regulation can constrain and control proxy failure at all lower levels – a cascade of regulation.

An important point to reiterate before we proceed is that the identification of proxy failure is inseparable from the definition of a goal at a specific level. In a hierarchy, the proxy of a “parent” regulator typically becomes the goal for a “child” agent. The failure claim is thus specific to the goal of the regulator but not to that of the agent. We emphasize this point, because considering proxy failure across levels requires particular attention to (i) goals at multiple levels and (ii) the specific goal that underlies the claim of proxy failure.

How can regulation cascade down a sequence of proxies in a nested hierarchy, effectively allowing the pursuit of integrated, high-level goals? Consider a corporation in which management has created a proxy (sales revenue) to promote its goal (say, profitability; Fig. 3). The sales department adopts sales revenue as their goal, and therefore creates a proxy (the number of telephone calls made) to motivate the outputs of its salespeople. The proxy of each higher level becomes the goal of the lower level. Proxy failure can occur at any level. But importantly, the corporation itself is subject to regulation at a still higher level, namely market selection.

Figure 3. The proxy cascade. Proxies often regulate the interaction between nested modules of complex hierarchical systems, such as corporations. Goals of lower levels tend to be proxies used by higher levels to regulate and coordinate those lower levels. The proxy cascade describes how a higher-level constraint cascades down the levels, constraining proxy failure at all lower levels.

First, assume proxy failure occurs at the department level: The sales department artificially inflates sales revenue numbers, perhaps by an accounting trick. If such department-level proxy failure undermines firm profitability (the firm-level goal) to a large enough extent, then bankruptcy might ensue. In practice, such an outcome may be mitigated by corporate management, which is thus efficiently incentivized to constrain department-level proxy failure. Regardless of the mechanism, market selection provides a higher-level constraint on the degree of proxy divergence at the department level.

Second, assume proxy failure occurs at the subdepartment level. Recall that the sales department chose the number of calls made as its proxy for the goal of increased revenue. Now employees have the goal to maximize the number of calls, but they may do this in ways that do not lead to increased sales. Perhaps employees purposely keep calls short to increase their total number, even though they would be more likely to make a sale if they stayed on the line longer with potential clients. This would undermine the department-level goal as well as the corporate-level goal: More calls, lower sales revenue, less profit. Would this lower-level proxy failure also be constrained? Arguably, yes. First, if the department-level proxy is constrained to function reasonably well, then department managers are efficiently incentivized to mitigate lower-level proxy failure. Second, if, for whatever reason, subdepartment-level proxies do excessively diverge, the firm will go bankrupt just like a corporation with excessively divergent proxies at the department level.
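The two scenarios above can be condensed into a toy simulation (again a stylized sketch with invented quantities, intended only to illustrate the argument): firms whose internal proxies diverge too far from profitability are removed by market selection, so the surviving population exhibits only bounded proxy failure.

```python
import random

def market_selection(n_firms=1000, max_divergence=1.0,
                     bankruptcy_profit=0.5, seed=1):
    """Toy proxy cascade: higher-level selection bounds lower-level divergence.

    Each firm's internal proxies (department- or subdepartment-level)
    diverge from the firm goal by a random amount; profit falls with
    divergence; firms whose profit drops below a bankruptcy threshold
    are selected out of the market.
    """
    rng = random.Random(seed)
    firms = [rng.uniform(0.0, max_divergence) for _ in range(n_firms)]
    # Profit is (stylized) 1 minus the internal proxy divergence.
    survivors = [d for d in firms if 1.0 - d >= bankruptcy_profit]
    return survivors

survivors = market_selection()
```

The point of the sketch is that the market need not observe the internal proxies at all: by selecting on the highest-level proxy (profit), it indirectly caps how far any lower-level proxy can drift – the cascade of regulation described above.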

Therefore, wherever proxy failure occurs in a hierarchy, it seems to be constrained by the highest-level proxy – what we term a proxy cascade. In the words of Heylighen and Joslyn (Reference Heylighen, Joslyn and Meyers2001), “higher-level negative feedbacks typically constrain the growth of lower-level positive feedbacks.” Informational limits (sect. 3.2) will still produce the pressure towards proxy failure and runaway dynamics at each level, but regulation of the regulators causes a counter pressure which cascades down the levels. At each level, this may lead to dynamic phenomena such as the proxy treadmill, the identification of more stable or honest proxies, or a mix of such phenomena. Overall, the result is a noisy self-organization of the cascade of proxies, ultimately allowing the integrated pursuit of a high-level goal.

The proxy cascade also seems to capture adaptationist (see Box 5) aspects of the interaction between natural selection and sexual selection. For example, female birds choose males based on how beautiful the males are – a proxy of their quality as a mate. But if the male develops ornaments that are too cumbersome or burdensome, the male (and his offspring) cannot survive the rigours of natural selection. This is not merely theoretical; Prum (Reference Prum2017) describes “evolutionary decadence,” by which female aesthetic choice shapes outrageous ornaments in certain bird species – potentially increasing extinction risk. Beyond mate choice, another instance of the proxy cascade concerns cooperative proxies for intimately associated organisms (such as hosts and symbionts); when cooperation is not properly enforced, or when proxies for cooperation fail, species face extinction risk (Ågren, Davies, & Foster, Reference Ågren, Davies and Foster2019); for example, Wolbachia endosymbionts threaten the survival of butterfly populations by altering sex ratios (Dyson & Hurst, Reference Dyson and Hurst2004). In this manner, natural selection may exert a constraining hand over proxy failure at lower levels in ecology. We offer these as initial ideas of proxy cascades in biology, but note that substantial further theoretical and empirical research is needed.

The proxy cascade suggests that high-level regulation can constrain lower-level proxy failure. It explains why high-level selection systems, such as natural selection or the market, can so efficiently coordinate complex structures towards coherent high-level goals. Further, the cascade of proxies suggests another possible opportunity for our pursuit of human goals – and that is the subject of the next section.

5.3. Proxy appropriation: When high-level goals hack low-level proxies

Finally, we introduce the novel idea that higher-level systems may harness or “appropriate” proxy failure at lower levels to achieve their high-level goals. We illustrate this using the example of artificial sweeteners.

Humans sense sweetness via the oral T1 receptor system, which employs a molecular “lock and key” mechanism (DuBois, Reference DuBois2016; Li et al., Reference Li, Staszewski, Xu, Durick, Zoller and Adler2002). The “goal” of such taste receptors is to enable the organism to base food choices on estimates of the nutritional value of ingested foods. The specific mechanism by which “assessment” takes place resides in the intricate molecular structure of the receptor dimer (the lock; Perez-Aguilar, Kang, Zhang, & Zhou, Reference Perez-Aguilar, Kang, Zhang and Zhou2019), which was shaped by evolution to “identify” valuable substrates (the keys). If “lock” and “key” sufficiently match, the molecules bind and T1 receptors initiate a complex signalling cascade, which ultimately feeds into a value estimate of the neural reward system (Breslin, Reference Breslin2013). Sweetness, as assessed by T1 receptor binding probability, can thus be understood as a proxy for nutritional value. All organisms, from bacteria to humans, rely on similar molecular assessment mechanisms to “infer” the nutritional value of molecular substrates, in order to guide ingestion decisions towards the goals of survival and homoeostasis (Jeckelmann & Erni, Reference Jeckelmann and Erni2020).

If receptor binding probability can be considered a “molecular proxy,” then what constitutes hacking? Some molecules might mimic the specific biophysical properties of the target molecules leading to “non-specific” binding (Frutiger et al., Reference Frutiger, Tanno, Hwu, Tiefenauer, Vörös and Nakatsuka2021). The artificial sweetener aspartame binds to the T1 receptor, eliciting a 200-fold more powerful sweetness effect than conventional sugars (Magnuson, Carakostas, Moore, Poulos, & Renwick, Reference Magnuson, Carakostas, Moore, Poulos and Renwick2016). Although the exact molecular mechanisms remain incompletely understood (DuBois, Reference DuBois2016), the consequence is that subjective sweetness ceases to track nutritional value. In other words, the pursuit of the proxy (whatever the receptor binds) has undermined its relation to the original evolutionary goal of that receptor (identifying energy-dense sugar molecules). Similar accounts emerge for some cases of neural or ecological proxy failure when they are traced down to the molecular level – they often involve “nonspecific” binding of receptors that play key roles in regulatory systems (see sects. 4.1 and 4.3).
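The lock-and-key logic can be made concrete with a stylized sketch (the shapes, tolerance, and calorie values below are invented for illustration and have no biochemical meaning): the receptor "accepts" any molecule geometrically close enough to its lock, so a zero-calorie mimic binds just as well as sugar.

```python
def binds(key_shape, lock_shape=(1.0, 2.0, 0.5), tolerance=0.3):
    """Stylized lock-and-key assessment: the receptor accepts any
    molecule whose 'shape' is within a tolerance of the lock,
    regardless of that molecule's nutritional value."""
    return all(abs(k - l) <= tolerance
               for k, l in zip(key_shape, lock_shape))

sugar     = {"shape": (1.0, 2.0, 0.5), "kcal": 4.0}  # matches the lock
mimic     = {"shape": (1.1, 1.9, 0.6), "kcal": 0.0}  # sweetener-like mimic
cellulose = {"shape": (0.2, 0.9, 1.5), "kcal": 0.0}  # nutritionally inert, no fit
```

Here binding (the proxy) ceases to track calories (the goal) as soon as a molecule mimics the key's geometry without carrying its nutritional payload – the molecular-level analogue of proxy hacking.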

The case of aspartame is particularly interesting, however, because it seems to reflect proxy appropriation rather than proxy failure: Proxy failure with respect to the goal of ingesting sugar would seem to imply undernourishment, but consumers choose sweeteners precisely because they allow sweet sensations without high calorific value. The fact that aspartame undermines the original purpose of both the molecular receptors and the neural reward system, thus becomes a feature. Our conscious self can hack our own reward system by intervening at the molecular level. Although the excessive use of artificial sweeteners entails problems of its own (Borges et al., Reference Borges, Louzada, de Sá, Laverty, Parra, Garzillo and Millett2017; Hsiao & Wang, Reference Hsiao and Wang2013), this phenomenon nevertheless illustrates that lower-level proxy failure can be harnessed to promote higher-level goals, allowing us to overcome narrow evolutionary dictates.

Above, we have observed the interaction between proxies from the molecular level to the level of the conscious individual. Remarkably, the ramifications of the proxy nature of the T1 receptor can also be traced further up the levels to social and economic proxy systems. The choices of individuals to consume aspartame translate to the profitability of products containing aspartame. In other words, the molecular proxy performance determines the market-level proxy performance, directing human activities and resources towards the production, processing, distribution, marketing, and regulation of the artificial sweetener. In this example, the goals of conscious individuals are arguably what directs the whole system, harnessing proxy failure at lower levels and guiding proxies at higher levels. To what extent this can be generalized to other chemical components of our remarkably unexplored modern diets remains to be explored (Barabási, Menichetti, & Loscalzo, Reference Barabási, Menichetti and Loscalzo2019). Indeed, corporate practices that harness addictive dynamics (e.g., in fast food; Small & DiFeliceantonio, Reference Small and DiFeliceantonio2019) appear to reflect additional instances, in which proxy failure at a lower level (“disrupted gut–brain signalling”) is harnessed to serve higher-level goals (profitability; Box 8).

Box 8. On “informative” vs. “persuasive” advertising (and marketing in general)

A perennial debate within economics concerns the question of whether corporate practices such as advertising primarily serve consumer or corporate goals (Akerlof & Shiller, Reference Akerlof and Shiller2015; Becker & Stigler, Reference Becker and Stigler1977; Hodgson, Reference Hodgson2003; Susser, Roessler, & Nissenbaum, Reference Susser, Roessler and Nissenbaum2019). By the “informative” view, advertising is predominantly “informative” and not intended to change consumer “tastes” (Becker & Stigler, Reference Becker and Stigler1977). The upshot is that advertising strictly serves consumer goals (Kirkpatrick, Reference Kirkpatrick1994; Rizzo & Whitman, Reference Rizzo and Whitman2019).

By contrast, the “persuasive” view suggests that advertising (or marketing more generally; Franklin, Ashton, Gorman, & Armstrong, Reference Franklin, Ashton, Gorman and Armstrong2022) can manipulate consumer behaviour and preferences (Hodgson, Reference Hodgson2003; Packard, Reference Packard1958; Zuboff, Reference Zuboff2019). This reflects an instance of proxy appropriation because consumers cease to be the “sovereign” of the market (Chirat, Reference Chirat2020; Galbraith, Reference Galbraith1998). Instead, market-level proxies hack neural proxies to serve the higher-level goal of maximizing profitability and consumption (Braganza, Reference Braganza2022a, Reference Braganza2022b). Topical examples abound: For instance, “dark patterns” are digital interfaces designed to increase profits by deceiving or manipulating consumers (Luguri & Strahilevitz, Reference Luguri and Strahilevitz2021; Mathur et al., Reference Mathur, Friedman, Mayer, Narayanan, Acar, Lucherini and Chetty2019). Descriptions of such practices in both digital (Zuboff, Reference Zuboff2019) and nondigital (Akerlof & Shiller, Reference Akerlof and Shiller2015; Hanson & Kysar, Reference Hanson and Kysar1999) contexts are too numerous to list. We subsume within this view any marketing practice that harnesses addictive consumer behaviour (Hadland, Rivera-Aguirre, Marshall, & Cerdá, Reference Hadland, Rivera-Aguirre, Marshall and Cerdá2019; Newall, Reference Newall2019; Stuckler, McKee, Ebrahim, & Basu, Reference Stuckler, McKee, Ebrahim and Basu2012). Note the close relation of the present issue and the controversies discussed in Boxes 2 and 4.

Exploring these cases suggests two important observations. The first is that categorizing a phenomenon as proxy failure requires attributing a specific goal to the system. It is often natural to consider the goals of conscious individuals. But it is important to recognize that the system may ultimately be guided by goals at other levels. The second is that proxies at one level tend to be goals at other levels, raising the question of the degree to which a chain or web of proxies is sufficient to understand proxy failure, and teleonomic behaviour more generally. Goals appear as abstract objects, which help us understand why proxies are the way they are or how they might develop into the future. Yet the only tangible footprints that goals leave behind in the physical world are proxies.

6. Conclusion and outlook

Certain forms of knowledge and control require a narrowing of vision. The great advantage of such tunnel vision is that it brings into sharp focus certain limited aspects of an otherwise far more complex and unwieldy reality. This very simplification, in turn, makes the phenomenon at the centre of the field of vision more legible and hence more susceptible to careful measurement and calculation. Combined with similar observations, an overall aggregate, synoptic view of a selective reality is achieved, making possible a high degree of schematic knowledge, control, and manipulation.

Seeing Like a State (Scott, Reference Scott2008)

In this paper we have argued that a plethora of diverse phenomena across biological and social scales can be categorized as proxy failure. Specifically, we suggest that whenever a regulator seeks to pursue a goal by incentivizing or selecting agents based on their production of a proxy, a pressure arises that tends to push the proxy away from the goal. We have explored examples of proxy failure in three domains, namely neuroscience, economics, and ecology, drawing attention to numerous similarities concerning both drivers and constraints of the phenomenon. We then outlined three characteristic consequences of proxy failure, which similarly recur across natural and social systems: The proxy treadmill suggests that signalling equilibria will often be unstable, tend to inflation, and drive increasing complexity. The proxy cascade suggests that, within nested hierarchical systems, higher levels can constrain proxy failure at lower levels, allowing goal-directed behaviour of the entire system. Finally, proxy appropriation suggests that higher-level proxies can harness proxy failure at lower levels to serve higher-level goals. The proxy perspective thus offers a powerful lens to explore how goal-oriented systems can achieve, or fail to achieve, their goals, and how the tension between both can drive complexity and coordination across biological and social systems.

Our account provides a theoretical unification across diverse disciplines and terminologies. However, the proposed synthesis is far from exhaustive: There are numerous instances of proxy failure in areas we have neglected (e.g., Muller, Reference Muller2018). Further, over the course of writing this paper we have come to realize that formal theories and models of proxy failure tend to be highly domain specific: It is unclear how they relate to each other, or how a formal model of the unified mechanism presently outlined might look. Proxy failure draws attention to a variety of difficult conceptual questions, including about the appropriateness of using analytic equilibrium modelling to study what may be better understood as out-of-equilibrium phenomena. It also touches on a range of controversies within disciplines, such as (in biology) whether signals are generally honest or (in economics) whether revealed preferences are generally normative. We have briefly discussed some of these issues in Boxes 2–8. But there is clearly much need of further investigation and debate. For these reasons we view open peer commentary as particularly well-suited to the topic: We look forward to additional examples of proxy failure and fresh theoretical considerations.

The proxy failure framework also dovetails with important lessons from the chequered history of human social engineering. In Seeing Like a State, the anthropologist James C. Scott describes how attempts to impose rationalized, static regulatory models onto populations routinely fail. The reason, we have argued, is that exploratory behaviour aimed at proxy maximization will eventually uncover the blind spots of a regulatory model. Our account suggests that the risk of proxy failure should never be underestimated, particularly once proxies have become established. In the words of Bryar and Carr (2021): "unless you have a regular process to independently validate the metric, assume that over time something will cause it to drift and skew the numbers" (emphasis added). In the worst case, proxies continue to be observed uncritically and effectively "displace the goal" (Merton, 1940; Wouters, 2020). Muller (2018) observes that proxies often direct "time and resources away from doing and towards documenting," which can lead to a "mushroom-like growth of administrative staff." This may have the additional effect of "crowding out" intrinsic incentives (Frey & Jegen, 2001): As teachers or doctors are increasingly micromanaged, they lose both the time and the will to perform aspects of their job that are difficult to measure. The result is a system of disillusioned practitioners, in which an ever-increasing share of total resources is directed towards increasing and measuring proxies (Muller, 2018).

But in the best case, proxy failure is continuously assessed, and proxies are modified, deemphasized, or discarded as appropriate. There are numerous examples where this appears to have happened: Akerlof and Shiller (2015) describe a process of continuous hacking and counter-hacking between regulators and pharmaceutical companies over the course of the twentieth century. Despite many successful attempts by agents to subvert regulation, it seems clear that the overall safety of pharmaceuticals has dramatically improved since the times when arsenic was sold as a universal cure. Similarly, dynamic regulation has contributed to increases in the safety of many consumer products and workplaces. Finally, in some cases it may be best to pause and ask whether the costs of an externally imposed proxy system outweigh its benefits (Muller, 2018). The present perspective suggests that, when we do opt for proxies, one of the greatest risks to real progress is underestimating the risk of proxy failure.

Acknowledgements

We thank David Haig, the participants of the Bonn AI-ethics seminar, and six anonymous reviewers for helpful comments on the draft.

Financial support

Y. J. J. was supported by the US National Institute of Mental Health Grant Nos. R01MH057414 and R01MH117785 (both awarded to Helen Barbas). D. E. M. was supported by a Stanford Science Fellowship and the NSF Postdoctoral Research Fellowship in Biology (PRFB) Program, grant 2109465. O. B. was supported by grants from Bonfor (GEROK) and the VW Foundation (Originalitätsverdacht).

Competing interest

None.

Appendix 1. A note on anthropomorphic language

We employ anthropomorphic terminology to describe both human and nonhuman (or nonconscious) decision-making systems simply because it is useful and concise (Ågren & Patten, 2022; Haig, 2020). Anthropomorphic terms – such as "goal," "decision," or "hack" – are often metaphorical, but avoiding them makes the discussion of proxy failure unnecessarily arduous. Our generalization rests on two concepts, teleonomy and selection by consequences, which share broad currency (McCoy & Haig, 2020; Smaldino & McElreath, 2016).

Teleonomy (Mayr, 1961) or the "intentional stance" (Dennett, 2009) justifies conceptualizing systems "as if" they had goals (Mayr, 1961; McShea, 2016) because it is useful and efficient, even if these goals are not explicitly or consciously represented anywhere. Teleonomy is applied widely in biology (Ellis & Kopel, 2019; Farnsworth et al., 2017; Hartwell, Hopfield, Leibler, & Murray, 1999; Roux, 2014) and underlies our use of anthropomorphic language in nonhuman contexts, independent of debates about the ontological status of "goals."

Selection by consequences (Skinner, 1981) points out a functional equivalence between natural selection and behaviour: Behaviour can be seen as the selection of actions, just as natural selection reflects the selection of traits. In both cases, it is the "consequences" of a given behaviour/trait that determine whether it will be selected. The implication is that reinforcement, competition, and selection can all be viewed as analogous optimization processes that influence agents' actions or traits, amplifying some and inhibiting others. Note that by drawing on this specific concept, we by no means endorse Skinnerian views (e.g., behaviourism) more broadly.

Appendix 2. Relation to Manheim and Garrabrant (2018)

Manheim and Garrabrant (2018) suggest a categorization of (at least) four mechanisms by which the optimization of a proxy measure can lead to its decorrelation from the goal: (1) regressional, (2) extremal, (3) causal, and (4) adversarial Goodhart (see also Demski & Garrabrant, 2020, for intuitive visualizations). We briefly introduce each mechanism and then outline how it relates to our framework.

  1. Regressional Goodhart, also known as the "optimizer's curse" (Smith & Winkler, 2006), is based on the observation that for any "noisy" (i.e., imperfect) correlation, the highest (i.e., optimal) values of one variable (the proxy) will tend to exceed the regression line, leading to a systematic "shortfall" with respect to the other variable (the goal) in the region where the proxy is optimized.

  2. Extremal Goodhart describes a situation in which "optimization pushes you outside the range where the correlation exists, into portions of the distribution which behave very differently" (Demski & Garrabrant, 2020). It is arguably a generalized version of regressional Goodhart for nonlinear relations.

  3. Causal Goodhart describes a situation in which an action to optimize the proxy destroys the initial correlation altogether. The main difference from (1) and (2) is that here a causal intervention in the real world is assumed, whereas the two previous cases pertain to purely inferential phenomena.

  4. Adversarial Goodhart describes a situation in which the use of the proxy induces "agents" to causally intervene in the real world, destroying the correlation that motivated the regulator to choose the proxy.
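The first of these mechanisms is easy to reproduce in simulation. The following sketch (our own illustration, not a model from Manheim and Garrabrant; the distributional choices and variable names are ours) demonstrates regressional Goodhart: when the regulator can only observe a noisy proxy, the agents that look best on the proxy systematically fall short of it on the goal.

```python
import random

random.seed(0)

# Regressional Goodhart ("optimizer's curse"): each agent has a latent
# goal value, but the regulator only observes a noisy proxy of it.
# Selecting the agents with the highest proxy values systematically
# overestimates their goal values.
N = 100_000
agents = []
for _ in range(N):
    goal = random.gauss(0, 1)           # what the regulator wants
    proxy = goal + random.gauss(0, 1)   # what the regulator measures
    agents.append((proxy, goal))

# The regulator rewards the top 1% by proxy.
selected = sorted(agents, reverse=True)[: N // 100]
mean_proxy = sum(p for p, _ in selected) / len(selected)
mean_goal = sum(g for _, g in selected) / len(selected)

# With goal and noise of equal variance, roughly half of the winners'
# apparent excellence is noise: their mean goal value is only about
# half their mean proxy value.
print(f"winners' mean proxy: {mean_proxy:.2f}")
print(f"winners' mean goal:  {mean_goal:.2f}")
```

Note that no agent behaves strategically here; the shortfall arises purely from selecting on a noisy measure, which is why cases (1) and (2) are "inferential."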

Although Manheim and Garrabrant describe only the third case as causal, we think this is somewhat misleading; rather, it is a consequence of framing some cases as purely "inferential" rather than "control" problems:

Inferential Goodhart: Cases 1 and 2 reflect inference problems. No causal effects are assumed to happen in the “real world.” The underlying distributions and causal networks linking proxy and goal are assumed as given and not modifiable (because an inferential procedure generally will not contain actuators). In these cases “when a measure becomes a target” means that the inferential algorithm, over the course of optimization, moves to a region of the data where the approximation becomes worse or breaks down. In this case, causation happens entirely within the inferential optimization procedure (the movement), such that it may not be seen as “real” causation. The inferential framing abstracts away from the causes that underlie the noise in “regressional” or the nonlinearity in “extremal” Goodhart (these causes would correspond to the causes that asymmetrically affect proxy and goal in our framework).

Control Goodhart: Cases 3 and 4 reflect control problems, where a mechanism that causally affects the "real world" is assumed. The "measure becoming the target" changes the distributions of proxy and/or goal. In case 3, the optimization algorithm itself manipulates the proxy values. In case 4, the "incentivization" of some "adversary" leads it to change those distributions. Case 4 thus seems to be a special case of 3, in which regulator and agent are more easily associated with distinct goals.

Cases 1–3 correspond to cases where agents are simply passive entities, strategies, or actions, selected by the regulator because they are associated with larger proxy values (cases 1 and 2) or cause the proxy to increase (case 3). Although we would argue that strict inference problems that lead to no action in the "real world" are of minor interest here, they may explain actions based on faulty inference. If we assume an action based on the inferences in cases 1 and 2, then the proxy failure becomes overtly causal. Consider a robot subject to inferential Goodhart: Once the robot acts on the faulty inference, it causes the measure to become a worse measure. Finally, cases 3 and 4 are functionally equivalent: They differ only in the mechanism by which the proxy is optimized (see Appendix 1). In this manner, all four cases can be related to our general mechanism.
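The adversarial case can likewise be sketched as a selection process. In the toy model below (our own construction; the effort budget, payoff functions, and parameter values are all hypothetical), agents split a fixed effort budget between real work, which advances the goal, and gaming, which inflates only the proxy. A regulator that repeatedly rewards the proxy selects ever-heavier gamers, so the proxy inflates while the goal collapses.

```python
import random

random.seed(1)

def proxy(work: float, gaming: float) -> float:
    # What the regulator measures: gaming is assumed to be a cheaper
    # way to raise the proxy than real work.
    return work + 2.0 * gaming

def goal(work: float, gaming: float) -> float:
    # What the regulator actually wants: only real work counts.
    return work

# Agents have a fixed effort budget of 1.0 and differ only in the
# share of effort devoted to gaming.
pop = [random.random() for _ in range(1000)]  # gaming shares in [0, 1)

# Selection by consequences: each round, the half of the population
# with the highest proxy "survives" and reproduces with small
# mutations of the gaming share.
for _ in range(20):
    pop.sort(key=lambda g: proxy(1 - g, g), reverse=True)
    survivors = pop[: len(pop) // 2]
    pop = survivors + [
        min(1.0, max(0.0, g + random.gauss(0, 0.02))) for g in survivors
    ]

mean_gaming = sum(pop) / len(pop)
mean_proxy = sum(proxy(1 - g, g) for g in pop) / len(pop)
mean_goal = sum(goal(1 - g, g) for g in pop) / len(pop)
print(f"mean gaming share: {mean_gaming:.2f}")
print(f"mean proxy: {mean_proxy:.2f}, mean goal: {mean_goal:.2f}")
```

Replacing the explicit selection step with individual reinforcement learning leaves the outcome unchanged, which is the functional equivalence of cases 3 and 4 (see Appendix 1).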

References

Aghion, P., & Tirole, J. (1997). Formal and real authority in organizations. The Journal of Political Economy, 105(1), 1–29.
Ågren, J. A., & Clark, A. G. (2018). Selfish genetic elements. PLoS Genetics, 14(11), e1007700. https://doi.org/10.1371/journal.pgen.1007700
Ågren, J. A., Davies, N. G., & Foster, K. R. (2019). Enforcement is central to the evolution of cooperation. Nature Ecology & Evolution, 3(7), 1018–1029. https://doi.org/10.1038/s41559-019-0907-1
Ågren, J. A., & Patten, M. M. (2022). Genetic conflicts and the case for licensed anthropomorphizing. Behavioral Ecology and Sociobiology, 76(12), 166. https://doi.org/10.1007/s00265-022-03267-6
Akerlof, G. A., & Shiller, R. J. (2015). Phishing for phools: The economics of manipulation and deception (pp. 1–288). Princeton University Press.
Albo, M. J., Winther, G., Tuni, C., Toft, S., & Bilde, T. (2011). Worthless donations: Male deception and female counter play in a nuptial gift-giving spider. BMC Evolutionary Biology, 11, 329. https://doi.org/10.1186/1471-2148-11-329
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. ArXiv:1606.06565. http://arxiv.org/abs/1606.06565
Anderson, B. A., Kuwabara, H., Wong, D. F., Gean, E. G., Rahmim, A., Brašić, J. R., … Yantis, S. (2016). The role of dopamine in value-based attentional orienting. Current Biology: CB, 26(4), 550–555. https://doi.org/10.1016/j.cub.2015.12.062
Andersson, M., & Simmons, L. W. (2006). Sexual selection and mate choice. Trends in Ecology & Evolution, 21(6), 296–302. https://doi.org/10.1016/j.tree.2006.03.015
Arathi, H. S., Ganeshaiah, K. N., Shaanker, R. U., & Hegde, S. G. (1996). Factors affecting embryo abortion in Syzygium cuminii (L.) Skeels (Myrtaceae). International Journal of Plant Sciences, 157(1), 49–52. https://doi.org/10.1086/297319
Arrow, K. J. (1950). A difficulty in the concept of social welfare. Journal of Political Economy, 58(4), 328–346. https://doi.org/10.1086/256963
Ashton, H. (2020). Causal Campbell–Goodhart's law and reinforcement learning. ArXiv:2011.01010. http://arxiv.org/abs/2011.01010
Backwell, P. R. Y., Christy, J. H., Telford, S. R., Jennions, M. D., & Passmore, J. (2000). Dishonest signalling in a fiddler crab. Proceedings of the Royal Society of London. Series B: Biological Sciences, 267(1444), 719–724. https://doi.org/10.1098/rspb.2000.1062
Baker, G. (2002). Distortion and risk in optimal incentive contracts. Journal of Human Resources, 37(4), 728–751.
Barabási, A. L., Menichetti, G., & Loscalzo, J. (2019). The unmapped chemical complexity of our diet. Nature Food, 1(1), 33–37. https://doi.org/10.1038/s43016-019-0005-1
Bardoscia, M., Battiston, S., Caccioli, F., & Caldarelli, G. (2017). Pathways towards instability in financial networks. Nature Communications, 8(1), 1–7. https://doi.org/10.1038/ncomms14416
Beale, N., Battey, H., Davison, A. C., & MacKay, R. S. (2020). An unethical optimization principle. Royal Society Open Science, 7(7), 200462. https://doi.org/10.1098/rsos.200462
Beck, D. M., & Kastner, S. (2009). Top-down and bottom-up mechanisms in biasing competition in the human brain. Vision Research, 49(10), 1154–1165. https://doi.org/10.1016/j.visres.2008.07.012
Becker, G., & Stigler, G. (1977). De gustibus non est disputandum. American Economic Review, 67(2), 76–90.
Becker, G. S., & Murphy, K. M. (1988). A theory of rational addiction. Journal of Political Economy, 96(4), 675–700.
Bénabou, R., & Tirole, J. (2016). Bonus culture: Competitive pay, screening, and multitasking. Journal of Political Economy, 124(2), 305–370. https://doi.org/10.3386/w18936
Berke, J. D. (2018). What does dopamine mean? Nature Neuroscience, 21(6), 787–793. https://doi.org/10.1038/s41593-018-0152-y
Berridge, K. C., & Robinson, T. E. (2016). Liking, wanting, and the incentive-sensitization theory of addiction. The American Psychologist, 71(8), 670–679. https://doi.org/10.1037/amp0000059
Beshears, J., Choi, J. J., Laibson, D., & Madrian, B. C. (2008). How are preferences revealed? Journal of Public Economics, 92(8–9), 1787–1794.
Bessi, A., Zollo, F., Del Vicario, M., Puliga, M., Scala, A., Caldarelli, G., … Quattrociocchi, W. (2016). Users polarization on Facebook and YouTube. PLoS ONE, 11(8), e0159641. https://doi.org/10.1371/journal.pone.0159641
Biagioli, M., & Lippman, A. (2020). Gaming the metrics: Misconduct and manipulation in academic research. MIT Press. https://ieeexplore.ieee.org/book/9072252
Bonner, S. E., & Sprinkle, G. B. (2002). The effects of monetary incentives on effort and task performance: Theories, evidence, and a framework for research. Accounting, Organizations and Society, 27(4–5), 303–345. https://doi.org/10.1016/S0361-3682(01)00052-6
Borges, M. C., Louzada, M. L., de Sá, T. H., Laverty, A. A., Parra, D. C., Garzillo, J. M. F., … Millett, C. (2017). Artificially sweetened beverages and the response to the global obesity crisis. PLoS Medicine, 14(1), e1002195. https://doi.org/10.1371/journal.pmed.1002195
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Bradshaw, S. (2019). Disinformation optimised: Gaming search engine algorithms to amplify junk news. Internet Policy Review, 8(4), 1–24. https://doi.org/10.14763/2019.4.1442
Braganza, O. (2020). A simple model suggesting economically rational sample-size choice drives irreproducibility. PLoS ONE, 15(3), e0229615. https://doi.org/10.1371/journal.pone.0229615
Braganza, O. (2022a). Market paternalism – Do people really want to be nudged towards consumption? IfSO Working Paper. https://www.uni-due.de/imperia/md/content/soziooekonomie/ifsowp23_braganza2022.pdf
Braganza, O. (2022b). Proxyeconomics, a theory and model of proxy-based competition and cultural evolution. Royal Society Open Science, 9(2), 1–41. https://doi.org/10.1098/rsos.211030
Breslin, P. A. S. (2013). An evolutionary perspective on food and human taste. Current Biology, 23(9), R409–R418. https://doi.org/10.1016/j.cub.2013.04.010
Bromberg-Martin, E. S., Matsumoto, M., & Hikosaka, O. (2010). Dopamine in motivational control: Rewarding, aversive, and alerting. Neuron, 68(5), 815–834. https://doi.org/10.1016/j.neuron.2010.11.022
Bryar, C., & Carr, B. (2021). Working backwards – Insights, stories and secrets from inside Amazon. Macmillan USA.
Bullock, D., Tan, C. O., & John, Y. J. (2009). Computational perspectives on forebrain microcircuits implicated in reinforcement learning, action selection, and cognitive control. Neural Networks, 22(5–6), 757–765. https://doi.org/10.1016/j.neunet.2009.06.008
Burnham, T. C. (2016). Economics and evolutionary mismatch: Humans in novel settings do not maximize. Journal of Bioeconomics, 18(3), 195–209. https://doi.org/10.1007/s10818-016-9233-8
Buss, L. W. (1988). The evolution of individuality. Princeton University Press.
Campbell, D. T. (1979). Assessing the impact of planned social change. Evaluation and Program Planning, 2(1), 67–90. https://doi.org/10.1016/0149-7189(79)90048-X
Casarini, L., Santi, D., Brigante, G., & Simoni, M. (2018). Two hormones for one receptor: Evolution, biochemistry, actions, and pathophysiology of LH and hCG. Endocrine Reviews, 39(5), 549–592. https://doi.org/10.1210/er.2018-00065
Chirat, A. (2020). A reappraisal of Galbraith's challenge to consumer sovereignty: Preferences, welfare and the non-neutrality thesis. European Journal of the History of Economic Thought, 27(2), 248–275. https://doi.org/10.1080/09672567.2020.1720763
Coddington, L. T., & Dudman, J. T. (2019). Learning from action: Reconsidering movement signaling in midbrain dopamine neuron activity. Neuron, 104(1), 63–77. https://doi.org/10.1016/j.neuron.2019.08.036
Conant, R. C., & Ashby, W. R. (1970). Every good regulator of a system must be a model of that system. International Journal of Systems Science, 1(2), 89–97.
Connelly, B. L., Certo, S. T., Ireland, R. D., & Reutzel, C. R. (2011). Signaling theory: A review and assessment. Journal of Management, 37(1), 39–67. https://doi.org/10.1177/0149206310388419
Cowen, N., & Dold, M. F. (2021). Introduction: Symposium on escaping paternalism: Rationality, behavioral economics and public policy by Mario J. Rizzo and Glen Whitman. Review of Behavioral Economics, 8(3–4), 213–220.
Cruz, B. F., & Paton, J. J. (2021). Dopamine gives credit where credit is due. Neuron, 109(12), 1915–1917. https://doi.org/10.1016/j.neuron.2021.05.033
Csiszar, A. (2020). Gaming metrics before the game: Citation and the bureaucratic virtuoso. In Biagioli, M. & Lippman, A. (Eds.), Gaming the metrics: Misconduct and manipulation in academic research (pp. 31–42). MIT Press. https://ieeexplore.ieee.org/document/9085771
Dawkins, M. S., & Guilford, T. (1991). The corruption of honest signalling. Animal Behaviour, 41(5), 865–873. https://doi.org/10.1016/S0003-3472(05)80353-7
Demski, A., & Garrabrant, S. (2020). Embedded agency. ArXiv.
Dennett, D. (2009). Intentional systems theory. In Beckermann, A., McLaughlin, B. P., & Walter, S. (Eds.), Oxford handbook of philosophy of mind (pp. 339–350). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199262618.003.0020
Deserno, L., Huys, Q. J. M., Boehme, R., Buchert, R., Heinze, H.-J., Grace, A. A., … Schlagenhauf, F. (2015). Ventral striatal dopamine reflects behavioral and neural signatures of model-based control during sequential decision making. Proceedings of the National Academy of Sciences of the United States of America, 112(5), 1595–1600. https://doi.org/10.1073/pnas.1417219112
DeYoung, C. G. (2013). The neuromodulator of exploration: A unifying theory of the role of dopamine in personality. Frontiers in Human Neuroscience, 7, 1–26. https://doi.org/10.3389/fnhum.2013.00762
Diana, M. (2011). The dopamine hypothesis of drug addiction and its potential therapeutic value. Frontiers in Psychiatry, 2, 1–7. https://www.frontiersin.org/articles/10.3389/fpsyt.2011.00064
Dold, M. F., & Schubert, C. (2018). Toward a behavioral foundation of normative economics. Review of Behavioral Economics, 5(3–4), 221–241. https://doi.org/10.1561/105.00000097
DuBois, G. E. (2016). Molecular mechanism of sweetness sensation. Physiology & Behavior, 164, 453–463. https://doi.org/10.1016/j.physbeh.2016.03.015
Dyson, E. A., & Hurst, G. D. D. (2004). Persistence of an extreme sex-ratio bias in a natural population. Proceedings of the National Academy of Sciences of the United States of America, 101(17), 6520–6523. https://doi.org/10.1073/pnas.0304068101
Edmans, A., Fang, V. W., & Lewellen, K. A. (2017). Equity vesting and investment. The Review of Financial Studies, 30(7), 2229–2271. https://doi.org/10.1093/rfs/hhx018
Ellis, G. F. R., & Kopel, J. (2019). The dynamical emergence of biology from physics: Branching causation via biomolecules. Frontiers in Physiology, 9, 1966. https://doi.org/10.3389/fphys.2018.01966
Everitt, T., Hutter, M., Kumar, R., & Krakovna, V. (2021). Reward tampering problems and solutions in reinforcement learning: A causal influence diagram perspective. Synthese, 198, 6435–6467. https://doi.org/10.1007/s11229-021-03141-4
Faddoul, M., Chaslot, G., & Farid, H. (2020). A longitudinal analysis of YouTube's promotion of conspiracy videos. ArXiv. http://arxiv.org/abs/2003.03318
Farnsworth, K. D., Albantakis, L., & Caruso, T. (2017). Unifying concepts of biological function from molecules to ecosystems. Oikos, 126(10), 1367–1376. https://doi.org/10.1111/oik.04171
Fellner, W. J., & Goehmann, B. (2020). Human needs, consumerism and welfare. Cambridge Journal of Economics, 44(2), 303–318. https://doi.org/10.1093/cje/bez046
Finan, F., & Schechter, L. (2012). Vote-buying and reciprocity. Econometrica, 80(2), 863–881. https://doi.org/10.3982/ECTA9035
Finefter-Rosenbluh, I., & Levinson, M. (2015). Philosophical inquiry in education. The Journal of the Canadian Philosophy of Education Society, 23, 3–21. https://journals.sfu.ca/pie/index.php/pie/article/view/894
Fire, M., & Guestrin, C. (2019). Over-optimization of academic publishing metrics: Observing Goodhart's law in action. GigaScience, 8(6), 1–20. https://doi.org/10.1093/gigascience/giz053
Flynn, T. T. (1922). The phylogenetic significance of the marsupial allantoplacenta.
Franco-Santos, M., & Otley, D. (2018). Reviewing and theorizing the unintended consequences of performance management systems. International Journal of Management Reviews, 20(3), 696–730. https://doi.org/10.1111/ijmr.12183
Frank, R. H. (2011). The Darwin economy: Liberty, competition, and the common good. Princeton University Press.
Franken, I. H. A., Booij, J., & van den Brink, W. (2005). The role of dopamine in human addiction: From reward to motivated attention. European Journal of Pharmacology, 526(1–3), 199–206. https://doi.org/10.1016/j.ejphar.2005.09.025
Franklin, M., Ashton, H., Gorman, R., & Armstrong, S. (2022). Recognising the importance of preference change: A call for a coordinated multidisciplinary research effort in the age of AI.
Fremstad, A., & Paul, M. (2022). Neoliberalism and climate change: How the free-market myth has prevented climate action. Ecological Economics, 197, 107353. https://doi.org/10.1016/j.ecolecon.2022.107353
Frey, B. S., & Jegen, R. (2001). Motivation crowding theory. Journal of Economic Surveys, 15(5), 589–611. https://doi.org/10.1111/1467-6419.00150
Friedman, E. J., & Oren, S. S. (1995). The complexity of resource allocation and price mechanisms under bounded rationality. Economic Theory, 6(2), 225–250. https://doi.org/10.1007/BF01212489
Frutiger, A., Tanno, A., Hwu, S., Tiefenauer, R. F., Vörös, J., & Nakatsuka, N. (2021). Nonspecific binding – Fundamental concepts and consequences for biosensing applications. Chemical Reviews, 121(13), 8095–8160. https://doi.org/10.1021/acs.chemrev.1c00044
Funk, D. H., & Tallamy, D. W. (2000). Courtship role reversal and deceptive signals in the long-tailed dance fly, Rhamphomyia longicauda. Animal Behaviour, 59(2), 411–421. https://doi.org/10.1006/anbe.1999.1310
Galbraith, J. K. (1998). The affluent society. Houghton Mifflin.
Gasparini, C., Serena, G., & Pilastro, A. (2013). Do unattractive friends make you look better? Context-dependent male mating preferences in the guppy. Proceedings of the Royal Society B: Biological Sciences, 280(1756), 20123072. https://doi.org/10.1098/rspb.2012.3072
Glimcher, P. W. (2011). Understanding dopamine and reinforcement learning: The dopamine reward prediction error hypothesis. Proceedings of the National Academy of Sciences of the United States of America, 108(Suppl. 3), 15647–15654. https://doi.org/10.1073/pnas.1014269108
Goodhart, C. A. E. (1975). Problems of monetary management: The UK experience. Papers in Monetary Economics, 1, 1–20. https://doi.org/10.1007/978-1-349-17295-5_4
Gray, M. W., Lukeš, J., Archibald, J. M., Keeling, P. J., & Doolittle, W. F. (2010). Irremediable complexity? Science, 330(6006), 920–921. https://doi.org/10.1126/science.1198594
Grether, G. F., Hudon, J., & Endler, J. A. (2001). Carotenoid scarcity, synthetic pteridine pigments and the evolution of sexual coloration in guppies (Poecilia reticulata). Proceedings of the Royal Society B: Biological Sciences, 268(1473), 1245. https://doi.org/10.1098/rspb.2001.1624
Griesemer, J. (2020). Taking Goodhart's law meta: Gaming, meta-gaming, and hacking academic performance metrics. In Biagioli, M. & Lippman, A. (Eds.), Gaming the metrics: Misconduct and manipulation in academic research (pp. 77–87). MIT Press.
Hadland, S. E., Rivera-Aguirre, A., Marshall, B. D. L., & Cerdá, M. (2019). Association of pharmaceutical industry marketing of opioid products with mortality from opioid-related overdoses. JAMA Network Open, 2(1), e186007. https://doi.org/10.1001/jamanetworkopen.2018.6007
Haig, D. (1993). Genetic conflicts in human pregnancy. The Quarterly Review of Biology, 68(4), 495–532. https://doi.org/10.1086/418300
Haig, D. (2020). From Darwin to Derrida: Selfish genes, social selves, and the meanings of life. MIT Press.
Hanson, J., & Kysar, D. A. (1999). Taking behavioralism seriously: The problem of market manipulation. New York University Law Review, 74, 632.
Hardin, R., & Cullity, G. (2020). The free rider problem. In Zalta, E. N. (Ed.), Stanford encyclopedia of philosophy (pp. 1–20). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/entries/free-rider/
Harris, K. D., Daon, Y., & Nanjundiah, V. (2020). The role of signaling constraints in defining optimal marginal costs of reliable signals. Behavioral Ecology, 31(3), 784–791. https://doi.org/10.1093/beheco/araa025
Hartman, C. G. (1920). Studies in the development of the opossum Didelphys virginiana L. V. The phenomena of parturition. The Anatomical Record, 19(5), 251–261. https://doi.org/10.1002/ar.1090190502
Hartwell, L. H., Hopfield, J. J., Leibler, S., & Murray, A. W. (1999). From molecular to modular cell biology. Nature, 402(S6761), C47–C52. https://doi.org/10.1038/35011540
Heather, N. (2017). Q: Is addiction a brain disease or a moral failing? A: Neither. Neuroethics, 10(1), 115–124. https://doi.org/10.1007/s12152-016-9289-0
Henke, A., & Gromoll, J. (2008). New insights into the evolution of chorionic gonadotrophin. Molecular and Cellular Endocrinology, 291(1), 11–19. https://doi.org/10.1016/j.mce.2008.05.009
Hennessy, C., & Goodhart, C. A. E. (2021). Goodhart's law and machine learning. SSRN 3639508. https://doi.org/10.2139/ssrn.3639508
Heylighen, F., & Joslyn, C. (2001). Cybernetics and second-order cybernetics. In Meyers, R. A. (Ed.), Encyclopedia of physical science & technology (pp. 1–24). Academic Press.
Hill, J. P., & Hill, W. C. O. (1955). The growth-stages of the pouch-young of the native cat (Dasyurus viverrinus) together with observations on the anatomy of the new-born young. The Transactions of the Zoological Society of London, 28(5), 349–352. https://doi.org/10.1111/j.1096-3642.1955.tb00003.x
Hodgson, G. M. (2003). The hidden persuaders: Institutions and individuals in economic theory. Cambridge Journal of Economics, 27(2), 159–175. https://doi.org/10.1093/cje/27.2.159
Holmström, B. (1979). Moral hazard and observability. The Bell Journal of Economics, 10(1), 74. https://doi.org/10.2307/3003320
Holmström, B. (2017). Pay for performance and beyond. American Economic Review, 107(7), 1753–1777. https://doi.org/10.1257/aer.107.7.1753
Houthakker, H. S. (1950). Revealed preference and the utility function. Economica, 17(66), 159. https://doi.org/10.2307/2549382
Hsiao, A., & Wang, Y. C. (2013). Reducing sugar-sweetened beverage consumption: Evidence, policies, and economics. Current Obesity Reports, 2(3), 191–199. https://doi.org/10.1007/s13679-013-0065-8
Ittner, C. D., Larcker, D. F., & Meyer, M. W. (1997). Performance, compensation, and the balanced scorecard. Wharton School, University of Pennsylvania.
Iwasa, Y., & Pomiankowski, A. (1995). Continual change in mate preferences. Nature, 377(6548), 420–422. https://doi.org/10.1038/377420a0
Jeckelmann, J.-M., & Erni, B. (2020). Transporters of glucose and other carbohydrates in bacteria. Pflugers Archiv: European Journal of Physiology, 472(9), 1129–1153. https://doi.org/10.1007/s00424-020-02379-0
Jones, B. A., & Rachlin, H. (2009). Delay, probability, and social discounting in a public goods game. Journal of the Experimental Analysis of Behavior, 91(1), 61–73. https://doi.org/10.1901/jeab.2009.91-61
Kauffman, S. A. (2019). A world beyond physics: The emergence and evolution of life. Oxford University Press.
Kelly, C., & Snower, D. J. (2021). Capitalism recoupled. Oxford Review of Economic Policy, 37(4), 851863. https://doi.org/10.1093/oxrep/grab025CrossRefGoogle Scholar
Kerr, S. (1975). On the folly of rewarding A, while hoping for B. Academy of Management Journal, 18(4), 769783. https://doi.org/10.5465/255378CrossRefGoogle ScholarPubMed
Kirchhoff, M., Parr, T., Palacios, E., Friston, K., & Kiverstein, J. (2018). The Markov blankets of life: Autonomy, active inference and the free energy principle. Journal of the Royal Society Interface, 15(138), 20170792. https://doi.org/10.1098/rsif.2017.0792CrossRefGoogle ScholarPubMed
Kirkpatrick, J. (1994). Defense of advertising: Arguments from reason, ethical egoism, and laissez-faire capitalism. https://philpapers.org/rec/KIRIDOGoogle Scholar
Kobayashi, S., & Schultz, W. (2008). Influence of reward delays on responses of dopamine neurons. Journal of Neuroscience, 28(31), 78377846. https://doi.org/10.1523/JNEUROSCI.1600-08.2008CrossRefGoogle ScholarPubMed
Koretz, D. M. (2008). Measuring up: What educational testing really tells us. Harvard University Press.CrossRefGoogle Scholar
Krebs, J. R., & Dawkins, R. (1984). Animal Signals: Mind-Reading and Manipulation. In Krebs, J. R., & Davies, N. B. (Eds.), Behavioural Ecology: An Evolutionary Approach (S. 380–402). Blackwell Scientific.Google Scholar
Lebreton, M., Jorge, S., Michel, V., Thirion, B., & Pessiglione, M. (2009). An automatic valuation system in the human brain: Evidence from functional neuroimaging. Neuron, 64(3), 431439. https://doi.org/10.1016/J.NEURON.2009.09.040CrossRefGoogle ScholarPubMed
Levy, D. J., & Glimcher, P. W. (2012). The root of all value: A neural common currency for choice. Current Opinion in Neurobiology, 22(6), 10271038. https://doi.org/10.1016/J.CONB.2012.06.001CrossRefGoogle ScholarPubMed
Lewis, D. A., & Sesack, S. R. (1997). Chapter VI – Dopamine systems in the primate brain. In Bloom, F. E., Björklund, A., & Hökfelt, T. (Eds.), Handbook of chemical neuroanatomy (Vol. 13, pp. 263375). Elsevier. https://doi.org/10.1016/S0924-8196(97)80008-5Google Scholar
Leyton, M., & Vezina, P. (2014). Dopamine ups and downs in vulnerability to addictions: A neurodevelopmental model. Trends in Pharmacological Sciences, 35(6), 268–276. https://doi.org/10.1016/j.tips.2014.04.002
Li, X., Staszewski, L., Xu, H., Durick, K., Zoller, M., & Adler, E. (2002). Human receptors for sweet and umami taste. Proceedings of the National Academy of Sciences of the United States of America, 99(7), 4692–4696. https://doi.org/10.1073/pnas.072090199
Lucas, R. E. (1976). Econometric policy evaluation: A critique. Carnegie-Rochester Conference Series on Public Policy, 1, 19–46. https://doi.org/10.1016/S0167-2231(76)80003-6
Luguri, J., & Strahilevitz, L. (2021). Shining a light on dark patterns. Journal of Legal Analysis, 13(1), 43–109. https://doi.org/10.2139/ssrn.3431205
Lüpold, S., Manier, M. K., Puniamoorthy, N., Schoff, C., Starmer, W. T., Luepold, S. H. B., … Pitnick, S. (2016). How sexual selection can drive the evolution of costly sperm ornamentation. Nature, 533(7604), 535–538. https://doi.org/10.1038/nature18005
Lynch, M. (2007). The frailty of adaptive hypotheses for the origins of organismal complexity. Proceedings of the National Academy of Sciences of the United States of America, 104(Suppl. 1), 8597–8604. https://doi.org/10.1073/pnas.0702207104
Magnuson, B. A., Carakostas, M. C., Moore, N. H., Poulos, S. P., & Renwick, A. G. (2016). Biological fate of low-calorie sweeteners. Nutrition Reviews, 74(11), 670–689. https://doi.org/10.1093/nutrit/nuw032
Manheim, D. (2018). Overoptimization failures and specification gaming in multi-agent systems. ArXiv:1810.10862. https://arxiv.org/abs/1810.10862v2
Manheim, D., & Garrabrant, S. (2018). Categorizing variants of Goodhart's law. ArXiv:1803.04585. https://arxiv.org/abs/1803.04585
Mas-Colell, A., Whinston, M. D., & Green, J. R. (1995). Microeconomic theory (Vol. 1). Oxford University Press.
Masur, J. S., & Posner, E. A. (2015). Toward a Pigouvian state. University of Pennsylvania Law Review, 164(1), 93–147.
Mathur, A., Friedman, M. J., Mayer, J., Narayanan, A., Acar, G., Lucherini, E., & Chetty, M. (2019). Dark patterns at scale: Findings from a crawl of 11K shopping websites. Proceedings of the ACM on Human-Computer Interaction, 3, 1–32. https://doi.org/10.1145/3359183
Mayer, C. (2021). The future of the corporation and the economics of purpose. Journal of Management Studies, 58(3), 887–901. https://doi.org/10.1111/joms.12660
Mayr, E. (1961). Cause and effect in biology. Science, 134(3489), 1501–1506.
McCoy, D. E., & Haig, D. (2020). Embryo selection and mate choice: Can “honest signals” be trusted? Trends in Ecology & Evolution, 35(4), 308–318. https://doi.org/10.1016/J.TREE.2019.12.002
McCoy, D. E., Shultz, A. J., Vidoudez, C., van der Heide, E., Dall, J. E., Trauger, S. A., & Haig, D. (2021). Microstructures amplify carotenoid plumage signals in tanagers. Scientific Reports, 11(1), 1–20. https://doi.org/10.1038/s41598-021-88106-w
McElroy, J. S. (2014). Vavilovian mimicry: Nikolai Vavilov and his little-known impact on weed science. Weed Science, 62(2), 207–216. https://doi.org/10.1614/WS-D-13-00122.1
McShea, D. W. (2016). Freedom and purpose in biology. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 58, 64–72. https://doi.org/10.1016/J.SHPSC.2015.12.002
Merton, R. K. (1940). Bureaucratic structure and personality. Social Forces, 18(4), 560–568. https://doi.org/10.2307/2570634
Mink, J. W. (2018). Basal ganglia mechanisms in action selection, plasticity, and dystonia. European Journal of Paediatric Neurology: EJPN, 22(2), 225–229. https://doi.org/10.1016/j.ejpn.2018.01.005
Mirowski, P., & Nik-Khah, E. (2017). The knowledge we have lost in information (Vol. 1). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190270056.001.0001
Muller, J. Z. (2018). The tyranny of metrics. Princeton University Press.
Newall, P. W. S. (2019). Dark nudges in gambling. Addiction Research & Theory, 27(2), 65–67. https://doi.org/10.1080/16066359.2018.1474206
Nichols, S. L., & Berliner, D. (2005). The inevitable corruption of indicators and educators through high-stakes testing. The Great Lakes Center for Education Research & Practice.
O'Mahony, S. (2018). Medicine and the McNamara fallacy. Journal of the Royal College of Physicians of Edinburgh, 47(3), 281–287. https://doi.org/10.4997/JRCPE.2017.315
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Packard, V. (1958). The hidden persuaders. Pocket Books.
Pearce, J. A. (1982). Problems facing first-time managers. Human Resource Management, 21(1), 35–38. https://doi.org/10.1002/hrm.3930210108
Penn, D. J., & Számadó, S. (2020). The handicap principle: How an erroneous hypothesis became a scientific principle. Biological Reviews, 95(1), 267–290. https://doi.org/10.1111/brv.12563
Perez-Aguilar, J. M., Kang, S., Zhang, L., & Zhou, R. (2019). Modeling and structural characterization of the sweet taste receptor heterodimer. ACS Chemical Neuroscience, 10(11), 4579–4592. https://doi.org/10.1021/acschemneuro.9b00438
Peters, A., Schweiger, U., Pellerin, L., Hubold, C., Oltmanns, K. M., Conrad, M., … Fehm, H. L. (2004). The selfish brain: Competition for energy resources. Neuroscience & Biobehavioral Reviews, 28(2), 143–180. https://doi.org/10.1016/J.NEUBIOREV.2004.03.002
Poku, M. (2016). Campbell's law: Implications for health care. Journal of Health Services Research & Policy, 21(2), 137–139. https://doi.org/10.1177/1355819615593772
Price, I., & Clark, E. (2009). An output approach to property portfolio performance measurement. Property Management, 27(1), 6–15.
Prum, R. O. (2010). The Lande–Kirkpatrick mechanism is the null model of evolution by intersexual selection: Implications for meaning, honesty, and design in intersexual signals. Evolution, 64, 3085–3100.
Prum, R. O. (2012). Aesthetic evolution by mate choice: Darwin's really dangerous idea. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1600), 2253–2265. https://doi.org/10.1098/RSTB.2011.0285
Prum, R. O. (2017). The evolution of beauty: How Darwin's forgotten theory of mate choice shapes the animal world. Anchor.
Pueyo, S. (2018). Growth, degrowth, and the challenge of artificial superintelligence. Journal of Cleaner Production, 197, 1731–1736. https://doi.org/10.1016/J.JCLEPRO.2016.12.138
Puig, M. V., Rose, J., Schmidt, R., & Freund, N. (2014). Dopamine modulation of learning and memory in the prefrontal cortex: Insights from studies in primates, rodents, and birds. Frontiers in Neural Circuits, 8(Aug), 93. https://doi.org/10.3389/FNCIR.2014.00093
Raworth, K. (2018). Doughnut economics: Seven ways to think like a 21st-century economist. Random House Business.
Rendell, L., Fogarty, L., & Laland, K. N. (2011). Runaway cultural niche construction. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1566), 823–835. https://doi.org/10.1098/rstb.2010.0256
Reynolds, J. N. J., Hyland, B. I., & Wickens, J. R. (2001). A cellular mechanism of reward-related learning. Nature, 413(6851), 67–70. https://doi.org/10.1038/35092560
Rizzo, M. J., & Whitman, G. (2019). Escaping paternalism: Rationality, behavioral economics, and public policy (pp. 1–496). Cambridge University Press. https://doi.org/10.1017/9781139061810
Rodamar, J. (2018). There ought to be a law! Campbell versus Goodhart. Significance, 15(6), 9. https://doi.org/10.1111/j.1740-9713.2018.01205.x
Roux, E. (2014). The concept of function in modern physiology. The Journal of Physiology, 592(11), 2245–2249. https://doi.org/10.1113/jphysiol.2014.272062
Samuelson, P. A. (1938). A note on the pure theory of consumer's behaviour. Economica, 5(17), 61. https://doi.org/10.2307/2548836
Satel, S., & Lilienfeld, S. O. (2014). Addiction and the brain-disease fallacy. Frontiers in Psychiatry, 4, 141. https://doi.org/10.3389/fpsyt.2013.00141
Schultz, W. (2022). Dopamine reward prediction error coding. Dialogues in Clinical Neuroscience, 18(1), 23–32. https://doi.org/10.31887/DCNS.2016.18.1/WSCHULTZ
Scott, J. C. (2008). Seeing like a state. Yale University Press. https://doi.org/10.12987/9780300128789
Seo, M., Lee, E., & Averbeck, B. B. (2012). Action selection and action value in frontal-striatal circuits. Neuron, 74(5), 947–960. https://doi.org/10.1016/j.neuron.2012.03.037
Shaanker, R. U., Ganeshaiah, K. N., & Bawa, K. S. (1988). Parent–offspring conflict, sibling rivalry, and brood size patterns in plants. Annual Review of Ecology and Systematics, 19(1), 177–205. https://doi.org/10.1146/annurev.es.19.110188.001141
Shaffer, H. G. (1963). A new incentive for soviet managers. The Russian Review, 22(4), 410–416. https://doi.org/10.2307/126674
Shen, W., Flajolet, M., Greengard, P., & Surmeier, D. J. (2008). Dichotomous dopaminergic control of striatal synaptic plasticity. Science, 321(5890), 848–851. https://doi.org/10.1126/SCIENCE.1160575
Siebert, H. (2001). Der Kobra-Effekt: Wie man Irrwege der Wirtschaftspolitik vermeidet [The cobra effect: How to avoid wrong turns in economic policy]. Deutsche Verlags-Anstalt.
Skinner, B. F. (1981). Selection by consequences. Science, 213(4507), 501–504. https://doi.org/10.1126/science.7244649
Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384. https://doi.org/10.1098/rsos.160384
Small, D. M., & DiFeliceantonio, A. G. (2019). Processed foods and food reward. Science, 363(6425), 346–347. https://doi.org/10.1126/science.aav0556
Smith, J. E., & Winkler, R. L. (2006). The optimizer's curse: Skepticism and postdecision surprise in decision analysis. Management Science, 52(3), 311–322. https://doi.org/10.1287/mnsc.1050.0451
Spence, M. (1973). Job market signaling. The Quarterly Journal of Economics, 87(3), 355. https://doi.org/10.2307/1882010
Strathern, M. (1997). “Improving ratings”: Audit in the British university system. European Review, 5(3), 305–321.
Stroebe, W. (2016). Why good teaching evaluations may reward bad teaching: On grade inflation and other unintended consequences of student evaluations. Perspectives on Psychological Science, 11(6), 800–816. https://doi.org/10.1177/1745691616650284
Strombach, T., Weber, B., Hangebrauk, Z., Kenning, P., Karipidis, I. I., Tobler, P. N., & Kalenscher, T. (2015). Social discounting involves modulation of neural value signals by temporoparietal junction. Proceedings of the National Academy of Sciences of the United States of America, 112(5), 1619–1624. https://doi.org/10.1073/pnas.1414715112
Stuckler, D., McKee, M., Ebrahim, S., & Basu, S. (2012). Manufacturing epidemics: The role of global producers in increased consumption of unhealthy commodities including processed foods, alcohol, and tobacco. PLoS Medicine, 9(6), e1001235. https://doi.org/10.1371/journal.pmed.1001235
Sugden, R. (2017). Do people really want to be nudged towards healthy lifestyles? International Review of Economics, 64(2), 113–123.
Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online manipulation: Hidden influences in a digital world. Georgetown Law Technology Review, 4, 1–45. https://doi.org/10.2139/ssrn.3306006
Számadó, S., & Penn, D. J. (2018). Does the handicap principle explain the evolution of dimorphic ornaments? Animal Behaviour, 138, e7–e10. https://doi.org/10.1016/j.anbehav.2018.01.005
Tardieu, H., Daly, D., Esteban-Lauzán, J., Hall, J., & Miller, G. (2020). Measuring the transformation – KPIs for understanding transformation progress. In Deliberately digital (p. 202). Springer. https://doi.org/10.1007/978-3-030-37955-1_4
Tayan, B. (2019). The Wells Fargo cross-selling scandal. In Rock Center for Corporate Governance at Stanford University Closer Look Series: Topics, Issues and Controversies in Corporate Governance No. CGRP-62 Version 2. https://corpgov.law.harvard.edu/2019/02/06/the-wells-fargo-cross-selling-scandal-2/
Thaler, R. H. (2018). From cashews to nudges: The evolution of behavioral economics. American Economic Review, 108(6), 1265–1287. https://doi.org/10.1257/aer.108.6.1265
Thaler, R., & Sunstein, C. (2008). Nudge: Improving decisions about health, wealth, and happiness. Penguin Books.
The Lincoln Electric Company. (1975).
Thomas, R. L., & Uminsky, D. (2022). Reliance on metrics is a fundamental challenge for AI. Patterns, 3(5), 100476. https://doi.org/10.1016/J.PATTER.2022.100476
Thomson, R., Royed, T., Naurin, E., Artés, J., Costello, R., Ennser-Jedenastik, L., … Praprotnik, K. (2017). The fulfillment of parties’ election pledges: A comparative study on the impact of power sharing. American Journal of Political Science, 61(3), 527–542. https://doi.org/10.1111/ajps.12313
van der Kolk, B. (2022). Numbers speak for themselves, or do they? On performance measurement and its implications. Business & Society, 61(4), 813–817. https://doi.org/10.1177/00076503211068433
Vann, M. G. (2003). Of rats, rice, and race: The great Hanoi rat massacre, an episode in French colonial history. French Colonial History, 4(1), 191–203. https://doi.org/10.1353/fch.2003.0027
Vavilov, N. I. (1922). The law of homologous series in variation. Journal of Genetics, 12(1), 47–89. https://doi.org/10.1007/BF02983073
Volkow, N. D., Wise, R. A., & Baler, R. (2017). The dopamine motive system: Implications for drug and food addiction. Nature Reviews Neuroscience, 18(12), 741–752. https://doi.org/10.1038/nrn.2017.130
von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton University Press. https://doi.org/10.2307/2019327
Weaver, R. J., Santos, E. S. A., Tucker, A. M., Wilson, A. E., & Hill, G. E. (2018). Carotenoid metabolism strengthens the link between feather coloration and individual quality. Nature Communications, 9(1), 73. https://doi.org/10.1038/s41467-017-02649-z
Wickler, W. (1965). Mimicry and the evolution of animal communication. Nature, 208(5010), 519–521. https://doi.org/10.1038/208519a0
Wiedmann, T., Lenzen, M., Keyßer, L. T., & Steinberger, J. K. (2020). Scientists’ warning on affluence. Nature Communications, 11(1), 1–10. https://doi.org/10.1038/s41467-020-16941-y
Willson, M. F., & Burley, N. (1983). Mate choice in plants: Tactics, mechanisms, and consequences (MPB-19). Princeton University Press. https://doi.org/10.2307/j.ctvx5wbss
Wilson, D. S., & Kirman, A. P. (2016). Complexity and evolution: Toward a new synthesis for economics. MIT Press.
Wise, R. A. (2004). Dopamine, learning and motivation. Nature Reviews Neuroscience, 5(6), 483–494. https://doi.org/10.1038/nrn1406
Wise, R. A., & Robble, M. A. (2020). Dopamine and addiction. Annual Review of Psychology, 71, 79–106. https://doi.org/10.1146/ANNUREV-PSYCH-010418-103337
Wouters, P. (2020). The mismeasurement of quality and impact. In Biagioli, M. & Lippman, A. (Eds.), Gaming the metrics: Misconduct and manipulation in academic research (pp. 67–76). MIT Press.
Yankelovich, D. (1972). Corporate priorities: A continuing study of the new demands on business. Yankelovich Inc.
Yin, H. H., & Knowlton, B. J. (2006). The role of the basal ganglia in habit formation. Nature Reviews Neuroscience, 7(6), 464–476. https://doi.org/10.1038/nrn1919
Zahavi, A. (1975). Mate selection – A selection for a handicap. Journal of Theoretical Biology, 53(1), 205–214. https://doi.org/10.1016/0022-5193(75)90111-3
Zahavi, A., Butlin, R. K., Guilford, T., & Krebs, J. R. (1993). The fallacy of conventional signalling. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 340(1292), 227–230. https://doi.org/10.1098/rstb.1993.0061
Zeleznik, A. J. (1998). In vivo responses of the primate corpus luteum to luteinizing hormone and chorionic gonadotropin. Proceedings of the National Academy of Sciences of the United States of America, 95(18), 11002–11007. https://doi.org/10.1073/pnas.95.18.11002
Zhuang, S., & Hadfield-Menell, D. (2021). Consequences of misaligned AI (arXiv:2102.03896). arXiv. http://arxiv.org/abs/2102.03896
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
Table 1. Examples of proxy failure

Table 2. Key propositions and their corollaries

Figure 1. Regulator, goal, agent, proxy, and their potential causal links. Proxy failure can occur when a regulator with a goal uses a proxy to incentivize/select agents. (A) In complex causal networks the causes (arrows) of proxy and goal generally will not perfectly overlap. There may be proxy-independent causes of the goal (c1), goal-independent causes of the proxy (c3), as well as causes of both proxy and goal (c2; note that this subsumes cases in which an additional direct causal link between proxy and goal exists). (B) The regulator makes the proxy a “target” for agents in order to foster (c2). Yet this will tend to induce a shift of actions/properties towards (c3), potentially at the cost of (c1). The causal effects of goals on regulators or proxies on agents are depicted as grey arrows, given that they reflect indirect teleonomic mechanisms such as incentivization or selection. Note that these diagrams are illustrative rather than comprehensive. For instance, the causal diagram of the Hanoi rat massacre would require an “inhibitory” arrow from proxy to goal, as breeding rats directly harmed the goal rather than just diverting resources from it.
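The selection dynamic of Figure 1B can be illustrated with a toy simulation (our own sketch with made-up numbers, not a model from the target article). Each agent splits a fixed effort budget across the three causal channels of Figure 1A: c1 (goal-only), c2 (shared), and c3 (proxy-only). The regulator repeatedly selects agents by the proxy alone; effort predictably drains out of c1, so the proxy rises to its ceiling while it stops tracking the goal.

```python
import random

random.seed(0)

def proxy(effort):  # the regulator observes only c2 + c3
    return effort["c2"] + effort["c3"]

def goal(effort):   # the goal is actually caused by c1 + c2
    return effort["c1"] + effort["c2"]

# Population of agents, each splitting a fixed effort budget
# of 3.0 across the three causal channels of Figure 1A.
pop = [{"c1": 1.0, "c2": 1.0, "c3": 1.0} for _ in range(100)]

for generation in range(200):
    # Mutate: shift a little effort between two random channels.
    for agent in pop:
        a, b = random.sample(["c1", "c2", "c3"], 2)
        delta = min(agent[a], 0.1)
        agent[a] -= delta
        agent[b] += delta
    # Select: the regulator keeps the half with the highest *proxy*
    # and lets them reproduce (cloning).
    pop.sort(key=proxy, reverse=True)
    pop = pop[:50] + [dict(a) for a in pop[:50]]

mean_proxy = sum(proxy(a) for a in pop) / len(pop)
mean_goal = sum(goal(a) for a in pop) / len(pop)
mean_c1 = sum(a["c1"] for a in pop) / len(pop)
# Selection on the proxy strips effort from c1: the proxy climbs
# toward its ceiling while the goal no longer tracks it.
```

The sketch also shows why proxy failure is typically partial rather than total: the shared cause c2 is still rewarded, so the goal does not collapse to zero; only the proxy-goal correlation degrades.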

Table 3. Proxy failure in neuroscience, economics, and ecology

Table 4. Overview of consequences of proxy failure

Figure 2. The proxy treadmill. Illustration of the proxy treadmill and its consequences. (A) An agent “hacks” a proxy by, say, finding cheaper ways to make it. The regulator responds by “counter-hacking,” for example, requiring higher levels of the proxy, or requiring new proxy attributes. Each pursues their own goal (which for the agent, here, is reduced to wanting to maximize the proxy). (B) This results in instability, escalation, and elaboration of the signalling system.
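The hack/counter-hack loop of Figure 2 can be sketched numerically (hypothetical numbers of our own choosing): each round the agent finds a cheaper way to produce a unit of the proxy, and the regulator counter-hacks by raising the required level until an acceptable signal again costs what it used to.

```python
cost_per_unit = 1.0     # agent's cost of producing one unit of proxy
required_level = 1.0    # regulator's acceptance threshold

history = []
for round_ in range(10):
    # Agent "hacks": a cheaper production route halves the unit cost,
    # so a given proxy level no longer guarantees underlying quality.
    cost_per_unit *= 0.5
    # Regulator "counter-hacks": it doubles the required level to
    # restore the total cost of an honest signal.
    required_level *= 2.0
    history.append((required_level, cost_per_unit * required_level))

# The displayed proxy level escalates exponentially (elaboration of
# the signalling system), while the total signalling cost is constant:
# running hard to stay in place — a treadmill.
```

Note the panel (B) outcome: neither party is better off, yet the signal itself has become far more elaborate.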

Figure 3. The proxy cascade. Proxies often regulate the interaction between nested modules of complex hierarchical systems, such as corporations. Goals of lower levels tend to be proxies used by higher levels to regulate and coordinate those lower levels. The proxy cascade describes how a higher-level constraint cascades down the levels, constraining proxy failure at all lower levels.
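One way to picture the cascade is a toy calculation (again with made-up numbers): each nested level would inflate its own proxy by some "hack factor", but a hard constraint imposed at the top caps the cumulative inflation, and that cap bounds proxy failure at every level below it.

```python
hack_factors = [1.5, 1.5, 1.5]  # desired proxy inflation at each nested level
top_level_cap = 2.0             # hard constraint imposed at the top level

inflation = 1.0
realized = []
for factor in hack_factors:
    # Each level inflates its proxy, but the top-level constraint
    # cascades down: cumulative inflation can never exceed the cap.
    inflation = min(inflation * factor, top_level_cap)
    realized.append(inflation)
# realized == [1.5, 2.0, 2.0]: inflation stalls once the cap binds,
# so lower levels inherit the higher-level constraint.
```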