I. Risk, Innovation, and Public Trust
In 1990, researchers conducted a poll asking whether people trusted government scientists to evaluate the safety of a proposed Nuclear Waste Storage Repository at Yucca Mountain. They found that only 29 percent of those polled believed that the federal government would be honest in its effort to research the safety of the site, while 68 percent believed that the government scientists would probably cook the books. Additionally, 52 percent of those surveyed said they thought the facility would be built whether the site was found to be safe or not. 1 Commenting on these responses, Rebecca Bratspies writes that they reveal “a lack of trust in the objectivity and intellectual honesty of the decision makers, and suggest a clear perception that the research process was an attempt to drum up public support for an already crafted agenda, rather than a genuine attempt at dialogue and shared agenda building.” 2 If regulatory agencies are to serve their assigned functions, they need to be entrusted appropriately to manage risks, to respect and protect rights, and to promote the public good. But trust cannot be taken for granted, and mistrust is often justified. Perhaps the polled citizens were right to doubt their government: What reasons would they have to believe that the risk analysis would be done impartially?
Some technologies apparently inspire intense, almost instinctive mistrust. Just as the 1990 poll showed mistrust of nuclear energy, there is similar public mistrust of biotechnology and of the agencies charged to regulate it. Surveys regularly show that the public fears biotech innovation and that many people do not believe that regulatory agencies will effectively protect their interests. 3 Since new technologies like biotech offer both the promise of benefits and the possibility of risks, how (if at all) should they be regulated? We can interpret this question to address the authority to regulate, the morality of regulation, or the strategic rationality of alternative regulatory regimes. Questions about the justification of public action (in this case, the regulation of technology) are often posed in an idealized mode, at a distance from the concrete choices that those charged with framing regulatory strategies must make. Such excessive idealization can undermine the practical value of philosophical approaches. To avoid such problems, this essay will focus on a specific regulatory policy, the 2020 USDA SECURE Rule for the regulation of new biotech crop varieties, using this case to develop a theory of trustworthy regulatory policy. 4 The goal is at the same time normative, practical, and explanatory: by coming better to understand our institutions and the values they serve, we may understand why they are structured as they are, and we may come to see how they can be improved.
I will argue that the policy implemented by the United States Department of Agriculture (USDA) during the summer of 2020 (henceforth “the SECURE Rule”) is seriously flawed. I will evaluate this rule by reference to norms that should, as I will argue, find expression in trustworthy regulatory strategies. But the theory of trustworthy regulation employed to reach this judgment is quite general. The norms employed apply not only to the USDA, but to any of the other regulatory agencies, and to the different regulatory regimes created by their various administrative rules. One might hope that a critical analysis like this one will motivate improvements in regulatory policy. Indeed, in other areas of practical ethics, notably in biomedical ethics, philosophical analysis has led to the development of better justified and informed policies concerning the consent of research subjects, the treatment of patients, use and overuse of drugs, and even the individuation of diagnostic categories. Is it unreasonable to hope that work in agricultural bioethics might similarly be put into practice, and used to improve regulation of biotech innovation?
I begin in Section II with a discussion of the different strategies adopted for the regulation of gene edited foods and crops in the United States and the European Union. Focusing primarily on the USDA SECURE Rule, and on a recent ruling by the Court of Justice of the European Union, I argue that the very different approaches of the United States and the EU reflect the different ways that they prioritize underlying values: while the U.S. policy primarily aims to promote innovation by minimizing regulatory hurdles, the EU policy emphasizes the management and perhaps even the minimization of human and environmental risks. In Section III, I describe a set of five necessary conditions that should, as I argue, be met by a regulatory agency hoping to earn the trust of stakeholders. Then, in Section IV, I evaluate the USDA SECURE Rule in light of these criteria, arguing that the rule fails in a variety of different ways. Most importantly, it fails appropriately to manage risk, and cannot, therefore, promote justified public trust. Finally, Section V provides a concise statement of this conclusion, and briefly addresses the concern that the ideal for public regulation described in this essay is inappropriately utopian.
II. Public Trust and the Regulation of Biotech Innovation in the United States and the European Union
Why do people mistrust biotechnology? 5 Sometimes public skepticism is attributed to the slapdash methods used to generate the first genetic alterations, and to the way in which the technology was introduced to the public. Early in the biotech era, genetic modification was a random, laborious, and expensive process. Radiation was used to increase the rate of random mutations, in the hope that some might turn out to be interesting or beneficial. Later, gene guns were used to inject DNA into cells, in the hope that some of the injected material might be incorporated and function as a beneficial mutation. Subsequent transgenic techniques were more controlled, allowing genetic sequences from one organism to be spliced into the genes of another. Even then, the results of biotech genetic transformations were difficult to predict. The newly induced traits were often surprising, even to the researchers who had induced them.
New gene editing technologies, especially those using CRISPR-Cas9, make it possible to effect genetic transformations by altering the “spelling” of a genome without introducing genetic material from an external source. 6 CRISPR is quicker, easier, cheaper, and more precise than earlier technologies employed for genetic transformations. 7 Its use has dramatically increased the rate of biotech innovation in a variety of research contexts. The potential benefits of this new technology are striking: it could be used to develop flood- and drought-tolerant crops, to address nutritional deficiencies, and to make agriculture more sustainable and environmentally appropriate. Skeptics urge that it could also be co-opted to promote private profits with few associated public benefits. In either case, the use of this new technology must still address significant public skepticism and fear. Actual risks may be much smaller than the public believes them to be. But new technologies that are perceived to be risky may be avoided even if actual risk is low, and even when their adoption would be significantly beneficial. Mistrust sometimes has significant costs. 8
With the advent of CRISPR, the United States and the European Union both moved to enact new rules to cover regulation of gene edited foods and crops. In March 2018, U.S. Secretary of Agriculture Sonny Perdue announced that the USDA would not pursue additional regulation of plants “that could have been developed through traditional breeding techniques.” 9 The announcement was part of a push for “regulatory relief,” designed to encourage innovation. The details of the USDA policy were eventually published in the Federal Register in May 2020 as the USDA’s new Biotech Rule, “The SECURE Rule,” concerning USDA regulation of agricultural biotechnology. 10 The SECURE Rule specifies a new oversight policy that, in its first stage, permits scientists and corporations to determine for themselves the extent to which their new crop varieties should undergo regulatory review. Secretary Perdue made it clear that the new policy would apply to plants developed using “innovative new breeding techniques,” including genome editing using CRISPR. 11 He emphasized the value of new breeding techniques that can “introduce new plant traits more quickly and precisely, potentially saving years or even decades in bringing needed new varieties to farmers.” The techniques in question include genetic deletions, base pair substitutions, complete null segregants, 12 and gene insertions from compatible plant relatives. Since these technologies are new and proliferating, one might think that it would be appropriate to adopt a presumption in favor of subjecting them to additional scrutiny. But the SECURE Rule notes that “there is no evidence that use of recombinant deoxyribonucleic acid (DNA) or genome editing techniques necessarily and in and of itself introduces plant pest risk, irrespective of the technique employed.” 13 The Rule specifies that there is no reason specifically to regulate varieties produced by gene editing because they do not introduce any new and regulable risk. 14 As we will see, there are reasons to call this claim into question.
The SECURE Rule imposes much lighter regulatory oversight than the regime it replaces. At the first stage, it entirely exempts from regulation products with a single-sequence genetic deletion, a single base-pair substitution, any modification that adds DNA sourced from within the plant’s own gene pool and not from a more distantly related species, or organisms that are descended from a modified plant but do not retain the modifications of the parent plant. 15 Plants that are modified such that the plant-trait mechanism of action is the same as another plant for which the USDA’s Animal and Plant Health Inspection Service (APHIS) has already conducted a regulatory status review are similarly free from regulatory oversight. Developers with plant products that meet one of these criteria may self-determine that they are free from regulation, or may notify the USDA, which then has thirty days to decide whether regulated development trials are needed. If not, experimental trials can proceed without additional oversight. A second level of regulatory oversight is applied to new varieties produced through multiple sequential genetic changes, or which do not otherwise qualify for exemption at the first stage. 16 For such varieties, developers may request a Regulatory Status Review (RSR) in which the USDA determines whether the plant has any plausible plant-pest risk. At the third and most stringent regulatory level, plants that are not exempt at the first levels must petition the USDA for nonregulated status. If the petition is accepted, then the plant escapes regulation; but if not, a permit is required. Only those plants that receive permits are subject to regulations, designed to prevent organisms from escaping field trials, and to ensure that the modified organisms will not become a plant pest. While earlier regulatory regimes subjected almost all genetically engineered (GE) plants to regulation, representatives from the USDA-APHIS expect that the new rule will exempt most of them. APHIS literature predicts that under the new rule, only “about 1% of [genetically engineered] plants might not qualify for an exemption or deregulation after an initial review.” 17
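The tiered structure may be easier to follow as a sketch of the decision logic just described. The following is a simplified illustration only: the function names and category labels are mine, not APHIS terminology, and many details of the actual rule are omitted.

```python
# Simplified sketch of the SECURE Rule's tiered logic as described above.
# Names and categories are illustrative; this is not APHIS's actual procedure.

FIRST_STAGE_EXEMPTIONS = {
    "single_sequence_deletion",
    "single_base_pair_substitution",
    "insertion_from_plant_gene_pool",          # DNA from the plant's own gene pool
    "null_segregant",                          # descendant lacking the parent's modification
    "previously_reviewed_mechanism_of_action", # same plant-trait mechanism already reviewed
}

def regulatory_path(modification: str,
                    rsr_finds_plausible_pest_risk: bool,
                    petition_for_nonregulated_status_granted: bool) -> str:
    # Stage 1: developers may self-determine exemption (or notify the USDA,
    # which has thirty days to decide whether regulated trials are needed).
    if modification in FIRST_STAGE_EXEMPTIONS:
        return "exempt: developer self-determines, no further oversight"

    # Stage 2: a Regulatory Status Review (RSR) asks whether the plant
    # presents any plausible plant-pest risk.
    if not rsr_finds_plausible_pest_risk:
        return "not regulated after RSR"

    # Stage 3: petition for nonregulated status; if the petition is denied,
    # a permit is required and field trials are subject to containment conditions.
    if petition_for_nonregulated_status_granted:
        return "nonregulated status granted"
    return "permit required: regulated field trials"

# Example: a multi-sequence edit that the RSR flags and whose petition is denied.
print(regulatory_path("multiple_sequential_edits", True, False))
```

Even in this rough form, the sketch makes the structural point visible: everything turns on the first branch, where the developer, not the agency, decides whether the exemption criteria apply.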
The diminished level of regulatory oversight implied by the SECURE Rule pleased some, but dismayed and confused others. 18 Plant breeders and seed companies were relieved to hear that they would face lighter regulatory burdens. Others argued that new breeding techniques should be treated with caution. Still others regarded gene edited products as new and potentially dangerous. Survey data regularly indicate that both U.S. and EU consumers have a significant desire for regulation of biotechnology, and it has been assumed that they will be similarly wary of crop varieties that have been CRISPR-edited. 19
The EU settled on a very different regulatory strategy. Four months after Secretary Perdue’s initial announcement, in July 2018, the Court of Justice of the European Union (CJEU) issued a press release specifying the regulatory status of “organisms obtained by mutagenesis.” “Mutagenesis” refers to any process that changes an organism’s genetic makeup by mutation, a definition that covers both transgenesis and gene editing. According to the CJEU, all such organisms “are GMOs and are, in principle, subject to the obligations laid down by the GMO directive.” 20 The Court’s press release followed legal action brought by Confédération Paysanne, a French agricultural organization, joined by eight other associations, which argued that new mutagenesis techniques are significantly different from those employed prior to the adoption of the EU’s GMO directive. 21 For the past twenty years, genetically modified organisms have been identified, defined, and regulated under European law through the 2001 GMO Directive. 22 The new judgment clarifies that gene edited plant varieties will be included as GMOs and regulated as such under that directive.
Contrasting approaches to regulation in the United States and the EU create very different regulatory environments. While the USDA announcement emphasized the similarity between existing crops and crops produced by gene editing, the EU ruling states that plants produced by gene editing may introduce striking new risks. While the USDA statement notes no additional risks associated with plants produced using innovative breeding techniques, the CJEU cites Confédération Paysanne’s view that “the use of herbicide-resistant seed varieties carries a risk of significant harm to the environment and to human and animal health, in the same way as GMOs obtained by transgenesis.” 23 Like the USDA guideline describing alterations that “could otherwise have been developed using traditional techniques,” the CJEU’s exclusion of alterations that “do not occur naturally” is both vague and ambiguous. Alternative interpretations will need to be distinguished and addressed by the courts or the legislature before the implications of this ruling are entirely clear. Genetics and organismal biology are swiftly advancing areas of scientific inquiry, but they have not provided, and may not be expected to provide, a clear and final view about which kinds of alteration can and which cannot occur naturally.
In other respects, however, the differences between the EU and the U.S. regulatory strategies are striking. The USDA emphasizes the value of plant innovation, and seeks to get out of the way of science and industry by minimizing regulatory hurdles. The European Parliament and the CJEU both emphasize the possibility of human and environmental risk. Both strategies have advantages and weaknesses: while the European model seems to impose a heavy regulatory burden on a technology that has relatively low risk, the U.S. model, as I will argue, is ineffective and haphazard in the way it manages risk. This undermines justified public trust in biotech innovation and could slow acceptance of the regulated technology.
Do the EU and U.S. strategies for regulation of biotech innovation simply reflect different but, perhaps, equally justifiable methods for balancing these twin objectives: promoting innovation while protecting against human and environmental risks? I will argue that they do not. The SECURE Rule neither reflects a science-based regulatory strategy nor effectively measures and manages possible risks; and the regulatory regime the rule describes neither deserves nor is likely to inspire the public trust that would be necessary and appropriate for the effective promotion of biotech innovation. Space does not permit an analysis of the changing state of biotech regulation in the EU, but my critical analysis of the USDA rule should not be taken as an endorsement of the EU regulatory strategy. The EU regulatory regime has substantially blocked adoption of biotech innovation in Europe and has slowed the development of biotech innovation worldwide. European import restrictions have had unfortunate global implications, since they provide a disincentive for farmers in poor countries to adopt technologies that are, in some cases, urgently needed. Thus, while this essay focuses critical attention on regulation in the United States, this should not be taken as advocacy for the differently problematic regime adopted in the EU.
III. Risk Management and Public Trust
Trust is a morally ambiguous commodity: it may be wrongly bestowed and fraudulently sought. To earn trust, one must be trustworthy, but to gain trust it is only necessary to seem trustworthy. Trust in persons is different from trust in technology or trust in institutions. In the case of biotech innovation, an agency seeking to earn public trust may be working against a deep-seated psychological propensity: status quo bias renders us naturally reluctant to accept what is new or different. 24 This propensity may be quite reasonable and appropriate in many environments. Novelty—divergence from the status quo—can come with new and unexpected or unpredictable risks, so perhaps we should expect this bias to arise independently in other species as well. But while status quo bias may protect us from the dangers of the new, it also renders us reluctant to accept and to use new technologies that might be beneficial. Precipitous acceptance of novelty may sometimes be risky, but reluctance to accept novelty can present similar risks.
Status quo bias may be one source of public mistrust of new technologies, and this propensity for mistrust must be taken into account by agencies like the USDA that seek to gain, and (one hopes) also to merit the trust of the public. Regulatory institutions should not simply seek to generate public trust in valuable technological innovations. They should seek to earn public trust by verifiably and transparently protecting public rights and interests. I propose here a set of five conditions that should be met if a regulatory strategy like the USDA SECURE Rule is to merit public trust. The implicit account of trustworthy regulatory policy is general, and should apply, mutatis mutandis, to other regulatory rules as well.
(1) Effective and Fair Management of Risks and Benefits. 25 The raison d’être of regulatory agencies is to protect the public from harm while minimizing interference with commerce and innovation: too much regulation stifles innovation, while too little leaves the public inadequately protected. This means, in many cases, careful and ethically informed use of risk-cost-benefit analysis to evaluate and minimize risks (subject to constraints) when they are manageable, and to prohibit the deployment of technologies that have unmanageable risks. But the effective and just management of risk does not simply require that the expected benefits outweigh the expected costs: outcomes that are cost-effective in this sense may still involve unfair distribution of risk and benefit, for example, if the benefits accrue exclusively to the powerful and wealthy while the risks are carried by communities that are powerless or poor. It also matters whether risks are involuntarily imposed or voluntarily undertaken by those who bear them—without express consent, it is not permissible to subject people to significant risks even when overall benefits outweigh overall costs. 26 And even when new technologies are reasonably expected to have benefits that outweigh their costs, regulatory agencies must ask whether the consequent imposition of risks and costs would violate the rights or compromise the liberty of those who bear them. Risk management decisions must therefore be made within the bounds of constraints, including requirements of fairness, autonomy, and respect for rights.
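The structure of this requirement can be made explicit with a schematic formulation of my own (nothing like it appears in the SECURE Rule): regulatory choice is better modeled as constrained optimization than as unconstrained maximization of expected net benefit.

$$\max_{a \in A}\; \mathbb{E}[B(a)] - \mathbb{E}[C(a)] \quad\text{subject to}\quad R_g(a) \le r_{\max}\ \text{for each affected group } g,\ \text{and}\ a\ \text{violates no rights},$$

where $A$ is the set of available regulatory options, $\mathbb{E}[B(a)]$ and $\mathbb{E}[C(a)]$ are the expected benefits and costs of option $a$, and $R_g(a)$ is the risk that option $a$ imposes on group $g$. On this picture the requirements of fairness, consent, and respect for rights enter as constraints, not as terms in the objective: an option that maximizes net expected benefit is still ruled out if it concentrates risk on a group that has not consented to bear it.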
(2) Science-Based Regulatory Strategies. Regulatory agencies usually claim to use “science-based” risk assessment tools, instead of relying on intuitions or fears. Indeed, the USDA touts the SECURE Rule as a science-based regulatory strategy. While trustworthy regulatory strategies must appropriately use the best available scientific data, and while appropriate formal models should be used to analyze the level of risk, to say that this means that science is the “base” of the strategy is a mistake. At several junctures, there are ineradicably subjective or nonscientific values that must be incorporated into this process. For example, in order to measure the degree of risk, analysts must assign a value, or a value range, to represent the badness (or goodness) of alternative outcomes. And while formal tools may roughly quantify risks, the judgment that risks are unacceptably high (or acceptably low) involves value judgments. Those judgments may be justified and well reasoned (or unjustified and badly reasoned), but they are not “scientific” in any strict sense. Nor will standard scientific methods provide a basis for judging whether imposed risks are unjust or unfair, or whether they are unreasonable or excessive. Ideals of justice, fairness, reasonableness, and harm are not essentially scientific standards. But the ideal that policy should be “science-based” cannot mean that such standards are ignored or omitted. Policies that fail to meet these important normative standards would be untrustworthy in the extreme.
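To see exactly where such judgments enter, consider the standard schematic measure of risk (again, an illustrative formulation rather than one the USDA endorses):

$$R(a) = \sum_i p(o_i \mid a)\, d(o_i), \qquad a\ \text{is acceptable only if}\ R(a) \le r^{*}.$$

The probabilities $p(o_i \mid a)$ of the possible outcomes $o_i$ under option $a$ can, in principle, be estimated scientifically; but the disvalue function $d$, which says how bad each outcome would be, and the threshold $r^{*}$, which says how much risk is acceptable, are normative inputs that no experiment or dataset can supply.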
To understand the proper sense in which policies should be “science-based,” it might help to look at a policy that fails this test. Consider the regulatory regime that was recently replaced by the new SECURE Rule: before implementation of the new rule, the trigger for USDA regulation of many engineered organisms was the method by which they had been transformed. In many cases, Agrobacterium tumefaciens, a soil bacterium, was used to transfer segments of DNA into the plant genome. Agrobacterium is itself a plant pest, since it can cause crown gall disease in several species by transferring some of its own DNA into the DNA of host-plant cells. This ability makes it useful, since Agrobacterium can be persuaded to transport desired DNA sequences into host plants to effect a transgenic transformation. The resultant genetically modified organism may have no lingering vestige of Agrobacterium DNA, and plants that have been modified using Agrobacterium are not at higher risk of becoming plant pests than plants that have been modified by other means. Since the use of Agrobacterium is not automatically associated with plant-pest risk, the regulatory trigger employed under the previous regime did not track the relevant risk. Nonetheless, USDA regulation under this earlier regime was touted as “science-based” because the regulatory process used data acquired through scientific investigation, and because it employed formal risk analysis methods. But since this so-called “science-based” regulatory regime did not track risk, and did not assign higher degrees of regulatory oversight to cases where actual plant-pest risk was higher, it did not appropriately manage risk. It failed, that is, at the most fundamental norm we should apply to regulatory rulemaking.
When regulatory policies are said to be “science-based,” this is usually intended as a contrast with methods of policy choice that are clearly inappropriate. It would be wrong to adopt policies that replaced proper risk analysis with fear, regulating to protect people from what they are afraid of regardless of the actual risk. Clearly, regulatory strategies should be informed by the best available scientific data and evidence and should appropriately use formal techniques to evaluate risks. They must not, however, use the façade of science-based risk analysis to exclude crucial considerations of fairness, justice, harm, or reasonableness of risk. And measures designed to manage risk must certainly track actual risk levels, and must index the degree of regulatory oversight to the level of actual risk. Regulatory regimes that fail to do these things cannot be called “science-based” in any meaningful sense.
(3) Truthfulness. Obviously, agencies that lie to their stakeholders do not merit trust. But the obligation of truthfulness goes beyond the minimal obligation to avoid intentional and knowing communication of falsehoods. Truthfulness requires the use of language that effectively communicates the true or best information to stakeholders, without obfuscation and without the use of unnecessarily confusing terminology.
(4) Transparency. Without transparency, truthfulness cannot merit trust. Transparent decisions should, to the extent possible, be reviewable. It should be evident that they have been well made and based on good, publicly justifiable reasons. 27 In the ideal case, transparent decision-making processes foster trust because they can be understood and analyzed by stakeholders or stakeholder representatives. Just as public institutions in general should be publicly justifiable—that is, justifiable to constituents and stakeholders—regulatory institutions and their rulemaking processes should be publicly justifiable to those who are affected by administrative rules and decisions. If regulation is otherwise well constructed, transparency will increase public trust, since transparency facilitates public understanding of regulatory protections. By contrast, if regulations are not well constructed, increased understanding will decrease trust. This might be the paradigm test of trustworthiness: when regulation is trustworthy, then transparency provides understanding; and understanding in turn results in increased trust.
Will transparency have this effect in practice? Sometimes the reasons behind regulatory decisions are complex, more readily justifiable to experts than to the public at large. Reasons justifiable to experts may sometimes be opaque to the non-expert public. Sometimes there may be disagreement among experts about which kinds of reasons can be publicly justified. In practice, there may be cases where transparency generates mistrust, not because policies or the reasons behind their implementation are bad, but because they are easily misunderstood. Even then, however, the effort to make the regulatory process transparent will serve the goal of public justification. For obvious reasons, opaque decision processes undermine trust, and policies that undermine transparency will be less trustworthy.
(5) Responsiveness. A responsive agency must provide opportunities for stakeholders to express concerns and objections and must not treat public comment as a perfunctory performative exercise. Responsiveness is necessary for a variety of different reasons, but not least among these is the fact that diverse public input should appropriately inform risk-cost-benefit calculations by helping analysts to understand what is at stake and what weight to place on the values that may be at risk in regulatory decision-making. Public responsiveness is primary, but where biotech innovation is the target of USDA regulation, stakeholders include plant breeders and developers as well as members of the general public. Regulatory agencies must be responsive to stakeholder concerns about both overregulation and underregulation in the management of risks. However, responsiveness introduces the possibility of error: if regulatory agencies regulate perceived risks instead of actual risks, they abdicate their most fundamental obligation. And if they are more responsive to industry than to the public, this may be taken to indicate that the agency has been captured by the industries it is supposed to regulate.
Responsive agencies need to act appropriately to take into account public concerns, but it will not always be appropriate simply to act on public concerns, to regulate what people fear instead of what poses a real danger. To see why this might be so, consider a study conducted by Paul Slovic. 28 Slovic plotted prospective hazards—events that involved risk and possible regulation to mitigate that risk—on two axes: the vertical axis measured the degree to which a risk was “unknown,” and the horizontal axis measured people’s sense of dread associated with the risk. For example, Slovic identified risks associated with cadmium and trichloroethylene that were unknown; people had not heard of these hazards. Risks associated with nuclear weapons and nerve gas were known and were associated with a high degree of dread. Slovic’s survey data showed that people had a higher degree of concern and a greater desire for regulatory intervention to mitigate risks when those risks were in the unknown/dread quadrant, and lesser desire for regulation of risks that were in the known/not-dread quadrant. This effect appeared to be independent of the actual degree of risk associated with the hazards included in the study. Thus, subjects had a relatively low level of concern and low desire for regulation of risks associated with swimming pools, which were known/not-dreaded. By contrast, they had a surprisingly high level of concern and desire for regulation of risks associated with satellite crashes, which were in the unknown/dreaded quadrant. Actual risks associated with swimming pools are significant, while risks associated with satellite crashes are infinitesimal. Some simple regulations governing pool construction and management are effective at protecting people from harm and death: for example, pools can be constructed with easy step access so that children who fall in can get out without help. Pool covers can be made strong enough to support the weight of a child, so that people don’t fall through. These regulations are especially important to protect children from harm. A regulatory agency that responded to fears instead of hazards would have recommended excessive regulatory expenditures to protect people from satellite crashes and inadequate efforts to regulate pool safety.
Just as regulatory agencies can be inappropriately responsive to public fears, they can also be inappropriately responsive to the industries they are supposed to regulate. One common perception is that the USDA and other regulatory agencies are subject to capture by their own regulated industries, or by administrators who come from those industries. A captured agency cannot be trusted because it will systematically reflect the interests of industry rather than the interests of the public, in contexts where those interests are opposed. Regulatory capture—even the perception of regulatory capture—reasonably undermines public trust that regulatory agencies will effectively manage risks. The inference may go both ways: failure appropriately to manage risk is sometimes taken as evidence that an agency has been subject to capture. It is safe to say that in the case of the USDA, there has often been a problematic public perception that the agency has been captured, and that it therefore reflects the interests of industry and not the interests of the public. This has been a significant source of public mistrust.
IV. USDA’s SECURE Rule and the Regulation of Biotech Innovation
In the United States, management of biotech innovation is orchestrated under the Coordinated Framework for Regulation of Biotechnology, implemented in 1986. 29 The Coordinated Framework divides different tasks—different focus areas—among the various regulatory agencies, including the FDA, EPA, and USDA. The USDA’s authority to regulate biotechnology is limited, under this framework and under its legislative mandate, to a rather narrow focus on plant-pest risk. This leaves other agencies to evaluate broader risks to environmental and human health. The SECURE Rule constitutes the latest attempt to develop a regulatory regime that is focused on “science-based” risk assessment and appropriately responsive to other public stakeholder interests.
A. Veracity, transparency, and responsiveness
How does the SECURE Rule fare when evaluated using norms of veracity, transparency, and responsiveness? While I will not allege that the USDA has been dishonest in its development and promulgation of the new rule, there are good reasons to question whether the new regulatory regime described by SECURE is appropriately transparent and responsive to stakeholder concerns. 30
Transparency requires that decision-making be reviewable by stakeholders. Under the SECURE Rule, all initial regulatory decisions will be made by plant breeders themselves. Even USDA regulators will have no oversight authority with respect to plants that involve a single base-pair alteration, or plant innovations that involve existing plant-trait action mechanisms. SECURE allows developers simply to decide that they are exempt. If the activities in question were not associated with relevant risks, this might be appropriate. But as we will see, the SECURE Rule does not effectively track risk. It seems unlikely that public stakeholder trust would increase as stakeholders come to realize that plant breeders can mostly exempt themselves and their products from regulation in the first stage.
In a similar manner, the SECURE Rule provides only a low level of USDA responsiveness to expressed stakeholder concerns. As Greg Jaffe has pointed out, by exempting most products from regulation in the first stage, SECURE precludes public response ab initio. 31 Section III above defended the value of responsiveness, but also noted that there are inappropriate forms of responsiveness. In the case of biotech innovation, it would be inappropriate for the USDA to regulate on the basis of public fears that cannot be substantiated—to do so would risk overregulation that would infringe the rights of plant breeders to deploy innovative products even when they are demonstrably safe. If anything, the SECURE Rule moves to the opposite extreme: it is likely that the SECURE Rule will release developers from regulatory oversight in the vast majority of cases. There is concern that this constitutes excessive protection of the interests of industry and plant breeders, at the expense of the public.
However, most experts judge that the actual risk levels associated with plant biotech innovation are low. Might one respond that the USDA strategy minimizes regulation at this early stage because regulatory oversight is simply unnecessary to govern such minimal levels of risk? There are three responses to this argument, which will be elaborated in more detail in what follows: First, while risks associated with most innovative biotech products may be low, they cannot be known to be low in the absence of regulatory oversight. Single base-pair alterations may sometimes result in a significant increase in the relevant risk, but even if one did, the SECURE Rule would not trigger regulatory oversight. Second, even where overall risk levels are low, the level of regulatory oversight should still be indexed to the level of risk. Third and finally, increasing rates of innovation can result in increased risk even when each individual innovative event is associated with risk levels that are very low.
B. Science-based regulation and the management of risk
In the discussion of “science-based” regulation in Section III, I argued that the regulatory regime recently replaced by the SECURE Rule was not properly science based, in part because that rule indexed the level of regulatory oversight to the use (or not) of known plant pests like Agrobacterium in the development process. Since this regulatory trigger is not associated with higher degrees of risk, the former rule failed properly to track risk. The new SECURE Rule does a little better: instead of focusing on whether a plant pest was used in the development of a genetically modified organism, the new rule focuses on properties of the organism itself. Since the relevant risk is primarily a function of phenotype, not genotype, and since risk is not in any direct way associated with the use of Agrobacterium (or other plant-pest organisms) in the development process, this is a change in the right direction. But for several important reasons, the new rule still fails appropriately to manage the relevant risks.
The U.S. Plant Protection Act defines a “plant pest” as follows:
The term “Plant Pest” means any living stage of any of the following that can directly or indirectly injure, cause damage to, or cause disease in any plant or plant product: (A) a protozoan, (B) a nonhuman animal, (C) a parasitic plant, (D) a bacterium, (E) a fungus, (F) a virus or viroid, (G) an infectious agent or other pathogen, (H) any article similar to or allied with any of the articles specified in the preceding paragraph. 32
While USDA risk management is limited to risks that lie in the domain specified by this definition, the definition itself is fairly broad. The problem with the SECURE Rule is that there are predictable cases where significant plant-pest risk will not trigger regulatory oversight under the new rule. First, under the new rule many plants are simply exempted from all regulatory oversight from the start. Transformations involving a single sequence deletion, substitution, or addition from the plant’s gene pool are exempt. Developers need not check with the USDA if their engineered or edited organism falls into one of these categories; they can simply decide for themselves that they are exempt from regulation. Second, SECURE exempts from regulation plants that have the same plant-trait mechanism of action as another plant that the USDA has already reviewed. If a new organism employs the same underlying biological process to achieve a desired function, then once again developers can decide for themselves that their product is not regulated by USDA.
But single-sequence deletions, substitutions, and additions can sometimes involve dramatic changes in phenotype, and multiple-sequence genetic alterations may sometimes involve no discernible phenotype changes at all. 33 Plant-pest risk is associated with phenotype, not with the number of alterations employed in the development process. The new rule would seem to incorporate the same problem that plagued the previous regulatory regime: the trigger used to identify which genetically altered crops are liable for regulatory oversight is not appropriately indexed to the level of actual risk. Noting this problem, Greg Jaffe writes, “While many, if not most, plants with a single deletion may not present any plant pest risks, if one does, shouldn’t USDA regulate it?” 34 Like the previous regulatory regime, the SECURE Rule fails at the most fundamental norm we should apply to regulatory rules.
A second argument leads to the same conclusion: A science-based regulatory policy would classify organisms as regulable (or not) depending on the likelihood that they present an actual risk. It would therefore be triggered by the phenotype of the regulated organism, preferably in a way that is context-sensitive, since the same phenotype might present risk in some environments but not in others. For example, experimental trials of cotton variants would present far less risk if trials (presumably indoor trials) were held in Minnesota, where any escaped individuals would be unlikely to survive. Cold-weather brassica variants would present less risk if trials were held in a hot, arid location like southern Arizona. In general, the risk posed by experimental trials of new varieties will be a function of both the phenotype and the environment in which the trial takes place. A science-based approach would index increasing levels of regulatory oversight to events with higher risk. But the USDA SECURE Rule entirely fails to do this at the first stage of the regulatory process.
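What would a trigger indexed to actual risk look like? The following is a deliberately simplified sketch of a phenotype- and context-sensitive approach; the risk factors, weights, and thresholds are hypothetical and are not drawn from the SECURE Rule or any APHIS proposal.

```python
from dataclasses import dataclass

# Hypothetical illustration only: a trigger that scores risk from phenotype
# and trial environment, then indexes the oversight tier to that score.

@dataclass
class TrialProposal:
    weediness_potential: float    # 0-1, phenotype: how readily the plant spreads
    outcrossing_potential: float  # 0-1, phenotype: gene flow to wild relatives
    wild_relatives_nearby: bool   # environment: compatible species at the trial site
    climate_suitability: float    # 0-1, environment: could escapes establish here?

def risk_score(p: TrialProposal) -> float:
    """Combine phenotype and environment into a single rough score."""
    exposure = p.climate_suitability * (1.0 if p.wild_relatives_nearby else 0.3)
    hazard = max(p.weediness_potential, p.outcrossing_potential)
    return hazard * exposure

def oversight_tier(p: TrialProposal) -> str:
    """Index the level of oversight to the estimated risk, not to the
    number of edited base pairs or the editing method used."""
    score = risk_score(p)
    if score < 0.1:
        return "exempt, with notification"
    elif score < 0.4:
        return "regulatory status review"
    else:
        return "permit with containment conditions"

# The same phenotype scores low where escapes could not establish,
# and high where wild relatives are nearby and the climate is suitable.
print(oversight_tier(TrialProposal(0.6, 0.7, wild_relatives_nearby=False, climate_suitability=0.1)))
print(oversight_tier(TrialProposal(0.6, 0.7, wild_relatives_nearby=True, climate_suitability=0.9)))
```

The point of the sketch is not the particular numbers but the shape of the rule: the regulatory trigger responds to what the organism is and where it will be grown, rather than to how it was made.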
C. Comprehensive risk management and the rate of innovation
The USDA is not charged to monitor overall risks of human and environmental harm posed by biotech innovation. It is institutionally required to focus on plant-pest risk. But the goal of the Coordinated Framework for the Regulation of Biotechnology is comprehensive risk management. The Coordinated Framework distributes to different agencies the management of different varieties of risk. Those who designed this regulatory framework apparently assumed that such piecemeal regulation could provide systematic oversight. This assumption fails to take into account the significance of an innovation like CRISPR, which does not merely provide an alternative method for developing biotech innovations but also dramatically changes the rate of innovation. By making genetic editing cheaper, easier, and quicker, the use of CRISPR has resulted in the development of many new varieties in recent years. As the number of innovative biotech products that might be eligible for regulatory oversight increases, the potential burden for agencies working under the Coordinated Framework would also be expected to increase. As we have seen, the SECURE Rule renders many biotech innovations exempt from regulation ab initio, and is in no way responsive to changes in overall risk that result from increased rates of product development. This may be a good way to reduce the workload at the USDA, but it is not an effective way to manage aggregate risk.
How significant are the risks involved? Most experts reasonably assume that the risk associated with individual genetically engineered plants is quite low. There are many reasons given for this belief: First, most biotech innovations, one might argue, involve incremental changes that are unlikely to cause significant changes in the environment or to have significant human health impacts. But second, while it may be possible to produce genetically engineered plants that would have devastating environmental effects if introduced into our native ecosystems, few people would have a motive to develop such a product: plant breeders would be liable for environmental and human damage, so they have a strong motive to avoid producing a product that would cause such damage. This second reason might be called “self-regulation through legal liability.” Finally, existing biotech crops—those that have been in use for the past decades, since the first biotech crop was introduced in 1983—have proven to be quite safe. No significant environmental or human harm can be traced to the use of existing biotech crop innovations.
There are, of course, reasons to question each of these arguments: Incremental changes can sometimes have dramatic effects on human or environmental health. “Risk management by lawsuit” is unsatisfactory, since lawsuits can only take place after harm has already been caused. And legal action is less likely to be successful if plaintiffs cannot show that the harms they suffer were specifically caused by the actions or the product of the defendant. In relevantly similar cases in environmental law, such legal actions have often failed, even where it is plausible to believe that the plaintiff’s harms were caused by the defendant’s action. Finally, relatively few genetically engineered traits are widely in use, at present, so the safety of extant varieties might not justify confidence that future varieties will be similarly safe. Most current GE traits involve herbicide resistance (e.g., glyphosate tolerant soybeans and canola), or pest resistance (e.g., Bt corn and cotton). 35 These traits are well tested and may reasonably be expected to be safe. But CRISPR may change all of this: some innovations (e.g., gene drives) could have wide-reaching effects, and it is difficult to judge in advance and impossible to judge a priori what risks might be presented by products that could be developed using these new technologies. 36 As innovative plant breeding techniques are applied more widely, there is reason for concern that some innovations may impose risks quite unlike those of current varieties.
The widespread use of CRISPR has already changed the rate of biotech innovation. Products under development or already becoming available include non-browning apples and mushrooms, low-nicotine tobacco, fragrant moss for home use, nutrient-fortified bananas, and a wide variety of other new products and traits. As the rate of innovation changes, there is little reason to project that future innovations are likely to be safe merely because the past innovations that are already in use have been safe. Even if the level of risk associated with each new product is very low, the overall probability that some truly dangerous or risky product will escape regulatory oversight or will be introduced with inadequate oversight will increase. Overall risk is an increasing function of the number of risk-bearing events, and the number of risk-bearing events has increased with the rate of innovation. But the USDA’s new SECURE Rule is in no way responsive to this very significant change. It is not, therefore, an effective tool for the management of risks associated with biotech crop innovation in an era when the rate of technological change is increasing rapidly. The Coordinated Framework itself is ill-suited to address this cause of increasing overall risk.
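The aggregation effect can be illustrated with a deliberately stylized calculation; the numbers are hypothetical, assume independent events, and are chosen only to show how quickly small per-event risks add up.

$$P(\text{at least one harmful release}) = 1 - (1 - p)^{n}.$$

If each development event carried a small probability of serious harm, say $p = 10^{-4}$, then with $n = 100$ events per year the chance of at least one harmful release is about $0.01$, but with $n = 10{,}000$ events it rises to roughly $0.63$. Even if the per-event risk $p$ never changes, a regulatory regime calibrated only to per-event risk becomes less protective as $n$ grows.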
The USDA emphasizes that gene editing techniques do not introduce any new regulable risk, and that there is no reason to expect that products produced using CRISPR or other gene editing tools will be more risky than products produced using other methods for genetic transformation. This may well be true: at least, I have given here no reason to believe that products of gene-editing are in any way riskier than other genetically modified foods and crops. It seems quite reasonable to suppose, as the USDA does, that the risk associated with each product is likely to be acceptably low. But this is consistent with the possibility that overall risk is increasing with the rate of innovation, as the annual number of minimally risky development events increases. Regulatory rules that appropriately scale regulatory oversight to respond to risk must recognize and accommodate this change in overall risk. Neither the SECURE Rule nor other provisions in the Coordinated Framework do this.
By pointing these problems out, I do not mean to imply that the risks associated with biotech crop innovation—even aggregate risks—are high. My own presumption is that even the aggregate risk associated with biotech innovation may be quite acceptably low. By contrast, the risk we would run if we were to forgo the use of plant biotechnology may be quite high. The level of risk associated with individual crop varieties will of course be much lower than aggregate risk levels. But even under the assumption that the level of risk is low, it is inappropriate to implement regulatory rules that give the appearance of risk management but fail to link the level of regulation with the level of risk. This is not real risk management; it is illusory risk management. Such a ruse is especially inappropriate in an innovation sector where the level of public concern—the level of perceived risk—is relatively high. To make regulation trustworthy is not to replace regulation of actual risks with regulation of perceived risk, but to require implementation of actual risk management instead of settling for an illusion. As noted earlier, a paradigm of untrustworthy regulation is the case where regulation fails appropriately to manage risk. In such a case, increased understanding of the policy will lead to decreasing trust.
It is worth emphasizing that “better regulation” does not mean “more regulation.” The argument given here does not suggest that the level of regulatory oversight provided by the SECURE Rule is too low, or that we need stricter or more comprehensive regulation to provide for effective and trustworthy management of plant-pest risk. An argument for that claim would need to provide evidence that the risks are greater than the current level of regulation is equipped to manage, and I have advanced no such argument here. Under the SECURE Rule, relatively few biotech crop varieties will trigger regulatory oversight, and one could argue that this is an acceptable outcome, or that it would be if otherwise trustworthy regulatory mechanisms were in place. As noted earlier, it is important to avoid both overregulation and underregulation. But if the regulatory trigger is unrelated, or only inappropriately related, to the actual level of risk, the result will be misregulation that both inappropriately regulates low-risk products and inappropriately omits to regulate those that are associated with higher risk. Trustworthy regulatory rules would appropriately respond to the level of risk involved in biotech innovation. I have argued that the SECURE Rule fails to do this.
To accommodate the objections discussed here, it would be necessary to change the structure of both the Coordinated Framework and the SECURE Rule. Modification of the SECURE Rule itself would be a good start and would be much easier, since it would not require interagency negotiation. But perhaps there is another measure that would mitigate, though not fully address, the problem. Premarket testing of new products might appropriately respond to public fears and concerns about biotech innovation, while at the same time protecting plant breeders’ interest in demonstrating that their innovations are safe. A trustworthy USDA-implemented regime of premarket safety testing would effectively serve both interests, even if participation were voluntary. Plant breeders who believe that their products are safe would benefit from the opportunity to gather evidence demonstrating their safety. And consumers worried that biotech products may be risky would benefit if the USDA, or other agencies, could provide evidence that products are safe. Perhaps there is concern that mandatory premarket testing would constitute overregulation and would impose excessive demands on the agency. As the rate of biotech innovation increases, it may become infeasible to impose testing on all innovative plant products, and in any case such blanket testing would be unnecessary in many cases, since for most such products the premarket risk is extremely low. However, it is in developers’ interest to demonstrate that their products are safe, since doing so promotes public trust. A voluntary premarket testing program could be mutually beneficial, since it could appropriately respond to skeptical concerns while facilitating public acceptance. 37 If the USDA hopes to promote public trust in itself as a regulatory agency, and in the products it regulates, voluntary premarket testing seems uniquely suited to serve this interest.
V. Utopian Critique and Public Trust
I conclude with a practical recommendation for the reform of the USDA SECURE Rule: Proper USDA regulation of plant-pest risk would involve serious investigation into the properties of plant pests and would almost certainly focus on phenotype and on environmental factors that make it more likely that a given plant phenotype will be a pest in a given environment, not on the number of base pairs involved in the transformation. But there is a broader recommendation for the regulation of biotechnology, or of innovation in any technology sector: Properly graduated regulation of broader risks associated with biotechnology would require a more systematic and integrated regulatory regime than the present Coordinated Framework for Regulation of Biotechnology can provide, so that levels of regulatory oversight could be indexed to different levels of risk, sensitive to changes in risk levels due to changing rates of innovation. This standard should apply to regulatory regimes covering other areas of innovative technology. To deserve public trust, such regulatory regimes would also need to be responsive to reasonable public input, and transparent, truthful, and fair in operation. Is it utopian to think that an actual regulatory regime could be sensitive and responsive in this way? And where objective risk levels are reasonably believed to be low, would the implementation of such a regulatory regime cost more than it would be worth? If public mistrust reduces public use of valuable innovation, the cost of untrustworthy regulation may be relatively high. Still, the ideals described in this essay are ideals, and they might be more (or less) perfectly instantiated in various different regulatory regimes. In the real world, perhaps no such regime will be perfect. Indeed, the reason this essay has focused on a particular regulatory policy was, in part, to avoid unreasonable idealism. The value of ideals is that they can be used to evaluate and to improve the status quo, not to posit some unachievable Platonic ideal of perfection. In this spirit, it seems clear that the SECURE Rule and the Coordinated Framework could both be dramatically improved, and that improvements would render them more deserving of public trust.
One goal of this essay has been to evaluate the SECURE Rule itself. But the second, and much broader, goal has been to describe a set of requirements that should be satisfied if regulatory rules and decisions are to merit the trust of the people they are intended to protect, and those they are intended to regulate or constrain. Trustworthy policies may not always garner public trust. But it is always a moral mistake for regulatory agencies to work to gain public trust instead of working to deserve it.