
Regulatory Impact Analysis and Litigation Risk

Published online by Cambridge University Press:  14 November 2024

Christopher Carrigan
Affiliation:
Trachtenberg School of Public Policy and Public Administration, GW Regulatory Studies Center, George Washington University, Washington DC, United States
Jerry Ellig
Affiliation:
GW Regulatory Studies Center, George Washington University, Washington DC, United States
Zhoudan Xie*
Affiliation:
Department of Economics, GW Regulatory Studies Center, George Washington University, Washington DC, United States
Corresponding author: Zhoudan Xie; Email: [email protected]

Abstract

This paper explores the role of microeconomic analysis in policy formulation by assessing how the regulatory impact analyses (RIAs) that federal regulatory agencies prepare for important proposed rules may affect outcomes when regulations are challenged in court. Conventional wisdom among economists and senior regulatory officials in federal agencies suggests that high-quality economic analysis can help a regulation survive such challenges, particularly when the agency explains how the analysis affected decisions. However, highlighting the economic analysis may also increase the risk a regulation could be overturned by inviting court scrutiny of the RIA. Using a dataset of economically significant, prescriptive regulations proposed between 2008 and 2013, we put these conjectures to the test, studying the relationships between the quality of the RIA accompanying each rule, the agency’s explanation of how the analysis influenced its rulemaking decisions, and whether the rule was overturned when challenged in court. The regression results suggest that higher-quality RIAs are associated with a lower likelihood that the associated rules are later invalidated by courts, provided that the agency explained how it used the RIA in its decisions. Similarly, when the agency described how the RIA was used, a poor-quality analysis appears to increase the likelihood that the regulation is overturned, perhaps because it invites a greater level of court scrutiny. In contrast, when the agency does not describe how the RIA was utilized, there is no correlation between the quality of analysis and the likelihood that the regulation will be invalidated.

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of the Society for Benefit-Cost Analysis

1. Introduction

Economists can point to numerous instances when economic analysis influenced economic policy in the United States. Examples include deregulation of airlines, railroads, and trucking (Derthick and Quirk 1985; Hazlett 2011); utility pricing based on marginal cost or Ramsey principles (Faulhaber and Baumol 1988); and radio spectrum auctions motivated by Coase (1959) and facilitated by developments in auction theory by two relatively recent Nobel laureates, Paul Milgrom and Robert Wilson (Royal Swedish Academy of Sciences 2020).

In much the same way, for 40 years, executive orders have required federal regulatory agencies to conduct regulatory impact analysis (RIA) when developing major regulations.[1] In a study evaluating the quality of a large number of RIAs, Hahn and Dudley (2007) note that economists should take special interest in regulatory benefit-cost analysis, because “[o]utside of the Federal Reserve, this may be the area of public policy where economic ideas are used most often” (p. 193). However, in contrast to other policy arenas, economists are somewhat less sanguine in their assessments of whether these analyses have affected regulatory decisions. While scholars have identified individual cases where economic analysis likely increased the benefits or reduced the costs of major regulations (Aiken 2019; Farrow 2009; Morgenstern 1997; Hahn and Tetlock 2008), many assessments (Hahn and Tetlock 2008; Wagner 2009) have concluded that RIAs have little influence on regulatory decisions.

Yet this may be changing. In Michigan v. EPA (576 U.S. 743, 2015), the majority as well as the dissenters on the U.S. Supreme Court agreed that federal regulatory agencies should normally be expected to consider regulatory costs if the regulation’s authorizing statute permits them to do so. Perhaps at least partly because of this case, legal scholars predict that courts will increasingly check to see that agencies have considered relevant economic factors, such as benefits and costs (Cecot and Viscusi 2015; Dooling 2020; Masur and Posner 2018; Sunstein 2017). Combined with the Court’s decision in Loper Bright Enterprises v. Raimondo (603 U.S. ___, 2024), which overturned the enduring principle that courts should defer to agencies’ reasonable interpretations of ambiguous statutes (Dudley 2024; Pierce 2024), an increased emphasis on economic factors could lead to more extensive court scrutiny of the RIAs or equivalent economic analyses that agencies produce to inform regulatory decisions. For example, through interviews with economists and other high-ranking officials at several regulatory agencies, one study found that respondents cited Michigan v. EPA as a reason courts can be expected to pay greater attention to agency economic analysis in the future (Ellig 2019, 41).

Moreover, some commentators argue that RIAs have evolved into litigation support documents, written primarily with an eye toward buttressing a regulation in court rather than informing decisions while the regulation is being developed, as the analysis was originally intended (Carrigan and Shapiro 2017; Katzen 2011; Wagner 2009). Yet whether RIAs are effective as litigation support documents remains an open question. In fact, we know of no study that examines whether the quality of agency economic analysis is systematically related to the likelihood that a regulation will be upheld in court. This study helps fill that gap, using a unique dataset that evaluates the quality of agency RIAs, identifies whether the agency explained how the RIA influenced regulatory decisions, and tracks judicial outcomes when rules were challenged in court.

Employing a sample of 126 economically significant, prescriptive federal regulations proposed between 2008 and 2013 and eventually finalized,[2] we build on previously published research that assigns scores to each regulation based on the quality of the accompanying RIA and identifies whether the agency explained how the analysis was used in the regulatory decision (Ellig 2016; Ellig and McLaughlin 2012). We use those data to assess whether the quality of the analysis and the agencies’ explanations of how they used it are correlated with the likelihood that at least part of the regulation is overturned in court, measured by examining whether any section of the Code of Federal Regulations (CFR) altered by the rule was challenged successfully in court. The regression analysis controls for numerous factors specific to each regulation and specific to the agency issuing the regulation.

We find that higher-quality RIAs are associated with a lower likelihood that the associated regulations will be overturned in court, but only if the agency explains how the RIA affected decisions about the rule. Offering an explanation increases the risk that a regulation will be overturned, presumably by making it more likely that the court will examine the RIA and find shortcomings. Therefore, to increase the odds that the regulation will survive a court challenge, the RIA must be of sufficient quality to offset the increased risk the agency assumes when it says it used the RIA. In contrast, when the agency does not explain how the RIA was used, its quality has no impact on the likelihood the rule is overturned.

In addition to contributing to the small body of academic research analyzing the determinants of judicial review outcomes using large-sample quantitative approaches (Carrigan and Mills 2019), the findings highlighted in this paper also have potentially important implications for understanding the effects of administrative procedural constraints on agency rulemaking more generally. Much of the debate about procedures imposed on regulatory agencies—including those that require agencies to accept comments on proposed rules, subject their rules to executive oversight, prepare an analysis to support them, and face scrutiny through the courts following finalization—has revolved around how these constraints alter the pace and quality of the resulting rules. Aiken (2019), for example, reports that when Congress relaxed the Consumer Product Safety Commission’s statutory benefit-cost requirements, the commission stopped conducting the analysis because it believed this would speed up the promulgation of regulations. Inspired by an influential legal literature claiming that rulemaking has been “ossified” by the procedural constraints imposed on agencies seeking to promulgate rules (McGarity 1992; Seidenfeld 1997), quantitative research has focused on whether these procedures actually do slow the pace at which rules are promulgated or alter their content (Balla and Wright 2005; Shapiro 2002; Yackee and Yackee 2010, 2012).

This paper adds a new element to this body of research by emphasizing that the effects of procedures are not uniform. Moreover, different procedures can reinforce or impede one another. In fact, although it is certainly true that producing a high-quality RIA to accompany a rulemaking is time-consuming, preparing quality economic analysis upfront can also save time later if it improves the rule’s chances of surviving judicial review. Thus, rather than lengthening the timeframe or discouraging agencies from engaging in rulemaking altogether, the effects of procedural constraints may be better viewed as a more nuanced collection of interactions where attention to one can serve to simplify the next and neglect of one can amplify the difficulties caused by the next.

2. Prior research and hypotheses

While RIAs are intended to help agencies choose more cost-effective regulatory policies, the actual role of RIAs in regulatory decision-making is obscure. In fact, an RIA can be developed after a regulatory decision is made and used to justify, rather than inform, the regulation (Carrigan and Shapiro 2017; Dudley and Mannix 2018; Wagner 2009). In those cases, agencies conduct economic analysis merely to comply with procedural requirements and satisfy the regulatory review conducted by the Office of Information and Regulatory Affairs (OIRA).

Nevertheless, when economic analysis is part of the justification for agency decisions on regulation, its effects can be observed in judicial review. Some prior research has found that, in deciding whether a challenged regulation is arbitrary or capricious, courts at least sometimes consider the quality or the results of an agency’s economic analysis (Cecot and Viscusi 2015; Dooling 2020; Masur and Posner 2018). Upon reviewing a sample of 38 judicial decisions related to agency economic analysis from 10 federal appellate courts, Cecot and Viscusi (2015) conclude that “courts generally evaluate whether the BCAs include all relevant aspects of the problem, ensuring that entire categories of benefits or costs are not omitted from the analysis” (p. 605). This usually occurs when the agency relies upon the analysis as part of the reason for its decisions—either because a statute requires the agency to consider economic factors or because the agency itself cites the economic analysis as justification for the regulation (Cecot and Viscusi 2015). Statutory requirements that implicate economic factors include directives that the agency consider benefits and costs, consider economic feasibility, or select a particular alternative based on the results of the analysis (Bull and Ellig 2018).

In some cases, courts have considered whether the agency’s decisions were consistent with the findings of the economic analysis simply because the analysis is part of the record before the agency (Cecot and Viscusi 2015). There are many examples of cases in which courts examined agencies’ RIAs even though statutes did not require the use of economic analysis (Revesz 2017).[3] It is not clear, however, whether courts consistently hear or decide challenges to regulations on this basis. Executive Order 12866, the source of the RIA requirement for executive branch agencies, explicitly states that its requirements create no new grounds for judicial review (Executive Order 12866, §10). In a few cases, courts have questioned whether an executive branch agency’s RIA can be reviewed (Bull and Ellig 2017). Recent comprehensive regulatory reform bills have specified that the agency’s analysis can be reviewed by the court as part of the record before the agency, which suggests that this point requires clarification (Bull and Ellig 2018).

Judicial review of agency analysis is often quite deferential, especially if the analysis involves highly complex scientific questions. Nevertheless, courts have shown themselves willing to invalidate a regulation if the agency ignored important benefits, costs, or alternatives; employed assumptions or methods clearly contradicted by other evidence before the agency; failed to disclose sufficiently the methodology or assumptions employed in the analysis; or made decisions clearly contradicted by the analysis (Cecot and Viscusi 2015; Bull and Ellig 2017; Masur and Posner 2018).[4] In other cases, courts have looked favorably upon the agency’s economic analysis when the agency acknowledged the limitations of its analysis (Dooling 2020).

Economic analysis is sometimes regarded as inherently anti-regulatory (Ackerman and Heinzerling 2004; Steinzor et al. 2009), but there is no obvious bias in the court decisions involving agency economic analysis. Bull and Ellig (2017) extend Cecot and Viscusi’s (2015) review of major cases in which federal appellate courts considered challenges to agencies’ economic analysis and find that the courts rejected challenges to the agency’s analysis in 57 percent of the cases. Sixty-two percent of these decisions could be regarded as “pro-regulatory,” in that the court rejected challenges brought by parties seeking less regulation. Of the cases where courts struck down some aspect of the agency’s decision, 44 percent of the court decisions suggested that the agency had over-regulated in light of the economic analysis, and 56 percent suggested that the agency had not regulated enough (Bull and Ellig 2017).

Results from recent interview research at federal regulatory agencies are consistent with the pattern seen in court cases. Ellig (2019) interviewed 15 senior regulatory economists and 10 senior non-economists who worked on regulations in federal agencies. One question asked how they believed the agency’s economic analysis affected the likelihood that a regulation would survive challenges in court. These federal regulatory officials generally thought that a high-quality economic analysis helps the agency win in court if it is sued because it aids in demonstrating that the regulation is not arbitrary or capricious. Several stated that this effect is not uniform, noting that the effect of the analysis is more significant when the agency actually uses the analysis in decisions (such as when directed by statute). Most respondents said that the quality of the analysis had little effect on whether the regulation would be challenged in court. Instead, they said the likelihood of legal challenge depends largely on how costly and controversial the regulation is, rather than on the quality of the agency’s economic analysis.

This prior literature suggests two somewhat competing hypotheses, which we test empirically in the remaining sections of the paper.

Hypothesis 1: A higher-quality economic analysis will generally reduce the likelihood that a regulation is overturned in court.

This hypothesis would most likely be correct if courts regularly examined the quality and results of the agency’s economic analysis as part of the record before the agency.

Hypothesis 2: A higher-quality economic analysis will generally reduce the likelihood that a regulation is overturned in court only if the agency states that it relied upon the analysis to make decisions about the regulation.

Unlike hypothesis 1, this hypothesis would most likely be correct if courts examined the quality and results of the agency’s economic analysis only in instances where the agency has explained how the analysis influenced its decisions.

3. Data

The dataset covers 126 economically significant, prescriptive regulations for which OIRA concluded its review during the period from 2008 to 2013. From all of the economically significant proposed rules reviewed by OIRA during that period, we exclude regulations that were never finalized by agencies and regulations that implement federal spending programs or revenue-collection measures rather than prescribing mandates or prohibitions. The regulations in the dataset were promulgated by 14 departments and 38 agencies. Table 1 describes the variables included in the analysis and presents summary statistics.

Table 1. Variable descriptions and summary statistics

Our dependent variable is dichotomous, indicating whether any part of the rule was invalidated through judicial review. To construct the variable, we follow the same procedure used in Carrigan and Mills (2019), first identifying the CFR sections added or revised by the associated final rule. We then use Thomson Reuters’ Westlaw database to track whether each section was invalidated by courts after the final rule was promulgated. Westlaw labels a CFR section as “unconstitutional or preempted” when it has been held invalid by courts and links to the specific court case in which that determination was made. Since a rule may revise or add multiple CFR sections, we code the variable as one if any of the CFR sections were set aside by courts and zero if no CFR section was invalidated. This process identified 23 (out of 126) rules with at least one CFR section overturned by courts.
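To make the coding rule concrete, the sketch below implements it on hypothetical data; the rule identifiers, CFR section numbers, and invalidation flags are illustrative stand-ins, not records drawn from Westlaw.

```python
# A minimal sketch of the dependent-variable coding rule: a rule is coded 1
# if any CFR section it added or revised was later held invalid by a court.
# All identifiers and flags below are hypothetical.
import pandas as pd

# One row per CFR section touched by a rule, with a Westlaw-style flag.
sections = pd.DataFrame({
    "rule_id":     ["R1", "R1", "R2", "R3", "R3"],
    "cfr_section": ["40 CFR 60.1", "40 CFR 60.2", "49 CFR 571.3",
                    "10 CFR 430.1", "10 CFR 430.2"],
    "invalidated": [False, True, False, False, False],
})

# Collapse to the rule level: 1 if ANY section was set aside, else 0.
overturned = sections.groupby("rule_id")["invalidated"].any().astype(int)
print(overturned)  # R1 -> 1, R2 -> 0, R3 -> 0
```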

The independent variables that measure the quality of a rule’s economic analysis and whether the agency explained how the RIA affected its decisions come from the Regulatory Report Card dataset developed by Ellig and McLaughlin (2012). The first variable assesses the overall quality of the agency’s RIA on a zero to 20 scale. The criteria for evaluating quality originate in the Office of Management and Budget (OMB)’s guidance for producing RIAs, Circular A-4 (OMB 2003, 2023). A higher score indicates a more thorough and complete analysis of four key elements of RIAs: the systemic problem the regulation seeks to solve, a consideration of alternatives, an examination of benefits, and an accounting of costs.[5] The quality score was assigned based on an evaluation of well-defined questions for each criterion by a team of trained evaluators using a double-blind coding approach (Ellig and McLaughlin 2012; Ellig 2016).[6] As Table 1 suggests, the average RIA quality score for the regulations under analysis is 10.67, with a minimum score of two and a maximum of 18. The standard deviation is 2.84.

The Report Card dataset also includes a variable that assesses the extent to which the agency explained how the analysis affected rulemaking decisions. The possible score on this variable ranges from zero to five, with a score of three or higher indicating that the agency offered an explanation of how it used the RIA; the score varies based on how extensively the RIA was used. From this variable, we create a dichotomous measure indicating whether the agency explained how it used any part of the analysis in a decision about the regulation. We conservatively code this variable as zero if the agency did not discuss its use of the RIA at all or simply mentioned that it used the RIA in its decision-making but did not explain how. A value of one for the explained RIA use variable thus indicates a central role of the RIA in regulatory decision-making. This enables us to test Cecot and Viscusi’s (2015) claim that courts are more likely to examine an agency’s RIA when the agency relies upon the RIA to make decisions. While the explained RIA use variable is not a perfect measure of the agency’s reliance on the RIA in its decision-making, it is reasonable to assume that an agency will extensively explain how it used the RIA only if it actually relied on it, and that an agency offering little explanation likely did not rely on the RIA.
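As an illustration of the variable construction just described, the sketch below sums four criterion scores (each 0–5) into the 0–20 quality score and applies the three-or-higher threshold for the explained-use dummy; the column names are our own labels, not those of the Report Card files.

```python
# A sketch of the two key independent variables from Report Card-style
# scores (hypothetical column names and values).
import pandas as pd

report_card = pd.DataFrame({
    "rule_id":      ["R1", "R2", "R3"],
    # Four criterion scores, each 0-5, which sum to the 0-20 quality score.
    "problem":      [3, 2, 4],
    "alternatives": [2, 1, 4],
    "benefits":     [3, 2, 5],
    "costs":        [3, 2, 4],
    # 0-5 score for how extensively the agency explained its use of the RIA.
    "use_score":    [1, 3, 5],
})

report_card["ria_quality"] = report_card[
    ["problem", "alternatives", "benefits", "costs"]].sum(axis=1)

# Explained-use dummy: 1 only if the use score is 3 or higher, i.e., the
# agency actually explained how the RIA affected its decisions.
report_card["explained_use"] = (report_card["use_score"] >= 3).astype(int)
print(report_card[["rule_id", "ria_quality", "explained_use"]])
```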

One may suspect that this variable contains a large number of “false positives,” in which the agency said it used the RIA even though it did not, perhaps to satisfy OIRA. In fact, Table 1 shows that the mean value of this variable is 0.421, indicating that agencies explained how they used the RIA for only 42.1 percent of the regulations (or 53 out of 126 regulations). Another natural question is whether agencies that produced lower-quality economic analyses are less likely to explain how the analyses affected their regulatory decisions. In our dataset, the agencies that explained how they used the RIA have an average RIA quality score of 11.66, and those without such explanations have an average score of 9.95. The difference in means is statistically significant, indicating the possibility that the quality of economic analysis influences agencies’ incentives to explain how they relied on the analysis. However, as our data show, low-quality analyses do not preclude all agencies from explaining how the analyses affected their decisions, possibly due to statutory requirements or potential benefits from including such explanations in rulemaking documents compared to no explanation at all. In the econometric analysis, we test whether the relationship between the quality of the analysis and the likelihood that the associated regulation will be invalidated through judicial review is conditional on whether the agency explained how the analysis affected its decisions.
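The difference-in-means comparison is a standard two-sample t-test; the sketch below reproduces its form on simulated stand-in data calibrated to the group means (11.66 vs. 9.95) and counts (53 vs. 73) reported above.

```python
# A sketch of the difference-in-means test on RIA quality between rules
# with and without an explanation of RIA use (simulated stand-in data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
with_explanation = rng.normal(11.66, 2.84, 53)     # 53 rules, reported mean
without_explanation = rng.normal(9.95, 2.84, 73)   # 73 rules, reported mean

t_stat, p_value = stats.ttest_ind(with_explanation, without_explanation)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```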

Beyond the primary independent and dependent variables, we also control for a variety of rule- and agency-specific characteristics. One set of variables controls for the level of complexity and controversy of a rule, since a more complex and controversial rule may be more likely to be challenged successfully in court. These variables include the length of the preamble in the Federal Register notice of proposed rulemaking, based on a word count; whether the rule has estimated benefits or costs exceeding $1 billion annually; the number of public comments the agency received on the proposed rule, from regulations.gov; the number of interest group meetings convened by OIRA for the rulemaking; and the time the agency spent promulgating the rule, measured as the time elapsed from the date the proposed rule was received by OIRA for review to the date the final rule was published in the Federal Register.

Rulemaking deadlines may constrain the agency’s ability to follow a thorough decision-making process in rulemaking (Carpenter et al. 2012; Gersen and O’Connell 2008; Lavertu and Yackee 2012), making the promulgated rule more vulnerable to court challenges. We therefore include two variables that indicate whether the rulemaking faced statutory or judicial deadlines, as indicated at reginfo.gov. Additionally, since our sample covers rules proposed across the Bush and Obama administrations, we include a dummy variable indicating whether President Obama was in office when the review of the proposed rule was completed by OIRA.

Research has shown that regulatory agencies using more team-based internal rulemaking arrangements to produce rules, which include a broader set of agency personnel types, tend to promulgate those rules more quickly than those following more hierarchical arrangements (Carrigan and Mills 2019). Moreover, to the extent that the increased pace can be at least partially explained by the diversity of the team, which may preclude it from engaging in deeper discussions to flesh out key details, the associated regulation may be more susceptible to court challenges. To that end, we control for two variables that measure the breadth of agency expertise in rulemaking (Carrigan and Mills 2019): the number of agency personnel listed in the notice of proposed rulemaking as contacts for further information and the number of personnel types represented by the contacts, including economic and policy analysts, legal staff, regulatory staff, and subject matter experts.

The regression analysis also includes a set of variables that control for agency-specific characteristics that may affect judicial review outcomes. The first variable is a measure of the agency’s effective independence in terms of the limits on both the appointments of key decision-makers and the review of agency policy by politicians (Selin 2015). Second, we include a measure of the agency’s policy concentration from Workman (2015). A larger value indicates a more concentrated agenda, meaning that the agency spends more time on a less diverse set of policy issues (Workman 2015). The third variable is an expert assessment of the agency’s ideology based on its mission, policy views, and history, where negative numbers represent more “liberal” agencies and positive numbers represent more “conservative” agencies (Clinton and Lewis 2008).

Statutory constraints on the rulemaking and the accompanying analysis may affect the level of scrutiny courts will exercise. The extent to which courts examine agency economic analysis depends on how clearly the relevant statutory language directs the agency to consider or ignore different economic factors (Bull and Ellig 2018; Cecot and Viscusi 2015). Four variables in our analysis indicate statutory requirements that affect the importance of economic analysis in rulemaking: whether the statute prohibited the agency from considering costs, whether it required the agency to consider benefits and costs in some way, whether it required the agency to consider the economic feasibility of the rule, and whether it required the agency to consider technological feasibility.

The degree of discretionary authority the statute granted the agency could also affect the likelihood that the rule would be overturned. Four variables in our analysis indicate whether the statute required the agency to issue a new regulation, whether the statute prescribed the stringency of the regulation, whether the statute prescribed the form of the regulation, and whether the statute prescribed who is covered by the regulation.

4. Results

The regressions in Table 2 test the relationship between the overall quality of the RIA, whether the agency explained how it used the RIA in agency decision-making, and the likelihood that a rule will be invalidated through judicial review.[7] The probit model is used throughout the analysis. Since regulations issued by the same department may have numerous unobserved similarities, standard errors are clustered by department to allow for intragroup correlation. Column 1 regresses the likelihood that the regulation will be overturned by courts on the quality of the RIA, controlling for all the covariates introduced in the previous section but not taking into account whether the agency explained how the RIA was used in its decision-making. The results suggest that RIA quality is not statistically significantly associated with judicial review outcomes. Column 2 takes into account whether the agency explained how the RIA influenced its decisions about the regulation. This regression also suggests that there is no relationship between the quality of the RIA and the probability that the regulation will survive a court challenge. Moreover, whether the agency indicated how the RIA affected its decision-making also appears not to be statistically significantly correlated with the probability of the regulation being overturned.

Table 2. Regressions of RIA quality and use and judicial review outcome

Note: Standard errors are in parentheses.

*** p < 0.01

** p < 0.05

* p < 0.1
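For readers who want to see the estimation strategy in code, the sketch below fits a probit of the overturn indicator with department-clustered standard errors using statsmodels; the simulated data and variable names are hypothetical stand-ins for the paper’s dataset, and the full control set is abbreviated to a single placeholder.

```python
# A minimal sketch of the column 1/2-style specification: probit with
# standard errors clustered by department. Data are simulated stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 126  # matches the paper's rule count
df = pd.DataFrame({
    "overturned":     (rng.random(n) < 0.18).astype(int),  # ~18% invalidated
    "ria_quality":    rng.normal(10.67, 2.84, n).round().clip(0, 20),
    "explained_use":  (rng.random(n) < 0.421).astype(int),
    "preamble_words": rng.normal(30_000, 10_000, n),  # stand-in control
    "department":     rng.integers(0, 14, n),         # 14 departments
})

probit = smf.probit(
    "overturned ~ ria_quality + explained_use + preamble_words", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["department"]}, disp=0)
print(probit.summary())
```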

These first two regressions seem to suggest that judicial review outcomes are not affected by the quality of the RIA or by whether the agency explained how the RIA influenced rulemaking decisions. Thus, they provide little evidence for hypothesis 1. Still, a second possibility is that the relationship between the RIA’s quality and the judicial review outcome is contingent on whether the agency explained how it relied upon the analysis to make rulemaking decisions. A higher-quality RIA may reduce the likelihood that a regulation will be overturned in court only if the agency explained how the RIA affected its decision-making, and similarly, a lower-quality RIA may increase the likelihood of an overturn only if the agency explained how it used the RIA. This possibility accords with the general expectation that the agency’s explanation of how it used the RIA in the preamble of the rule may bring additional court attention to the RIA when the rule is challenged (Cecot and Viscusi 2015). In contrast, the quality of the RIA may have little effect on the likelihood that courts will overturn the rule if the agency does not describe how the analysis affected its decisions. As such, the relationship between the quality of the RIA and the judicial review outcome may have different slopes depending on whether the agency explained how it used the RIA. In this case, the lack of statistical significance of RIA quality in columns 1 and 2 would be due to the fact that agencies did not explain how they used the RIA for a large portion of regulations in the sample.
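In probit terms, this contingent relationship corresponds to a specification of the following form (our notation; $X_i$ collects the controls described in Section 3):

$$\Pr(\text{Overturned}_i = 1) = \Phi\big(\beta_0 + \beta_1\,\text{Quality}_i + \beta_2\,\text{Explained}_i + \beta_3\,\text{Quality}_i \times \text{Explained}_i + X_i'\gamma\big)$$

The slope on RIA quality is governed by $\beta_1$ alone when the agency offers no explanation and by $\beta_1 + \beta_3$ when it does, which is precisely the difference in slopes that the interaction term in column 3 captures.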

To test for the possibility of a contingent relationship, column 3 in Table 2 adds an interaction term between the variables measuring the quality and explained use of the RIA. The results show a statistically significant coefficient on the interaction term at the five percent level, suggesting the existence of a contingent relationship between the quality of the RIA, the explained use of that RIA, and the likelihood that the regulation will be overturned in judicial review. When the agency does not state that it relied on the economic analysis, a change in the quality of the analysis does not seem to be correlated with a change in the probability that the regulation will be invalidated (i.e., the coefficient on RIA quality is not statistically significant). Nevertheless, when the agency explains how it used the RIA, an improvement in the quality of the analysis is associated with a decrease in the probability that any CFR sections changed by the rule are later overturned by a court. These results provide support for hypothesis 2.

This correlation is practically important as well as statistically significant. If the agency explained how it used the RIA, a one-point increase in the quality of the RIA is associated with a 3.7 percentage-point reduction in the probability that the associated regulation is invalidated (calculated at the means of the covariates). This means that a one standard deviation improvement in the quality of the RIA is associated with a 10.5 percentage-point reduction in the likelihood that the associated regulation is invalidated. Considering that about 18 percent of the rules in the data set were at least partially invalidated, this effect is substantial.
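Because the probit is nonlinear, a figure like the 3.7 percentage points above is a marginal effect evaluated at particular covariate values. The sketch below shows the finite-difference version of that calculation on simulated stand-in data: the change in the predicted probability from a one-point quality increase, with the explained-use dummy set to one and quality at its mean.

```python
# A sketch of a marginal effect at the means: the change in the predicted
# probability of being overturned from a one-point increase in RIA quality,
# holding explained_use = 1. Data and names are simulated stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 126
df = pd.DataFrame({
    "overturned":    (rng.random(n) < 0.18).astype(int),
    "ria_quality":   rng.normal(10.67, 2.84, n).round().clip(0, 20),
    "explained_use": (rng.random(n) < 0.421).astype(int),
})

# The '*' expands to both main effects plus their interaction.
fit = smf.probit("overturned ~ ria_quality * explained_use", data=df).fit(disp=0)

at_mean = pd.DataFrame({"ria_quality": [10.67], "explained_use": [1]})
bumped = at_mean.assign(ria_quality=at_mean["ria_quality"] + 1)
effect = fit.predict(bumped).iloc[0] - fit.predict(at_mean).iloc[0]
print(f"Change in Pr(overturned) per quality point: {effect:+.3f}")
```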

To further explore the probability that a regulation is invalidated through judicial review at different levels of RIA quality, Figure 1 plots the adjusted predictions when the agency explained how it used the RIA (i.e. the explained RIA use variable equals one) and when it did not (i.e. the explained RIA use variable equals zero), again setting all other covariates at their means. Clearly, when the agency explains how it used the RIA in decision-making, a higher-quality RIA is associated with a much lower predicted probability that the final rule is overturned by a court than a lower-quality RIA, holding the other independent variables at their means. Yet when the agency does not explain its use of the RIA, the quality of the RIA seems to make little difference in whether the rule is upheld or overturned.

Figure 1. Adjusted predictions of the probability that a rule is invalidated.

Note: The figure shows adjusted predictions of the probability that a rule is invalidated at different values of RIA quality, conditional on whether the agency explained how the associated RIA affected its rulemaking decisions. All other variables were held at their means to generate the predictions.

Figure 2 sheds light on the question of whether, or when, an agency increases its risk by describing its use of the RIA. The figure plots the estimated differences in the probabilities of being invalidated between a rule for which the agency explains how it used the RIA and a rule for which the agency does not offer an explanation, evaluated at different levels of RIA quality and at the means of all of the other covariates. It shows that when an agency explains how it used the RIA, the explanation increases the likelihood that a rule will be invalidated only when the RIA accompanying the rule is of very low quality: the 95 percent confidence intervals of the estimates are completely above zero only at RIA quality scores less than or equal to five. This result supports the notion that an explanation of the RIA’s role in the agency’s decision may invite an increased level of court scrutiny of the rule, leading to a higher risk of being overturned. However, our results further suggest that such expectations should hold only for rules accompanied by very low-quality economic analyses. The confidence intervals indicate that rules accompanied by RIAs of quality close to or greater than one standard deviation above the mean are more likely to be upheld than overturned. Thus, our results are consistent with the observations of interview subjects in federal regulatory agencies who suggested that a high-quality RIA can help support a rule in court, but a poor RIA can undermine the rule if the agency is sued (Ellig 2019).

Figure 2. Conditional marginal effects of the agency’s explained use of the RIA on the probability that a rule is invalidated.

Note: The figure shows the estimated difference in the probability of being invalidated between a rule for which the agency explained how it used the RIA in making decisions and a rule for which the agency did not, evaluated at different levels of RIA quality and at the means of the covariates. Each vertical line represents the 95 percent confidence interval for the estimate at a given level of RIA quality.

Most of the control variables maintain similar levels of statistical significance and similar magnitudes across all specifications. The length of the preamble of the proposed rule demonstrates a statistically significant, negative association with the likelihood that the associated final rule is invalidated. A likely explanation is that a longer preamble may contain a more thorough justification of the agency’s statutory authority for the rulemaking and the rationale for its approach, thus reducing the likelihood that the regulation will be struck down as arbitrary or capricious. In contrast, the number of meetings held by OIRA with interest groups connected to a rulemaking is positively associated with the likelihood that portions of the final rule will be invalidated, as suggested by a coefficient statistically significant at the one percent level. More meetings can signal a lower level of stakeholder agreement with the proposed rule and potentially more parties prepared to challenge the associated final rule in court.

Interestingly, the existence of statutory deadlines is correlated with a lower probability that the associated final rule will be set aside. While this result cuts against previous research suggesting that deadlines may increase the pace of rulemaking at the expense of quality (Carpenter et al. 2012; Gersen and O’Connell 2008; Lavertu and Yackee 2012), it is not all that surprising since a statutory deadline likely signals that the rule is a priority in addition to having a tight and specific legislative mandate. Furthermore, Clinton and Lewis’s (2008) measure of agency ideology is negative and highly significant, suggesting that more conservative agencies face a lower likelihood that their rules will be set aside by courts. It is not clear whether this result indicates some type of ideological bias on the part of the judiciary, or whether it is due to the specific nature of the types of regulations promulgated by the agencies classified as more conservative by the Clinton and Lewis measure. The more conservative agencies in the sample issuing more than a few regulations are the Departments of the Interior (eight regulations) and Energy (17 regulations). Most of the Interior Department’s regulations involved annual bag limits for hunting migratory birds, which are seldom controversial and were never overturned. All but one of the Energy Department’s regulations are energy efficiency regulations promulgated under a highly structured process that includes extensive involvement throughout by stakeholders who might otherwise challenge the regulations (Department of Energy 1996).

The statutory requirement that the agency consider economic feasibility demonstrates a marginally significant, positive association with the probability that the rule will be invalidated in the third regression. Consistent with observations by Cecot and Viscusi (2015) and Bull and Ellig (2018), such requirements may increase the degree of court scrutiny of the RIA when the rule is challenged in court. To test whether a higher-quality economic analysis generally reduces the likelihood that a regulation will be overturned in court only if the agency is statutorily required to consider economic factors, we ran a series of regressions with interaction terms between RIA quality and the statutory requirements on economic feasibility and benefit-cost consideration. The results do not provide robust evidence that the relationship between RIA quality and judicial review outcomes is contingent on whether a statute requires the agency to consider economic factors.[8] It is less clear why a regulation is more likely to be overturned when a statute mandates the form of the regulation, unless this restriction makes stakeholders more likely to challenge the regulation on other grounds, such as the stringency of the regulation.

The probit regressions suggest marginally significant associations for some of the other control variables. For example, the number of comments received on the proposed rule shows a nonlinear correlation with the judicial review outcome in the third regression in Table 2. Since more comments can be an indicator of a more controversial rule, the likelihood that a regulation will be invalidated by a court may increase as the degree of controversy increases, until perhaps some point where the number of comments reaches such a high level that any additional differences between rules do not really indicate more intense controversy. This is also consistent with the findings of Shapiro and Morrall (2012), which suggest that rules with a smaller number of comments have higher net benefits than other rules. They argue that the number of comments provides a proxy for the political salience of a rule and that rules further from the attention of politics do the best in terms of net benefits (Shapiro and Morrall 2012), which may lead to a lower likelihood that the rules are invalidated by courts.

The number of contacts listed in the notice of proposed rulemaking has a marginally significant relationship with the judicial review outcome, which can be construed as consistent with earlier research showing that an increased breadth of agency staff members involved in the rulemaking leads, through its effects on rule timeliness, to a greater likelihood that the associated final rule is invalidated (Carrigan and Mills 2019). Agencies’ effective independence, as measured by Selin (2015), is also marginally significant at the 10 percent level, perhaps indicating that agencies that enjoy greater independence from their political overseers may receive more deference from the courts.

In sum, our results show that a higher-quality RIA reduces the likelihood that the associated regulation will be invalidated through judicial review only if the agency explains how it relied upon the analysis in making decisions about the regulation. Further, when the quality of the RIA is very low, the agency’s explanation of the RIA’s role in its decision increases the likelihood that the regulation will be invalidated by a court.

5. Conclusion

Courts increasingly recognize that they have a role to play in scrutinizing regulatory agencies’ economic analyses in much the same way that they examine the agency’s interpretation of the statute and the procedures it followed. Our results provide evidence supporting this practice. Using the complete set of economically significant, prescriptive federal regulations proposed between 2008 and 2013, the empirical analysis suggests that the quality of the regulatory agency’s RIA can affect the outcome of judicial review when the rule is challenged by a potentially aggrieved party.

When Hahn and Dudley (2007, p. 193) postulated that “the utility of a particular analysis depends, in large part, on its quality,” they were referring to the value of the analysis in guiding decision-makers to efficient policies. Our results suggest that a quality RIA may have additional value to the agency when the rule is litigated. Although high-quality analysis is associated with greater deference toward the agency, this effect is conditional on whether the agency explained how it used the RIA in its regulatory decisions. When it did, better analysis is associated with fewer rule overturns, and lower-quality analysis, in contrast, is tied to significantly more successful court challenges by plaintiffs. However, if the agency did not indicate that the analysis played a role in its regulatory decisions, the quality of the analysis appeared to have no bearing on the court’s decision regarding whether to vacate or remand the rule to the agency.

While these results have specific implications for comprehending the role of economic analysis in the rulemaking process, they also suggest the importance of considering rulemaking procedures as a collective system. Although the ossification scholarship has tended to view procedures as affecting rulemaking in only one direction, our findings illustrate the role that attention to one procedure can play in an agency’s ability to minimize scrutiny at another procedural step. In fact, these results may indicate one reason why empirical tests of the ossification thesis have tended not to find the anticipated effects (Yackee and Yackee 2010, 2012). Recognizing that rulemaking procedures can work to counteract each other, it is not surprising that the pace and volume of rulemaking have not substantially slowed with the imposition of procedural constraints by the political actors that oversee the process.

Finally, these results have implications that extend beyond the regulatory context as well. The passage of the Foundations for Evidence-Based Policymaking Act of 2018 “requires agencies to plan to develop statistical evidence to support policymaking” (Congressional Research Service 2019), operationalized through the requirement that agencies submit yearly plans to OMB outlining data they intend to collect and “methods and analytical approaches that may be used to develop evidence to support policymaking” (Public Law 115–435 §101). At least with respect to the rulemaking process, our results suggest that courts may already be trying to move agencies in this direction.

Footnotes

[1] Executive Order 12866, issued by President Clinton and affirmed by subsequent presidential administrations, requires executive branch agencies to prepare a regulatory impact analysis for significant regulations. A regulatory impact analysis assesses the significance and cause of the problem the regulation seeks to solve, identifies alternative solutions, and assesses the benefits and costs of each alternative. The term “regulatory impact analysis” does not appear in this executive order, but it was used to refer to the same analysis in President Reagan’s Executive Order 12291, which Executive Order 12866 superseded. Independent agencies often call the equivalent analyses they prepare “cost-benefit analysis” or simply “economic analysis.”

[2] “Economically significant” regulations are those that have costs or other economic effects exceeding $100 million annually or that meet other criteria specified in section 3(f)(1) of Executive Order 12866, which has traditionally governed regulatory analysis and review for executive branch agencies. While President Biden’s Executive Order 14094 updated the threshold for economic significance to $200 million, the traditional $100 million value applies to the rules in this dataset. “Prescriptive” regulations mandate or prohibit activities.

[3] One example is Center for Biological Diversity v. NHTSA, in which the Ninth Circuit found the National Highway Traffic Safety Administration’s “Average Fuel Economy Standards for Light Trucks” rule to be “arbitrary and capricious” in part due to the agency’s “failure to monetize the value of carbon emissions.” The Energy Policy and Conservation Act, the statute that authorized the rulemaking, did not require cost-benefit analysis. See discussions in Cecot and Viscusi (2015) and Revesz (2017).

[4] For example, Masur and Posner (2018) discuss two controversial cases: Business Roundtable v. Securities and Exchange Commission and Corrosion Proof Fittings v. Environmental Protection Agency. In both cases, courts struck down the agency’s rule because the agency’s economic analysis of the regulation was defective.

[5] These elements are included in OMB’s 2003 version of Circular A-4 as well as its revised version, issued in November 2023.

[6] Using the double-blind coding approach, two trained evaluators read the proposed rule and RIA for each regulation, and for each criterion, the evaluators assigned a score ranging from 0 (no useful content) to 5 (comprehensive analysis with potential best practices) (Ellig 2016). For a more complete explanation of the scoring method, see Ellig and McLaughlin (2012) and Ellig (2016).

[7] We test for collinearity among the explanatory variables, which demonstrates no strong multicollinearity. The mean VIF is 2.17, with most of the variables having a VIF less than 2.5. We also run the regressions without the control variables; the results are similar to those shown in Table 2.
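A sketch of this collinearity diagnostic, computing variance inflation factors for a hypothetical design matrix with statsmodels (the column names are illustrative stand-ins):

```python
# A sketch of the VIF check described in this footnote (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
X = pd.DataFrame(rng.normal(size=(126, 4)),
                 columns=["ria_quality", "explained_use",
                          "preamble_words", "num_comments"])
X = sm.add_constant(X)  # VIFs are computed with a constant included

vifs = pd.Series([variance_inflation_factor(X.values, i)
                  for i in range(1, X.shape[1])], index=X.columns[1:])
print(vifs.round(2))
print(f"Mean VIF: {vifs.mean():.2f}")
```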

[8] As an additional robustness check, we re-ran the regressions after removing the four regulations implementing the National Ambient Air Quality Standards, for which the statute prohibited the agency from considering costs. This test indicates that the statutory prohibition on cost consideration does not affect the relationship between RIA quality and the likelihood that a regulation is upheld in court. Regression outputs of the robustness checks are available upon request.

References

Ackerman, Frank and Heinzerling, Lisa. 2004. Priceless: On Knowing the Price of Everything and the Value of Nothing. New York: New Press.
Aiken, Deborah V. 2019. “When Benefit-Cost Analysis Becomes Optional: Regulatory Analysis at the Consumer Product Safety Commission in the CPSIA Era.” Journal of Benefit-Cost Analysis, 10(3): 404–433.
Balla, Steven J. and Wright, John R. 2005. “Consensual Rulemaking and the Time it Takes to Develop Rules.” In Politics, Policy, and Organizations: Frontiers in the Scientific Study of Bureaucracy, edited by Krause, George A. and Meier, Kenneth J., 187–206. Ann Arbor: University of Michigan Press.
Bull, Reeve T. and Ellig, Jerry. 2017. “Judicial Review of Regulatory Impact Analysis: Why Not the Best?” Administrative Law Review, 69(4): 725–840.
Bull, Reeve T. and Ellig, Jerry. 2018. “Statutory Rulemaking Considerations and Judicial Review of Regulatory Impact Analysis.” Administrative Law Review, 70(4): 873–959.
Carpenter, Daniel, Chattopadhyay, Jacqueline, Moffitt, Susan, and Nall, Clayton. 2012. “The Complications of Controlling Agency Time Discretion: FDA Review Deadlines and Postmarket Drug Safety.” American Journal of Political Science, 56(1): 98–114.
Carrigan, Christopher and Mills, Russell W. 2019. “Organizational Process, Rulemaking Pace, and the Shadow of Judicial Review.” Public Administration Review, 79(5): 721–736.
Carrigan, Christopher and Shapiro, Stuart. 2017. “What’s Wrong with the Back of the Envelope? A Call for Simple (and Timely) Benefit-Cost Analysis.” Regulation and Governance, 11(2): 203–212.
Cecot, Caroline and Viscusi, W. Kip. 2015. “Judicial Review of Agency Benefit-Cost Analysis.” George Mason Law Review, 22(3): 575–618.
Clinton, Joshua D. and Lewis, David E. 2008. “Expert Opinion, Agency Characteristics, and Agency Preferences.” Political Analysis, 16(1): 3–20.
Coase, Ronald H. 1959. “The Federal Communications Commission.” Journal of Law and Economics, 2: 1–40.
Congressional Research Service. 2019. “Summary: Public Law No. 115–435 (01/14/2019),” congress.gov/bill/115th-congress/house-bill/4174.
Department of Energy. 1996. “Appendix A to Subpart C of Part 430 – Procedures, Interpretations, and Policies for Consideration of New or Revised Energy Conservation Standards for Consumer Products,” 10 CFR Ch. II §430.
Derthick, Martha and Quirk, Paul. 1985. The Politics of Deregulation. Washington, DC: Brookings Institution Press.
Dooling, Bridget C.E. 2020. “Bespoke Regulatory Review.” Ohio State Law Journal, 81(4): 673–721.
Dudley, Susan E. 2024. “Chevron is Overruled.” Forbes (July 1), forbes.com/sites/susandudley/2024/07/01/chevron-is-overruled.
Dudley, Susan E. and Mannix, Brian F. 2018. “Improving Regulatory Benefit-Cost Analysis.” Journal of Law and Politics, 34(1): 1–20.
Ellig, Jerry. 2016. “Evaluating the Quality and Use of Regulatory Impact Analysis: The Mercatus Center’s Regulatory Report Card, 2008–2013.” Mercatus Center at George Mason University Working Paper (July).
Ellig, Jerry. 2019. “Agency Economists.” Administrative Conference of the United States Report (September 11), acus.gov/report/final-report-agency-economists.
Ellig, Jerry and McLaughlin, Patrick A. 2012. “The Quality and Use of Regulatory Analysis in 2008.” Risk Analysis, 32(5): 855–880.
Farrow, Scott. 2009. “Improving the CWIS Rule with Regulatory Analysis: What Does an Economist Want?” In Reforming Regulatory Impact Analysis, edited by Harrington, Winston, Heinzerling, Lisa, and Morgenstern, Richard D., 176–189. Washington, DC: Resources for the Future.
Faulhaber, Gerald R. and Baumol, William J. 1988. “Economists as Innovators: Practical Results of Theoretical Research.” Journal of Economic Literature, 26(2): 577–600.
Gersen, Jacob E. and O’Connell, Anne Joseph. 2008. “Deadlines in Administrative Law.” University of Pennsylvania Law Review, 156(4): 923–990.
Hahn, Robert W. and Dudley, Patrick M. 2007. “How Well Does the U.S. Government Do Benefit-Cost Analysis?” Review of Environmental Economics and Policy, 1(2): 192–211.
Hahn, Robert W. and Tetlock, Paul C. 2008. “Has Economic Analysis Improved Regulatory Decisions?” Journal of Economic Perspectives, 22(1): 67–84.
Hazlett, Thomas W. 2011. “Economic Analysis at the Federal Communications Commission: A Simple Proposal to Atone for Past Sins.” Resources for the Future Discussion Paper 11-23 (May).
Katzen, Sally. 2011. “OIRA at Thirty: Reflections and Recommendations.” Administrative Law Review, 63: 103–112.
Lavertu, Stephane and Yackee, Susan Webb. 2012. “Regulatory Delay and Rulemaking Deadlines.” Journal of Public Administration Research and Theory, 24(1): 185–207.
Masur, Jonathan and Posner, Eric A. 2018. “Cost-Benefit Analysis and the Judicial Role.” University of Chicago Law Review, 85(4): 935–986.
McGarity, Thomas O. 1992. “Some Thoughts on ‘Deossifying’ the Rulemaking Process.” Duke Law Journal, 41(6): 1385–1462.
Morgenstern, Richard D. 1997. Economic Analyses at EPA: Assessing Regulatory Impact. Washington, DC: Resources for the Future.
Pierce, Richard J. Jr. 2024. “Two Neglected Effects of Loper Bright.” The Regulatory Review (July 1), theregreview.org/2024/07/01/pierce-two-neglected-effects-of-loper-bright.
Revesz, Richard L. 2017. “Cost-Benefit Analysis and the Structure of the Administrative State: The Case of Financial Services Regulation.” Yale Journal on Regulation, 34(2): 545–600.
Royal Swedish Academy of Sciences. 2020. “The Prize in Economic Sciences 2020,” Press Release (October 12), nobelprize.org/prizes/economic-sciences/2020/press-release.
Seidenfeld, Mark. 1997. “Demystifying Deossification: Rethinking Recent Proposals to Modify Judicial Review of Notice and Comment Rulemaking.” Texas Law Review, 75(3): 251–321.
Selin, Jennifer L. 2015. “What Makes an Agency Independent?” American Journal of Political Science, 59(4): 971–987.
Shapiro, Stuart. 2002. “Speed Bumps and Roadblocks: Procedural Controls and Regulatory Change.” Journal of Public Administration Research and Theory, 12(1): 29–58.
Shapiro, Stuart and Morrall, John F. III. 2012. “The Triumph of Regulatory Politics: Benefit-Cost Analysis and Political Salience.” Regulation and Governance, 6(2): 189–206.
Steinzor, Rena, Sinden, Amy, Shapiro, Sidney, and Goodwin, James. 2009. “A Return to Common Sense: Protecting Health, Safety, and the Environment through ‘Pragmatic Regulatory Impact Analysis.’” Center for Progressive Reform White Paper #909 (October), progressivereform.org/publications/pria-909.
Sunstein, Cass R. 2017. “Cost-Benefit Analysis and Arbitrariness Review.” Harvard Environmental Law Review, 41(1): 2–41.
Wagner, Wendy E. 2009. “The CAIR RIA: Advocacy Dressed Up as Policy Analysis.” In Reforming Regulatory Impact Analysis, edited by Harrington, Winston, Heinzerling, Lisa, and Morgenstern, Richard D., 56–81. Washington, DC: Resources for the Future.
Workman, Samuel. 2015. The Dynamics of Bureaucracy in the U.S. Government: How Congress and Federal Agencies Process Information and Solve Problems. New York: Cambridge University Press.
Yackee, Jason Webb and Yackee, Susan Webb. 2010. “Administrative Procedures and Bureaucratic Performance: Is Federal Rule-making ‘Ossified’?” Journal of Public Administration Research and Theory, 20(2): 261–282.
Yackee, Jason Webb and Yackee, Susan Webb. 2012. “Testing the Ossification Thesis: An Empirical Examination of Federal Regulatory Volume and Speed, 1950–1990.” George Washington Law Review, 80(5): 1414–1492.