1. Economic deregulation
The earliest regulatory agencies generally issued “economic regulations” that constrained private economic activity through price controls, quantity restrictions, service conditions, and restrictions on entry and exit. These agencies, including the Interstate Commerce Commission (ICC), the Civil Aeronautics Board (CAB), and the Federal Communications Commission (FCC), were often established as independent commissions to insulate them from political influence, yet they often seemed to be “captured” by the industries they regulated. Scholarship in economics, antitrust, and law found that economic forms of regulation tended to keep prices higher than necessary, to the benefit of regulated industries and at the expense of consumers (Dudley, 2021).
Policy entrepreneurs at think tanks, officials in the Ford, Carter, and Reagan Administrations, and legislators in Congress, along with judicial decisions, brought these observations and academic insights to the policy realm. Bipartisan efforts across all three branches of government eventually led to the abolition of entire agencies, such as the CAB and the ICC, and the removal of unnecessary economic regulation in several previously regulated industries, resulting in improvements in innovation and consumer welfare.
1.2. Rationale
The intellectual underpinnings for economic deregulation derive from four economic concepts: market power, contestability, economic efficiency, and public choice.
The main justification for economic regulation was to prevent monopolistic firms from exercising market power, which allows them profitably to raise prices above marginal cost. Economic research showing that regulation actually created market power in potentially competitive industries made a strong case for economic deregulation (Green & Nader, 1973).
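As a point of reference (the formalization below is ours, not drawn from the cited studies), market power is conventionally summarized by the Lerner index, which is zero under perfect competition and grows as price diverges from marginal cost:

```latex
% Lerner index of market power (standard textbook formulation; notation is ours)
% P = price, MC = marginal cost, \varepsilon = own-price elasticity of demand
\[
L \;=\; \frac{P - MC}{P}, \qquad 0 \le L < 1 .
\]
% For a profit-maximizing firm, L = 1/|\varepsilon|, so the less elastic the
% demand it faces, the greater its scope for pricing above cost.
```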
Contestability theory suggests that potential competition can prevent the exercise of market power, even if the incumbent firm is a monopoly. Scholars in the 1970s identified several regulated markets as highly contestable, including individual airline, truck, and bus routes (Bailey & Panzar, 1981). In industries like telecommunications, natural gas, and electricity, reformers sought to promote contestability by ensuring that competitors could access facilities that involved sunk costs, such as local phone lines, pipelines, and electric wires.
Competitive or contestable markets lead to allocative efficiency, where every unit of every resource is employed in the use that consumers value most highly. Removing restraints on competition also promotes dynamic efficiency, which occurs when firms discover new ways to reduce costs, improve productivity, and offer new products or services that consumers value.
Public choice and the economic theory of regulation recognize that government decision-makers often face incentives to pursue objectives other than economic efficiency. Under the simple capture theory, the regulator advances the interests of the regulated industry (Stigler, 1971; Green & Nader, 1973). The economic theory of regulation posits that the regulator strikes a compromise that reflects the relative political strength of various stakeholders (Becker, 1983; Peltzman, 1976). The study of rent-seeking reveals that regulation creates wealth transfers; concentrated interests expend resources to capture those wealth transfers; and those expenditures represent social waste (Buchanan et al., 1980).
1.3. Results
The economic deregulation that began in the 1970s unleashed competitive forces among existing firms and led to new entry that placed downward pressure on prices, eroded regulatory rate distortions, and accelerated productivity growth. By 1993, deregulated industries produced efficiency improvements equivalent to a 7–9 percent increase in GDP, and consumers received most of the benefits (Winston, 1993, p. 1284). This estimate does not include the substantial effects of additional liberalization in communications and energy since 1993. Below, we summarize the principal results from empirical studies.
1.3.1. Price levels
In most cases, deregulation reduced overall prices (Winston, 1993). Airline passengers saved about $12.4 billion annually (in $1993) (Morrison & Winston, 1995). Inflation-adjusted average freight rail rates fell by 46 percent between 1982 and 1996, and rates for individual commodities fell by between 29 and 56 percent. Deregulation was responsible for at least one-third of this reduction, and possibly much more (Ellig, 2002, pp. 151–156). By 1985, trucking deregulation was associated with a 3 percent reduction in truckload rates and a 17 percent reduction in less-than-truckload rates. Lower rates saved shippers at least $6.8 billion per year ($1977) (Winston et al., 1990; Ying & Keeler, 1991; Corsi, 1994, 1996a, 1996b). When the last price controls on natural gas were lifted in 1985, gas prices began a decade-long decline (Crandall & Ellig, 1997, pp. 10–11). Cable television rates are lower in jurisdictions with competing cable companies (Ellig & Conover, 2014; GAO, 2004, 2005; Hazlett & Spitzer, 1997; Hazlett, 2006; Levin & Maisel, 1991).
When the FCC stopped allowing AT&T to prevent customers from attaching its competitors’ equipment to the network, prices of telephone equipment fell throughout the 1970s. They increased while AT&T prepared for divestiture in 1981–1982, and then continued to decline after 1982 (Crandall, 1991, pp. 96–97; Crandall & Ellig, 1997, p. 26). In the nine years after the AT&T breakup, interstate long-distance rates net of federally mandated access charges fell from 13.8 to 7.5 cents per minute (Crandall & Waverman, 1996).
Wireless voice communications saw even more dramatic price changes. In 1994–1995, the FCC auctioned spectrum licenses for personal communications services, enabling two additional entrants to challenge the existing cell phone duopoly in each market. Cell phone revenue per minute plunged from more than 80 cents in 1992 to 4 cents in 2008, generating $212 billion in consumer surplus annually, primarily for voice service (Hazlett, 2017, pp. 216–217).
In a few cases, deregulation led to price increases, usually due to market design flaws or other idiosyncrasies. The 1984 Cable Act, for example, preempted local regulation of basic cable rates but did little to eliminate cable monopolies, so basic cable rates increased (Rubinovitz, 1993). Nevertheless, because cable companies added channels and subscribership continued to rise, Hazlett & Spitzer (1997) suggest that consumers were better off because the unregulated monopolist had incentives to share the value of dynamic efficiencies with consumers. Electricity restructuring in California led to significant price spikes and utility bankruptcies because regulators required utilities to buy power in day-ahead spot markets that were vulnerable to manipulation (Borenstein, 2002). Price increases initially followed electricity competition in Texas because retail prices were aligned more closely with marginal costs – primarily the cost of natural gas. After a transition period, the full implementation of competition in Texas was associated with lower electric prices (Hartley et al., 2019).
1.3.2. Price structure
Deregulation tended to align prices more closely with costs (Winston, 1993). Thus, less-than-truckload rates fell by more than truckload rates (Winston et al., 1990), long-distance air fares fell by more than short-distance air fares, and natural gas prices declined more for large customers than for small customers (Crandall & Ellig, 1997). Hollas (1999) finds that FERC’s restructuring of gas pipeline regulation reduced prices to industrial customers and increased prices to residential customers (although his data set includes only two years when FERC Order 636 was in effect). Electricity competition in Texas initially lowered rates for industrial customers but not for residential customers (Zarnikau & Whitworth, 2006). Deregulation also allowed some industries, especially airlines and railroads, to set prices that more closely reflected customers’ different elasticities of demand (Winston, 1993, p. 1280; Morrison & Winston, 2000, pp. 18–19; Grimm & Winston, 2000, pp. 62–66; Schmalensee et al., 2015, pp. 20–21).
Telephone deregulation involved more complicated rate changes. Long-distance competition made it difficult for regulators to continue the inefficient cross-subsidization of local services with revenues from long-distance. As a result, regulated local telephone rates rose by 3.3 percent annually between 1983 and 1989 (after the AT&T breakup), then resumed falling (Crandall, 1991, p. 60; Crandall & Ellig, 1997, p. 25). As per-minute charges on long-distance fell, the associated annual welfare loss caused by regulation dropped from $10–17 billion ($1996) in the mid-1980s (Crandall, 1991, p. 141) to $2.5–7.0 billion in the mid-1990s ($1996) (Crandall & Waverman, 2000, p. 120) to $1.5 billion in 2002 ($2002) (Ellig, 2006).
The Telecommunications Act of 1996 required local phone companies to lease elements of their networks to competitors at deep discounts. The FCC initially required that all elements of the local network be made available to competitors as a package (the “unbundled network element platform”) at discounts much larger than the wholesale discount the FCC had previously established for leasing the local network. These discounts further lowered rates for a service that was already cross-subsidized and sold below cost (Crandall, 2005; Braunstein, 2004a, 2004b). Instead of building their own local networks, long-distance companies and new entrants lobbied for low lease rates (Eisner & Burton, 2001; Zolnierek et al., 2001). Most of these competitors collapsed after 2005 when a succession of court cases forced the FCC to reverse course. Competition for local phone service ultimately came from voice over Internet protocol (VoIP) and wireless phones (Beard et al., 2016, pp. 299–301).
1.3.3. Costs and productivity
Removal of price and entry regulations increased competition, pushing prices closer to marginal cost (allocative efficiency). However, much of the customer savings came from dynamic efficiency, because deregulated firms reduced costs and improved productivity (Winston, 1998, pp. 96–102). Removal of entry restrictions on individual routes allowed airlines and trucking companies to develop “hub-and-spoke” systems that reduced costs and facilitated improved service frequency (Brueckner & Spiller, 1994; Boyer, 1993, p. 486; Morrison & Winston, 1995). Low-fare airlines that did not develop hub-and-spoke systems often utilized secondary airports in major cities. Interstate natural gas pipelines interconnected at “market hubs” that gave customers access to multiple suppliers and created an integrated, national market for natural gas (Apergis et al., 2015; Arano & Velikova, 2009; DeVany, 1996; DeVany & Walls, 1994).
Deregulation produced remarkable increases in railroad productivity and decreases in operating expenses per ton-mile (Ellig, 2002, p. 161). Railroad productivity increased by 6–7 percent annually from 1981 to 1988, and costs per revenue ton-mile were 41–44 percent lower by 1989 (Wilson, 1997).
Growth in total factor productivity in the telecommunications industry accelerated after 1970, when the FCC allowed some competition in long-distance (Crandall, 1991, pp. 69–71; Crandall & Galst, 1995). Crandall (1991, pp. 133–134) estimated that the value of productivity improvements due to liberalization during the 1980s totaled $6.4 billion to $16.6 billion ($1988). After the Telecommunications Act of 1996 relaxed ownership restrictions, mergers of broadcasting stations reduced costs by more than $2.8 billion, increased industry revenues by almost $2 billion, and increased viewership slightly – results consistent with an overall improvement in economic efficiency (Stahl, 2016).
Under regulation, wasteful nonprice competition had dissipated most of the rents airlines received from pricing above cost on long-haul routes. Deregulated airlines offered lower fares combined with more crowded flights, less elaborate meals, fewer flight attendants, and, in general, fewer perks.
Some of the cost reductions in deregulated industries occurred because fewer rents were shared with labor, but the extent and form this change took varied by occupation and by industry (see, e.g., Card, 1996; Dooley, 1994; Rose, 1987; Peoples, 1998; Henrickson & Wilson, 2008). In trucking, for example, deregulation reduced the wage gap between Black and white truck drivers by creating new opportunities for Black drivers to enter the previously regulated (and more lucrative) for-hire portion of the industry (Peoples & Saunders, 1993; Heywood & Peoples, 1994; Rose, 1987). Reduced entry barriers and increased competition more than doubled the likelihood that a trucker would be an owner-operator instead of an employee (Peoples & Peteraf, 1995).
1.3.4. Quality of service
In some industries, deregulation generated dynamic efficiency that improved the quality of service. By 1985, railroads had reduced not only average delivery times, by almost 30 percent, but also the variance in delivery times. This improvement increased shipper welfare by $2 billion to $6 billion annually (in $1977) (Winston et al., 1990; see also Barnekov & Kleit, 1990). Faster trucking service saved shippers almost $1 billion annually by 1985 (Winston et al., 1990). Deregulation allowed truckers to offer service guarantees (Boyer, 1993, pp. 489–490), which made just-in-time manufacturing possible (Larson, 1992).
Wellhead price deregulation made gas service more reliable by ending the widespread natural gas shortages and service curtailments of the 1970s (MacAvoy, 1971; MacAvoy & Pindyck, 1975; Breyer & MacAvoy, 1974). Cable companies that faced competition from other wireline cable companies or satellite TV tended to improve customer service, increase their bandwidth, offer more channels, and upgrade more quickly to digital transmission (Hazlett, 2006; GAO, 2004, 2005; Savage & Wirth, 2005). The removal of price regulations on cable TV led to price increases and increases in the number of available channels (Beard et al., 2001; Hazlett & Spitzer, 1997). Meanwhile, the absence of content regulation for cable TV, satellite TV, and the Internet led to an explosion of new niche video content (Hazlett, 2017). FCC policies that “unregulated” the Internet facilitated migration from dialup Internet service to broadband; for example, DSL subscriptions showed significant upward deviations from previous trends when the FCC decided to give DSL the same light-handed regulatory treatment that cable modems had always received (Hazlett & Caliskan, 2008).
Arguably the most significant improvement in the quality of service occurred in wireless communications. In 1993, Congress directed the FCC to auction spectrum for “personal communications,” with no further specification of the type of service to be offered, enabling the introduction of the BlackBerry in the 1990s and the iPhone in 2007, followed by millions of online apps (Hazlett, 2017).
Airline deregulation had a more mixed effect on the quality of service, but it aligned quality more closely with consumer preferences. Fare savings far outweighed the value consumers attributed to reductions in other aspects of quality mentioned above, and greater flight frequency increased consumer welfare by $10.3 billion annually ($1993) (Morrison & Winston, 1995).
1.4. Remaining challenges
Despite these successes, beneficial competition is hindered by remaining or emerging challenges, both institutional and technical. Institutionally, if policymakers are not aware of the beneficial outcomes of removing economic regulations, or if, as the economic theory of regulation holds, they respond to motives other than public welfare, harmful types of restrictions in these markets may reemerge (Wilson & Klovers, 2020). Additionally, some of the deregulated industries, particularly airlines and trucking, rely on complementary government-managed infrastructure (such as airplane landing slots or public roads) that is rarely priced efficiently and whose capacity is likely non-optimal. For example, Morrison & Winston (2000) estimated that the limited availability of airport gates increased fares by $3.8 billion ($1998). On the nation’s highways, lack of direct pricing contributes to traffic congestion and reduces incentives to construct new capacity optimally (Small et al., 1989; Winston, 2000; Winston & Langer, 2006). Winston and Shirley (1998) estimated that optimal toll pricing of highways would generate net benefits of $3.8 billion annually ($1998). For segments of the electric utility and cable services markets, state or local governments still control entry and limit competition.
Technical challenges also impede opportunities to reap benefits from greater competition. Most of the deregulated industries involve the transportation of people, commodities, or communications signals over a network, which complicates the assessment of market power and the analysis of mergers (Dudley & Ellig, 2022). In some markets (such as airline routes), firms may retain residual market power (Borenstein, 1989; Morrison & Winston, 2000), and in others (such as some rail lines), firms exercise significant market power over a subset of customers, making new entry unlikely. In cases where residual market power exists (such as electric wires or gas pipelines), decision-makers have left some element of the network monopolized and face the challenge of designing regulation in a way that allows innovation while preventing monopolistic behavior.
2. Regulatory impact analysis (RIA)
As the U.S. was removing the economic forms of regulation discussed above, a new type of regulation, aimed at addressing health, safety, and environmental issues, was emerging. These “social” regulations were supported by different rationales, including concerns (such as environmental emissions) that were external to market transactions, so the case for outright deregulation did not apply as it did for economic regulations. Instead, since the mid-1970s, presidents have required executive branch agencies to perform RIA before issuing significant new regulations (Dudley, 2020). President Clinton’s Executive Order (E.O.) 12866 has guided U.S. executive agencies’ practices since 1993.
2.1. Rationale
An RIA organizes evidence about the effects of alternatives to identify whether the benefits of a proposed action are likely to justify the costs and discover which alternative is likely to be most cost-effective (OMB, 2023). The Office of Management and Budget (OMB) observes that “regulatory analysis also has an important democratic function; it promotes accountability and transparency and is a central part of open government” (OMB, 2011).
An RIA should begin with a problem statement; E.O. 12866 directs agencies to identify the problem a rule is intended to address, including “material failures of private markets.” This recognizes that market economies rely on competition and price signals to allocate scarce resources to their most valued uses, to encourage innovation, and to satisfy consumer needs. Government regulation can disrupt those signals, so the problem statement should explain why market outcomes are less efficient than what government regulations could be expected to accomplish (Dudley et al., 2017). Agencies should identify failures of private markets, which may include externalities or asymmetric information. The order also directs agencies to be alert for “failures of…public institutions,” such as poorly defined property rights or barriers imposed by existing policies.
E.O. 12866 next directs agencies to identify and assess available alternatives to direct regulation, such as antitrust enforcement, consumer-initiated litigation in the product liability system, or administrative compensation systems (OMB, 2011). When regulation is deemed appropriate, it should target the identified problem, and rely on market-based and performance-oriented approaches, when possible, because they are likely to achieve desired goals at lower social costs than approaches that rely on design or engineering standards (OMB, 2023). President Obama’s E.O. 13563 (2011) emphasized flexibility, encouraging agencies to consider “warnings, appropriate default rules, and disclosure requirements as well as provision of information to the public in a form that is clear and intelligible.”
After these first two steps, benefit–cost analysis (BCA) is a key element of the RIA. By translating benefits and costs into monetary terms, BCA allows comparisons of different regulatory options and endpoints. Comparing the incremental benefits and costs of regulatory alternatives (e.g., successively more stringent standards) can identify the alternative that maximizes net benefits (OMB, 2023). For regulatory actions with the same primary endpoint (e.g., tons of pollutants removed), OMB guidance also finds that “cost-effectiveness analysis can provide a rigorous way to identify options that achieve the most effective use of a given amount of resources, without requiring monetization of all relevant benefits or costs” (OMB, 2023).
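To make the decision rule concrete, a stylized statement of the net-benefit and cost-effectiveness criteria follows (the notation is ours for illustration, not taken from Circular A-4):

```latex
% Stylized criteria for comparing n regulatory alternatives (illustrative notation)
\[
NB_i = B_i - C_i , \qquad i = 1, \dots, n \quad \text{(candidate alternatives)} ,
\]
\[
i^{*} = \arg\max_i \, NB_i , \qquad
\text{with a marginally more stringent option worthwhile only if } \Delta B > \Delta C ,
\]
\[
CE_i = \frac{C_i}{E_i} \quad \text{(cost per unit of a common effectiveness measure, e.g., tons of pollutant removed)} .
\]
```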
E.O. 12866 requires agencies to consider distributive impacts and equity, directing them to minimize burdens on individuals, small businesses, small communities, and governmental entities. E.O. 13563 encourages agencies to “consider (and discuss qualitatively) values that are difficult or impossible to quantify, including equity, human dignity, fairness, and distributive impacts,” and E.O. 14094 says “regulatory analysis, as practicable and appropriate, shall recognize distributive impacts and equity, to the extent permitted by law.”
2.2. Results
E.O. 12866 gives the Office of Information and Regulatory Affairs (OIRA) in OMB responsibility for reviewing all significant proposed and final regulations. This gatekeeper function provides an important incentive for agency compliance with RIA requirements. OIRA coordinates interagency disputes on regulation, liaises with White House officials to ensure regulations are consistent with presidential priorities, and reviews RIAs according to the principles in E.O. 12866 and RIA guidance, especially Circular A-4 (Dudley, 2020).
Presidential directives have been the main impetus for regulatory analysis. Congress has passed some cross-cutting statutes calling for RIA (e.g., the Unfunded Mandates Reform Act (1995) and the Regulatory Flexibility Act (1980)), but the coverage of these statutes is limited. Some statutes that authorize agency regulation may contain language suggestive of RIA, and federal courts are increasingly interpreting vague statutory language as requiring some economic analysis (Mannix, 2016).
Because OIRA review is limited to executive branch agencies, those agencies are more likely to prepare RIAs than independent agencies, which are not subject to presidential orders (Fraas & Lutter, 2011a), suggesting the orders have had some effect. However, many executive agency regulations are completed without comprehensive BCA. According to annual OMB reports to Congress, less than one-quarter of regulations with impacts of $100 million or more include monetized estimates of both benefits and costs.
One way of evaluating RIA quality is to compare the benefits and costs predicted in the RIA with those achieved by the regulation. Relatively few such retrospective analyses exist. Studies that perform these comparisons disagree on whether ex-ante analyses consistently over- or under-predict benefits or costs (Harrington et al., 2000; OMB, 2005; Harrington, 2006). OIRA’s comparison of 47 ex-ante and ex-post studies of regulations, most of which were conducted by academics rather than the federal government, found that in 11 cases the RIA’s ex-ante ratio of benefits to costs was accurate, in 22 cases it was overestimated, and in 14 cases it was underestimated (OMB, 2005, p. 47). Thus, about three-quarters of relatively sophisticated RIAs arguably had substantial inaccuracies.
Other studies find that RIAs often fail to conform to executive order principles and OMB guidance (Fraas & Lutter, 2011b; Belcore & Ellig, 2009; Hahn et al., 2000; Hahn & Dudley, 2007; Ellig, 2016). Some evidence suggests that RIA requirements and OIRA oversight cause agencies to conduct more thorough analysis than they otherwise would (Bull & Ellig, 2018; Ellig & Fike, 2016; McLaughlin & Ellig, 2011). Political factors and agency ideology are associated with lower-quality analysis (Bull & Ellig, 2018; Ellig & Conover, 2014; Ellig & Fike, 2016).
Published studies offer mixed evidence about the influence of RIAs on the quality of regulations (Morgenstern, 2011; Hahn & Tetlock, 2008). However, case studies by insiders identify numerous specific instances where well-done RIAs reduced costs, increased benefits, or introduced novel alternatives that improved significant regulations (Morgenstern, 1997; Graham, 2008).
2.3. Remaining challenges
While presidents have required RIA, legislation delegating regulatory authority to executive branch agencies rarely includes explicit requirements for agencies to base their regulatory decisions on such analysis (Bull & Ellig, 2018). Most statutes are silent on whether regulations should be based on BCA (Dudley & Mannix, 2018), and some have been interpreted as precluding a weighing of costs against benefits (Whitman v. Am. Trucking Ass’ns, 2001). Greater scrutiny by Congress or the courts will be key to improving the quality and use of analysis (Bull & Ellig, 2017, 2018; Carrigan et al., 2019, in this issue).
Agencies face incentives to demonstrate that the benefits of their desired actions exceed the costs (Breyer, 1995; Shapiro, 2017, 2016; Williams, 2008; Ellig, 2019), and usually seek public input on regulatory analysis and alternatives toward the end of a rulemaking process, after important decisions have been made. Engaging public input earlier could support more rigorous RIAs and better regulatory outcomes (Dudley & Wegrich, 2015; Carrigan & Shapiro, 2017).
Determining the proper scope of the analysis can be challenging, in terms of the number of alternatives considered, the time frame, and indirect benefits and costs. While no RIA will be comprehensive, the challenge is to select a set of viable alternatives and to be objective and balanced in selecting which benefits and costs to include (Dudley & Mannix, 2018). An RIA is only as good as the data and studies on which it relies, and obtaining reliable information is often challenging, especially when addressing uncertain future problems or new products, services, or technologies that have not yet been sold in the market or implemented (Dudley et al., 2019).
For regulations intended to reduce risks to human health or the environment, scientific risk assessments are critical inputs, yet these are rarely provided as probabilistic risk assessments. Agencies’ approaches can inflate estimates of certain risks, benefits, and costs relative to others, and lead to misaligned priorities because the degree of precaution differs across risks (Gray & Cohen, 2012; Dudley et al., 2017).
3. Retrospective analysis
More rigorous retrospective evaluation of social regulations could address some of the challenges with ex-ante analysis. RIAs conducted before a regulation is in place rely on “informed guesses” (OMB, 2005, p. 41) about how the world would look absent the regulation, and how responses to regulatory requirements will alter outcomes. Better retrospective review would allow those hypotheses to be tested against actual outcomes.
Nevertheless, retrospective regulatory analysis is much less common than ex-ante analysis. Retrospective review has generally focused on identifying burdensome or underperforming rules that might be revised or rescinded. While this is important, a life-cycle approach to retrospective review could focus attention on ex-post evaluation of outcomes as well as costs and, by testing hypotheses and assumptions regarding causation, help inform future ex-ante analysis and improve regulatory outcomes (Dudley, 2017).
3.1. Rationale
Evaluation and feedback are essential for informed action and learning, and performance evaluation of government programs has a long history (see, e.g., Newcomer et al., 2015). In the regulatory sphere, evidence-based policymaking implies systematic retrospective analysis of individual regulations and/or related groups of regulations. Retrospective analysis should be part of an integrated system that starts with a solid RIA to inform the design of regulations, establishes clear performance metrics for regulations, plans for retrospective review, and then uses the results of that review to reassess the regulation (Peacock et al., 2018).
While retrospective analysis is, by definition, done after a regulation is in effect, agencies should begin planning for the analysis when they first develop a regulation. By clearly identifying the problem the regulation is intended to address, laying out the expected causal linkages between the regulatory intervention and desired outcome, and establishing a framework for empirical testing of assumptions and hypothesized outcomes, agencies can lay the groundwork for successful evaluation (Dudley, 2017; Greenstone, 2009; Aldy, 2014).
Coglianese (2012) lays out a hierarchy of designs for gaining knowledge about regulatory impacts. Laboratory experiments, at the top of his hierarchy, are not possible for many regulations, including those aimed at reducing health, safety, and environmental risks. Designing regulations from the outset in ways that allow variation in compliance (such as different compliance schedules in different regions, or small-scale pilots) is therefore essential if evaluators are to go beyond observing mere associations and gather the data necessary to test hypotheses about the relationship between regulatory actions and outcomes (Greenstone, 2009). Experimentation and competition among jurisdictions can be a powerful force for improving regulatory outcomes and developing practical knowledge of what works (see Bull, in this issue).
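As a stylized illustration of how such variation in compliance can support causal inference, the sketch below uses hypothetical data (the column names and numbers are ours, not drawn from any cited study) to recover a regulation’s effect with a simple difference-in-differences comparison between early-complying and later-complying regions:

```python
# A minimal difference-in-differences sketch, assuming hypothetical region-year data
# with columns: outcome (e.g., an injury rate), treated (1 = early-compliance region),
# and post (1 = observation after the early group's compliance deadline).
import pandas as pd
import statsmodels.formula.api as smf


def did_estimate(df: pd.DataFrame) -> float:
    """Return the difference-in-differences estimate of the regulation's effect.

    The coefficient on treated:post measures how much more outcomes changed in
    early-compliance regions than in later-compliance regions, netting out
    trends common to both groups.
    """
    model = smf.ols("outcome ~ treated + post + treated:post", data=df).fit()
    return model.params["treated:post"]


if __name__ == "__main__":
    # Made-up observations: the early-compliance group improves by more after its deadline.
    df = pd.DataFrame({
        "outcome": [10.0, 9.8, 7.1, 7.0, 10.1, 10.0, 9.7, 9.6],
        "treated": [1, 1, 1, 1, 0, 0, 0, 0],
        "post":    [0, 0, 1, 1, 0, 0, 1, 1],
    })
    print(f"Estimated effect of the regulation: {did_estimate(df):.2f}")  # about -2.45
```

Richer designs used in practice (staggered adoption across many regions, regression discontinuities, or synthetic controls) follow the same logic: deliberately built-in variation supplies the counterfactual that a uniform, simultaneous mandate would not.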
3.2. Results
Presidents and Congress have directed agencies to analyze the effects of existing regulations; however, procedures for doing so have not been institutionalized to the extent that ex-ante RIA has. Reviews have found that only a small fraction of major rules had been subject to ex-post evaluation (OMB, 2005; Raso, 2017; Aldy, 2014).
Some agencies’ procedures incorporate retrospective reviews more than others. The National Highway Traffic Safety Administration (NHTSA) in the Department of Transportation (DOT) publishes a regular schedule for reviewing existing regulations, and OIRA reports that NHTSA’s ex-post estimates of regulatory impacts appear more accurate than other agencies’ (OMB, 2005). The regular data DOT collects on traffic accidents contributes to its ability to validate ex-ante estimates (and improve future estimates). This points to the importance of committing to evaluation at the outset of rulemaking. According to Aldy’s analysis of U.S. practices, most economically significant regulations are not designed to produce adequate data and enable causal inference of the regulation’s effects (Aldy, 2014).
One of the greatest successes of retrospective economic analysis in the U.S. was the economic deregulation described above. Empirical research consistently demonstrated the consumer harms caused by existing price and entry regulations. Studies of policy reform routinely credit this research as a necessary (though not sufficient) factor in motivating change (Derthick & Quirk, 1985; Robyn, 1987). This experience highlights the potential for rigorous retrospective analysis to improve public welfare.
3.3. Remaining challenges
Agencies do not have strong incentives to conduct retrospective analyses of their own regulations. OIRA review motivates them to conduct RIAs before issuing new regulations, but the only consequence of not conducting ex-post analysis is that the regulation remains in place. Further, regulated parties who have invested in compliance often have less incentive to work to remove an existing regulation (Dudley, 2017).
Furthermore, meaningful retrospective analysis is complicated. Identifying the counterfactual that would best describe the state of the world absent the regulation and measuring opportunity costs and regulatory benefits are technically difficult.
Developing an evaluation plan when a rule is first issued, and committing to gathering the data needed for evaluation, might address some of these technical issues. When possible, designing regulations from the outset in ways that allow variation in compliance would provide natural experiments in which to learn from experience. The experience from the successful economic deregulation discussed above points to the value of such natural experiments. Intrastate airline fares not subject to the CAB’s rate-setting authority were markedly lower than interstate fares, providing a powerful counterfactual for what interstate prices could be with more competition. Similarly, the ICC did not regulate trucking rates for agricultural products, and they were substantially lower than rates for manufactured products.
4. Conclusion
As U.S. regulation has increased over the last 50 years, so have efforts to ensure those regulations serve the public interest. The first wave of reforms came in the 1970s and 80s, when economic deregulation unleashed competitive market forces in previously regulated sectors, resulting in improved efficiency and lower consumer prices. The social regulations that emerged at the same time have not been conducive to outright deregulation. Instead, concerns about their burdens led to the requirement for ex-ante RIA to ensure regulatory benefits justified the costs. The third wave of regulatory reform involves ex-post evaluation of regulatory impacts.
The experience of these three approaches to regulatory reform reinforces the importance of recognizing institutional as well as technical factors that may affect outcomes. For example, the U.S. experience suggests that significant reforms require action by the legislative, executive, and judicial branches. The economic deregulation of the 1970s and 1980s enjoyed bipartisan support from all branches of government and created lasting positive impacts by increasing competition, encouraging innovation, and lowering consumer prices. In contrast, ex-ante and ex-post RIA, largely driven by executive branch requirements, have had more mixed effects. Incentives provided by OMB’s gatekeeper review have made ex-ante analysis more successful than ex-post. Institutional change that motivates agencies to conduct impartial assessments of viable alternatives before making decisions and to revisit their regulatory decisions ex-post could improve outcomes.
The greatest technical challenge to better regulation is data. Economists and other social scientists had access to vast amounts of data to evaluate the effects of anticompetitive economic regulations and quantify the benefits of economic deregulation. Designing regulations so they can later be evaluated, including allowing variations that generate natural experiments, may be critical to ensuring more evidence-based policies going forward.
Acknowledgments
This article is adapted from a chapter the late Jerry Ellig and I wrote for The Handbook of Regulatory Authorities, edited by Martino Maggetti, Fabrizio Di Mascio, and Alessandro Natalini, Edward Elgar Publishing Ltd. (2022). It benefited from Dylan Desjardin’s research support, as well as constructive review and feedback from participants at a 2022 conference honoring Jerry Ellig’s legacy, my colleagues in the GW Regulatory Studies Center, and other authors in this volume. Any errors are mine. Susan E. Dudley