
Assessing the quality of European Impact Assessments

Published online by Cambridge University Press:  19 February 2024

Diana-Maria Danciu
Affiliation:
Research Group Policy Management, Faculty of Business Economics, Hasselt University, Hasselt, Belgium
Laura Martens
Affiliation:
Research Group Policy Management, Faculty of Business Economics, Hasselt University, Hasselt, Belgium; Faculty of Law, Antwerp University, Antwerp, Belgium
Wim Marneffe*
Affiliation:
Research Group Policy Management, Faculty of Business Economics, Hasselt University, Hasselt, Belgium
Corresponding author: Wim Marneffe; Email: [email protected]

Abstract

This paper explores the possibility of developing a framework to assess the quality of Impact Assessments (IAs) by examining the common elements found in the existing academic literature on this concept, the stocktaking exercises carried out by the European institutions and the opinions of the Regulatory Scrutiny Board. At this intersection, we find that diversity in the interpretation and application of the guidelines is not only acceptable but also necessary for tailoring IAs to the needs they serve. Our findings are relevant because a universal framework that avoids focusing solely on assessing quality will not only provide much-needed coherence in this field but also raise awareness of the normality of variability in the application of any European Union guidelines, thus reflecting the inherent nature of IAs.

Type
Articles
Copyright
© The Author(s), 2024. Published by Cambridge University Press

I. Introduction

The Better Regulation "Guidelines" have been adopted by the European Commission (hereafter, the Commission) to ensure all decisions are made in a transparent, proportionate and evidence-based manner. The guidelines provide a framework to systematically review regulation and to identify potential reductions of unnecessary costs whilst maximising benefits.Footnote 1 The Better Regulation "Toolbox", a supporting document, provides guidance, tips and best practices that assist the Commission in implementing the Guidelines.

One element covered by Better Regulation is the Impact Assessment (IA). An IA is an ex ante tool that assesses the necessity of European Union (EU) action and the design and potential impacts of such action. IAs inform the decision-making process and must adhere to the highest quality standards.Footnote 2 However, such standards are an elusive concept, as the guidelines offer exactly what they promise – guidance towards drafting reports. The directorates-general (DGs) concerned are given flexibility to choose how they approach the process. Consequently, IAs are applied in various ways, generating confusion about what might be regarded as a "good-quality IA".Footnote 3 Therefore, we are interested in answering the following research question: "what are the commonly agreed indicators of a good-quality impact assessment?" In answering this question, we address two objectives. The first is to build a framework of indicators that highlights the salient issues in IA and helps policymakers, users and the public understand what elements can make or break an IA. The second is to assess the EU IAs against this framework and to examine the application of the IA process.

We believe such a framework for assessing the quality of EU IAs is needed for several reasons. First and foremost, we want to go beyond the Regulatory Scrutiny Board (RSB) assessment system, which is used mostly as a blueprint for passing an assessment.Footnote 4 Moreover, critics have often questioned the objectivity of the RSB, as half of its members are Commission officials who are expected to return to their services after their mandate.Footnote 5 In addition, the expertise of the RSB can be considered only general, and the RSB is not supported by specialists in assessing the IAs.Footnote 6 We argue that a framework should be independent of any Commission body and available for Member States or other entities to use in understanding the IA within the broader context of the decision-making process; for example, its use in ex post evaluations or inter-institutional negotiations.Footnote 7 Second, we believe such a framework is necessary to complement existing tools such as the Organisation for Economic Co-operation and Development's (OECD) Indicators of Regulatory Policy and Governance (iREG), which gathers indicators through staff-checked self-reported questionnaires, and the World Bank's Global Indicators of Regulatory Governance. Although iREG has been tremendously useful, it only provides data on regulatory quality in the OECD's member countries to allow for comparison and the tracking of progress. The World Bank's indicators explore practices such as transparency, consultation, the use of regulatory IAs and access to enacted laws, but again, the focus is on country comparison. Neither, however, serves as a learning tool at the same time. Since both frameworks rely on composite indicators – which, by their nature, are simplified – we complement them with more specific elements, retaining only those relevant to all three dimensions examined and leaving the list open for necessary adjustments. In this context, a discussion is needed around how IA quality can be captured numerically,Footnote 8 and we aim to provide the starting point for such a discussion without adding to the existing box-ticking exercises. Third, and building on our second point, such a framework could serve – like the OECD's iREG and the World Bank's set of composite indicators – as a tool for tracking progress in regulatory quality, which could be used in two ways: to communicate such performance to the public for the sake of transparency and trust, and to support accountability. When the public has access to such information on regulatory quality and the steps involved in the decision-making process, it is easier to hold politicians accountable.Footnote 9 Fourth, and possibly more far-fetched, we build on extant frameworks and suggest that courts could one day use such a framework to analyse cases alleging violations of formal procedures.Footnote 10

To build our framework, we consulted three sources: the academic literature on quality in IA; the stocktaking exercises carried out by the European institutions, including the views of stakeholders; and the opinions of the RSB. By choosing the indicators that appear at the juncture of these three dimensions, we arrived at a list of thirty-five indicators with which to start building our framework. To test the framework, we assessed a dataset of 100 IAs against it. Our starting point is the assumption that good-quality IAs lead to good regulation, and our added value lies in identifying the determinants of IA quality discussed by the scholars who have attempted to find proper measures for regulatory quality and, more recently, for the quality of IAs,Footnote 11 whilst also complementing their work. Throughout the paper, we argue that there is no "one-size-fits-all" practice when building IAs.

The rest of the paper proceeds as follows: in the next section (Section II), we provide an analysis of the concept of the quality of IAs through a review of the state of the art in the literature, the opinions of the RSB and the opinions of stakeholders, from which we gather the common elements that form our framework. Section III presents the proposed framework resulting from our analysis, the selected indicators and, ultimately, the analysis of the dataset. We also present the findings of our analysis and a discussion of these findings. Finally, in Section IV, we draw our conclusions.

II. Literature and conceptual framework

An objective, universal framework to assess the quality of EU IAs should allow us to analyse both their content and their process. It is our intention in this paper to attempt to build such a framework, or at least pave the way towards one. We move away from the Better Regulation Guidelines for two reasons: (1) they provide guidance for the IAs of the EU in particular, whereas our intention is to create a universally applicable framework; and (2) through the stocktaking exercises and the reports of the Court of Auditors, we observe that the Toolbox is not flawless. Therefore, by using it only as a starting point, we might be able to come up with valuable improvements.

1. A critical juncture for an objective, universal framework

Our proposed IA framework is set at the intersection of three dimensions:

  (1) The conclusions of the stocktaking exercises, which identify important issues for stakeholders. The stocktaking exercises are surveys carried out periodically by the EU institutions with the purpose of understanding the progress and effectiveness of the Better Regulation agenda – and of IA implicitly – for the sake of improvement. The exercises indicate the Commission's commitment to oversight and were employed by the Juncker Commission to assess how the procedures under the Better Regulation agenda work in practice and to identify areas of improvement. The exercise consisted of an online questionnaire, available in twenty-three languages, which participants could answer over a period of three months. Both citizens and individuals representing legal entities were among the 626 entities that responded. Besides the public consultation, the Commission staff and the other institutions were also consulted. The areas of improvement identified in the answers were:

    • Transparency of the process (not of its content), meaning that respondents were not aware of the opportunities to participate in the Better Regulation agenda;

    • Inclusion of a broader range of impacts and their quantification (including individual impacts);

    • Analysis of impacts and the availability of methods;

    • Subsidiarity issues.

  (2) The opinions and annual reports of the RSB.Footnote 12 The RSB is a (quasi-)independent body tasked, amongst other things, with ensuring quality control of and support for IAs. Namely, every IA needs to pass an RSB appraisal before moving to the next stage, which triggers a written, published opinion that, if positive, can accompany the draft initiative resulting from the IA. Opinions can be positive, positive with reservation or negative. If a negative opinion is issued, the IA needs to be improved and resubmitted, at most twice. The board consists (at the time of writing) of nine members: a Commission director-general who serves as chair of the board, four high-level Commission officials and four experts recruited from outside the Commission. Thus, the balance tips in the direction of the Commission insiders (5:4), raising several issues surrounding the perceived impartiality of the Board. Each year, the RSB publishes an annual report in which it summarises its work of the past year and provides a quality assessment of the submitted IA reports, offering a bird's-eye view of the issues it deems relevant, as well as of strong points and weaknesses in the application of the IAs. When examining the RSB's annual reports, the issues that stand out concern (1) the use of evidence by the IA, (2) the transparency of the process, (3) the selection and quantification of impacts, (4) the assessment methods and (5) subsidiarity concerns. These issues can also be identified in the stocktaking exercises and in the academic literature, and they will serve as the general clusters in our framework.

  (3) The scholarly literature on IA quality. In the literature, IAs are viewed as tools with manifold uses, designed and implemented in four different ways, which are reflected in our framework: (1) as political tools; (2) as instrumental tools supporting the need for intervention; (3) as communicative tools – Torriti claimed that the IA works better as a communicative rather than an informative toolFootnote 13; and (4) merely as box-ticking exercises.

Looking at the different salient topics in these three realms, we can safely state that there is no one-size-fits-all approach to the design, drafting or uses of IAs. Starting from this assumption, we base our framework on the idea that diversity in interpreting and applying IA guidelines is not only acceptable but also necessary for tailoring IAs to the needs of the services that use them.Footnote 14 Thus, our intention is to avoid absolute definitions of correctness.

2. Use of evidence

In the context of IA, two areas of particular significance are the correct (and complete) identification of impacts and the use of evidence in quantifying those impacts.Footnote 15 The guidelines specifically require that all impacts undergo a quantitative assessment where possible, as well as a monetisation of impacts or a justification where this is impossible. Without the proper use of evidence, such practices are not only difficult to perform but also lack transparency. From the RSB reports, we can identify a tendency to include in the IAs those impacts that are easily quantifiable whilst avoiding those that are more difficult to quantify. However, even though the easily quantifiable impacts are indeed included most frequently in IAs, they are also the ones that feature most in the RSB opinions. Thus, it is interesting to examine the quantification trends over the years. Another issue of great concern is the choice of policy options. The IA forces the analyst to think "outside the box" by requiring them to consider several policy options for their problem. Thus, the inclusion of several alternatives, with a corresponding evidence-based justification for choosing a preferred one, is crucial for ensuring transparency and proving that the chosen policy option is not one imposed politically. Furthermore, once several policy options are identified, whether in consultation with the public or through internal agreements, robust reasoning needs to be provided when an option is discarded.

3. Transparency of the process and content

Some authors have argued that the purpose of the IA, in reality, is not necessarily to find the “most appropriate” policy option but to explain the process of arriving at that option.Footnote 16 It might be that the actual best policy option does exist but is not listed. Our understanding is that it is more important to be transparent with respect to the decision on what to include in the IA. In fact, according to the stocktaking exercise, 54% of the respondents believed that, amongst all topics covered by IAs, transparency is most in need of improvement.

Consequently, it is crucial to ensure the inclusion of stakeholders' views. According to the results of the stocktaking exercise, the area still most in need of improvement is public consultation (40% of respondents), and approximately half of the respondents were unhappy with the way stakeholders are involved in the decision-making process. This belief can also be found in the academic literature, which states that the relationship between knowledge and policymaking can only be reinforced if this stage is open to external stakeholders.Footnote 17 Therefore, it is essential to examine all of the processes associated with ensuring transparency, including the need to intervene and the views of all parties relevant to the issue at stake.

4. Selection and quantification of impacts

IAs have encountered criticism in the academic literature regarding the neglect of non-economic impacts in their analyses.Footnote 18 We believe it is important always to assess the extent to which such impacts are integrated into the IA. Survey participants in the stocktaking exercise have also called for increased consideration of impacts beyond the economic sphere, such as environmental and health impacts, consumer impacts and impacts on fundamental rights and equality.Footnote 19 More than a third of the respondents were unhappy with the disclosures on how policy alternatives were being identified or chosen, with only about 5% being very satisfied with this. The RSB also noted that a justification for the lack of relevance of impacts is rarely provided. Therefore, in constructing our framework, we will analyse the extent to which economic and non-economic impacts are integrated into IAs.

5. Use of assessment methods in Impact Assessments

Another discrepancy that has been singled out surrounds the assessment methods of the IAs. Traditionally, the method used most frequently in IAs is cost–benefit analysis (CBA). In a CBA, the focus is on the monetised costs and benefits at the societal level.Footnote 20 Other methods, which have been employed more often in recent times, take a more fragmented approach (ie assessing administrative burdens for businesses).Footnote 21 The scholarly literature has found that the most commonly used methods were mostly qualitative and conceptually simple and that more advanced tools were seldom used; when they were used, they presented significant uncertainties that were too risky for politicians to act upon.Footnote 22 As policy agendas become increasingly diverse and policies increasingly cohesive, IAs need to cover a broader range of impacts across wider groups of stakeholders, with some authors advocating the integration of non-economic impacts, which would require the use of non-monetary assessment methods.Footnote 23 Moreover, an IA is, by definition (ex ante), a process based on forecasts.Footnote 24 For this reason, CBA has the potential to reduce forecasting errors,Footnote 25 but such an analysis becomes complicated when including impacts that are impossible to monetise. Thus, it is crucial also to include a sensitivity analysis that allows for testing how such uncertain parameters might affect the final results, as illustrated in the sketch below. Therefore, the use of different methods is pivotal to ensuring the transparency of the decision-making process.
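To make the role of such a sensitivity analysis concrete, the following minimal sketch (in Python, with entirely hypothetical cost and benefit figures rather than data from any actual IA) propagates an uncertain, hard-to-monetise parameter through a simple net-benefit calculation via Monte Carlo simulation:

import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000  # number of Monte Carlo draws

# Hypothetical parameters (EUR millions): compliance costs are fairly
# well understood, whereas the benefits hinge on an uptake rate that
# cannot be pinned down ex ante.
costs = rng.normal(120, 10, size=n)      # cost estimate with a known spread
uptake = rng.uniform(0.4, 0.9, size=n)   # deliberately wide uncertainty
benefits = 250 * uptake                  # benefits scale with uptake

net = benefits - costs
print(f"mean net benefit: {net.mean():.1f} MEUR")
print(f"probability of a negative net benefit: {(net < 0).mean():.1%}")

Reporting the probability of a negative net benefit, rather than a single point estimate, makes the consequences of the uncertain parameter visible to decision-makers.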

6. Quality concerns beyond Impact Assessment content: procedural issues and subsidiarity concerns

Another issue that has been discussed less in the literature but appears in both the stocktaking exercises and the RSB reports is subsidiarity. Whereas stakeholders are, overall, pleased with how the EU addresses subsidiarity, they nonetheless suggest that the added value of the EU should be proven through an in-depth, evidence-based analysis and justified by reasons that go beyond harmonisation alone. Accordingly, one topic of interest in our framework will be the EU's added value and how it is considered in the IAs. Although not explicitly discussed in the literature, we believe that the number of RSB comments expressed in the first and second readings might also be a good indicator of the quality of both the content and the process of the IA.

By considering these discrepancies in the development of IAs, we allow deviations from the Guidelines and the Toolbox as a natural process stemming from the different uses of IAs, the varying actors involved in them and the inherent uncertainty that accompanies every analysis. Thus, our framework avoids labelling the IAs as being of good or poor quality and focuses on the mere presence of the identified items. Complete lists of the general indicators to be included in our framework can be found in Tables 1 and 2.

Table 1. Proposed framework for the assessment of Impact Assessments (IAs) and summary statistics.

DG = Directorate-General; EU = European Union; NA = not applicable; OPC = open public consultation; RSB = Regulatory Scrutiny Board.

Table 2. General indicators used to describe the selected Impact Assessments.

SMART = Specific, Measurable, Achievable, Relevant and Timely.

III. Empirical analysis

1. Construction of the proposed framework

To compile the long list of indicators in our framework, three sources were consulted: the academic literature on IA quality, the stocktaking exercises and the opinions of the RSB. To avoid bias, we only retained the indicators found at the intersection of at least two of these sources (a rule sketched below). The complete list of indicators that resulted from our analysis of the three dimensions is presented in Table 1. To validate our framework, we randomly selected 100 IAs published in the period 2016–2021 and assessed them against the framework – the summary statistics for the selected IAs are presented below, with the corresponding dimensions. The detailed analysis of these 100 IAs is presented in Section III.2. No conscious attempt was made to ensure diversity or homogeneity in the topics addressed or the drafting DGs. This choice is motivated by the fact that DGs draft IAs on an ad hoc basis, whenever one is needed, so neither the topics covered nor the drafting DGs follow a fixed distribution.
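As an illustration of the retention rule, the following sketch (with invented indicator names standing in for the actual shortlists behind Table 1) keeps only the candidate indicators named by at least two of the three sources:

from collections import Counter

# Hypothetical candidate shortlists per source; the real lists were
# derived from the literature, the stocktaking exercises and the RSB
# opinions.
literature = {"evidence_use", "option_justification", "sensitivity_analysis"}
stocktaking = {"evidence_use", "opc_carried_out", "impact_quantification"}
rsb = {"evidence_use", "option_justification", "impact_quantification"}

counts = Counter()
for source in (literature, stocktaking, rsb):
    counts.update(source)  # each source contributes each indicator once

# Retention rule: keep indicators found in at least two sources.
retained = sorted(name for name, n in counts.items() if n >= 2)
print(retained)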

The variables used for the description of the IAs are listed in Table 2, but they are not necessarily part of the proposed framework, as they are only used to describe the typology of the IA and not its content. The scope of the IAs is diverse: 17% of the IAs cover a broad topic and result in a rather broadly identified policy option, whereas 83% are more narrowly defined, focusing on specific topics. The average number of problems defined is 3.31, ranging from no problems defined (in five cases) to seven problems defined. The reasoning for intervention is defined in terms of regulatory failures (61% of cases), market failures (negative external effects in 38% of cases, information asymmetry in 35% of cases and imperfect competition in 18% of cases), biased behaviours in 36% of the IAs and equity considerations in 22% of the IAs. The precautionary principle was used in 18% of the cases as a reason for intervention.

Based on our framework, we provide a descriptive analysis of our IA dataset. We carried out a statistical analysis using Stata software, in which we coded all of the labels above to provide a more structured view (generating a total of 144 variables).Footnote 26
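The recoding described in note 26 can be sketched as follows. For illustration we use Python's pandas rather than Stata, with invented labels and column names; the sketch shows only the dummy-variable expansion, not the authors' actual code:

import pandas as pd

# Toy stand-in for the coded dataset: one row per IA, with the
# criteria for discarding options stored as a single delimited label.
df = pd.DataFrame({
    "ia_id": [1, 2, 3],
    "discard_criteria": [
        "effectiveness;proportionality",
        "feasibility",
        "effectiveness;stakeholder_support",
    ],
})

# Expand the multi-label field into one 0/1 dummy per discard reason,
# mirroring the transformation into separate dummy variables.
dummies = df["discard_criteria"].str.get_dummies(sep=";")
coded = pd.concat([df.drop(columns="discard_criteria"), dummies], axis=1)
print(coded)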

2. Application of the proposed framework: analysis of 100 European Union Impact Assessments

a. Evidence-based choices

The results of our analysis, presented in the Annexes,Footnote 27 show that 84% of IAs proposed a minimum of two policy approaches and 79% considered different policy instruments. Most DGs fare well in terms of defining policy approaches and instruments, with no clear winners observable. When assessing the policy options, costs are quantified for all options in 65% of the IAs, whereas the remaining IAs quantify the costs only for the preferred policy option (18% of the cases) or not at all (17% of the cases). We see that DG ENER fares exceptionally well in this endeavour, with a 100% rate of quantified costs. Other DGs, such as CONNECT and EMPL, also show good practice. Across the years, 2017 and 2021 appear to have seen increased quantification of the costs of the identified impacts. As for the benefits, we observe a tendency to assess benefits qualitatively in 36% of the IAs, quantitatively in 56% of the IAs and both quantitatively and qualitatively in 8% of the IAs, indicating a promising outlook. DGs CLIMA, CONNECT and SANTE appear to be the frontrunners, quantifying the benefits in all of the examined IAs, whereas across the years benefits were quantified most in 2019. Some 6% of the IAs compare the costs and the benefits only for the preferred option, and 40% compare them for all options, indicating a valid comparison of the policy options across all identified impacts. Amongst the DGs, EMPL distinguishes itself in comparing benefits to costs for all policy options identified. More than half of the IAs (54%) do not attempt such a weighing of the impacts. Even so, we have observed that 61% of the IAs provide a competent, informed and verified justification for choosing a preferred policy option, with SANTE as a frontrunner. Given this lack of comparisons, the justification for choosing a specific policy option rests on other criteria, such as coherence or effectiveness. We have looked not only into the retained policy options but also into the discarded ones. The most common reasons for discarding policy options are effectiveness issues (40% of cases); proportionality (27% of cases); technical, legal or political feasibility (25% of cases); necessity (22% of cases); lack of stakeholder support (20% of cases); and negative impacts in other areas (18% of cases). Less common, but still used in IAs, are the limited scope of the option, efficiency, coherence, legal or subsidiarity considerations, redundancy and a lack of added value. Overall, in making evidence-based choices, there is variation in the use of evidence (Table A1). We observe particularly good practices in terms of the use of evidence in DGs AGRI, COMP, CONNECT, EMPL, ENER, ENVI, MOVE, SANTE and TRADE. Amongst these, it is worth distinguishing the DGs from which we analysed more than one IA (to rule out coincidence): DGs CONNECT, EMPL, ENER and MOVE. Table A2 presents the use of evidence split by year: in 2016, we notice quite a low uptake of the quantification of benefits but an excellent record of considering a broad range of instruments and policy options to solve the problems. The use of evidence seems to have taken a turn for the worse in 2020, with great variation amongst the indicators used, possibly due to the effects of the COVID-19 pandemic.

b. Transparency concerns within the Impact Assessment content

Zooming in and seeking a potential indicator of the good use of evidence in the IAs, we noticed that DGs CONNECT, EMPL, ENER and MOVE (especially EMPL and ENER) scored particularly well on the consultation of stakeholders. This could explain the good practice of embedding evidence-based claims in these IAs. We observe that 64% of the IAs provide a complete list of all of the parties that might be affected by the initiative; 22% provided an incomplete list and 14% did not provide such a list at all. Based on this list of potentially affected parties, the involved DGs need to carry out open public consultations (OPCs). Although an OPC is mandatory, only 93% of IAs carried one out.Footnote 28 Amongst these, nine IAs did not disclose the number of responses they had received and five did not provide a detailed analysis of the responses.

In the cross-DG examination, we observe several frontrunners with respect to transparency: DGs CONNECT, ENVI, FISMA, GROW, HOME, JUST, MOVE and TAXUD. Over the years, 2019 appears again as an overall frontrunner, but only in relative terms; upon closer examination, we notice a (relatively) poor performance of the IAs in reporting back the results of the stakeholder consultations in all years except 2018. Otherwise, we observe good practices in listing the affected parties in 2016, 2017 and 2020, with OPCs occurring in all of these years. As a preliminary conclusion, we can state that the IAs are grounded in proper consultation and are sufficiently transparent with respect to the procedural steps.

c. Selection and quantification of impacts

The impacts of the policy options serve as criteria for assessment and comparison. The omission or inclusion of some impacts can change the conclusion of the IA altogether. Thus, we also explore the types of impacts retained in the analysis and the thoroughness with which they are examined. We find that economic impacts, for example, are considered for all retained options in 93% of the IAs and for the preferred option only in 4% of cases, and they are not considered at all in 3% of cases. Some 69% of the IAs also analyse societal and/or environmental impacts when relevant. However, the fact that 31% of the IAs omit such impacts cannot be overlooked, as these IAs do not provide an explanation for the omission either. Amongst the economic impacts, we observe that administrative burden and compliance costs are considered in more than 90% of IAs, whereas indirect impacts are mentioned in 84% of cases and intangible impacts in 77% of the IAs. The administrative burden is merely mentioned in 47% of cases, quantified in 3% of cases and monetised in 44% of cases. Of the 77% of the IAs that include intangible impacts, 36% mention just one potential intangible impact, 33% mention two, 22% mention three and just 8% mention four or more. The most predominant of these are fundamental rights (14% of IAs), safety (13% of IAs), protection (consumer protection, investor protection, etc.), transparency and trust (11% of IAs) and access to or availability of information (10% of IAs).

The stakeholders for which these impacts have been considered are mentioned in 29% of cases. The impact on Member States is monetised in 46% of cases, quantified in 8% of cases and only mentioned in 42% of cases. It is not considered at all in 4% of the IAs.

Overall, we observe a tendency to routinely quantify the economic impacts for all policy options. We observe good practice in DGs ENER, GROW and MOVE (in which ten out of ten, eight out of eleven and seven out of ten IAs, respectively, quantify economic impacts for all policy options), closely followed by DGs JUST and EMPL. Amongst the economic impacts, the administrative burden is monetised most often. The compliance costs are the impacts quantified by most DGs; this could be due to the ease with which these impacts are quantified and the relative accessibility of relevant data. The impact on the affected stakeholders is assessed quantitatively only by DGs GROW and HOME, and it is monetised in many cases in DGs ENER and MOVE, amongst others. What is surprising is the practice of most DGs of quantifying the indirect impacts, but also of integrating intangible impacts into their assessments, even where these are not part of the formal analyses. We observe that DGs EMPL, GROW and HOME address intangible impacts in all of the IAs included in the study. Of course, we might assume that these are DGs that deal with issues whose potential impacts go beyond economic and monetary ones; indeed, the one DG that does not include any intangible impacts in its analyses (ie DG COMP) deals with issues of a more economic nature, which might simply have no non-monetary implications. Oddly, DG EAC, which deals with educational issues, does not include any intangible impacts in the analysed IAs. As for the inclusion of social and environmental impacts, the DGs that include such impacts most consistently are, unsurprisingly, DGs CLIMA, ENER, HOME, GROW and MOVE, which routinely deal with social and/or environmental issues. Over the years, we see progressive trends of increasing quantification of relevant impacts for all policy options and increasing integration of more types of impacts, with almost perfect quantification practices observed in 2020.

d. Assessment methods

CBA is the method used most predominantly (23% of cases), followed by the standard cost model (SCM; 18% of cases). Multi-criteria analysis (MCA) methods are used in 11% of the IAs and cost–effectiveness analysis (CEA) is used in 7% of cases. We also analysed whether the chosen method was applied correctly and observed that, in almost a third of the IAs (28%), the methods were not used properly. In 39% of the cases in which one of the methods listed above was used, it was carried out correctly. Zooming in, where CBA was used, it was carried out properly in just 9% of the IAs. MCA was performed erroneously in 55% of cases and CEA was performed incorrectly in 71% of cases, whereas the use of SCM appears correct in 66% of cases. We examined the assessment methods especially for cases in which intangible impacts were identified (69% of cases) in order to identify trends in the usage of methods other than CBA. MCA – arguably the go-to method for such cases – was used in only 13.5% of these cases, and it was performed well in half of them.

Across the DGs, we observe that CBA is used most often in DGs HOME, ENER, GROW and MOVE. SCM is most commonly used in DGs ENER, GROW and JUST, whereas MCA is most commonly used by DG JUST, followed by DGs CONNECT and EMPL. The reason for the increased use of MCA in these DGs might be the cross-dimensional nature of the issues they tackle and the variation in the types of impacts considered, which might pose difficulties for quantification if other methods were used. DGs FISMA, CLIMA, GROW and MOVE make use of variations of the CEA. As for the appropriate use of the employed method, DGs ENER, JUST and GROW apply their chosen methods correctly to the greatest extent, regardless of the method used. However, correct use of the methods can be observed in other DGs as well, such as DGs CONNECT, MOVE and HOME.

e. Subsidiarity concerns

The subsidiarity test is also integrated quite well into the IAs. Some 64% of the IAs limit the scope of the initiative to those aspects that Member States cannot achieve by themselves. However, EU added value has only been properly considered in a few IAs. Most commonly, it is justified by means of economies of scale, whilst leaving aside any democratic considerations. We observe particularly good practice in DGs MOVE and ENER – this is unsurprising considering the cross-border issues that they deal with on a daily basis. It appears that 2018 was the year in which this practice showed the most progress. Strikingly, in 2020, there was a lack of consideration of whether the Member States could achieve the set goals by themselves, which might again be explained by the urgency associated with the COVID-19 pandemic.

f. Quality concerns beyond Impact Assessment content: procedural issues

The RSB comments provide insights into the quality of the IA process as well. Amongst the IAs for which RSB opinions are available, 64% passed the first RSB assessment. In these assessments, the RSB provided between three and twenty-eight comments, with an average of 12.6 comments. The average number of RSB comments provided for the remaining thirty-three IAs that did not pass the initial RSB assessment (and for which we have information) is 8.4 (ranging from two to fifteen comments). The correlation between IA quality and whether the IA passed the first RSB reading is negative (and quite weak, at –0.289), confirming our starting assumption that the RSB opinion does not provide a global reflection of the quality of the IA (it does, nonetheless, report on the quality of the IA against the Better Regulation Guidelines).
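Since passing the first reading is a binary variable and the quality score is continuous, the reported figure is a point-biserial correlation, which coincides with Pearson's r when one variable is binary. A minimal sketch, with made-up values in place of the actual dataset:

import numpy as np
from scipy import stats

# Made-up data: 1 = passed the first RSB reading, 0 = did not;
# quality is a composite score of the kind described in Section III.2.g.
passed_first = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
quality = np.array([0.55, 0.48, 0.71, 0.50, 0.80, 0.66,
                    0.52, 0.59, 0.74, 0.45])

# With one binary variable, the point-biserial r equals Pearson's r.
r, p = stats.pointbiserialr(passed_first, quality)
print(f"r = {r:.3f} (p = {p:.3f})")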

g. Variation in the application of the Impact Assessment process

So far, we have observed variability in the interpretation and application of the IA process across the examined IAs, but also across the different DGs and across the years. Table A1 shows the spread of a newly created variable that aggregates all of the elements discussed in the preceding analyses: the average quality of the IAs examined. We observe an above-average overall quality for most DGs. Over the years, 2019 seems to be the year with the best-quality IAs. Given a progressive trend in quality and expertise, we would have expected the quality of IAs in 2020 to surpass that of those developed in 2019. However, we can observe that this is not the case, possibly due to the COVID-19 pandemic. Nonetheless, a slight improvement in 2021 may hold promise for the coming years.
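The composite variable can be read as the share of framework items present in each IA, averaged per DG or per year. A sketch under that assumption, with invented indicator values:

import pandas as pd

# Invented indicator matrix: one row per IA, 0/1 = item absent/present.
data = pd.DataFrame({
    "dg":   ["ENER", "MOVE", "ENER", "JUST", "MOVE"],
    "year": [2019, 2019, 2020, 2021, 2020],
    "i1":   [1, 1, 0, 1, 0],  # eg costs quantified for all options
    "i2":   [1, 0, 1, 1, 0],  # eg OPC carried out and reported
    "i3":   [1, 1, 0, 1, 1],  # eg EU added value justified
})

items = ["i1", "i2", "i3"]
data["avg_quality"] = data[items].mean(axis=1)  # share of items present

# Spread of the composite across DGs and years (cf. Table A1).
print(data.groupby("dg")["avg_quality"].mean())
print(data.groupby("year")["avg_quality"].mean())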

IV. Conclusion

This paper explored the possibility of developing a theoretical framework to assess the quality of IAs in response to the existing criticism concerning the impartiality of the RSB. We also aimed to provide a framework that moves away from extant frameworks based on composite indicators and that can instead be used as a tool to help explain where the challenges lie in drafting IAs, rather than merely assessing quality. We expected that, if such a theoretical framework could be built objectively, it would lead to more clarity amongst policymakers by highlighting the topics demanding the most attention and identifying the trade-offs that are routinely made in drafting these IAs.

Our starting assumption places IAs as ambassadors of good regulation. We have discussed the procedural issues that surround the concept of a high-quality IA, and we laid the foundations of our framework at the intersection of three conversations on IA quality: the academic literature on this concept, the results of the stocktaking exercises and the opinions of the RSB.

We found that diversity in the interpretation and application of the guidelines is not only acceptable but also necessary for tailoring the IAs to the needs of society and the responsible services. The common elements can be clustered as follows: (1) the use of evidence by the IA; (2) the transparency of the process; (3) the selection and quantification of impacts; (4) the assessment methods; and (5) subsidiarity concerns.

After creating an IA quality assessment framework, our second objective was to assess the EU IAs against the framework and to examine the application of the IA process. To this end, we randomly selected 100 IAs published in the period 2016–2021 and assessed them against our framework. Our 100 IAs were examined against thirty-five general indicators and multiple specific ones (ie the specific types of intangible impacts considered, or the method of assessment – a general indicator that was split into multiple indicators, such as CBA or MCA). We found that the claims made in the IAs are backed by sufficient evidence and that the overall process is transparent. The impacts were quantified appropriately in most IAs and for most relevant impacts. Via statistical analysis, we identified a lack of comparisons of costs and benefits for the retained policy options. However, the choices are overall justified by other factors such as coherence or effectiveness. With regard to transparency, the examined IAs are grounded in proper consultation and are sufficiently transparent with respect to the procedural steps undertaken. The methods used consistently are CBA, MCA, CEA and SCM, but these are used appropriately in fewer than half of the examined IAs. As for the impacts integrated into such analyses, as expected, we noted that economic impacts are most prevalent, and they are analysed for all policy options (as opposed to only for the preferred policy option) in most IAs. Amongst the economic impacts, the administrative burden is analysed most often, closely followed by compliance costs. Although most IAs identify social or environmental impacts, not all of them analyse these, and hardly any of those that omit them provide a justification for doing so. Intangible impacts are identified in almost all IAs, but they are examined in hardly any of them. Subsidiarity is mostly applied correctly from a general point of view, but trade-offs between EU added value and democratic considerations are never included in the analyses. We also observed variability in the interpretation and application of the IA process across the DGs. Furthermore, we observed a progressive increase in the quality of the application of the IA process over the years.

Our findings are relevant because the development of a universal framework will, we believe, normalise variability in the application of any EU guidelines, which arises from the inherent nature of IAs developed for and with different purposes. Moreover, the results of our review have multiple applications. For EU civil servants, they provide a list of issues that need to be considered when developing IAs. They might raise red flags as to what the most salient topics are and increase awareness of the possibility of staying flexible and adapting the process to their own needs and priorities. Moreover, we hope our findings will serve as a learning tool that helps us to understand where more resources should be allocated. For researchers, we hope our results will inspire others to build on them with any elements that they deem objectively relevant. For the public, the results provide a blueprint for objectively assessing the processes through which decisions are made, enabling greater accountability. Our intention is to keep developing this framework based on the latest findings from the literature and practice and to keep including more IAs (possibly also those published by entities other than the EU) in order to strengthen the reliability of our findings.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/err.2023.83.

Financial support

The research is funded by Hasselt University via a doctoral scholarship.

Competing interests

The authors declare none.

References

1 European Commission, “Better Regulation Guidelines” (2021).

2 C Dunlop, O Fritsch and C Radaelli, “The Appraisal of Policy Appraisal: Learning About the Quality of Impact Assessment” (2015) 149(1) Revue française d’administration publique 163–78.

3 C Cecot, R Hahn, A Renda and L Schrefler, “An Evaluation of the Quality of Impact Assessment in the European Union with Lessons for the US and the EU” (2008) 2 Regulation & Governance 405–24; C Adelle and S Weiland, “Policy Assessment: The State of the Art” (2012) 30(1) Impact Assessment and Project Appraisal 25–33.

4 To add to this, the checklist used by the RSB is not public, and it is therefore unclear what precisely is assessed in the IAs. Although this is not important for the drafting services that receive recommendations based on this checklist, for an external stakeholder it could be helpful to know precisely which elements are being looked at.

5 A Alemanno, “How Much Better Is Better Regulation? Assessing the Impact of the Better Regulation Package on the European Union – A Research Agenda” (2015) 6 European Journal of Risk Regulation 344.

6 C Radaelli, “Regulatory Indicators in the European Union and the Organization for Economic Cooperation and Development: Performance Assessment, Organizational Processes, and Learning” (2020) 35(3) Public Policy and Administration 227–46.

7 See discussions provided in the following: S Smismans, "Policy Evaluation in the EU: The Challenges of Linking Ex Ante and Ex Post Appraisal" (2015) 6(1) European Journal of Risk Regulation 6–26; J Torriti, "The Unsustainable Rationality of Impact Assessment" (2011) 31 European Journal of Law and Economics 307–20; Radaelli, supra, note 6.

8 E Golberg, “‘Better Regulation’: European Union Style” (2018) Harvard Kennedy School, Mossavar-Rahmani Center for Business and Government, M-RCBG Associate Working Paper Series.

9 C Dunlop and C Radaelli, Overcoming Illusions of Control: How to Nudge and Teach Regulatory Humility (London, Bloomsbury 2015).

10 R Bull and J Ellig, “Judicial Review of Regulatory Impact Analysis: Why Not the Best?” (2017) 69(4) Administrative Law Review 725–840.

11 C Cecot, RW Hahn and A Renda, “A Statistical Analysis of the Quality of Impact Assessment in the European Union” (Working Paper 07-09, AEI-Brookings Joint Center, 2007); Dunlop and Radaelli, supra, note 9.

12 Senninger and Blom-Hansen find that the RSB takes its role very seriously, providing, on average, five major change requests, with a rejection rate of 39% at the first opinion and 7% at the second opinion; R Senninger and J Blom-Hansen, "Meet the Critics: Analyzing the EU Commission's Regulatory Scrutiny Board through Quantitative Text Analysis" (2021) 15(4) Regulation & Governance 1436–53.

13 Torriti, supra, note 7.

14 Adelle and Weiland, supra, note 3.

15 Cecot et al, supra, note 3.

16 C Radaelli and A Meuwese, “Better Regulation in Europe: Between Public Management and Regulatory Reform” (2009) 87(3) Public Administration 639–54; J Turnpenny, C Radaelli, A Jordan and K Jacob, “The Policy and Politics of Policy Appraisal: Emerging Trends and New Directions” (2009) 16(4) Journal of European Public Policy 640–53.

17 E Bozzini and S Smismans, “More Inclusive European Governance through Impact Assessments?” (2016) 14(1) Comparative European Politics 89–106.

18 P Carroll, “Does Regulatory Impact Assessment Lead to Better Policy?” (2010) 29(2) Policy and Society 113–22; G Listorti, E Basyte-Ferrari, S Acs and P Smits, “Towards an Evidence-Based and Integrated Policy Cycle in the EU: A Review of the Debate on the Better Regulation Agenda” (2020) 58(6) Journal of Common Market Studies 1558–77.

19 28.7% thought that social and environmental impacts are considered sufficiently, whereas 31.5% thought that they are only partially considered. The remaining respondents – approximately 10% – had no opinion on the matter or were unsure.

20 A Boardman, D Greenberg, A Vining and D Weimer, Cost–Benefit Analysis: Concepts and Practice (Cambridge, Cambridge University Press 2017).

21 Adelle and Weiland, supra, note 3.

22 M Nilsson, "The Role of Assessments and Institutions for Policy Learning: A Study on Swedish Climate and Nuclear Policy Formation" (2006) 38 Policy Sciences 225–49; J Hammes, "The Influence of Individual Characteristics and Institutional Norms on Bureaucrats' Use of Cost–Benefit Analysis: A Choice Experiment" (2020) 12(2) Journal of Benefit-Cost Analysis 258–86.

23 Cecot et al, supra, note 3.

24 C Kirkpatrick and D Parker, Regulatory Impact Assessment: Towards Better Regulation? (Cheltenham, Edward Elgar Publishing 2007).

25 C Sunstein, “Does the Clear and Present Danger Test Survive Cost–Benefit Analysis?” (2019) 104 Cornell Law Review 1775–98.

26 By recoding the variables, we transformed, for example, our variable criteria for discarding policy options into fifteen new dummy variables detailing each reason for discarding options.

27 These tables are very extensive and, for the purpose of easing the flow of reading, we decided to move them to the Annexes.

28 The lack of a public consultation can be justified by urgent issues, exceptional situations (such as the COVID-19 pandemic) or requested exemptions, and it is thus not necessarily an indicator of poor quality.
