Before our motivation and theoretical choices hit the road of empirical analysis, we must address the first practical challenge of our research programme: the search for a measurement approach that allows us to map the design of rulemaking instruments within our population. Furthermore, the measurement of design needs to be sufficiently granular to inspect those aspects of policy instruments that can be related to our research questions and the conversations introduced in Chapter 1. In short, we need a theoretically motivated approach that can be applied to all instruments and countries, allowing for meaningful comparisons.

Yet, we are not looking for just another set of regulatory indicators. As we have made clear, the data extend beyond the toolbox of ‘better regulation’, covering instruments not usually associated with regulatory indicators. However, our approach is not limited to expanding the size of the toolbox. The key advancement we propose rests on theoretical and methodological grounds. Fundamentally, this step is inspired by theory, rather than by the idiosyncrasies of the instruments themselves. All instruments, hence, are approached as action situations. Furthermore, the design of each policy instrument in each country is thought to be structurally constituted by specific rules-in-form that shape the action situation. As a result, each instrument is approached with the same theory-driven yardstick (the rule types) in each country. This reduces idiosyncratic effects and enables high comparability across instruments.

We further approach the four instruments as nested action situations, in the sense that they are conjoined: the action situations can overlap; they can trigger mechanisms that reinforce or cancel each other; and one action situation may be a functional equivalent of another. Together, the four nested action situations feed into the macro action situation we call rulemaking. In other words, the four action situations configured by the instruments are nested within the broader rulemaking action situation. In brief, rulemaking instruments do not work in isolation: they may complement or substitute each other and, in combination, lead to different governance outcomes.

Once our approach uncovers the granular design features of each instrument, we employ a popular (and largely exploratory) dimensionality reduction technique, Principal Component Analysis (PCA), to detect key dimensions of variation. We then use these dimensions of variation (i.e. the principal components) and their scores to derive and calibrate conditions that are suitable for Qualitative Comparative Analysis (QCA). The goal of our QCA, as a configurative approach, is to associate combinations of instruments’ design features with the occurrence of the outcomes of interest in the countries under research.

In short, we are carrying out a double exercise in parsimony and synthesis: first, we reduce the complexity of the rule types’ variability to components; then, we transform components into conditions suitable for QCA. This happens through so-called fuzzy values, which indicate how far a given case can be attributed to the concepts that are constitutive of our conditions. Fuzzy values are defined through a process called ‘calibration’ in QCA terminology. These conditions are then related to the three outcomes of ease of doing business, corruption, and environmental performance, which are also based on fuzzy values.
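To illustrate the calibration step, the sketch below applies the direct method commonly used in fuzzy-set QCA, in which raw scores are mapped to membership values through a logistic transformation anchored at three qualitative thresholds. This is a minimal illustration, not the book’s actual procedure, and the anchor values are placeholders.

```python
import numpy as np

def calibrate(raw, full_out, crossover, full_in):
    """Direct calibration of raw scores into fuzzy-set membership values.

    Uses the logistic transformation common in fuzzy-set QCA: scores at the
    crossover anchor map to 0.5, scores at the inclusion/exclusion anchors
    map to roughly 0.95 / 0.05. Anchor values here are placeholders, not
    the thresholds used in the book.
    """
    raw = np.asarray(raw, dtype=float)
    # Scale deviations from the crossover so that the anchors land at
    # log-odds of +3 and -3 (membership ~0.95 and ~0.05 respectively).
    log_odds = np.where(
        raw >= crossover,
        3.0 * (raw - crossover) / (full_in - crossover),
        3.0 * (raw - crossover) / (crossover - full_out),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: turn standardized component scores (0-1) into a fuzzy condition.
scores = np.array([0.10, 0.45, 0.55, 0.90])
print(np.round(calibrate(scores, full_out=0.2, crossover=0.5, full_in=0.8), 3))
```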

Fundamentally, this account of measurement addresses four empirical questions.

1. How do the four action situations differ in terms of the seven rule types?

2. What are the rules that best explain the variability in our population?

3. How do countries align in relation to the components that best explain variation?

4. How can we assign synthetic values to each country by instrument?

To answer these, we consider the twenty-eight countries as a single population containing a number of rules. The total number is found empirically. Each rule is classified as belonging to one of the seven rule types. Empirically, each rule type may or may not be present in our population. Incidentally, the lack of a rule type is by itself informative of how the design of an action situation may be incomplete. Now, to the details.1

Over the years, international organizations (IOs) such as the World Bank and the Organisation for Economic Co-operation and Development (OECD) have provided cross-country regulatory performance measures (Johns and Saltane 2016; OECD 2019, 2021). These indicators of rulemaking are mainly descriptive, not theory-informed. They take stock of instrument-specific aspects of consultation and impact assessment across countries (note that the roles of freedom of information and the ombudsman in rulemaking are not considered). The raw data stem from different sources. The World Bank mainly relies on country officers, country experts, and academics. The OECD iReg dataset2 is generated by a process of interaction between the member states’ delegates in the Regulatory Policy Committee (RPC) and the OECD RPC Secretariat (see Radaelli 2020 for a description). Both the OECD and the World Bank produce regulatory indicators for their own purposes of international benchmarking and of advising governments engaged in reforms.

These are legitimate motivations and approaches, of course, but they entail several limitations that we seek to overcome. Fundamentally, IOs employ expert surveys to generate benchmarks and best practices, but the way the latter are generated may prevent precise cross-instrument comparisons. For example, a typical IO questionnaire used to measure public consultation features would look into the presence/absence of certain best practices and include a question along these lines: ‘Do ministries or regulatory agencies in your jurisdiction request comments on proposed regulations from the general public?’ Similarly, in the case of impact assessment, one of the questionnaire items may read: ‘What are the criteria used to identify interested stakeholders?’ Quite clearly, these items are both about eligibility criteria, but one can immediately see that, despite their common nature, the way the questions are formulated, and answers ‘measured’, prevents direct comparison of the eligibility criteria for the public to participate in the procedures. Thus, for instance, comparing consultation and impact assessment, two instruments belonging to the same better regulation family according to IOs, in terms of eligibility criteria is remarkably hard if we stick to IOs’ measures.

In contrast, by using the conceptual category of boundary rules to scrutinize the legal bases disciplining the instruments, we avoid this idiosyncratic approach based on instrument-specific survey items and identify all the provisions that discipline the characteristics of the actors eligible to perform a role in the procedure. We will see a range of individuals, public bodies, and organizations.

Thus, rather than superimposing and measuring expert-inspired and instrument-specific dimensions, an approach based on the measurement of universal semantic categories (in this case, Ostrom’s rules) allows us to reach an unprecedented level of comparability—and validity. The increased validity of this procedure is also due to the fact that, in contrast to IOs, we are careful to avoid mixing design and implementation features. Whereas an approach focused solely on design has its own limitations (see Chapter 2), it allows us to achieve higher construct validity as we, first, form the concept (instruments’ design) and, second, tackle an empirical corpus (the legal bases disciplining the instruments) which perfectly matches the concept and does not include aspects of implementation which are typically included in IOs’ measures. For us, the legal text is the faithful, and perhaps only, representation of design.

Furthermore, our approach does not involve expert opinions, either in the definition of relevant dimensions or in their scoring. In fact, while the answers to the questions above come from different types of experts in the case of IOs, our identification of rules that belong to one or another Ostrom type is based exclusively on the legal texts that discipline the procedures. This, quite obviously, allows us to reach a degree of reliability that is simply impossible to achieve in the case of (mutable) experts’ answers.

To gather data, we worked with a team of forty partners (mostly academic and professional administrative lawyers) who retrieved and translated institutional statements included in relevant legal bases of the four procedures. We also collected data on judicial review and administrative procedure acts, but in this book, we use them to describe the context (see Chapter 1) and to qualitatively describe individual cases evidenced by the formal analysis we present in Chapters 4, 5, and 6.

Our data points are statements extracted from the law, not answers to survey items. This, as already noted, increases the reliability and replicability of our approach to data collection. Our measures of instruments’ design are based on the letter of the law, not on opinions that may reflect different positions (in government, as a World Bank officer, and so on) and change over time (depending on the attitude of the government in office, for example). The number of lawyers differs from the number of country-instrument combinations because many covered more than one instrument per country: an expert on a country’s FOI procedure often also has expertise in that country’s ombudsman procedures, and impact assessment experts are also knowledgeable about consultation. For some countries, therefore, we were able to select a single expert to cover all four instruments.

For each country we identified the legal bases of the rulemaking instruments in force as of 2018, plus the year of adoption, how many times they had changed, and whether there were additional legal bases for sectoral/local regulators. For the analysis that follows, we rely only on nationwide procedures, without considering subnational levels of government and policy sectors (as we noted in Chapter 2). These legal bases are grounded in hard law or in soft-law guidance documents. We did not differentiate between the two when identifying the rules in the population, although our data tell us whether a rule comes from soft or hard law.

The forty experts completed a protocol for each instrument, reporting the exact text of the portion of the legal base corresponding to a rule type and inserting as many portions of legislation (articles, clauses, and sections of guidelines) as revealed a rule type.

In the process of data collection, we did not use the Institutional Grammar Tool (IGT) language directly, to avoid superimposing on the experts (most of them with qualifications in law rather than political science) terms that are open to diverse interpretations. Rather, we asked them in plain language to: (1) identify the actors involved in a procedure (for example, impact assessment) [position rules]; (2) define the characteristics of individuals or public bodies eligible to perform a role in the procedure [boundary rules]; (3) specify the actions and choices that actors can make according to the legal base [choice rules]; (4) identify actions that require the aggregation of two or more actors [aggregation rules]; (5) list the information sent or received in the procedure, including the channels of communication [information rules]; (6) report the statements containing sanctions and rewards [pay-off rules]; and (7) identify the range of possible outcomes or targets and the level of specificity of the desirable outcomes [scope rules]. The experts also provided a flowchart of each procedure, which enabled the identification of the essential steps in each administrative procedure while also offering the bigger picture of the process. Finally, they recorded details of various dimensions relevant to the individual instrument or the context of administrative law in that country. This was done using open-ended questions. To support the completion of the tasks, we relied on clearly written instructions further explained in a webinar, a one-day in-person workshop on the protocols, plus a number of on-demand online one-to-one sessions.
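The seven prompts can be summarized as a simple lookup from rule type to plain-language instruction. The sketch below is purely illustrative: the prompt wording is paraphrased from the list above, and the data structure is ours, not part of the project’s protocol.

```python
# Illustrative mapping of the plain-language protocol prompts to Ostrom's
# seven rule types (paraphrased from the instructions given to the experts).
PROTOCOL_PROMPTS = {
    "position": "Identify the actors involved in the procedure.",
    "boundary": "Define the characteristics of individuals or public bodies "
                "eligible to perform a role in the procedure.",
    "choice": "Specify the actions and choices actors can make under the legal base.",
    "aggregation": "Identify actions that require the aggregation of two or more actors.",
    "information": "List the information sent or received, including channels of communication.",
    "pay-off": "Report the statements containing sanctions and rewards.",
    "scope": "Identify the range of possible outcomes or targets and their specificity.",
}

for rule_type, prompt in PROTOCOL_PROMPTS.items():
    print(f"{rule_type:12s} -> {prompt}")
```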

The 112 (28 × 4) protocols were validated by four members of the research team,3 working in pairs to increase the reliability of the categorization, with the additional guidance of the legal scholars from the project’s advisory board.4 In practice, we (re)allocated the extracted statements to the IGT categories. When necessary, we went back to the lawyers to ask precise, factual questions regarding the legal bases, or the accuracy of the translation from the original language into English.

The result was four data architectures. These architectures contain:

33 rules/variables for Consultation,

45 rules/variables for RIA,

64 rules/variables for FOI,

61 rules/variables for ombudsman procedures.

These are mainly ‘Yes’ or ‘No’ micro-procedural items which reflect the rules extracted from the legal bases. We followed Ostrom’s approach outlined in her 2005 book (in particular Chapter 8, pp. 223–226). To take stock of the diversity in boundary rules observed across a population of Common Pool Resource (CPR) arrangements, Ostrom simply extracted all the boundary rules found across all the cases and computed a total of empirically observed boundary rules for that population. Our transition from statements to variables follows this logic. When we look at individual cases (countries), each of the rules represents a variable measured in terms of presence/absence.
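Following this logic, the move from extracted statements to presence/absence variables can be pictured as a pivot from a long list of identified rules to a country-by-rule binary matrix. The sketch below is a minimal illustration with invented country codes and rule identifiers; it is not the project’s actual data pipeline.

```python
import pandas as pd

# Hypothetical long-format records: one row per rule identified in a country's
# legal base for a given instrument (identifiers invented for illustration).
statements = pd.DataFrame([
    {"country": "AT", "instrument": "FOI", "rule": "boundary_harm_test_persons"},
    {"country": "AT", "instrument": "FOI", "rule": "position_information_commissioner"},
    {"country": "SE", "instrument": "FOI", "rule": "boundary_harm_test_persons"},
    {"country": "SE", "instrument": "RIA", "rule": "choice_analyse_status_quo"},
])

# Pivot to a country-by-rule matrix of 1/0 (presence/absence), one matrix per
# instrument, mirroring the four data architectures described in the text.
for instrument, block in statements.groupby("instrument"):
    matrix = (
        pd.crosstab(block["country"], block["rule"])
        .clip(upper=1)   # presence/absence, not counts
        .astype(int)
    )
    print(f"\n{instrument}:\n{matrix}")
```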

In addition to the variables based on Ostrom’s rule types, we collected a small number of background items detailing, for instance, the year of adoption, the presence of a hard as opposed to soft legal base, or the regional/sectoral coverage of the instruments.

This approach, based on original legal texts and rule typologies, cannot be considered a form of coding in the style of the coding framework set out in the Institutional Grammar 2.0 codebook (Frantz and Siddiki 2020).5 Rather, ours is a form of categorization which relies on a semantic classification scheme, with legal experts reporting the rule types in our protocol.

Table 3.1 and its corresponding heatmap in Figure 3.1 provide initial insights into the density of rule types across the four instruments. Pay-off and aggregation rules are rare in all countries. This points to the limited reach of the design in terms of scrutiny, oversight, sanctions, and rewards. There are few sanctions for not performing according to guidance or rewards for good practice. For example, the only aggregation moment for FOI concerns specialist cases of consultations with third parties when dealing with information that may impact them adversely.

Table 3.1

Number of Rules by Instrument

              Position     Boundary     Choice       Aggregation  Information  Pay-off    Scope       Total
Consultation  7 (21.21%)   4 (12.12%)   8 (24.24%)   0 (0%)       6 (18.18%)   0 (0%)     8 (24.24%)  33 (100%)
RIA           10 (22.22%)  7 (15.56%)   17 (37.78%)  0 (0%)       4 (8.89%)    1 (2.22%)  6 (13.33%)  45 (100%)
FOI           3 (4.69%)    23 (35.94%)  16 (25%)     1 (1.56%)    12 (18.75%)  5 (7.81%)  4 (6.25%)   64 (100%)
Ombudsman     2 (3.28%)    17 (27.87%)  26 (40.98%)  3 (4.92%)    8 (13.11%)   4 (6.56%)  1 (1.64%)   61 (100%)

Source: Authors
Figure 3.1

Heatmap of Number of Rules by Instrument
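Figure 3.1 can be reproduced directly from the counts in Table 3.1. The sketch below is a minimal matplotlib version under our own plotting choices; it is not the authors’ figure code.

```python
import matplotlib.pyplot as plt
import numpy as np

rule_types = ["Position", "Boundary", "Choice", "Aggregation",
              "Information", "Pay-off", "Scope"]
instruments = ["Consultation", "RIA", "FOI", "Ombudsman"]

# Rule counts from Table 3.1 (rows: instruments, columns: rule types).
counts = np.array([
    [7, 4, 8, 0, 6, 0, 8],     # Consultation (total 33)
    [10, 7, 17, 0, 4, 1, 6],   # RIA (total 45)
    [3, 23, 16, 1, 12, 5, 4],  # FOI (total 64)
    [2, 17, 26, 3, 8, 4, 1],   # Ombudsman (total 61)
])

fig, ax = plt.subplots(figsize=(8, 3))
im = ax.imshow(counts, cmap="viridis")
ax.set_xticks(range(len(rule_types)))
ax.set_xticklabels(rule_types, rotation=45, ha="right")
ax.set_yticks(range(len(instruments)))
ax.set_yticklabels(instruments)
fig.colorbar(im, ax=ax, label="Number of rules")
ax.set_title("Number of Rules by Instrument")
fig.tight_layout()
plt.show()
```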

For the rest of the types, the picture is mixed. Position rules are a case in point. The designs of FOI and ombudsman display conventional positions that determine who can participate. Take FOI, where four positions recur: (i) the requestor (usually the public in some form); (ii) a public authority; (iii) a specialized information appellate body, usually called the Information Commissioner (IC); and (iv) a designated information handler that sometimes exists in the bureau or within each public authority. The ombudsman is similar, with three clearly codified positions: the ombudsman, the complainant, and the investigated public body. The degree to which positions are designated is lighter in consultation and impact assessment.

In RIA, the position of who carries out the assessment ranges from the ‘individual officer’ and the ‘competent administration’ (Estonia, Lithuania, Italy), through the ‘initiator of the act or external contractor’ (Romania), to more generic references to decision-making in cabinet (Spain). Specific positions are sometimes assigned to the Treasury (control of the costs of proposed legislation), the Ministry of Justice (control of the quality of legislation), the legal service (Cyprus), and independent regulators. In consultation, a number of countries, including Austria, the Czech Republic, and Denmark, do not identify who exactly carries out the procedure. By contrast, countries such as Bulgaria define the position of ‘the drafting authority’ with some precision: this authority can be a central government department or an independent regulator. In federal countries, position rules include subnational authorities.

FOI and the ombudsman are heavy on boundary rules, whereas consultation and impact assessment set fewer barriers. Indeed, with FOI and the ombudsman we enter a world of conditions, exceptions, and exemptions where definite eligibility criteria are attached to each of the positions. In the case of FOI, for example, these rules offer precision on who can request information and on what constitutes a public authority and an information commissioner. We also find the boundaries of the information and/or documents themselves. This is one of the central dimensions along which FOI varies across the world: so-called ‘class tests’ and ‘harm tests’. In essence, these cover the exemptions, which can be either mandatory or discretionary, applying to particular categories of information (class) or to information whose release is judged to risk harm to certain functions of the state. In ombudsman procedures, boundaries to eligibility similarly apply to the complainant, in the form of demonstrating a personal interest or suffered violation and of filing the complaint within a specified time frame (typically one year), and to the public administration, in the form of exempted bodies.

Choice rules feature strongly in all four instruments. For consultation and impact assessment, these rules refer to the steps of the procedures. In RIA these are mostly procedural-analytical steps and tests, such as measuring the baseline and examining more than one option. In consultation, choice rules deal with the identification of parties, notification, the consultation timetable, and other steps, including in some cases (for example, Bulgaria) seeking experts’ opinions. In FOI, requestors’ obligations and rights revolve around information reuse and appeals. For public authorities, disclosure actions, rules of engagement with the requestor, reporting requirements, and obligations in the appeals process all have prominence here. Where an Information Commissioner (IC) exists, choice rules concern the nature of their decision-making (binding or not), the extent of their powers, and, again, reporting activities. In the ombudsman, where we see the greatest number of choice rules (41%), they mainly reflect two key aspects of the procedure: first, the investigative functions of the ombudsman, which trigger the relational aspect of the action situation; second, the overarching dimension of remedies. Indeed, the accountability potential of the ombudsman is muted in the absence of clear rules disciplining the means through which cases of maladministration or violations of individual rights can be remedied. This is also why the ombudsman is, comparatively, the instrument featuring the most aggregation and pay-off rules (although these remain few).

As we would expect from an information tool, information rules are abundant in FOI. They cover a vast range of details regarding the timing, format, record management procedures, and the clarity of the process. But all procedures contemplate information rules, given that they are predicated on increasing transparency, notifying, giving reasons, and displaying the evidence utilized by the government and the regulators.

Finally, turning to scope rules, statements on the overall aims and outcomes to be achieved are scarce in the instruments grounded in codified law, that is, FOI and the ombudsman. Discussions about the scope of these instruments are found in the legislative and political debates that pre-date the instruments’ design and enactment. The picture on scope rules is different for consultation and impact assessment. Since these instruments are typically set out in guidelines rather than law, motivations and aims are recorded to underline their importance. Consultation in particular is the instrument through which governments send signals and generate expectations about the involvement of a range of interests and preferences that are enfranchised by design. In contrast to other ways of influencing the legislator or rulemaker, consultation is where the legal base provides for access to draft rules by ‘any citizen’, ‘interests not directly affected’, and ‘citizens of other countries that may be affected’ (this wording occurs in the legal base). This is also the procedure with the lowest number of rules, which signals the presence of degrees of freedom in how to carry out consultation as well as reflecting the fact that consultation guidance is generally short and, in many cases, embedded in RIA.

To reveal the key dimensions of each policy instrument, we used Principal Component Analysis (PCA) (see Table 3.2). This was instrumental in developing a quantification of higher-order concepts which were then used for our QCA. As a technique, PCA is meant to reduce the dimensionality of data when manifest variables are correlated. As such, it enables the reduction of redundant information and the identification of principal components (Jolliffe 2002; Lever, Krzywinski, and Altman 2017). These components are computed as orthogonal linear transformations of the original manifest variables and are used to reveal a simpler internal structure of the data. The PCAs we employ are based on the correlation matrices and use a so-called varimax rotation of the components. This technique maximizes the variance in the data, as we want our principal components to represent those dimensions that most explain variation across our twenty-eight cases.
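As a rough illustration of this step, the sketch below runs a PCA on standardized variables (equivalent to using the correlation matrix) and then applies a standard Kaiser varimax rotation to the loadings. The input matrix, the number of components, and the varimax implementation are ours for illustration; they do not reproduce the book’s analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Standard Kaiser varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient-like term of the varimax criterion.
        target = rotated ** 3 - (gamma / p) * rotated @ np.diag(np.sum(rotated ** 2, axis=0))
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        new_total = s.sum()
        if new_total < total * (1 + tol):
            break
        total = new_total
    return loadings @ rotation

# Hypothetical presence/absence matrix: 28 countries x 10 rule variables.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(28, 10)).astype(float)

# PCA on the correlation matrix is equivalent to PCA on standardized variables.
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(Z)
print("Explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

# Loadings (components scaled by the square root of their eigenvalues),
# then varimax-rotated for easier interpretation.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(np.round(varimax(loadings), 2))
```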

Table 3.2

Principal Component Analysis: Consultation, RIA, FOI, and Ombudsman

Consultation

1) Commitment (28.2% of explained variance)
- Is there a generally applicable, nationwide, cross-cutting legal base for consultation? (.901) [Background]
- Does the Drafting Authority (DA) have to set a consultation timetable? (.875) [Choice]
- Does the DA have to publish a report on comments filed by the CEs (consultation report)? (.888) [Information]

2) Scope (25.9%; cumulative 54.1%)
- Does the legal base spell out inclusiveness of groups that may not be directly affected as aim of the consultation procedure? (.782) [Scope]
- Does the legal base spell out avoiding discrimination as aim of the consultation procedure? (.917) [Scope]
- Does the legal base spell out understanding via plain language as aim of the consultation procedure? (.862) [Scope]

Regulatory Impact Assessment

1) Breadth of exceptions (27%)
- Does the legal base set exceptions for international treaties, the Constitution, the EU, and (for federal countries) regulations concerning multilevel governance? (.878) [Boundary]
- Regulations with a merely formal nature and self-regulation of the government (.816) [Boundary]
- Urgency (.815) [Boundary]
- State budget (.705) [Boundary]

2) Analysis (15.5%; cumulative 42.5%)
- Does the legal base contain requirements to analyse the status quo? (.912) [Choice]
- Does the legal base contain a requirement to compare, identify or commensurate benefits and costs? (.793) [Choice]

3) Responsibility (13.2%; cumulative 55.7%)
- Does the legal base mention line departments (as drafting authorities)? (.872) [Position]
- Are draft IAs published? (.832) [Information]

FOI

1) Information Commissioner: presence, powers and paperwork (22%)
- Presence/absence of a dedicated information commissioner (.804) [Position]
- Are information commissioner decisions binding? (.796) [Choice]
- Does the information commissioner have inspection powers? (.867) [Choice]
- Can the information commissioner review classified documents? (.907) [Choice]
- Must the information commissioner report to the legislature? (.889) [Choice]
- Is there a documented appeal process in the legislation? (.859) [Information]
- Is there a clear timeline for appeal in the legislation? (.821) [Information]
- Does the legislation require the sharing of best practice by a dedicated body? (.727) [Scope]

2) Boundaries of discretionary harm tests (18%; cumulative 40%)
- Does the legal base give government discretion to deny information that could cause harm to persons? (.890) [Boundary]
- Does the legal base give government discretion to deny information that could cause harm to international relations and defence? (.968) [Boundary]
- Does the legal base give government discretion to deny information that could cause harm to commercial competitiveness? (.890) [Boundary]
- Does the legal base give government discretion to deny information that could cause harm to national economic interests? (.871) [Boundary]
- Does the legal base give government discretion to deny information that could cause harm to the activities of law enforcement agencies? (.968) [Boundary]

3) Boundaries of mandatory and discretionary class tests (12%; cumulative 52%)
- Does the legal base contain a mandatory class test on information and documents pertaining to national security? (.775) [Boundary]
- Does the legal base include a mandatory class test on information and documents pertaining to national economic competitiveness? (.761) [Boundary]
- Does the legal base contain discretionary class tests on information and documents pertaining to national security? (−.874) [Boundary]
- Does the legal base contain discretionary class tests on information and documents pertaining to personal information? (−.843) [Boundary]

Ombudsman

1) Remedies (27.7%)
- Can the OM issue binding recommendations? (.797) [Choice]
- Upon receiving an OM recommendation, is there a specific deadline for the concerned public body to comply? (.924) [Choice]
- Is the concerned public body obliged to comply with the OM’s recommendations by notifying her about actions taken? (.939) [Information]

2) Breadth of accountability (18.7%; cumulative 46.4%)
- Does the legal base put private entities performing public functions under the OM’s jurisdiction? (.879) [Boundary]
- Is the periodic report to the body which appoints the OM public? (.875) [Boundary]

3) (Ecological) boundaries (14.3%; cumulative 60.7%)
- Does the complainant have to hold a personal interest to be allowed to file a complaint? (.834) [Boundary]
- Does an ongoing judicial procedure prevent the OM from launching an investigation? (.849) [Boundary]

Source: Authors’ own

A detailed discussion of the technical and conceptual aspects of the four PCAs is featured in Section 4 of this chapter and in Section 1 of the online Appendix. Below, we outline the results of these analyses by reporting details on the principal components retained for each instrument.

We adopted a very simple criterion for retaining components for further analysis: components were retained until cumulative explained variance exceeded 50% (Jolliffe 2002). This threshold is reached with the first two components for consultation and with the first three for the other instruments.
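In code, the retention rule amounts to counting how many leading components are needed before cumulative explained variance passes the threshold. A minimal sketch, using the consultation shares reported in Table 3.2:

```python
import numpy as np

def components_to_retain(explained_variance_ratio, threshold=0.5):
    """Number of leading components needed to exceed the cumulative
    explained-variance threshold (here, 50%)."""
    cumulative = np.cumsum(explained_variance_ratio)
    return int(np.searchsorted(cumulative, threshold) + 1)

# Consultation: PC1 = 28.2%, PC2 = 25.9% -> cumulative 54.1% crosses 50%.
print(components_to_retain([0.282, 0.259]))  # -> 2
```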

Variation in consultation designs is driven by two components that capture fundamental features of this instrument as well as the importance of certain types of rules. The first principal component (PC1) concerns commitment. We identify a first ‘background rule’ that is not captured by Ostrom’s types but is essential in our case, because we deal with two different approaches to consultation. Some countries follow a formal approach, based on provisions contained in either hard law or soft law, or in some cases both. These provisions describe the steps and actions of the government during consultation, no matter what sector is considered. In the other set of countries, either informality is the guiding principle for consultation or there is no consultation at all.

The second rule loading on the commitment component is a choice rule that commits departments or agencies to the production of a timetable at the beginning of each consultation. Some countries do not have a rule of this type because the timetable is uniform for all consultations and fixed by law or by government decision (UK). Others do not have the timetable rule because departments and agencies organize consultation with some flexibility and informality (Sweden). The third rule loading on this component concerns the provision of information relevant to the overall credibility of the exercise: the drafting authority publishes a report at the end of the consultation showing how the comments raised by the stakeholders are taken into consideration.

Together, these three rules signal the commitment of the government to consultation, hence the label of this component. In the language of Ostrom’s IGT, commitment is a combination of uniform cross-sector standards that create expectations about the process, a choice rule, and an information rule.

The second principal component (PC2) for consultation concerns the IGT category of scope. We find three scope rules. They open up consultation to interests that otherwise would not be considered—the interests of those who are not directly affected, who would be discriminated against, and who would not understand draft legislation because of technical language. The design of consultation is among other things a signal of openness and non-discrimination.

The distribution of twenty-eight countries on the two principal components is portrayed in the first square of Figure 3.2.

Figure 3.2

Principal Components 1 and 2: Plots by Instrument.

Notes: To plot the cases along the first two components, we utilized PC regression scores, a so-called ‘refined method’. The scores were standardized to range from 0 to 1 to improve interpretability. See Section 5 of this chapter for a more detailed explanation. Source: Authors’ own
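The 0-to-1 standardization mentioned in the notes is a simple min-max rescaling of the component scores. A minimal sketch with placeholder scores:

```python
import numpy as np

def to_unit_range(scores):
    """Min-max rescale component scores to the [0, 1] interval."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

# Placeholder regression-method scores for one component across four countries.
pc1 = np.array([-1.8, -0.3, 0.4, 2.1])
print(np.round(to_unit_range(pc1), 2))  # -> [0.   0.38 0.56 1.  ]
```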

Most countries cluster in the lower quadrants, meaning that few invest in scope rules. Italy, Cyprus, and the UK are in the upper part of the figure, but in different positions in relation to PC2 on scope. The case of Italy is one of investment in scope rules but not in commitment. We suggest this is indicative of flowery language regarding the virtues of consultation without specific obligations (Italy is low on commitment). Cyprus has high scores on both scope and commitment. The legal base for consultation is indeed one of the richest we have found in our population: it even contains an obligation to write ‘thank you letters’ to those who have taken part in the exercise! This over-presence of rules may be the result of trying to describe the most idealistic, perhaps unrealistic, consultation structure. The OECD 2021 indicators, which consider implementation of guidance as one dimension, put Cyprus in the category of low performers in stakeholder engagement (OECD 2022).

In the lower part of the figure, and to the east, we find countries that have historically championed informal consultation, quite different from ‘better regulation’ OECD-style (Dunlop et al. 2020), either because of informality (Denmark, see Radaelli 2010) or corporatism (Austria). Sweden has its own approach—based on delegation of consultation on secondary legislation to regulatory authorities and, for primary legislation, consultation via committees of inquiry and hearings. This approach does not contemplate the steps and formalities presupposed by the typical OECD and EU practice (Radaelli 2009, 2010).

The south-west quadrant displays twelve countries that have high scores on commitment and low values on scope. We read this first of all as evidence of convergence across systems that do not share historical traits in terms of the diffusion of administrative law or the timing of EU membership: both a founding member of the EU (Germany) and the most recent entrant (Croatia) are in this group.

Consultation does not follow any conventional narrative. The countries are not displayed in ways that resemble our knowledge of pressure group systems (social dialogue and corporatism versus pluralism) or of old versus new member states of the EU. Second, the map points to a prevalence of commitment over declaratory functions of the legal base.

We now examine the components of RIA (Table 3.2 and second square of Figure 3.2). PC1 concerns the breadth of exceptions. It has a clearly discernible IGT value: it is about the boundaries. It is made up of four types of exceptions—cases in which the government does not have to carry out an impact assessment. Exceptions are the most important boundary rule. There are few boundaries of the ‘who’ of impact assessment—generally the position rule is the relevant government department, pure and simple. Instead, the boundaries concern the ‘what’, that is, the contours of the assessment.

RIA’s second principal component (PC2) concerns two analytical requirements. This component assembles the initial test on the status quo—which in the legal base is sometimes described as identification of the current regulatory-legal base, or definition of the problem that needs to be addressed—and the benefit–cost criterion. In our population, the latter is not formal benefit–cost analysis, but rather a requirement to measure positive and negative impacts, or take into consideration some categories of qualitative benefits, or justify costs with the benefits accruing from the chosen option (or from a range of feasible options). The IGT lens shows the nature of choice rules in RIA.

Finally, the third component (PC3), labelled ‘responsibility’, is about ‘who’ carries out the assessment. In some cases, the legal base is silent, assuming that RIA shall be done without clarifying who exactly will take responsibility. The publication of draft impact assessments is another step that points towards responsibility. Some governments do not publish RIAs, either because they do not have this instrument (except in special sectors like the environment or for particular types of companies, as in Malta with the Small Business Act) or because the process of appraising the likely effects of proposals is not formal. In IGT terms, this issue of responsibility is a combination of position and information rules.

Turning to the distribution of countries portrayed in the second square of Figure 3.2, the design of RIA is analytical (PC2) and carried out across the board (PC1) in seven countries that differ by administrative tradition and experience. Communications with our experts in Romania and Hungary point to the likelihood of ‘communication out of character’: the legal base was inspired by the OECD–EU principles of regulatory reform, yet there are administrative capacity issues on the ground in these two countries (World Bank 2015; personal communications with project experts, February 2020). In the north-west quadrant the countries are more dispersed. Here, impact assessment is not carried out across the board, but when it is done it is comprehensive. Boundaries may then signal meticulous sector-level guidelines (for example, the gas and electricity sectors in Italy).

The south-west quadrant suggests a low density of IGT rules (at least as far as the first two RIA components are portrayed). The boundaries are low because the RIA procedure is barely sketched (Belgium, Malta, Luxembourg) and because there is a preference for informality (Finland).

Variation in FOI legislation is accounted for by two features: the presence, powers, and paperwork associated with a dedicated independent supervisory body (the Information Commissioner [IC]) that may exist as an audience for appeals; and the boundaries concerning which documents and/or information are exempt from disclosure.

The first principal component (PC1) comprises eight variables revealing the pivotal importance of the IC position. Central to an IC’s operation are the choice rules attached to that role, and specifically whether its decisions are binding, whether it has inspection powers, and whether it must report annually to the legislature. Added to this, information rules concerning the existence of a delineated appeal process and timeline account for diversity. Finally, we have a single scope rule usually associated with the IC as an engine for the sharing of best practice. In short, this is a microcosmic action situation concerning the operation (or not) of a dedicated FOI appeals process within the overall instrumentation.

The second and third principal components (PC2 and PC3) underline the importance of the presence or absence of clauses in the law that exempt information: harm and class tests. FOI legislation contains an array of these tests, but this analysis cuts through the complexity. PC2 shows that one set of boundaries that matters is the absence of discretionary harm tests across five main categories (Blanke and Perlingeiro 2018: 33–38; Muscar and Cottier 2017; OECD 2011). These are harm to persons; international relations; commercial competitiveness; economic interests; and the activities of law enforcement agencies. PC3 similarly focuses attention on the boundary rules that dominate FOI. This time we are dealing with the presence or absence of tests around entire classifications of information and whether these are mandatory (in relation to national security and economic competitiveness) or discretionary (in relation to national security, personal data, and commercial confidentiality).

The data suggest that the mix of other rules (choice, information, and scope) matters, but only as they relate to the presence or absence of one position: the Information Commissioner (IC). Despite the fact that scope rules account for only 6% of the FOI content analysed and are rare in this policy instrument, they do matter in connection with the presence of an IC to operationalize them. And, while information rules are abundant in FOI legislation (accounting for nearly a fifth of our data structure), they do not drive cross-national variation, even though there is a good deal of diversity in these rules across the twenty-eight cases. Rather, their importance relates only to the issue of appeals.

Staying on the theme of surprises, when we consider the legal literature on FOI legislative design, there are some variables that are assumed to make a difference between countries but simply do not figure here (despite considerable cross-national variation). These include: whether the legal text gives requestors access to both information and specific administrative documents (Dragos, Kovač, and Marseille 2019); the presence of a so-called public interest override invoked as a final check before exceptions are applied (Banisar 2006); the presence of fees for information access (Banisar 2006); and the sanctions imposed for violations of FOI legislation (Blanke and Perlingeiro 2018: 58–60).

When we look at the two main FOI components, the twenty-eight cases fall into distinct zones (see the third square of Figure 3.2). Taking the bird’s-eye view, as we move eastward we encounter countries with an IC whose powers are considerable and with an appeal process whose rules are clear (the ideal type being the UK6). As we move northward, we find fewer and eventually no discretionary harm tests (the archetype being Sweden). The north-west quadrant contains the largest concentration of countries: nine in total. Eight of these countries lack any dedicated IC, and there is an (almost total) absence of discretionary harm tests in the five main areas (with the exception of Italy, which invokes these tests for documentation relating to personal affairs and commercial confidentiality). Though Germany does have the position of IC, it has none of the powers or explication of the appeals process that we see in countries on the eastern side of the figure.

The four countries in the north-east quadrant are united by the presence of a dedicated IC, some with binding powers and others not (Hungary and Spain). For almost all, discretionary harm tests in the five main areas are absent (Ireland excepted, which retains the right to withhold documents it judges may harm national economic competitiveness).

The south-east quadrant has only three countries. Here, discretionary harm tests in the main areas exist (with the exception of Cyprus which has three of the five tests) combined with a dedicated IC with considerable and binding powers in all cases. The six countries in the south-west quadrant have discretionary harm tests in the five main areas but do not have a dedicated IC. With the exception of Austria and Belgium, appeals in these countries are made through the administrative courts and/or ombudsman procedures.

When we lift our gaze from the specifics and compare the countries found in each quadrant, the analysis offers some unexpected affinities. For example, in the north-west quadrant, we find member states from different regions and different times of EU accession: Scandinavia (Sweden), Central and Eastern Europe (CEE) (Bulgaria, Czech Republic, Romania), the Baltics (Latvia, Lithuania), Southern Europe (Greece), and founding EU countries (Germany, Italy). Such diversity undermines any notions we might have about the carbon-copying of legislation during waves of EU enlargement. Moreover, the plot also questions notions of tools based on legal families: the north-east and south-east zones each contain a mix of civil law (Croatia, Hungary, Slovenia, and Spain) and common law countries (Cyprus, Ireland, and the UK).

One reason for this varied picture, and apparently unlikely affinities between countries, is the politicized nature of the development of FOI legislation. This is a high salience administrative tool whose legislative development and design is subject to intense and forensic scrutiny by a diverse range of policy actors, as shown by Worthy (2017) with reference to Britain. The result is legislation which is not carbon-copied from neighbouring jurisdictions or drawn down from legal principles alone.

The key sources of variation in ombudsman design broadly confirm our expectations. The first principal component (PC1), on remedies, explains more than a quarter of the overall variance. It comprises two choice rules and one information rule. These capture the dialectic relationship that unfolds between the ombudsman and the public bodies to which recommendations are addressed. This is not a surprise: choice and information rules together represent 54% of all ombudsman rule types. The IGT implications are clear: the procedural aspect of ombudsman recommendations (also including the exchange of information between the parties after the decision of the ombudsman) is the most prominent in explaining variation in design.

The distribution of cases along PC1 (see the fourth square of Figure 3.2) confirms this intuition: it reveals the expected divide between political systems where the oversight potential of ombudsman procedures is expressed through informality and high mutual trust between the parties (mainly in old democracies) and systems where the ombudsman is vested with quasi-judicial coercive prerogatives (mainly new democracies). This divide corroborates the argument about different waves of diffusion (Gregory and Giddings 2000), with late (and harder) adopters clustering in the right part of the plot.

PC2 brings together different forms of accountability. First, there is accountability of the ombudsman to the body that appoints them as well as to the public at large. The publicity of the ombudsman’s annual reports typically brings before Parliament and the public all the cases of maladministration treated by the office. This provides incentives for further usage by the public (Diamandouros 2006) and constitutes a form of ‘name and shame’ for the public bodies whose actions were reprimanded, even in the absence of manifest hard sanctions. The second form of accountability concerns the boundaries of the ombudsman’s jurisdiction, namely its authority over private bodies performing public functions. Clearly, countries that allow for both publicity of ombudsman reports and coverage of private entities (upper quadrants of the graph) score well in terms of accountability toward different positions.

Finally, the third principal component (PC3) includes eligibility criteria for access to the ombudsman (boundary rules). Personal interest (as opposed to time boundaries) remains a cornerstone of ombuds’ variability, as does the incompatibility of ombudsman investigations with judicial procedures. The IGT implications of components two and three are less neat, but note that three of the four original variables loading onto these components are boundary rules, bringing us back to our expectations about the centrality of choice and boundary rules for highly proceduralized and codified instruments.

The plot of ombudsman PCs points to low degrees of clustered convergence. Starting from the south-west quadrant, Italy and the UK stand out. Although their designs are different, they are functionally equivalent on PC1 (remedies) and PC2 (accountability). Italy only has regional institutions, without a central ombudsman. In the UK, access to the ombudsman is filtered by MPs, the set of available remedies is limited to non-binding recommendations, and a vast number of regional/local and sectoral ombudsman institutions exist.

Moving up to the north-west quadrant, the countries remain quite dispersed, with no clear clustering. Yet, among them, we find four Scandinavian and Western/Northern European countries (Sweden, Denmark, Germany, and the Netherlands). These countries belong to the first wave of diffusion of the ombudsman institution and are (still) loyal to the original template: strong on accountability while drawing on informal and non-binding remedies.

The north-east quadrant, where accountability mechanisms are coupled with harder forms of recommendations, is the most populated, with fourteen countries. The two groups of countries observed in this quadrant defy classifications such as waves of diffusion and legal traditions. In fact, along with new democracies (mainly clustering at the right-hand side of the plot, as noted above) we find countries like France, Austria, and Finland. The lesson we draw is that the IGT’s comparative logic, aptly expressed in our analysis through orthogonal/uncorrelated components, is truly configurational. As such, one aspect/dimension of policy design highlighted by IGT may converge with existing assumptions and taxonomies, while others may allow us to detect surprising similarities in design (as per the two groups of the north-east quadrant).

Finally, Poland, Lithuania, and Romania are in the south-east quadrant where the hardening of ombuds’ remedies is coupled with weak accountability mechanisms. Interestingly, these are also the only countries where the ombudsman allows some form of direct sanctioning, indicating a potential (and dangerous) trade-off between direct enforcement mechanisms and accountability rules.

As we explained and motivated above, our overarching working hypothesis is that the design diversity we presented in the previous sections of this chapter matters for positive governance outcomes. More precisely, the different configurations and combinations of rulemaking instruments’ design features observed across the EU 27 plus the UK are associated with different levels of ease of doing business, perceived corruption and environmental performance.

In Chapter 2 we showed how we conceptualized and selected the relevant rulemaking instruments, while in the present chapter we have explained how we measure their design diversity and presented static results. We termed our measuring effort an exercise in parsimony and synthesis. This exercise draws conceptually on Ostrom’s action situation and methodologically on rule types. The latter allowed us to parse and categorize a large body of legal texts and the institutional statements therein. To reduce (and make sense of) the empirical complexity we relied on a popular exploratory dimension reduction technique (Principal Component Analysis, PCA). PCA enabled us to capture those rules/statements (condensed in newly created variables, i.e. the Principal Components) which are key difference-making conditions within our population (technically speaking, those variables which represent the main sources of variability). In other words, we reduced the high number of instrument-specific manifest variables (see Table 3.2) to a limited number of components. We presented and explained this exercise in parsimony in the previous sections, where we also used the first two components of each instrument (and their scores) to plot countries/cases in a bidimensional space. The following step, an exercise in synthesis, involves the transformation of the Principal Components into conditions that we then calibrate into fuzzy set values suitable for our main analytical method, Qualitative Comparative Analysis (QCA) in its fuzzy set version (fsQCA).

In this section we introduce the conditions that constitute the core elements of our empirical investigation.

By leveraging rule types as a data collection device, we have created very granular pictures of the design features of the four instruments. Those pictures are ideal for in-depth qualitative analyses of individual countries and/or individual instruments, but they are not readily usable for comparisons across countries and/or instruments. This is because the granular data we have generated through rule types hardly lend themselves to being condensed into a single metric which, in the case of fsQCA, is represented by a score that indicates the set membership of each country vis-à-vis each instrument. ‘Set membership’ in this sense refers to the circumstance that QCA operates through a set-theoretic logic. This means that the components of the analysis (let us take ‘consultation’ as an example) are transformed into sets to which cases can belong to a greater or lesser extent. In this language, a country case with highly proceduralized consultation procedures technically belongs to ‘the set of all countries with highly developed consultation procedures’. This is not just a particular way of speaking, but a specific methodological understanding of case description. Cases are related to sets, so that the analysis can proceed with sets as the main analytical tool.

It is evident, however, that cases do not simply belong to a set or not; they can also belong to sets with different intensities. This differentiation is captured through fuzzy sets, which allow us to model degrees of set membership. Fuzzy values vary between 0 and 1. A case with a high fuzzy value of, say, 0.8 belongs to the set of countries with highly proceduralized consultation procedures, but it does not belong to it perfectly.

In very simple terms, the challenge we face is to assign each instrument in each country to such a (fuzzy) set. Doing this starting from forty or fifty variables would be both technically complex and conceptually risky, because not every rule carries the same weight and importance in explaining intra-population variability. Incidentally, the PCAs were instrumental in demonstrating this clearly and in creating new variables (the Principal Components) which embed those manifest variables that play a major role in explaining overall variability.

These circumstances make the Principal Components the ideal devices to summarize, in a synthetic yet informative way, the information conveyed by the dozens of manifest variables we collected. Hence, the starting point for operationalizing the instruments’ design features into fuzzy set QCA conditions is deciding how many Principal Components to retain for each instrument. If we were using crisp sets, which (unlike fuzzy sets) are limited to the values 1 (full membership in the set, indicating a perfect presence of the concept for that case) and 0 (full non-membership, indicating a perfect absence), the operationalization would be more straightforward. The challenge would be to decide, say, whether freedom of information in Croatia is a member of the set of all countries with high levels of freedom of information (value 1) or not (value 0). In that case, a simple consideration of whether freedom of information legislation exists in Croatia might suffice: absent FOI legislation, Croatia would be assigned a 0, whereas in the presence of any FOI legislation it would be assigned a 1. As introduced above, fuzzy sets are more complex, as they allow for different degrees of set membership and therefore of presence/absence of the concept. This allows for a much more nuanced analysis, more in line with social science thinking, which is not only black and white but also recognizes the many shades of grey in between (see also below for a more technical account).

But presence or absence of what, precisely? To answer this question, we need to go back to the instrument-specific Principal Components and show step by step how we selected them for the sake of fuzzy set calibration, and what they represent conceptually and in terms of manifest variables they embed.

Recall that PCA extracts as many components as there are manifest variables. As using all the PCs for subsequent analyses would not allow for any meaningful dimensional reduction, the specialized literature has developed a number of rules of thumb for principal component retention. The most popular of these suggests retaining for further analyses all components with an eigenvalue greater than 1. Yet rules of thumb are, by their very nature, not case-specific (here, dataset-specific) and hence are insensitive to the nature and structure of the data. To illustrate, whereas a large sample size is required to extract meaningful components when manifest variables are mainly uncorrelated, this is not true when the variables are highly correlated (as is the case for our four datasets). Similarly, the eigenvalue-greater-than-1 rule is meaningful when a limited number of Principal Components have eigenvalues noticeably higher than 1, so that one or two Principal Components are enough to validly summarize population information. Yet this is not our case. Hence, we decided to employ another, more meaningful rule of thumb, which suggests retaining those Principal Components which jointly explain more than 50% of the dataset’s variability.
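To make the retention rule concrete, the following minimal sketch (in Python, with toy data and variable names that are ours, not part of the original pipeline) keeps the smallest number of leading components whose cumulative explained variance exceeds the 50% threshold.

```python
# A hedged sketch of the >50% cumulative variance retention rule.
# The data matrix is illustrative (e.g. 28 countries x 36 consultation variables).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(28, 36))  # toy data standing in for a country-by-variable matrix

pca = PCA().fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
n_retained = int(np.argmax(cumulative > 0.50)) + 1  # first component at which the cumulative share crosses 50%

print(f"retain {n_retained} components "
      f"({cumulative[n_retained - 1]:.1%} of variance explained)")
```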

There are several reasons for this choice: conceptual, practical, and methodological. First, consider that the consultation dataset includes 36 manifest variables, the RIA dataset 48, the ombudsman (OM) dataset 44, and the FOI dataset 72. Out of these high numbers of manifest variables, we have to distil only four categorical conditions, one per instrument.

In a previous study focusing on the link between consultation design and perceived corruption (Dunlop et al. 2020), again employing fsQCA, we faced a similar challenge: the need to devise a limited number of explanatory conditions (related to the design of consultation procedures) that are associated with levels of perceived corruption (our outcome variable). In that research, we were able to proceed qualitatively, that is, by selecting and weighting a subset of the thirty-six variables which constitute the design of consultation procedures. These selected rules ended up contributing to one of four consultation conditions, broadly based on Ostrom’s rule types: Thickness, Access, Information, and Choice. Now, in the context of an ecological study which considers four procedures instead of one, the conditions cannot represent aspects of a given instrument but have to convey information about the whole instrument: one instrument, one condition.

This, as already argued, poses a great challenge. Since one of the main qualities of the datasets we collected is granularity, and therefore precision in the measurement of the smallest procedural aspects of the four policymaking instruments, we want to preserve this quality as much as possible while transitioning from micro-procedural items (e.g. does country X’s RIA foresee the publication of drafts for comments?) to a single number used to assign set membership to that country. The approach we followed in Dunlop et al. (2020) unfortunately has to be ruled out, because conditions can no longer be used to capture aspects/dimensions of an individual procedure; they have to capture the nature of the whole procedure. We are left with two options. One is to follow a qualitative approach in which we browse the variables included in the four databases, select and weight a subset of them based on theoretical considerations, and then proceed to calibrate. This, as discussed, is quite time consuming and requires a number of arbitrary decisions which may undermine the quality of the data we collected.

The second option is more practical and less prone to biased decisions: it resorts to the PCs, their scores, and their shares of explained variance to develop weighted, standardized country scores for each instrument. From these scores, we can then easily calibrate conditions for the fsQCA.

Technically speaking, since PCA is a variance-maximizing technique, when we rely on Principal Components we basically rely on the largest sources of variation in the databases. This is an inductive approach that allows us to preserve some (indeed much) of the granularity of the Protego data. We decided to proceed as follows. The starting point is the PC scores. In the context of PCA, these can be calculated in various ways; put simply, they represent the score each case obtains on the Principal Components (PCs). Remember that PCs are agglomerates of manifest variables, hence PC scores are computed starting from the original values each case takes on the variables included within that PC. For each of the four databases we retained only the components which jointly explain more than 50% of the overall variance.

In practice, this means that we retained the first two Principal Components7 of the consultation PCA (54.1% of variance explained); the first three principal components of the RIA PCA (55.7% of variance explained); the first three principal components of the FOI PCA (52% of variance explained); and the first three principal components of the ombudsman PCA (60.7% of variance explained). To develop a single index for each instrument and each case (i.e. an index score), we computed a weighted sum of the Principal Component scores. The PC scores we used are the most common in the literature, that is, regression scores. The regression method weights the scores according to the original variables’ loadings on the PC.

The sum is weighted in that we employed the percentage of explained variance to balance the influence of each Principal Component in the final indexes. Technically, the weighted indexes were computed according to the following formula:

\[
\mathrm{Index}_i = \sum_{j=1}^{N} v_j \, s_{ij}
\]

whereby the subscript i indicates the observation, the subscript j the principal component, s_{ij} the score of observation i on component j, and v_j the share of variance explained by component j. The index is a hierarchically weighted aggregate of the principal component scores retained for each instrument (N = 2 for consultation, N = 3 for RIA, FOI, and ombudsman). Because we are interested in variation, the components explaining major shares of variance in the dataset carry a greater weight. We then rescaled the new scores to the 0–1 range to make interpretation easier.
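As an illustration of this computation, the sketch below (a hedged example, not the original code: the toy data and the function name weighted_index are assumptions, and it uses ordinary PCA scores rather than the regression-method scores mentioned above) multiplies each retained component’s scores by its share of explained variance, sums them, and rescales the result to 0–1.

```python
# A minimal sketch of the variance-weighted index described in the text.
import numpy as np
from sklearn.decomposition import PCA

def weighted_index(X, n_components):
    """Weighted, 0-1 rescaled index built from the retained principal component scores."""
    pca = PCA(n_components=n_components).fit(X)
    scores = pca.transform(X)                  # one column of PC scores per retained component
    weights = pca.explained_variance_ratio_    # share of variance explained by each component
    index = scores @ weights                   # variance-weighted sum across components
    return (index - index.min()) / (index.max() - index.min())  # rescale to [0, 1]

# Illustrative usage: three retained components for a toy 28 x 48 'RIA-like' matrix.
rng = np.random.default_rng(1)
ria_like = rng.normal(size=(28, 48))
print(weighted_index(ria_like, n_components=3).round(2))
```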

Equipped with these new weighted index scores, standardized to range from 0 to 1, we performed a calibration based on a 6-tile (sextile) rank transformation to obtain six fuzzy values, as detailed in Table 3.3.
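A minimal sketch of such a sextile rank transformation follows (the country scores are hypothetical and the function name is ours; the only assumption is that each sextile of the ranked index maps onto one of the six fuzzy values used in Table 3.3).

```python
# A hedged sketch: rank the 0-1 index scores and map each sextile onto
# one of the six fuzzy values 0, 0.2, 0.4, 0.6, 0.8, 1.
import pandas as pd

def sextile_calibration(index_scores: pd.Series) -> pd.Series:
    """Assign fuzzy values by sextile of the ranked index."""
    sextile = pd.qcut(index_scores.rank(method="first"), q=6, labels=False)
    return sextile / 5.0  # integer sextile codes 0..5 become 0, 0.2, ..., 1

# Hypothetical index scores for a handful of countries (not our actual values).
scores = pd.Series({"Austria": 0.05, "Sweden": 0.02, "France": 0.40,
                    "Spain": 0.55, "UK": 0.95, "Belgium": 0.10})
print(sextile_calibration(scores))
```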

Table 3.3 Calibration of the Conditions

Country        CON   RIA   FOI   OM
Austria        0     0.8   0.4   0.6
Belgium        0     0.2   0     0.2
Bulgaria       0.6   0.6   0.2   0.8
Croatia        0.6   0.2   1     1
Cyprus         1     0.2   1     0.2
Czech Rep      0     0.4   0     0.8
Denmark        0     0.8   0.4   0.4
Estonia        0.6   1     0.6   0.6
Finland        1     0.2   0.6   1
France         0     0     0.8   0.8
Germany        1     0.4   0     0.4
Greece         0.4   0.4   0.2   0.6
Hungary        0.4   0.6   1     0.8
Ireland        0.8   0.8   0.8   0.6
Italy          0.8   0.2   0.2   0
Latvia         0.8   1     0.8   0
Lithuania      0.2   0.8   0.6   0.2
Luxembourg     0     0     0.6   0
Malta          0.4   0     0.4   0.4
Netherlands    0.2   0.4   0.2   0.4
Poland         0.6   0.6   0.6   0.4
Portugal       0.4   0.6   0.4   0.6
Romania        0.6   1     0.2   0.2
Slovakia       0.6   0.4   0.4   0.8
Slovenia       0.8   0.6   1     1
Spain          0.4   0.8   0.8   1
Sweden         0     0     0     0.2
UK             1     1     0.8   0

Source: Authors’ own

A few considerations on the rationale of fuzzy sets are in order. Fuzzy sets define full membership in the set under research (fuzzy value of 1), full non-membership (fuzzy value of 0), and the point of indifference (fuzzy value of 0.5), where we do not know whether the case is more a member of the set or not (Schneider and Wagemann 2012: 28). While this is also achieved with the dichotomous form of QCA, crisp-set QCA (csQCA), the fuzzy set variant goes further and establishes gradings in between these qualitative anchors. Values between 0.5 and 1 indicate to what degree the case belongs to the set in question, while values between 0.5 and 0 inform us about cases that do not belong to the set in question, albeit to different degrees. Spain, for example, has full membership in the set of all countries with a strong ombudsman (fuzzy value of 1) and fairly strong memberships in the sets of all countries with highly developed rules of impact assessment and freedom of information, respectively (both with fuzzy values of 0.8), while it is not a member of the set of countries with strong consultation procedures, though it comes close to such a membership (fuzzy value of 0.4).

In brief, fuzzy values

of 1 indicate full set membership,

between 0.5 and 1 rather membership than non-membership,

of 0.5 complete indifference (and thus useless information),

between 0 and 0.5 rather non-membership than membership,

and of 0 full set non-membership.

The fuzzy values then become the data matrix of the QCA. Therefore, their definition and assignment to single cases (a process called ‘calibration’) is of utmost importance for every QCA and a decisive step. This step is nothing other than the transformation of theoretically derived qualitative concepts into numbers between 0 and 1. Implicitly, this is done in any comparative research, but the formal apparatus of QCA forces the researcher to reason intensively about the single values. In this sense, it is a much more transparent and tractable way to speak about concepts than using superficial language markers such as ‘consultation is strong in country XY’.

Interested readers will find an exhaustive explanation on the calibration of our outcomes and the four instruments which we use as explanatory factors in the online Appendix.

Having discussed the steps of our calibration, some terminological clarity is needed. High fuzzy values (such as 1, 0.8, or 0.6) mean high set membership values (the case is, at least partially, a member of that set). They also mean that the concept is rather present or, in the language of the Principal Component Analysis, that the case scored well. By contrast, low fuzzy values (such as 0, 0.2, and 0.4) mean low set membership values (the case does not belong to that set, or belongs to it only partially). Consequently, the concept is rather absent, and the case scores low in the PCA.

While this is very technical jargon, let us now spell out what it means substantively. The following list provides a translation of what low fsQCA scores denote for each instrument (with the reverse translation applying to high scores).

Consultation: the components we use to measure consultation design are Commitment and Scope. A low score/absence indicates that the design is silent/weak on government’s commitment to good consultation practices and/or on general objectives of consultation (scope);

RIA: the components are Breadth of exceptions, Analysis and Publicity. Absence/low score of a country’s RIA means that there are many exceptions to its application and/or analytical requirements are weak/absent and/or RIA are typically not published;

FOI: the components are the Information Commissioner and two types of boundaries, relating respectively to harm and class tests. As a result, a low score/absence of FOI indicates the absence of the Commissioner (or its lack of powers) and/or the presence of many exceptions to the FOI regime;

OM: the components are Remedies, Breadth of accountability, and Boundaries. A low score/absence hence indicates that the OM has no hard law remedies and/or that the OM has no power over private entities and no reporting obligations.

As mentioned, these conditions and the three governance outcomes (which are also defined through fuzzy sets) will be used for the QCA. Through various set-theoretic techniques (for details, see Schneider and Wagemann 2012), paths will be identified that show which combinations of factors logically imply the outcome. To present it less technically: QCA works out all combinations of factors for which the outcome under research (such as the ease of doing business) is present (or absent). However, put like this, it sounds like just a condensation of empirical information; in other words, like a correlation in quantitatively inspired social science research, which is an indication of causality but not yet a full account of causality. This, indeed, is a fundamental problem of all social science research: over the centuries, the social sciences have produced very helpful tools for finding different forms of correlation (captured as ‘set relations’ in QCA), but the establishment of causality poses challenges which go beyond algorithms. This is no different for QCA.
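As an illustration of the set-theoretic logic involved (a minimal sketch using made-up fuzzy values, not our calibrated data), the standard measure of how far a condition, or combination of conditions, is sufficient for the outcome is the consistency of sufficiency, the sum of min(x, y) divided by the sum of x (see Schneider and Wagemann 2012).

```python
# A hedged sketch of fuzzy-set consistency for a sufficiency claim X -> Y.
import numpy as np

def consistency(x, y):
    """Consistency of 'X is sufficient for Y': sum of min(x, y) divided by sum of x."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.minimum(x, y).sum() / x.sum()

# Made-up fuzzy memberships in a combination of conditions (x) and in an outcome (y).
x = np.array([0.8, 0.6, 0.2, 1.0, 0.4])
y = np.array([0.9, 0.6, 0.4, 0.8, 0.2])
print(f"consistency = {consistency(x, y):.2f}")
```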

QCA results just indicate where to look for causal mechanisms. The argumentation on causal mechanisms inevitably refers to plausibility (Gerring 2010), but also to case knowledge. Once we know which combinations of explanatory factors are connected to which outcome, we can identify the cases that are marked by such combinations of factors and use our case knowledge to explain why these combinations have led to the outcome. Often, such a procedure results in the insight that, while different cases belong to the same combination of explanatory factors, different mechanisms or reasons are at work in the single cases. In a way, a QCA represents the results of a cross-case analysis similar to correlations in quantitative research (the QCA equivalent being ‘set relations’); what has to follow (in all research paradigms) is a case-based account. The twenty-seven EU countries plus the UK are real, existing polities which cannot easily be captured through formal recipes. Therefore, a good deal of case knowledge is necessary. The real explanation of the phenomenon can only occur through case analysis, with all its pros and cons, and it necessarily has to remain somewhat vague and speculative, since it is the very essence of case analysis to consider case specificities.

In other words, the subsequent QCA analyses will first present the results on commonalities and differences between the cases. This step will identify those combinations of conditions which imply the outcome. The verb ‘imply’ is used deliberately here, because we can only be sure about the formal implication. In a subsequent step, we will discuss the single combinations extensively, connect them to country cases, and argue on the basis of our country case knowledge. In this sense, QCA is a truly case-oriented method.

One more clarification is in order: QCA is often used as a method which tries to identify causes of effects (Mahoney and Goertz 2006). We use it in this sense, but also partially invert the perspective: while we are certainly interested in finding the causes of our effects (i.e. the three outcomes), we acknowledge that our analysis might not capture all causes. We do not have the ambition to explain our outcomes fully. Rather, our interest is in the contribution of the four instruments to the explanation of our effects. In other words, in addition to looking for causes of effects, we are also interested in the effects of causes (i.e. our instruments). We acknowledge a priori that we will omit variables, to stay within the well-known vocabulary (Radaelli and Wagemann 2019).

Our theory-informed approach to measurement provides a granular picture along the directions that correspond to the research questions outlined earlier: the distribution of rule types in the population as a whole, the IGT structure of each instrument, the variation across the twenty-eight cases as accounted for by rule types, and data calibration for further analyses. The principal components are then used in the three following empirical chapters to create the conditions we need for fsQCA. This fsQCA will identify those combinations of explanatory factors which imply the outcomes under research. Case-by-case analysis will then shed light on the processes at work in the single country cases.

The findings on the underlying structure of the data allow us to see how far we are from comparative politics categories about civil and common law countries, varieties of capitalism, strength of pressure groups, Europeanization, and waves of accession to the EU (as per the discussion of expectations in Chapter 2). These empirical results challenge conventional interpretations of cross-country variation in Europe. In our data, there is no alignment of countries around waves of Europeanization, families of administrative law, or liberal versus coordinated economies (see also Section 2 of Chapter 1 about the limited use of these categories). Comparative politics approaches may well be suitable for macro-comparisons; yet, when it comes to administrative law, regulation, and specific policy instruments, the explanation we found is more nuanced and, as we said, granular.

This last observation brings us to the limitations of the data. We do not examine sectoral procedures, which can also be nationwide. In several countries, independent regulators have their own guidance documents on impact assessment. In others, we find sectoral ombudsman offices, from insurance to banking and prisons, which we did not analyse. Finally, the map we present does not take into account the evolution across the years. Though changes to FOI and ombudsman legislation in the twenty-eight countries are rare and, where they have happened, incremental, change does happen, and governments have certainly overhauled their approach to consultation and impact assessment since the early days of the 1990s. Of course, when procedures change, the indicators mapping rule types change accordingly. Yet we can only capture a snapshot of how these procedural instruments looked in 2018.

Banisar, D (2006) Freedom of Information around the World. London: Privacy International.

Blanke, H J and Perlingeiro, R (2018) ‘Essentials of the Right of Access to Public Information: An Introduction’, in H J Blanke and R Perlingeiro (Eds.) The Right of Access to Public Information. Berlin: Springer: 1–68.

Diamandouros, P N (2006) ‘The Ombudsman Institution and the Quality of Democracy’, Lecture by the European Ombudsman at the Centre for the Study of Political Change, 17 October, University of Siena, Siena, Italy.

Dragos, D C, Kovač, P, and Marseille, A T (2019) ‘From the Editors: The Story of a Data-Driven Comparative Legal Research Project on FOIA Implementation in Europe’, in D C Dragos, P Kovač, and A T Marseille (Eds.) The Laws of Transparency in Action. Basingstoke: Palgrave: 1–7.

Dunlop, C A, Kamkhaji, J C, Radaelli, C M, Taffoni, G, and Wagemann, C (2020) ‘Does Consultation Count for Corruption?’, Journal of European Public Policy 27(11): 1718–1741.

Dunlop, C A, Kamkhaji, J C, Radaelli, C M, Taffoni, G, and Wagemann, C (2021) ‘Measuring Design Diversity: A New Application of Ostrom’s Rule Types’, Political Science Journal 50(2): 432–452.

Frantz, C K and Siddiki, S (2020) ‘Institutional Grammar 2.0: A Specification for Encoding and Analysing Institutional Design’, Public Administration 99(2): 222–247.

Gerring, J (2010) ‘Causal Mechanisms: Yes, But …’, Comparative Political Studies 43(11): 1499–1526.

Gregory, R and Giddings, P J (Eds.) (2000) Righting Wrongs: The Ombudsman in Six Continents. Vol. 13. Amsterdam: IOS Press.

Johns, M and Saltane, V (2016) ‘Citizen Engagement in Rulemaking – Evidence on Regulatory Practices in 185 Countries’, World Bank Policy Research Working Paper, 7840: 1–45.

Jolliffe, I T (2002) Principal Component Analysis. Berlin: Springer.

Lever, J, Krzywinski, M, and Altman, N (2017) ‘Points of Significance: Principal Component Analysis’, Nature Methods 14(7): 641–642.

Mahoney, J and Goertz, G (2006) ‘A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research’, Political Analysis 14(3): 227–249.

Muscar, N P and Cottier, B (2017) Comparative Study of Different Appeal and Control National Mechanisms Regarding Access to Public Information in Six Council of Europe Member States. Brussels: Council of Europe.

OECD (2011) Government at a Glance: Transparency in Government. Paris: OECD Publishing.

OECD (2019) Better Regulation Practices across the European Union. Paris: OECD Publishing.

OECD (2021) OECD Regulatory Policy Outlook 2021. Paris: OECD Publishing.

OECD (2022) Better Regulation Practices across the European Union. Paris: OECD Publishing.

Ostrom, E (2005) Understanding Institutional Diversity. Princeton, NJ: Princeton University Press.

Ostrom, E (2007) ‘Institutional Rational Choice: An Assessment of the Institutional Analysis and Development Framework’, in P A Sabatier (ed.) Theories of the Policy Process. 2nd edn. Boulder, CO: Westview Press: 21–64.

Radaelli, C M (2009) ‘Measuring Policy Learning: Regulatory Impact Assessment in Europe’, Journal of European Public Policy 16(8): 1145–1164.

Radaelli, C M (2010) ‘Rationality, Power, Management and Symbols: Four Images of Regulatory Impact Assessment’, Scandinavian Political Studies 33(2): 164–188.

Radaelli, C M (2020) ‘Regulatory Indicators in the European Union and the Organization for Economic Cooperation and Development: Performance Assessment, Organizational Processes, and Learning’, Public Policy and Administration 35(3): 227–246.

Radaelli, C M and Wagemann, C (2019) ‘What Did I Leave Out? Omitted Variables in Regression and Qualitative Comparative Analysis’, European Political Science 18(2): 275–290.

Schneider, C Q and Wagemann, C (2012) Set-Theoretic Methods for the Social Sciences: A Guide to Qualitative Comparative Analysis. Cambridge: Cambridge University Press.

World Bank (2015) Strengthening the Regulatory Impact Assessment Framework in Romania. Allio ǀ Rodrigo Consulting. Final Report, September.

Worthy, B (2017) The Politics of Freedom of Information. Manchester: Manchester University Press.

Notes

1. This chapter is broadly based on Dunlop et al. (2021).

3. This was Dunlop, Kamkhaji, Radaelli, and Taffoni.

4. This was Herwig Hoffmann and Jacques Ziller.

5. Our choice is due to the fact that rule types are more conducive to conceptualizing action situations than the ADICO categories. In fact, ‘[r]ules are part of the underlying structure that constitute a single-action situation or a series of them’ (Ostrom 2005: 179). In Ostrom’s words, rule type categorization is ‘a way of consistently grouping rules so that the analysis of rule systems can be made much more cumulative’ and, we add, comparative (Ostrom 2005: 175). Moreover, the theoretical framework of reference is IAD, not IGT per se: ‘If one wishes to use the syntax as a foundation, this leaves one with the AIM element of a rule to be used. And this is our plan. Although … the syntax fits regulatory rules better than generative rules, generative rules still do have an AIM, so a sorting mechanism that uses the AIM works for generative rules too. And, it works for all three levels of the IAD framework’ (Ostrom 2005: 188). Therefore, rule types can be seen as the key tool for generalizing IAD: ‘As institutional analysts … we need to devise a method that draws on the general Institutional Analysis and Development (IAD) framework to help link rules to the action situations they constitute’ (p. 186). Additionally, and most importantly, rule typologies are conceived as a classification of rules by their AIM, which is one of the ADICO components. The concepts hence are not only genealogically related; rule types (as an instrument of classification) actually generalize one of the ADICO components (see Ostrom 2005: 185) after careful reflection on which of the ADICO components better lends itself to a semantic generalization (p. 188).

6. We should be clear that, despite using the UK label, the legislation coded for this study is the England, Wales, and Northern Ireland Freedom of Information Act 2000. Owing to its distinct legal system (a combination of common and civil law), public authorities in Scotland are covered by separate, though similar, legislation (Freedom of Information [Scotland] Act 2002).

7. Recall that Principal Components are organized hierarchically according to the percentage of variance they explain.
