
Integrating data, knowledge, and expertise for policy advice: understanding the practices of Dutch organized crime control professionals

Published online by Cambridge University Press:  25 January 2024

Wybren van Rij
Affiliation:
HU Utrecht University of Applied Sciences, Utrecht, The Netherlands
Rianne Dekker*
Affiliation:
Utrecht University School of Governance, Utrecht, The Netherlands
Albert Meijer
Affiliation:
Utrecht University School of Governance, Utrecht, The Netherlands
Corresponding author: Rianne Dekker; Email: [email protected]

Abstract

Current research on data in policy has primarily focused on street-level bureaucrats, neglecting the changes in the work of policy advisors. This research fills this gap by presenting an explorative theoretical understanding of the integration of data, local knowledge and professional expertise in the work of policy advisors. The theoretical perspective we develop builds upon Vickers’s (1995, The Art of Judgment: A Study of Policy Making, Centenary Edition, SAGE) judgments in policymaking. Empirically, we present a case study of a Dutch law enforcement network for preventing and reducing organized crime. Based on interviews, observations, and documents collected in a 13-month ethnographic fieldwork period, we study how policy advisors within this network make their judgments. In contrast with the idea of data as a rationalizing force, our study reveals that how data sources are selected and analyzed for judgments is very much shaped by the existing local and expert knowledge of policy advisors. The weight given to data is highly situational: we found that policy advisors welcome data in scoping the policy issue, but for judgments more closely connected to actual policy interventions, data are given limited value.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Open Practices
Open materials
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Policy significance statement

This research highlights the importance of understanding data use for policy advice as the integration of data, local knowledge, and professional expertise. The case under study is a Dutch Regional Intelligence and Expertise Center for tackling organized crime. The study reveals how policy advisors tend to rely on data during mapping and defining the policy problem, but give little value to the data when transitioning to policy action. While policy advisors see data as a new source of knowledge, their local knowledge and professional expertise shape its use. Efforts to strengthen policy advice through data science need to build upon a contextual understanding of the interplay between data, local knowledge, and expertise.

1. Introduction

In this digital day and age, many data sources and new ways to analyze them have become available to inform policymaking on complex societal issues. Data are used in an increasing number of policy domains. For example, in the domain of public security, police officers use data to deploy resources efficiently (Brayne, 2017; Meijer et al., 2021); in the social domain, civil servants use data-driven dashboards for service provision to unemployed citizens (Kersing et al., 2022); and in the financial domain, tax professionals use data to detect fraud and social security infringements (Simonofski et al., 2022).

The potential of new digital data sources to better inform public policymaking is a topic of academic interest (Mergel et al., 2016; Giest, 2017). Data can be understood as quantifiable observations of reality, conveniently structured and captured through technology (Davenport and Prusak, 1997). Currently, the power of “big data” is highlighted in terms of its structural and ever-expanding dimensions, such as volume, velocity, and veracity (Mergel et al., 2016). From a social science perspective, Brayne (2017) and Seaver (2017) define data by how they are shaped in social and cultural processes. They stress that people judge which data are analyzed, and for what purposes.

This viewpoint also underscores the importance of studying how individual professionals judge data compared to other types of policy-relevant information (Vydra and Klievink, 2019; Young et al., 2019). Extensive research has explored how data inform judgments by street-level bureaucrats (Bovens and Zouridis, 2002; De Boer and Raaphorst, 2023; Grimmelikhuijsen, 2023). Some argue that data systems reduce discretion (Bovens and Zouridis, 2002; Buffat, 2015), while others suggest that data provide bureaucrats with new information and that discretion takes a different form (Jorna and Wagenaar, 2007; De Boer and Raaphorst, 2023). This has become known as the curtailment versus enablement debate (Buffat, 2015).

Most research thus far has focused on street-level bureaucrats who, in their policy execution role, use data systems for automating frequent, routine tasks. Their activities differ significantly from the complexities faced by policy advisors, who work with data during pre-execution phases and play a vital role in advising on and shaping policy content. Their valuation of new data sources may considerably influence how data inform policy implementation (Mergel et al., 2016; Desouza and Jacob, 2017). Yet, their practices of integrating data with other sources of knowledge, such as local knowledge, which is gained through experience, and professional expertise, which has a basis in training, remain relatively understudied.

This in-depth research analyzes how such policy advisors judge and integrate digital data with local knowledge and professional expertise. We do so by addressing the following research question:

How do policy advisors judge and integrate data with local knowledge and professional expertise for policymaking on complex issues?

We adopted an ethnographic approach to research a network of policy advisors working on the complex policy issue of organized crime. This network is one of the ten Dutch Regional Intelligence and Expertise Centers (RIECs) established to support street-level bureaucrats by sharing data and expertise. The RIEC’s policy advisors started to use new types of data analyses besides local and expert knowledge, allowing us to observe their explicit judgments in the use of data. The domain of law enforcement is becoming more focused on data but is relatively closed to in-depth research (Brayne, 2017; Meijer et al., 2021). These considerations make it a suitable and revelatory case for our study.

In the remainder of this article, we first present an analytical framework for understanding the use of data for policy advice based on Vickers’ (1995) appreciative system, which draws our attention to three types of judgments in public policymaking. This is followed by a description of our case and methods. Thereafter, we present the findings of our empirical research on the use of data by organized crime professionals. We examined the practices of policy advisors to understand whether data were curtailing or enabling their discretion. We focused specifically on how they accept, neglect, or contextualize data during different types of policy judgments. In Section 5, we discuss the main findings and highlight their relevance.

Our main finding is that the closer policy advisors’ judgments approach actual policy action, the more limited the role of data vis-à-vis other types of knowledge and expertise is. While policy advisors welcome data as an objective and additional source of knowledge, our analysis reveals how the selection and analysis of data sources are shaped by the existing local and expert knowledge of policy advisors. By focusing on policy advisors and analyzing how data influence their judgments, we make a contribution to the growing body of literature on data in public policymaking (Buffat, 2015; Mergel et al., 2016; Giest, 2017).

2. Theoretical framework

2.1. Data, knowledge, and expertise in policymaking

New data sources are changing the perception and approach toward policy problems (Boyd and Crawford, 2012). Data are quantifiable observations of reality, conveniently structured and captured through technology (Davenport and Prusak, 1997). They provide an additional type of policy-relevant information next to expert knowledge or expertise, which is “technical and/or professional expertise that derives from academic training” (Yanow, 2004, 13), and local knowledge, which is context-specific, interactively derived, and based on lived experience. According to Van Dijck (2014, 198), there is a “widespread belief in the objective quantification,” whereas local and expert knowledge is seen as limited and possibly biased (Lepri et al., 2017). Those involved in policy development and execution employ data as a “rationalizing force,” meaning they use data to support their judgments, lending them an appearance of objectivity and instrumental rationality (Pasquale, 2015, 15). This carries a techno-optimistic perspective that society is knowable and manageable through data and technology (Vydra and Klievink, 2019). What evolves is a myth or hype surrounding data: the belief that society can be entirely shaped and controlled through the use of data. This belief fits into a trend toward greater standardization and rationalization of modern bureaucracy (Boyd and Crawford, 2012; Weber and Tribe, 2019). Data are believed to foster efficiency and limit favoritism in government policymaking. However, data can also bring a loss of freedom and autonomy for those working within this “iron cage,” as well as a more impersonal approach to those being governed (Jorna and Wagenaar, 2007; Weber and Tribe, 2019).

Current studies into the effects of data on public policy have mostly focused on the discretion of street-level bureaucrats executing policies. The concept of discretion refers to how street-level bureaucrats judge ambiguous policy objectives while considering contextual factors of their challenging work environments (Lipsky, 2010). According to the curtailment thesis, introduced by Buffat (2015), data systems limit users’ ability to apply diverse knowledge sources as they see fit. Automation and data systems replace the use of human judgment (Bovens and Zouridis, 2002). However, other studies, such as De Boer and Raaphorst (2023), have found little evidence to support the curtailment thesis. Data systems might also enable street-level bureaucrats by providing more information and reducing the workload, which has become known as the enablement thesis (Buffat, 2015). Policy advisors have a more discretionary role than street-level bureaucrats when it comes to shaping and developing policies, especially for complex and ambiguous policy issues (Young et al., 2019). Whereas the work of street-level bureaucrats becomes partly automated by data systems, policy advisors have much more “discretionary freedom to include, exclude, use or ignore information derived from big data projects” (Van der Voort et al., 2019, 37).

We conceptualize three practices that reflect how policy advisors exercise discretion in working with data. First, policy advisors may welcome and accept data analyses, sometimes without questioning or understanding the output (Arnaboldi, 2018). This leads to curtailment of their discretion and is similar to automation bias, by which policy professionals, like all human beings, generally tend to favor the results presented by machines, even if these are mistaken (Grimmelikhuijsen, 2023). Second, users may entirely neglect the information extrapolated from data (Arnaboldi, 2018). They may have their own perceptions and goals to pursue, or they might even perceive data systems as a threat to their profession (Van der Voort et al., 2021). A third practice, which shares similarities with the enablement of discretion, is the contextualization of data. It involves interpreting the output of data systems by integrating additional knowledge sources. Establishing connections between the data outputs and local or expert knowledge enables policy advisors to make informed judgments on the policy situation (Kitchin, 2014; Janssen et al., 2017). This is not an isolated process; it is shaped by the culture and norms of the organization (Meijer et al., 2021).

Table 1 summarizes these three data practices, which we will use to analyze how policy advisors integrate data. They indicate that policy advisors maintain significant discretion despite the increasing value placed on data. The following subsection will delve deeper into the policy process in which these practices occur.

Table 1. Types of data practices

2.2. Rethinking policymaking through Vickers’ judgments

Gathering reliable and relevant information constitutes an important aspect of public policymaking. Early rationalist perspectives suggest analyzing the best available knowledge during problem definition before proceeding to subsequent stages of policy formulation and decision-making (Bridgman and Davis, 2003). In this way, policymaking is “applied problem-solving” where information is used as a tool (Howlett et al., 2009, 4). However, as policy scientists have later argued, ideal-typical models of policymaking, such as the policy cycle, do not reflect the messy reality in which knowledge is acquired and integrated in a partial, gradual, and incremental manner, and in which each actor contributes a part of the puzzle (Lindblom, 1979; Hoppe, 2010). Moreover, scholars such as Stone (2012) argue that policymaking complexities extend beyond knowledge gaps. Policy problems, categorizations, and solutions are neither fixed nor self-evident. Instead, she argues that policymaking is socially constructed and involves judgments at every stage, from defining the problem to making decisions on specific cases. Cook and Brown (1999) depict policymaking as an ongoing practice of “knowing,” in which different knowledge sources are used to understand a policy problem. Policymakers, including policy advisors, do not apply knowledge a priori to a policy situation; rather, knowledge is evoked within a certain practice.

To better understand policymaking as an ongoing, value-driven, and discursive battle, Vickers’ (1995) appreciative system provides a helpful perspective. It distinguishes three judgments in policymaking: reality judgments about “what is the case,” value judgments about “what ought to be the case,” and instrumental judgments about whether action is needed and, if so, which action is appropriate (Vickers, 1995). Although these judgments are interrelated, they do not necessarily take place in a structured or consecutive way. Policymakers, such as policy advisors, continuously learn and adapt their judgments of the policy issue.

Reality judgments guide policymakers in identifying relevant aspects of a situation. Vickers (1995, 54) defines them as “judgments of fact” as they define and scope the situation at hand. This resembles traditional problem definition perspectives but differs in that it considers more than just available data and evidence. Instead, one’s mental models and assumptions, congruent with Stone (2012), also influence the perception of the situation. After the reality judgments specify “what is the case,” the value judgments assess the extent to which this case is desirable. These judgments are about desired outcomes rather than current realities. They define the aim or objective of a policy initiative, making them a stepping stone toward policy formulation. Based on judgments of how far current reality is removed from the desired situation, policymakers prioritize aspects of the problem. The last type of judgment, the instrumental judgment, concerns whether the case needs intervention and, if so, which intervention. If the outcome is to act on the situation, the instrumental judgments transform the knowledge into a feasible solution using various resources, in line with traditional perspectives on policy implementation.

Vickers’ judgments are distinct but strongly related concepts as they are part of the same mental activity. As Vickers (1995) explains, reality judgments are charged with values since policymakers select facts according to their mental models and, sometimes implicit, preferences. In turn, policymakers make value judgments based on a selection of “facts” derived from reality judgments. This mirrors Stone’s viewpoint, as she describes the influence of values and social constructions on policymaking. This conceptualization of reality, value, and instrumental judgments resembles the complexities of policymaking more closely than staged models such as the policy cycle. Judgments sometimes overlap in practice and are not necessarily consecutive: a policymaker can also return to a previous judgment, or the outcome can be to maintain the current situation. However, analytically distinguishing between the judgments helps to come to a comprehensive understanding of instances in which policy advisors judge and might accept, neglect, or contextualize digital data (i.e., Table 1).

This theoretical section has emphasized the evolving role of data alongside the value of considering Vickers’ judgments in studying policymaking. It highlights how policy advisors, while engaging with data, expert knowledge, and local knowledge, have the discretion to accept, neglect, or contextualize data (Table 1). Notably, this multifaceted view challenges both bureaucratic rationalization and staged models of policymaking, proposing instead an understanding of policymaking as an ongoing interplay of data, knowledge, and judgments. Table 2 summarizes our analytical framework, which applies Vickers’ judgments to data use in the public policymaking activities of policy advisors.

Table 2. Summary of the analytical framework and empirical questions

3. Data and methods

3.1. Case and research context

Our case is an RIEC, of which there are ten in the Netherlands. Participating organizations include municipalities, the police, tax authorities, the public prosecution service, and others, depending on the nature of the case. We studied the policy advisors who work with or under the guidance of an RIEC (hereafter: expertise center). These policy advisors work for municipalities and the police, or they are directly employed by the expertise center. The expertise centers support local efforts to address organized crime with policy advice, supplementing the efforts of the national police, who focus on severe cases. They were set up to support local professionals and their organizations by providing advice on the elaboration of policies to prevent and reduce organized crime based on data and other sources of knowledge. The policy advisors advise on the content and execution of the policies. They direct street-level bureaucrats, such as community police officers and municipal professionals, and they occasionally attend inspections or other activities themselves.

The expertise center supports street-level bureaucrats in several ways. First, the center publishes reports on local levels of organized crime based on knowledge derived from police officers, citizens, and community workers, as well as from digital data sources. Organizations that participate in the network use the reports to allocate their capacity. Second, the center organizes projects to counter regional types of organized crime. This may include drug production in barns in rural areas and criminal activity in city business parks. Hotspot analyses by the expertise center help to allocate resources. Last, participating organizations can bring actual cases to the expertise center, after which the center might start an investigation and ask other organizations to share data. The expertise center has, for example, added algorithms that map criminal actors’ social networks to its toolbox. To some extent, the data used by the expertise center share the characteristics of big data: they are vast, digital, and analyzed with new techniques. However, the data are not collected quickly or in real time, nor are they unstructured. Especially during case investigations, small data (Desouza and Jacob, 2017) prevails.
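To illustrate the kind of social-network mapping mentioned above, the sketch below computes a simple centrality ranking over a toy set of actor relations. The actor names, the relations, and the use of degree centrality are illustrative assumptions; the article does not describe the center’s actual algorithms or data.

```python
# A minimal sketch, assuming co-occurrence of actors in case files as input;
# not the expertise center's actual method or data.
import networkx as nx

# Hypothetical edges: each pair means two actors appear together in a case file.
relations = [
    ("actor_a", "actor_b"),
    ("actor_a", "actor_c"),
    ("actor_b", "actor_c"),
    ("actor_c", "actor_d"),
    ("actor_d", "actor_e"),
]

graph = nx.Graph(relations)

# Degree centrality as a simple proxy for how connected an actor is in the network.
centrality = nx.degree_centrality(graph)
for actor, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{actor}: {score:.2f}")
```

In practice, analysts would feed such a ranking back to policy advisors for interpretation rather than treating it as conclusive, in line with the contextualization practices described in Section 2.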

3.2. Data collection

Our ethnographic approach focused on the lived experiences of policy advisors in working with data (Schwartz-Shea and Yanow, 2012). We undertook observations, interviews, and a document analysis over a period of 13 months, from November 2020 to December 2021. This combination of methods was used to contextualize policy advisors’ verbal accounts of their judgments with observations of their actions and data analysis procedures. Ethnographic observations covered eight occasions, lasting a total of 16 hours. The targeted occasions were meetings in which policy advisors discussed the use of data. Six of these occasions were online meetings. The interviews were open, exploring various topics related to data use. The invitation and consent form briefly introduced our research topic (data use). We formulated several open questions to start the interview: we asked respondents about their professional background, their work, what projects they were working on, and which role data plays in their practice. We conducted 19 interviews with representatives of different organizations collaborating in the expertise center. We followed a snowball sampling approach to recruit respondents within the expertise center until no new participants were suggested. By this point, we had interviewed all policy advisors within the network, representing different organizations and positions. Six respondents were project managers advising other organizations (R5, R6, R8, R9, R12, R13) or policy advisors who could be considered clients or mandators of data analyses (R7, R11, R13, R15, R19). Another nine interviewees were analysts working with data (R1–R4, R10, R14, R16–R18). All respondents gave their informed consent to participate in this research anonymously. All interviews were recorded and transcribed verbatim. Observations were logged in written reports and anonymized similarly. In addition to observations and interviews, we also analyzed several policy documents describing the center’s modus operandi and data analysis methods.

3.3. Analysis

Our analysis approach is interpretive, aimed at understanding the data practices of professionals through their motivations and intentions (Schwartz-Shea and Yanow, 2012). We used ATLAS.ti software for qualitative analysis of the interview transcripts, observation reports, and documents. The first round of analysis concerned open and inductive coding (Braun and Clarke, 2006). Recurring themes were data collection, data analysis, policymaking, and the use of different knowledge sources. From there, we moved back and forth between data and theory, resulting in a list of relevant themes for further analysis (Wulff, 2002). This allowed us to refine our analytical framework (i.e., Table 2) for the following interviews and observations and the next rounds of coding, which is typical of interpretive research (Schwartz-Shea and Yanow, 2012). Subsequent rounds of coding focused on situations in which respondents were using data and other types of knowledge in making judgments related to organized crime. In the last round, we looked for patterns between the codes and discussed the validity and application of our coding scheme to ensure the reliability of our research. Combining interviews, observations, and policy documents allowed for triangulation and intertextuality (ibid., 51), and therefore also contributed to the robustness of our research. Finally, we validated our findings by performing member checks with key respondents (ibid., 106).

4. Findings

4.1. Data use during reality judgments

First, we analyze how policy advisors judge “what is the case” (Vickers, 1995), and whether they accept, contextualize, or neglect data for these judgments (Table 2). Our empirical data uncovered instances of reality judgments when policy advisors defined areas and types of crime. They do this, for example, during the “rural areas project.” Criminals use farm sheds to produce and store illegal drugs, a recurring matter that requires the center’s attention. The respondents use municipal and Chamber of Commerce data to collect general information on the area’s inhabitants and the physical environment. They discuss “what is the case” by considering what counts as a “rural area.” Then, the data are visualized in “hotspot analyses” (see Figure 1). This leads to mutual agreement on “what is the case” (Vickers, 1995): situations that might be vulnerable to drug crime, forming the input for subsequent value judgments.

Figure 1. Hotspot analysis (for confidentiality reasons, this is a fictional municipality).
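As an aside, the sketch below shows one way a hotspot analysis over point-level indicator data could be produced, using kernel density estimation. The coordinates, clusters, and grid are synthetic assumptions for illustration; the article does not detail the center’s actual datasets or mapping software.

```python
# A minimal sketch, assuming synthetic point locations of flagged premises;
# not the expertise center's actual data or hotspot method.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(seed=1)
flagged = np.vstack([
    rng.normal(loc=2.0, scale=0.3, size=(40, 2)),  # a hypothetical cluster of flagged sheds
    rng.uniform(low=0.0, high=5.0, size=(60, 2)),  # scattered background signals
]).T  # gaussian_kde expects shape (n_dimensions, n_points)

density = gaussian_kde(flagged)

# Evaluate the density on a grid; cells with high values are candidate hotspots.
grid_x, grid_y = np.mgrid[0:5:50j, 0:5:50j]
grid = np.vstack([grid_x.ravel(), grid_y.ravel()])
heat = density(grid).reshape(grid_x.shape)
print("densest grid cell:", np.unravel_index(heat.argmax(), heat.shape))
```

A map such as Figure 1 would overlay a density surface of this kind on municipal geography; which premises count as “flagged” depends on the indicator choices discussed below.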

These reality judgments start with the willingness of the involved policy advisors to use data besides other forms of knowledge. Several respondents (R5, R6, R11) indicate that data provide them with insights that other forms of knowledge lack. For example, when areas are too large for manual investigation, they welcome data to improve their “information position” (R5) as they experience “blind spots” (D21). Data are seen as enabling their activities. According to analyst R1, policy advisors usually accept data to enhance their understanding of “what is the case”: “[they] immediately ask: how big is the problem? How much does it happen to us, and do we have more or less [of this type of crime] than our neighboring municipality?” They tend to rely solely on the outcomes of the data analyses for defining and scoping the situation.

According to the policy advisors, there are several challenges to overcome before data provide useful insights. The first challenge pertains to selecting appropriate datasets and formulating indicators to signal organized crime in the region. The center’s analysts cannot use sensitive police and financial data to signal organized crime, since these may only be used when there is strong evidence in specific cases. Generally, the analysts consult with policy advisors and other organized crime experts for expert knowledge on which datasets they can use and which indicators possibly reveal patterns of organized crime.

This challenge of selecting relevant datasets and indicators relates to a second issue: the analysts need specific local knowledge from the policy advisors. Much of the analysts’ work is “boundary work” in which they help the expertise center’s policy advisors to explicate their local knowledge of the situation they are interested in:

“What should I pay attention to? Every form of organized crime, which is an umbrella term covering a lot, has different indicators. So, if I am going to just browse for anything interesting, then I am not going to see anything, so to speak.” (R3)

Besides helping policy advisors clarify their needs, the analysts also take the initiative themselves. Analyst R2 describes a situation in which the expert and local knowledge about which datasets and indicators were appropriate, and about how the situation should be defined, were insufficient:

“So eventually, I looked at what to do and wrote an action plan. […] The next question was: how will we define [organized crime]? […] No real clarity was given on that, so now they have several options: they can bring it up themselves, I will do it myself, or a mix of both.”

Eventually, the efforts of the analysts and the expert and local knowledge of the policy advisors lead to data products such as hotspot analyses. The analyses are then discussed with a broader team of policy advisors. Most were not involved in the preceding process and the different reality judgments, including the choice of datasets, indicators, and definitions. During several meetings (V2–V6), we observed that the policy advisors did not question the underlying reality judgments that led to the visualizations but accepted the data insights as a basis for value judgments.

In sum, during reality judgments, the policy advisors welcome data as an instrument to fill their blind spots. The analysts play a central and proactive role and perform “boundary work” in formulating questions and indicators based on the local knowledge of public professionals. Before the data can provide new insights, they need expert knowledge of definitions and indicators of organized crime. The beneficiaries of the analyses usually accept the data without being involved in the underlying reality judgments.

4.2. Data use during value judgments

Next to the reality judgments, which defined what is the case, the value judgments assess whether the situation deviates from a desirable situation. We looked in our empirical material for value-laden statements or situations that reflect preferences, priorities, or evaluations. They surface when the policy advisors judge a situation, established using reality judgments, that indicates crime or vulnerability to crime, or when they determine that they know nothing about an area and consider that undesirable too and a reason to prioritize it. We analyzed whether the policy advisors accepted, contextualized, or neglected data for their value judgments.

An example of a value judgment is when the expertise center’s policy advisors propose a particular area for additional or more thorough data analysis. Suspicions are often based on local knowledge, referred to by the respondents as “gut feeling” (R2) or “professional intuition” (R6): “it is not always very odd or wrong; it actually is a great treasure of information” (R4). Earlier studies defined gut feeling as “compressed expertise” (Weick, 2009) and as indispensable to public practice (Møller, 2022). This is triggered, for example, when community police officers spot previous offenders or notice irregular behavior (R5). The value judgments are informed by reality judgments in which data were accepted. Utilizing local knowledge, the policy advisors use value judgments to classify the situation as undesirable and suspicious, prompting a desire for a more comprehensive data analysis.

Some value judgments precede reality judgments. This was the case when a policy advisor of the expertise center selected indicators based on expert knowledge, reflecting what the advisor considers undesirable when it comes to organized crime in an area. Examples of indicators are the type of industry under which a company is registered or certain characteristics of real estate. Project manager R9 describes this process:

R9: “Some indicators weigh heavier than others, of course. […] So, I have made the calculations in such a way that when specific indicators are present, they act as aggravating factors in processing the results.”

Interviewer: “And the weighting of the indicators, is that determined by you?”

R9: “Yes, haha.”

Interviewer: “Why do you laugh?”

R9: “If you state it that way, it sounds like I am almighty. However, this is the way it is.”

This indicator selection process influences the output of the data analysis—the reality judgments—and the subsequent value judgments about which output of the analysis is undesirable.
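The weighting that R9 describes can be read as a simple additive score over indicators. The sketch below shows one possible form of such a scheme; the indicator names and weights are hypothetical, as the article does not disclose the actual indicators or how they are weighted.

```python
# A minimal sketch, assuming hypothetical indicators and weights;
# not the weighting scheme actually used by the expertise center.
INDICATOR_WEIGHTS = {
    "high_risk_industry_code": 3.0,      # assumed aggravating factor
    "vacant_real_estate": 2.0,
    "frequent_ownership_changes": 1.5,
    "no_registered_utility_usage": 1.0,
}

def vulnerability_score(observed_indicators: set[str]) -> float:
    """Sum the weights of the indicators observed for a single premise or company."""
    return sum(INDICATOR_WEIGHTS.get(name, 0.0) for name in observed_indicators)

# Example: one premise triggers two indicators, another only one.
print(vulnerability_score({"high_risk_industry_code", "vacant_real_estate"}))  # 5.0
print(vulnerability_score({"no_registered_utility_usage"}))                    # 1.0
```

As the interview fragment makes clear, choosing and weighting such indicators is itself a judgment made by the advisor, which is why the output of the analysis reflects expert knowledge rather than replacing it.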

When making value judgments based on the outputs, policy advisors again tend to accept data knowledge. For example, during meeting V5, the participants interpreted red on a hotspot map by stating it must be “colored red for a reason” and that “if we have to start somewhere, let us begin with what the data is telling us: that is a no-brainer.” However, policy advisors with local knowledge of a particular area tend to use it to contextualize the data. A police officer, for example, shared his opinion on a specific area: “I think this area is very interesting. When I drive through this area, I really get a gut feeling” (V5). When local knowledge was absent, the participants sometimes recommended adding it by performing visual inspections: “I think it is worthwhile to enrich the map with a visual inspection also to include gut feeling” (V4).

This practice of contextualizing data knowledge with local knowledge was encouraged by the analysts who prepared the analyses. The analyst at meeting V5 warned against quick conclusions by explaining that the output does not mean organized crime is present but that an area is more vulnerable to crime: “You [the other participants] must be a bit cautious in terms of interpretation. There is a difference, but the differences are not extremely large” (V5). Data analysts often voiced such warnings: “Data can tell you many things, but you should take care that the use of data is not a matter of absolute truth, in the sense that it gives a kind of objectivity. For obvious reasons, that is not the case” (R1). Instead, they encourage policy advisors to use local knowledge to value the importance and desirability of output, thus informing their value judgments.

These findings show that judgments of value are made when local knowledge and “gut feeling” initiate data analyses. Subsequently, expert knowledge guides the selection of indicators. Policy advisors are inclined to accept the data in making value judgments or to contextualize it when they possess local knowledge. Interestingly, the analysts are the ones who warn against perceived objectivity and stimulate the users to contextualize the results by adding local knowledge. In addition, our findings on value judgments illustrate a close and mutual relationship between reality and value judgments, mediated by local knowledge and analysts; data analyses are initiated and contextualized by local knowledge, and analysts oversee this process.

4.3. Data use during instrumental judgments

The policy advisors must judge what to do based on the outcomes of the reality and value judgments: situations that are undesirable because of organized crime. Most of the time, these instrumental judgments, during which the policy advisors again might accept, contextualize or neglect data, closely follow value judgments, often in the same meetings. The policy advisors discuss whether the prioritized cases are solid enough to proceed with an intervention. Sometimes they request more thorough data analyses or local information. If they are convinced a case needs intervention, they discuss which type of intervention fits and how to distribute resources. A possible intervention could be preventive, aimed at raising awareness, or repressive by executing inspections and searches.

Before deciding upon policy action, the policy advisors judge whether they still have “additional information needs” (R4). A project manager remarked: “That makes it difficult at the start. What is really going on? And how solid is your story?” (R5). At the same time, additional analyses also cost additional resources: “ok, but it takes time, and it takes capacity, and what will it get us?” (R13). Sometimes, requests for data analyses lose their urgency in the meantime: “the information phase takes quite some time. If you are eventually going to intervene, you realize that the information is outdated” (R8). One way to gather more information is for the municipality to use its administrative powers to conduct permit inspections. Analyst R2 suggests that policy advisors, who frequently attend inspections, sometimes feel unsatisfied if the inspectors do not uncover any unlawful activities. The inspection results do not match what the data and their local knowledge, again referred to by respondents as gut feeling, tell them. These instrumental judgments show that data analyses are not conclusive but need additional judgments on sufficiency, even after an inspection.

A key instrumental judgment before conducting any inspections or other activities is whether the organizations of the policy advisors are mandated to intervene. For example, local municipalities can use administrative law and enter premises to assess whether a company adheres to its permits. The police can start a criminal investigation, or the expertise center can advise the tax authorities to start a fiscal investigation (D27–D29). The policy advisors judge a specific situation by applying their expert knowledge to determine whether they have the jurisdiction to use their legal powers. The follow-up of the analyses is therefore mainly determined by what is possible at all, as analyst R2 describes: “It very much depends on what opportunities an individual sees.” In some cases presented by the data analyses, the policy advisors decided that their jurisdiction did not allow them to intervene, or they feared taking on unnecessary responsibilities from their organization’s perspective (R3).

It is not only the content of the analysis or the mandate that determines the follow-up of the analyses. A project manager described the ideal instrumental judgment: “Which intervention is the fastest and most efficient? In addition, will you achieve the desired result?” (R6). However, according to this project manager, the options are often limited and determined by the organization’s capacity and budgetary limits: “Eventually, sometimes you select the quick and simple intervention because then at least you have a result, even though it may not be the best result.” This was also visible during meeting V5. The data analysis suggested that action was required. However, budget restrictions overruled the analysis: “This municipality has no budget, just zero. I do not know if you heard the news, but we had to cut spending by 1.5 million. I have nothing, so when we come up with innovative ideas then…” The policy advisor did not finish the sentence, but the other participants understood the message. In other cases, the policy advisors also neglected data for their instrumental judgments because their local knowledge of the situation made them think repressive actions could negatively affect people’s willingness to report (V4).

These findings regarding instrumental judgments indicate that it requires a considerable amount of time and effort to actually act on prioritized cases. Policy advisors judge the content and timeliness of the analysis to decide if an intervention is necessary, leveraging their local knowledge, such as neighborhood familiarity, and their expert knowledge regarding their organization’s mandate. Besides mandate, another limiting organizational factor is whether the organization has the capacity and budget to intervene. Policy advisors generally neglect data due to these factors.

Summarizing our findings on integrating varied knowledge sources in assessing organized crime during reality, value, and instrumental judgments shows that data’s impact lessens as judgments near policy action. Table 3 provides an overview of the policy advisors’ judgments. During reality judgments, the professionals accept data knowledge to fill their blind spots; during value judgments, they contextualize data with expert knowledge and primarily local knowledge; and during instrumental judgments, they somewhat neglect data because of local and organizational factors.

Table 3. Findings per type of judgment

5. Discussion and conclusions

This study addressed the research question, “How do policy advisors judge and integrate data with local knowledge and professional expertise for policymaking on complex issues?” We examined policy advisors in public security and their practices in integrating data during three types of judgments that underlie policies to prevent and reduce organized crime. In response to our research question, we discovered that when making reality judgments, the center’s policy advisors tend to accept data knowledge to define which forms of organized crime are present, which indicates a curtailment of their discretion. However, local knowledge—often referred to by the respondents as “gut feelings”—was often used to initiate data analyses into the prevalence of certain types of organized crime. During these analyses, the data analysts encouraged the policy advisors to explicate their local knowledge and aims with the analysis. In value judgments on which areas and types of crime to prioritize, we found that the policy advisors usually enable their judgments by contextualizing data with local knowledge and professional expertise—again often prompted by the data analysts. In making instrumental judgments, we saw that policy advisors often neglect data. Organizational factors, including the jurisdiction of the organizations, budget limitations, and time constraints, frequently prevail in instrumental judgments. As judgments move closer to policy intervention, the role of data becomes smaller. While this may signal enablement, in this stage other factors very much constrain the discretion of policy advisors. Interestingly, data analysts within the center often pushed for contextualization of the data with local knowledge and professional expertise, as they were most aware of the limitations of the data. Their boundary work helped policy advisors use the data appropriately.

Despite analysts’ warnings, policy advisors sometimes perceive data as objective truth. However, we observe that through actions of selecting indicators, assigning weights to these indicators, and choosing specific areas, the results of the data analysis largely reflect policy advisors’ own local and expert knowledge. Contrary to the idea of data as a rationalizing force (cf. Kitchin, 2014; Vydra and Klievink, 2019), data are highly interconnected with and dependent on local and expert knowledge. However, policy advisors’ perceptions of data align with the widespread belief surrounding (big) data as a “higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy” (Boyd and Crawford, 2012, 663). Although the expertise center does not define its data collection as a big data method, in practice, it exhibits the same mythical effects.

Our study adds to the academic literature on using data for policymaking by highlighting the discretionary role of policy advisors in working with data. As most studies focus on street-level bureaucrats (cf. Jorna and Wagenaar, 2007; De Boer and Raaphorst, 2023), this type of policy professional is understudied, even though their involvement influences policy implementation. Upon initial observation, their response appears to reflect enablement by including an additional type of information in their judgments (cf. Buffat, 2015). However, by drawing on Vickers’ judgments, we could discern a more complex dynamic in which data are transformed into a representation of policy advisors’ own local and expert knowledge, which is then presented and interpreted as objective. Because policy advisors tend to believe in the objective nature of data, data play a pivotal role in informing policy to reduce organized crime, particularly during reality and value judgments. In addition, consistent with earlier work on information practices and algorithmization (Davenport and Prusak, 1997; Meijer et al., 2021; Meijer and Grimmelikhuijsen, 2021), our findings confirm that the organizational context highly constrains how data inform policy action. Organizational factors, such as local policy priorities, resources, and mandates, strongly bound data practices.

As a revelatory case, the Dutch RIEC provided in-depth insights into the data judgments of policy advisors. This case represents a policy domain that relies heavily on data but still explores big data-like methods. Studies looking at the effects of data use in law enforcement that fail to examine how public professionals interact with data systems, such as crime hotspot maps and predictive policing algorithms, might overestimate the novelty of knowledge that data brings in addition to local and expert knowledge. Our study reveals that the use of data is shaped in different ways by local and expert knowledge, which is fed into data systems and used to contextualize and sometimes neglect the outcomes of analyses. Future studies into the effects of data on policies within the domain of law enforcement should therefore not only focus on the data system in itself, but also take into account how policy advisors with local and expert knowledge interact with these systems.

Our findings come from a single case of a specific network of policy advisors, and some observations were of digital meetings, which may have led to different group dynamics. While our findings most naturally align with other RIEC networks, they may also be applicable to other policy advisors in varied contexts, provided they are cautiously interpreted and translated as necessary. The results may be generalizable to other policy networks and public organizations dealing with similarly complex policy issues on which new types of data have become available, such as public health or climate change. For “simple” government tasks, we expect different results. Comparative research designs would allow future research to investigate whether and how our findings can be generalized to such other domains and contexts. Specifically, we recommend that future studies focus on the boundary work of data analysts to observe whether and how data are informed by local and expert knowledge. Furthermore, we suggest that future studies consider how organizational factors might restrain how data inform subsequent policy action. In what way do public professionals account for their organizational context, and how does this shape their data practices, particularly in collaborative networks? The expertise center under study turned out to use a combination of “small” and “big” data, which fit theories on enablement and curtailment only to a certain extent. It would be relevant to repeat such a case study among a policy network or public organization that uses more advanced big data analytics, such as machine-learning algorithms.

Data availability statement

An overview of the data collection and the coding scheme are openly available on OSF at the following link: https://doi.org/10.17605/OSF.IO/MYU7D.

Acknowledgments

The authors would like to thank Dr. Anke van Gorp and Dr. Andrea Donker of the HU Utrecht University of Applied Sciences for their support and feedback throughout the research process. The authors would also like to thank the three anonymous reviewers for their valuable comments.

Author contribution

Conceptualization: W.V.R.; Investigation: W.V.R.; Methodology: W.V.R.; Supervision: R.D., A.M.; Writing—original draft preparation: W.V.R., R.D.; Writing—review and editing: all authors. All authors approved the final submitted draft.

Funding statement

This research was funded by grants from the HU University of Applied Sciences Utrecht. The funder had no role in study design, data collection and analysis, the decision to publish, or preparation of the manuscript.

Competing interest

The authors declare no competing interest.

Footnotes

This research article was awarded the Open Materials badge for transparent practices. See the Data Availability Statement for details.

References

Arnaboldi, M (2018) The missing variable in big data for social sciences: The decision-maker. Sustainability 10(10), 1–18.
Bovens, M and Zouridis, S (2002) From street-level to system-level bureaucracies: How information and communication technology is transforming administrative discretion and constitutional control. Public Administration Review 62(2), 174–184.
Boyd, D and Crawford, K (2012) Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society 15(5), 662–679.
Braun, V and Clarke, V (2006) Using thematic analysis in psychology. Qualitative Research in Psychology 3(2), 77–101.
Brayne, S (2017) Big data surveillance: The case of policing. American Sociological Review 82(5), 977–1008.
Bridgman, P and Davis, G (2003) What use is a policy cycle? Plenty, if the aim is clear. Australian Journal of Public Administration 62(3), 98–102.
Buffat, A (2015) Street-level bureaucracy and E-government. Public Management Review 17(1), 149–161.
Cook, SDN and Brown, JS (1999) Bridging epistemologies: The generative dance between organizational knowledge and organizational knowing. Organization Science 10(4), 381–400.
Davenport, TH and Prusak, L (1997) Information Ecology: Mastering the Information and Knowledge Environment. New York: Oxford University Press.
De Boer, N and Raaphorst, N (2023) Automation and discretion: Explaining the effect of automation on how street-level bureaucrats enforce. Public Management Review 25(1), 42–62.
Desouza, KC and Jacob, B (2017) Big data in the public sector: Lessons for practitioners and scholars. Administration & Society 49(7), 1043–1064.
Giest, S (2017) Big data for policymaking: Fad or fasttrack? Policy Sciences 50(3), 367–382.
Grimmelikhuijsen, S (2023) Explaining why the computer says no: Algorithmic transparency affects the perceived trustworthiness of automated decision-making. Public Administration Review 83(2), 241–262.
Hoppe, RA (2010) The Governance of Problems: Puzzling, Powering, Participation. Bristol: Policy Press.
Howlett, M, Ramesh, M and Perl, A (2009) Studying Public Policy: Policy Cycles & Policy Subsystems, 3rd Edn. New York: Oxford University Press.
Janssen, M, van der Voort, H and Wahyudi, A (2017) Factors influencing big data decision-making quality. Journal of Business Research 70(1), 338–345.
Jorna, F and Wagenaar, P (2007) The “iron cage” strengthened? Discretion and digital discipline. Public Administration 85(1), 189–214.
Kersing, M, van Zoonen, L, Putters, K and Oldenhof, L (2022) The changing roles of frontline bureaucrats in the digital welfare state: The case of a data dashboard in Rotterdam’s work and income department. Data & Policy 4, E24.
Kitchin, R (2014) The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences. Los Angeles, CA: SAGE.
Lepri, B, Staiano, J, Sangokoya, D, Letouzé, E and Oliver, N (2017) The tyranny of data? The bright and dark sides of data-driven decision-making for social good. In Cerquitelli, T, Quercia, D and Pasquale, F (eds), Transparent Data Mining for Big and Small Data (Studies in Big Data). Cham: Springer International Publishing, pp. 3–24.
Lindblom, CE (1979) Still muddling, not yet through. Public Administration Review 39(6), 517–526.
Lipsky, M (2010) Street-Level Bureaucracy: Dilemmas of the Individual in Public Services, 30th Anniversary Expanded Edn. New York: Russell Sage Foundation.
Meijer, A and Grimmelikhuijsen, S (2021) Responsible and accountable algorithmization: How to generate citizen trust in governmental usage of algorithms. In Peeters, R and Schuilenburg, M (eds), The Algorithmic Society. London: Routledge, pp. 53–66.
Meijer, A, Lorenz, L and Wessels, M (2021) Algorithmization of bureaucratic organizations: Using a practice lens to study how context shapes predictive policing systems. Public Administration Review 81(5), 837–846.
Mergel, I, Rethemeyer, RK and Isett, K (2016) Big data in public affairs. Public Administration Review 76(6), 928–937.
Møller, AM (2022) Mobilizing knowledge in frontline work: A conceptual framework and empirical exploration. Perspectives on Public Management and Governance 5(1), 50–62.
Pasquale, F (2015) The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
Schwartz-Shea, P and Yanow, D (2012) Interpretive Research Design. New York: Taylor & Francis.
Seaver, N (2017) Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society 4(2), 1–12.
Simonofski, A, Tombal, T, De Terwangne, C, Willem, P, Frenay, B and Janssen, M (2022) Balancing fraud analytics with legal requirements: Governance practices and trade-offs in public administrations. Data & Policy 4, E14.
Stone, D (2012) Policy Paradox: The Art of Political Decision Making, 3rd Edn. New York: W.W. Norton & Co.
Van der Voort, HG, Klievink, AJ, Arnaboldi, M and Meijer, AJ (2019) Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making? Government Information Quarterly 36(1), 27–38.
Van der Voort, H, Van Bulderen, S, Cunningham, S and Janssen, M (2021) Data science as knowledge creation: A framework for synergies between data analysts and domain professionals. Technological Forecasting and Social Change 173, 1–10.
Van Dijck, J (2014) Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society 12(2), 197–208.
Vickers, G (1995) The Art of Judgment: A Study of Policy Making, Centenary Edn. Thousand Oaks, CA: SAGE.
Vydra, S and Klievink, B (2019) Techno-optimism and policy-pessimism in the public sector big data debate. Government Information Quarterly 36(4), 101383.
Weber, M and Tribe, K (2019) Economy and Society: A New Translation. Cambridge, MA: Harvard University Press.
Weick, KE (2009) Making Sense of the Organization, Volume 2: The Impermanent Organization. Chichester: Wiley.
Wulff, H (2002) Yo-yo fieldwork: Mobility and time in a multi-local study of dance in Ireland. Anthropological Journal on European Cultures 11, 117–136.
Yanow, D (2004) Translating local knowledge at organizational peripheries. British Journal of Management 15(1), 9–25.
Young, MM, Bullock, JB and Lecy, JD (2019) Artificial discretion as a tool of governance: A framework for understanding the impact of artificial intelligence on public administration. Perspectives on Public Management and Governance 2(4), 301–313.