
Motivated reasoning and policy information: politicians are more resistant to debiasing interventions than the general public

Published online by Cambridge University Press:  24 November 2020

JULIAN CHRISTENSEN*
Affiliation:
Department of Political Science, Aarhus University, Aarhus, Denmark
DONALD P. MOYNIHAN
Affiliation:
McCourt School of Public Policy, Georgetown University, Washington, DC, USA
*Correspondence to: Department of Political Science, Aarhus University, Bartholins Allé 7, 8000 Aarhus C., Denmark. Email: [email protected]; Twitter: @julianhupka

Abstract

A growing body of evidence shows that politicians use motivated reasoning to fit evidence with prior beliefs. In this, they are not unlike other people. We use survey experiments to reaffirm prior work showing that politicians, like the public they represent, engage in motivated reasoning. However, we also show that politicians are more resistant to debiasing interventions than others. When required to justify their evaluations, politicians rely more on prior political attitudes and less on policy information, increasing the probability of erroneous decisions. The results raise the troubling implication that the specialized role of elected officials makes them more immune to the correction of biases, and in this way less representative of the voters they serve when they process policy information.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s) 2020. Published by Cambridge University Press

Introduction

Proponents of evidence-based policymaking hope that enhanced access to policy information will help politicians make better decisions, leading to improved societal outcomes (Davies et al., Reference Davies, Nutley and Smith2000). Governments have accordingly built policy information infrastructures, including appointments of chief scientific advisors, establishment of scientific advisory committees (Doubleday & Wilsdon, Reference Doubleday and Wilsdon2012) and statutory requirements to report data on bureaucratic performance (Moynihan & Beazley, Reference Moynihan and Beazley2016). Most recently, the US federal government passed the Foundations for Evidence-Based Policymaking Act in 2019, which compels agencies to generate more information on how well policies are working.

Whether politicians actually make better decisions when given policy information has been called into question by research showing that people often use information simply to reach conclusions consistent with their political identities and attitudes (Kunda, Reference Kunda1990; Taber & Lodge, Reference Taber and Lodge2006; Kahan, Reference Kahan2016a). Such motivated reasoning makes it less likely that evidence will be judged on its merits. While empirical investigations have typically been based on studies of the mass public, some studies have also found evidence of motivated reasoning among elected politicians (Christensen et al., Reference Christensen, Dahlmann, Mathiasen, Moynihan and Petersen2018; Baekgaard et al., Reference Baekgaard, Christensen, Dahlmann, Mathiasen and Petersen2019; Esaiasson & Öhberg, Reference Esaiasson and Öhberg2019).

If both politicians and citizens engage in motivated reasoning, we might hope that democratic accountability processes will direct politicians toward better decisions by limiting their biases. After all, in a democracy, politicians are continuously required to justify their claims, such as through committee proceedings, legislative debates, town halls and media interviews. Justification requirements have been found to foster nuance in people's reasoning about a broad range of issues (Green et al., Reference Green, Visser and Tetlock2000; DeZoort et al., Reference DeZoort, Harrison and Taylor2006) and to reduce a variety of cognitive biases (Lerner & Tetlock, Reference Lerner and Tetlock1999; Aleksovska et al., Reference Aleksovska, Schillemans and Grimmelikhuijsen2019), thereby offering what Tetlock describes as a ‘simple, but surprisingly effective, social check on many judgmental shortcomings’ (Tetlock, Reference Tetlock1983, p. 291). However, while scholars have pointed to justification requirements as a potential way to reduce motivated reasoning (Kunda, Reference Kunda1990; Bartels & Bonneau, Reference Bartels and Bonneau2014), evidence on the effects of justification requirements on politically motivated reasoning has been scarce. We thus ask the following research question: Do politicians and members of the general public alter their reasoning about policy information when they are required to justify their evaluation of the information?

Very few studies have provided experimental evidence of psychological processes among actual politicians (for notable exceptions, see Miler, Reference Miler2009; Sheffer et al., Reference Sheffer, Loewen, Soroka, Walgrave and Sheafer2017; Baekgaard et al., Reference Baekgaard, Christensen, Dahlmann, Mathiasen and Petersen2019), and there is particular value in documenting the extent to which politicians’ reasoning mirrors or departs from the voters they represent. We study how Danish local politicians, and the public they serve, interpret information about local public services within their policy portfolio – elder care and schools. A randomized survey experiment and a decision board experiment asked subjects to evaluate public and private service providers, allowing us to separate the effects of policy information about provider performance from that of political beliefs. We hypothesize that politicians and citizens are biased by information-related political attitudes when evaluating policy information, but that asking them to justify their evaluations will lead to more effortful, less biased evaluations.

As expected, we find strong evidence of motivated reasoning among both the public and politicians. However, politicians and the public differ in their reactions to justification requirements. Both groups spend more effort processing information when they are asked to justify their evaluations. While this effort reduces the influence of prior attitudes among the public, the reverse is the case among politicians. Politicians rely more on prior attitudes and less on evidence when they know that they must justify their evaluations. We conclude by addressing possible reasons for this and by discussing the broader implications of the results.

Our findings highlight a need to take roles seriously when studying elite behavior. Behavioral scientists tend to assume (explicitly or, more often, implicitly) that their research is about general human behavior, meaning that ‘the findings one derives from a particular sample [of subjects] will generalize broadly; one adult human sample is pretty much the same as the next’ (Henrich et al., Reference Henrich, Heine and Norenzayan2010, p. 63). For example, literature on evidence-based policymaking identifies motivated reasoning as a core obstacle to factually informed policymaking, but suggests, based on studies of students and other members of the public, that justification requirements can reduce politicians’ motivated reasoning about evidence (Bartels & Bonneau, Reference Bartels and Bonneau2014, p. 226). Our results show that such efforts, while productive for the public, can actually backfire by encouraging stronger motivated reasoning among politicians. One possible reason for this is that politicians have stronger incentives than members of the public to maintain consistency of political views, since external audiences monitor and impose costs if politicians cannot make credible commitments (Tomz, Reference Tomz2007).

Justification requirements and motivated reasoning

The theory of motivated reasoning is among the most studied in modern political psychology, and so we will not offer a detailed description of it here (for good introductions, see Kunda, Reference Kunda1990; Taber & Lodge, Reference Taber and Lodge2006; Kahan, Reference Kahan2016a). Briefly stated, the theory proposes that people's interpretations of information are driven by goals, and that goals have implications for the interpretation strategies used. Motivated reasoning theory distinguishes between two archetypical types of goals: accuracy and directional goals. People driven by accuracy goals wish to ‘arrive at an accurate conclusion, whatever it may be’ (Kunda, Reference Kunda1990, p. 480), causing an investment of cognitive effort into careful and unbiased evaluations. People driven by directional goals seek to reach a particular, preselected conclusion. This is often the case when information has political implications because people are motivated to defend their political identities and attitudes. Therefore, people make biased evaluations in defense of their desired conclusions even when a great deal of mental agility is required to do so.

Numerous studies find that ordinary people engage in motivated reasoning when evaluating policy information (Taber & Lodge, Reference Taber and Lodge2006; Goren et al., Reference Goren, Federico and Kittilson2009; Taber et al., Reference Taber, Cann and Kucsova2009; Kahan et al., Reference Kahan, Peters, Dawson and Slovic2017; Lind et al., Reference Lind, Erlandsson, Västfjäll and Tinghög2018). The reasoning of elected officials has seen less attention, partly because of the difficulty in recruiting large numbers of politicians as participants in the survey experiments typically used in this field (Druckman & Lupia, Reference Druckman and Lupia2012), and much of the evidence we have about elected officials is only tangentially related to motivated reasoning. For instance, studies of political incumbents in Belgium, Israel and Canada show politicians to be just as or even more subject to various cognitive biases seen in the mass public (Sheffer et al., Reference Sheffer, Loewen, Soroka, Walgrave and Sheafer2017). Furthermore, a set of survey experiments on US state and local officials finds that politicians are willing to rationalize constituents with opposing views as less informed (Butler & Dynes, Reference Butler and Dynes2016).

The evidence that does exist suggests that politicians engage in motivated reasoning in the same manner as the public. For example, Christensen and colleagues (Reference Christensen, Dahlmann, Mathiasen, Moynihan and Petersen2018) found that politicians use goal reprioritization as a strategy to make attitude-congenial interpretations of policy information: more liberal politicians generally tend to treat academic performance as a less important educational goal relative to student well-being, but they flip that preference when confronted with evidence that public schools outperform private schools on academic performance. Cumulatively, existing evidence (see also Baekgaard et al., Reference Baekgaard, Christensen, Dahlmann, Mathiasen and Petersen2019; Esaiasson & Öhberg, Reference Esaiasson and Öhberg2019) thus implies that both politicians and the public engage in motivated reasoning:

H1: Politicians and the public engage in politically motivated reasoning when they evaluate policy information.

If politicians make biased evaluations of policy information, it is likely that their use of the information will also be biased. Are there conditions under which biases are more or less pronounced, or ways in which they can be reduced? Some studies have found variations in voters’ tendency to engage in biased reasoning based on individual-level factors, such as political knowledge, attitude strength and personality differences (Taber & Lodge, Reference Taber and Lodge2006; Taber et al., Reference Taber, Cann and Kucsova2009; Arceneaux & Vander Wielen, Reference Arceneaux and Wielen2017). Others have found variations based on contextual factors, such as monetary incentives to make accurate evaluations (Bullock et al., Reference Bullock, Gerber, Hill and Huber2015; Prior et al., Reference Prior, Sood and Khanna2015), the politicization of the information environment (Slothuus & de Vreese, Reference Slothuus and de Vreese2010) and the amount of information available (Redlawsk et al., Reference Redlawsk, Civettini and Emmerson2010; Baekgaard et al., Reference Baekgaard, Christensen, Dahlmann, Mathiasen and Petersen2019). This article contributes to the literature on contextual variations in motivated reasoning by asking whether politicians and members of the general public alter their reasoning about policy information when required to justify their evaluations.

In the existing literature, justification requirements have been found to ‘signal to subjects to take the role of the other toward their own mental processes and to give serious weight to the possibility that their preferred answers are wrong’ (Tetlock & Kim, Reference Tetlock and Kim1987, p. 707). Thus, using the terminology of motivated reasoning theory, justification requirements encourage accuracy-driven evaluations, and people tend to respond by investing effort in making more complex, careful and accurate analyses of the information at hand (Tetlock, Reference Tetlock1985; Tetlock & Kim, Reference Tetlock and Kim1987; Lerner et al., Reference Lerner, Goldberg and Tetlock1998).

Studies of ordinary citizens have found debiasing effects of justification requirements in relation to a variety of cognitive biases (for reviews, see Lerner & Tetlock, Reference Lerner and Tetlock1999; Aleksovska et al., Reference Aleksovska, Schillemans and Grimmelikhuijsen2019), and of special relevance to our research question, a number of studies show that justification requirements reduce people's tendency to engage in self-serving biases. For instance, requirements to justify evaluations have been found to reduce people's tendency to overestimate their own performance (Kroon et al., Reference Kroon, Van Kreveld and Rabbie1992; Sedikides et al., Reference Sedikides, Herbst, Hardin and Dardis2002; Smith, Reference Smith2012) and the likelihood of positive events while underestimating the likelihood of negative events in their own lives (Tyler & Rosier, Reference Tyler and Rosier2009). Justification requirements appear to reduce people's overconfidence in their own decisions, making them more willing to consider alternative courses of action in reaction to negative performance feedback (Jermias, Reference Jermias2006).

Our knowledge is more limited when it comes to the effects of justification requirements on people's reasoning about policy information, but there is reason to be cautiously optimistic. Justification requirements increase people's tendency to ‘see valid arguments on both sides of [a political] issue and to balance competing legitimate concerns against one another’ (Green et al., Reference Green, Visser and Tetlock2000, p. 1380). Furthermore, De Dreu and van Knippenberg (Reference De Dreu and van Knippenberg2005) found that justification requirements reduced people's tendency to overvalue and aggressively defend their own political arguments, and Bolsen and colleagues (Reference Bolsen, Druckman and Cook2014) found reduced biases in a party cue experiment when respondents were asked to justify their answers.

The promise of justification requirements has prompted calls to employ them as a means to compel policymakers to take politically uncongenial evidence into consideration (Bartels & Bonneau, Reference Bartels and Bonneau2014, p. 226). However, to our knowledge, no study has directly tested the effects of justification requirements on politicians, and caution is merited in generalizing findings from the public to politicians. After all, politicians are professional partisans (Andeweg, Reference Andeweg1997). They are expected to hold consistent political views (Tavits, Reference Tavits2007; Tomz, Reference Tomz2007), meaning that they are strongly committed to the attitudes for which they have been elected. One study suggests that justification requirements reduce the complexity of respondents’ thinking about contested political issues when the respondents have previously committed themselves to attitudes regarding the issues (Tetlock et al., Reference Tetlock, Skitka and Boettger1989). However, other research shows political elites to respond positively to interventions reminding them of accountability processes where they have to engage others with their claims. For instance, discussions with peers reduce confirmation bias among policy experts working in international organizations (Banuri et al., Reference Banuri, Dercon and Gauri2019), and US state legislators who were exposed to letters warning about the reputational and electoral risks of misstatements were less likely to subsequently receive a negative fact-check rating (Nyhan & Reifler, Reference Nyhan and Reifler2015). Thus, our expectation is to find debiasing effects of justification requirements, which is reflected in the following hypotheses:

H2: Politicians and the public will engage in a more effortful search for and processing of policy information when they are asked to justify their evaluations.

H3: Politicians and the public will engage in less politically motivated reasoning when they are asked to justify their evaluations.

Empirical setting and data collection

Testing our hypotheses requires data on how a large number of politicians and the public process and evaluate comparable pieces of information in situations with and without requirements to justify evaluations. To collect such data, two randomized experiments were run. H1 and H3 are tested with a survey experiment inspired by Kahan and colleagues (Reference Kahan, Peters, Dawson and Slovic2017), while H2 is tested with an online decision board experiment, allowing us to collect behavioral measures on the amount of effort invested in searching for and processing information (Willemsen & Johnson, Reference Willemsen, Johnson, Schulte-Mecklenbeck and Ranyard2011).

By relying on online data collection, we were able to incorporate answers from a large number of elected politicians (Danish city councilors). Denmark has 98 municipalities, led by city councilors elected through municipal elections every 4 years. The elections are characterized by professionalized campaigns, extensive media coverage and voter turnout fluctuating around 70% (Hansen, Reference Hansen2018). About 95% of the city councilors represent national political parties that also compete for power in the Danish parliament. Councilors are responsible for the local delivery of core public services, such as education, childcare, elder care and employment activities, and municipal budgets represent about half of all public expenditures in Denmark (Ministry of Finance, 2018). Thus, while councilors may not be as professional as, for example, members of national parliaments, they are real-world politicians elected to make substantive decisions (Baekgaard et al., Reference Baekgaard, Christensen, Dahlmann, Mathiasen and Petersen2019). The high number of councilors makes large samples possible, even with the relatively low response rates typical of political elites (Druckman & Lupia, Reference Druckman and Lupia2012). Email invitations to participate were sent to all 2445 city councilors using publicly available email addresses. A total of 889 city councilors participated in the test of H1 and H3 (data collected in November–December 2014),Footnote 1 while 718 city councilors participated in the test of H2 (data collected in November–December 2016). Members of all Danish city councils contributed to our investigation.Footnote 2

Two samples of the Danish public participated in identical experiments, thereby making it possible to directly compare politicians’ responses to those of the public. The samples were recruited through YouGov's online panel of respondents. Both were representative of the Danish population aged 18–75 with regard to age, gender, education and geography. A total of 2109 people participated in the test of H1 and H3 (data collected in December 2016), while 1063 people participated in the test of H2 (data collected in February 2017).

Experimental design and analysis

H1: Motivated reasoning about policy information

To test H1, we employed a standard motivated reasoning design (Baekgaard & Serritzlew, Reference Baekgaard and Serritzlew2016; Kahan et al., Reference Kahan, Peters, Dawson and Slovic2017; Lind et al., Reference Lind, Erlandsson, Västfjäll and Tinghög2018; Baekgaard et al., Reference Baekgaard, Christensen, Dahlmann, Mathiasen and Petersen2019). Respondents were randomly assigned to one of four experimental conditions (see Figure 1, translated from Danish into English). Each was presented with a table of numerical information about the performance of two suppliers of elder care (a core public service for which city councilors are responsible) and asked to evaluate which supplier performed best.Footnote 3

Figure 1. Experimental material, groups A–D.

The information provided was cognitively demanding in that the absolute numbers were not informative by themselves (satisfaction rates needed to be computed). However, the information was unambiguous in that answers to the performance question could be coded as either correct or incorrect. Thus, converting the information from absolute to relative numbers reveals that one supplier had a satisfaction rate of 83.0% compared to 74.7% for the other. In groups A and C, supplier A and the municipal supplier had the higher satisfaction rate. In groups B and D, the numbers were switched, meaning that supplier B and the private supplier were the best-performing suppliers.
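The computation respondents had to perform can be illustrated with a minimal sketch. The counts below are hypothetical (the actual figures appear in Figure 1); they are chosen only so that the resulting rates match the 83.0% and 74.7% reported above:

```python
# Hypothetical absolute counts illustrating the conversion respondents
# had to perform; the actual figures are shown in Figure 1.
suppliers = {
    "Supplier A": {"satisfied": 166, "dissatisfied": 34},   # 166 of 200
    "Supplier B": {"satisfied": 112, "dissatisfied": 38},   # 112 of 150
}

def satisfaction_rate(counts):
    """Convert absolute counts into a relative satisfaction rate."""
    total = counts["satisfied"] + counts["dissatisfied"]
    return counts["satisfied"] / total

for name, counts in suppliers.items():
    print(f"{name}: {satisfaction_rate(counts):.1%}")
# Supplier A: 83.0%
# Supplier B: 74.7%
```

The point of the design is that neither raw count column identifies the better supplier on its own; only the computed rates do.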

For groups A and B, suppliers were labeled as ‘supplier A’ and ‘supplier B’. Here, respondents’ ability to correctly identify the best-performing supplier should only depend on their numeracy, and thus, the groups serve as placebo or control groups, offering a baseline against which the influence of political attitudes can be measured. Groups C and D were told that one supplier was municipal (public), while the other was private. The relative role of the public and private sectors in delivering public services is a highly contested issue in Danish politics, and thus, contracting out elder care and other public services has ‘been at the center of party conflict’ for more than two decades (Slothuus & de Vreese, Reference Slothuus and de Vreese2010, p. 634). Local politicians are at the frontline of this debate, as Danish city councils must regularly decide on whether or not to contract out specific services. Following H1, respondents’ attitudes toward public and private service delivery should therefore be expected to alter evaluations in groups C and D (where there was a link between these attitudes and the information), but not in groups A and B.

Relevant political attitudes were captured at the beginning of the survey by asking three questions about respondents’ preferences for public or private delivery of public services.Footnote 4 An additive index was constructed, running from 0 to 1. The distribution of responses for politicians and the public is reported in Supplementary Material S1.
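The index construction can be sketched as follows. The three-item, equal-weight averaging follows the description above; the 1–5 response scale is an assumption for illustration (the actual item formats are described in footnote 4):

```python
def attitude_index(responses, scale_min=1, scale_max=5):
    """Rescale each item to [0, 1] and average into an additive index.

    `responses` holds the raw answers to the three public-vs-private
    delivery items. The 1-5 scale is assumed for illustration only.
    """
    span = scale_max - scale_min
    rescaled = [(r - scale_min) / span for r in responses]
    return sum(rescaled) / len(rescaled)

print(attitude_index([1, 1, 1]))  # one endpoint of the index: 0.0
print(attitude_index([5, 5, 5]))  # the other endpoint: 1.0
```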

Our results are consistent with H1. In both treatment groups, respondents more accurately evaluated policy information when it was attitude-congenial (i.e., when the information supported their desired conclusion about whether public or private suppliers perform better) than when the information was attitude-uncongenial (i.e., when the information challenged their desired conclusions). Furthermore, as expected, the treatment groups’ strong associations between attitudes and answers were not present among respondents in the placebo groups (see regression analyses in the supplementary material's Table S2 comparing group A to group C and group B to group D). Figure 2, which is based on models 3 and 6 in the supplementary material's Table S2, shows associations between the uncongeniality of information and treatment groups’ tendencies to misinterpret the information. Thus, Figure 2 pools data from groups C and D and models uncongeniality as the degree to which information challenges respondents’ information-related attitudes.

Figure 2. Uncongeniality of information and expected probabilities of making erroneous judgments in identifying best-performing suppliers.

Note: This figure is based on regression analyses reported in the supplementary material's Table S2 (models 3 and 6). The horizontal axis runs from 0 to 1, with higher values corresponding to stronger support for the public sector if the private supplier performs best (group D in experiment) and stronger support for the private sector if the public supplier performs best (group C).

Figure 2 illustrates that, among politicians, predicted probabilities of correctly identifying the best-performing supplier range between 57% when the information is most uncongenial and 92% when the information is most congenial. Among non-politicians, predicted probabilities of correctly identifying the best-performing supplier vary between 32% when the information is most uncongenial and 83% when the information is most congenial. Additional analyses (reported in the supplementary material's Table S3a) show that the politicians’ results are not significantly different from the public's results. It should be noted that the public results are based on respondents who passed an attention check in our survey, as respondents who do not pay a minimum of attention to a survey cannot be expected to react meaningfully to experimental treatments (Berinsky et al., Reference Berinsky, Margolis and Sances2014). Details on the attention check and the consequences of including inattentive respondents are reported in Supplementary Material S5. Table S5b shows that including inattentive respondents in the analysis does not alter any results regarding H1.
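How such predicted probabilities arise from a logistic regression can be illustrated with a small sketch. The coefficients below are not the paper's estimates (those are in the supplementary material's Table S2); they are back-solved purely to reproduce the 92%/57% endpoints reported for politicians:

```python
import math

def logit_prob(x, b0, b1):
    """Predicted probability from a logistic model: P = 1/(1 + exp(-(b0 + b1*x)))."""
    return 1 / (1 + math.exp(-(b0 + b1 * x)))

# Illustrative coefficients chosen so that P(correct) = 0.92 at full
# congeniality (x = 0) and 0.57 at full uncongeniality (x = 1).
b0 = math.log(0.92 / 0.08)
b1 = math.log(0.57 / 0.43) - b0

print(round(logit_prob(0.0, b0, b1), 2))  # 0.92
print(round(logit_prob(1.0, b0, b1), 2))  # 0.57
```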

H2: Effects of justification requirements on information search and processing

H2 predicts that justification requirements will make respondents engage in a more effortful search for and processing of information. To test H2, behavioral process measures are needed. We employed an online decision board experiment using MouselabWEB (Willemsen & Johnson, Reference Willemsen, Johnson, Schulte-Mecklenbeck and Ranyard2011). Respondents were asked to click through boxes with information regarding the performance of a public and a private school and evaluate which school performed best. Because respondents had to click through the policy information, the decision board technique made it possible to track their behavior when searching for and processing information.

The decision board contained information regarding the two schools’ performance on five indicators, meaning that there were a total of 10 information boxes as shown in Figure 3.Footnote 5 In order to see information, respondents had to click on each box and the information would then remain visible as long as the respondent's cursor was placed over it.Footnote 6 We randomized the order of the performance indicators and which school performed best for each indicator. Respondents were informed that they could click through all 10 boxes if they wished or could stop when they felt that they had collected enough information. This procedure made it possible to measure the effort each respondent invested in searching for information (modeled as the number of boxes opened) and the effort they invested in actively processing the information (modeled as the time spent with information boxes opened).
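The two effort measures can be sketched from an event log. The log format below is an assumption for illustration (one record per box-opening, with open and close timestamps in milliseconds); MouselabWEB's actual logging format may differ:

```python
# Minimal sketch of the two effort measures described above, assuming
# one (box_id, open_ms, close_ms) record per box-opening event.
def effort_measures(events):
    """Return (distinct boxes opened, total ms spent with a box open)."""
    boxes_opened = len({box_id for box_id, _, _ in events})
    dwell_ms = sum(close_ms - open_ms for _, open_ms, close_ms in events)
    return boxes_opened, dwell_ms

log = [
    ("grades_public", 1000, 3500),
    ("grades_private", 4000, 6000),
    ("grades_public", 7000, 8000),  # reopening adds dwell time, not a new box
]
print(effort_measures(log))  # → (2, 5500)
```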

Figure 3. Information boxes in the decision board experiment (English translation).

Note: For each respondent, the order of the performance indicators was randomized. Moreover, within each performance indicator, it was randomized as to which school performed best.

Respondents were randomly assigned to a control group or a treatment group. Both groups were asked to use the information to evaluate which school performed best and, in addition, the treatment group was exposed to the following text asking them to provide a written justification of their evaluation: ‘Furthermore, we will ask you to write an argument for your evaluation. Your argument should be suitable for discussion with a person who thinks that the other school performs best’ (emphasis in original). The following open-ended question was constantly visible at the bottom of the treatment group's decision board: ‘Imagine that you are to discuss your answer with a person who thinks that the other school performs best. What would you emphasize in the information above to persuade the other person that your evaluation is correct? Please limit your answer to three lines’ (emphasis in original).

Our written justification requirement resembles treatments from previous studies. Such written justification requirements have been shown to lead to a ‘more complex and careful analysis of available information’ (DeZoort et al., Reference DeZoort, Harrison and Taylor2006, p. 385) and to improve decision quality in accounting and auditing settings (Ashton, Reference Ashton1990, Reference Ashton1992). In addition, Bolsen and colleagues (Reference Bolsen, Druckman and Cook2014) found reduced motivated reasoning in response to a written justification requirement in a survey-based party cue experiment. Thus, we predicted that the treatment group would engage in a more effortful search for information (by opening more boxes) and more effortful processing of it (by spending more time with boxes opened) than the control group.

The results of our test of H2 are reported in Table 1. We find no effect of justification requirements on respondents’ search for information. The ‘Justification requirement’ coefficient is statistically insignificant, both for the politicians in model 1 and the public in model 3, meaning that the treatment groups’ average number of opened boxes does not vary significantly from the number in the control groups. It should be noted that 72% of the politicians and 76% of the public opened all 10 boxes, limiting the degree of variation. Future research is encouraged to replicate the experiment with a higher number of boxes to test whether a greater need to prioritize will lead to other results. However, for now, we conclude that our decision board experiment does not offer support for the information search part of H2.

Table 1. Influence of justification requirements in decision board (ordinary least squares with standard errors in parentheses).

Note: In models 1 and 3, the dependent variable measures the number of information boxes being opened in the decision board experiment. In models 2 and 4, the dependent variable measures the number of milliseconds spent with information being opened.

†p < 0.1, *p < 0.05, **p < 0.01, ***p < 0.001; two-sided significance tests.

We do, however, find some evidence of an effect of justification requirements on respondents’ processing of information. Thus, the ‘Justification requirement’ coefficient is statistically significant among the politicians in model 2 and marginally significant among the general public in model 4 (p = 0.054), meaning that treatment group participants did, on average, spend more time with information opened than members of the control groups.Footnote 7 Politicians who were not asked to justify their evaluations spent an average of 24.3 seconds on actively processing information and those asked to justify their evaluations spent an average of 29.3 seconds, meaning that the justification requirement led to an increase of 21% in time spent actively processing information. General public respondents who were not asked to justify their evaluations spent an average of 14.4 seconds on processing the information and those asked to justify their evaluations spent an average of 16.5 seconds, meaning that the justification requirement led to an increase of 15% in time spent actively processing information. Additional analyses (reported in Supplementary Material S3b) show no statistically significant difference between the politicians’ and the public's results.
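The relative increases reported above follow directly from the group means. A quick check of the arithmetic:

```python
# Percentage increase in active processing time (seconds with boxes open),
# treatment group relative to control group, using the means reported in the text.
def pct_increase(control, treatment):
    return (treatment - control) / control * 100

print(round(pct_increase(24.3, 29.3)))  # politicians: 21
print(round(pct_increase(14.4, 16.5)))  # general public: 15
```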

H3: Effects of justification requirements on motivated reasoning

Finally, we examine whether justification requirements reduce respondents’ tendency to engage in motivated reasoning. To test this question, we added two experimental groups (E and F) to the experiment that tested H1. Group E was asked to evaluate information that was identical to the information in group C and group F was asked to evaluate information that was identical to the information in group D (see Figure 1), but prior to the information, the following text informed groups E and F that they would be asked to justify their evaluation: ‘On the next page, we will show you a table with information on elder care delivered by two suppliers. We will ask you to evaluate which supplier performs best. Furthermore, we will ask you to formulate an argument for your evaluation. Your argument should be suitable for discussion with a person who thinks that the other supplier performs best’ (emphasis in original).

By specifying that respondents’ arguments should be suitable for discussion with someone who disagrees with their evaluation, we seek to simulate the adversarial nature of political discourse. This was important, as prior studies have found that discussions with fellow partisans (agreeing with one's own attitudes) can amplify politically motivated reasoning (Klar, 2014), consistent with the notion that justification requirements can lead to stronger biases when ‘the choice option that appears easiest to justify also happens to be the biased option’ (Lerner & Tetlock, 1999, p. 264). Reminders of the justification requirement were embedded into the survey page where respondents evaluated the elder care suppliers. Thus, the following sentence was added at the end of Figure 1's introductory text: ‘We will now ask you to evaluate which supplier performs best and to give a reason for your evaluation’ (emphasis in original), and the performance question was phrased to include the following reminder: ‘Based on this information, which supplier do you think performs best, and why?’ Finally, the following open-ended question was included immediately after the performance question such that it was visible to the respondents while evaluating the information: ‘Imagine that you are to discuss your answer with a person who thinks that the other supplier performs best. What would you emphasize in the table to persuade the other person that your evaluation is correct? Please limit your answer to three lines.’

We test H3 in Table 2, where the interaction term ‘Congeniality × Justification requirement’ tests the expectation of weaker associations between attitudes and evaluations in groups E and F where respondents were asked to justify their evaluations, compared to groups C and D where no justification was required. The positive and statistically significant ‘congeniality’ coefficients in models 2 and 4 reinforce H1, indicating that the congeniality of information is positively and significantly related to respondents’ ability to correctly identify the best-performing supplier when no justification is required.

Table 2. Moderating effects of justification requirements on influence of attitudes (logistic regression analysis with standard errors in parentheses).

Note: The dependent variable measures whether respondents identify the supplier with the highest satisfaction rate as being the one that performs the best. Congeniality runs from 0 to 1, with higher values corresponding to stronger support for the public sector if the public supplier performs best (groups C and E in the experiment) and stronger support for the private sector if the private supplier performs best (groups D and F).

†p < 0.1, *p < 0.05, **p < 0.01, ***p < 0.001; two-sided significance tests.

The results run contrary to H3 for politicians. Politicians become significantly more affected by the congeniality of the information when they are asked to justify their evaluations, as reflected in the significant interaction term in model 2. Thus, among politicians, the justification requirement seems to have bias-strengthening instead of debiasing effects.

Members of the general public behave in accordance with H3, although the evidence is not strong. The coefficient for model 4's interaction term is negative and marginally significant (p = 0.09), suggesting that congeniality matters less when people have to justify their evaluations (see footnote 8). Additional analyses in Supplementary Materials S3c and S5f show that the difference between the politicians and non-politicians with regard to H3 is statistically significant, controlling for age, gender and education, and regardless of the inclusion of inattentive respondents.

The moderating impact of the justification requirement on respondents’ tendency to engage in motivated reasoning is large. Among the politicians asked to justify their evaluations, predicted probabilities of correctly identifying the best-performing supplier range between 44% when the information is most uncongenial and 98% when the information is most congenial, meaning that the justification requirement increases the impact of congeniality from 35 (cf. test of H1) to 54 percentage points. Among the non-politicians asked to justify their evaluations, predicted probabilities of correctly identifying the best-performing supplier vary between 47% and 81%, meaning that the justification requirement reduces the impact of congeniality from 51 to 34 percentage points.
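The impact figures above are differences between predicted probabilities at the two congeniality extremes. The mechanics can be sketched with a logistic model in which a positive congeniality × justification interaction widens that gap; the coefficients below are hypothetical illustrations, not the estimates from Table 2:

```python
import math

def p_correct(congeniality, justified, b0=0.0, b_cong=1.5, b_just=0.0, b_int=1.0):
    """Predicted probability of a correct evaluation from a logistic model with
    a congeniality x justification interaction (coefficients are hypothetical)."""
    z = b0 + b_cong * congeniality + b_just * justified + b_int * congeniality * justified
    return 1 / (1 + math.exp(-z))

# Impact of congeniality = gap between most congenial (1) and most
# uncongenial (0) information, in percentage points.
gap_without = (p_correct(1, 0) - p_correct(0, 0)) * 100
gap_with = (p_correct(1, 1) - p_correct(0, 1)) * 100
# With a positive interaction coefficient, gap_with exceeds gap_without,
# mirroring the bias-strengthening pattern found for politicians.
```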

Discussion and exploratory analysis

The motivation to defend political attitudes is powerful, leading many to accept politically congenial information uncritically while disregarding information that challenges existing views about the world. While our results lend some support to the potential to debias citizens, they show that politicians become more inclined to engage in politically motivated reasoning when required to justify their evaluations.

Why might politicians differ from citizens in their reactions to our experiment's justification requirement? While our study was not designed to offer causal evidence to answer this question, we can draw on our data to explore which possible explanations are more or less likely. One possibility is that personal characteristics, such as being more politically engaged (Taber & Lodge, 2006), make politicians more resistant to debiasing interventions, meaning that the politician–citizen differences are due to self-selection. As a proxy for such personal characteristics, we can test the role of political interest, which was measured in the general public survey and is ‘a standard measure of psychological engagement in politics’ (Brady et al., 1995). If the bias-strengthening effects of justification requirements among politicians are driven by self-selection based on political engagement, similar effects would be expected among the group of people who are most politically interested. However, this is not the case for our sample (see the regression analysis in Supplementary Material S4). The respondents who are most interested in politics, and who should therefore be expected to react most like politicians, are the ones who drive the overall debiasing effect on non-politicians’ reasoning, meaning that they are the ones who behave least like politicians in reaction to justification requirements. Thus, our data suggest that explanations other than self-selection must be considered.

Another possibility is that the politician's role changes how people respond to justification requirements. Some studies show that professional roles lead certain groups to make unbiased professional judgments (Kahan, 2016b). For instance, relative to the public, judges and lawyers appear to be less biased when asked to evaluate judicial information, implying that legal training, but possibly also the demands of their job, conditions legal professionals to better resist politically biased processing of information (Kahan et al., 2016). Like judges and lawyers, we may consider a politician to be a professional actor who is regularly asked to make judgments based on decision-relevant information. However, where a judicial professional is expected to set aside political attitudes and partisan identities, it is a politician's job to be a partisan (Andeweg, 1997) and to avoid punishment from an external audience that values credible commitments (Tomz, 2007). As discussed in relation to H2 and H3, politicians are expected to be consistent in their political views and to defend the policy preferences upon which they have been elected. Politicians are trained to treat inconsistency as a sign of weakness, the trademark of a flip-flopper who will be penalized by voters and other political stakeholders (Tomz, 2007). Thus, their professional role gives politicians an incentive to treat justification requirements not as an opportunity to examine and nuance their own reasoning, but as an occasion to construct arguments in favor of preselected conclusions.

While our experiments were not designed to test effects of role differences between politicians and the public, we can compare the responses of recently elected politicians with those of more experienced colleagues. If the bias-strengthening effect of justification requirements is due to politician-specific norms, we would expect the effect to be stronger among those who have been more exposed to those norms over time. Table 3 divides politicians into those elected in the previous year (39% of our sample) and the rest, all of whom had been in office for 5 years or more. Consistent with the role-based explanation, Table 3 shows that the bias-strengthening effect is driven by experienced politicians. The justification requirement has no effect on the recently elected politicians in model 1, but has significant bias-strengthening effects on the experienced politicians in model 2.

Table 3. Recently elected versus experienced politicians (logistic regression analysis with standard errors in parentheses).

Note: Politicians are coded as ‘recently elected’ if the most recent election (in November 2013, 1 year before our data collection) was the first election where they were elected and ‘experienced’ if they were elected before the 2013 election. The dependent variable measures whether respondents identify the supplier with the highest satisfaction rate as being the one that performs the best.

†p < 0.1, *p < 0.05, **p < 0.01, ***p < 0.001; two-sided significance tests.

To cast further light on the reasoning strategies of our respondents, we coded the qualitative content of the written justifications (for the coding scheme and analyses, see Supplementary Material S6). The results of our qualitative content analyses provide additional evidence that our results are driven by experienced politicians who have learned strategies to confront attitude-uncongenial information as expert motivated reasoners. Thus, whereas the qualitative content of the justifications provided by non-politicians and recently elected politicians was more or less unaffected by the attitude-congeniality of the experiments’ information, experienced politicians more often adapted their arguments depending on the information at hand. Specifically, as reported in the supplementary material's Tables S6ca and S6cb, the experienced politicians tended to base their justifications on the tables’ data (i.e., they referred to parent satisfaction) when this was attitude-congenial. However, Table S6ce shows that when the data were uncongenial, the experienced politicians more often based their justifications on specific conditions of local government (such as equity considerations or expectations regarding the education of staff). Because these are exploratory analyses of data that were not collected for the purpose of testing the effects of roles, caution is needed when evaluating the results. However, the results are consistent with the idea that, over time, through their job, politicians learn how to defend their attitudes and beliefs ‘like a politician’ when faced with attitude-uncongenial information.

Conclusion

We conclude by noting some limitations to our study and discussing the broader implications of our results. While survey experiments such as ours are well-equipped to provide causal evidence, caution is needed in generalizing the results beyond the experimental (often rather artificial) setting. For instance, our design asked respondents to make relatively quick interpretations of information that was limited, stylized and hypothetical. Moreover, respondents were asked to identify the better-performing of two suppliers whose satisfaction rates were not very different from one another (83% versus 75%). One may also argue that the cost of making erroneous interpretations, or even intentional mistakes, will often be higher in the real world of policymaking.

We acknowledge the theoretical possibility that people might behave differently in scenarios with access to larger amounts of (potentially counter-attitudinal) information and with more need to engage actively with the information at hand. For instance, some of the literature suggests that people's tendency to engage in motivated reasoning can be limited (e.g., by increasing the amount of counter-attitudinal information to be evaluated; Redlawsk et al., 2010). However, other studies have found politicians (but not members of the general public) to react with more motivated reasoning when they are confronted with larger amounts of policy information (Baekgaard et al., 2019), thereby calling into question the debiasing effects of forcing politicians to engage with counter-attitudinal information. We invite future research to address the external validity of our findings empirically by replicating and extending our basic claims under different conditions, settings and constraints and, ideally, with observations of actual decisions.

While additional research is needed in order to assess the boundary conditions of the behaviors we observe, our results have important implications, both for our understanding of politicians’ use of policy information and for research on elite behavior more broadly.

Politicians are constantly compelled to justify their decisions. Indeed, it is a central element of their job, partly because we hope that forcing such justifications through adversarial processes pushes them to offer policy claims that are more grounded in evidence. Our findings suggest that these processes of justification, which offer a check on motivated reasoning for the public, have the opposite effect on politicians. While representative democracy is premised on the idea that elected officials weigh policy evidence more carefully than voters, the justification processes inherent in their role actually seem to worsen the tendency to engage in motivated reasoning. The troubling paradox raised by our findings is that motivated reasoning is systemically amplified by the very political processes intended to reduce it.

Our results indicate that behavioral scientists who are interested in elites should think carefully about the extent to which elite roles may affect behaviors of interest. In cases where such roles may matter, researchers should attempt to run studies on elite samples or, at a minimum, attempt to identify groups of people who behave most like elites, instead of uncritically generalizing from findings obtained from non-elite samples. This is a demanding task in terms of the nature of the data to be collected, complicating research on elite decision-making. But to do otherwise risks misdiagnosing decision-making problems and potential solutions.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/bpp.2020.50.

Footnotes

1 We would like to thank Casper Mondrup Dahlmann, Asbjørn Hovgaard Mathiasen and Niels Bjørn Grund Petersen with whom we collected the politician data used to test H1 and H3 and who contributed significantly to the design of that survey.

2 While members of left-wing parties were slightly overrepresented in the tests of H1 and H3, respondents did not differ significantly from the population of Danish city councilors in terms of gender, municipality size or municipal finance committee membership. No background information is available for politicians participating in the test of H2.

3 For ethical reasons, we made clear in the introduction to the experiment that the information was hypothetical.

4 The questions were: ‘To what extent do you agree or disagree with the following statements? (1) Many public activities could be produced both better and more cheaply by private providers. (2) We should to a larger degree outsource public services (such as childcare, elder care and hospital treatments). (3) The public sector is best at providing public services.’ Possible responses were as follows: Completely agree, Partly agree, Neither agree nor disagree, Partly disagree, Completely disagree or Don't know. In a factor analysis, factor scores were all above 0.8 for the politicians and above 0.7 for the non-politicians. Cronbach's α was 0.92 for politicians and 0.87 for non-politicians.
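The reliability figures in footnote 4 follow the standard Cronbach's α formula, α = k/(k − 1) × (1 − Σ item variances / variance of the total score). A minimal sketch, using made-up item responses rather than the survey data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: list of item-response columns (equal-length lists, one per item)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(pvariance(col) for col in items) / pvariance(totals))

# Three highly consistent (made-up) item columns yield a high alpha.
example = [[1, 2, 4, 5], [1, 3, 4, 5], [2, 2, 4, 4]]
alpha = cronbach_alpha(example)
```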

5 See also Supplementary Material S7, which contains PHP code to reproduce the decision board experiment using the online MouselabWEB Designer.

6 For smartphone and tablet users, information remained visible until they clicked on a new box.

7 We excluded one outlier, a politician who spent 49 minutes with information opened, out of which 48 minutes were spent on one box (maximum time consumption among the rest of our respondents was 4.7 minutes, all boxes included).

8 The effects on the general public of the justification requirement turn statistically insignificant when inattentive respondents are included in the analysis (see Table S5c in the supplementary material).

References

Aleksovska, Marija, Schillemans, Thomas, and Grimmelikhuijsen, Stephan (2019), ‘Lessons from five decades of experimental and behavioral research on accountability: A systematic literature review’, Journal of Behavioral Public Administration, 2(2): 1–18.
Andeweg, Rudy (1997), ‘Role Specialisation or Role Switching? Dutch MPs between Electorate and Executive’, The Journal of Legislative Studies, 3(1): 110–27.
Arceneaux, Kevin, and Vander Wielen, Ryan (2017), Taming Intuition: How Reflection Minimizes Partisan Reasoning and Promotes Democratic Accountability, Cambridge: Cambridge University Press.
Ashton, Robert (1990), ‘Pressure and performance in accounting decision settings: Paradoxical effects of incentives, feedback, and justification’, Journal of Accounting Research, 28: 148–180.
Ashton, Robert (1992), ‘Effects of justification and a mechanical aid on judgment performance’, Organizational Behavior and Human Decision Processes, 52(2): 292–306.
Baekgaard, Martin, and Serritzlew, Søren (2016), ‘Interpreting Performance Information: Motivated Reasoning or Unbiased Comprehension’, Public Administration Review, 76(1): 73–82.
Baekgaard, Martin, Christensen, Julian, Dahlmann, Casper, Mathiasen, Asbjørn, and Petersen, Niels (2019), ‘The Role of Evidence in Politics: Motivated Reasoning and Persuasion among Politicians’, British Journal of Political Science, 49(3): 1117–1140.
Banuri, Sheheryar, Dercon, Stefan, and Gauri, Varun (2019), ‘Biased Policy Professionals’, The World Bank Economic Review, 33(2): 310–327.
Bartels, Brandon, and Bonneau, Chris (2014), ‘Can Empirical Research Be Relevant to the Policy Process? Understanding the Obstacles and Exploiting the Opportunities’, in Bartels & Bonneau (eds), Making Law and Courts Research Relevant: The Normative Implications of Empirical Research, New York: Routledge, 221–28.
Berinsky, Adam, Margolis, Michele, and Sances, Michael (2014), ‘Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys’, American Journal of Political Science, 58(3): 739–53.
Bolsen, Toby, Druckman, James, and Cook, Fay (2014), ‘The Influence of Partisan Motivated Reasoning on Public Opinion’, Political Behavior, 36(2): 235–62.
Brady, Henry, Verba, Sidney, and Schlozman, Kay (1995), ‘Beyond SES: A Resource Model of Political Participation’, The American Political Science Review, 89(2): 271–94.
Bullock, John, Gerber, Alan, Hill, Seth, and Huber, Gregory (2015), ‘Partisan Bias in Factual Beliefs about Politics’, Quarterly Journal of Political Science, 10: 519–78.
Butler, Daniel, and Dynes, Adam (2016), ‘How Politicians Discount the Opinions of Constituents with Whom They Disagree’, American Journal of Political Science, 60(4): 975–89.
Christensen, Julian, Dahlmann, Casper, Mathiasen, Asbjørn, Moynihan, Donald, and Petersen, Niels (2018), ‘How Do Elected Officials Evaluate Performance? Goal Preferences, Governance Preferences and the Process of Goal Reprioritization’, Journal of Public Administration Research and Theory, 28(2): 197–211.
Davies, Huw, Nutley, Sandra, and Smith, Peter (eds) (2000), What Works? Evidence-Based Policy and Practice in Public Services, Bristol: The Policy Press.
De Dreu, Carsten, and van Knippenberg, Daan (2005), ‘The possessive self as a barrier to conflict resolution: Effects of mere ownership, process accountability, and self-concept clarity on competitive cognitions and behavior’, Journal of Personality and Social Psychology, 89(3): 345–357.
DeZoort, Todd, Harrison, Paul, and Taylor, Mark (2006), ‘Accountability and auditors’ materiality judgments: The effects of differential pressure strength on conservatism, variability, and effort’, Accounting, Organizations and Society, 31(4–5): 373–390.
Doubleday, Robert, and Wilsdon, James (2012), ‘Science Policy: Beyond the Great and Good’, Nature, 485: 301–2.
Druckman, James, and Lupia, Arthur (2012), ‘Experimenting with Politics’, Science, 335(6073): 1177–1179.
Esaiasson, Peter, and Öhberg, Patrik (2019), ‘The moment you decide, you divide: How politicians assess procedural fairness’, European Journal of Political Research.
Goren, Paul, Federico, Christopher, and Kittilson, Miki (2009), ‘Source Cues, Partisan Identities, and Political Value Expression’, American Journal of Political Science, 53(4): 805–820.
Green, Melanie, Visser, Penny, and Tetlock, Philip (2000), ‘Coping with accountability cross-pressures: Low-effort evasive tactics and high-effort quests for complex compromises’, Personality and Social Psychology Bulletin, 26(11): 1380–1391.
Hansen, Kasper Møller (2018), ‘Valgdeltagelsen Ved Kommunal- Og Regionsvalget 2017’, CVAP-WP1-2018, CVAP Working Paper Series, Copenhagen.
Henrich, Joseph, Heine, Steven, and Norenzayan, Ara (2010), ‘The Weirdest People in the World?’, Behavioral and Brain Sciences, 33: 61–135.
Jermias, Johnny (2006), ‘The influence of accountability on overconfidence and resistance to change: A research framework and experimental evidence’, Management Accounting Research, 17(4): 370–388.
Kahan, Dan (2016a), ‘The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning Is and How to Measure It’, Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource, 1–16.
Kahan, Dan (2016b), ‘The Politically Motivated Reasoning Paradigm, Part 2: Unanswered Questions’, Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource, 1–15.
Kahan, Dan, Hoffman, David, Evans, Danieli, Devins, Neal, Lucci, Eugene, and Cheng, Katherine (2016), ‘“Ideology” or “Situation Sense”? An Experimental Investigation of Motivated Reasoning and Professional Judgment’, University of Pennsylvania Law Review, 164: 349–439.
Kahan, Dan, Peters, Ellen, Dawson, Erica Cantrell, and Slovic, Paul (2017), ‘Motivated Numeracy and Enlightened Self-Government’, Behavioural Public Policy, 1(1): 54–86.
Klar, Samara (2014), ‘Partisanship in a Social Setting’, American Journal of Political Science, 58(3): 687–704.
Kroon, Marceline, Van Kreveld, David, and Rabbie, Jacob (1992), ‘Group versus individual decision making: Effects of accountability and gender on groupthink’, Small Group Research, 23(4): 427–458.
Kunda, Ziva (1990), ‘The Case for Motivated Reasoning’, Psychological Bulletin, 108(3): 480–98.
Lerner, Jennifer, and Tetlock, Philip (1999), ‘Accounting for the Effects of Accountability’, Psychological Bulletin, 125(2): 255–75.
Lerner, Jennifer, Goldberg, Julie, and Tetlock, Philip (1998), ‘Sober Second Thought: The Effects of Accountability, Anger, and Authoritarianism on Attributions of Responsibility’, Personality and Social Psychology Bulletin, 24(6): 563–74.
Lind, Thérèse, Erlandsson, Arvid, Västfjäll, Daniel, and Tinghög, Gustav (2018), ‘Motivated Reasoning When Assessing the Effects of Refugee Intake’, Behavioural Public Policy. DOI: https://doi.org/10.1017/bpp.2018.41
Miler, Kristina (2009), ‘The Limitations of Heuristics for Political Elites’, Political Psychology, 30(6): 863–94.
Ministry of Finance (2018), Økonomisk Analyse: Udviklingen i de offentlige udgifter fra 2000 til 2017. https://www.ft.dk/samling/20171/almdel/FIU/bilag/116/1891908.pdf (last accessed July 11, 2020).
Moynihan, Donald, and Beazley, Ivor (2016), Toward next-generation performance budgeting: Lessons from the experiences of seven reforming countries, Washington, DC: The World Bank.
Nyhan, Brendan, and Reifler, Jason (2015), ‘The Effect of Fact-Checking on Elites: A Field Experiment on U.S. State Legislators’, American Journal of Political Science, 59(3): 628–640.
Prior, Markus, Sood, Gaurav, and Khanna, Kabir (2015), ‘You Cannot Be Serious: The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions’, Quarterly Journal of Political Science, 10(4): 489–518.
Redlawsk, David, Civettini, Andrew, and Emmerson, Karen (2010), ‘The Affective Tipping Point: Do Motivated Reasoners Ever Get It?’, Political Psychology, 31(4): 563–93.
Sedikides, Constantine, Herbst, Kenneth, Hardin, Deletha, and Dardis, Gregory (2002), ‘Accountability as a deterrent to self-enhancement: The search for mechanisms’, Journal of Personality and Social Psychology, 83(3): 592–605.
Sheffer, Lior, Loewen, Peter, Soroka, Stuart, Walgrave, Stefaan, and Sheafer, Tamir (2017), ‘Non-Representative Representatives: An Experimental Study of the Decision Making Traits of Elected Politicians’, American Political Science Review, 112(2): 302–321.
Slothuus, Rune, and de Vreese, Claes (2010), ‘Political Parties, Motivated Reasoning, and Issue Framing Effects’, Journal of Politics, 72(3): 630–45.
Smith, Brettney (2012), The effects of accountability on leniency reduction in self- and peer ratings on team-based performance appraisals, Doctoral dissertation, Clemson University.
Taber, Charles, and Lodge, Milton (2006), ‘Motivated Skepticism in the Evaluation of Political Beliefs’, American Journal of Political Science, 50(3): 755–69.
Taber, Charles, Cann, Damon, and Kucsova, Simona (2009), ‘The Motivated Processing of Political Arguments’, Political Behavior, 31(2): 137–55.
Tavits, Margit (2007), ‘Principle vs. Pragmatism: Policy Shifts and Political Competition’, American Journal of Political Science, 51(1): 151–65.
Tetlock, Philip (1983), ‘Accountability and the Perseverance of First Impressions’, Social Psychology Quarterly, 46(4): 285–92.
Tetlock, Philip (1985), ‘Accountability: A Social Check on the Fundamental Attribution Error’, Social Psychology Quarterly, 48(3): 227–36.
Tetlock, Philip, and Kim, Jae Il (1987), ‘Accountability and Judgment Processes in a Personality Prediction Task’, Journal of Personality and Social Psychology, 52(4): 700–709.
Tetlock, Philip, Skitka, Linda, and Boettger, Richard (1989), ‘Social and cognitive strategies for coping with accountability: conformity, complexity, and bolstering’, Journal of Personality and Social Psychology, 57(4): 632–640.
Tomz, Michael (2007), ‘Domestic audience costs in international relations: An experimental approach’, International Organization, 61(4): 821–840.
Tyler, James, and Rosier, Jennifer (2009), ‘Examining self-presentation as a motivational explanation for comparative optimism’, Journal of Personality and Social Psychology, 97(4): 716–727.
Willemsen, Martijn, and Johnson, Eric (2011), ‘Visiting the Decision Factory: Observing Cognition with MouselabWEB and Other Information Acquisition Methods’, in Schulte-Mecklenbeck, Kühnberger, and Ranyard (eds), A Handbook of Process Tracing Methods for Decision Research: A Critical View and User's Guide, New York: Psychology Press, 21–42.
Figure 1. Experimental material, groups A–D.

Figure 2. Uncongeniality of information and expected probabilities of making erroneous judgments in identifying best-performing suppliers. Note: This figure is based on regression analyses reported in the supplementary material's Table S2 (models 3 and 6). The horizontal axis runs from 0 to 1, with higher values corresponding to stronger support for the public sector if the private supplier performs best (group D in the experiment) and stronger support for the private sector if the public supplier performs best (group C).

Figure 3. Information boxes in the decision board experiment (English translation). Note: For each respondent, the order of the performance indicators was randomized. Moreover, within each performance indicator, it was randomized as to which school performed best.