The Effects of Deliberative Polling in an EU-wide Experiment: Five Mechanisms in Search of an Explanation
Published online by Cambridge University Press: 07 February 2012
Abstract
Deliberative Polls simulate what public opinion in a given policy domain would look like if members of the relevant mass public were better informed about the issues involved. This article reports the results of a three-day Deliberative Poll, conducted before the June 2009 European Parliament elections, that evaluated the effects of deliberation on a representative sample of EU citizens. The findings show that, compared with a control group, deliberators changed their views significantly on immigration (becoming more liberal), climate change (becoming greener) and the EU itself (becoming more pro-European). Five different explanations of why deliberation appears to work are tested – sampling bias, increased political knowledge, discussion quality, small group social conformity pressure and the influence of other Deliberative Poll actors – but none proves satisfactory.
- Type: Research Article
- Copyright © Cambridge University Press 2012
References
1 For specific examples, see Luskin, Robert C., Fishkin, James S. and Jowell, Roger, ‘Considered Opinions: Deliberative Polling in Britain’, British Journal of Political Science, 32 (2002), 455–487.
2 See Fishkin, James S., The Voice of the People (New Haven, Conn.: Yale University Press, 1997); Fishkin, James S. and Luskin, Robert C., ‘Bringing Deliberation to Democratic Dialogue’, in Maxwell McCombs and Amy Reynolds, eds, A Poll with a Human Face (Mahwah, N.J.: Lawrence Erlbaum, 1999), pp. 3–38; Fishkin, James S. and Luskin, Robert C., ‘Broadcasts of Deliberative Polls: Aspirations and Effects’, British Journal of Political Science, 36 (2006), 184–188.
3 See Barabas, Jason, ‘How Deliberation Affects Policy Opinions’, American Political Science Review, 98 (2004), 687–701; Fishkin, James S., When the People Speak: Deliberative Democracy and Public Consultation (Oxford: Oxford University Press, 2009).
4 Gastil, John, By Popular Demand: Revitalising Representative Democracy through Deliberative Elections (Berkeley: University of California Press, 2000).
5 The details of the Europolis project are described at http://www.europolis-project.eu/ and in Pierangelo Isernia and Kaat Smets, ‘Democracy in Hard Times: Does Deliberation Affect Attitude Strength’ (CIRCaP, University of Siena, Europolis Working Paper, 2010).
6 Fishkin, James S. and Luskin, Robert C., ‘Experimenting with a Democratic Ideal: Deliberative Polling and Public Opinion’, Acta Politica, 40 (2005), 284–298. See also Fishkin, When the People Speak.
7 Respondents’ expenses for attending the three-day DP event, which was held in a hotel on the outskirts of Brussels, were fully reimbursed and a modest per diem fee was also paid. The event was funded by the European Community under Framework Programme VI.
8 See, for example, Converse, Philip E., ‘The Nature of Belief Systems in Mass Publics (1964)’, Critical Review, 18 (2006); Sniderman, Paul M., Brody, Richard A. and Tetlock, Philip E., Reasoning and Choice: Explorations in Political Psychology (Cambridge: Cambridge University Press, 1991); Zaller, John, The Nature and Origins of Mass Opinion (Cambridge: Cambridge University Press, 1992); Zaller, John and Feldman, Stanley, ‘A Simple Theory of the Survey Response’, American Journal of Political Science, 36 (1992), 579–616.
9 See, for example, Lewis-Beck, Michael S., Economics and Elections: The Major Democracies (Ann Arbor: University of Michigan Press, 1988); Norpoth, Helmut, Lewis-Beck, Michael and Lafay, Jean-Dominique, eds, Economics and Elections: The Calculus of Support (Ann Arbor: University of Michigan Press, 1991); Erikson, Robert S., MacKuen, Michael B. and Stimson, James A., The Macro Polity (New York: Cambridge University Press, 2002); Dorussen, Han and Taylor, Michael, eds, Economic Voting (London: Routledge, 2002).
10 See, for example, Page, Benjamin I. and Shapiro, Robert, The Rational Public: Fifty Years of Trends in America's Policy Preferences (Chicago: Chicago University Press, 1992); Alvarez, R. Michael, Information and Elections (Ann Arbor: University of Michigan Press, 1997); Alvarez, R. Michael and Brehm, John, Hard Choices, Easy Answers (Princeton, N.J.: Princeton University Press, 2002); Huckfeldt, Robert and Sprague, John, Citizens, Politics and Social Communication (Cambridge: Cambridge University Press, 1995); Petty, Richard E. and Wegener, Duane T., ‘Attitude Change: Multiple Roles for Persuasion Variables’, in Daniel T. Gilbert, Susan T. Fiske and Gardner Lindzey, eds, The Handbook of Social Psychology, 4th edn (Boston, Mass.: McGraw Hill, 1998), pp. 323–390; Jacobs, Lawrence R., Cook, Fay Lomax and Delli Carpini, Michael X., Talking Together: Public Deliberation and Political Participation in America (Chicago: Chicago University Press, 2009).
11 Popkin, Samuel, The Reasoning Voter: Communication and Persuasion in Presidential Campaigns (Chicago: Chicago University Press, 1991); Krosnick, Jon A. and Abelson, Robert P., ‘The Case for Measuring Attitude Strength in Surveys’, in Judith M. Tanur, ed., Questions about Questions: Inquiries into the Cognitive Bases of Surveys (New York: Russell Sage Foundation, 1992), pp. 177–203; Petty, Richard E. and Krosnick, Jon A., Attitude Strength: Antecedents and Consequences (Mahwah, N.J.: Lawrence Erlbaum, 1995); Lupia, Arthur and McCubbins, Matthew D., The Democratic Dilemma: Can Citizens Learn What They Really Need to Know? (New York: Cambridge University Press, 1998); Kuklinski, James H. and Peyton, Buddy, ‘Belief Systems and Political Decision Making’, in Russell J. Dalton and Hans-Dieter Klingemann, eds, Oxford Handbook of Political Behavior (Oxford: Oxford University Press, 2007), pp. 45–63.
12 Participants spoke and listened to the discussion in their own native language: simultaneous translation was provided for all who required it. Note that the small group discussions preceded the plenary sessions. This allowed participants first to debate the issues among themselves, and then to seek further clarification or elucidation from specialists with different viewpoints in the plenaries.
13 All materials supplied are available at the website cited in fn. 5.
14 Heckman, J., ‘Sample Selection Bias as a Specification Error’, Econometrica, 47 (1979), 153–161.
15 See Althaus, Scott L., ‘Information Effects in Collective Preferences’, American Political Science Review, 92 (1998), 545–558; Barabas, Jason, ‘How Deliberation Affects Policy Opinions’, American Political Science Review, 98 (2004), 687–701.
16 Inglehart, Ronald, ‘The Silent Revolution in Europe: Intergenerational Change in Post-Industrial Societies’, American Political Science Review, 65 (1971), 991–1017; Inglehart, Ronald, The Silent Revolution: Changing Values and Political Styles among Western Publics (Princeton, N.J.: Princeton University Press, 1977).
17 Zaller, The Nature and Origins of Mass Opinion.
18 Fishkin, James S. and Luskin, Robert C., ‘Experimenting with a Democratic Ideal: Deliberative Polling and Public Opinion’, Acta Politica, 40 (2005), 284–298; Mutz, Diana C., Hearing the Other Side: Deliberative versus Participatory Democracy (Cambridge: Cambridge University Press, 2006). See also Page, Benjamin I., Who Deliberates? Mass Media in Modern Democracy (Chicago: Chicago University Press, 1996).
19 The use of aggregated intersubjective measures of discussion quality here is a matter of necessity rather than choice. One important part of the Europolis DP project was the construction of objective measures of discussion quality based on content analyses of the actual discussion sessions. Unfortunately, financial constraints on the project meant that objective measures of deliberation quality could be produced for only seven of the twenty-five discussion groups. Moreover, these groups were restricted to a limited set of languages and therefore cannot be regarded as representative of the full range of deliberative discussion that took place. In addition, objective data on only seven groups do not provide sufficient cases for a robust analysis of the effects of objective discussion quality. In these circumstances, I reluctantly restrict the analysis of ‘deliberation quality’ to subjective, as opposed to objective, measures.
20 Fisher, R. J., ‘Social Desirability Bias and the Validity of Indirect Questioning’, Journal of Consumer Research, 20 (1993), 303–315.
21 Janis, Irving L., Victims of Groupthink (Boston, Mass.: Houghton Mifflin, 1972).
22 Elster, Jon, Deliberative Democracy (Cambridge: Cambridge University Press, 1998).
23 The question used was: ‘On a scale from 0 to 10, where 0 is “no problem at all”, 10 is “the most serious problem we face”, and 5 is “exactly in the middle”, how serious a problem or not would you say [immigration/climate change] is?’
24 The Pro-immigration index was constructed, after extensive dimensional testing, as a 0–10 constant range scale from twelve questions that sought to measure respondents’ attitudes towards immigrants to the European Union. Details on index construction are available from the author on request.
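A constant range scale of this sort is conventionally built by summing the component items and rescaling the sum to run from 0 to 10. Since the exact items and weights are not reproduced in this footnote, the following is only a minimal sketch assuming twelve equally weighted items, with $x_{ik}$ denoting respondent $i$'s score on item $k$:

$$\text{ProImmigration}_i = 10 \times \frac{\sum_{k=1}^{12} x_{ik} - S_{\min}}{S_{\max} - S_{\min}}$$

where $S_{\min}$ and $S_{\max}$ are the smallest and largest sums the twelve items can yield, so that every respondent's score falls on the common 0–10 range.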
25 The combat climate change scale was based on the question: ‘On a scale from 0–10, where 0 means that we should do everything possible to combat climate change even if that hurts the economy, 10 means that we should do everything possible to maximize economic growth, even if that hurts efforts to combat climate change and 5 is exactly in the middle, where would you position yourself on this scale, or haven't you thought much about that?’ The ordering of the scale was reversed so that a high score signified a more pro-environmental position.
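On a 0–10 item, the reversal described here is simply a reflection of the scale:

$$\text{CombatClimateChange}_i = 10 - r_i$$

where $r_i$ is respondent $i$'s original answer, so that a respondent who chose 0 (combat climate change even if that hurts the economy) receives the maximum pro-environmental score of 10.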
26 These measures are not highly intercorrelated. The highest bivariate correlation is r = 0.36. This reinforces the idea that the measures reflect distinct sets of attitudes towards Europe. This interpretation is supported by extensive dimensional analysis of EU-related attitudes reported in Sanders, David, Bellucci, Paolo, Torcal, Mariano and Tóka, Gábor, eds, The Europeanization of National Polities? Citizenship and Support in a Post-Enlargement Europe (Oxford: Oxford University Press, 2012, forthcoming).
27 The EU identity measure was taken from responses to the question: ‘On a scale from 0 to 10, where 0 is “not at all”, 10 is “completely”, and 5 is exactly in the middle, how much would you say you think of yourself as European?’
28 The EU Policy Scope measure was constructed additively from questions which asked respondents’ preferences for EU-level (as opposed to national or regional level) decision making in four policy areas – ‘fighting unemployment’, ‘climate change’, ‘immigration policy’ and ‘fight against crime’.
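The footnote does not report the item coding, but on the simple (unverified) assumption that each of the four items is scored 1 when the respondent prefers EU-level decision making and 0 otherwise, the additive construction amounts to:

$$\text{PolicyScope}_i = \sum_{k=1}^{4} s_{ik}, \qquad s_{ik} \in \{0, 1\}$$

that is, a 0–4 count of the policy areas in which respondent $i$ favours EU-level decision making.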
29 The EU Democracy Satisfaction or ‘EU Representation’ measure was constructed from responses to: ‘On the whole, how satisfied or not are you with the way democracy works in the European Union?’ Response options were Very Satisfied, Somewhat Satisfied, Neither Satisfied nor Dissatisfied, Dissatisfied, Very Dissatisfied.
30 The EU Evaluations index combined responses to two questions: ‘(1) Generally speaking, do you think that [country's] membership of the European Union is a Very Good Thing, a Fairly Good Thing, Neither Good nor Bad, a Fairly Bad Thing, or a Very Bad Thing; (2) On a 0 to 10 scale, where 0 means that [country] has “not benefited at all” from being a member of the EU, 10 means “has benefited enormously”, and 5 is “exactly in the middle”, using this scale, would you say that on balance [country] has benefited or not from being a member of the EU?’
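Combining a five-category item with a 0–10 item requires putting the two on a common range before combining them. One standard way of doing so, offered here purely as an illustrative assumption since the footnote does not specify the method, is:

$$\text{EUEval}_i = \frac{1}{2}\left(\frac{10\,(m_i - 1)}{4} + b_i\right)$$

where $m_i$ is the membership item recoded 1–5 (Very Bad Thing to Very Good Thing) and $b_i$ is the 0–10 benefit item, yielding an index on the 0–10 range.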
31 This strong collateral effect on EU attitudes was not observed in the only other EU-wide deliberative poll that has been conducted, Tomorrow's Europe: The First EU-wide Deliberative Poll (see http://cdd.stanford.edu/polls/eu/). One possible explanation for the difference between these two sets of DP findings lies in the different character of the issues debated. In the Europolis poll covered here, the issues of immigration and climate change perhaps intrinsically lend themselves to EU-wide solutions and accordingly invoked a more pro-EU response among participants. In contrast, the Tomorrow's Europe DP focused on Turkey's entry into the European Union. This could have focused participants’ attention on the difficulties likely to confront EU citizens in the future, which in turn could have invoked less cosmopolitan attitudes among the Tomorrow's Europe participants. A second possibility is that the set of EU attitude measures employed in the Europolis event was more comprehensive than that used in Tomorrow's Europe. Future studies should enable these potential factors to be considered more explicitly.
32 Other weighting combinations in fact produced similar results to those reported here.
33 There are other possible ways of specifying and estimating knowledge acquisition effects. See fn. 36 and Tables A1 and A2 in the Appendix for an analysis of the main alternative specifications.
34 Dropping these controls from the various specifications makes no difference to the magnitudes, signs or significance levels of the key explanatory variables.
35 The political knowledge term produces no significant effects at conventionally accepted levels. This suggests that there is no systematic tendency for increased knowledge to change DP participants’ attitudes in terms of either salience or position. This conclusion seems to support Zaller's claims about the impact of political knowledge rather than Inglehart's.
36 One of the surprising features of the analysis presented above, given the findings of other DP studies, is the apparent absence of an explanatory role for changes in political knowledge. Previous studies (e.g. Fishkin and Luskin, ‘Experimenting with a Democratic Ideal’; Fishkin and Luskin, ‘Broadcasts of Deliberative Polls’; Luskin, Fishkin and Jowell, ‘Considered Opinions’) have found that increases in knowledge constitute an important part of the DP effect on participants’ attitudes. It could be argued that the lack of knowledge effects observed here results more from a failure of model specification than from a lack of ‘real effects’. There are two important ways in which the specification and estimation of possible knowledge effects here differ from those used in other studies.
First, increases in knowledge here are captured explicitly in terms of observed individual-level change, rather than in terms of the ‘implied change’ that results from the use of a lagged dependent variable model. In ‘Broadcasts of Deliberative Polls’, Fishkin and Luskin argue that post-test knowledge level is the best proxy for knowledge change. Here, this would imply that knowledge change could best be measured using the level of knowledge at wave 4 rather than the change in knowledge between waves 1 and 4. The use of this ‘level’ term is typically justified on the grounds that people with high knowledge at time t1 by definition cannot increase their knowledge level at time t2 as much as can people with low knowledge at time t1. Moreover, with a lagged dependent variable, this ‘level of knowledge at wave 4’ specification can in any case be regarded as capturing the implied effects of knowledge change. However, it should be noted that this is only the modelled effect of change, not the actual effect of observed individual change. In any event, substituting ‘wave 4 level of knowledge’ for the ‘change in knowledge between waves 1 and 4’ in the models reported in Tables 8 and 9 produces virtually identical results to those reported in those two tables. Using the ‘level of knowledge at wave 4’ specifications, the knowledge term fails to produce a significant effect for almost all the dependent variables examined here. The only exception is in the immigration importance equation, where the knowledge term produces a significant (but negative) effect. All of this suggests that the use of an explicit knowledge change measure in Tables 8 and 9 does not mask effects that would otherwise be revealed if an implicit change (wave 4 level) measure were used instead. The two specifications are sketched schematically below.
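In schematic terms, writing $A$ for an attitude measure, $K$ for political knowledge, $X_i$ for the vector of other explanatory variables (not fully reproduced here) and numerical subscripts for survey waves, the two specifications contrasted in this footnote can be rendered as:

$$\text{observed change:}\quad A_{i4} - A_{i1} = \beta\,(K_{i4} - K_{i1}) + \gamma' X_i + \varepsilon_i$$

$$\text{implied change:}\quad A_{i4} = \alpha A_{i1} + \beta\,K_{i4} + \gamma' X_i + \varepsilon_i$$

In the second, lagged dependent variable form, the wave 4 knowledge level stands in for knowledge change; as noted above, the two yield virtually identical (null) knowledge effects here.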
A second way in which knowledge effects could be specified and estimated is to regard them as interaction effects rather than as simple direct effects. This is the approach taken in Fishkin and Luskin's ‘Broadcasts of Deliberative Polls’. The intuition is that the effects of knowledge acquisition on attitudes vary according to the levels of other explanatory variables in the model. Thus, for example, it might be expected that the effects of increased knowledge on people's climate change positions will be greater if there is a relatively high (as opposed to a relatively low) level of small group discussion quality. The logic of this approach is that we should be more interested in the effects of the interactions between knowledge and each of the other explanatory variables in the model than in the direct ‘main’ effects of increased knowledge itself. This approach is tested explicitly in the results reported in Tables A1 and A2 in the Appendix, as sketched below. The tables are based on the ‘full’ model specifications shown in Tables 8 and 9, but with all possible interactions between knowledge change and the core explanatory variables added to the models. The tables report the results for the interaction effects only. (The pattern of ‘main’ effects remains virtually identical to that reported in Tables 8 and 9.) The results lend little credence to the idea that knowledge change effects can be discerned using this interaction approach. Across the two tables, only four out of fifty-six possible knowledge interaction effects are statistically significant – barely more than would be expected on the basis of chance alone. This again suggests that it is not the particular model specification adopted here that is somehow masking the ‘real’ knowledge-acquisition mechanism that underpins the DP effect.
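Again schematically, and with the same caveat that the full specifications appear in Tables 8, 9, A1 and A2 rather than here, the interaction specification adds a product term between knowledge change and each core explanatory variable $X_{ij}$:

$$A_{i4} = \alpha A_{i1} + \beta\,\Delta K_i + \sum_j \gamma_j X_{ij} + \sum_j \delta_j\,(\Delta K_i \times X_{ij}) + \varepsilon_i$$

where $\Delta K_i = K_{i4} - K_{i1}$ and the $\delta_j$ coefficients correspond to the interaction effects reported in Tables A1 and A2.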