
Socially Mediated Internet Surveys: Recruiting Participants for Online Experiments

Published online by Cambridge University Press: 30 September 2013

Erin C. Cassese, West Virginia University
Leonie Huddy, Stony Brook University
Todd K. Hartman, Appalachian State University
Lilliana Mason, Stony Brook University
Christopher R. Weber, University of Arizona
Rights & Permissions [Opens in a new window]

Abstract

The Socially Mediated Internet Survey (SMIS) method is a cost-effective technique used to obtain web-based, adult samples for experimental research in political science. SMIS engages central figures in online social networks to help recruit participants among visitors to these websites, yielding sizable samples for experimental research. We present data from six samples collected using the SMIS method and compare them to those gathered by other sampling approaches such as Amazon's Mechanical Turk. While not representative of the general adult population, our SMIS samples are significantly more diverse than undergraduate convenience samples, not only demographically but also politically. We discuss the applicability of the method to experimental research and its usefulness for obtaining samples of special, politically relevant subpopulations such as political sophisticates and activists. We argue that the diversity of SMIS samples, along with the ability to capture highly engaged citizens, can circumvent questions about the artificiality of political behavior experiments based entirely on student samples and help to document sources of heterogeneous experimental treatment effects.

Type: The Profession
Copyright: © American Political Science Association 2013

Political scientists interested in a wide array of topics, such as voting behavior, public opinion, political communication, decision making, and biopolitics, have increasingly turned to experimentation as a methodological tool (Druckman et al. 2006). To meet this growing interest, APSA organized a new section with its own journal, the Journal of Experimental Political Science, devoted to experimental research.[1] The popularity of the experimental method can be traced to its ability to identify and explicate the causal processes underlying political phenomena (Druckman et al. 2011; Morton and Williams 2010). However, as experiments become more widely used, researchers increasingly face the vexing problem of obtaining diverse, yet affordable, samples.

In the past, political scientists recruited participants for their experiments by following the standard practice in psychology: drawing convenience samples from the undergraduate student body. The problems inherent in these subject pools are well known—they yield samples that are relatively homogeneous with respect to factors such as age, education, life experiences, and political engagement (Henry 2008; Kam, Wilking, and Zechmeister 2007; Sears 1986). As a result, the use of these samples has raised questions about the ability to generalize from such experiments to the electorate as a whole. Given these concerns, nationally representative adult samples have emerged as the "gold standard" for experimental research in political science (Kam, Wilking, and Zechmeister 2007). Unfortunately, for many researchers, obtaining this kind of sample is prohibitively expensive.[2]

Because of the high cost of obtaining representative samples and the limitations of student subject pools, scholars have turned to nonprobability samples that move away from the "narrow database" of college students (Sears 1986) but are relatively inexpensive and easy to acquire. For instance, Berinsky and colleagues (2012) evaluated the viability of recruiting participants from Amazon's Mechanical Turk (MTurk), which allows researchers to pay participants small amounts of money for completing surveys. They find that, relative to other convenience samples, MTurk participants are generally more diverse and respond to experimental stimuli in a manner consistent with the results of prior research. Like MTurk, our approach uses the Internet to recruit participants; however, our method differs in that it takes advantage of social networks organized around Web 2.0 platforms. More specifically, we identify central figures in these networks—for example, bloggers and discussion forum moderators—to assist with participant recruitment among their readers and associates. We call this approach the Socially Mediated Internet Survey (SMIS) method.

The SMIS approach has several important advantages over other methods of participant recruitment. First, by using preexisting social networks, researchers can rapidly collect data at a low cost. Second, SMIS can yield large samples that are more demographically diverse than the typical student convenience sample. Third, this method provides access to interesting subpopulations that are worthy of study in their own right. That is, SMIS allows researchers to target networks organized around specific political themes, thus providing access to low-incidence populations that may be relevant to experimental studies focused on less common political behaviors, such as activism. In these respects, SMIS offers scholars a useful alternative for recruiting nonprobability samples for experimental research.

THE IMPORTANCE OF SAMPLING FOR EXPERIMENTATION

The average student sample is geographically bound and homogeneous with respect to key sociodemographic characteristics such as age, education, and life experiences, as well as particularly important factors such as political engagement and knowledge (Birnbaum 2004; Henry 2008; Reips 2000). For instance, Sears (1986) argues that students hold less crystallized political attitudes, on average, than the rest of the electorate, and Wattenberg (2011) reports lower levels of political knowledge, engagement, and activity among college-age Americans relative to older citizens. The omission of politically engaged individuals from political science experiments raises questions about the degree to which observed effects are contingent on college students' limited political experience and involvement.

Diversity and Heterogeneous Treatment Effects

Samples that lack diversity restrict researchers' ability to uncover heterogeneous treatment effects, which occur when stimulus materials from one experimental condition resonate differently among particular demographic or political subpopulations (see Imai et al. 2011; Imai and Strauss 2011). To the extent that they are homogeneous, student subject pools lack variation on key individual-level covariates that might condition reactions to experimental stimuli (Krupnikov and Levine 2013). Representative samples are not required for experimental research; random assignment of participants to treatment and control conditions ensures that observed treatment effects are caused by the experimental manipulation rather than by unobserved, systematic differences among participants (Druckman et al. 2011; Kinder and Palfrey 1993; Morton and Williams 2010). Nonetheless, diversity adds value by allowing researchers to explore and verify factors that moderate treatment effects. Consider, for example, economic threat manipulations involving home values, interest rates, or property taxes. Such threats should resonate more with homeowners than with average college students. Researchers who rely solely on an undergraduate sample may underestimate the effects of economic threat on political attitudes because of the limited range of income and financial independence in a typical student sample. In this fashion, student samples can mask heterogeneity in response to experimental treatments, further complicating efforts to understand the causal mechanisms underlying political attitudes and behavior.
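To make the homeowner example concrete, consider the following sketch, which is ours rather than the authors': all data are simulated, and the moderator and effect sizes are hypothetical. It shows how a treatment-by-moderator interaction recovers a heterogeneous effect only when the moderator actually varies in the sample.

```python
# A minimal sketch (ours, not the authors') of why covariate diversity
# matters for detecting heterogeneous treatment effects. All data are
# simulated; "homeowner" and the effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000

# A diverse adult pool: roughly 60% homeowners (assumed for illustration).
homeowner = rng.binomial(1, 0.60, n)
treat = rng.binomial(1, 0.5, n)  # random assignment to the threat condition

# Suppose the economic-threat treatment moves attitudes mainly for homeowners.
y = 0.2 * treat + 0.8 * treat * homeowner + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "treat": treat, "homeowner": homeowner})
model = smf.ols("y ~ treat * homeowner", data=df).fit()
print(model.summary().tables[1])  # treat:homeowner recovers the 0.8 effect

# In a student pool where homeownership is rare (say 5%), the interaction
# rests on a handful of treated homeowners and is easily missed.
```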

Diversity and External Validity

Another concern for researchers interested in accurately estimating treatment effects is that citizens tend to self-select into political treatments in "real world" settings. For example, politically engaged participants are more likely than average citizens to be exposed to the kinds of communications manipulated in political experiments, given their elevated rates of media consumption (Kinder 2007). These self-selection mechanisms can produce estimates of average treatment effects in experimental research that fail to generalize to applied settings because the treatment does not have the same degree of external validity for all participants. Gaines and Kuklinski (2011) illustrate this point in their research on the effects of negative campaign ads on political mobilization. They demonstrate that the effects of negative advertisements on evaluations of Obama and McCain were stronger among those who elected to view the ads than among those who were assigned to view them, which suggests that a classic experimental design underestimates treatment effects. This research highlights the well-established claim that the opinions and behaviors of highly engaged, knowledgeable, and politically active citizens—whom we refer to as political sophisticates, consistent with much political behavior research—do not always mirror those of the mass public (e.g., Zaller 1992; Gomez and Wilson 2001; Taber and Lodge 2006).

Concerns that a lack of diversity among college student samples leads to muted treatment effects and weakened external validity can be circumvented with diverse nonprobability samples obtained on the Internet. Numerous organizations, such as SurveySavvy, Harris Poll Online, and Survey Spot, sell access to their online volunteer panels, which have been used in political research (Malhotra and Krosnick 2007) and vary in their composition and quality (Berrens et al. 2003). Web users are not representative of the public as a whole, but Internet survey firms use a variety of approaches to reduce bias.[3] For instance, YouGov's PollingPoint panel compensates for a nonrandom initial selection stage with a sample matching methodology that matches panelists against high-quality probability samples (e.g., the American Community Survey) to eliminate self-selection biases. Any residual irregularities are corrected with a propensity score matching technique, yielding a sample that looks demographically similar to the nation (Rivers and Bailey 2009).
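Rivers and Bailey's procedure is proprietary and more elaborate than anything that fits here; the sketch below illustrates only the general logic of propensity-score adjustment for a volunteer web sample, with simulated data and hypothetical covariates.

```python
# A generic sketch of propensity-score weighting for a volunteer web
# sample. This is NOT YouGov's proprietary procedure, only the broad
# logic; the covariates and both "samples" are simulated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Stack a simulated probability reference sample and a web volunteer sample.
ref = pd.DataFrame({"age": rng.normal(47, 15, 1000),
                    "college": rng.binomial(1, 0.30, 1000), "web": 0})
web = pd.DataFrame({"age": rng.normal(38, 15, 1000),
                    "college": rng.binomial(1, 0.60, 1000), "web": 1})
both = pd.concat([ref, web], ignore_index=True)

# Model each respondent's probability of being in the web sample...
X, y = both[["age", "college"]], both["web"]
p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

# ...then weight web respondents by the odds of reference membership,
# down-weighting the kinds of people the web sample over-recruits.
is_web = (both["web"] == 1).to_numpy()
w = (1 - p[is_web]) / p[is_web]
weighted_college = np.average(web["college"], weights=w)
print(f"weighted college share in web sample: {weighted_college:.2f}")  # ~0.30
```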

YouGov's online panel has been used frequently in political science research, for example in the Cooperative Congressional Election Study (CCES). Online volunteer panels, together with the probability-based web samples collected by Knowledge Networks (KN), are among the most common web-based samples used in political research.[4] For truly representative samples, KN has emerged as the gold standard for online research. However, KN's high cost puts it out of reach for many researchers, who must rely instead on low-cost nonprobability alternatives. Even volunteer online panels, such as YouGov's, are too costly for underfunded researchers.

Many researchers have turned to alternatives like Amazon's MTurk, which is cheaper than KN or the other commercial opt-in web panels. Berinsky, Huber, and Lenz (2012; see also Buhrmeister, Kwang, and Gosling 2011; Mason and Suri 2012; Paolacci, Chandler, and Ipeirotis 2010) have used MTurk effectively to recruit subjects for online experiments by contracting "workers" in exchange for a token payment. In a second approach, Nosek, Banaji, and Greenwald's "Project Implicit"[5] relies on unpaid web-based volunteers to conduct psychological studies of implicit attitudes. Participants are passively recruited via media coverage of the researchers' studies, word of mouth, or simple chance browsing. Surprisingly, this passive method has successfully recruited millions of participants from across the United States and many other countries.

THE SMIS APPROACH

To this existing mix of online recruitment approaches for convenience sampling, we add SMIS. This technique relies on the potential of Web 2.0 platforms not only for expressing political opinion—through blogs, forums, and social networking sites—but also for capturing it. SMIS identifies and recruits individuals who are at the center of rich social networks, or "central nodes." Given their critical role in the recruitment process, we refer to them as social mediators. We appeal to these social mediators (e.g., bloggers and discussion forum moderators) to endorse a study and then solicit participation among their readers and contacts. Thus, the request for participation comes from a known opinion leader—the social mediator—rather than an unknown researcher. This personal connection increases the likelihood of participation (e.g., Green and Gerber 2004). By selecting highly visible and richly linked sites, researchers can ensure widespread exposure to the survey request.

The SMIS approach adds value to experimental research by providing scholars with cost-effective access to a relatively diverse subject pool. It allows researchers to capture a sample with variance on theoretically relevant covariates and to evaluate empirically the existence of heterogeneous treatment effects. SMIS also allows researchers to target and study special populations. Online social networks tend to be homophilic, reflecting concentrations of people with shared interests, issue attitudes, beliefs, and values (Singla and Richardson 2008). These virtual networks of like-minded individuals can provide access to poorly defined or low-incidence populations.

For instance, scholars have used online chat rooms to study the factors that trigger aggression among hate group members. By experimentally manipulating the content of potentially threatening messages, researchers demonstrated that cultural threats to white identity were more likely to induce aggression among hate group members than were threats to material resources (Glaser, Dixit, and Green 2002). This insight would not be possible with nationally representative probability samples, conventional student samples, or other online opt-in methods such as MTurk. The SMIS approach could be useful in this type of targeted research, and could be extended to recruit specific populations of interest to political scientists who study, for example, the dynamics of collective action or opinion among members mobilized around a specific political issue such as environmental protection, gay marriage, or legalized abortion (Klar and Kasser 2009; Mathy et al. 2002; Miller and Krosnick 2004; Simon and Klandermans 2001; Thomas, McGarty, and Mavor 2009).

For experimental research focused on political communication, the use of targeted samples of politically knowledgeable, engaged, and active citizens may strengthen external validity and provide more accurate estimates of treatment effects. While sophisticates are not present in large numbers in national probability samples, the content developed and shared on Web 2.0 platforms can readily identify social mediators in networks composed of politically knowledgeable, attentive, and active citizens. This recruitment strategy captures the kinds of participants who are most likely to be exposed—through self-selection—to political communications like campaign ads, appeals urging voter turnout, and the views of candidates and public officials. Highly controlled lab-based communications experiments have their virtues, but they "'obliterate' the distinction between the supply of information on one hand and its consumption on the other" (Kinder 2007, 157). When every participant receives a message and the propensity to be "treated" is held constant across all participants, the selection pressures underlying political communication effects are ignored and the true causal process may be misidentified. By contrast, reactions to political communications in a natural setting do not suffer from this selection bias because respondents are not artificially exposed to communications they would not otherwise receive (Gaines and Kuklinski 2011; Kinder 2007).

EVALUATION OF THE SMIS APPROACH

We used the SMIS approach to recruit participants for six political experiments investigating various facets of American public opinion and political behavior. Participants in each experiment were exposed to experimentally altered blog posts, news stories, or political ads.[6] Five of the six studies recruited bloggers and discussion forum moderators as social mediators; the sixth employed research assistants embedded within Facebook networks (see table 1).[7] The recruitment strategies varied to capture different types of samples: Studies 1, 2, and 3 targeted politically active and engaged citizens, whereas Studies 4, 5, and 6 were designed to reach heterogeneous adult samples. Together, the six studies underscore the flexibility of the SMIS method.

Table 1 Mediator and Participant Recruitment Details, SMIS Studies

Notes: Cell entries are frequencies or percentages. Social mediators for Study 6 were research assistants who used Facebook to recruit participants among their networks of friends. For more information about Studies 5 and 6, see Hartman 2012.

Approximately one of every eight bloggers we contacted agreed to serve as a social mediator.[8] Even with this relatively modest participation rate among mediators, we easily secured hundreds of respondents for each of our political experiments—considerably more than could be obtained through typical undergraduate subject pools. In terms of participant yield, each social mediator averaged 104 respondents, ranging from a study-level average low of 50 participants to an average high of 158. More important, the mean sample size across the SMIS studies was 1,569 participants, with a range of 297 to 3,219.[9] These figures underscore the effectiveness of the SMIS technique for obtaining research participants.
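These figures support a rough planning calculation; the heuristic below is ours, not a formula from the studies themselves.

```python
# A back-of-the-envelope planner built from the yields reported above
# (our heuristic, not a formula from the studies): roughly 1 in 8
# contacted bloggers agreed to mediate, and each mediator delivered
# about 104 respondents on average.
AGREE_RATE = 1 / 8
RESPONDENTS_PER_MEDIATOR = 104

def expected_n(bloggers_contacted: int) -> float:
    """Expected sample size for a given number of recruitment requests."""
    return bloggers_contacted * AGREE_RATE * RESPONDENTS_PER_MEDIATOR

for contacted in (25, 50, 100):
    print(f"{contacted} bloggers contacted -> ~{expected_n(contacted):.0f} respondents")
# 100 contacts -> ~1,300 expected respondents, on the order of the
# mean SMIS sample size of 1,569 reported in the text.
```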

DIVERSITY IN SMIS SAMPLES

The demographic profiles of all six SMIS samples illustrate the diversity that can be obtained with this recruitment method (see table 2). For comparison, we also include the average profile of nine samples that Berinsky and colleagues (2012) obtained via MTurk (from their tables 1 and 2), an undergraduate convenience sample,[10] and the 2008 ANES time series panel (conducted in person).[11] As expected, our SMIS samples contain biases common to other convenience sampling methods in terms of age, race, and educational attainment (e.g., see Berinsky, Huber, and Lenz 2012; Kam, Wilking, and Zechmeister 2007). Although the volunteer samples obtained via SMIS are not designed to be representative of the general population and should not be presented as such, they are considerably more diverse than the average college student sample. Student samples generally consist of participants from a limited age range and geographic area, restricting variance in both factors. In contrast, the average age of our SMIS respondents was just over 40 years, compared with mean ages of 20 in the student sample and 32 in the MTurk studies. Variability in age matters because it reflects different life experiences, such as having a family, becoming financially independent, or entering retirement, as well as different levels of political experience and engagement. For instance, the typical college student sample is likely to contain a substantial number of individuals who have never been eligible to vote in a presidential election because they were under 18 at the time of the previous one.

Table 2 Demographic Characteristics of SMIS Respondents

Note: Cell entries are percentages, except for age (mean years). See text for details on the MTurk, Student, and ANES studies. Region is based on ANES codes: North East (CT, MA, ME, NH, NJ, NY, PA, RI, VT), North Central (IL, IN, IA, KS, MI, MN, MO, NE, ND, OH, SD, WI), South (AL, AR, DE, Washington DC, FL, GA, KY, LA, MD, MS, NC, OK, SC, TN, TX, VA, WV), West (AK, AZ, CA, CO, HI, ID, MT, NM, NV, OR, UT, WA, WY). "Leaning" Independents and moderates are included in the Independent and Moderate categories, respectively. Percentages do not sum to 100 due to rounding and missing values. Item wording is available in the Appendix.

SMIS samples can also reflect considerable geographic diversity, an important determinant of social and political attitudes (e.g., Brace et al. 2004). Our social mediator recruitment targeted blogs with a national focus; thus, the SMIS samples contain participants drawn from across the United States. Compared with the ANES, the SMIS samples slightly underrepresented the South (24% on average vs. 40% in the ANES) and overrepresented the West (36% on average vs. 21% in the ANES). Berinsky and colleagues' MTurk samples show a similar bias, underrepresenting the South and overrepresenting the Northeast (although not the West). Obviously, the geographic diversity of SMIS samples depends on the social mediators targeted; still, this potential for geographic diversity is a benefit relative to both undergraduate and college-personnel samples (e.g., see Kam, Wilking, and Zechmeister 2007), which are typically drawn from a single location. In fact, many research-focused universities are located in liberal college towns, which can differ markedly from a typical urban or suburban American setting.

In terms of race and ethnicity, both SMIS and MTurk yielded samples that are disproportionately white (SMIS average = 87.5%; MTurk average = 83.5%). The student sample, by contrast, was racially diverse (53% of respondents identified as white and roughly 31% as Asian), in line with the demographic profile of the university but not the region or country more broadly, further reflecting the idiosyncrasies of college-student samples. In terms of education, few differences emerge across the online samples: volunteer and student participants are, for the most part, well educated. By definition, all student participants have some college education, and nearly the same is true of our SMIS participants: just 6% of SMIS respondents reported a high school education or less, whereas 93% reported at least some college. The mean of 14.9 years of schooling in Berinsky and colleagues' (2012) MTurk studies likewise indicates that the average participant had some college experience. In contrast, 43% of ANES respondents report no more than a high school education. Ultimately, many of the observed differences in age, race, and education between volunteer web samples and the ANES reflect the digital divide created by disparities in Internet access and usage (Warschauer 2003).

On average, the SMIS samples were roughly balanced in terms of participant gender, with 56% identifying as female (table 2). There is, however, striking variance in the proportion of women across the SMIS studies, ranging from 22% to 82% of the sample. This variance is both a strength and a weakness of the SMIS technique. On the one hand, studies with a surfeit of female respondents were drawn from blogs that disproportionately attract women, such as BitchPhD, a feminist blog. This type of sample could be a real strength for research on politics and gender, especially research focused on politically engaged women or politicized gender identity. On the other hand, care must be taken to approach blogs that attract an even mix of men and women for broad experimental research. Gender imbalance arises in other volunteer samples as well. The college-personnel sample collected by Kam and colleagues (2007) was 75.7% female, and women tend to be overrepresented in MTurk samples, too: Paolacci and colleagues' (2010) MTurk sample was 75.0% female, compared with Berinsky and colleagues' (2012) samples, which achieved a better gender balance at 60% female on average. The student sample is mostly balanced, with 48% female participants, although gender imbalance is widely noted in psychology (rather than political science) undergraduate subject pools, given the disproportionate number of women enrolled in that major.

When it comes to political characteristics, the SMIS, MTurk, and student samples tend to be more liberal and Democratic than the ANES sample (table 2).[12] Averaged across the SMIS studies, 48% of respondents identified as Democrats, 30% as Independents, and roughly 18% as Republicans. A comparable skew is evident in Berinsky and colleagues' (2012) MTurk samples, which are on average 42% Democratic, 23% Independent, and 25% Republican. In the 2008 ANES, roughly a third of respondents are Democrats and 25% are Republicans. A similar pattern is evident for self-reported ideology: 55% of SMIS respondents are liberal, compared with 48% in the MTurk studies and 13% in the ANES. The college student sample in table 2 is also far more liberal than conservative, although the nature of student samples varies greatly with the institution and its location. A similar pattern is observed for Project Implicit samples, which also consist of online volunteers: averaged across nine published Project Implicit studies, 51% of respondents identify as liberal, 26% as moderate, and 21% as conservative. Interestingly, the SMIS samples vary in the degree to which liberals dominate, depending on the blogs targeted for participation. Thus, compared with MTurk, SMIS offers an opportunity to correct ideological imbalance through the selective recruitment of conservative bloggers when samples skew too heavily to the left. This prospect is not readily available with other inexpensive techniques for obtaining volunteer participants.

The heterogeneity among the six samples obtained using this approach—both in political predispositions and in sociodemographic characteristics—provides a sense of the variability surrounding SMIS samples. Any two SMIS samples will likely differ more than any two conventional student samples, which are drawn from a far more homogeneous population. Indeed, this homogeneity is the primary concern surrounding student samples and the motivating force behind efforts to find alternatives such as SMIS. The information in table 2 should encourage researchers interested in this approach to think carefully and systematically about social mediator selection, how it will affect the characteristics of the resulting sample, and the degree to which a sample will contain the kind of diversity likely to uncover heterogeneous reactions to the experimental treatment. The SMIS samples obtained in our six studies reflect considerable demographic and political diversity, but they are convenience samples and, as with any nonprobability sample, descriptive sample statistics cannot be generalized to a broader population.

TARGETING SPECIAL POPULATIONS

Social mediator selection is especially important when targeting special populations. A distinctive feature of the SMIS approach, relative to MTurk and other techniques, is its ability to "infiltrate" specific political communities. Online networks of politically engaged and active citizens, including those focused on a single political issue (Converse 1964), are composed of Americans who are deeply immersed in emotionally charged political debate. These highly engaged Americans, who hold strong political views, are thought by many to exert disproportionate influence on political outcomes (e.g., Abramowitz 2010). Experiments conducted among such respondents are particularly illuminating when it comes to electoral dynamics and the origins of political attitudes and candidate judgments. By taking advantage of the preexisting social organization provided by Web 2.0 platforms, researchers can reach these diffuse populations (Mathy et al. 2002; Skitka and Sargis 2006).

For SMIS Studies 1, 2, and 3, we were specifically interested in the psychological origins of political engagement, identity, and emotion. In these studies, we sought highly engaged and sophisticated Americans with strong political identities to participate in several online experiments. As seen in table 3, this approach proved fruitful—levels of political engagement are far higher among these three SMIS samples than among ANES respondents. In SMIS Study 1, for example, participants had been actively engaged in the 2004 presidential election: almost four in 10 had attended political meetings or rallies (38% vs. 8% in the 2008 ANES), nearly half (49%) had worn a button or displayed a campaign sticker (compared with 16% in the 2008 ANES), more than three in four had tried to persuade another voter (compared with 45% in the ANES), 40% had donated money to a candidate (compared with 11% in the ANES), and a third had donated to a political party (compared with 8% in the ANES). In addition, SMIS respondents in all three studies proved to be very knowledgeable about politics. As seen in table 2, SMIS participants recruited from politically active blogs answered 90% of the political knowledge items correctly on average, compared with roughly 67% correct in the other SMIS studies (Studies 5 and 6), 71% in the MTurk sample, and only 42.5% in the ANES sample.[13]

Table 3 Electoral Participation and Belief Constraint in Activist SMIS and ANES Samples

Notes: Election participation measures for the Partisan Identity Studies are based on self-reported intentions because these studies occurred before the 2008 presidential election. Party ID and Ideology range from 1 to 7, where high values indicate Democrats and liberals. Church Attendance is measured on a scale where "1" means "Never" and "6" means "More than once a week" (from 1 to 8 in the Partisan Identity Studies). Biblical Orthodoxy is coded from 1 to 3, where "1" means "the Bible was written by men" and "3" means that it is the "actual word of God." Democratic Vote Choice is dummy coded such that 1 indicates a vote (or vote intention) for the Democratic presidential candidate. Precise item wordings are provided in the appendix.

** p < .01.

Our findings underscore the ease with which highly engaged and knowledgeable partisans can be targeted for recruitment using SMIS. In addition to greater political engagement and knowledge, targeted SMIS respondents also demonstrated high levels of constraint among their political beliefs, including party identification, ideological self-placement, religious beliefs, and political behavior. For instance, the average correlation between partisanship and ideology is 0.76 in the three SMIS studies, compared with only 0.56 in the 2008 ANES.[14] In addition, ideology, church attendance, and views on biblical orthodoxy were more strongly correlated with vote choice in the SMIS studies than in the ANES. Overall, the political views of the targeted SMIS samples are far more constrained than those of ANES participants, and their vote choice is more partisan, more ideological, and more polarized on religious-secular grounds.
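The constraint comparison reduces to a Pearson correlation between the two seven-point scales. In the sketch below, the data are simulated to mimic the reported values; the actual analysis would of course use the survey responses.

```python
# Sketch of the belief-constraint comparison: a Pearson correlation
# between 7-point party identification and ideology scales. Data are
# simulated here to mimic the reported values; the actual analysis
# would use the survey responses.
import numpy as np

rng = np.random.default_rng(0)

def sample_correlation(target_r: float, n: int = 1500) -> float:
    """Draw two correlated scale scores and return their sample correlation."""
    cov = [[1.0, target_r], [target_r, 1.0]]
    pid, ideology = rng.multivariate_normal([4.0, 4.0], cov, size=n).T
    return np.corrcoef(pid, ideology)[0, 1]

print(f"SMIS-like sample: r = {sample_correlation(0.76):.2f}")
print(f"ANES-like sample: r = {sample_correlation(0.56):.2f}")
```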

This glimpse into the belief systems of SMIS participants underscores the technique's ability to attract engaged partisans and ideologues, individuals who strongly connect religious and political beliefs, and those who act (and act frequently) in accordance with their beliefs and values. The potential to capture these politically engaged, knowledgeable, active Americans is a strength of the SMIS technique and could lend insight into the opinion and behavior of political sophisticates, which do not always mirror those of the mass public, as in the cases of economic voting (Gomez and Wilson 2001), political information processing (Taber and Lodge 2006), emotion and political cognition (Miller 2011), and framing (Druckman and Nelson 2003). Moreover, actively engaged citizens exert disproportionate influence on American politics through regular voting, political actions (e.g., contacting members of Congress), and campaign donations. Thus, the flexibility of the SMIS sampling technique is an asset for conducting experimental research on politically active and engaged citizens, both to better understand the factors that condition their political participation and to shed light on the dynamics of election and issue-based campaigns.

GENERAL PROCEDURES FOR CONDUCTING SMIS STUDIES

Recruiting richly networked and influential social mediators is central to the SMIS method and provides access to participants at little or no cost to researchers beyond their time. Technorati (www.technorati.com) can identify blogs and forums for both general adult samples and specific target populations. Its content is organized topically into categories such as entertainment, business, sports, technology, and politics, and a researcher seeking a generic adult convenience sample can select and contact potential mediators across these content domains. When seeking a targeted sample, as in our case with political sophisticates, the content posted on blogs and discussion forums offers useful insights into the characteristics of the social mediator's network. For example, we located strong partisans on blogs that had a clear, consistent partisan stance and focused exclusively on political content. We also determined the ideological orientation of blogs from the "about" or "contributor bio" sections of the site. "Blogrolls," a blogger's list of additional blogs that may interest readers, can identify other relevant social mediators and broaden access to the online social network; a rough sketch of this snowball strategy appears below.
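Where a blogroll is published as ordinary hyperlinks, this discovery step can be partially automated. The sketch below is a crude illustration with an invented seed URL; real pages vary widely, and any candidate list requires manual vetting before contact.

```python
# A crude, hypothetical sketch of snowball discovery via blogrolls.
# The seed URL is invented; real sites vary widely, so results must be
# vetted by hand (and recruitment must follow IRB-approved scripts).
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def outbound_domains(url: str) -> set:
    """Return external domains linked from a page, a rough blogroll proxy."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    own_domain = urlparse(url).netloc
    return {urlparse(a["href"]).netloc
            for a in soup.find_all("a", href=True)
            if urlparse(a["href"]).netloc not in ("", own_domain)}

seed = "https://example-politics-blog.net"  # hypothetical seed blog
candidates = outbound_domains(seed)
print(sorted(candidates))  # candidate mediators to review before any contact
```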

Social networking websites like Facebook and Google+ offer an alternative to blogs and discussion forums for acquiring participants. To recruit general convenience samples on these sites, first assemble a team of research assistants willing to serve as social mediators by soliciting participants from their own social circles. Research assistants may also identify other central figures within their networks (e.g., individuals with many "friends") and encourage these contacts to assist with recruitment, thus widening access to a broader base of potential participants. The success of this strategy depends on the number of research assistants, as well as the size of their social circles. Facebook can also be used to identify and target specific populations: research assistants can make targeted appeals by combing their contacts' profiles for criteria related to specific activities, interests, and political and religious views.

These platforms can also be used to recruit targeted samples through Facebook or Google+ groups organized around a central theme. Rather than relying on diffuse networks and individualized contact, researchers can locate a group page dedicated to political discussion or to particular political causes to recruit participants who are, for example, engaged citizens, strong partisans, or citizens active on a specific political issue. This approach allows for focused recruitment and has the added advantage of breaking up any geographic dependence in the research assistants' social networks. Facebook hosts pages for many politically relevant groups, such as "Stand with Arizona (and Against Illegal Immigration)," which as of early 2013 was "liked" or followed by approximately 600,000 Facebook users. Other pages are linked to campaign rather than issue-specific mobilization; the page "Dogs against Romney," for example, was created during the 2012 presidential campaign to raise awareness about Mitt and Ann Romney's alleged animal rights abuses. The settings on these pages vary—some allow any user to post content to the main page, whereas others require permission from the page administrator(s).

Second, regardless of the settings, maximize cooperation by contacting the Facebook page administrator (the social mediator in this case) before posting a recruitment message, in order to obtain permission, provide clear instructions, and assuage potential concerns about the project itself. Dillman and colleagues (2009) provide sound advice on how to craft the initial request to potential mediators, as well as how to plan follow-up communications (see also Kam, Wilking, and Zechmeister 2007; Orr 2005). The first correspondence with potential mediators, approved by the researcher's institutional review board (IRB), should contain three components: (1) a mediator recruitment script, which introduces the researcher, details the purpose of the study, and explains the mediator's role in obtaining participants; (2) a brief participant recruitment script that mediators can post verbatim for their readers; and (3) a working hyperlink to the survey or task used in the study. Examples from our own studies are provided in the online Supplementary Appendix.[15] Third, before making any contact with mediators, researchers should thoroughly test the survey (or tasks) to ensure that it is error-free and that the resulting data will be properly collected. In addition, correspondence should originate from a university-assigned e-mail address to reassure potential mediators of the study's legitimacy. Finally, encourage mediators to test the study materials to determine whether they are well suited to their particular readers.[16]
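Anticipating footnote 16's suggestion to record which website referred each subject, one simple approach (ours; the survey URL and mediator identifiers are hypothetical) is to give each mediator a uniquely tagged hyperlink:

```python
# One simple way to implement footnote 16's suggestion: tag each
# mediator's survey hyperlink with a source parameter so participant
# counts can be tallied per mediator. URL and IDs are hypothetical.
from urllib.parse import urlencode

SURVEY_URL = "https://survey.example.edu/study1"

def link_for(mediator_id: str) -> str:
    """Build the hyperlink to include in that mediator's recruitment post."""
    return f"{SURVEY_URL}?{urlencode({'src': mediator_id})}"

for mediator in ("blog_alpha", "forum_beta"):
    print(mediator, "->", link_for(mediator))
# Most survey platforms can capture the 'src' query value with each
# response, yielding a per-mediator participant count after fielding.
```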

In our experience, although some mediators immediately agreed to assist with recruitment, this was not the norm. Time, effort, and patience are needed to cultivate relationships and to gain the genuine cooperation and trust of mediators. Bloggers and discussion forum moderators are more likely to assist with a study if they forge a relationship with a member of the research team than if communications are terse and impersonal. To personalize each mediator request, read recent blog posts or forum discussion threads to better understand the website's purpose, and reference a specific post or thread in the initial correspondence (a strategy that must be approved by the researcher's IRB). Another effective strategy for securing mediator participation is to make a simple request for assistance with a graduate student's research project (if appropriate). If special populations that may be suspicious of academic research (e.g., strong conservatives) are sought, inform mediators that their readers are important to provide balance and to ensure that the study reflects a diversity of viewpoints. Once again, such content must be approved in advance by the researcher's IRB.

Bloggers and forum moderators are wary of exposing their readers and contributors to scams or to push polls designed to alter rather than measure public opinion, and scholars should expect mediators to investigate the research team. Researchers should therefore scrutinize their own online profiles, including content on their academic or personal websites such as photos, links to other websites, and endorsements, for anything that might discourage mediators from agreeing to participate. In addition, researchers should remove lengthy or detailed study information from their websites during data collection, as some mediators will post links to the researchers' websites along with the participant recruitment script. Maintaining a professional web presence helps instill confidence in both mediators and study participants that the project is legitimate and worth their time.

CONCLUSIONS

We have used the SMIS technique to secure nonstudent samples for experimental political science research and to gain access to highly involved and politically sophisticated individuals efficiently and at minimal cost. In these respects, the SMIS method works well. Evidence from our six studies demonstrates how social media afford access to a large and diverse pool of participants for experimental research, obviating some of the limitations of commonly used undergraduate and community-based sampling methods. While the samples deviate from population characteristics in ways common to other web-based convenience samples, such as those obtained from MTurk and Project Implicit, SMIS samples are more diverse in age, region, and political experience than the typical student sample. By introducing diversity, especially in political engagement, SMIS studies extend the reach of political experimentation to demographically varied samples. Moreover, the ability to target the highly politically engaged individuals most likely to be exposed to tailored political communications allows researchers to evaluate the heterogeneous treatment effects and self-selection processes that complicate experimental research—potentially resulting in more accurate estimates of treatment effects. A study that obtains reactions to political communications from people who are most inclined to seek them out has higher ecological validity than one that artificially exposes respondents to communications they would not otherwise see or hear (Gaines and Kuklinski 2011; Kinder 2007).

All of the online sampling approaches mentioned in this article vary in their relative strengths and weaknesses. Approaches such as SMIS and MTurk provide low-cost options for data collection that circumvent some of the limitations of student samples, but they have their own problems. Krupnikov and Levine (2013) note that MTurk respondents reported participating in an average of 37.2 studies, which raises concerns about the savvy or skeptical nature of these participants. Indeed, the authors show that experienced MTurk respondents are more likely to disregard experimental instructions than are participants in YouGov's panels. SMIS samples do not suffer from this "expertise" problem, and, unlike MTurk, they do not require a token payment to participants.

SMIS samples are not without limitations. The samples obtained this way can be highly variable, and researchers must take care when selecting a potential pool of mediators to ensure that participants fit the desired demographic and political profile. Ultimately, researchers bear the burden of justifying their choice of sample—SMIS or otherwise. On this point, researchers using SMIS should avoid describing marginal frequencies or other descriptive statistics as if they were representative of a general population. It is also critical to consider whether any nonprobability sample will have characteristics relevant to the causal relationship being studied. If a sample is highly educated, sophisticated, or partisan (as SMIS and MTurk samples tend to be, on average), researchers must interpret their experimental results in light of existing knowledge about opinion dynamics among political sophisticates (e.g., Zaller 1992). Confidence in the results of any particular experiment rests, in the end, on good research practices and on replication across samples. Although the results of any given SMIS experiment may not conclusively establish "real world" causation, such research can be suggestive and insightful, pointing to potentially complex causal relationships to be explored and verified in subsequent studies.

Based on our experiences with SMIS, political researchers have much to gain by turning to the web to recruit research participants rather than relying solely on undergraduate student samples for experimental research. The SMIS approach is flexible and inexpensive, placing it within reach of any researcher who can invest the time necessary to develop relationships with social mediators. While our studies focus on the American political context, the utility of the SMIS approach is not geographically bound, given the widespread adoption of Web 2.0 technologies; for example, SMIS could be used to study international political behavior by tapping into blogs read by citizens of different countries. We hope others will use the SMIS technique in their experimental work and take advantage of its ability to access highly engaged and politically active citizens, among other low-incidence populations, who reflect important and often understudied segments of the public.

ACKNOWLEDGMENTS

The authors would like to thank Adam Berinsky, Dan Corstange, and the attendees of the Methodology in Political Psychology conference held at the Ohio State University in October 2010 for their helpful comments on an early draft of this manuscript.

Footnotes

1 For more information on the journal, see http://journals.cambridge.org/action/displayJournal?jid=XPS. See also The Experimental Political Scientist, the newsletter of APSA's experimental section, available online at http://scholar.harvard.edu/dtingley/pages/exppolisci.

2 The highly successful NSF-sponsored Time-Sharing Experiments for the Social Sciences (TESS) project, which draws respondents from the national Knowledge Networks panel, is a notable exception (Mutz 2011).

3 Web survey respondents tend to be young, highly educated, and economically advantaged (e.g., see Alvarez, Sherman, and Van Beselaere 2003; Berrens, Bohara, Jenkins-Smith, Silva, and Weimer 2003; Chang and Krosnick 2009; Malhotra and Krosnick 2007; Yaeger et al. 2009).

4 To determine the prevalence of web-based sampling within political science, we reviewed articles published in eight leading journals in the discipline—American Journal of Political Science, American Political Science Review, Comparative Political Studies, Journal of Conflict Resolution, Journal of Politics, Political Analysis, Political Behavior, and Political Psychology—between January 1, 2002, and July 31, 2012. During this period, 110 articles included at least one web-based sample, and the overwhelming majority (79%) used samples purchased from web-based panels such as YouGov (opt-in) and Knowledge Networks (probability). In stark contrast, only 23 published articles used a less expensive or no-cost web-based convenience sample, such as MTurk (2), Project Implicit (2), the Web Experiment List (2), an opt-in volunteer panel comparable to Project Implicit (Reips and Lengler 2005), e-mail recruitment (8), a virtual flyer (1), and vague techniques described as "adult convenience samples derived from various websites" (9).

5 Project Implicit is the product of collaborative research on implicit social cognition. The original principal investigators have developed a web-based infrastructure to support further research in this area. For more information, see https://implicit.harvard.edu/implicit/.

6 Study 1: The Culture Wars Study (2006) examined the link between various identities (social, religious, and political) and support for "culture wars" policy issues such as abortion and gay rights. Studies 2 and 3: Two Partisan Identity Studies (2007, 2008) were conducted to investigate partisan identity and emotional arousal. Study 4: The Campaign Ads Study (2007) examined the emotional impact of experimentally altered campaign ads on political attitudes and participation. Studies 5 and 6: Two Political Metaphors Studies (Hartman 2012) examined the effects of policy metaphors on political persuasion.

7 Use of Facebook rather than weblogs or discussion forums still relies on social mediators for recruitment—in this case, research assistants who provide access to their social networks. Facebook can also be used to tap centralized networks by accessing Facebook groups organized around specific issues, interests, and activities.

8 The bloggers and forum moderators who participated in the studies were not journalists, nor were they associated with the mainstream media in any way. Some of the blogs had minor sponsorship, given the size of their readerships, but all of their authors would be considered amateur rather than professional bloggers. Many of the bloggers and forum moderators who agreed to participate regularly produced political content, ranging from roughly one post per week on a political topic to blogs expressly dedicated to political discourse.

9 In the one study that used Facebook, four research assistants recruited a total of 141 respondents, for an average yield of 35 subjects per research assistant.

10 The undergraduate sample was collected in 2008 at a large, public university in the Northeast. This dataset tests the same hypotheses as SMIS Study 6 (i.e., the Political Metaphors Study; Hartman 2012).

11 The ANES data have been weighted to reduce the influence of the racial oversamples.

12 See appendix for question wording.

13 Students in this sample were drawn from political science courses, which may explain why they score higher on the political knowledge quiz than does the average American.

14 A comparison of correlations has been used periodically to evaluate the quality of web data and assess the success of various survey recruitment strategies (Berrens et al. 2003; Chang and Krosnick 2009; Malhotra and Krosnick 2007).

15 For the sake of brevity, we have created an online appendix in which we present tables and figures representing various auxiliary analyses. This online appendix can be found at the Cambridge University Press website: http://dx.doi.org/10.1017/S1049096513001029.

16 It may be helpful to include a question on the survey that identifies which website referred each subject, so that the researcher can keep track of the effectiveness of each social mediator.

References

REFERENCES

Abramowitz, Alan. 2010. The Disappearing Center: Engaged Citizens, Polarization, and American Democracy. New Haven, CT: Yale University Press.
Alvarez, R. Michael, Sherman, Robert P., and Van Beselaere, Carla. 2003. "Subject Acquisition for Web-Based Surveys." Political Analysis 11 (1): 23–43.
Berinsky, Adam J., Huber, Gregory A., and Lenz, Gabriel S. 2012. "Using Mechanical Turk as a Subject Recruitment Tool for Experimental Research." Political Analysis 20 (3): 351–68.
Berrens, Robert P., Bohara, Alok K., Jenkins-Smith, Hank, Silva, Carol, and Weimer, David L. 2003. "The Advent of Internet Surveys for Political Research: A Comparison of Telephone and Internet Samples." Political Analysis 11 (1): 1–22.
Birnbaum, Michael H. 2004. "Human Research and Data Collection via the Internet." Annual Review of Psychology 55: 803–32.
Brace, Paul, Arceneaux, Kevin, Johnson, Martin, and Ulbig, Stacey G. 2004. "Does State Political Ideology Change over Time?" Political Research Quarterly 57 (4): 529–40.
Buhrmeister, Michael, Kwang, Tracy, and Gosling, Samuel D. 2011. "Amazon's Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data?" Perspectives on Psychological Science 6 (1): 3–5.
Chang, LinChiat, and Krosnick, Jon A. 2009. "National Surveys via RDD Telephone Interviewing versus the Internet: Comparing Sample Representativeness and Response Quality." Public Opinion Quarterly 73 (4): 641–78.
Converse, Philip E. 1964. "The Nature of Belief Systems in Mass Publics." In Ideology and Discontent, ed. Apter, David E., 206–61. New York: Free Press.
Dillman, Don A., Smyth, Jolene D., and Christian, Leah Melani. 2009. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: Wiley.
Druckman, James N., Green, Donald P., Kuklinski, James H., and Lupia, Arthur. 2006. "The Growth and Development of Experimental Research in Political Science." American Political Science Review 100 (4): 627–35.
Druckman, James N., Green, Donald P., Kuklinski, James H., and Lupia, Arthur. 2011. Cambridge Handbook of Experimental Political Science. New York: Cambridge University Press.
Druckman, James N., and Nelson, Kjersten R. 2003. "Framing and Deliberation: How Citizens' Conversations Limit Elite Influence." American Journal of Political Science 47 (4): 729–45.
Gaines, Brian J., and Kuklinski, James H. 2011. "Experimental Estimation of Heterogeneous Treatment Effects Related to Self-Selection." American Journal of Political Science 55 (3): 724–36.
Glaser, Jack, Dixit, Jay, and Green, Donald P. 2002. "Studying Hate Crime with the Internet: What Makes Racists Advocate Racial Violence?" Journal of Social Issues 58 (1): 177–93.
Gomez, Brad T., and Wilson, J. Matthew. 2001. "Political Sophistication and Economic Voting in the American Electorate: A Theory of Heterogeneous Attribution." American Journal of Political Science 45 (4): 899–914.
Green, Donald P., and Gerber, Alan S. 2004. Get Out the Vote! How to Increase Voter Turnout. Washington, DC: Brookings Institution Press.
Hartman, Todd K. 2012. "Toll Booths on the Information Superhighway? Policy Metaphors in the Case of Net Neutrality." Political Communication 29: 278–98.
Henry, P. J. 2008. "College Sophomores in the Laboratory Redux: Influences of a Narrow Database on Social Psychology's View of the Nature of Prejudice." Psychological Inquiry 19 (2): 49–71.
Imai, Kosuke, Keele, Luke, Tingley, Dustin, and Yamamoto, Teppei. 2011. "Unpacking the Black Box: Learning About Causal Mechanisms from Experimental and Observational Studies." American Political Science Review 105 (4): 765–89.
Imai, Kosuke, and Strauss, Aaron. 2011. "Estimation of Heterogeneous Treatment Effects from Randomized Experiments, with Application to the Optimal Planning of the Get-Out-the-Vote Campaign." Political Analysis 19 (1): 1–19.
Kam, Cindy D., Wilking, Jennifer R., and Zechmeister, Elizabeth J. 2007. "Beyond the 'Narrow Database': Another Convenience Sample for Experimental Research." Political Behavior 29 (4): 415–40.
Kinder, Donald R. 2007. "Curmudgeonly Advice." Journal of Communication 57: 155–62.
Kinder, Donald R., and Palfrey, Thomas R. 1993. Experimental Foundations of Political Science. Ann Arbor: University of Michigan Press.
Klar, Malte, and Kasser, Tim. 2009. "Some Benefits of Being an Activist: Measuring Activism and Its Role in Psychological Well-Being." Political Psychology 30 (5): 755–77.
Krupnikov, Yanna, and Levine, Adam S. 2013. "What Is the Preferred Population? Comparing 'College Sophomores' and Adults in Political Science Experiments." Working paper.
Malhotra, Neil, and Krosnick, Jon A. 2007. "The Effect of Survey Mode and Sampling on Inferences about Political Attitudes and Behavior: Comparing the 2000 and 2004 ANES to Internet Surveys with Nonprobability Samples." Political Analysis 15 (3): 286–323.
Mason, Winter, and Suri, Siddharth. 2012. "Conducting Behavioral Research on Amazon's Mechanical Turk." Behavior Research Methods 44 (1): 1–23.
Mathy, Robin M., Schillace, Mark, Coleman, Sarah M., and Berquist, Barrie E. 2002. "Methodological Rigor with Internet Samples: New Ways to Reach Underrepresented Populations." CyberPsychology and Behavior 5 (3): 253–66.
Miller, Joanne M., and Krosnick, Jon A. 2004. "Threat as a Motivator of Political Activism: A Field Experiment." Political Psychology 25 (4): 507–23.
Miller, Patrick R. 2011. "The Emotional Citizen: Emotion as a Function of Political Sophistication." Political Psychology 32 (4): 575–600.
Morton, Rebecca B., and Williams, Kenneth C. 2010. Experimental Political Science and the Study of Causality: From Nature to the Lab. Cambridge: Cambridge University Press.
Mutz, Diana. 2011. Population-Based Survey Experiments. Princeton, NJ: Princeton University Press.
Orr, Shannon K. 2005. "New Technology and Research: An Analysis of Internet Survey Methodology in Political Science." PS: Political Science & Politics 38 (2): 263–67.
Paolacci, Gabriele, Chandler, Jesse, and Ipeirotis, Panagiotis G. 2010. "Running Experiments on Amazon's Mechanical Turk." Judgment and Decision Making 5 (5): 411–19.
Reips, Ulf-Dietrich. 2000. "The Web Experiment Method: Advantages, Disadvantages, and Solutions." In Psychological Experiments on the Internet, ed. Birnbaum, Michael H., 89–116. San Diego, CA: Academic Press.
Reips, Ulf-Dietrich, and Lengler, Ralph. 2005. "The Web Experiment List: A Web Service for the Recruitment of Participants and Archiving of Internet-Based Experiments." Behavior Research Methods 37 (2): 287–92.
Rivers, Douglas, and Bailey, Delia. 2009. "Inference from Matched Samples in the 2008 U.S. National Elections." Proceedings of the Joint Statistical Meetings, 627–39.
Sears, David O. 1986. "College Sophomores in the Laboratory: Influences of a Narrow Data Base on Social Psychology's View of Human Nature." Journal of Personality and Social Psychology 51 (3): 515–30.
Simon, Bernd, and Klandermans, Bert. 2001. "Politicized Collective Identity: A Social Psychological Analysis." American Psychologist 56 (4): 319–31.
Singla, Parag, and Richardson, Matthew. 2008. "Yes, There Is a Correlation: From Social Networks to Personal Behavior on the Web." Proceedings of the 17th International Conference on the World Wide Web. http://research.microsoft.com/en-us/um/people/mattri/papers/www2008/socialnetworksandsearch.pdf (accessed January 15, 2013).
Skitka, Linda, and Sargis, Edward G. 2006. "The Internet as Psychological Laboratory." Annual Review of Psychology 57: 529–55.
Taber, Charles S., and Lodge, Milton. 2006. "Motivated Skepticism in the Evaluation of Political Beliefs." American Journal of Political Science 50 (3): 755–69.
Thomas, Emma F., McGarty, Craig, and Mavor, Kenneth I. 2009. "Transforming Apathy into Movement: The Role of Prosocial Emotions in Motivating Action for Social Change." Personality and Social Psychology Review 13: 310–33.
Warschauer, Mark. 2003. Technology and Social Inclusion: Rethinking the Digital Divide. Cambridge, MA: MIT Press.
Wattenberg, Martin P. 2011. Is Voting for Young People? Upper Saddle River, NJ: Pearson.
Yaeger, David S., Krosnick, Jon A., Chang, LinChiat, Javitz, Harold S., Levendusky, Matthew S., Simpser, Alberto, and Wang, Rui. 2009. "Comparing the Accuracy of RDD Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples." Unpublished manuscript. http://comm.stanford.edu/faculty/krosnick/Mode%2004 (accessed January 15, 2013).
Zaller, John. 1992. The Nature and Origins of Mass Opinion. New York: Cambridge University Press.

Supplementary material: Cassese et al. supplementary material (Appendix), PDF, 223.6 KB.