
The Advent of Internet Surveys for Political Research: A Comparison of Telephone and Internet Samples

Published online by Cambridge University Press: 04 January 2017

Robert P. Berrens
Department of Economics, University of New Mexico, Albuquerque, NM 87131

Alok K. Bohara
Department of Economics, University of New Mexico, Albuquerque, NM 87131

Hank Jenkins-Smith
George Bush School of Government and Public Service, Texas A&M University, College Station, TX 77843

Carol Silva
George Bush School of Government and Public Service, Texas A&M University, College Station, TX 77843

David L. Weimer
Department of Political Science and La Follette School of Public Affairs, University of Wisconsin-Madison, Madison, WI 53706. e-mail: [email protected]

Abstract

The Internet offers a number of advantages as a survey mode: low marginal cost per completed response, the capacity to provide respondents with large quantities of information, speed, and elimination of interviewer bias. Those seeking these advantages confront the problem of representativeness, both in terms of population coverage and the capability to draw random samples. Two major strategies have been pursued commercially to develop the Internet as a survey mode. One strategy, used by Harris Interactive, involves assembling a large panel of willing respondents who can be sampled. Another strategy, used by Knowledge Networks, involves using random digit dialing (RDD) telephone methods to recruit households to a panel of Web-TV enabled respondents. Do these panels deal with the problem of representativeness well enough to be useful in political science research? The authors address this question with results from parallel surveys on global climate change and the Kyoto Protocol administered by telephone to a national probability sample and by Internet to samples of the Harris Interactive and Knowledge Networks panels. Knowledge and opinion questions generally show statistically significant but substantively modest differences across the modes. With inclusion of standard demographic controls, typical relational models of interest to political scientists produce similar estimates of parameters across modes. It thus appears that, with appropriate weighting, samples from these panels are sufficiently representative of the U.S. population to be reasonable alternatives in many applications to samples gathered through RDD telephone surveys.
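The "appropriate weighting" mentioned in the abstract is commonly implemented by post-stratification raking (iterative proportional fitting), in which sample weights are adjusted until weighted demographic shares match known population margins. The abstract does not specify the authors' exact procedure, so the following is only a minimal sketch of the general technique: the function `rake`, the sex and education codings, and the target margins are all hypothetical illustrations, not values from the study.

```python
import numpy as np

def rake(weights, groups, targets, n_iter=100, tol=1e-8):
    """Adjust unit weights until weighted category shares match
    target population margins on each demographic dimension.

    weights : initial design weights, shape (n,)
    groups  : list of integer category labels per dimension, each shape (n,)
    targets : list of target population shares per dimension (sum to 1)
    """
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(n_iter):
        max_adj = 0.0
        for g, t in zip(groups, targets):
            total = w.sum()  # total before adjusting this dimension
            for k, share in enumerate(t):
                mask = (g == k)
                csum = w[mask].sum()
                if csum > 0:
                    factor = share * total / csum
                    w[mask] *= factor
                    max_adj = max(max_adj, abs(factor - 1.0))
        if max_adj < tol:  # stop once all margins are matched
            break
    return w

# Hypothetical example: reweight a panel sample of 1,000 respondents
# to Census-style margins for sex and education.
rng = np.random.default_rng(0)
n = 1000
sex = rng.integers(0, 2, size=n)   # 0 = male, 1 = female (hypothetical coding)
educ = rng.integers(0, 3, size=n)  # 0 = <HS, 1 = HS/some college, 2 = BA+
w = rake(np.ones(n), [sex, educ],
         [[0.49, 0.51], [0.15, 0.55, 0.30]])  # hypothetical margins
print(round((w * (sex == 1)).sum() / w.sum(), 3))  # ~0.51 after raking
```

Once each sample is raked to common population margins, weighted estimates of knowledge and opinion items, and of regression parameters, can be compared across the telephone and Internet modes on an equal demographic footing, which is the spirit of the comparisons the abstract describes.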

Type: Web Surveys
Copyright © Political Methodology Section of the American Political Science Association 2003

