
When to Protect? Using the Crosswise Model to Integrate Protected and Direct Responses in Surveys of Sensitive Behavior

Published online by Cambridge University Press: 04 January 2017

Daniel W. Gingerich*
Affiliation:
Department of Politics, University of Virginia, Charlottesville, VA 22903
Virginia Oliveros
Affiliation:
Department of Political Science, Tulane University, New Orleans, LA 70118
Ana Corbacho
Affiliation:
Western Hemisphere Department, International Monetary Fund, Washington, DC 20431
Mauricio Ruiz-Vega
Affiliation:
Western Hemisphere Department, International Monetary Fund, Washington, DC 20431

Abstract

Sensitive survey techniques (SSTs) are frequently used to study sensitive behaviors. However, existing strategies for employing SSTs produce highly variable prevalence estimates and do not permit analysts to assess whether the use of an SST is actually necessary. This article presents a survey questioning strategy and a corresponding statistical framework that fill this gap. By jointly analyzing survey responses generated by an SST (the crosswise model) alongside direct responses about the sensitive behavior, the framework tests whether an SST is required to study a given sensitive behavior, provides an efficient estimate of the prevalence of the behavior, and, in its extended form, efficiently estimates how individual characteristics relate to the likelihood of engaging in it. The utility of the approach is demonstrated through an examination of gender differences in proclivities toward corruption in Costa Rica.
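To see the logic of the design, note that under the crosswise model each respondent reports only whether the answer to the sensitive item matches the answer to an innocuous item whose population prevalence p is known (e.g., whether one's mother was born in the first quarter of the year). Because the probability of a "same" report is lambda = pi*p + (1 - pi)*(1 - p), the prevalence pi of the sensitive trait is identified even though no individual answer is revealing. The sketch below is a minimal illustration of this estimator and of a simple one-sided comparison against the direct-question estimate; the function names, the z-test, and the simulated data are illustrative assumptions, not the authors' replication code (which is available at the Dataverse link in the footnotes).

```python
import numpy as np
from scipy import stats

def crosswise_prevalence(same, p):
    """Crosswise-model estimate of sensitive-trait prevalence.

    same: 0/1 array; 1 means the respondent reported that the answers to
          the sensitive item and the innocuous item are THE SAME.
    p:    known probability that the innocuous item is true (p != 0.5).
    """
    lam = same.mean()                        # observed share of "same" reports
    pi = (lam + p - 1.0) / (2.0 * p - 1.0)   # invert lam = pi*p + (1-pi)*(1-p)
    se = np.sqrt(lam * (1.0 - lam) / len(same)) / abs(2.0 * p - 1.0)
    return pi, se

def direct_prevalence(admit):
    """Naive prevalence estimate from direct yes/no responses."""
    pi = admit.mean()
    return pi, np.sqrt(pi * (1.0 - pi) / len(admit))

def sst_needed(same, p, admit):
    """One-sided z-test of H0: crosswise prevalence <= direct prevalence.

    If respondents only underreport (never overreport) under direct
    questioning, a significantly higher protected estimate indicates
    that protection is needed.
    """
    pi_c, se_c = crosswise_prevalence(same, p)
    pi_d, se_d = direct_prevalence(admit)
    z = (pi_c - pi_d) / np.hypot(se_c, se_d)
    return pi_c, pi_d, z, stats.norm.sf(z)   # sf gives the one-sided p-value

# Simulated illustration: true prevalence is 0.30, but only 60% of
# engagers admit the behavior when asked directly.
rng = np.random.default_rng(1)
n = 2000
x = rng.random(n) < 0.30                     # true sensitive status
innocuous = rng.random(n) < 0.25             # innocuous item with known p = 0.25
same = (x == innocuous).astype(float)        # truthful crosswise reports
admit = (x & (rng.random(n) < 0.60)).astype(float)
print(sst_needed(same, p=0.25, admit=admit))
```

When the test fails to reject, direct responses look trustworthy and, in the article's framework, can be combined with the protected responses for a more efficient prevalence estimate; the sketch above covers only the diagnostic comparison.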

Type
Articles
Copyright
Copyright © The Author 2015. Published by Oxford University Press on behalf of the Society for Political Methodology 


Footnotes

Authors' note: This article was prepared when Ana Corbacho was Sector Economic Advisor and Daniel Gingerich and Virginia Oliveros were visiting scholars at the Inter-American Development Bank. The data, code, and any additional materials required to replicate all analyses in this article are available on the Political Analysis Dataverse of Harvard University's Dataverse Network at: http://dx.doi.org/10.7910/DVN/2AIHQF. Supplementary materials for this article are available on the Political Analysis Web site.

References

Aronow, P. M., Coppock, A., Crawford, F. W., and Green, D. P. 2015. Combining list experiment and direct question estimates of sensitive behavior prevalence. Journal of Survey Statistics and Methodology. doi:10.1093/jssam/smu023.
Azfar, O., and Murrell, P. 2009. Identifying reticent respondents: Assessing the quality of survey data on corruption and values. Economic Development and Cultural Change 57(2): 387–411.
Blair, G., and Imai, K. 2012. Statistical analysis of list experiments. Political Analysis 20(1): 47–77.
Blair, G., Imai, K., and Lyall, J. 2014. Comparing and combining list and endorsement experiments: Evidence from Afghanistan. American Journal of Political Science 58(4): 1043–63.
Blair, G., Imai, K., and Zhou, Y.-Y. 2015. Statistical analysis of the randomized response technique. Journal of the American Statistical Association 110(511): 1304–19.
Böckenholt, U., and van der Heijden, P. G. M. 2007. Item randomized-response models for measuring noncompliance: Risk-return perceptions, social influences, and self-protective responses. Psychometrika 72(2): 245–62.
Böckenholt, U., Barlas, S., and van der Heijden, P. G. M. 2009. Do randomized-response designs eliminate response biases? An empirical study of non-compliance behavior. Journal of Applied Econometrics, Special Issue: New Econometric Models in Marketing 24(3): 377–92.
Bourke, P. D., and Moran, M. A. 1988. Estimating proportions from randomized response data using the EM algorithm. Journal of the American Statistical Association 83(404): 964–68.
Corstange, D. 2009. Sensitive questions, truthful answers? Modeling the list experiment with LISTIT. Political Analysis 17:45–63.
de Jong, M. G., Pieters, R., and Fox, J. P. 2010. Reducing social desirability bias through item randomized response: An application to measure underreported desires. Journal of Marketing Research 47(1): 14–27.
de Jong, M. G., Pieters, R., and Stremersch, S. 2012. Analysis of sensitive questions across cultures: An application of multigroup item randomized response theory to sexual attitudes and behavior. Journal of Personality and Social Psychology 103(3): 543–64.
Dempster, A. P., Laird, N. M., and Rubin, D. B. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B 39(1): 1–38.
Dollar, D., Fisman, R., and Gatti, R. 2001. Are women really the "fairer" sex? Corruption and women in government. Journal of Economic Behavior & Organization 46(4): 423–29.
Edgell, S. E., Himmelfarb, S., and Duchan, K. L. 1982. Validity of forced responses in a randomized response model. Sociological Methods & Research 11(1): 89–100.
Esarey, J., and Chirillo, G. 2013. "Fairer sex" or purity myth? Corruption, gender, and institutional context. Politics and Gender 9(4): 390–413.
Fox, J. P. 2005. Randomized item response theory models. Journal of Educational and Behavioral Statistics 30(2): 189–212.
Fox, J. P., and Wyrick, C. 2008. A mixed effects randomized item response model. Journal of Educational and Behavioral Statistics 33(4): 389–415.
Fox, J. P., and Meijer, R. R. 2008. Using item response theory to obtain individual information from randomized response data: An application using cheating data. Applied Psychological Measurement 32(8): 595–610.
Fox, J. P., Avetisyan, M., and Palen, J. 2013. Mixture randomized item-response modeling: A smoking behavior validation study. Statistics in Medicine 32(27): 4821–37.
Franzen, A., and Pointner, S. 2012. Anonymity in the dictator game revisited. Journal of Economic Behavior & Organization 81(1): 74–81.
Gilens, M., Sniderman, P. M., and Kuklinski, J. H. 1998. Affirmative action and the politics of realignment. British Journal of Political Science 28(1): 159–83.
Gingerich, D. W. 2013. Political institutions and party-directed corruption in South America: Stealing for the team. Cambridge: Cambridge University Press.
Gingerich, D. W. 2010. Understanding off-the-books politics: Conducting inference on the determinants of sensitive behavior with randomized response surveys. Political Analysis 18:349–80.
Gingerich, D. W., Oliveros, V., Corbacho, A., and Ruiz-Vega, M. 2015. Replication files for "When to protect? Using the crosswise model to integrate protected and direct responses in surveys of sensitive behavior." Available on the Political Analysis Dataverse of Harvard University's Dataverse Network. http://dx.doi.org/10.7910/DVN/2AIHQF.
Glynn, A. N. 2013. What can we learn with statistical truth serum? Design and analysis of the list experiment. Public Opinion Quarterly 77(S1): 159–72.
Goetz, A. M. 2007. Political cleaners: Women as the new anti-corruption force? Development and Change 38:87–105.
Gonzalez-Ocantos, E., De Jonge, C. K., Meléndez, C., Osorio, J., and Nickerson, D. W. 2012. Vote buying and social desirability bias: Experimental evidence from Nicaragua. American Journal of Political Science 56(1): 202–17.
Imai, K. 2011. Multivariate regression analysis for the item count technique. Journal of the American Statistical Association 106:407–16.
Jann, B., Jerke, J., and Krumpal, I. 2012. Asking sensitive questions using the crosswise model: An experimental survey measuring plagiarism. Public Opinion Quarterly 76(1): 32–49.
Kraay, A., and Murrell, P. 2013. Misunderestimating corruption. Policy Research Working Paper 6488, World Bank, Washington, DC.
Krumpal, I. 2012. Estimating the prevalence of xenophobia and anti-Semitism in Germany: A comparison of randomized response and direct questioning. Social Science Research 41(6): 1387–403.
Kuklinski, J. H., Sniderman, P. M., Knight, K., Piazza, T., Tetlock, P. E., Lawrence, G. R., and Mellers, B. 1997. Racial prejudice and attitudes toward affirmative action. American Journal of Political Science 41(2): 402–19.
Lamb, C. W., Jr., and Stem, D. E., Jr. 1978. An empirical validation of the randomized response technique. Journal of Marketing Research 15(4): 616–21.
Lara, D., García, S. G., Ellertson, C., Camlin, C., and Suarez, J. 2006. The measure of induced abortion levels in Mexico using randomized response technique. Sociological Methods & Research 35:279–301.
Lensvelt-Mulders, G. J. L., Hox, J. J., van der Heijden, P. G. M., and Maas, C. J. M. 2005. Meta-analysis of randomized response research: Thirty years of validation. Sociological Methods & Research 33:319–48.
Lensvelt-Mulders, G. J. L., van der Heijden, P. G. M., Laudy, O., and van Gils, G. 2006. A validation of a computer-assisted randomized response survey to estimate the prevalence of fraud in social security. Journal of the Royal Statistical Society, Series A 169(Part 2): 305–18.
List, J. A., Berrens, R. P., Bohara, A. K., and Kerkvliet, J. 2004. Examining the role of social isolation on stated preferences. American Economic Review 94:741–52.
Magaloni, B., Díaz-Cayeros, A., Romero, V., and Matanock, A. 2012. The enemy at home: Exploring the social roots of criminal organizations in Mexico. Unpublished paper.
Malesky, E. J., Gueorguiev, D. D., and Jensen, N. M. 2015. Monopoly money: Foreign investment and bribery in Vietnam, a survey experiment. American Journal of Political Science 59(2): 419–39.
Miller, J. D. 1984. A new survey technique for studying deviant behavior. PhD thesis, George Washington University, Department of Sociology.
Rosenfeld, B., Imai, K., and Shapiro, J. 2014. An empirical validation study of popular survey methodologies for sensitive questions. Unpublished manuscript, Princeton University.
Sung, H.-E. 2003. Fairer sex or fairer system? Gender and corruption revisited. Social Forces 82(2): 703–23.
Swamy, A., Knack, S., Lee, Y., and Azfar, O. 2001. Gender and corruption. Journal of Development Economics 64:25–55.
Tan, M. T., Tian, G. L., and Tang, M. L. 2009. Sample surveys with sensitive questions: A nonrandomized response approach. American Statistician 63(1): 9–16.
Torgler, B., and Valev, N. T. 2010. Gender and public attitudes toward corruption and tax evasion. Contemporary Economic Policy 28(4): 554–68.
Tracy, P. E., and Fox, J. A. 1981. The validity of randomized response for sensitive measurements. American Sociological Review 46:187–200.
van der Heijden, P. G. M., van Gils, G., Bouts, J., and Hox, J. J. 2000. A comparison of randomized response, computer assisted self interview and face-to-face direct questioning: Eliciting sensitive information in the context of welfare and unemployment benefit fraud. Sociological Methods & Research 28:505–37.
Warner, S. L. 1965. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association 60:63–69.
Yu, J. W., Tian, G. L., and Tang, M. L. 2008. Two new models for survey sampling with sensitive characteristic: Design and analysis. Metrika 67(3): 251–63.
Supplementary material

Gingerich et al. supplementary material (Appendix): PDF, 154.1 KB.