
Fraud in Online Surveys: Evidence from a Nonprobability, Subpopulation Sample

Published online by Cambridge University Press:  27 May 2022

Andrew M. Bell*
Affiliation:
Indiana University-Bloomington, Bloomington, IN, USA; U.S. Army War College, Carlisle, PA, USA
Thomas Gift
Affiliation:
University College London, London, UK
*Corresponding author. Email: [email protected]

Abstract

We hired a well-known market research firm whose surveys have been published in leading political science journals, including JEPS. Based on a set of rigorous “screeners,” we detected what appear to be exceedingly high rates of identity falsification: over 81 percent of respondents seemed to misrepresent their credentials to gain access to the survey and earn compensation. Similarly high rates of presumptive character falsification were present in panels from multiple sub-vendors procured by the firm. Moreover, we found additional, serious irregularities embedded in the data, including evidence of respondents using deliberate strategies to detect and circumvent one of our screeners, as well as pervasive, observable patterns indicating that the survey had been taken repeatedly by a respondent or collection of respondents. This evidence offers reasons to be concerned about the quality of online nonprobability, subpopulation samples, and calls for further, systematic research.

Type
Short Report
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of The Experimental Research Section of the American Political Science Association

Growing evidence points to problems with “character misrepresentation” in digital surveys (Ahler et al. 2021; Chandler and Paolacci 2017; Hydock 2018; Ryan 2020; Wessling et al. 2017). We present concerning results from a nonprobability online survey of a specific subpopulation fielded through a well-known commercial firm. A set of rigorous “screeners” revealed extremely high rates of presumptive fraud: more than 81 percent of respondents appeared to misrepresent themselves as current or former US Army members – our subpopulation of interest – to complete the survey and earn compensation. Presumed falsification rates were similar across multiple established sub-vendors, indicating that the problems were not idiosyncratic to a particular panel. The data also indicate the use of deliberate tactics to circumvent one of our screeners, as well as repeated participation by a respondent or group of respondents, further raising concerns about data quality.

These irregularities point to the potential for significant identity misrepresentation rates in online nonprobability, subpopulation surveys – rates that are orders of magnitude greater than those typically reported in standard online surveys (Callegaro et al. 2014; Cornesse et al. 2020; Kennedy et al. 2020a; Mullinix et al. 2016). Although online survey firms vary markedly in their quality-control practices – and all should be assessed for data problems (Kennedy et al. 2016) – the risk of exceedingly high levels of fraud could be heightened under the conditions present in our study. Our findings call for further, systematic research into the validity of nonprobability online surveys, particularly those that sample specific subpopulations. They also underscore the imperative for researchers to develop clear tools and strategies (prior to statistical analyses and within preregistration plans) to ensure data integrity based on expert knowledge of research subjects.

Survey details

We contracted a nationally recognized market research firm,[1] whose samples have formed the basis of numerous widely cited political science studies, including articles published in Journal of Experimental Political Science, American Political Science Review, and Journal of Politics. The firm, which used multiple sub-vendors, fielded the survey over two separate rounds in April–May 2021.[2]

Screening process

We employed two screeners to confirm the authenticity of respondents with self-identified Army experience. First, we asked a “knowledge” question about the practice of saluting, one of the most essential elements of military protocol.[3] The question required knowing both the Army’s rank hierarchy and that enlisted soldiers salute first. Multiple former Army officers consulted for this study validated the screen, with one stating: “Anyone who is answering that question incorrectly is either not reading the question or has not served in the military, let alone the Army.”[4] Second, respondents reported specific information on their Army background, including highest rank achieved, source of officer commission, deployment years and locations, and unit type.[5] We coded responses as “non-viable” if they provided information contravening federal law or Army personnel policy. We coded a small number of responses as “highly improbable” if they were contrary to Army personnel practices or historical evidence but remained theoretically plausible.[6]
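
To make the coding logic concrete, the following is a minimal sketch of the kind of rule-based checks such a screen implies. The field names, rank lists, and rules here are hypothetical illustrations, not the coding scheme used in the study (see the Appendix for the actual criteria).

```python
# Illustrative only: a rule-based coder in the spirit of the screening described
# above. Field names and rules are hypothetical examples, not the authors'
# actual coding scheme.
from dataclasses import dataclass, field
from typing import List

OFFICER_RANKS = {"2LT", "1LT", "CPT", "MAJ", "LTC", "COL"}
COMMISSION_SOURCES = {"USMA", "ROTC", "OCS", "Direct commission"}

@dataclass
class Response:
    highest_rank: str
    commission_source: str          # "N/A" for enlisted-only careers
    first_service_year: int
    deployment_years: List[int] = field(default_factory=list)

def code_response(r: Response) -> str:
    """Return 'non-viable', 'highly improbable', or 'viable'."""
    # Officer ranks require a commissioning source; enlisted-only careers have none.
    if r.highest_rank in OFFICER_RANKS and r.commission_source == "N/A":
        return "non-viable"
    if r.highest_rank not in OFFICER_RANKS and r.commission_source in COMMISSION_SOURCES:
        return "non-viable"
    # Deployments cannot predate entry into service.
    if any(y < r.first_service_year for y in r.deployment_years):
        return "non-viable"
    # Example of a pattern that is possible but unlikely: an implausibly long
    # span between entry into service and the last reported deployment.
    if r.deployment_years and max(r.deployment_years) - r.first_service_year > 40:
        return "highly improbable"
    return "viable"
```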

Key findings

  • Total invalid: 81.8 percent

    • 43.3 percent of total respondents failed the Army knowledge question.

    • 35.5 percent of respondents passed the knowledge screen but gave answers about Army service that were non-viable under federal law or military administrative rules.

    • 3.0 percent of respondents reported information about an Army background and career that was highly improbable.

  • Total valid: 18.2 percent

Notably, the rate of invalid responses remained consistent across multiple vendors, each operating in a different survey round (83.7 percent in Round 1 and 78.9 percent in Round 2), suggesting that the problem was not isolated to a specific panel (Table 1).

Table 1 Summary of response categories
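
Once each response is coded, shares like those reported above can be tabulated directly. The sketch below assumes a pandas DataFrame with hypothetical columns 'round' and 'category'; the column and category names are illustrative.

```python
# Illustrative tabulation of response categories, overall and by survey round.
# Assumes hypothetical columns 'round' (1 or 2) and 'category'
# ('failed knowledge', 'non-viable', 'highly improbable', or 'valid').
import pandas as pd

def summarize(df: pd.DataFrame):
    overall = df["category"].value_counts(normalize=True).mul(100).round(1)
    by_round = (
        pd.crosstab(df["round"], df["category"], normalize="index").mul(100).round(1)
    )
    return overall, by_round
```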

Additional irregularities

1. Respondents appeared to employ deliberate tactics to circumvent the knowledge screener

Evidence. Beginning on April 17 in Round 1, the proportion of respondents who passed the knowledge question but gave non-viable demographic and career answers jumped sharply, from 15.4 to 65.9 percent (Table 2). Possible explanations for this unexpected “break” in the data include collusion (via the sharing of information on an email listserv or crowdsourced website), the retaking of surveys on devices with unique IP addresses, or the use of survey “bots.”

Table 2 Irregularity 1: Increase in “non-viable” responses (Round 1)
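
A break of this kind can be surfaced by tracking, day by day, the share of knowledge-screen passers whose career answers are non-viable. The sketch below is illustrative; the column names ('date', 'passed_knowledge', 'category') are hypothetical.

```python
# Illustrative day-by-day check for a jump in the share of respondents who pass
# the knowledge screen but give non-viable career answers. Columns are
# hypothetical: 'date' (datetime), 'passed_knowledge' (bool), 'category' (str).
import pandas as pd

def daily_nonviable_share(df: pd.DataFrame) -> pd.Series:
    passed = df[df["passed_knowledge"]]
    return (
        passed.groupby(passed["date"].dt.date)["category"]
        .apply(lambda s: round((s == "non-viable").mean() * 100, 1))
    )
```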

2. Respondents appeared to use deliberate tactics to detect the knowledge question as a screener

Evidence. In Round 1 – where failure to pass the knowledge question terminated the survey immediately – 49.8 percent of respondents passed the knowledge question but provided non-viable career responses. In Round 2 – where we modified the setup so that failure resulted in termination later in the survey, thereby “hiding” the screener – the share of non-viable responses fell to 13.7 percent (Table 1). Conversely, the percentage of respondents who failed the screen rose from 30.6 percent (Round 1) to 62.7 percent (Round 2). This suggests that, absent a “hidden” screener, later respondents in Round 1 may have gleaned information from earlier respondents or survey attempts to identify the screener and defeat it.[7]
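
Although the report is descriptive, a shift of this size between rounds could also be checked formally. The sketch below is a hypothetical illustration using a standard two-proportion z-test (via statsmodels); the underlying counts are passed as arguments because they are not reproduced here.

```python
# Illustrative two-proportion z-test for whether the Round 1 vs. Round 2 shift
# in non-viable responses (49.8% vs. 13.7%) exceeds what chance would produce.
from statsmodels.stats.proportion import proportions_ztest

def compare_rounds(nonviable_r1: int, n_r1: int, nonviable_r2: int, n_r2: int):
    stat, pval = proportions_ztest(
        count=[nonviable_r1, nonviable_r2], nobs=[n_r1, n_r2]
    )
    return stat, pval
```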

3. A respondent, or collection of respondents, appeared to take the survey numerous times

Evidence. Categorizing conservatively, we identified – at a minimum – 73 suspicious instances of repeated (and unusual) responses regarding Army background and deployment experience.[8] The sequential clustering of these responses – six distinct waves of repeating answers across the three days of responses – suggests that the repetition was not coincidental. We also observed clear, repetitive patterns of survey takers apparently misrepresenting personal demographic information.
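
Repetition of this kind can be flagged by grouping responses on the full combination of background answers and checking whether identical profiles recur and cluster in time. The sketch below is illustrative; the grouping fields and the 'start_time' column are hypothetical stand-ins for the survey's actual items and metadata.

```python
# Illustrative flagging of repeated answer profiles. Grouping fields (assumed to
# be stored as strings) and the 'start_time' timestamp column are hypothetical.
import pandas as pd

PROFILE_FIELDS = ["highest_rank", "commission_source", "deployment_years", "unit_type"]

def flag_repeated_profiles(df: pd.DataFrame, min_count: int = 3) -> pd.DataFrame:
    """Return responses whose full background profile appears at least min_count
    times, ordered by start time so sequential clusters are easy to inspect."""
    profile_size = df.groupby(PROFILE_FIELDS)[PROFILE_FIELDS[0]].transform("size")
    return df[profile_size >= min_count].sort_values("start_time")
```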

Conclusion

We see this analysis as an opportunity for learning. Despite taking precautions to screen out invalid respondents, we found high rates of presumptive fraud. This reinforces that researchers should be especially cautious when employing online surveys using nonprobability samples of specific subpopulations. Given that only about 7 percent of the US population is military or ex-military (Vespa 2020), our results are consistent with incentives for fraud increasing as the size of the subpopulation qualifying to participate in surveys decreases (Chandler and Paolacci 2017). Combined with other techniques, employing a diversity of screeners predicated on expert understanding of research subjects – including factors like demographics and content knowledge – can improve the odds of detecting falsified responses. Future research should systematically assess the quality of nonprobability surveys (Hauser and Schwarz 2016; Lopez and Hillygus 2018; Kennedy et al. 2020b; Thomas and Clifford 2017). Implementing rigorous screeners on diverse populations, replicated across many firms and sub-vendors, could illuminate whether our results are endemic to nonprobability surveys that sample specific subpopulations and what the broader implications are for internal and external validity.

Supplementary Material

To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2022.8

Data Availability

The data, code, and additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at https://doi.org/10.7910/DVN/Y1FEOX

Acknowledgments

The authors thank Christopher DeSante, Timothy Ryan, and Steven Webster; two anonymous U.S. Army officers; and the anonymous reviewers for their helpful comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

Ethics Statement

This survey was approved by the Indiana University-Bloomington IRB (Protocol #: 1910663858). The research adheres to APSA’s Principles and Guidance for Human Subjects Research. See Supplemental Appendix for more information.

Footnotes

This article has earned badges for transparent research practices: Open Data and Open Materials. For details see the Data Availability Statement.

1 We withhold the names of the survey firm and its associated sub-vendors for liability reasons.

2 Round 2 was launched after Round 1 was halted due to concerns about data quality. For detailed information on the survey, see Appendix (Supplementary Material).

3 For text, see Appendix. Army rank structure and saluting protocols are so central to Army service that they are among the first subjects recruits learn in basic training (Army, 2019).

4 Email from former US Army major, April 20, 2021.

5 Information such as military rank is so defining of an Army member’s career and identity – akin to a civilian’s job – that respondents should almost never report these details incorrectly (Bell and Terry 2021).

6 For examples of “non-viable” and “highly improbable” responses, see Appendix.

7 Duplicate IP addresses accounted for 11 of the total responses. These responses are included in the analysis.

8 For examples, see Appendix.

References

Ahler, D. J., Roush, C. E. and Sood, G. 2021. The Micro-Task Market for Lemons: Data Quality on Amazon’s Mechanical Turk. Political Science Research and Methods: 1–20.
Army. 2014. Department of the Army Pamphlet 600-3, Commissioned Officer Professional Development and Career Management. Washington, DC: Headquarters, Department of the Army.
Army. 2019. U.S. Army Training and Doctrine Command Pamphlet 600-4, The Soldier’s Blue Book, The Guide for Initial Entry Training Soldiers. Fort Eustis, VA: Department of the Army.
Bell, A. M. and Terry, F. 2021. Combatant Rank and Socialization to Norms of Restraint: Examining the Australian and Philippine Armies. International Interactions 47(5): 825–854.
Callegaro, M., Villar, A., Yeager, D. and Krosnick, J. A. 2014. A Critical Review of Studies Investigating the Quality of Data Obtained with Online Panels Based on Probability and Nonprobability Samples. In Online Panel Research: A Data Quality Perspective, eds. Baker, R., Bethlehem, J., Goritz, A. S., Krosnick, J. A., Callegaro, M. and Lavrakas, P. J. West Sussex, UK: John Wiley & Sons.
Chandler, J. J. and Paolacci, G. 2017. Lie for a Dime: When Most Prescreening Responses Are Honest but Most Study Participants Are Impostors. Social Psychological and Personality Science 8(5): 500–508.
Cornesse, C., Blom, A. G., Dutwin, D., Krosnick, J. A., De Leeuw, E. D., Legleye, S., Pasek, J. and Pennay, D. 2020. A Review of Conceptual Approaches and Empirical Evidence on Probability and Nonprobability Sample Survey Research. Journal of Survey Statistics and Methodology 8(1): 4–36.
Hauser, D. J. and Schwarz, N. 2016. Attentive Turkers: MTurk Participants Perform Better on Online Attention Checks than do Subject Pool Participants. Behavior Research Methods 48(1): 400–407.
Hydock, C. 2018. Assessing and Overcoming Participant Dishonesty in Online Data Collection. Behavior Research Methods 50: 1563–1567.
Kennedy, C., Mercer, A., Keeter, S., Hatley, N., McGeeney, K. and Gimenez, A. 2016. Evaluating Online Nonprobability Surveys. Pew Research Center, May 2, 2016.
Kennedy, C., Hatley, N., Lau, A., Mercer, A., Keeter, S., Ferno, J. and Asare-Marfo, D. 2020a. Assessing the Risks to Online Polls From Bogus Respondents. Pew Research Center, February 18, 2020.
Kennedy, R., Clifford, S., Burleigh, T., Waggoner, P. D., Jewell, R. and Winter, N. J. G. 2020b. The Shape of and Solutions to the MTurk Quality Crisis. Political Science Research and Methods 8: 614–629.
Lopez, J. and Hillygus, D. S. 2018. Why So Serious?: Survey Trolls and Misinformation. SSRN Electronic Journal. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3131087
Mullinix, K. J., Leeper, T. J., Druckman, J. N. and Freese, J. 2016. The Generalizability of Survey Experiments. Journal of Experimental Political Science 2(2): 109–138.
Ryan, T. J. 2020. Fraudulent Responses on Amazon Mechanical Turk: A Fresh Cautionary Tale. Retrieved from https://timryan.web.unc.edu/2020/12/22/fraudulent-responses-on-amazon-mechanicalturk-a-fresh-cautionary-tale/
Thomas, K. A. and Clifford, S. 2017. Validity and Mechanical Turk: An Assessment of Exclusion Methods and Interactive Experiments. Computers in Human Behavior 77(1): 184–197.
Vespa, J. E. 2020. Those Who Served: America’s Veterans From World War II to the War on Terror. American Community Survey Report, U.S. Census Bureau, June 2020.
Wessling, K. S., Huber, J. and Netzer, O. 2017. MTurk Character Misrepresentation: Assessment and Solutions. Journal of Consumer Research 44(1): 211–230.