
Gray’s False Accusations Necessitate Establishing Standards of Evidence for Making Claims of Misconduct

Published online by Cambridge University Press:  24 June 2020

Brad Verhulst, Texas A&M University
Peter K. Hatemi, Pennsylvania State University

Abstract

Claims of misconduct must be accompanied by verifiable proof. In “Diagnosis versus Ideological Diversity,” Phillip W. Gray (2019) professes the need to address bias and dishonesty in research but contradicts his stated goals by making untrue and unsupported allegations of misconduct. He equates a coding error with LaCour and Green’s (2014) suspected data fabrication while disregarding publicly available contradictory evidence. In this evidence-based article, we demonstrate that Gray made a series of false accusations of research dishonesty and ideological bias. His assertions are not only unsupported; the evidence shows the opposite. PS: Political Science & Politics edited Gray’s article after publication and online distribution—removing or modifying the most explicitly false and harmful statements—and changed his central thesis but without changing the DOI. This resulted in two different articles with the same DOI. Although retraction is uncommon, this degree of post-publication modification appears to meet the threshold for retraction. The published corrigendum failed to address Gray’s false allegations, pervasive and unsubstantiated insinuations of misconduct, and errors that persist in the second edition of his article. The constellation of behaviors by the journal and Gray contradicts academic norms and emphasizes the need to establish clear standards of evidence when making accusations of academic misconduct.

© The Author(s), 2020. Published by Cambridge University Press on behalf of the American Political Science Association

Gray (2019) falsely equated a coding error in our work with an act of purposeful data fabrication. In his original published article, without evidence, Gray wrongfully accused our team of authors of research misconduct due to ideological bias; at the same time, he ignored all evidence to the contrary (see supplementary information [SI]).Footnote 1 In an extraordinary move, after publishing Gray’s article online, PS edited and republished it with the same DOI—modifying the most explicitly false and disparaging statements that serve as the article’s foundations and ultimate conclusions—without correcting the implied false equivalences, the pervasive innuendo of misconduct, or the factual errors. This pattern of behavior is remarkable given that (1) Gray’s claims were never verified by the publisher or the journal editors prior to publication (nor were we contacted about the claims of misconduct prior to publication); (2) after publication, we contacted PS and provided evidence that Gray’s claims were false; (3) the journal agreed; and (4) the journal edited the central premise of Gray’s article post-publication and then republished it.Footnote 2

These actions culminated in two published versions of Gray’s article with the same DOI with different central arguments. When we initially submitted our response (July 27, 2019), the original version of Gray’s article that contained the explicit accusations of “research dishonesty” had been tweeted 80 times, viewed 1,098 times, and downloaded hundreds of times from the journal’s website alone (see SI). These circumstances suggest that the appropriate action is for PS to retract Gray’s article. In this case, specifically, and to prevent similar situations in the future, we suggest the following procedure: (1) the journal determines whether the corrected topic remains of interest; (2) the author submits a new article that does not include any false accusations and provides factually based evidence; (3) the article is sent to new reviewers; and (4) if the manuscript meets the standards for publication, it is published as a new article.

In the current case, the foundation of Gray’s argument, even in the second published version, builds on the false equivalence between a coding error and LaCour and Green’s (2014) suspected data fabrication. This resulted in Gray de facto accusing us of misconduct. In this evidence-based article, we disprove every foundational premise of both versions of Gray’s article, thereby invalidating his unsubstantiated and untrue insinuations. Given the dubious veracity of his central premise and the false accusations in both versions of his article, we believe an apology and a full correction are due from Gray and PS. Footnote 3 As one of the journal’s anonymous reviewers stated:

Gray should not only issue an erratum, but also a public apology, NOT for his political point of view (which he is entitled to hold), but because he did not employ the standard of evidence that we all should abide by, especially when such contentions have the potential to cause injury.

DISCREDITING FALSE ALLEGATIONS

Post-truth describes the modern culture wherein discussion is framed by emotional appeals disconnected from reality and factual evidence is ignored (Keyes 2004). It relies on telling a lie enough times that it becomes taken as truth by those who want it to be. Motivated reasoning, fundamental attribution errors, belief perseverance, misinformation effects, overconfidence phenomena, illusory correlations, and other cognitive biases generate emotional rewards for people to seek false information that confirms their values, ignore evidence that invalidates them, and promulgate false information to create an ideologically driven fictional narrative. These processes are consistent with Gray’s behavior: a false story of ideological bias was promulgated and embellished. Ludeke and Rasmussen (2016, 35) made the first false assertion about our work when they presented the allegation as one of two possibilities:

One line of current thought might interpret this as indicative of bias against conservatives…. Alternatively, we might interpret the failure to detect the coding errors as an indication that some older conceptualizations of political attitudes…. Because conservatism and authoritarianism are so tightly linked…it is a small step to thus infer that psychoticism should be elevated among conservatives.

Pairing an unsubstantiated and disingenuous possibility with a credible alternative creates the perception of equivalence, allowing individuals to choose the possibility that they wish to believe without regard for the differential likelihoods of the two alternatives. In truth, there was only one possible and credible explanation: a simple, tangential coding error, for which appropriate errata were published.

Although Gray (2019) did not cite Ludeke and Rasmussen (2016) or the ensuing social media claims, he elaborated on their argument without using their subtle methods of equivocation and without addressing published contrary evidence (Verhulst and Hatemi 2016) or providing any evidence of his own. This resulted in Gray creating a fictitious narrative of bias and misconduct.Footnote 4 His argument is based on several interrelated false premises. The following evidence refutes each of them in turn.

Gray’s first false premise relies on an ambiguous observation: “What is notable is how long it took for this mistake to be noticed” (Gray 2019, 1). Gray uses this sentence and others as evidence to argue that there was an ideologically motivated intent to miscode the data and an unwillingness to correct errors. In reality, the opposite was true; it took years to identify the error, but once it was identified, errata were submitted within days (see SI). The interval between publication and correction is not evidence of either dishonesty or bias.

Approximately six months after Verhulst, Eaves, and Hatemi (2012) was published, a research team suggested that our descriptive correlations were opposite to what they expected. This was not abnormal because many personality traits have inconsistent relationships with political attitudes (Jost et al. 2017; McCann 2018). Limited information was shared with us (e.g., no data or methods), so we had no means to identify a mistake. Nevertheless, we reanalyzed the data, finding the same results. We encouraged the research team to submit its work for peer review. Two years later, we published a final personality–attitudes article using newer five-factor measures of personality (Hatemi and Verhulst 2015). Because this was a smaller sample, we also used the older data based on Eysenck’s (1968) traits as a replication sample and posted this dataset online. In the summer of 2015, an anonymous manuscript implied an error in the older Eysenck trait data. Again, we reanalyzed the data, found the same results, and reported them to the journal. Although the manuscript did not direct us to the precise error, it reinvigorated our search for a possible error.

We did not have access to the original data files because the data belonged to various institutes with strict data-sharing regulations implemented to address identifiability concerns and comply with HIPAA requirements. Therefore, we made one final attempt to contact the data managers and ask them to compare our working data with the original hard copies (see SI). This process allowed us to identify the precise error: when merging the data, the codebook was reversed. Within hours of identifying the error, we contacted the relevant journals and wrote errata; within weeks, corrections began appearing online. The error took years to discover but, once identified, was rapidly corrected. Importantly, we could not correct the error until we could identify it. The actual timeline for the correction does not correspond with Gray’s (2019) insinuations, and the evidence contradicts his claim of ideological bias. The most obvious, parsimonious, and factually based reason for the time it took to find the error was that, in our descriptives, we originally reported (in error) correlations that were consistent with the most highly cited papers on the topic that included nationally representative samples (i.e., Carney et al. 2008, cited >900 times; Gerber et al. 2010, cited >580 times; Mondak 2010, cited >500 times). That is, we would have had to believe an uncorroborated claim that something was wrong with our data without evidence: a claim that contrasted with the most highly cited, recent, peer-reviewed published research on the topic, which reported the same correlations as ours. Promoting an unsupported claim over published research is incompatible with academic norms. There was no bias or dishonesty in attempting to identify or correct the error, simply normal scientific reasoning. Furthermore, because the main analyses and conclusions of the articles were unaffected by the error, the coding error was tangential, undercutting the likelihood that self-interest or ideological reasoning biased our decision making.
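To make the nature of the error concrete, the following is a minimal sketch, assuming hypothetical variable names and a simple 1–5 item scale rather than the actual data or merge code: a reversed codebook recodes each response to its mirror value, which flips the sign of the item’s correlation with any other variable while leaving the magnitude unchanged.

```python
import numpy as np

# Minimal sketch (hypothetical names and a 1-5 scale; not the actual data):
# a codebook reversed during a merge maps each response v to 6 - v, which
# flips the sign of the correlation but not its magnitude.
rng = np.random.default_rng(0)

attitude = rng.normal(size=1000)                 # stand-in attitude score
trait = 0.2 * attitude + rng.normal(size=1000)   # stand-in personality scale

cuts = np.quantile(trait, [0.2, 0.4, 0.6, 0.8])
item_correct = np.digitize(trait, cuts) + 1      # 1..5 under the correct codebook
item_reversed = 6 - item_correct                 # 1..5 under the reversed codebook

r_correct = np.corrcoef(attitude, item_correct)[0, 1]
r_reversed = np.corrcoef(attitude, item_reversed)[0, 1]
print(round(r_correct, 3), round(r_reversed, 3))  # same magnitude, opposite sign
```

Because only the sign of the descriptive correlation changes, such an error affects the reported direction but not the strength of the association, consistent with the point above that the main analyses and conclusions were unaffected.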

Gray’s second false premise is the assertion that our work focused on the “correlation between conservative political ideology and traits of psychoticism” (Gray 2019, 1). His overemphasis on this correlation ignores our main thesis and analyses, promulgating a distorted and inaccurate representation of our work and demonstrating the exact bias he argues against. We assessed more than 30 personality–attitude combinations and relied on an a priori empirical threshold to restrict the analyses to correlations larger than 0.20. This was to ensure sufficient covariation to decompose into genetic and environmental components—the main goals of our work (Verhulst, Eaves, and Hatemi 2012, 39–40). If the errors were driven by ideological bias, intentional or not, there should have been corresponding errors across the Eysenck and five-factor analyses. This was not the case.

Equally important—and in stark contrast to Gray’s claims—is the fact that both liberals and conservatives were associated with negative personality dimensions in our analyses. We originally reported (in error) that neuroticism was associated with economic liberalism—again, a finding that was consistent with the most highly cited papers in the field (Gerber et al. 2010; Mondak 2010). We also reported (in error) that Eysenck’s psychoticism was associated with certain forms of conservatism, consistent with a long-standing published theory and empirical literature (Eysenck and Wilson 1978; Francis 1992; Pearson and Greatorex 1981). We gave little thought to these descriptives because the signs of the associations were irrelevant for our argument and they appeared to replicate well-established relationships. By stating that we focused only on psychoticism and by ignoring our main results and the fact that our descriptives (in error) were consistent with both historical and contemporary literature, Gray constructed a fallacious narrative of bias. His false pretense could have been identified easily by reading the original articles (Verhulst and Hatemi 2016, 363)—or should have been, had the journal conducted standard editorial review. Conducting that research is the difference between published, peer-reviewed scientific argument and something posted on the Internet.


Gray’s (2019) third false premise is that our articles were widely cited for the erroneous correlations—specifically, the psychoticism–conservatism association. In reality, our studies tested causal associations between various personality traits and political-attitude dimensions, remaining agnostic to the direction of the relationships. This is evident in all of our work, even on a cursory examination (Hatemi and Verhulst 2015; Verhulst, Eaves, and Hatemi 2012; Verhulst, Hatemi, and Martin 2010). We briefly discussed the empirically relevant correlations, much as authors discuss the means and variances of control variables such as education and gender. Finding no evidence of causal associations in a gene–environment context, our work casts doubt on the substantive significance of personality–attitude correlations, regardless of the signs of the correlations. We argued that the correlations between personality traits and political attitudes were spurious, and we questioned correlating any personality trait with any political attitude—exactly the opposite of Gray’s insinuations. Although this was stated numerous times in various peer-reviewed publications (most recently in Verhulst and Hatemi 2016), it was conspicuously avoided in Gray’s (2019) article; yet it directly contradicts his argument.

The strongest and most impartial evidence of the lack of bias is found when examining how other researchers cited our work. Using Google Scholar, which generally provides the most inclusive list of citations, we reviewed every scholarly work available that cited Verhulst, Eaves, and Hatemi (2012). We examined (1) how they cited our work; (2) if they cited the preliminary correlations; (3) if so, whether they noted the direction of the correlations; and (4) if they cited them before or after correction (see SI). When we submitted this response, there were 191 citations after removing “ghosts” (i.e., Google erroneously attributed a citation) and duplicates (i.e., 206 before removals).Footnote 5 Of these citations, 95% made no mention of the direction of the correlations and cited the article explicitly for the lack of a causal relationship between personality and attitudes, genetic covariance, or the general method (i.e., 176 citations). Four citations were on errors in science. Only nine papers cited the erroneous correlations. Of these, six mentioned liberal correlations with neuroticism; two cited the article as part of a larger review with other similar findings (one of which was our own work). Only one citation focused on the erroneous correlation of military conservatives being higher in Eysenck’s psychoticism. Of these nine citations, eight came after the false social media claims of bias.
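As a concrete illustration of the coding scheme, the following is a minimal sketch assuming a hypothetical coded citation list; the field names and example rows are placeholders for the coded list provided in the SI, not the actual data. It simply counts how many entries reference the descriptive correlations and groups the remainder by the reason they cite the article.

```python
from collections import Counter

# Hypothetical coded citation list (illustrative rows only; the real coded
# list is in the SI).
coded_citations = [
    {"cites_descriptives": False, "cited_for": "no causal personality-attitude link"},
    {"cites_descriptives": False, "cited_for": "genetic covariance or method"},
    {"cites_descriptives": True,  "cited_for": "neuroticism-liberalism correlation"},
    {"cites_descriptives": False, "cited_for": "errors in science"},
]

total = len(coded_citations)
cite_descriptives = sum(c["cites_descriptives"] for c in coded_citations)
reasons = Counter(c["cited_for"] for c in coded_citations if not c["cites_descriptives"])

print(f"{total - cite_descriptives} of {total} coded citations never mention the descriptive correlations")
print(reasons.most_common())
```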

Gray stated that our “results received wide attention and have been cited in numerous journals” (Gray 2019, 1), arguing that researchers were incorrectly citing our research for the psychoticism–conservatism correlation. When the citations are actually reviewed, the foundation of Gray’s argument collapses. He makes a clear logical error, wrongfully presuming that (1) if there is a mistake in an article, and (2) if that article is cited, then (3) the citation must be for the error. As the evidence shows, the article was overwhelmingly cited correctly for its actual findings and not for the error in the descriptives—and, centrally, only one citation explicitly focuses on psychoticism. Thus, our error had virtually no negative impact on the literature and certainly showed no bias against conservatives. The data directly refute Gray’s supposition that other researchers focused on the correlations or misinterpreted our conclusions.

Gray’s (2019) fourth false premise is that the coding error was the result of ideological bias on the part of the authors and was overlooked because of rampant ideological bias in the discipline. For our team of authors to be ideologically biased against conservatives, it would be necessary to demonstrate that we are homogeneously liberal. We are not. Our author team contains a mixture of political views, both resolutely liberal and staunchly conservative.

Importantly, the evolution of our research can be traced to the first contact between the authors, documented in email threads in which numerous additional colleagues were witnesses to the process (see SI). This correspondence documents Verhulst’s research interests and Hatemi, Eaves, and Martin’s enthusiasm for assisting Verhulst in pursuing his ideas. The first exchange between the authors outlines a preliminary research plan, initial motivations, and goals, which provides irrefutable evidence that Gray’s attributions of ideological bias are untrue. Our correspondence shows that from the very beginning, our research has always focused on exploring genetic and environmental covariance and challenging any causal role of personality on attitudes. It shows that we were and are agnostic to the direction of the correlations—exactly the opposite of Gray’s (2019) claims.

Our correspondence further shows that it was our goal not to portray either liberals or conservatives as more positive or negative than the other—also the opposite of Gray’s claims. A series of emails in 2009 (see SI) described the plan for our first article together, explicitly stating that “we are agnostic” ideologically and that we seek to “step away from normative slants, note that the mean effect size is quite small, and that personality variance is wide, being slightly less open, does not make one closed-minded, previous papers speak in the extremes, when in fact the extremes are quite rare” (see SI). This email and hundreds more dating back more than a decade provide clear and verifiable evidence that we were not ideologically biased but rather the opposite. They exist on independent servers that can be verified and include third parties who were not authors.

Finally, Gray’s (2019) claim of discipline-wide ideological bias is built on the argument that the field focused on Eysenck’s psychoticism being linked to military conservatism rather than on our actual hypotheses or conclusions, and on the allegation of a broad demonization of conservatives. Gray (2019, 1) stated: “[i]n a disciplinary population overwhelming progressive in political perspective, it intuitively ‘makes sense’ that conservatives—the Other—would share psychotic traits.” The data, however, show that the near-uniform citation of our research concentrated on our finding that personality traits do not cause political values and on genetic decomposition. In complete opposition to Gray’s unsupported claims, the field largely ignored the direction of the correlations, as it was tangential to our work.

IMPLICATIONS AND CONCLUSION

Gray’s (2019) polemic essay reflects a trend toward an increasingly politicized environment fueled by social media, which has allowed character attacks and false insinuations to enter the academic literature. Sources historically perceived as credible, such as journalists and US senators, now regularly espouse false narratives, twisting partial truths to elevate themselves and denigrate others. By eschewing minimal standards of evidence, such narratives have now been published in PS. A simple coding error that had no role in our theorizing, research questions, or conclusions was portrayed as something ideological and sinister.

The perpetuation of lies stated on social media and picked up by journalists who benefit from drama enhances the credibility of falsehoods. Social media is not obligated to print the truth or adhere to any evidentiary requirements, and mainstream media are required only to provide minimal sourcing that they can shape into any narrative that will sell. As academics, we must hold ourselves to higher standards. This entire situation could have been avoided by reading the academic literature and following standard norms of editorial oversight. In the current era, all anyone needs for a lie to become their truth is to want to believe it. This should not be the standard in scholarly work.


In the original version of Gray’s (2019) article, which was online when we initially submitted this manuscript (see SI), he explicitly stated that we engaged in misconduct. After we provided evidence to PS that Gray’s piece was untrue, a draft corrigendum was sent to us without consultation, and Gray’s original article was edited post-publication to remove or alter the most egregious statements. Nevertheless, neither the corrigendum nor the edited article addressed the unfounded charges of bias or took responsibility for the false claims of misconduct. It is troubling that unsubstantiated accusations survived the editorial process; the editors did not alert us that this claim was going to be published or allow us the opportunity to provide exculpatory evidence before publishing false accusations of misconduct. Even with the post-publication edits, the research reputation of our team of authors has been unfairly and grossly maligned. Not only will we suffer damage; if this type of behavior is not corrected and the discipline allows character attacks as a legitimate form of academic discourse, the reputation of the journal and of the academic enterprise will suffer lasting damage as well.

Gray’s behavior and the publication of his article indicate changing norms in the editorial process. When claims of misconduct are made, they must be accompanied by verifiable proof—not innuendo, rhetoric, social media, or circumstantial observation. We must maintain norms that guide our profession and prevent unfounded ad hominem accusations that assign nefarious intent to simple mistakes, or any other form of character attack. Errors are a normal part of science. Thousands of corrigenda and errata are published every year, promoting learning, development, and growth. Researchers must be encouraged to correct errors without character attacks or assigning intent. To do otherwise incentivizes scholars to hide their errors. Making false accusations hinders the progress of science. This type of behavior and the editors’ reluctance to correct it will only encourage more secrecy, less accountability, and uncivil discourse. When credible accusations of unethical behavior exist, they must be investigated appropriately; but false allegations are equally grave, are themselves an act of unethical behavior, and deserve equal scrutiny. It takes only a few words to unfairly harm another’s reputation; it takes much more to provide evidence of the opposite. The evidence presented here negates the allegations, foundations, and conclusions of Gray’s (2019) article.

The absence of tangible evidence to support Gray’s accusations violates the academic norms that govern reasonable intellectual dialogue. When editors fail to engage in due diligence, they are equally responsible. In a time when partisanship obfuscates facts, it is essential that standards of evidence be established for making claims of misconduct about academic contributions and for resolving intellectual disagreements. Otherwise, academic discourse risks devolving into slander and libel. It is imperative that academic journals address the profound threat of intentional misinformation making its way into the academic record. It is time to reestablish the academic norms that call for retracting false accusations. In this case, we suggest withdrawing Gray’s (2019) article and publishing a full and transparent correction.

Footnotes

1. Explicit restrictions were placed on the evidence that we were allowed to present. To enhance transparency, we placed additional details on the dataverse available at https://doi.org/10.7910/DVN/MSBZMY.

2. PS sent us the language for a correction on June 26 and July 12, 2019, to which we objected because it did not take responsibility for the errors or address any of our concerns. We also received modified language for Gray’s already published article, three days before the journal’s deadline to submit our response (July 24, 2019)—again, without consultation. PS also agreed to print our response in the same issue as Gray’s article, but a review process that took almost a year and numerous rounds ensured that this would not occur.

3. The proposed corrigendum lacked any recognition by Gray of wrongdoing; nor did it address the damage that his false claims have had and will continue to have on the authors.

4. “Research dishonesty” was used in the first published version of Gray’s (2019) article and implied in the second version. As long as the first version is not retracted, it will remain on the web indefinitely, where it can be, has been, and will continue to be legitimately referenced.

5. Two book chapters could not be accessed; see the SI for a coded list of citations.

REFERENCES

Carney, Dana R., Jost, John T., Gosling, Samuel D., and Potter, Jeff. 2008. “The Secret Lives of Liberals and Conservatives: Personality Profiles, Interaction Styles, and the Things They Leave Behind.” Political Psychology 29 (6): 807–40.
Eysenck, Hans J. 1968. “Eysenck Personality Inventory.” San Diego, CA: Educational and Industrial Testing Service.
Eysenck, Hans J., and Wilson, Glenn D. 1978. The Psychological Basis of Ideology. Baltimore, MD: University Park Press.
Francis, Leslie J. 1992. “Is Psychoticism Really a Dimension of Personality Fundamental to Religiosity?” Personality and Individual Differences 13 (6): 645–52.
Gerber, Alan S., Huber, Gregory A., Doherty, David, Dowling, Conor M., and Ha, Shang E. 2010. “Personality and Political Attitudes: Relationships across Issue Domains and Political Contexts.” American Political Science Review 104 (1): 111–33.
Gray, Phillip W. 2019. “Diagnosis versus Ideological Diversity.” PS: Political Science & Politics 52 (4): 728–31.
Hatemi, Peter K., and Verhulst, Brad. 2015. “Political Attitudes Develop Independently of Personality Traits.” PLoS One 10 (3): e0118106.
Jost, John T., Stern, Chadly, Rule, Nicholas O., and Sterling, Joanna. 2017. “The Politics of Fear: Is There an Ideological Asymmetry in Existential Motivation?” Social Cognition 35 (4): 324–53.
Keyes, Ralph. 2004. The Post-Truth Era: Dishonesty and Deception in Contemporary Life. New York: Macmillan.
LaCour, Michael J., and Green, Donald P. 2014. “When Contact Changes Minds: An Experiment on Transmission of Support for Gay Equality.” Science 346 (6215): 1366–69.
Ludeke, Steven G., and Rasmussen, Stig H. R. 2016. “Personality Correlates of Sociopolitical Attitudes in the Big Five and Eysenckian Models.” Personality and Individual Differences 98: 30–36. https://doi.org/10.1016/j.paid.2016.03.079.
McCann, Stewart J. H. 2018. “State Resident Neuroticism Accounts for Life Satisfaction Differences between Conservative and Liberal States of the USA.” Psychological Reports 121 (2): 204–28.
Mondak, Jeffery. 2010. Personality and the Foundations of Political Behavior. Cambridge: Cambridge University Press.
Pearson, Paul R., and Greatorex, Bryce J. 1981. “Do Tough-Minded People Hold Tough-Minded Attitudes?” Current Psychology 1 (1): 45–48.
Verhulst, Brad, Eaves, Lindon J., and Hatemi, Peter K. 2012. “Correlation Not Causation: The Relationship between Personality Traits and Political Ideologies.” American Journal of Political Science 56 (1): 34–51.
Verhulst, Brad, and Hatemi, Peter. 2016. “Correcting Honest Errors versus Incorrectly Portraying Them: Responding to Ludeke and Rasmussen.” Personality and Individual Differences 98: 361–65.
Verhulst, Brad, Hatemi, Peter K., and Martin, Nicholas G. 2010. “The Nature of the Relationship between Personality Traits and Political Attitudes.” Personality and Individual Differences 49 (4): 306–16.