
Victims in Our Own Minds? IRBs in Myth and Practice




Comment on the Presidential Address

© 2007 Law and Society Association.

During Malcolm Feeley's 2006 Presidential Address, audience members were not just whispering to colleagues sitting next to them, they were being loud. When Feeley cracked, “Institutional Review Boards [are] known to graduate students on my campus as the Committees for the Prevention of Research on Human Subjects!” it was the closest thing I have seen to someone bringing down the house at an academic conference. Before it became a publication, Feeley's Presidential Address was an uproarious event.

What audience members were cheering was Feeley's argument that institutional review boards (IRBs) at American universities represent a failure of law: IRBs infringe on investigators' rights to carry out research because boards can force changes to studies before the research even takes place, often passing judgment on the perceived merit of the research, constraining low-risk qualitative research and high-risk clinical trials alike, and producing decisions against which investigators have little recourse. As a result, Feeley argued, IRBs blunt the potential for researchers to use their scholarship as a medium for political critique and, more ominously, IRBs change the very shape and integrity of academic research with the restrictions they impose. That is to say, regulations aimed at protecting human subjects actually violate researchers' rights. Admirably, Feeley did not let us (or himself) off the hook with his pointing finger. In his experience, Feeley said, “Even our liberal, productive, research-oriented colleagues are not immune to this tendency [to censor research as IRB members],” and, more to the point, all of us who passively go along with research review are complicit in what he described as an unjust human subjects system. Following this impassioned rallying cry against IRBs, I was pleasantly surprised to learn that, to Feeley's mind, LSA members' local involvement offers the only real solution to these problems. (A common reaction to IRBs from professional organizations is to demand that members mobilize to change federal regulations—a dubious prospect to which I return in my final section.)

To my mind, frustrations with IRBs are justifiable. I have occasionally butted heads—or worse, had protracted and unnecessary memo wars—with several boards. In the comments that follow, however, I draw on archival and ethnographic evidence from my research on IRBs at American universities (Stark 2006), rather than recounting my personal anecdotes about getting IRB approval.[1] After these empirical observations, I take a more prescriptive stance and suggest how the ethics review system could usefully be improved by spelling out the conditions under which IRBs work more or less well with investigators. The suggestions with which I close are based on my view as a sociologist that ethics review in some form is here to stay because of institutional inertia, and on my belief as a potential research subject that ethics review is not an entirely bad idea, even for social scientists.

Rights, Harms, and Responsibilities in Historical Context

When reading critiques of IRBs, it is often difficult to imagine why oversight of social and behavioral researchers ever seemed sensible. Is it possible that we social scientists are victims of a terrible mistake—that sometime around 1966, our disciplinary forefathers unwittingly got trapped inside a human subjects bureaucracy only meant to regulate researchers who injected people with dangerous substances?

From the outset, human subjects protections were intended to regulate social and behavioral researchers. Recent insinuations to the contrary fail to appreciate, first, the changing meaning of "real harm" since the 1960s and, second, the extent to which human subjects regulations were never exclusively about preventing harm, but about protecting people's rights not to be researched, even when everyone involved regarded the practices as harmless by any definition.[2] One of the great ironies of recent critiques of IRBs is that the federal rules aimed at protecting human subjects, which emerged during the late 1960s and early 1970s, were a product of the same liberal spirit that safeguarded academic freedom during this period as well (Altbach 1980). In an era when potential victims were everywhere, both academics and research subjects won protections against power holders (universities and investigators, respectively)—protections that are used today as rhetorical tools to pit the groups against each other.

In 1960s America, the harm of social and behavioral research was nebulous and subtle. To be sure, a few memorable characters drew fire for what came to be seen as ethics transgressions—such as psychologist Stanley Milgram and the anthropologists involved in a federal counterinsurgency program called Project Camelot (Milgram 1974; Robin 2001). Nonetheless, there were few exceptionally bad apples to toss from the barrel of social science research. Instead, it was the basic goals, methods, and assumptions underlying much social science that eroded everyday Americans' goodwill toward investigators and the tools of their trade. During the mid-1960s, for example, lawmakers and activists worked to limit the use of psychological tests. As it turned out, results on these seemingly "objective" tests developed by social scientists had a strikingly high correlation with whether respondents had a traditional white middle-class upbringing. The tests were seen as a source of backdoor discrimination in employment, education, and social science research itself. Participants in social and behavioral studies also fretted over who knew what about them—or about the invasion of their privacy, to use the parlance of congressional hearings on the topic held in 1965. On a visceral level, moreover, the delicate topics that were raised by perfect strangers in the course of research made some people uncomfortable. One U.S. Congressman, for example, was alarmed to imagine what might be promoted through "projects financed by grants and contracts under the Federal Government" given that some researchers, as he put it, "ask our citizens to answer intimate questions about their family life, sex experience, religious views, personal values, and other subjects normally regarded as solely the private business of the individual."[3]

More important than any of these apparent harms, however, was the conviction among National Institutes of Health (NIH) lawyers throughout the 1960s that human subjects protections were as much about safeguarding people's rights as about protecting them from physical or social harm. One lawyer, in explaining this view to the Surgeon General, seemed to celebrate people's "free right" to make irrational decisions and thus confound hubristic investigators. "Only the individual with all his ignorance, superstitions and foibles can make the important choice [of whether to participate in research]," he asserted, "and, being fully informed as possible, he is free to make it for particular reasons or for no reasons at all."[4] This was a harsh rebuke to investigators (including the Surgeon General) who often felt that they knew best how to protect potential research subjects.

At the same time, leaders of the federal Department of Health, Education, and Welfare (DHEW) actively sought to avoid being held financially responsible for the growing legions of extramural researchers they funded—a good many of whom were social and behavioral scientists. Starting in 1958, the NIH (an agency within DHEW) experienced what Director James Shannon described as an "unorthodox … and quite unprecedented" funding boom (Shannon 1961), which anthropologists, sociologists, psychologists, and others cashed in on with great success. (Read, for example, the preface to Erving Goffman's Asylums.) No doubt, the lion's share of NIH funding still went toward laboratory and clinical medicine. Still, NIH funded a sizeable amount of social and behavioral research in diverse disciplines, especially through the National Institute of Mental Health and the National Institute of General Medical Sciences (Crowther-Heyck 2006).

Thus when Surgeon General William Stewart announced in 1966 the policy instituting review boards at local universities and hospitals, he was clear that social and behavioral researchers required oversight alongside other investigators.[5] Stewart was brand-new to the post in 1966, and he brought to it an obliging personal style: he was an accommodating and respectful bureaucrat who used his position to follow the shared wishes of members of Congress, NIH lawyers, and his former boss at NIH, the beloved Director Shannon. What these parties wanted was assurance from the Surgeon General that the federal government would be safeguarded—morally, legally, financially—if a person claimed he or she was mistreated while participating in any study sponsored with public money. Previously, Stewart's defiant predecessor, Luther Terry, had insisted that extramural investigators were not the type who would make ethical missteps because they would have been vetted by an unassailable peer review process. Federal lawyers flatly dismissed Terry's position, however, marking the decline in scientists' authority in shaping the policies that would regulate them. (One NIH lawyer in 1965, for example, criticized Terry's "glowing assurance of integrity and ethics of grantees and our confidence in them. Too much has and can happen."[6]) Thus, when Stewart assumed the mantle of Surgeon General he, unlike Terry, promptly agreed to federal lawyers' plans to shift "assurance" of human subjects protection to researchers' home institutions. Stewart's 1966 memos intentionally set in place a system of local ethics committees that would draw responsibility away from NIH, rather than a system with more centralized authority (of the sort that commentators have called for in recent years [e.g., Jay Katz 1995]).

The Legacy of Historical Contingencies for How IRBs Work Today

At first, determining what, precisely, constituted proper treatment of human subjects was not a high priority: the federal aim was above all to disperse responsibility for this new thing called subjects' rights. Tellingly, Stewart wrote to university administrators in 1966 regarding the new ethics committees that "the wisdom and sound professional judgment of you and your staff will determine what constitutes the rights and welfare of human subjects in research, what constitutes informed consent, and what constitutes the risks and potential medical benefits of a particular investigation."[7] Although these requirements would be specified more fully in future policies and eventually in federal regulation, the general sentiment remained the same: IRBs were declarative groups—their act of deeming a practice acceptable would make it so.

More important than the specific content of any committee decision, then, was the promise that the decision had been made according to proper procedure. This is why a priority was—and still is—placed on the types of people who must serve on all IRBs, that is, women as well as men, laypeople as well as so-called experts. Because an immense amount of interpretation is left to individual boards, IRBs tend to develop what I have called "local precedents," shorthand rules based on previous cases that help board members make decisions that are internally consistent over time, even if their decisions do not match those of IRBs at other institutions (Stark 2006). Decisions based on local precedents cause serious problems for multisite studies that are reviewed by several boards because each IRB has a different set of precedents that it uses to make sense of vague terms, such as "risk" and "benefit."[8] But this well-known problem also illustrates an important point: it is often misleading to draw conclusions about IRBs in general from one's experience with one IRB in particular.

The local character of board review does not mean that IRB decisions are wrong so much as that they are idiosyncratic. To many critics, "idiosyncratic" decisions are tantamount to bad decisions. Yet the past decade of law and society scholarship has suggested that the application of rules is always an act of interpretation and that sometimes this discretion can have positive, as well as negative, effects (e.g., Heimer & Staffen 1998). In the case of IRBs, for example, a psychologist serving on one board that I observed consolidated support for an investigator who needed to stretch the letter of the law to recruit the ideal subjects for her study. "Let's let her do it," the board member encouraged his colleagues, "I'm comfortable with that in part because of the discussion we had, and the things that I know: she's very conscientious." Board members were not blind to alternative interpretations of the rules; rather, they were willing to accept the consequences of letting the investigator proceed in order to support her research. As board deliberations closed, the IRB chair remarked, "We can only consider the fact that (if there is a problem), this is where we'd have to say to the government, 'Sorry we've done something incorrectly' and get our hand slapped … We'd have to tell the government, 'Unfortunately, this is what we decided.'"[9] The important question is not whether regulatory decisions involve local discretion but rather how this discretion is enacted.

Most of us tend to overlook this feature of local review boards because descriptions of IRBs have generally come from two sources: large-scale surveys and anecdotal "horror stories." First, most large survey-based studies of IRBs have used board members, rather than boards, as units of analysis (De Vries & Forsberg 2002), and as a result a great deal of the organizational variability among boards has been overlooked. Second, as both Max Weber and conventional wisdom suggest, we hear primarily about colleagues' problems with IRBs because, like any bureaucracy, the best a board can aspire to be is well-oiled, smooth-running, and thus silent. Bureaucracies, by definition, can be effective but not dazzling; this makes it tempting to generalize about all boards from the provocative stories we hear. In sum, the bias in what is commonly believed about IRBs is a product of how we have come to know them.

How to Work Well Now

There is little debate that the ethics review system needs to be reworked. The question is, how? I advocate changing local practices to suit the local research community, rather than refining federal regulations. The most fruitful reviews that I observed took place at universities that had done this by (1) drawing more people into the ethics review process, and (2) pressing this new cast of decision makers to talk to each other.[10]

To begin with, some universities have drawn an unusually wide range of people into ethics review by re-envisioning the model IRB member. These boards actively recruited faculty members who had been frustrated with the board in the past. For example, Ken, a statistician, described his experience as an IRB member as a process of mutual cooptation: "I admit that when I went on the board, my initial approach was 'I'm going to set these people straight.'" And indeed, in the meetings that I observed, Ken pushed his colleagues to avoid evaluating investigators' research designs—that is, to give advice but to steer clear of demanding design changes, even for notoriously loose methodologies such as Grounded Theory. "At the same time," Ken continued, "I learned a lot about what (it takes to protect subjects) and so, I think I've more than met them halfway. Although, I also feel that they did have a lot to learn." Because of his influence on the board as well as the board's effect on him, Ken said, "I wish people who did a lot of research would make that sort of commitment of one or two hours [per week to serve on the IRB]."[11]

Boards have involved new decision makers in other ways. IRB subcommittees, which can review lower-risk studies, have moved ethics review into academic departments. In so doing, these subcommittees of faculty members (who presumably understand the methods in question) have taken over the task of evaluating low-risk studies from board administrators. (In these cases, an IRB member served as a liaison between the full board and her departmental subcommittee, which typically comprised two or three faculty members.) In addition, IRBs have adopted term limits for board members to build change into the review system. If investigators' methods and topics of study shift over time, it stands to reason that the people best suited to review studies will change, too. Term limits also work to remedy the concern, whether real or imagined, that veteran IRB members have a conservative effect on IRB decisionmaking.

Most radically, at some universities investigators are asked—and in some instances, required—to attend the meetings at which their studies are reviewed. This practice is an unwitting return to an internal NIH strategy of ethics review that was phased out 40 years ago, before our current IRB system was set in place. For NIH scientists through the 1960s, the idea of evaluating a study without the investigator present was unthinkable because, the theory went, the investigator knew the research methods and the study population better than anyone else. Eventually, though, investigators came to be seen not as aids, but as contaminants, to sound moral decisionmaking. Based on my observations, it is apparent that having board representatives and investigators discuss studies together makes sense simply because talking is an efficient way to communicate. Whether protocols are exempted, expedited, or reviewed by the full board, it is imperative that investigators and IRB representatives talk (not only write) to each other, and that these discussions happen before an IRB takes action on a study.

I realize that there are other views on how to improve the ethics review system. One set of alternatives advocates improving the system by changing federal regulations (e.g., American Association of University Professors 2006; Center for Advanced Study 2005). It is not clear to me that further specifying the regulations would make them more usable, and I am unenthusiastic about changing regulations in a way that would encourage retrospective litigation in lieu of prospective ethics review.

My suggestions fit with a second set of alternatives, which advocate improving the ethics review system while working within the rules we already have in place. Not only is this approach more immediately feasible because it involves reforming practices at the local level rather than changing regulations at the national level, but these alternatives also appear to work. We can now read about how institutions can opt out of federal-wide assurances (Shweder 2006); how social scientists can institutionalize review practices that work best for them as "out-front, mainstream behavior" (Bledsoe et al. 2007; see also Jack Katz 2006); and how with just a bit of goodwill, reflexivity, and humor, ethnographers can educate IRB administrators and themselves about the realities of both fieldwork and ethics review (Bosk & De Vries 2004). Although these authors differ on points of fact and on the extent of their humility, they share a common underlying praxis: that of local change. In my experience, the biggest challenge to investigators and IRB representatives is finding ways to exchange ideas across institutions about new, fruitful local practices.

Still, important questions remain. What concerns me is that the social science victim narrative—by which I mean the story that human subjects regulations were not meant to apply to us—is pervasive among academics, and it is particularly central to qualitative researchers as a justification for their criticisms of IRBs. Yet this victim narrative does not stand up to historical scrutiny, as I have shown. Thus it remains to be seen: can we qualitative researchers find ways to justify changes to local practices without recourse to this narrative? Does the victim narrative hobble our attempts to advocate new local practices if we cannot move beyond it? For their part, can IRB administrators make ethics review seem more relevant to social scientists if they abandon this chestnut more explicitly? And finally, is it possible to have a forthright conversation about whether human subjects regulations actually make us angry for reasons that might be less noble than concern for academic freedom?

Footnotes

I am grateful to Renée Beard, Alan Czaplicki, Steve Hoffman, Susan Silbey, and Alistair Sponsel for their comments on earlier drafts of this piece.

1 I got IRB approval from my home institution and the IRBs I studied in order to observe (and in some instances audio-record) board meetings and to interview board members. I have gone through human subjects review for other studies as well, but getting IRB approval to study IRBs was certainly the most reflexive and bizarre process—not least because it forced questions about when investigators versus human subjects (in this case, IRB members) know what is best for the subjects.

2 Confusion abounds over these issues in part because historical actors were imprecise and inconsistent in their use of terms (most notably patient versus subject) over which regulatory battles are fought today. Historical actors were themselves aware of and frustrated by these inconsistencies.

3 NARAII: 443, Central Files 1960–82, b77, f3, Human subjects policy and regulations 1965–67, Rep Cornelius Gallagher to Terry, 13 Sept. 1965.

4 ONIHH: CC, Ethical, Moral and Legal Aspects, f2. Rourke to Stewart, 26 Oct. 1965.

5 Stewart's third and final memo on the topic in December 1966 stated that the policy “refers to all investigations that involve human subjects, including investigations in the behavioral and social sciences. This does not reflect a change in policy, but is a clarification only of the current policy for the use of all grantees” (NARAII: 443, Central Files 1960–82, b77, f3, Human subjects policy and regulations 1965–67, memo from Stewart, 12 December 1966).

6 I suspect that this lawyer, Edward Rourke, had in mind the (now infamous) cancer research scandal at the Jewish Chronic Disease Hospital that he was dealing with as it unfolded. One physician at fault had been funded by the National Cancer Institute, and patients' lawyers argued that because NIH had funded the work, NIH should be held financially responsible. Rourke's quote is from ONIHH: CC, Ethical, Moral and Legal Aspects, f2, Willcox to Dempsey, 13 July 1965.

7 ONIHH, CC: Ethical, Legal, and Moral Aspects, f2, Memo from Stewart to “The Heads of Institutions Conducting Research with Public Health Service Grants,” 8 Feb. 1966.

8 IRBs cannot actually weigh risks and benefits, despite the utilitarian language embedded in human subjects regulations, because the relevant costs and benefits cannot be made commensurate in practice (contrary to the view of scholars who elide public rhetoric with board practices in discussing how ethics decisions are made, e.g., Evans 2000:35). Feeley draws an analogy between tax exemption and IRB exemption to argue that researchers should be able to determine for themselves whether their research presents no more than minimal risk (and is thus exempt from review, usually) just as workers are allowed to determine whether they earned less than a federal income threshold (and are thus exempt from filing taxes). Exempting researchers from IRB review is not analogous to exempting low earners from filing taxes because IRB members do not quantify their evaluations and therefore do not create numerical thresholds for exemption. Still, I do appreciate Feeley's thinking on this matter because I have similar feelings toward paying taxes and submitting IRB materials: in principle, I am happy to do both, but in practice, I enjoy neither.

9 Author's meeting transcript, SG, May: 364–70.

10 It is no coincidence that the IRBs I studied in depth were among the better-functioning IRBs I have encountered. To help myself endure the field experience, I selected boards with which I felt most comfortable and least likely to be put in a difficult ethical position myself (i.e., I felt uneasy about the judgments of a few IRBs that I considered observing). More to the point, the IRBs that eventually gave me access to audio-record their meetings did so to promote research on IRBs so that they can improve—a rather enlightened position that suggested to me these boards were already open and responsive to criticisms.

11 Author's interview: B9.

References

Archival Sources

Records of the National Institutes of Health, National Archives and Records Administration II, College Park, MD (cited as NARAII: 443).
Records of the NIH Clinical Center, Office of NIH History, National Institutes of Health, Bethesda, MD (cited as ONIHH: CC).

Published Sources

Altbach, Philip (1980) "The Crisis of the Professoriate," 448 Annals of the American Academy of Political and Social Science 1–14.
American Association of University Professors (2006) "Research on Human Subjects: Academic Freedom and the Institutional Review Board," Committee A Report, http://www.aaup.org/AAUP/About/committees/committee+repts/CommA/ResearchonHumanSubjects.htm (accessed 2 April 2007).
Bledsoe, Caroline, et al. (2007) "Regulating Creativity: Research and Survival in the IRB Iron Cage," 101 Northwestern University Law Rev. 593–642.
Bosk, Charles, & De Vries, Raymond (2004) "Bureaucracies of Mass Deception: Institutional Review Boards and the Ethics of Ethnographic Research," 595 The Annals of the American Academy of Political and Social Science 249–63.
Center for Advanced Study (2005) "The Illinois White Paper: Improving the System for Protecting Human Subjects: Counteracting IRB 'Mission Creep,'" http://www.law.uiuc.edu/conferences/whitepaper/papers/SSRN-id902995.pdf (accessed 2 April 2007).
Crowther-Heyck, Hunter (2006) "Patrons of the Revolution: Ideals and Institutions in Postwar Behavioral Sciences," 97 Isis 420–46.
De Vries, Raymond G., & Forsberg, Carl P. (2002) "What Do IRBs Look Like? What Kind of Support Do They Receive?," 9 Accountability in Research 199–216.
Evans, John (2000) "A Sociological Account of the Growth of Principlism," 30 The Hastings Center Report 31–8.
Heimer, Carol A., & Staffen, Lisa R. (1998) For the Sake of the Children: The Social Organization of Responsibility in the Hospital and the Home. Chicago: Univ. of Chicago Press.
Katz, Jack (2006) "Ethical Escape Routes for Underground Ethnographers," 33 American Ethnologist 499–506.
Katz, Jay (1995) "Do We Need Another Advisory Commission on Human Experimentation?," 25 Hastings Center Report 29–31.
Milgram, Stanley (1974) Obedience to Authority. New York: Harper and Row.
Robin, Ron (2001) The Making of the Cold War Enemy: Culture and Politics in the Military-Intellectual Complex. Princeton, NJ: Princeton Univ. Press.
Shannon, James A. (1961) "The National Institutes of Health: Programmes and Problems," 155 Proceedings of the Royal Society of London 171–82.
Shweder, Richard (2006) "Protecting Human Subjects and Preserving Academic Freedom: Prospects at the University of Chicago," 33 American Ethnologist 507–18.
Stark, Laura (2006) "Morality in Science: How Research Is Evaluated in the Age of Human Subjects Regulation." Ph.D. dissertation, Department of Sociology, Princeton University.