
Legal and Ethical Complexities of Consent with Cognitively Impaired Research Subjects: Proposed Guidelines

Published online by Cambridge University Press:  01 January 2021

Extract

When science takes man as its subject, tensions arise between two values basic to Western society: freedom of scientific inquiry and protection of individual inviolability.... At the heart of this conflict lies an age-old question: When may a society, actively or by acquiescence, expose some of its members to harm in order to seek benefits for them, for others, or for society as a whole?

Copyright © American Society of Law, Medicine and Ethics 1996


References

Katz, J., Experimentation with Human Beings (New York: Russell Sage Foundation, 1971): At 1.
The present federal regulations include pregnant women, fetuses, institutionalized persons, prisoners, and children as “vulnerable” populations that need additional protections. I do not deal with the issue of vulnerability in the sense of being susceptible to coercive pressures; rather, I focus on the difficulties relating to informed consent in research with cognitively impaired subjects.
Annas, G. Grodin, M., “Introduction,” in Annas, G.J. Grodin, M.A., eds., The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation (New York: Oxford University Press, 1992): At 3.
The Nuremberg Code (1947). See full text as reprinted in Katz, supra note 1, at 305–06.
Katz, J., “The Nuremberg Consent Principle: Then and Now,” in Annas, G.J. Grodin, M.A., eds., The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation (New York: Oxford University Press, 1992): At 235 (Katz argues that the Nuremberg Military Tribunal's main concern was to prevent recurrence of the atrocities of Nazi experimentation, and thus they did not seek to balance the advancement of science and individual inviolability.).
“Incompetent” is used here to refer to incapacity to make medical decisions; it is not necessarily a legal determination of incompetence. Only a court can make a determination of “competence.” “Capacity” determinations, on the other hand, are relegated to the medical or mental health professions.
World Medical Association, Declaration of Helsinki: Recommendations Guiding Medical Doctors in Biomedical Research Involving Human Subjects (1964). Reprinted in Mappes, T. Zembaty, J., eds., Biomedical Ethics (New York: McGraw-Hill, 1981): At 146–47.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects in Research (Washington, D.C.: U.S. Government Printing Office, OS 78–0012, 1978).
Id. at 13 (the list included “infants and young children, mentally disabled patients, the terminally ill and the comatose”).
Department of Health and Human Services, Rules and Regulations for the Protection of Human Research Subjects, 45 C.F.R. §§ 46.101–.409 (1983).
45 C.F.R. § 46.116 (1991) (emphasis added).
21 C.F.R. §§ 50.20–.48 (1993).
43 Fed. Reg. 53,954 (Nov. 17, 1978).
See, for example, Appelbaum, P. Lidz, C. Meisel, A., Informed Consent: Legal Theory and Clinical Practice (New York: Oxford University Press, 1987).
43 Fed. Reg. 53,954 (Nov. 17, 1978) (“‘mentally disabled’ individuals includes those who are mentally ill, mentally retarded, emotionally disturbed, psychotic or senile, regardless of their legal status or the reason for being institutionalized.”).
See infra page 20.
See Horowitz, J., “For the Sake of Science: When Tony Lamadrid, a Schizophrenic Patient and Research Subject at UCLA, Committed Suicide, It Set Off a National Debate: What Is Acceptable in Human Experimentation and Who Decides?,” Los Angeles Times Magazine, Sept. 11, 1994, at 16; and Office for Protection from Research Risks, “Evaluation of Human Subject Protections in Schizophrenia Research Conducted by the University of California Los Angeles” (May 11, 1994).
See, for example, Fletcher, J.C. Wichman, A., “A New Consent Policy for Research with Impaired Human Subjects,” Psychopharmacology Bulletin, 23 (1987): 382–85.
Statement on Federal Funding for Research on Human Embryos, 30 Weekly Comp. Pres. Doc. 2459 (Dec. 2, 1994). No action was taken in 1995. It remains to be seen whether a commission will be established in 1996.
I use the terms research and experimentation interchangeably, except where specially noted. However, the two have different meanings, and it is useful to clarify exactly what is intended. The Belmont Report points out that “[t]he fact that a procedure is ‘experimental,’ in the sense of new, untested, or different, does not automatically place it in the category of research” (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, supra note 8, at 3). Experimentation and research, however, both involve unknown consequences and thus both should be subject to increased scrutiny. McNeill, P., The Ethics and Politics of Human Experimentation (New York: Oxford University Press, 1993). Even so, it is not realistic to require that every doctor who tries a new or experimental treatment on one patient get IRB approval. Thus the guidelines proposed here should only apply when the physician seeks systematically to gather general information, that is, when the doctor seeks not only to treat the individual patient but also to obtain information that will help treat other patients. Thus many “experimental treatments” may become research as they are repeatedly tested on different patients. IRB approval would not be avoided by categorizing each of a number of trials of an experimental medium or method as “individual treatment.”
Katz, J., “Human Experimentation and Human Rights,” St. Louis University Law Journal, 38 (1993): At 7.
Id. at 41. See also Williams, P., “Why IRBs Falter in Reviewing Risks and Benefits,” IRB: A Review of Human Subjects Research, 6, no. 3 (1984): At 4 (listing possible reasons why IRBs fail to evaluate adequately risks and benefits, but noting that there is “no evidence that unduly dangerous protocols are actually passing committee review”).
Katz, supra note 21, at 158–59 (“In human experimentation, research ethics committees are ideal candidates to exercise discretion in the application of principles and rules to particular research proposals…. The degree of discretion exercised by committees should not mean that some committees can approve research on human subjects that is below the minimum ethical standard for such research. This position … requires an active national body to oversee the system of review and provide specificity in research guidelines.”).
45 C.F.R. §§ 46.101–.409 (1983); and 21 C.F.R. §§ 50.20–.48 (1993). More guidance at the national level on the meaning of these terms is necessary.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, U.S. Department of Health, Education and Welfare, Institutional Review Boards: Report and Recommendations (Washington, D.C.: U.S. Government Printing Office, OS 78–0008, 1978): At 1.
American College of Physicians, “Cognitively Impaired Subjects,” Annals of Internal Medicine, 111 (1989): At 843.
National Institutes of Health, Clinical Center Policy, Research Involving Impaired Human Subjects Clinical Center Policy for the Consent Process (Bethesda: National Institutes of Health, 1986).
Wicclair, M., Ethics and the Elderly (New York: Oxford University Press, 1993).
American College of Physicians, supra note 26, at 883.
Fletcher, J.C. Miller, F.G., The Promise and Perils of Public Bioethics (Baltimore: University of Maryland, 1994).
Id. at 13.
Dworkin, G., “Law and Medical Experimentation: Of Embryos, Children and Others with Limited Legal Capacity,” Monash University Law Review, 13 (1987): 189–208.
Id. at 208.
Miller, B., “Autonomy and Proxy Consent,” IRB: A Review of Human Subjects Research, 4, no. 10 (1982): At 1, 2.
Additional protections may also be necessary when the research presents a high risk of harm to the subjects, or when the subject population is considered unusually vulnerable to coercive tactics designed to induce participation.
Miller, supra note 35, at 3 (specific authorization; general authorization with instructions; general authorization without instructions; instructions without authorization; substituted judgment; and deputy judgment). He concludes that all but one of the models, deputy judgment, protect autonomy in one or more of the four aspects. I do not use Miller's categories; instead, I address the standards of proxy decision making (best interests and substituted judgment) allowed in different contexts. Miller's analysis of autonomy, however, is useful in explaining why one model of proxy consent is preferable to another.
Although Wicclair recognizes the existence of these documents, he does not specifically allude to them in his seven conditions. In his comments, however, he notes that “speculative judgments are discouraged by requiring explicit instructions and criteria to be given to surrogates.” Wicclair, supra note 28, at 183.
American College of Physicians, supra note 26, at 884.
See Sunderland, T. Dukoff, R., “Informed Consent with Cognitively Impaired Patients: An NIMH Perspective on the Durable Power of Attorney,” Accountability in Research (forthcoming, 1996) (evaluating the NIH Clinical Center Policy).
Sabatino, C., “Legislative Trends in Health-Care Decisionmaking,” ABA Bioethics Bulletin, 3, no. 2 (1994): 10–11 (Advance directive is used in the study as an all-encompassing term that includes living wills and DPAs).
See DeRenzo, E., “Surrogate Decision Making for Severely Cognitively Impaired Research Subjects: The Continuing Debate,” Cambridge Quarterly of Healthcare Ethics, 3 (1993): At 543.
42 U.S.C. §§ 1395cc, 1396aa (1990).
See DeRenzo, supra note 42, at 546.
The term surrogate will be used rather than guardian. A guardian is a court-appointee who has much greater control over an incompetent individual's life than a surrogate decision maker. Casasanto, Simon, and Roman note a historical disagreement about whether a guardian should act in the ward's best interests or as a surrogate using a substituted judgment standard. Casasanto, M. Simon, M. Roman, J., “A Model Code of Ethics for Guardians,” Whittier Law Review, 11 (1989): At 545. Casasanto, Simon, and Roman suggest that a substituted judgment standard is preferable, and that a best interests standard only be used when there is no indication of the ward's preferences. Id. at 547. In addition, they state that the guardian should be careful not to act beyond the scope of his/her authority and should allow wards to make decisions on their own behalf when appropriate. Thus, in situations where the research subject has already had a guardian appointed, or in states where the only surrogate decision makers legally provided for are guardians, such guardians may function in essentially the same manner as any other surrogate decision maker. Their proposed model code's Rule 4 recognizes the guardian's responsibility to make medical decisions; it states, however, that the guardian cannot consent to “experimental treatment … without seeking review by the court or the ward's attorney or other representative.” Id. at 561. This seems unnecessarily burdensome. The guardian in these situations should be as free to act as any other surrogate—being permitted to make decisions within the confines of the applicable guidelines.
Hamann, A.A., “Family Surrogate Laws: A Necessary Supplement to Living Will and Durable Powers of Attorney,” Villanova Law Review, 38 (1993): At 106 (“All states have intestacy laws to provide for the distribution of the decedent's property without a will. A comparable alternative is needed to provide for those individuals who fail to execute a living will or a durable power of attorney.”).
NIH Clinical Center Policy, supra note 27, cases 5, 6.
Dubler, N., “Some Legal and Moral Issues Surrounding Informed Consent for Treatment and Research Involving Cognitively Impaired Elderly,” in Kapp, M.B. Pies, H.E. Doudera, A.E., eds., Legal and Ethical Aspects of Health Care for the Elderly (Ann Arbor: Health Administration Press, 1986): 247–57. Dubler argues that the courts are an improper forum to make health care decisions; however, she states that “there is a class of patients, primarily the long-term institutionalized, retarded, and mentally ill, who are demonstrably so vulnerable that public judicial process is required.” Id. at 252. There is no reason why these groups should be singled out for differential treatment. All cognitively impaired subjects face the same questions of informed consent and substitute decision making, some, perhaps, to a greater degree than others. The courts are no better arbiters of the research process for the groups identified by Dubler than for others; recourse to the judicial system merely lends a false sense of legitimacy. Rather, clear guidelines should be set forth that limit what types of research can be done with cognitively impaired persons, regardless of the source of the impairment. Furthermore, local review boards should be responsible for taking into account additional considerations that are unique to particular groups, for example, coercion in an institutionalized setting.
Hamann, supra note 47, at 166. This has led to such anomalous scenarios as the Christian Fellowship of the Disabled being appointed as the guardian of a permanently unconscious individual who had no previous affiliation with the group, or the Americans United for Life Legal Defense Fund being allowed to argue against a family's decision to remove a feeding tube when the guardian ad litem was unable to find a physician to testify in favor of continuing treatment.
See infra notes 90–91 and accompanying text.
See Hamann, supra note 47 (arguing that the family is the most appropriate decision maker).
See, for example, Cruzan v. Director, Missouri Dep't of Health, 497 U.S. 261, 286–87 (1990) (recognizing that a number of states permit close family members to make surrogate treatment decisions, but holding that the U.S. Constitution does not require that a state do so in all circumstances).
See, for example, Illinois Health Care Surrogate Act, Ill. Rev. Stat. ch. 755, § 40/25 (1993); Tex. Code Ann. Health & Safety § 313.002(6) (Supp. 1995); and Va. Code 54.1–2986 (1994). Even under statutes that set out a hierarchy, it may be necessary to choose among people in a given class, for example, between a number of adult children or siblings. In such situations, some statutes authorize all the members of the class to make decisions and require either a majority or a consensus (Illinois). Others specify criteria, such as age, expressed interest, or availability of a particular member of the class (Virginia). Lastly, some statutes leave the ranking among class members unspecified, and thus open the possibility that the physician will have to exercise discretion (Texas). In situations where such a choice must be made, additional scrutiny may be appropriate.
American College of Physicians, supra note 26, at 845; NIH Clinical Center Policy, supra note 27, at 2; and Wicclair, supra note 28, at 182.
See generally, Busby-Mott, S., “The Trend Towards Enlightenment: Health Care Decisionmaking in Lawrence and Doe,” Connecticut Law Review, 25 (1993): 1159–225.
Miller, supra note 35.
See, for example, American College of Physicians, supra note 26, at 844; and Keyserlingk, E.W., et al., “Proposed Guidelines for the Participation of Persons with Dementia as Research Subjects,” Perspectives in Biology and Medicine, 38 (1995): 319–62 (condition 10, at 352).
Compare Keyserlingk et al., supra note 59, at 351 (restricting the use of advance directives for research involving more than a minor increment over minimal risk except where the subject has experienced a similar level of pain or discomfort before consenting to the research).
States' laws vary on this point. For example, a judicial determination of incompetence may be necessary. Other states may require that a court scrutinize the proxy decision. Compare, for example, Superintendent of Belchertown State School v. Saikewicz, 370 N.E.2d 417 (Mass. 1977) (requiring court review) with Matter of Quinlan, 355 A.2d 647 (N.J. 1976) (noting that court review of all decisions would be unduly burdensome).
The “evidence” may include written preferences, such as those found in a general advance care document; practices throughout the subject's life (that is, previous research participation); and verbal communications regarding research.
American College of Physicians, supra note 26, at 845.
Wicclair, supra note 28, at 182 (conditions 1(a), 2(a)).
Id. (condition 5).
But see Keyserlingk et al., supra note 59, at 351 (allowing only the use of a written directive).
American College of Physicians, supra note 26, at 845.
Because the American College of Physicians paper only allows use of a subject's advance directive, this position does not allow a proxy decision maker to take into account other evidence of the subject's wishes. Thus a proxy could not consent for a subject who, although he/she had not executed an advance care document, has left other evidence that he/she would not want to participate in a research protocol, even if participation was in the subject's best interests. In this sense, the paper only permits a proxy to take into account evidence of a subject's negative wishes. By contrast, my position, outlined in the previous paragraphs, would allow a proxy also to consider evidence of a subject's positive attitude toward participation in research. Id.
Wicclair, supra note 28, at 182 (conditions 1(b), 2(b)).
American College of Physicians, supra note 26, at 844.
Lazarus, J., “Ethical Issues and Conflicts,” Journal of the California Alliance for the Mentally Ill, 5 (1994): 20–21 (describing a patient who competently consented to research that involved taking him off medication, subsequently regressed and refused to take the experimental medication, and, later, after stabilizing again on the initial medication, “earnestly begged for another chance to try the experimental medication.” Id. at 20.).
Id.; see also Warren, J., et al., “Informed Consent by Proxy,” N. Engl. J. Med., 315 (1986): 1124–28 (suggesting that investigators should reconsider when research begins, initiating contact while the subjects are still competent and thus can prospectively consent).
See, for example, Keyserlingk et al., supra note 59, at 349 (discussing the use of the “Ulysses Contract” (a self-binding psychiatric advance directive) in dementia research).
Fletcher, Wichman, supra note 18, at 383.
Wicclair, supra note 28, at 182 (condition 6).
If state law does not allow for treatment over an incompetent patient's refusal, research, even if it holds the potential for direct therapeutic benefit, would likewise be impermissible.
See infra pages 24–28.
State laws governing treatment refusals of incompetent patients would apply here. Thus it may be necessary to get a court order declaring the individual incompetent or to appoint a surrogate.
See, for example, Keyserlingk et al., supra note 59.
A consent auditor and the research auditor may be the same person. The different titles simply highlight the different roles—the consent auditor monitors the initial consent process to ensure that the subject has the ability to consent and actually understands the information. The research auditor would monitor the experiment to ensure that continued participation is appropriate given either changes in the protocol or changes in the subject's physical and mental state.
See Lazarus, supra note 72, at 21 (noting the “additional advocacy roles for family or friends of the patients”).
National Commission for Protection of Human Subjects of Biomedical and Behavioral Research, supra note 8, at 5.
See, for example, Milliken, A.D., “The Need for Research and Ethical Safeguards in Special Populations,” Canadian Journal of Psychiatry, 38 (1993): At 683 (discussing the need to balance risk and potential benefit). In this section, I will not go into detail about the risks that IRBs should consider when deciding whether to accept a particular protocol. For my purposes, I assume that the protocol has already passed IRB review and that at issue is whether cognitively impaired subjects should be allowed to participate. Because some protocols are designed to include only cognitively impaired subjects, in some sense the protocol approval will overlap with the analysis presented here. For example, it would be useless to “approve” a protocol involving severely cognitively impaired subjects, but not to allow proxy consent to participation.
Agreement also has not been reached on the definitions of more fundamental terms such as harm and risk. See, for example, Meslin, E., “Protecting Human Subjects from Harm Through Improved Risk Judgments,” IRB: A Review of Human Subjects Research, 12, no. 1 (1990): 7–10 (comparing various definitions of harm and risk and proposing a model for risk judgments); and Beauchamp, T. Childress, J., Principles of Biomedical Ethics (New York: Oxford University Press, 1994) (discussing risk assessment, uncertainty, and risk perception).
45 C.F.R. § 46.102(I) (1991); and 21 C.F.R. § 50.3(l) (1993).
A 1995 consensus statement regarding informed consent in emergency research proposed a new risk category—“appropriate incremental risk.” This is defined as “any potential risk associated with participating in the research protocol relative to the natural consequences of the medical condition, or any potential risk associated with receiving the experimental intervention relative to receiving the standard intervention for the medical condition.” Biros, M.H., et al., “Informed Consent in Emergency Research: Consensus Statements from the Coalition Conference of Acute Resuscitation and Critical Care Researchers,” JAMA, 273 (1995): At 1286. See also Keyserlingk et al., supra note 59, at 329 (noting that minimal risk must be evaluated in light of the subject's “everyday care and treatment.” Thus the baseline of risk for a person with Alzheimer's would be higher than for a healthy person.).
Current DHHS regulations allow parents to consent to research involving their children that involves greater than minimal risk and no potential for direct therapeutic benefit when it is likely to yield generalizable information about the subject's condition. 45 C.F.R. § 46.406 (1991). The regulations state that the risk must “represent a minor increase over minimal risk.” Id. Furthermore, the regulations require the approval of the secretary of DHHS. I suggest instead that when a research protocol involves risk that is too high for IRB approval, review should be the responsibility of a national review board. See infra pages 28.
It is hard to quantify benefits. For example, how do you compare potential life-spans—living pain-free and out of the hospital for two weeks versus confinement to a hospital bed for two months? Notwithstanding, it is possible for IRBs to make general benefit calculations.
This simplifies the analysis greatly. The percentages, while being difficult to predict initially, may also not be this clear cut. For example, a drug could be likely to help 20 percent of the people 100 percent, 10 percent of the people 50 percent, and 2 percent of the people 30 percent. For the sake of argument, simple statistics are used.
As the potential for direct therapeutic benefit approaches zero, greater protections should apply.
I will not discuss protections in the treatment context. If these were to be changed or augmented, similar protections should be applied to the research context.
If the subject in question has been adjudicated as incompetent, he/she can neither consent to research nor execute an advance care document.
See generally, Robertson, J., “Taking Consent Seriously: IRB Intervention in the Consent Process,” IRB: A Review of Human Subjects Research, 4, no. 5 (1982): 1–5 (describing ways in which IRBs could improve consent forms and the consent process through the use of competency testing, monitoring, and restructuring).
Therapeutic misconception refers to a phenomenon that occurs when the investigator is viewed as both a researcher and an attending physician. The result is a blurring of the two roles and a misconception that the research is primarily conducted for the direct therapeutic benefit of the patient-subject. Both physicians and patients are subject to this phenomenon.
Appelbaum, P.S. Roth, L. Lidz, C., “The Therapeutic Misconception: Informed Consent in Psychiatric Research,” International Journal of Law and Psychiatry, 5 (1982): 319–29.
Benson, P. Roth, L. Winslade, W., “Informed Consent in Psychiatric Research: Preliminary Findings from an Ongoing Investigation,” Social Science and Medicine, 20 (1985): 1331–41.
Id. at 1337.
Stanley, B. Stanley, M. Lautin, A., “Informed Consent and Competency in Psychiatric Research,” (unpublished manuscript, on file with the author, 1985) (medication use did not affect capacity, but diagnosis correlated with poorer decision making, that is, persons with schizophrenia were more likely to choose to participate in high-risk, low-benefit studies).
Appelbaum, Lidz, Meisel, supra note 14, at 248–58. See also Grinnell, F., “Endings of Clinical Research Protocols: Distinguishing Therapy from Research,” IRB: A Review of Human Subjects Research, 12, no. 4 (1990): 1–3 (noting that many subjects at the end of a research protocol are unaware that a potentially beneficial treatment may be discontinued).
See, for example, Robertson, supra note 94.
A low potential could mean either that the magnitude of benefit is low or that the percentage calculation approaches zero. The former should be treated cautiously because, as previously mentioned, it primarily involves an individual assessment. The latter could occur either because of the low probability that the treatment will help or because of the experimental design.
See National Commission for Protection of Human Subjects of Biomedical and Behavioral Research, supra note 8, at 5: “Respect for the immature or incapacitated may require protecting them as they mature or while they are incapacitated…. The extent of protection afforded should depend upon the risk of harm and the likelihood of benefit.”
American College of Physicians, supra note 26, at 844.
To some extent, guidelines can carve out extremes along the continuum but not the middle. Thus one can specify restrictions on minimal and high-risk research, but appropriate protections for research that falls into the category of greater than minimal risk but lower than high risk should be determined by local IRBs. Factors to consider include the degree of risk (higher = greater protections) and the potential for direct therapeutic benefit (lower = greater protections).
See infra case 7.
Wicclair, supra note 28, at 182 (condition 4).
Id. at 182 (condition 3).
Compare the NIH Clinical Center Policy of having a bioethics consultant as a nonvoting member of an IRB. DeRenzo, E. Wichman, A., “A Pilot Project: Bioethics Consultants as Non-Voting Members of IRBs at the National Institutes of Health,” IRB: A Review of Human Subjects Research, 12, no. 6 (1990): 6–8. In general, the use of such consultants was considered helpful (id.; and DeRenzo, E. Bonkovsky, F., “Bioethics Consultants to the National Institutes of Health's Intramural IRB System: The Continuing Evolution,” IRB: A Review of Human Subjects Research, 15, no. 3 (1993): 9–10). However, most bioethics consultants will need additional training to serve in this capacity. DeRenzo, Wichman, id. at 8 (“individuals who were more familiar with the performance of human subjects research were appropriately perceived as better able to assist the ICRSs”).
Bein, P.M., “Surrogate Consent and the Incompetent Experimental Subject,” Food and Drug Cosmetic Journal, 46 (1991): 739–71.
Where there is no evidence of the subject's wishes, the best interests standard applies and participation would be barred. Likewise, where there is some but insufficient evidence, the high level of certainty needed would not be met and thus participation would be barred.
Delgado, R. Leskovac, H., “Informed Consent in Human Experimentation: Bridging the Gap Between Ethical Thought and Current Practice,” UCLA Law Review, 34 (1986): At 116.
The above analysis assumes that the IRB has already approved the research. Thus, at issue here is whether the participation of an individual subject is permissible.
See NIH Clinical Center Policy, supra note 27, case 8.
Research that involves a significant degree of risk (that is, greater than minimal) and offers little or no therapeutic benefit would never be in a person's best interests. See American College of Physicians, supra note 26, at 845. Even if one assumes some altruistic gain from participating in experimentation, this would be outweighed by the high risk. Thus a surrogate decision maker using the best interests standard should never consent to such experiments.
See Miller, supra note 35, at 6 (arguing that “deputy judgment” where there is “no authorization of the agent by the principal, and the agent is not able to make a decision that is implied by the principal's values and attitudes … does not respect the autonomy of the principal in any of the four aspects of autonomy.”).
Jonas, H., “Philosophical Reflections on Experimenting with Human Subjects,” in Freund, P.A., ed., Experimentation with Human Subjects (New York: G. Braziller, 1970): At 3–4.
McNeill, supra note 20, at 155 (McNeill also notes that “[t]here needs to be an authority to define the cases to which the rule applies because it is not possible to specify in advance all the situations to which the rule or principle applies.” Id. at 156). But see Keyserlingk et al., supra note 59, at 334 (arguing that even a national review board should not have the authority to allow research under these circumstances).
See DeRenzo, supra note 42, at 545 (discussing the benefits of a public forum).
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, supra note 25, at 1. See also Biros et al., supra note 87, at 1286 (“Because the local IRB has good insight into local medical practice, the local patient population, and the capabilities of local researchers, institutions, and resources, the IRB should be the monitoring body primarily responsible for maintaining vigilant oversight of clinical trials of emergency research.”).
National Institutes of Health, Office for Protection from Research Risks, Institutional Review Board Guidebook: Protecting Human Research Subjects (Washington, D.C.: U.S. Government Printing Office, 1993): 6–26 to 6–32.
Id. at 6–41.
Id. at 6–47.
Id. at 6–31 (“IRBs should be aware of any applicable law in their state particularly those relating to consent by family members on behalf of persons incapable of consenting on their own.”).
See, for example, Kane, J. Robbins, L. Stanley, B., “Psychiatric Research,” in Greenwald, R. Ryan, M.K. Mulvihill, J., eds., Human Subjects Research (New York: Plenum, 1982): 193–205 (proposing specific issues for IRBs to consider with regard to subjects who have a mental illness, but noting that “the population has much in common with other patient populations. Therefore issues such as risk-benefit ratio of the proposed research must be addressed as with nonpsychiatric research.” Id. at 194.); and DeRenzo, E.G., “The Ethics of Involving Psychiatrically Impaired Persons in Research,” IRB: A Review of Human Subjects Research, 16, no. 6 (1994): 7–9, 11.
See, for example, Letter to Ron Wyden from Subcommittee Staff (Mar. 11, 1994) (highlighting abuses in this context); Biros et al., supra note 87; and Protection of Human Subjects, 60 Fed. Reg. 49,086 (proposed Sept. 21, 1995).
Letter to Ron Wyden from Subcommittee Staff (Mar. 11, 1994). The present federal regulations appear to be inconsistent. Compare 45 C.F.R. § 46.116(d) (1991) with 21 C.F.R. § 50.23(a) (1993).