
Improving the Ethical Review of Health Policy and Systems Research: Some Suggestions

Published online by Cambridge University Press:  21 April 2021


Independent Articles: Commentary
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© 2021 The Author(s)

Consistent and well-designed frameworks for ethical oversight enable socially valuable research while forestalling harmful or poorly designed studies. I suggest some alterations that might strengthen the valuable checklist Rattani and Hyder propose in this issue of the Journal of Law, Medicine & Ethics1 for the ethical review of health policy and systems research (HPSR), or prompt future work in the area.

Institutional Versus Individual Interventions

Rattani and Hyder describe HPSR as “investigation, evaluation, and/or implementation of healthcare strategies or issues at the institutional or systems-level.”2 But their case study involves an individual-level intervention — a conditional cash transfer — for which individual informed consent was obtained, just as in traditional clinical research. In contrast, much HPSR involves changes to institutional rules or policies, such as changes to health system budgets, staffing, or supply chains, where individual consent is infeasible. More detail about how the checklist applies to institutional HPSR, and who should review it, would strengthen the project. It is not obvious that research ethics committees should review institutional-level HPSR,3 and current law in the United States exempts some types of HPSR — such as research on the design of benefit programs — from research ethics committee review, though not from review altogether.4

The distinction between individual and institutional HPSR might also help clarify the proper role of gatekeepers. When individual consent is feasible, respect for autonomy supports a presumption in favor of leaving enrollment decisions in the hands of potential participants, not gatekeepers. By contrast, the infeasibility of individual consent for institutional HPSR makes representatives more relevant. Yet identifying legitimate representatives is challenging. If institutional HPSR is proposed for a political jurisdiction, politically legitimate representatives are appropriate gatekeepers. But in the absence of recognized structures of representation, authorizing informal representatives to approve or veto studies presents complexities.5

Incentives, Harm, and Undue Influence

Rattani and Hyder suggest that “the use of incentives creates a unique risk for harm, especially in LMICs, where the socioeconomic effects of poverty may inappropriately influence participation.”6 Incentives to participate in a risky study could in principle produce undue influence by leading participants to misjudge risks, though the reality of that danger is empirically uncertain.7 But harm from undue influence requires that the underlying intervention be risky: incentives cannot make a low-risk intervention into a high-risk one. Meanwhile, though incentives may activate financial motivations, financial motivations do not make participation inappropriate.8 I worry that the checklist’s concerns about incentives may amplify existing misconceptions among research ethics committees that incentives undermine autonomy9 and motivate disproportionate scrutiny of incentive-based research. It is doubtful that providing an intervention without consent — as institutional-level HPSR often involves — raises fewer concerns than incentivizing its use.

Further, the same “incentives that expand a participant’s range of opportunities” may also “entice participants to undergo risks they would not otherwise”10: an incentive can expand opportunity while leading participants to assume risks. This is recognized outside research: “in the realm of work it is ethically permissible and not undue influence to offer money as an incentive to get people to perform activities that they would otherwise not.”11 Likewise, workers can accept time-limited incentives (like bonuses) without being harmed by their temporary receipt. This calls into doubt the suggestion that consensual provision of temporary incentives in research is harmful.

Avoiding Research Exceptionalism and Overbroad Mandates

Many HPSR interventions, including conditional cash transfers, could be implemented outside research — either by governments or by employers and philanthropists — without research ethics committee review, and often even without consent. This distinguishes many HPSR interventions from investigational treatments, for which consent is required even outside research. And it raises an important question about research exceptionalism: why should providing an intervention via HPSR prompt greater ethical review than simply implementing the intervention without research?

While the checklist’s goal of improving consistency in HPSR review is laudable, imposing clinical-style research ethics committee review on HPSR, or imposing more stringent duties of justice on researchers than non-researchers, creates counterproductive incentives to implement policy changes without research.12 Clarifying which aspects of the checklist entail mandates as opposed to encouragement could help address this concern. Consent when practicable and not waived (II(2a)), and a reasonable balance of risk and benefit (VII(6)), should be mandatory. By contrast, other aspects of the checklist, such as the details of community engagement and research translation, support encouragement but not mandates.

Excessively aspirational mandates risk either obstructing valuable research or prompting conceptual contortions from research ethics committees and researchers. For instance, while global health research as an enterprise should promote health equity and the interests of the worst off, each HPSR study in an LMIC need not itself realize those goals. Many low- and middle-income countries are large and economically diverse, and mandating that all HPSR in low- and middle-income countries achieve global justice goals will incentivize overbroad definitions of equity and poverty. It would be better to recognize that just as some HPSR in Boston that neither serves nor harms global justice is acceptable, so is some similar research in Bangalore. Similarly, mandating equipoise in HPSR, as opposed to a reasonable risk/benefit balance, seems dubious given the contested status of equipoise even in medical research.13

Mandating “[e]quality in the distribution of power to make decisions, object, or modify various aspects of the study … between researchers and communities”14 likewise presents concerns. The researcher-community relationship better fits a separation-of-powers model than equal, coextensive power. Typically, participants (whether groups or individuals) have decisive — not merely equal — power to decide whether to enroll or withdraw. But they do not have equal power to modify the design of ongoing studies, and permitting such modification without careful planning can erode the social value and scientific validity needed for research to be ethical. Research ethics should consider how to ensure fairness and prevent harm under conditions of unequal power, rather than imposing a requirement of equal power as a precondition to research.

Selecting the Best Ethical Framework

The “principlist” (autonomy, beneficence, nonmaleficence, justice) framework familiar in clinical ethics is used to ground the checklist.15 This framework fits uneasily with research ethics, especially the systems-level ethical issues HPSR presents. For instance, Rattani and Hyder find themselves driven to transmute principlist respect for autonomy to a nonspecific principle of respect. Similarly, it is not clear that beneficence and nonmaleficence should be understood as distinct principles,16 or separate from justice, in research. Rattani and Hyder understand justice to include improving the well-being of the worst off, which seems like a species of beneficence. Grounding the checklist in ethical frameworks more commonly used in research ethics,17 and/or frameworks used in public health or population-level bioethics,18 might enhance the ethical review of HPSR.

Footnotes

Professor Persad has received funding from the Greenwall Foundation, compensation from the ASCO Post for column authorship, and consulting fees from WHO.

References

Rattani, A. and Hyder, A., “Operationalizing the Ethical Review of Global Health Policy and Systems Research: A Proposed Checklist,” Journal of Law, Medicine & Ethics 49, no. 1 (2021): 92-122.
Faden, R. R., Beauchamp, T. L., and Kass, N. E., “Informed Consent, Comparative Effectiveness, and Learning Health Care,” New England Journal of Medicine 370, no. 8 (2014): 766-768.
Persad, G., “Democratic Deliberation and the Ethical Review of Human Subjects Research,” in Human Subjects Research Regulation: Perspectives on the Future, I. G. Cohen and H. F. Lynch, eds. (Cambridge, MA: MIT Press, 2014): 157-172.
Salkin, W., Not Just Speaking for Ourselves (Harvard University Press, forthcoming).
Rattani and Hyder, supra note 1.
Largent, E. A. and Lynch, H. F., “Paying Research Participants: The Outsized Influence of ‘Undue Influence,’” IRB 39, no. 4 (2017): 1-15; S. D. Halpern, J. H. T. Karlawish, D. Casarett, J. A. Berlin, and D. A. Asch, “Empirical Assessment of Whether Moderate Payments are Undue or Unjust Inducements for Participation in Clinical Trials,” Archives of Internal Medicine 164, no. 7 (2004): 801-803.
Largent, E. A., Grady, C., Miller, F. G., and Wertheimer, A., “Money, Coercion, and Undue Inducement: A Survey of Attitudes about Payments to Research Participants,” IRB 34, no. 1 (2012): 1-14; A. J. London, D. A. Borasky, Jr., A. Bhan, and the Ethics Working Group of the HIV Prevention Trials Network, “Improving Ethical Review of Research Involving Incentives for Health Promotion,” PLoS Medicine 9, no. 3 (2012): e1001193-e1001198.
Largent, E., Grady, C., Miller, F. G., and Wertheimer, A., “Misconceptions about Coercion and Undue Influence: Reflections on the Views of IRB Members,” Bioethics 27, no. 9 (2013): 500-507.
Rattani and Hyder, supra note 1.
Largent et al., supra note 8.
See Meyer, M. N., Heck, P. R., Holtzman, G. S., Anderson, S. M., Cai, W., Watts, D. J., and Chabris, C. F., “Objecting to Experiments that Compare Two Unobjectionable Policies or Treatments,” Proceedings of the National Academy of Sciences 116, no. 22 (2019): 10723-10728; R. Platt, N. E. Kass, and D. McGraw, “Ethics, Regulation, and Comparative Effectiveness Research: Time for a Change,” JAMA 311, no. 15 (2014): 1497-1498.
Miller, F. G. and Joffe, S., “Equipoise and the Dilemma of Randomized Clinical Trials,” New England Journal of Medicine 364, no. 5 (2011): 476-480; R. M. Veatch, “The Irrelevance of Equipoise,” The Journal of Medicine and Philosophy 32, no. 2 (2007): 167-183.
Rattani and Hyder, supra note 1.
Beauchamp, T. L. and Childress, J. F., Principles of Biomedical Ethics (Oxford University Press, 2013).
In fact, the Belmont Report does not separate them. See United States National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research, vol. 2 (1978).
E.g., Emanuel, E. J., Wendler, D., and Grady, C., “What Makes Clinical Research Ethical?” JAMA 283, no. 20 (2000): 2701-2711; E. J. Emanuel, D. Wendler, J. Killen, and C. Grady, “What Makes Clinical Research in Developing Countries Ethical? The Benchmarks of Ethical Research,” The Journal of Infectious Diseases 189, no. 5 (2004): 930-937.
E.g., Kass, N. E., “An Ethics Framework for Public Health,” American Journal of Public Health 91, no. 11 (2001): 1776-1782.