
HEALTH TECHNOLOGY ASSESSMENT, DELIBERATIVE PROCESS, AND ETHICALLY CONTESTED ISSUES

Published online by Cambridge University Press:  29 July 2016

Norman Daniels
Affiliation:
Harvard School of Public Health, Department of Global Health and Population [email protected]
Gert Jan van der Wilt
Affiliation:
Donders Institute for Brain, Cognition, and Behavior, Radboud University Medical Centre

Abstract

Healthcare technology assessment (HTA) aims to support decisions as to which technologies should be used in which situations to optimize value. Because such decisions will create winners and losers, they are bound to be controversial. HTA, then, faces a dilemma: should it stay away from such controversies, remaining a source of incomplete advice and risking an important kind of marginalization, or should it enter the controversy? The question is a challenging one, because we lack agreement on principles that are fine grained enough to tell us what choices we should make. In this study, we will argue that HTA should take a stand on ethical issues raised by the technology that is being investigated. To do so, we propose adding a form of procedural justice to HTA to arrive at decisions that the public can regard as legitimate and fair. A fair process involves deliberation about the reasons, evidence, and rationales that are considered relevant to meeting population-health needs fairly. One important way to make sure that there is real deliberation about relevant reasons is to include a range of stakeholders in the deliberative process. To illustrate how such deliberation might work, we use the case of cochlear implants for deaf children.

Type: Policies
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © Cambridge University Press 2016

What Makes Healthcare Technology Assessment Useful to Policy Makers?

Healthcare technology assessment (HTA) historically has focused on answering two key questions of importance to policy makers and clinicians who are considering approving the introduction of a new healthcare intervention: Is a specific technology effective? Is it safe? (1). The absence of serious ethical analysis in HTA reports has persisted despite repeated calls for such analysis throughout the history of HTA and despite the inclusion of "ethics" among the recognized domains of HTA analysis (2). In the past few decades, however, it has become apparent that efficacy and safety may be necessary conditions for a technology to meet if it is to be included as a service in a health system, but for most policy makers they are not sufficient conditions. Policy makers may also want to know whether they are getting "value for money," usually measured by "health improvement per dollar spent" (3). In addition, they may want to know whether they can sustainably afford to add a specific technology to the array of interventions already included in the system for which they are responsible (4). Answering these questions requires some form of economic analysis.

But even this information may not be sufficient for making decisions about whether to disseminate a new intervention. Economic analysis usually is insensitive to important distributive issues: who gets the intervention, and what does that do to equity in the system? Other ethical issues, not distributional in nature, concern whether some interventions should be used at all. Many technologies involved in reproduction raise such questions, but so do some technologies that aim at life extension.

In contrast to safety, efficacy, and cost-effectiveness, however, we lack quantitative methods for assessing the ethical issues involved in these questions (and such methods are unlikely to be appropriate in any case). To be sure, we have quantitative methods that can tell us what people's attitudes toward these issues are. However, those methods cannot tell us what their attitudes ought to be. Moreover, reasonable people may disagree about some of these ethical issues and about what they imply for specific technologies. Should we broaden HTA to address some of them? An argument can be made that HTA should focus only on answering the narrow range of traditional questions for which quantitative methods exist, even if policy makers must then address other issues that HTA does not help them answer. Such caution about scope might help avoid controversy. But if HTA is to be as useful as it can be, we must consider how these additional issues can be included in HTA.

In this study, we argue that economic analysis as part of HTA provides a useful input to policy making, but it falls short of answering important questions that HTA can help policy makers address. Given the lack of quantitative methods, and the fact of reasonable disagreement, how can HTA be useful in addressing these issues? We propose that, if its input to policy makers is to be as useful as possible, HTA should involve a deliberative process in which these ethical issues are addressed. In the next two sections, we briefly discuss the strengths and limitations of the economic tools available to us and the nature of the ethical issues that HTA faces. We then propose subsuming HTA within a broader deliberative process that can address these ethical issues and consider some of its strengths and weaknesses. The goal is a form of HTA that not only can answer traditional questions about safety, efficacy, economic value, and affordability, but also can produce enhanced legitimacy and greater fairness in claims made about the ethical issues facing the introduction of new healthcare technologies.

Limitations of Key Economic Tools

Comparative effectiveness research (CER) and cost-effectiveness analysis (CEA) are important elements of HTA because they help to answer some questions that policy makers must address. For example, a typical use of CER compares the effectiveness of two interventions (drugs, procedures, or even two methods of delivery), and policy makers may want to know how a new technology's effectiveness compares with that of older ones.

Of course, they may also want to know whether the new technology provides additional effectiveness at a reasonable cost, which points to a shortcoming of much CER in the United States, where considerations of cost are generally avoided. Furthermore, CER cannot help us compare the outcomes of interventions across different disease conditions, because it uses no measure of health that permits such a comparison of effectiveness.

In Germany, however, CER is combined with economic analysis that takes cost into consideration and allows the calculation of "efficiency frontiers" for different drug classes (5). This use of CER allows German decision makers to negotiate with manufacturers about the price of treatments, declining to pay prices at which an added improvement would be inefficient. German policy makers can then cover every intervention that has greater effectiveness, provided it is sold at a price that makes it reasonably efficient. Still, because the German use of CER cannot make comparisons across diseases, it allows great differences in efficiency across conditions.
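
To make the frontier idea concrete, here is a minimal sketch in Python with invented drugs, costs, and effect sizes; it uses a simple dominance check rather than the full convex-hull construction used in the German approach, so it illustrates the concept rather than the official method.

```python
# Hypothetical drugs within one class: annual cost per patient and
# effectiveness on a class-specific scale (all figures invented).
drugs = {
    "A": {"cost": 200.0, "effect": 0.50},
    "B": {"cost": 450.0, "effect": 0.55},
    "C": {"cost": 400.0, "effect": 0.70},
    "D": {"cost": 900.0, "effect": 0.65},
}

def efficiency_frontier(options):
    """Return the options that are not dominated: no alternative is
    both at most as costly and at least as effective."""
    frontier = []
    for name, o in options.items():
        dominated = any(
            other_name != name
            and other["cost"] <= o["cost"]
            and other["effect"] >= o["effect"]
            for other_name, other in options.items()
        )
        if not dominated:
            frontier.append(name)
    return sorted(frontier)

print(efficiency_frontier(drugs))  # ['A', 'C']: B and D cost more than C but deliver less effect
```

On this picture, the price a manufacturer asks for a more effective drug can be judged by whether it keeps that drug near the frontier traced by the non-dominated options in its class.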

CEA aims for greater scope than CER, especially when it focuses on incremental costs and health gains. It then deploys a common unit for measuring health outcomes, either the disability-adjusted life-year (DALY) or the quality-adjusted life-year (QALY). This unit purports to combine duration of life with quality of life, permitting us to compare health states across a broad range of disease conditions. We can then calculate the cost per QALY (or DALY) and arrive at a general efficiency measure for a broad range of interventions for different conditions.
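
As a minimal worked illustration (all figures invented), the incremental cost-effectiveness ratio (ICER) divides the extra cost of a new intervention by the extra QALYs it delivers relative to current care:

```python
def icer(cost_new, qalys_new, cost_old, qalys_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical figures: the new intervention costs $60,000 and yields 6.2 QALYs
# per patient on average; standard care costs $20,000 and yields 5.4 QALYs.
print(f"${icer(60_000, 6.2, 20_000, 5.4):,.0f} per QALY gained")  # $50,000 per QALY gained
```

A decision maker applying a fixed cost-per-QALY threshold would fund the intervention if this ratio fell below the threshold; the discussion that follows questions whether such a maximizing rule captures everything that matters.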

Critics have noted problematic ethical assumptions both in the construction of health-adjusted life-year measures, such as whether to count certain indirect costs and benefits or the failure to give enough weight to serious health conditions, and in the use of CEA, such as the failure to take distributive issues into account (6). To see some of these distributive problems, consider Table 1.

Table 1. Clashes between Decisions Based on Cost-Effectiveness Analysis (CEA) and Fairness

Problem | What CEA favors | What many people judge fair
Priorities | A QALY has the same value whoever receives it | Some (but not complete) priority to those who are worse off
Aggregation | Enough small benefits to many can outweigh significant benefits to a few | Trivial benefits should not be aggregated to outweigh significant benefits to a few
Best outcomes vs. fair chances | Put resources where they produce the best outcome | Give people a fair chance at a significant benefit

CEA systematically departs from judgments that many people make about what is fair. The priorities problem asks how much priority we should give to people who are worse off. By constructing a unit of health effectiveness, such as the QALY, CEA assumes that this unit has the same value for whoever gets it and for wherever it goes in a life ("a QALY is a QALY" is the slogan). Many people, however, intuitively think that a unit of health is worth more if someone who is relatively worse off (sicker) gets it rather than someone who is better off (less sick) (7). At the same time, people generally do not think we should give complete priority to those who are worse off. We may be able to do very little for them, so giving them complete priority would mean forgoing doing much more good for others. Few would defend creating a bottomless pit out of those unfortunate enough to be the worst off.
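
A minimal numeric sketch of this clash, with invented figures and a stipulated severity weight (equity weighting is one of several proposals in the literature, not a settled method):

```python
# Programme X serves a severely ill group, programme Y a mildly ill group.
# Budgets, QALY gains, and the severity weight are all invented for illustration.
programmes = {
    "X (worse off)":  {"cost": 1_000_000, "qalys": 80,  "severity_weight": 1.5},
    "Y (better off)": {"cost": 1_000_000, "qalys": 100, "severity_weight": 1.0},
}

for name, p in programmes.items():
    plain = p["cost"] / p["qalys"]                              # "a QALY is a QALY"
    weighted = p["cost"] / (p["qalys"] * p["severity_weight"])  # extra weight for the worse off
    print(f"{name}: ${plain:,.0f} per QALY, ${weighted:,.0f} per weighted QALY")

# Unweighted CEA favours Y ($10,000 vs $12,500 per QALY); with the stipulated
# severity weight, X comes out ahead ($8,333 vs $10,000 per weighted QALY).
```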

Similarly, CEA assumes that we should aggregate even very small benefits, so that if enough people each receive a small benefit, the sum outweighs giving significant benefits to a few. Most people, however, think some benefits are trivial goods that should not be aggregated so as to outweigh significant benefits to a few (8). Curing a great many colds, for example, does not outweigh saving a life.

Finally, CEA favors putting resources where we get the best outcome, whereas many people favor giving people a fair (if not equal) chance at a significant benefit. Locating an HIV/AIDS treatment clinic in an urban area may save more lives than reaching out to a rural area, but in doing so, we may deny many people a fair chance at a significant benefit (9).

In all three of these cases, CEA favors a maximizing strategy, whereas people making judgments about fairness are generally willing to sacrifice some aggregate population health to treat people fairly. In each case, whether it is giving some priority to those who are worse off, viewing some benefits as not worth aggregating, or giving people fair chances at some benefit, fairness deviates from the health maximization that CEA favors. Yet, we lack agreement on principles that tell us how to trade off goals of maximization and fairness in these cases. People disagree about what trades they are willing to make, and this ethical disagreement is pervasive.

Determining priorities primarily by asking whether an intervention meets some cost/QALY standard amounts to adopting a health maximization approach. Such an approach will depart from widely held judgments about fairness, even where people differ in those judgments. Thus, the National Institute for Health and Care Excellence (NICE) in the United Kingdom has had to modify its initially more rigid practice of approving new interventions only if they met a cost/QALY standard, in the face of recommendations from its Citizens Council. This Council, intended to reflect representative social and ethical judgments of people in the United Kingdom, has proposed relaxing NICE's threshold in a variety of cases where judgments about fairness diverged from concerns about health maximization. The judgments of the Citizens Council in this regard are consistent with what the social science literature suggests are widely held views in a range of cultures and contexts (10;11).

Can HTA Avoid Ethical Controversy?

Decisions about including a new technology create winners and losers, and so they generate political controversy. This political controversy adds heat to the ethical disagreement about the fairness of such decisions that we have already noted. In the face of this degree of controversy, it is tempting to argue that HTA should limit its focus to relatively uncontroversial matters, such as the safety and efficacy of a new technology. In this way, it might be hoped, HTA can rise above the fray of policy debates, providing an uncontroversial input into policy while remaining detached from it.

But this temptation values lack of controversy over usefulness and relevance. Policy makers must consider not only safety, efficacy, and cost-effectiveness, but also the impact of introducing a technology on equity in the health system, as well as various other nondistributive ethical issues. A complete assessment must take those considerations into account. Consequently, HTA will be thought inadequate to inform policy decisions properly if its assessment does not include reasonable suggestions about the matters that policy makers must consider.

So the choice really is between HTA remaining a source of incomplete advice in order to avoid controversy, thus risking an important kind of marginalization, and HTA entering the controversy with as defensible a set of tools as possible in order to provide a more complete assessment of a technology. In what follows, we sketch one route to entering the controversy that health technologies invariably face when they are assessed for inclusion in a health system.

How to Expand HTA to Include Ethically Contested Issues: A Proposal

The problem facing HTA is that decisions to add a technology to a health system must take a stand on ethical issues raised by the technology and its use. Such decisions therefore raise questions about legitimacy and fairness that go beyond information about the efficacy, cost-effectiveness, or safety of the technology, the traditional focus of HTA. At the same time, we lack agreement on principles fine grained enough to tell us what is fair in these cases.

Because we lack agreement on such principles, we propose adding a form of procedural justice to HTA to arrive at decisions that the public can regard as legitimate and fair. We might also think of this approach as embedding HTA in a broader deliberation. We shall have to explain what this appeal to procedural justice involves and what methodologies might be used within such a process. First, we briefly consider one important objection to this approach.

Some might insist that we already have a politically acceptable way to resolve disputes, namely the authority that we assign to democratically delegated agencies to manage our health systems. Because their decisions have at least the legitimacy of other democratically authorized decisions, we do not have to add a form of procedural justice to HTA to address these disputes. After all, we rely on a representative, democratic political process to make decisions in the face of many ethical disagreements about policy. On this view, decisions about introducing new healthcare technologies are no more problematic than similar decisions that devolve to appropriately delegated authorities.

Yet, standard democratic decision making often rests on the simple aggregation of preferences (12), and few people want to accept that process as determining what counts as ethically right. A racist, misogynist, anti-immigrant, or anti-gay policy that has popular support from a majority is not thereby the ethically right policy to pursue. In moral deliberation, by contrast, we aim to evaluate the weight that reasons should receive; we do not simply aggregate the preferences people have. This suggests that we need a process that is more deliberative than a simple aggregative democratic vote; such a process can supplement and improve (not replace) broader democratic processes that we hope may become more deliberative. The deliberation that the process encourages is intended to emphasize ethical reasoning about what we should count as fair. Perhaps if some existing public procedures that were intended to be deliberative worked better, they would suffice; in any case, what we propose draws on features of a process that are widely believed to be necessary to ensure proper deliberation.

One widely used method in ethical reasoning is the method of wide reflective equilibrium (WRE) (13–15). The key to understanding this method is that considered moral judgments, general moral principles, and relevant background theory can be justified through their mutual support. The method of WRE was used by Rawls in the development of his account of justice as fairness (13). It was extended by Daniels, who elaborated in more detail the various elements in Rawls' argument, such as background views of the person as having certain moral powers or about the role of justice in society, and the nature of their mutual support (14;15).

Various modifications of the original method of WRE have been developed (16). To use WRE in a more dialogical manner, we have suggested that it may be used as an integrated part of a deliberative process (17). In this approach, it is acknowledged from the outset that various stakeholders (e.g., patients, healthcare providers, policy makers) usually frame a problem differently, that these frames show a certain internal coherence, but that the frames may be mutually incompatible (18). By reconstructing these interpretive frames, optimal use is made of the knowledge and experience of the various stakeholders on the subject. Confronting stakeholders with interpretive frames different from their own can lead to reflection, the checking of assumptions, and partial revision (19). Thus, WRE is used as an argumentative tool to promote social learning among stakeholders on empirical and normative issues (20). In this way, it can play a role in a deliberative process, called accountability for reasonableness, within which we propose HTA be embedded (21).

WRE in the Context of HTA

How might WRE be used in the context of HTA? To address this question, we consider HTA as a process that is embedded in a specific institutional setting, within which "scientists, decision makers, and advocates communicate to define relevant questions for analysis, mobilize certain kinds of experts and expertise, and interpret findings in particular ways" (22). These various stakeholders may differ in what they consider particularly problematic (problem definition) and in what sort of solutions they find most likely to be useful (judgment of solutions). These positions will, in turn, be related to the background theory that they consider plausible and the values that are important to them. Together, such sets of problem definitions, judgments of solutions, background theories, and underlying values are called interpretive frames (18). The challenge, then, is to make such interpretive frames explicit and assess them for their internal consistency and evidential support. This can be done through semistructured interviews, document analysis, and critical review of the existing literature (23).

The results of such analysis may be an occasion for stakeholders to revise parts of their interpretive frame (social learning). The objective, then, is not so much to assess which of the existing interpretive frames is best supported by the available evidence and shows the greatest coherence ("Who is right?"), but rather to encourage the collaborative development of new, revised interpretive frames that show greater coherence and consistency with the evidence than any of the original frames. This approach to HTA, based on the work of Grin and van de Graaf (24), can best be characterized as transformative: it aims at creating new solutions and reconceptualizations of the problem and of the underlying empirical and normative theories. It is an attempt to integrate empirical and normative inquiry, acknowledging that evaluation is not only about the validity of facts, but also about their relevance.

Illustration: Assessing the Value of Cochlear Implants for Deaf Children

To illustrate how stakeholder consultation in conjunction with analysis of interpretive frames may give rise to social learning, we briefly report the results of an HTA of cochlear implants in deaf children (19). The use of cochlear implants in deaf children has given rise to fierce debates across the world. Although the technology was heralded by some as an effective and safe means to provide deaf people with a sense of hearing and all the associated benefits, representatives of deaf communities objected that the technology embodied a negative value judgment on deaf culture and on its most important feature, sign language. In the context of an HTA of the technology, the interpretive frames of various stakeholders (including parents of deaf children; representatives of various advocacy groups; ear, nose, and throat surgeons; manufacturers; social workers; teachers; psychologists; and audiologists) were reconstructed.

One of the key findings of this study was that proponents and opponents of the technology defined the problem differently. For proponents, the key problem was that deaf children cannot hear. Opponents, however, held that deaf children do not get the sort of sensory input that is appropriate for their cognitive, social, and emotional development. What they need, according to this view, is the consistent use by parents and others of sign language, irrespective of whether the child will receive a cochlear implant or not. The two positions were associated with different background theories: for proponents of the cochlear implant, early and consistent use of sign language was not an option, because they assumed that the acquisition of spoken language and the acquisition of sign language are mutually competitive.

In contrast, opponents of the cochlear implant assumed that acquisition of the two language modalities is actually mutually reinforcing. In addition to this difference in background theory, differences were found in normative preferences. The normative preference of proponents of the cochlear implant could be traced to the "open future" argument: do not, as a parent, take decisions for your child that will unduly restrict opportunities later in life. The normative position of opponents of the cochlear implant was more accurately characterized as a preference for cultural diversity. A search for evidence on this issue found reports from longitudinal studies supporting the "mutual reinforcement" theory, rather than the "mutual competition" theory, of how the two language modalities are acquired (19). This helped to forge a consensus that it is important for hearing parents of deaf children to start learning and using sign language with their child, while this by no means implies that the child will not receive a cochlear implant later in life. This adjusted solution was coherent with the problem definition and the background theory that seemed most credible, given the available evidence, and with both normative preferences.

We think this example illustrates that controversies like these can never be resolved by empirical inquiry alone; they also require normative inquiry. A means to integrate these two types of inquiry is, in our view, the reconstruction of interpretive frames of the various stakeholders, which may be considered a discursive application of the method of WRE.

Accountability for Reasonableness

To be relevant, HTA should then address both normative and empirical issues. It can do so only when it is subsumed in a fair deliberative process. Key elements of a fair deliberative process, which is compatible with the method of WRE and arguably encourages reliance on it, involve at least four conditions: (i) publicity, specifically, transparency about the grounds for decisions; (ii) relevance, that is, rationales that rely on reasons all can accept as relevant to meeting health needs fairly (by "all," we mean people who are affected by a decision and who seek mutually justifiable grounds for such decisions); (iii) revisability, including procedures for revising decisions in light of new evidence, new arguments, and other challenges; and (iv) enforcement, that is, ensuring that conditions (i)–(iii) are met. Together these elements ensure "accountability for reasonableness" (25;26).

A fair process requires publicity about the reasons and rationales that play a part in decisions (and this is how fair process encourages reliance on reflective equilibrium). There must be no secrets where justice is involved, for people should not be expected to accept decisions that affect their well-being unless they are aware of the grounds for those decisions. This broader transparency about rationales is a hallmark of fair process. Fair process also involves constraints on reasons. Fair-minded people, by which we mean those who seek mutually justifiable grounds for cooperation, must agree that the reasons, evidence, and rationales are relevant to meeting population health needs fairly, the shared goal of deliberation.

One important way to make sure that there is a real deliberation about relevant reasons is to include a range of stakeholders in the deliberative process. Such stakeholders should not be token “lay” people who may be intimidated by others; they should be supported so that they can clearly express their views about relevant reasons. Including an appropriate range of stakeholders does not make a process more democratic (for they are not elected representatives of the public), but it can improve the quality of the deliberation, provided the process is managed so that it is not simply a lobbying exercise by people who are not really seeking relevant reasons. Fair process also requires opportunities to challenge and revise decisions in light of the kinds of considerations all stakeholders may raise. There should be a mechanism for appeals of decisions by those affected by them.

Accountability for reasonableness makes it possible to educate all stakeholders in deliberating about fair decisions under resource constraints. It facilitates social learning about limits. And it connects decision making in healthcare institutions to broader, more fundamental democratic deliberative processes.

CONFLICTS OF INTEREST

This research received no specific grant from any funding agency, commercial or not-for-profit sectors. The authors report no conflicts of interest.

REFERENCES

1. Banta HD, Jonsson E. History of HTA: An introduction. Int J Technol Assess Health Care. 2009;25(S1):1-6.
2. Hoffman B. Why not integrate ethics in HTA: Identification and assessment of the reasons. GMS Health Technol Assess. 2014;10:Doc04. http://www.egms.de/static/de/journals/hta/2014-10/hta000120.shtml (accessed October 30, 2015).
3. Porter ME. What is value in health care? N Engl J Med. 2010;363:2477-2481.
4. Mauskopf JA, Sullivan SD, Annemans L, et al. Principles of good practice for budget impact analysis: Report of the ISPOR Task Force on Good Research Practices - Budget Impact Analysis. Value Health. 2007;10:336-347.
5. Caro JJ, Nord E, Siebert U, et al. The efficiency frontier approach to economic evaluation of health-care interventions. Health Econ. 2010;19:1117-1127.
6. Brock D. Ethical issues in the use of cost effectiveness analysis for the prioritization of health care resources. In: Khushf G, Engelhardt T, eds. Bioethics: A philosophical overview. Dordrecht: Kluwer Publishers; 2004:353-380.
7. Brock D. Priority to the worst off in health care resource prioritization. In: Battin M, Rhodes R, Silvers A, eds. Medicine and social justice. New York: Oxford University Press; 2002:362-372.
8. Kamm F. Morality, mortality: Death and whom to save from it. Vol. I. Oxford: Oxford University Press; 1993.
9. Daniels N. How to achieve fair distribution of ARTs in "3 by 5": Fair process and legitimacy in patient selection. Geneva: World Health Organization/UNAIDS; 2004.
10. Dolan P, Shaw R, Tsuchiya A, Williams A. QALY maximization and people's preferences: A methodological review of the literature. Health Econ. 2005;14:197-208.
11. Menzel P, Gold M, Nord E, Pinto-Prades JL, Richardson J, Ubel P. Toward a broader view of values in cost-effectiveness analysis in health care. Hastings Cent Rep. 1999;29:7-15.
12. Cohen J. Procedure and substance in deliberative democracy. In: Bohman J, Rehg W, eds. Deliberative democracy: Essays on reason and politics. Cambridge, MA: The MIT Press; 1997:407-437.
13. Rawls J. A theory of justice. Cambridge, MA: Harvard University Press; 1971.
14. Daniels N. Wide reflective equilibrium and theory acceptance in ethics. J Philos. 1979;76:256-282.
15. Daniels N. Justice and justification: Reflective equilibrium in theory and practice. New York: Cambridge University Press; 1996.
16. van der Burg W, van Willigenburg T, eds. Reflective equilibrium: Essays in honour of Robert Heeger (Library of Ethics and Applied Philosophy). Dordrecht: Kluwer; 1998.
17. Reuzel RPB, van der Wilt GJ, ten Have HAMJ, de Vries Robbé PF. Interactive technology assessment and wide reflective equilibrium. J Med Philos. 2001;26:245-261.
18. Schön D, Rein M. Frame reflection: Toward the resolution of intractable policy controversies. New York, NY: Basic Books; 1995.
19. Reuzel RPB. Health technology assessment and interactive evaluation: Different perspectives. Thesis. Nijmegen: Radboud University Nijmegen; 2001.
20. Fischer F. Democracy and expertise: Reorienting policy inquiry. Oxford: Oxford University Press; 2009.
21. Daniels N, Sabin J. Setting limits fairly. 1st and 2nd eds. New York: Oxford University Press; 2002, 2008.
22. Farrell A, VanDeveer SD, Jäger J. Environmental assessments: Four under-appreciated elements of design. Glob Environ Change. 2001;11:311-333.
23. Moret M, Reuzel RPB, van der Wilt GJ, Grin J. Validity and reliability of qualitative data analysis: Inter-observer agreement in reconstructing interpretative frames. Field Methods. 2007;19:24-39.
24. Grin J, van de Graaf H. Technology assessment as learning. Sci Technol Hum Values. 1996;20:72-99.
25. Daniels N, Sabin J. Limits to health care: Fair procedures, democratic deliberation, and the legitimacy problem for insurers. Philos Public Aff. 1997;26:303-350.
26. Daniels N, Sabin J. Setting limits fairly: Learning to share resources for health. 2nd ed. New York: Oxford University Press; 2008.