The UK National Health Service (NHS) was conceived as a comprehensive healthcare system and owns the great majority of hospitals, although day-to-day management of the health services is considered a task for the lower levels of the hierarchical system, while the upper levels are expected to provide a coordinating and policy-making function (Reference Stocking11).
IMPORTANT CONTROLS ON HEALTH TECHNOLOGY
There are several long-standing controls on health technology. The most important of these are (Reference Spiby10): (i) the regionalized healthcare system, which tends to channel technology to the appropriate level; (ii) the gatekeeper role of the general practitioners, requiring a referral to specialists, which tends to restrain use of more specialized technology; (iii) the regulation of efficacy and safety of pharmaceuticals and pharmaceutical prices; and (iv) the central control of efficacy and safety of medical devices.
More recently, the commissioning function, now devolved to primary care trusts (PCTs), is intended to encourage more efficient provision of all services and, by implication, more efficient use of health technology.
EARLY DEVELOPMENTS CONCERNING HEALTH TECHNOLOGY
Beginning as early as the 1970s, there was increasing concern about the effectiveness of the British health services, especially in relation to their considerable cost. Archie Cochrane's book Effectiveness and Efficiency (Reference Cochrane5) gave a voice to this concern. Cochrane examined cases in several different fields of health care, finding a pervasive lack of evidence of effectiveness. He observed that the NHS “. . . could be seen as giving a blank cheque both to the demands of patients and the wishes of doctors” (page 9). He did acknowledge that, relatively speaking, a great deal of research on the effectiveness of health care was already going on in the United Kingdom, but noted that such applied research nonetheless had a low priority in academic departments of medicine. He proposed that clinical research, especially randomized clinical trials (RCTs), should be expanded and that the then Department of Health and Social Security (DHSS) should organize the new research program, because it had the best overview of what reasonable priorities for such research would be. As will be seen, with time, Cochrane's perspectives were embodied in the development of the UK Health Technology Assessment (HTA) Programme.
RISING CONCERNS ABOUT HEALTH TECHNOLOGY
During the 1970s and 1980s, more and more concern was expressed about the cost of health technology, as well as its effectiveness. For example, the Health Committee of the House of Commons estimated that 5 percent of surgical procedures in Britain were unnecessary (Reference Woolf and Henshall12). One study, using the Rand Corporation methodology for defining appropriateness, found that 16–20 percent of coronary angiography and coronary artery bypass surgery were performed for inappropriate indications (Reference Woolf and Henshall12). Some argued that 20 percent of healthcare costs in Britain could be freed up by eliminating wasteful practices (Reference Woolf and Henshall12).
A growing number of organizations became involved in HTA during this period. It was recognized that the Medical Research Council (MRC) funded a relatively large number of high-quality clinical trials, as Cochrane had pointed out; however, these trials were done as research efforts and not for the strategic purposes of policy making or improving the quality of health care. Other sources of HTA were industry, charitable organizations, universities and medical centers, and the Department of Health (DH). However, these efforts were uncoordinated (6).
In the early 1980s, the DH commissioned a study of the effectiveness and cost-effectiveness of heart transplantation that was widely regarded as one of the best examples of HTA of its era. This study (Reference Buxton, Acheson and Caine1) was used by the DH in deciding whether the existing heart transplantation program should be expanded (Reference Buxton and Drummond3). One of the practical issues raised by the evaluation was the difficulty of evaluating technologies that are continually evolving. In the United Kingdom, the observation that “it is always too early to assess a new technology, until suddenly it's too late” became known as Buxton's Law, after the principal investigator of the study, Professor Martin Buxton.
In 1988, the House of Lords Select Committee on Science and Technology reported on priorities in medical research (7). The report paid tribute to the research programs of the MRC and the Department of Health, but pointed to a major gap in funding for applied research that could provide the information needed to support the development of an efficient and effective NHS. The government accepted the main thrust of the House of Lords report and in 1991 appointed Professor Michael Peckham as the first Director of Research and Development for the Department of Health and the NHS. From the outset, Professor Peckham stated that HTA would be the primary activity of the Research and Development Programme (R&D).
DEVELOPMENT OF THE R&D PROGRAMME
The launch of the R&D Programme marked a shift in emphasis away from the NHS as a passive recipient of new technology to a knowledge-based health service with a strong research infrastructure and competence in critically reviewing its own needs (Reference Woolf and Henshall12). In addition, the R&D Programme increased funding for research oriented to efficacy and safety from approximately 1 percent of the NHS budget to 1.5 percent in the late 1990s. Programs on evidence-based clinical practice, guidelines, audit, performance measures and implementation strategies were developed and aggressively promoted.
The greatest expenditure of the R&D Programme was funding for original research, mainly clinical trials, based on national health priorities. However, the R&D Programme showed an increasing commitment over time to synthesizing information on health technology into policy-oriented reports. In addition, the R&D Programme undertook to coordinate HTA-type research whatever its source, to ensure appropriate use of the results and to avoid wasteful duplication.
The R&D Programme had eighty staff by approximately 2000, with a budget of approximately £75 million (Reference Woolf and Henshall12). The Programme developed a complex internal structure for ensuring that research on high-priority national needs and on regional priorities was kept in balance. Approximately two-thirds of the R&D Programme budget was devoted to regional priorities and was administered through eight regionally held budgets. The bulk of the HTA work, however, was carried out under the central R&D budget.
The R&D Programme did not undertake HTA studies itself, but relied on investigators, primarily in universities. The role of the NHS Executive in running the HTA program was to purchase high-quality HTA in areas of greatest need for the NHS. Advice on topics and on the general direction of the Programme came from an independent committee, the Standing Group on Health Technology, which was established in 1993. Because the Programme budget was not sufficient to fund all studies that were deemed to be needed, a rigorous priority-setting exercise was developed to decide which study areas were of most importance. The process of priority setting, commissioning studies, assessing the results of studies, and disseminating the results was and is carried out on contract by the National Coordinating Centre for Health Technology Assessment (NCCHTA), located mainly at the Wessex Institute for Health Research. The NCCHTA also manages the contracts for the technology assessments undertaken by independent evaluation groups for the National Institute for Health and Clinical Excellence (NICE) (see below).
The R&D Programme established the UK Cochrane Centre in 1992 to facilitate and coordinate systematic reviews of controlled clinical trials (Reference Woolf and Henshall12). From this Centre, the world-wide Cochrane Collaboration was developed. (See the study on the Cochrane Collaboration in this issue.) In addition, the Programme established the NHS Centre for Reviews and Dissemination at the University of York in 1993. The two centers were intended to serve complementary roles. The Cochrane Centre was to focus on investigator-led, continuously updated reviews of all trials in particular areas (Reference Chalmers, Enkin and Keirse4). The York Centre was to respond in a relatively short period of time to pressing problems faced by decision makers by drawing on all relevant research, including primary research and the work of Cochrane groups (Reference Woolf and Henshall12).
THE NATIONAL INSTITUTE FOR (HEALTH AND) CLINICAL EXCELLENCE (NICE)
Despite considerable attention to the growing body of evidence produced by the R&D Programme, the leaders of the HTA effort were dissatisfied with the reception of the results, especially in terms of changes in clinical practice. In addition, the absence of, or lack of attention to, evidence on the effectiveness and cost-effectiveness of health technologies was leading to so-called “postcode rationing,” whereby expensive new technologies were available in some locations and not others.
Because of these problems, NICE was established in 1999. NICE issues guidance on health technologies and clinical practice, with the underlying policy objective of maximizing health gain within the NHS budget (8). (At a later stage, NICE also assumed the responsibilities of the former Health Development Agency for providing guidance on the use of public health interventions. To reflect these new responsibilities, the Institute was renamed the National Institute for Health and Clinical Excellence, although it kept the acronym NICE.)
Within NICE, a center of excellence is responsible for producing guidance in each of its guidance areas (8). The Centre for Public Health Excellence develops public health guidance on the promotion of good health and the prevention of ill health. The Centre for Health Technology Evaluation develops technology appraisals and interventional procedures guidance. Technology appraisals are recommendations on the use of new and existing medicines and treatments within the NHS. Interventional procedures guidance evaluates the safety and efficacy of such procedures where they are used for diagnosis or treatment. The Centre for Clinical Practice develops clinical guidelines. In producing its guidance, NICE also relies on the participation of external groups. For example, NICE's clinical guidelines are produced by National Collaborating Centres based in the Royal Colleges for the relevant clinical fields. In addition, NICE's technology appraisal program relies on independent assessments produced by evaluation groups based in several universities.
Essentially, NICE guidance consists of recommendations, based on the best available evidence, on the appropriate treatment and care of people with specific diseases and conditions (8). Stakeholders, including patient groups, participate in all the stages of the guidance development process, including the scoping of the research and commenting on draft reports. In addition, in the case of technology appraisals, key stakeholders can appeal the decision, if they believe that NICE has not adequately followed its procedures, or has been perverse in its judgments.
The NICE process has also been criticized for not sufficiently influencing everyday practice within the NHS, as the guidance issued is not fully implemented (Reference Sheldon, Cullum and Dawson9). The main problem seems to be that the centralized decision-making process is not mirrored in the local NHS, which is organized in geographical units, each with its own responsibility for how it spends its budget. What seems to be an acceptable cost-effectiveness ratio at the national level can easily be overridden by the budget impact that certain interventions have when implemented fully in a local setting, such as a primary care trust or a hospital. In response to this lack of implementation, guidance from technology appraisals was made binding on the NHS, with funding required within a 3-month period. Despite this, there are still reports of patients not having full access to therapies that NICE has recommended.
One major problem, mentioned by Buxton (Reference Buxton2), is that disinvestment in cost-ineffective technologies is necessary to free up budget for the implementation of new interventions that are cost-effective. NICE has to invest in identifying such cost-ineffective measures, and the NHS needs the political strength to communicate disinvestments to the public.
In conclusion, although NICE has been generally accepted as an effective source of information on effectiveness (and often cost-effectiveness) of health technology, the actual provision of health and preventive services is a result of many other strong forces, even within the centralized healthcare system of the United Kingdom.
Finally, it should also be noted that the remit of NICE is limited to England and Wales, and that other parts of the United Kingdom have other arrangements for conducting and using HTAs. For example, in Scotland, new medicines are evaluated for clinical effectiveness and cost-effectiveness by the Scottish Medicines Consortium (SMC), and clinical guidelines are produced by the Scottish Intercollegiate Guidelines Network (SIGN). In addition, in Wales, new medicines are evaluated by the All Wales Medicines Strategy Group, particularly in situations where there is no NICE guidance.
DISCUSSION
It is clear from the brief description of the wide range of policies and activities reviewed in this study that the UK government has made a major commitment to HTA. The major orientation is that the health service should be knowledge-based.
Since the inception of the R&D Programme and its emphasis on HTA, HTA activities and results have gained wide visibility in all sectors of British society. In particular, the work of NICE, and the clear link it makes between the results of HTAs and the issuing of guidance, is to be welcomed. NICE has set the standard in methodological rigor, transparency of activities, and stakeholder involvement. Nevertheless, it still faces criticism over the time it takes to undertake assessments and the patchy implementation of its findings.
The R&D Programme seems to have developed a successful model of HTA funding, commissioning, and implementation. One reason for this success is that the United Kingdom has a long tradition of high quality health-related research. The Programme has drawn on the expertise of the research community in developing a decentralized model that involves a large number of those outside the government. This orientation probably accounts for the acceptance of HTA among professionals. Nonetheless, the leaders of the R&D Programme continue to consider ways to improve the Programme and its results.
CONTACT INFORMATION
Michael Drummond, DPhil ([email protected]), Professor of Health Economics, Centre for Health Economics, University of York, Heslington, York YO10 5DD, United Kingdom
David Banta, MD MPH ([email protected]), 9 route de Bragelogne, 10210 Villiers-le-Bois, France