
Health technology assessment to optimize health technology utilization: Using implementation initiatives and monitoring processes

Published online by Cambridge University Press:  29 June 2010

Katrine B. Frønsdal, Norwegian Knowledge Centre for the Health Services
Karen Facey, University of Glasgow
Marianne Klemp, Norwegian Knowledge Centre for the Health Services
Inger Natvig Norderhaug, Norwegian Knowledge Centre for the Health Services
Berit Mørland, Norwegian Knowledge Centre for the Health Services
John-Arne Røttingen, Norwegian Knowledge Centre for the Health Services

Abstract

Background: The way in which a health technology is used in any particular health system depends on the decisions and actions of a variety of stakeholders, the local culture, and context. In 2009, the HTAi Policy Forum considered how health technology assessment (HTA) could be improved to optimize the use of technologies (in terms of uptake, change in use, or disinvestment) in such complex systems.

Methods: In scoping, it was agreed to focus on initiatives to implement evidence-based guidance and monitoring activities. A review identified systematic reviews of implementation initiatives and monitoring activities. A two-day deliberative workshop was held to discuss key papers, members’ experiences, and collectively address key questions. This consensus paper was developed by email and finalized at a postworkshop meeting.

Results: Evidence suggests that the impact and use of HTA could be increased by ensuring timely delivery of relevant reports to clearly determined policy receptor (decision-making) points. To achieve this, the breadth of assessment and implementation initiatives, such as incentives and targeted, intelligent dissemination of HTA results, need to be considered. HTA stakeholders undertake a variety of monitoring activities that could inform optimal use of a technology. However, the quality of these data varies, and they are often not submitted to an HTA.

Conclusions: Monitoring data should be sufficiently robust so that they can be used in HTA to inform optimal use of technology. Evidence-based implementation initiatives should be developed for HTA, to better inform decision makers at all levels in a health system about the optimal use of technology.

Copyright © Cambridge University Press 2010

Suboptimal use of health technologies (e.g., delayed introduction, underuse, overuse, or misuse) affects not only patient care but also the efficiency of the healthcare system. Well-informed decisions to support optimal technology use may be achieved through health technology assessment (HTA), but this is not always the case. The reasons for this are multifaceted and include issues related to the scope, purpose, and impact of HTA, the complexity of decision-making processes in the healthcare system, and the wider policy context.

In this article, we present discussions held at the 2009 HTAi Policy Forum Meeting, which considered implementation and monitoring strategies to optimize technology uptake and use, and the implications of such strategies for HTA used in decision-making processes.

METHODS

The Policy Forum comprises twelve Not-For-Profit members (Health Service Payers/Providers or HTA Agencies) and ten For-Profit members (pharmaceutical and device industries). Each member sends two senior professionals with experience of the use of HTA at the interface with decision making to participate in an annual face-to-face meeting. HTAi Board members and Observers participate fully in the meeting, and for this meeting the Director of an umbrella patient organization was also invited. The complete list of participants is included in Supplementary Table 1, which can be viewed online at www.journals.cambridge.org/thc2010023. All participants are involved in extensive scoping, in all debates and workshops at the meeting, and in the development of outputs.

The opinions expressed in this study are believed to be a fair reflection of the exchange of views. However, neither all participants nor the organizations with which they are affiliated necessarily agree with the whole content, for which the authors take full responsibility.

A review identified systematic reviews of implementation initiatives and monitoring activities. A two-day deliberative workshop was held to discuss key papers, members’ experiences, and collectively address key questions. This consensus paper was developed by email and finalized at a postworkshop meeting.

HTA AND TECHNOLOGY USE IN COMPLEX HEALTH SYSTEMS

HTA-related activities can be used at any stage of the life cycle of a health technology, from technology development to disinvestment, as illustrated in Figure 1. Policy makers have focused on the use of HTA at reimbursement, but reimbursement listing does not guarantee optimal use of the technology. Hence, there is a need to assess the impact of HTA on technology uptake and adoption and to consider how HTA can better inform “optimal use” of technologies.

Figure 1. Potential uses of HTA during the technology life cycle.

As outlined by the Veterans Health Administration in its pursuit of quality, optimal use may be described as appropriate or improved use of technologies to achieve better patient outcomes at affordable and sustainable costs, and this description guides the work of its Technology Assessment Program (19).

In many health systems, the introduction of a new technology may involve a fragmented, hierarchical, and misaligned structure of decision making. For example, a national decision, such as a recommendation for use of a medicine based on cost-effectiveness, may be passed to a local system for consideration in a formulary that allows a limited number of choices, and finally be used by an individual clinician in a particular clinical setting. This was recognized in an OECD survey (15), which concluded that the processes for decisions regarding funding, investment, and planning of health technologies are unclear. Hence, to achieve optimal use of technology, it is necessary to understand the decision-making processes in a system and the other factors that affect technology uptake.

Leavitt's modified organizational model (12;11) describes a health system as a complex, dynamic system in which four variables interact: structure, tasks (function), people (staff, attitudes, and knowledge), and technology. In addition, the surroundings of organizational culture, the financial and legal context of the system, and other intraorganizational factors, such as the views of stakeholders, are important. A change in the use of technology will influence the other variables and the surroundings and, thus, the whole system, as shown in Figure 2.

Figure 2. Leavitt's modified organizational model. Adapted from Kristensen, 2007 (11).

It may be questioned whether HTAs take sufficient account of these “organizational issues” and if this could improve the impact of an HTA to optimize the use of a health technology.

IMPLEMENTATION INITIATIVES TO OPTIMIZE HTA AND TECHNOLOGY UTILIZATION

Impact of HTA on Decision Making

Several projects have analyzed how HTA feeds into decision-making processes, what factors may stimulate or hinder the use of HTA in decision making, and what enhances the impact of HTA in decision making. The European network for Health Technology Assessment (EUnetHTA) Project found that acceptance of HTA recommendations varies, as does the impact of the HTA on policy decisions (6). The OECD health project (15) showed that the factors most important for uptake of HTA advice about investment in technology were funding for the technology, stakeholder involvement, and evidence from a trusted source. It did not comment on HTAs that led to change in use of the technology or disinvestment, but others have shown that HTA has limited impact on disinvestment (5).

Improving Implementation of Evidence-Based Guidance

In a more general context, substantial work has been done to identify barriers to the implementation of evidence-based guidance about effective health services. A systematic review from the Cochrane Effective Practice and Organization of Care (EPOC) Group identified four types of barrier: professional, financial, regulatory/political, and organizational (3). Specific issues we have identified within these categories are listed in Table 1.

Table 1. Barriers to Implementation of Evidence-Based Advice in Health Policy and Decision Making

Note. Developed on the basis of barrier categories from Cheater, 2005.

Improving Implementation of HTA

In terms of HTA, the OECD project (15) suggested that increasing the use of HTA requires timely availability of information in line with decision-making priorities, recognition of the varying dynamics of different technology markets, and production of evidence that can slot into designated decision-making nodes of the healthcare system (the receptor function). In contrast to the general work on evidence-based guidance, this recognizes that HTA is intended to inform policy and suggests that HTA should be tailored to the needs of decision makers and linked more comprehensively with innovation and other aspects of policy making.

Hence, there are multiple methods for increasing the impact of HTA and facilitating optimal use of technologies, from consideration of the breadth of the assessment process in an HTA to obtain policy makers' and users' perspectives, through the placement of HTA in the policy process, to communication and educational initiatives to improve uptake of HTA findings by all “end users” in the healthcare system.

Breadth of Assessment

An HTA report should be relevant and timely, based on up-to-date evidence, produced by means of transparent procedures, and easy to interpret. To achieve this, all stakeholders should be actively involved in the HTA process (7). Decision makers may be considered the key stakeholders for HTA, and it is recognized that they prefer to incorporate or balance other factors and information beyond “hard” evidence when making decisions, that is, also taking into account societal and political values (1). However, evaluating a new technology in its specific context, including social, ethical, and organizational aspects, is what makes HTA different from other evidence-based healthcare disciplines. It is the assessment of these elements that can be most enhanced by the involvement of other stakeholders (such as patients, clinicians, and industry).

As discussed at the previous Policy Forum meeting on harmonization of evidence requirements for HTA, methods for assessment of such broader aspects are still in development (9). New developments are emerging to support a structured approach to addressing ethical and social aspects, with HTAi hosting Interest Sub-Groups in both of these areas. There is guidance on how administrative issues, organizational conditions, and their consequences for introduction or disinvestment of a technology can be assessed in HTA (11), focusing on difficulties in implementation that might arise according to the four variables in Leavitt's modified organizational model. However, there is a lack of agreement about whether such explicit presentation of these factors is needed, as they are essential to any assessment of clinical and cost-effectiveness, in which evidence and value judgments combine to address these organizational issues.

HTA and Health Policy

HTA is more likely to have an impact when it is used in a policy process in which regulation demands compliance with HTA recommendations (a clear receptor function), a transparent funding scheme for new technologies is in place, and a mechanism to ensure disinvestment is implemented. Investment or disinvestment must also be considered in the context of the entire system, which may require control or prioritization mechanisms to manage fixed budgets.

There are a variety of compliance incentives linked to funding, for instance tariffs or “pay for performance” (14), but ensuring alignment of incentives and objectives throughout the decision-making and organizational hierarchy is crucial for these to succeed. In Australia, laparoscopic tubal ligation appears to be favored by gynecologists despite new, equally effective, less invasive alternative procedures being available. The newer procedures are less well remunerated, which may, in part, explain their slow uptake by the profession (Guy Maddern, personal communication, 2009). In the United Kingdom, the family practitioners' contract increases base income according to performance against a set of evidence-based indicators (4). This has recently become the responsibility of the national organization that uses HTA to make reimbursement decisions and that develops clinical guidelines and quality standards, ensuring greater alignment of decisions in the hierarchy.

Communication of HTA

The OECD project (15) identified that lack of transparency in healthcare decision making makes it difficult to disseminate the findings of an HTA effectively. Thus, it is essential to identify the key decision makers at the outset of an HTA, consider what evidence they need to inform their decisions, in what form that evidence should be provided, and what user support should be provided once HTAs are published. As decision makers at all levels need to understand the implications of an HTA, tailored education and dissemination activities at all levels are important. For example, the National Institute for Health and Clinical Excellence (NICE) has introduced a suite of implementation tools designed to meet the needs of commissioners in regional healthcare systems, including cost impact reports and commissioning guides, care pathways, indicators, and metrics (13). The Canadian Agency for Drugs and Technologies in Health (CADTH) has developed its COMPUS (Canadian Optimal Medication Prescribing and Utilization Service) program to provide reports, recommendations, and key messages on the optimal utilization of drugs that are customized to meet the needs of a broad range of audiences (e.g., policy makers, regional health authorities, health providers, patients). The information is packaged to facilitate decision making and ease of use for the intended audience. Prescribing aids, newsletters, accredited presentations, and academic detailing kits are all strategies with demonstrated effectiveness that have been developed (2) to communicate HTAs and optimize use.

For clinicians, systematic reviews from the EPOC group have shown that educational meetings, audit, and feedback have achieved small to moderate, but important, improvements in professional practice and healthcare outcomes (10), but that multiple interventions are likely to be needed to change professional practice (17). CADTH has used this work to create the RxforChange database, which summarizes systematic reviews of the effectiveness of strategies to improve drug prescribing and drug use, targeting professionals, the organization of health care, and consumers (2).

At the policy level, the impact of HTA can also be increased if it is linked with other quality systems or national guidance, such as standards and performance indicators/outcomes, as demonstrated by the reorganization of several national HTA units into wider quality-related organizations (Norwegian Knowledge Centre for the Health Services, NHS Quality Improvement Scotland, NICE, among others).

MONITORING AS A STRATEGIC APPROACH TO OPTIMIZE TECHNOLOGY UTILIZATION

Forms of Technology Monitoring

Monitoring may be generically described as a set of tools for generating and collecting data on health technologies from the time they are introduced into the healthcare system. Its purpose may vary from observation of the technology in the real-world setting (to see when and how it is used) to more experimental studies (evaluating safety and efficacy outcomes in a specific population or setting). All stakeholders may collect monitoring information, either from routine data collection (through sources such as administrative databases, pharmacovigilance activities, population data sets and electronic medical records), or from specific data collection activities (such as technology registries, disease registries, sales and utilization data, surveys, long-term follow-up studies, and pragmatic controlled trials).

The information arising from such monitoring activities could be used to refine technology utilization, in terms of methods of use, the setting, patient selection and concordance, or staff training. Thus, data from monitoring activities, particularly those in the more experimental setting, could augment the evidence base for an HTA (if the monitoring is performed after regulatory approval and before HTA) or could be stipulated post HTA. Post HTA, much work is ongoing on the value of Coverage with Evidence Development, but there is little evidence to describe how other monitoring information is used with HTA.

Challenges to Creating Robust Monitoring Mechanisms

Observational data from the monitoring activity of any stakeholder may contribute to an HTA but, like other evidence, must be critically appraised. If experimental monitoring is undertaken, it may be most easily achieved in partnership with other stakeholders, but to be successful a clear governance structure must be put in place that documents agreement about the purpose, conduct, reporting, intended actions from the monitoring activity, and the sharing of risks (including cost). There are barriers to be overcome if such a monitoring activity is to be successful, which primarily relate to patient participation, competing demands on clinicians, data collection, alteration of patient pathways, quality, and costs. Hence, engagement of practitioners and patients to improve compliance is essential.

After a technology has been licensed, patient recruitment to studies and high drop-out rates (affecting completeness of data in long-term studies) can be challenging. Meanwhile, for clinicians, the burden of data collection in standard clinical practice can be seen as onerous. Using electronic or Web-based systems may make data collection easier, but there may still be a perception of an unreasonable increase in monitoring requirements in the healthcare system (“bureaucracy”) as a whole. Furthermore, to use monitoring material from different sources, data collection systems need to use the same coding systems with a linkage mechanism, combined with good collaborative management to facilitate sharing of information, analysis according to an agreed plan, and presentation of results in a form acceptable to all those who contributed evidence. Hence, it is essential that monitoring activities focus on a minimum data set to achieve a clearly defined purpose within a reasonable time period, as sketched below. The time period of study is important, as there is a need to balance the relevance of a study with a fixed regimen and setting against an ever-changing healthcare environment.
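As a minimal sketch of what a shared coding system, a linkage key, and a pre-agreed minimum data set make possible, the following combines two hypothetical monitoring sources (a device registry and an administrative claims extract). All field names, codes, and values are illustrative assumptions, not data or systems described in this article.

import pandas as pd

# Hypothetical extracts from two monitoring sources. A pseudonymized
# linkage key ("patient_pid") is assumed to be shared by both systems.
registry = pd.DataFrame({
    "patient_pid": ["a1", "a2", "a3"],
    "procedure_code": ["NFB20", "NFB20", "NFB40"],   # illustrative procedure codes
    "implant_model": ["X", "Y", "X"],
})
claims = pd.DataFrame({
    "patient_pid": ["a1", "a2", "a4"],
    "diagnosis_icd10": ["M16.1", "M16.1", "M17.0"],  # illustrative ICD-10 codes
    "readmitted_90d": [False, True, False],
})

# The shared key (or an explicit mapping table between coding systems)
# is what makes the join meaningful across sources.
linked = registry.merge(claims, on="patient_pid", how="inner")

# A pre-agreed minimum data set keeps the collection burden manageable.
minimum_data_set = ["patient_pid", "procedure_code", "implant_model", "readmitted_90d"]
print(linked[minimum_data_set])

Without such an agreed key and coding system, each extra source adds manual reconciliation work rather than evidence, which is one reason the governance and minimum data set need to be fixed before collection starts.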

One of the most challenging hurdles is the question of who will pay for such good-quality monitoring studies. Health organizations generally have limited funding for such research, while manufacturers have already invested substantial amounts in the multinational, high-quality trials required by regulatory authorities. In fact, manufacturers do support monitoring activities, often in the form of registry studies evaluating outcomes or analyses of claims databases. However, these data are not always published or submitted to an HTA in the same manner as the confirmatory regulatory trials.

Given the difficulties in conduct and interpretation of results arising from monitoring studies, careful consideration must be given to the additional value of the information that monitoring may provide compared with the investment required to conduct the study.

Facilitators to Monitoring Activities Related to HTA

The U.S. comparative effectiveness program (20) funds controlled studies that compare strategies to manage a condition, taking into account real-world practice and variations in patient populations. Detailed guidance about study design and analysis has been created, and substantial funding is available so that high-quality studies can be undertaken to inform disease management strategies under the auspices of the Agency for Healthcare Research and Quality, which is also responsible for HTA. Likewise, the Ontario Health Technology Advisory Committee funds “field evaluations,” undertaken in partnership with clinical and academic research institutions (16).

Providers with high-quality electronic healthcare records have the potential to use these data for monitoring the use of technologies. Although such registries are subject to the limitations of uncontrolled data, they may provide valuable information with careful planning and reporting. For the Kaiser Permanente joint replacement registry, multistakeholder involvement and linkages with a variety of other databases, combined with assured confidentiality, a clear sense of purpose, and a dynamic feedback mechanism, have been shown to be important for success. The registry has provided a mechanism for recalls, changed clinical practice through ongoing feedback to physicians, and identified risk factors for the development of postoperative complications (18). It has achieved savings by reducing use of higher-risk procedures and new implants that are less cost-effective in the short term. It also gives patients access to information about their own implants, and anonymized data are fed back to manufacturers. Thus, it is being used to inform clinical practice, procurement, and patient decisions.

Manufacturers are investing in an increasing number of prospective, observational studies to measure outcomes after regulatory authorization or HTA, especially in Europe. These studies cover a range of designs, from those that are epidemiological in focus through to observational outcomes studies and registries. They may be performed in partnership with other stakeholders, such as providers or patient organizations. Some are reported in peer-reviewed journals, and some are required by agencies as a condition for reimbursement. One example is the SOHO (Schizophrenia Outpatient Health Outcome) study evaluating implications for the treatment of schizophrenia (8).

DISCUSSION

Optimization of health technology utilization is a difficult concept to consider as it is health system- and health technology-specific. However, any improvements in uptake or utilization of a health technology that lead to improved patient outcomes and more efficient use of resources would be welcomed.

Improved technology utilization can be achieved by high-quality, relevant, timely HTA advice; funding for the technology; coherent decision-making systems; professional engagement; sufficient infrastructure; and patient participation. HTAs may be made more relevant by explicit consideration of organizational issues, the use of HTA at various points in the decision-making process, and communication and educational initiatives to improve uptake of HTA findings by all “end users” in the healthcare system. This suggests the need for more effective engagement of all stakeholders throughout the HTA process, but such engagement is time consuming and different perspectives must be managed.

There is particular debate about the breadth of the HTA assessment and whether it should consider organizational issues separately. Some believe that these issues can be integrated into assessments of clinical- and cost-effectiveness. Others consider that for certain complex technologies (such as devices and procedures), providing evidence about the expected impact of a technology on health system structure, processes, and resources might be valuable to inform the construct and recommendations of an HTA or develop an implementation plan. However, such an approach may make efforts to harmonize the evidence requirements for an HTA more challenging.

There is clearly a need for “intelligent dissemination” of HTA findings, contextualizing the HTA for different audiences and ensuring that those audiences receive, understand, and can use the information, all of which may be achieved with evidence-based approaches.

This meeting showed that many stakeholders undertake monitoring activities, either to observe the real-life use of a technology or, in a more experimental way, to study epidemiology or determine the impact of the technology. In some cases, this information is not fed into the HTA process, whereas in others, it is generated at the request of HTA agencies. Such requests for monitoring data are relatively new, and there are concerns that the mechanisms for review of such evidence are not transparent. In particular, the study designs may lead to perceived biases, so that major investment in a study may be wasted. Hence, it is vital that, before any monitoring study is undertaken, all parties agree on the intended use of the evidence and the limitations that will be inherent in the study design and conduct (as a result of clinicians’ and patients’ behavior in the real-world setting) or due to the lack of a control group.

To be able to influence HTA, monitoring information must be scientifically robust, yet feasible and fundable. This suggests that a “minimum data set” should be collected, which could perhaps be determined by some form of “value of information” approach (21) to focus monitoring activities. Overall, further work and consultation are required to develop a shared understanding of the purposes, limitations, and value of monitoring studies.
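To illustrate how a value of information calculation can focus monitoring activities, the following is a minimal Monte Carlo sketch of the expected value of perfect information (EVPI) for a two-strategy decision. The net-benefit model, parameter distributions, willingness-to-pay threshold, and population size are illustrative assumptions only, not figures from the studies cited here.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo samples of the uncertain parameters

# Hypothetical effectiveness (QALYs) and cost distributions for
# standard care vs. a new technology; the new technology is assumed
# more uncertain because monitoring data are limited.
effect_std = rng.normal(0.60, 0.05, n)
effect_new = rng.normal(0.65, 0.10, n)
cost_std = rng.normal(10_000, 1_000, n)
cost_new = rng.normal(14_000, 2_000, n)

wtp = 50_000  # assumed willingness-to-pay threshold per QALY

# Net monetary benefit of each strategy under each sampled parameter set.
nb = np.column_stack([
    wtp * effect_std - cost_std,
    wtp * effect_new - cost_new,
])

# EVPI = E[max over strategies of NB] - max over strategies of E[NB]
evpi_per_patient = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"EVPI per patient: {evpi_per_patient:,.0f}")

# Scaled to the population expected to use the technology, this gives an
# upper bound on what a monitoring study resolving the uncertainty could
# be worth, and hence whether (and on which parameters) to collect data.
population = 20_000
print(f"Population EVPI: {evpi_per_patient * population:,.0f}")

In this spirit, parameters whose uncertainty contributes little to the decision can be dropped from the minimum data set, concentrating the monitoring effort where further evidence could actually change the recommendation.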

This meeting intended to address optimization in terms of uptake of clinically effective and cost-effective technologies and disinvestment from (or desisting use of) technologies that are not clinically effective or cost-effective. However, more evidence was available to support discussion of uptake. The use of the Kaiser registry shows that monitoring data can be used to disinvest from procedures that lead to poor outcomes, but the experience of several HTA organizations shows that use of HTA to support disinvestment is challenging (5).

POLICY IMPLICATIONS

To improve the impact of HTA and ensure optimal utilization of technology, HTA requires clear policy receptor functions. This is a challenge in many healthcare systems, which often have hierarchical and imperfectly aligned structures with fragmented decision making. One example of how this can be improved is to make the links between evidence-based work and the initiatives that drive decision-maker or provider behavior more transparent, as with the pay-for-performance program for family practitioners in the United Kingdom.

There is evidence to show that some implementation initiatives improve uptake of effective health interventions. These should be used in conjunction with HTA to improve the impact of HTA on utilization of or disinvestment in the health technology.

Greater partnership working is needed to create monitoring activities that are streamlined to produce robust, valuable evidence to inform HTA.

In an economic climate of recession, where many health systems will see minimal uplifts in budgets, there will be little room for investment in new technologies without disinvestment from other technologies. Further work is needed to consider the use of HTA not only for optimization of the utilization of single technologies, but also to help prioritize between technologies.

SUPPLEMENTARY MATERIAL

Supplementary Table 1

www.journals.cambridge.org/thc2010023

CONTACT INFORMATION

Katrine B. Frønsdal, PhD, MSc, Senior Researcher, Norwegian Knowledge Centre for the Health Services, P.O. Box 7004 St. Olavsplass, N-0130 Oslo, Norway

Karen Facey, BSc(Hons), PhD, CStat, HonMFPH, HTAi Policy Forum Chair, Honorary Senior Research Fellow, Department of Public Health and Health Policy, University of Glasgow, 1 Lilybank Gardens, Glasgow G12 8RZ, United Kingdom

Marianne Klemp, MD, PhD, Research Director, Norwegian Knowledge Centre for the Health Services, P.O. Box 7004 St. Olavsplass, N-0130 Oslo, Norway

Inger Natvig Norderhaug, PhD, MSc, Research Director, Norwegian Knowledge Centre for the Health Services, P.O. Box 7004 St. Olavsplass, N-0130 Oslo, Norway

Berit Mørland, DDS, PhD, Deputy Director General, Norwegian Knowledge Centre for the Health Services, P.O. Box 7004 St. Olavsplass, N-0130 Oslo, Norway

John-Arne Røttingen, MD, PhD, MSc, Chief Executive, Norwegian Knowledge Centre for the Health Services, P.O. Box 7004 St. Olavsplass, N-0130 Oslo, Norway

CONFLICT OF INTEREST

K. Frønsdal, M. Klemp, I. Norderhaug, B. Mørland, and J.A. Røttingen report having received consulting fees from the HTAi Policy Forum for their institution (NOKC), which houses the scientific secretariat for the Policy Forum. K. Facey has received funding as an employee of the MHRA (UK) at the Committee on Safety of Devices, as a Member of the Board at NHS Forth Valley (UK), and for being Chair of the Policy Forum in 2009–2010 at HTAi. She has received consultancy fees or travel funding from the National Institutes of Health Research, the EUnetHTA network, Guidelines International Network, HTA agencies, patient organizations, international pharmaceutical companies, and governmental departments. She is Chair of the HTAi Interest Group on Patient/Citizen Involvement in HTA, and her husband is a general practitioner in the UK.

REFERENCES

1. Borowski HZ, Brehaut J, Hailey D. Linking evidence from health technology assessments to policy and decision making: The Alberta model. Int J Technol Assess Health Care. 2007;23:155–161.
2. CADTH COMPUS Program. http://www.cadth.ca/index.php/en/compus (accessed December 2008).
3. Cheater F, Baker R, Gillies C, et al. Tailored interventions to overcome identified barriers to change: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2005;3:CD005470.
4. Doran T, Fullwood C, Gravelle H, et al. Pay-for-performance programs in family practices in the United Kingdom. N Engl J Med. 2006;355:375–384.
5. Elshaug AG, Hiller JE, Moss JR. Exploring policy-makers' perspectives on disinvestment from ineffective healthcare practices. Int J Technol Assess Health Care. 2008;24:1–9.
6. Gerhardus A, Dorendorf E, Røttingen JA, Santamera AS. What are the effects of HTA reports on the health system? Evidence from the research literature. In: Garrido MV, Kristensen FB, Nielsen CP, Busse R, eds. Health technology assessment and health policy-making in Europe: Current status, challenges and potential. EUnetHTA, European Observatory on Health Systems and Policies; 2008. Observatory Studies Series No. 14:109–125.
7. Granados A, Jonsson E, Banta HD, et al. EUR-ASSESS project subgroup report on dissemination and impact. Int J Technol Assess Health Care. 1997;13:220–286.
8. Haro J, Salvador-Carulla L. The SOHO (Schizophrenia Outpatient Health Outcome) study: Implications for the treatment of schizophrenia. CNS Drugs. 2006;20:293–302.
9. Hutton J, Trueman P, Facey K. Harmonization of evidence requirements for health technology assessment in reimbursement decision making. Int J Technol Assess Health Care. 2008;24:511–517.
10. Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD. Audit and feedback: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2006;2:CD000259.
11. Kristensen FB, Sigmund H. DACEHTA health technology assessment handbook: The organisation, chapt. 8. Copenhagen: Danish Centre for Health Technology Assessment, National Board of Health; 2007:116–139.
12. Leavitt HJ. Applied organizational change in industry: Structural, technological and humanistic approaches. In: March JG, ed. Handbook of organizations. Chicago: Rand McNally; 1965.
13. National Institute for Health and Clinical Excellence. Implementation tools. http://www.nice.org.uk/usingguidance/implementationtools/implementation_tools.jsp (accessed March 2009).
14. Nicholson S, Pauly MV, Wu AY, et al. Getting real performance out of pay-for-performance. Milbank Q. 2008;86:435–457.
15. OECD Health Project. Decision making and implementation: An analysis of survey results. In: Health technologies and decision making. Paris: OECD; 2005.
16. Ontario Ministry of Health and Long Term Care. Medical Advisory Secretariat Field Evaluations. http://www.health.gov.on.ca/english/providers/program/mas/field/field_eval.html (accessed April 2009).
17. Oxman AD, Bjørndal A, Flottorp SA, Lewin S, Lindahl AK. Implementing change. In: Integrated health care for people with chronic conditions. Oslo: Norwegian Knowledge Centre for the Health Services; 2008:1–108. http://www.kunnskapssenteret.no/Publikasjoner/5114.cms (accessed December 2008).
18. Paxton EW, Inacio M, Slipchenko T, Fithian DC. The Kaiser Permanente national total joint replacement registry. Perm J. 2007;12:12–16.
19. Perlin JB, Kolodner RM, Roswell RH. The Veterans Health Administration: Quality, value, accountability, and information as transforming strategies for patient-centered care. Am J Manag Care. 2004;10:828–836.
20. Slutsky J, Atkins D, Chang S. Comparing medical interventions. In: Agency for Healthcare Research and Quality. Methods guide for comparative effectiveness reviews. Rockville, MD: AHRQ; 2008. http://effectivehealthcare.ahrq.gov/healthInfo.cfm?infotype=rr&ProcessID=60 (accessed December 2008).
21. Tappenden P, Chilcott JB, Eggington S, Oakley J, McCabe C. Methods for expected value of information analysis in complex health economic models: Developments on the health economics of interferon-beta and glatiramer acetate for multiple sclerosis. Health Technol Assess. 2004;8:1–78.