
Assessing medical devices: a qualitative study from the VALIDATE perspective

Published online by Cambridge University Press:  24 April 2024

Bart Bloemen*
Affiliation:
IQ Health Science Department, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
Wija Oortwijn
Affiliation:
IQ Health Science Department, Research Institute for Medical Innovation, Radboud University Medical Center, Nijmegen, Netherlands
*Corresponding author: Bart Bloemen; Email: [email protected]

Abstract

Objectives

Our objective was to explore the procedures and methods used at health technology assessment (HTA) agencies for assessing medical devices, as well as the underlying views of HTA practitioners about appropriate methodology, in order to identify challenges in adopting new methodologies for assessing devices. We focused on the role of HTA practitioners' normative commitments in the adoption of new methods.

Methods

An online survey, including questions on procedures, scoping, and assessments of medical devices, was sent to members of the International Network of Agencies for Health Technology Assessment. Interviews were conducted with survey respondents and HTA practitioners involved in assessments of transcatheter aortic valve implantation to gain an in-depth understanding of choices made and views about assessing medical devices. Survey and interview questions were inspired by the "values in doing assessments of health technologies" (VALIDATE) approach towards HTA, which holds that HTA addresses value-laden questions and information.

Results

The current practice of assessing medical devices at HTA agencies is predominantly based on procedures, methods, and epistemological principles developed for assessments of drugs. Both practical factors (available time, demands of decision-makers, existing legal frameworks, and HTA guidelines) and commitments of HTA practitioners to principles of evidence-based medicine make the adoption of a new methodology difficult.

Conclusions

There is broad recognition that assessments of medical devices may need changes in HTA methodology. Realizing this may require both a discussion within the HTA community on the roles, responsibilities, and goals of HTA and resulting changes in institutional context to adopt new methodologies.

Type
Method
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Introduction

Health technology assessment (HTA) aims to inform decision-makers by assessing the potential value of health technologies (1). Therefore, HTA practitioners (those responsible for conducting assessments, including scoping, collecting, synthesizing, and interpreting available evidence) need to identify evidence that can answer policy-relevant questions about the potential value of health technology, requiring decisions on which information can be regarded as reliable and relevant. Current discussions about appropriate HTA methodology for assessing (high-risk) medical devices show that this is not an easy task. Based on differences between medical devices and drugs, scholars argue that HTA methodology for medical devices should be adapted to 1) integrate other types of evidence (e.g., real-world evidence) to address the lack of evidence from randomized clinical trials and capture the impact of iterative developments of devices on outcomes; 2) broaden the scope of assessments to capture organizational aspects (e.g., impact on healthcare capacity); and 3) involve stakeholders in assessments (e.g., making methodological decisions) to address context-dependence of outcomes and gather information on user experiences and preferences (2–8).

Despite these calls to assess medical devices differently, previous studies have shown that HTA agencies use similar methodologies for assessing drugs and medical devices (2;4;5;9;10). Although practical reasons like capacity problems and existing regulatory frameworks contribute to this uniformity, we argue that normative commitments of HTA agencies and practitioners also play a role. Inspired by the "values in doing assessments of health technologies" (VALIDATE) approach, which emphasizes that the relevance and meaning of evidence considered in HTA depend on underlying values, we reasoned that the value perspectives of both stakeholders and HTA practitioners are instrumental in conducting assessments (11;12). This implies that the activities of HTA agencies and practitioners are not solely guided by established HTA guidelines but are also influenced by practitioners' views on how HTA can improve outcomes of health technology for society. Given that HTA is often presumed to provide information about the public value of health technology, transcending particular interests, HTA practitioners and agencies are committed to methodological principles presumed to guarantee a neutral or unbiased evidence base for decision-makers (13–15). These commitments may conflict with new types of evidence, outcome measures, and methodologies proposed for assessing medical devices.

To explore the significance of these commitments, alongside practical challenges, in the adoption of new methodology (e.g., real-world data, stakeholder involvement) for assessments of (high-risk) medical devices, we conducted a survey and interview study among relevant HTA agencies. Our objective was to map the procedures and methodologies currently used by these HTA agencies and to retrieve the views of HTA practitioners about the role of HTA, stakeholder involvement, and appropriate evidence in HTA.

Methods

We used a semi-structured survey to gather information on the current practice of assessing (high-risk) medical devices by HTA agencies (i.e., legal frameworks, procedures, and methods). We defined high-risk medical devices as Class IIb and Class III medical devices according to the European Regulation on Medical Devices – Regulation (EU) 2017/745. Additionally, building on previous findings in the literature, we used semi-structured interviews with HTA practitioners to explore whether changes in HTA methodology may conflict with their views (13). Specifically, we were interested in their perspectives on the role of HTA in decision-making, their responsibilities in the conduct of HTA, stakeholder involvement, and what constitutes appropriate evidence, particularly for assessing medical devices. Both survey and interview questions, inspired by the VALIDATE approach and literature on HTA for medical devices, also delved into the value-laden aspects of HTA procedures and methodology. See also Supplementary Figure 1 for a schematic illustration of the qualitative approach taken in this study.

Survey

The online survey was developed based on our previous work regarding deliberative HTA processes (targeting stakeholder involvement), normative analysis, and desk research on challenges in assessing medical devices (2–10;12;16;17). Questions focused on institutional context and current HTA processes, scoping, and assessing medical devices (the types of evidence used, aspects assessed, stakeholder involvement). A draft version was tested by an HTA practitioner at a national HTA agency from our network. Based on the received feedback, minor changes were introduced to clarify questions. The survey (and invitation email) is provided as Supplementary file 1.

We invited members of the International Network of Agencies for Health Technology Assessment (INAHTA), excluding research organizations and regulatory agencies (n = 3) and one institute known to us not to assess medical devices. We targeted specific persons known from our networks and/or known to assess medical devices; otherwise, the contact persons mentioned on the INAHTA website (www.inahta.org) were approached. Data collection occurred via the online tool CheckMarket between January and February 2023, including two biweekly reminders. We asked respondents for consent to analyze the results and assured confidentiality (no attribution is made to specific persons). We also asked for consent to contact them for an interview.

Descriptive statistics (frequencies presented as percentages) derived from the CheckMarket tool were used to summarize the findings. Where needed, websites, literature, and publicly available guidelines and HTA reports from HTA agencies (retrieved by manually searching their websites) were reviewed to clarify responses and gain an in-depth understanding of the processes and methodology used for assessing medical devices (see also Supplementary file 2).
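
To illustrate this summarization step, a minimal sketch (in Python, using hypothetical answers rather than the actual CheckMarket export) of tallying responses and reporting frequencies as percentages:

```python
from collections import Counter

# Hypothetical answers to one multiple-choice survey question; the real data
# were exported from CheckMarket and are not reproduced here.
responses = [
    "request by decision-makers",
    "request by decision-makers",
    "application by the manufacturer",
    "horizon scanning",
    "request by decision-makers",
]

counts = Counter(responses)
n = len(responses)
for answer, count in counts.most_common():
    # Frequencies are reported as percentages of respondents answering the question.
    print(f"{answer}: {count}/{n} ({count / n:.0%})")
```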

Interviews

We invited (via email) HTA practitioners who responded to the survey and indicated that they could be contacted, and specifically invited HTA practitioners involved in assessing transcatheter aortic valve implantation (TAVI) to explore choices made in real-world assessments. TAVI was chosen as an example because it is a high-risk medical device that has already been implemented in clinical practice, and full HTAs have been conducted in different jurisdictions. It is a minimally invasive technology originally aimed at inoperable patients with symptomatic severe aortic valve stenosis. Since its Conformité Européenne (CE) marking in 2007, its use has expanded to patients at high, intermediate, and low surgical risk. We focused on assessments of TAVI for patients at low risk of surgical complications (i.e., eligible for the standard treatment, surgical aortic valve replacement [SAVR]); in this group, TAVI has become standard care for patients aged 75 years and above (18). In November 2022, the HTA database (https://database.inahta.org/) was used to search for full HTA reports using the MeSH term "Transcatheter Aortic Valve Replacement," which retrieved available HTA reports (on TAVI for low-risk patients) from the Health Information and Quality Authority – HIQA (Ireland), Ontario Health (Canada), and the Norwegian Institute of Public Health (19–21). In addition, a manual search retrieved a report by the Haute Autorité de Santé (France) (22).

We developed a semi-structured interview guide based on relevant literature on normativity in HTA, challenges in assessing medical devices/TAVI, and the VALIDATE approach. Interviews comprised three parts: (i) professional background, experience, and current position of the HTA practitioner; (ii) questions on context and decisions made in developing the respective HTA report on TAVI, or questions to clarify answers given to survey questions; (iii) personal views of the HTA practitioner on roles and responsibilities of HTA, and methodological issues in assessments of medical devices. The interview guide was iteratively updated based on experiences with conducting the interviews. Given the explorative nature of our study, data saturation was not a target.

The lead author (B.B., PhD candidate in HTA) conducted online interviews (using Microsoft Teams) between February and May 2023, each lasting 1 to 1.5 hours. All interviews were audio-recorded and summarized; interviewees were asked to provide feedback on the summary to clarify any misunderstandings. Prior to participation, oral consent was obtained from all interviewees, who were informed about the study objectives through the invitation email and the draft interview guide.

More information about the preparation of interviews, and the interview guide, can be found in Supplementary file 3.

The analysis of the interviews was based on the updated summaries (incorporating feedback from the interviewees), supplemented with information retrieved from the websites of the respective HTA agencies, HTA reports, and publicly available guidelines. Thematic analysis was used, which is a method for identifying, analyzing, and reporting themes within the data. Because the interviews were conducted to provide in-depth information, complementary to the survey, about the context and reasons (including views of HTA practitioners) behind current processes and methodology for assessing medical devices (see also Supplementary Figure 1), the main themes from the survey (scoping, types of evidence, aspects of devices being assessed, stakeholder involvement) were the starting point for analyzing the interviews. The lead author used a process of inductive comparison and reasoning to identify subthemes that reflect the content of the conducted interviews.
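
As a schematic sketch of this coding approach (not the study's actual codebook), the survey themes seed the analysis and subthemes are added inductively per interview summary; the subthemes below are illustrative placeholders:

```python
# Starting codebook: main themes taken from the survey (see above); the
# subthemes added further down are illustrative, not the study's actual codes.
codebook: dict[str, list[str]] = {
    "scoping": [],
    "types of evidence": [],
    "aspects of devices being assessed": [],
    "stakeholder involvement": [],
}

def add_subtheme(main_theme: str, subtheme: str) -> None:
    """Record an inductively identified subtheme under an existing main theme."""
    if main_theme not in codebook:
        raise KeyError(f"Unknown main theme: {main_theme}")
    if subtheme not in codebook[main_theme]:
        codebook[main_theme].append(subtheme)

# Hypothetical coding pass over one interview summary:
add_subtheme("types of evidence", "preference for comparative data")
add_subtheme("stakeholder involvement", "concerns about vested interests")

for theme, subthemes in codebook.items():
    print(f"{theme}: {subthemes}")
```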

The consolidated criteria for reporting qualitative research (COREQ) checklist was used to ensure that the methods, results, and discussion were reported appropriately (23).

Results

Study participants

We invited fifty contact persons of INAHTA member agencies, of whom twenty-two responded to the survey (response rate of 44 percent). Two respondents answered less than 50 percent of the main questions and were excluded from the analysis. In addition, five respondents were excluded because they were not involved in the assessment of medical devices. In total, we analyzed fifteen survey responses: twelve fully completed surveys and three from agencies that provided meaningful answers (answering more than 50 percent of the questions on scoping and/or assessment). Of these fifteen, eight respondents were willing to be interviewed (53 percent).

Four of these eight (50 percent), from HTA agencies in the Netherlands, Spain, Taiwan, and Colombia, accepted our invitation for an interview. Of the authors of the four retrieved HTA reports on TAVI who were invited for an interview (n = 9), two accepted our invitation, one initially agreed to be interviewed but did not respond to multiple reminders to set an interview date, one declined participation, two referred us to a co-author, and three did not respond at all. When an author of an HTA report on TAVI accepted the invitation, other authors of the same HTA report were not invited.

Table 1 provides an overview of participating HTA agencies. Additional information about interview participants is reported in Supplementary Table 1. Most participating agencies are governmental institutions (29 percent) or institutes with a governmental function (47 percent; independent of a Ministry of Health), advising policymakers on national policy decisions (e.g., allocation of public resources, reimbursement by health insurance) on medical devices.

Table 1. Overview of HTA agencies that (partially) completed the survey and/or participated in the interviews

a Categorization based on Fuchs et al. 2017: 1 = independent academic research entity, 2 = Governmental institutions (a. national, b. regional), 3 = Regional Ministries of Health/Social Affairs including a related department, 4 = Independent entities with function as a governmental institution, 5 = Non-departmental public body with legislative function.

Abbreviations: Avalia-t/ACIS, Unidad de Asesoramiento Científico-técnico (Avalia-t), Axencia Galega de Coñecemento en Saúde (ACIS); AQuAS, Agència de Qualitat i Avaluació Sanitàries de Catalunya; CADTH, Canadian Agency for Drugs and Technologies in Health; CDE/HTA, Center for Drug Evaluation Health Technology Assessment; FOPH, Federal Office of Public Health; G-BA, Gemeinsamer Bundesausschuss; HIQA, Health Information and Quality Authority; IECS, Instituto de Efectividad Clínica y Sanitaria; IETS, Instituto de Evaluación Tecnológica en Salud; IQWiG, Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen; MaHTAS, Malaysian Health Technology Assessment Section; NECA, National Evidence-based healthcare Collaborating Agency; NIPH, Norwegian Institute of Public Health; SK-NRCHD, Salidat Kairbekova National Research Center for Health Development; ZIN, Zorginstituut Nederland.

Institutional context, procedures for assessing medical devices

Survey respondents and interviewees were asked how assessments of medical devices are initiated and how they differ from HTA processes for drugs (see Supplementary Tables 1 and 2).

In general, agencies have similar procedures for assessing devices and drugs, but processes may differ in duration, initiation of assessments, and evidential requirements, which are more heterogeneous for devices. The definition of medical devices varies widely: five agencies use EU directives that include specific definitions of (classes of) medical devices, three agencies use a definition from their national law, and five agencies report a broader definition of health technology that includes devices.

When a medical device is introduced to a market (after regulatory approval), HTA agencies are mostly asked to conduct assessments that inform reimbursement decisions at the request of decision-makers (73 percent), followed by an application by the manufacturer and identification via horizon scanning (47 percent). Although there are experiments with involving stakeholders in deciding which devices need an assessment, this is often limited to proposing topics or providing feedback on a draft HTA protocol, and the final decision rests with decision-makers and sometimes HTA practitioners. Interviewees also mentioned that decision-makers' needs often determine which assessments are initiated (see also Table 3).

Scoping

Nine survey respondents (60 percent) reported that their agency has (publicly available) guidelines or documents on scoping applicable to medical devices (see Table 2). The guiding principles of the scoping process mentioned most often are transparency (78 percent) and the overarching goals of the HTA agency or healthcare system, impartiality, consistency, and verifiability (all 67 percent), whereas inclusivity (44 percent), timeliness (44 percent), and efficiency (33 percent) are mentioned less frequently. Scoping often focuses on defining the health technology and its comparators needing an assessment (67 percent), whereas defining the health problem is rarely the objective of scoping (22 percent).

Table 2. Overview of answers provided to survey questions on scoping

Eight agencies (53 percent) include a description of stakeholder involvement in their guidelines for scoping. Input requested from stakeholders primarily concerns background information (88 percent) and information on their value perspectives and ideas about relevant outcome measures (63 percent). Stakeholders are recruited by invitation (50 percent) or a combination of closed and open procedures (38 percent). The stakeholders most often involved in scoping are providers of care, experts in medicine, patients' organizations, experts in health economics, and policymakers, whereas the involvement of patients themselves (not represented via a patients' organization), informal caregivers, and the public (organized groups of citizens) is low (25 percent or less). Some groups of stakeholders are mostly involved in a specific way: payers and purchasers primarily via consultation (i.e., asked to provide written feedback) and experts in law primarily via participation (i.e., involved in deliberations and meetings).

When it comes to the methodology used in scoping, the population, intervention, comparator, outcome (PICO) tool is always used. This tool structures the scoping process by focusing on specifying the research question. Comparators and outcomes are primarily selected based on literature reviews, interviews with health professionals and other relevant experts, and focus groups with a mix of experts (including health professionals and patients). In some cases, relevant outcome measures are selected by surveying relevant stakeholders.
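
To make the PICO structure concrete, a minimal sketch of how a scope could be recorded; the TAVI example is assembled from details reported elsewhere in this article and is not a reproduction of any agency's actual protocol:

```python
from dataclasses import dataclass, field

@dataclass
class PICOScope:
    """Minimal representation of a scoping output structured with the PICO tool."""
    population: str
    intervention: str
    comparators: list[str]
    outcomes: list[str] = field(default_factory=list)

# Illustrative scope for the TAVI assessments discussed below; the outcomes
# mirror aspects mentioned in the Results, not a specific agency's protocol.
tavi_scope = PICOScope(
    population="Patients with severe symptomatic aortic stenosis at low surgical risk",
    intervention="Transcatheter aortic valve implantation (TAVI)",
    comparators=["Surgical aortic valve replacement (SAVR)"],
    outcomes=["safety", "clinical effectiveness", "cost-effectiveness", "budget impact"],
)

print(tavi_scope)
```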

Scoping was also discussed during the interviews, confirming that it is often technology-focused and based on literature and expert opinion (see also the illustrative fragments from interviews in Table 3 and Supplementary Table 3). At some agencies, stakeholders are consulted about whether they agree with the scope and to raise comments on whether anything is missing. The interviews on TAVI showed that expectations concerning the health problem (aortic valve stenosis) for which TAVI is held to be a solution, and what the relevant comparators are, are not explicitly questioned during scoping but are assumed to correspond to what is claimed by health professionals and/or described in the literature. Consequently, TAVI is only compared with the current standard in clinical practice (SAVR), and alternative interventions (e.g., preventative treatment, drug-based treatment, and so on) seem not to be considered. The scoping processes conducted for TAVI are also not reported; only their output is part of the final HTA report (e.g., specifications of objectives or terms of reference for the assessments), or a brief description of input collected from stakeholders during scoping is included in the report (e.g., the NIPH report on TAVI includes an appendix on "user involvement") (19–22).

Table 3. Illustrative fragments from summaries of interviews

Interviewees also mentioned that the scope of an assessment is often already pre-determined by legal requirements and/or official HTA guidelines for conducting assessments (see Supplementary Tables 1 and 3).

Assessment

Use of different types of evidence

Participating agencies predominantly use traditional types of studies (e.g., RCTs, meta-analyses, systematic reviews; see Table 4). Qualitative research methods are used by fewer than 50 percent of agencies, are confined to obtaining information about patients' perspectives and experiences to contextualize quantitative evidence, and play no role as formal evidence in assessments.

Table 4. Overview of answers provided to survey questions on evidence considerations in assessments of high-risk medical devices

Survey responses and interviews show that HTA practitioners acknowledge the challenges involved in collecting data on medical devices, but also that they consider the same epistemic principles to apply (e.g., evidence hierarchy, risk of bias) and regard alternatives like real-world evidence as introducing more uncertainty (see Tables 3 and 4, and Supplementary Table 3). HTA practitioners mentioned several times that they only consider comparative data, that is, data that allow conclusions about the relative effectiveness of different health technologies, which they consider important given the purpose of HTA (to inform decisions at the level of the healthcare system). The main reasons mentioned for considering real-world evidence are a) that it could address iterative developments of medical devices (i.e., traditional methods for gathering evidence cannot keep up with this pace of development), and b) that it could address the context dependency of medical devices (i.e., capture contextual factors in "real-world" circumstances).

Interviews on TAVI showed (see Table 3 and Supplementary Table 3) that other data types were considered by HTA agencies but not used when assessing the safety or comparative clinical effectiveness of medical devices, because they were deemed to provide no additional information with respect to the available (high-quality) RCT data. The HTA reports on TAVI also show this reliance on RCT data: only one agency (HIQA) reported findings from registries in its safety assessment, and these were used only as a supplement to RCT data. The registry data were presented only narratively, without explicit critical appraisal of their quality (beyond evaluating the relevance and appropriateness of the patient populations included in the registries) (19).

Aspects considered in assessment

Aspects primarily considered in assessments of medical devices are clinical effectiveness (100 percent), safety (93 percent), costs and economic implications (79 percent), and quality of life (71 percent), followed by organizational aspects (64 percent), and legal and ethical issues (both 50 percent); see Supplementary Table 4.

Interviewees expressed a lack of expertise, time, and capacity to consider a broader spectrum of aspects, and noted that explicit consideration of ethical issues is not always seen as the responsibility of HTA practitioners or is not recognized as requiring explicit attention (see Table 3 and Supplementary Table 3). The inclusion of a broader spectrum of aspects is also limited by legal frameworks that pre-define a narrower scope for assessments.

For TAVI, Ontario Health assessed a broad range of aspects (clinical effectiveness, safety, cost-effectiveness, budget impact, values and preferences of patients and informal caregivers), and these were integrated into the conclusions and recommendations (20;24;25). Patient preferences were included by reviewing published qualitative and quantitative evidence on preferences and by directly engaging patients with lived experience of TAVI. Ethical issues were not assessed because it was concluded during scoping that there was no need for this. At HIQA, the safety, clinical effectiveness, cost-effectiveness, budget impact, and organizational aspects (e.g., impact on healthcare capacity) of TAVI were assessed, whereas ethical issues were only described (with equity as a primary concern) (19). NIPH and HAS assessed the safety, clinical effectiveness, cost-effectiveness, and budget impact of TAVI (21;22).

Stakeholder involvement

Stakeholder involvement during assessment is confined to collecting evidence and reviewing its plausibility, and stakeholders' role in making methodological decisions is limited (see Table 5). Stakeholders involved in all facets of conducting an assessment are patient organizations, providers of care, policymakers, payers/purchasers, and experts in medicine, health economics, epidemiology, ethics, and law. Patients (not represented by an organization), manufacturers, and informal caregivers are involved in collecting evidence, but are almost entirely excluded from making methodological decisions and reviewing evidence.

Table 5. Overview of answers provided to survey questions on stakeholder involvement in assessments of medical devices

Interviewees expressed concerns about stakeholder involvement, mentioning potential threats to the impartiality and objectivity of the evidence base, as stakeholders may have vested interests and the information they provide may be skewed in favor of certain outcomes. Additionally, interviewees noted that stakeholders have a limited understanding of HTA processes (see Table 3 and Supplementary Table 3). Despite these concerns, interviewees acknowledged the importance of stakeholder involvement, especially for obtaining information on which outcomes are relevant and for addressing challenges related to medical devices (e.g., appropriate use of medical devices requires the engagement of both clinicians and patients; manufacturers can provide technical information about different generations of a device).

Regarding TAVI, stakeholder involvement was limited to a literature review of quantitative and qualitative research into patient preferences, direct engagement of patients (excluding those at low surgical risk), and the inclusion of patient representatives in the Expert Advisory Group. Their direct contributions involved providing feedback on drafts of HTA reports and sharing their experiences (19–21).

Discussion

Despite the recognized need for changes in HTA methodology for medical devices, HTA agencies still resort to methods developed for assessing drugs and focus on assessing clinical aspects (safety, effectiveness) and cost-effectiveness using quantitative data. The broadening of who is involved (stakeholder involvement), what is assessed (which aspects of health technology), and which information is considered (e.g., real-world evidence, qualitative research), as proposed by VALIDATE and other groups of experts in HTA, is not yet fully seen in current practice at HTA agencies (3;8;12). This discrepancy aligns with previous observations in surveys and reviews of guidelines (4;5;9;10). A recently published review of full HTA reports on TAVI for patients at low surgical risk, including the reports discussed in this study, also showed their predominant reliance on traditional RCT data and clinical outcome measures (26). What our findings add to these studies is the understanding that, although HTA practitioners recognize the relevance of other types of evidence and methods, they are committed to existing epistemological principles (e.g., evidence hierarchy, risk of bias) that automatically downgrade non-RCT data, effectively excluding it from having an impact on recommendations, as previously observed in a study of real-world data policies for HTA of drugs (27). HTA scholars have also criticized the quality of real-world evidence used in HTAs of high-risk medical devices (28).

Certain practical factors may also explain the reluctance to introduce new methods for assessing medical devices. Both in the survey responses and during the interviews it became clear that HTA practitioners work under time pressure, must pay attention to the demands of decision-makers, and need to adhere to existing legal frameworks and HTA guidelines, limiting their ability to experiment with new methodology. HTA practitioners therefore need a supportive environment (institutional context) that recognizes the importance of changing the methodology for assessing medical devices.

In addition to this role of the environment, our interviews with HTA practitioners highlight normative considerations that also play a role in sustaining the status quo. HTA practitioners frequently expressed concerns about how uncertainties and biases associated with other types of evidence and with stakeholders might influence the HTA process, potentially conflicting with the responsibility of HTA to guarantee an impartial ("neutral," "objective") synthesis and interpretation of the available evidence. The persistent use of traditional methods and evidence hierarchies, and the exclusion of stakeholders from parts of the process, may therefore not only result from the demands of decision-makers and official frameworks, but may also be regarded as the best way to ensure this neutral role of HTA in decision-making. As observed in another interview study, HTA practitioners' reliance on certain epistemological ideas may originate from ideas about the intrinsic value of HTA itself (13).

Therefore, the adoption of a new methodology for assessing medical devices at HTA agencies requires a discussion within the HTA community about the roles, responsibilities, and goals of HTA, and how to realize them. This includes acknowledging the implicit normative underpinnings of HTA processes and methods. For example, we agree with interviewees that the role and responsibility of HTA is to provide information on the public value of health technology, requiring expertise, processes, and methods that ensure that the collected information is not influenced by particular interests. However, this does not imply that HTA practitioners need to refrain from making value judgments. Increasingly, HTA agencies and scholars acknowledge that conducting assessments requires making value judgments (29). Although this may be a matter of degree, partly depending on the mandate of the HTA practitioner (e.g., working within a decision-making body or at an academic institute), every assessment requires value-laden decisions about which methods and outcome measures are appropriate for evaluating a health technology (30). Given this recognition of the normativity of HTA, there is room to reflect on whether current epistemic norms (like strict adherence to a hierarchy of evidence) are still helpful in fulfilling the role of HTA in decision-making. Methods evolve, offering new ways of obtaining reliable data on the effects of health technology, and HTA guidelines already provide some room to consider diverse outcome measures (31;32). Together with the broader HTA community (those using the outcomes of HTA or being impacted by it), HTA practitioners may explore how such new methodology can help in assessing medical devices and improve the relevance of HTA (33).

Future research on the impact of changes in HTA methodology on decision-making, and on the ideas of decision-makers and stakeholders about evidential requirements for different types of technology, could guide this collaborative rethinking of how new technologies, including medical devices, are assessed (34).

Strengths and limitations

Although we managed to collect survey responses and conduct interviews with HTA practitioners working at seventeen different agencies, we cannot be certain that we captured the full diversity of methodologies used and views held by HTA practitioners. Future research should try to include more agencies from different regions and interview multiple practitioners per agency. However, we are reassured about the validity of our results by their convergence with the findings of previous studies on HTA practice for medical devices and of interviews with HTA practitioners about their views on appropriate methodology (4;9;10;13;14). By combining surveys and interviews, we have provided an in-depth understanding of why certain methodologies are used.

Although we tried to explore the websites, published guidelines, and HTA reports of participating agencies to verify our findings, we were sometimes unable to retrieve or interpret material because it was not publicly available or not available in English.

Conclusions

Despite recognizing the need for changes in HTA methodology for medical devices, HTA agencies predominantly use methods developed for assessing drugs. Both practical factors (available capacity, existing legal frameworks, and HTA guidelines) and HTA practitioners’ commitments to principles of evidence-based medicine make adoption of a new methodology difficult. Therefore, the adoption of new methodologies at HTA agencies may require a discussion within the HTA community on the roles, responsibilities, and goals of HTA, and how these can be realized by changes in methodology and institutional context.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/S0266462324000254.

Acknowledgments

We thank all the survey respondents and interviewees for their participation in this study. We also thank Professor Gert Jan van der Wilt for reviewing and providing insightful and valuable comments on an early version of this manuscript.

Funding statement

This research received an unrestricted grant from Edwards Lifesciences. The funding organization did not have any influence on the development of the data collection methods, the analysis, or the reporting.

Competing interest

W.O. reports an unrestricted research grant from Edwards Lifesciences for the conduct of the study which was paid to Radboud University Medical Center. B.B. declares no conflict of interest related to this study.

References

1. O'Rourke B, Oortwijn W, Schuller T, International Joint Task Group. The new definition of health technology assessment: A milestone in international collaboration. Int J Technol Assess Health Care. 2020;36:187–190.
2. Enzing JJ, Vijgen S, Knies S, Boer B, Brouwer WBF. Do economic evaluations of TAVI deal with learning effects, innovation, and context dependency? A review. Health Policy Technol. 2021;10:111–119.
3. Enzing JJ, Knies S, Boer B, Brouwer WBF. Broadening the application of health technology assessment in the Netherlands: a worthwhile destination but not an easy ride? Health Econ Policy Law. 2021;16:440–456.
4. Fuchs S, Olberg B, Panteli D, Perleth M, Busse R. HTA of medical devices: Challenges and ideas for the future from a European perspective. Health Policy. 2017;121:215–229.
5. Ming J, He Y, Yang Y, Hu M, Zhao X, Liu J, et al. Health technology assessment of medical devices: current landscape, challenges, and a way forward. Cost Eff Resour Alloc. 2022;20:54.
6. Torbica A, Tarricone R, Schreyogg J, Drummond M. Pushing the boundaries of evaluation, diffusion, and use of medical devices in Europe: Insights from the COMED project. Health Econ. 2022;31 Suppl 1:1–9.
7. Pomey MP, Brouillard P, Ganache I, Lambert L, Boothroyd L, Collette C, et al. Co-construction of health technology assessment recommendations with patients: An example with cardiac defibrillator replacement. Health Expect. 2020;23:182–192.
8. Tarricone R, Torbica A, Drummond M, MedtecHTA Project Group. Key recommendations from the MedtecHTA project. Health Econ. 2017;26 Suppl 1:145–152.
9. Bluher M, Saunders SJ, Mittard V, Torrejon Torres R, Davis JA, Saunders R. Critical review of European health-economic guidelines for the health technology assessment of medical devices. Front Med. 2019;6:278.
10. Ciani O, Wilcher B, Blankart CR, Hatz M, Rupel VP, Erker RS, et al. Health technology assessment of medical devices: a survey of non-European Union agencies. Int J Technol Assess Health Care. 2015;31:154–165.
11. van der Wilt GJ, Oortwijn W, VALIDATE-HTA Consortium. Health technology assessment: A matter of facts and values. Int J Technol Assess Health Care. 2022;38:e53.
12. van der Wilt GJ, Bloemen B, Grin J, Gutierrez-Ibarluzea I, Sampietro-Colom L, Refolo P, et al. Integrating empirical analysis and normative inquiry in health technology assessment: the values in doing assessments of health technologies approach. Int J Technol Assess Health Care. 2022;38:e52.
13. Ducey A, Ross S, Pott T, Thompson C. The moral economy of health technology assessment: An empirical qualitative study. Evid Policy. 2017;13:7–27.
14. Boothe K. (Re)defining legitimacy in Canadian drug assessment policy? Comparing ideas over time. Health Econ Policy Law. 2021;16:424–439.
15. Gagnon H, Legault GA, Bellemare CA, Parent M, Dagenais P, S KB, et al. How does HTA addresses current social expectations? An international survey. Int J Technol Assess Health Care. 2020;37:e9.
16. Oortwijn W, Jansen M, Baltussen R. Use of evidence-informed deliberative processes by health technology assessment agencies around the globe. Int J Health Policy Manag. 2020;9:27–33.
17. Oortwijn W, Husereau D, Abelson J, Barasa E, Bayani DD, Santos VC, et al. Designing and implementing deliberative processes for health technology assessment: A good practices report of a joint HTAi/ISPOR task force. Int J Technol Assess Health Care. 2022;38:e37.
18. Vahanian A, Beyersdorf F, Praz F, Milojevic M, Baldus S, Bauersachs J, et al. 2021 ESC/EACTS guidelines for the management of valvular heart disease. Eur Heart J. 2022;43:561–632.
19. Health Information and Quality Authority (HIQA). Health technology assessment of transcatheter aortic valve implantation (TAVI) in patients with severe symptomatic aortic stenosis at low and intermediate risk of surgical complications. HIQA; 2019.
20. Ontario Health. Transcatheter aortic valve implantation in patients with severe aortic valve stenosis at low surgical risk: A health technology assessment. Ont Health Technol Assess Ser. 2020;20:1–148.
21. Himmels JPW, Flottorp S, Stoinska-Schneider A, Kvist B, Robberstad B. Transcatheter aortic valve implantation (TAVI) versus surgical aortic valve replacement (SAVR) for patients with severe aortic stenosis and low surgical risk and across surgical risk groups: a health technology assessment. Report 2021. Oslo: Norwegian Institute of Public Health; 2021.
22. Haute Autorité de Santé (HAS). Commission Nationale d'Évaluation des Dispositifs Médicaux et des Technologies de Santé. EDWARDS SAPIEN 3, bioprothèse valvulaire aortique implantée par voie transfémorale (système COMMANDER): Avis de la CNEDiMTS. HAS; 2020.
23. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19:349–357.
24. Ontario Health. Transcatheter aortic valve implantation in patients with severe aortic valve stenosis at low surgical risk: Recommendation. Toronto (ON): Queen's Printer for Ontario; 2020.
25. Smith A, Argaez C. Transcatheter aortic valve implantation for aortic stenosis: A rapid qualitative review. Ottawa: CADTH; 2019.
26. Rumi F, Fortunato A, Antonini D, Siviero L, Cicchetti A. Analysis of heterogeneity of the different health technology assessment reports produced on the transcatheter aortic valve implantation in patients with severe aortic valve stenosis at low surgical risk. Front Cardiovasc Med. 2023;10:1204520.
27. Makady A, Ham RT, de Boer A, Hillege H, Klungel O, Goettsch W, GetReal Workpackage 1. Policies for use of real-world data in health technology assessment (HTA): A comparative study of six HTA agencies. Value Health. 2017;20:520–532.
28. Klein P, Blommestein H, Al M, Pongiglione B, Torbica A, Groot S. Real-world evidence in health technology assessment of high-risk medical devices: Fit for purpose? Health Econ. 2022;31 Suppl 1:10–24.
29. Charlton V, DiStefano M, Mitchell P, Morrell L, Rand L, Badano G, et al. We need to talk about values: a proposed framework for the articulation of normative reasoning in health technology assessment. Health Econ Policy Law. 2023:1–21.
30. Hofmann B, Cleemput I, Bond K, Krones T, Droste S, Sacchini D, Oortwijn W. Revealing and acknowledging value judgments in health technology assessment. Int J Technol Assess Health Care. 2014;30:579–586.
31. Subbiah V. The next generation of evidence-based medicine. Nat Med. 2023;29:49–58.
32. Kinchin I, Walshe V, Normand C, Coast J, Elliott R, Kroll T, et al. Expanding health technology assessment towards broader value: Ireland as a case study. Int J Technol Assess Health Care. 2023;39:e26.
33. Freitas L, Vieira ACL, Oliveira MD, Monteiro H, Bana ECCA. Which value aspects are relevant for the evaluation of medical devices? Exploring stakeholders' views through a Web-Delphi process. BMC Health Serv Res. 2023;23:593.
34. Loblova O, Trayanov T, Csanadi M, Ozieranski P. The emerging social science literature on health technology assessment: A narrative review. Value Health. 2020;23:3–9.