
OP67 “Black Box Bottleneck” Paradigm And Transparency Issues On Artificial-Intelligence-Based Tools In Health Technology Assessment: A Scoping Review

Published online by Cambridge University Press: 07 January 2025


Abstract

Introduction

One of the pillars of health technology assessment (HTA) is transparency, which guarantees reproducibility and accountability. Because of the "black-boxness" of artificial intelligence (AI) models, AI-based tools add new layers of complexity to transparency. The aim of this scoping review is to map AI-based tools applied in HTA processes with respect to human supervision and open-source availability.

Methods

A search strategy using the terms "AI," "HTA," and related terms was run in nine specialized databases (health and informatics) in February 2022. The inclusion criterion was publications testing AI models applied to HTA. Study selection was performed by two independent researchers, and no filters were applied. Variables of interest included the type of AI model (e.g., machine learning [ML], neural network), the learning method (supervised, unsupervised, or semi-supervised), and code availability (open source or closed source). Data were analyzed descriptively as frequency statistics.
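For illustration only, a search strategy of this kind might combine the two concepts as a Boolean query. The string below is a hypothetical sketch, not the authors' actual strategy (which is not reported in the abstract), and the quoted-phrase syntax is an assumption.

```python
# Hypothetical Boolean search string combining the "AI" and "HTA" concepts;
# NOT the authors' actual strategy, which the abstract does not report.
query = (
    '("artificial intelligence" OR "machine learning" OR "deep learning") '
    'AND ("health technology assessment" OR "HTA")'
)
print(query)
```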

Results

ML models with a single layer of hidden nodes were applied in 48 studies (78.6%), while deep learning (DL) models (two or more hidden layers) were applied in eight (13.1%). Half of the reported ML models used supervised learning and half used unsupervised learning. Among the DL models, seven used unsupervised learning and one used supervised learning. Four studies did not report the AI model, and 14 did not report the supervision paradigm. Open-source status could not be assessed in 31 studies; among the identified software, 13 models were open source and seven were not.
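As a minimal, illustrative sketch (not the authors' analysis code), the frequency statistics above can be tabulated as follows. The total of 61 included studies is an assumption back-calculated from the reported percentages (8/61 ≈ 13.1%); the counts are taken directly from the abstract and need not sum to the total.

```python
# Illustrative tabulation of the reported model counts as frequency
# statistics, as described in Methods. Counts come from the abstract.
from collections import Counter

TOTAL_STUDIES = 61  # assumed denominator, back-calculated from 8 ≈ 13.1%

model_counts = Counter({
    "ML (single hidden layer)": 48,
    "DL (two or more hidden layers)": 8,
    "AI model not reported": 4,
})

for model, n in model_counts.items():
    # Small rounding differences from the abstract (e.g., 78.7% vs. 78.6%)
    # may reflect a slightly different denominator in the original analysis.
    print(f"{model}: {n}/{TOTAL_STUDIES} ({n / TOTAL_STUDIES:.1%})")
```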

Conclusions

Transparency and accountability are of utmost importance to HTA. The complexity of AI models may introduce trustworthiness issues into HTA. The transparency provided by open-source code is therefore essential for building trust in the automation of HTA processes, as is the quality of reporting. Although progress has been observed in both transparency and reporting quality, the lack of a methodological framework still poses challenges in the field.

Type
Oral Presentations
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press