
A national unmet needs assessment for CTSA-affiliated electronic health record data networks: A customer discovery approach

Published online by Cambridge University Press:  03 October 2024

Nallely Mora*
Affiliation:
Parkinson School of Health Sciences and Public Health, Loyola University Chicago, Chicago, IL, USA Institute for Translational Medicine, Loyola University Chicago, Chicago, IL, USA
Madeline Mehall
Affiliation:
Parkinson School of Health Sciences and Public Health, Loyola University Chicago, Chicago, IL, USA Institute for Translational Medicine, Loyola University Chicago, Chicago, IL, USA
Lindsay A. Lennox
Affiliation:
Colorado Clinical and Translational Sciences Institute, Aurora, CO, USA
Harold A. Pincus
Affiliation:
Irving Institute for Clinical and Translational Research, Columbia University and New York State Psychiatric Institute, New York, NY, USA
David Charron
Affiliation:
Haas School of Business, NSF and NIH I-Corps Programs, University of California, Berkeley, CA, USA
Elaine H. Morrato
Affiliation:
Parkinson School of Health Sciences and Public Health, Loyola University Chicago, Chicago, IL, USA Institute for Translational Medicine, Loyola University Chicago, Chicago, IL, USA
*
Corresponding author: N. Mora; Email: [email protected]

Abstract

Introduction:

The expansion of electronic health record (EHR) data networks over the last two decades has significantly improved the accessibility of, and processes around, data sharing. However, a gap remains in meeting the needs of Clinical and Translational Science Award (CTSA) hubs, particularly related to real-world data (RWD) and real-world evidence (RWE).

Methods:

We adopted a mixed-methods approach to construct a comprehensive needs assessment that included: (1) A Landscape Context analysis to understand the competitive environment; and (2) Customer Discovery to identify stakeholders and the value proposition related to EHR data networks. Methods included surveys, interviews, and a focus group.

Results:

Thirty-two CTSA institutions contributed data for analysis. Fifty-four interviews and one focus group were conducted. The synthesis of our findings pivots around five emergent themes: (1) CTSA segmentation needs vary according to resources; (2) Team science is key for success; (3) Quality of data generates trust in the network; (4) Capacity building is defined differently by researcher career stage and CTSA existing resources; and (5) Researchers’ unmet needs.

Conclusions:

Based on the results, EHR data networks like ENACT that would like to meet the expectations of academic research centers within the CTSA consortium need to consider filling the gaps identified by our study: foster team science, improve workforce capacity, achieve data governance trust and efficiency of operation, and aid Learning Health Systems with validating, applying, and scaling the evidence to support quality improvement and high-value care. These findings align with the NIH NCATS Strategic Plan for Data Science.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Introduction

The landscape of Electronic Health Record (EHR) data networks available to academic institutions in the United States (U.S.) has expanded and become more complex over the last two decades. This is partly due to significant national investment resulting from the American Recovery and Reinvestment Act of 2009, including the HITECH Act to promote and expand the adoption of health information technology [Reference Lite, Gordon and Stern1,Reference Burke2] and the establishment of the Patient-Centered Outcomes Research Institute (PCORI) and Agency for Healthcare Research and Quality (AHRQ)-funded data networks to enable patient-centered comparative effectiveness research [Reference Gabriel and Normand3,Reference Simpson4]. Congressional funding further established the Sentinel Network to improve how the U.S. Food and Drug Administration (FDA) evaluates the safety and performance of medical products; the network currently exceeds 425 million person-years of observational electronic health (EH) data [Reference Behrman, Benner, Brown, McClellan, Woodcock and Platt5,6]. Furthermore, the 21st Century Cures Act, enacted in 2016, sought to accelerate the discovery, development, and delivery of new therapies aiming to cure prominent diseases using real-world evidence (RWE) from EH data sources for decision-making affecting drug approval and reimbursement [Reference Klonoff7–9]. A glossary of terms used throughout the manuscript is available in Figure 1.

Figure 1. Glossary of terms.

Most recently, demands for rapid surveillance and outcomes assessment during the COVID-19 pandemic further accelerated proliferation and use of EHR data for clinical and public health decision-making [Reference Madhavan, Bastarache and Brown10].

Since 2011, the National Institutes of Health (NIH), through its National Center for Advancing Translational Sciences (NCATS), has funded, among other initiatives, a national network of academic medical research institutions, now more than 60 hubs, through its Clinical and Translational Science Awards (CTSA) program to advance health informatics research and translation. Specifically, NCATS envisions “progressing toward a standards-based, interoperable network in a cloud environment where informatics assets, e.g., data, software, and algorithms, can be co-developed and shared across the CTSA consortium [11].”

To that end, NCATS has invested in technologies, like the Informatics for Integrating Biology and the Bedside (i2b2) and the Shared Health Research Information Network (SHRINE). i2b2 is an open-source clinical data warehousing and analytics research platform that enables sharing, integration, standardization, and analysis of heterogeneous data from healthcare and research [12]. SHRINE is a web-based query tool that allows investigators to determine the aggregate total number of patients at participating hospitals who meet a given set of inclusion and exclusion criteria. That investment enabled national implementation of the Accrual to Clinical Trials (ACT) Network using SHRINE to enable real-time querying of i2b2-based clinical data warehouses (CDWs) whose data were harmonized with the ACT ontology [13]. The ACT Network was adopted in over 80% of CTSA programs to provide a platform that helps clinical investigators query EHR data, representing over 142 million patients, in real-time cohort discovery to establish the feasibility of a clinical trial protocol for grant applications [Reference Morrato, Lennox and Sendro14]. State-based and regional data networks have also been enabled by these technologies, such as iTHRIV (Virginia), OneFlorida Data Trust, UCReX (California), and Regenstrief Institute (Indiana) [Reference Loomba, Wasson and Chamakuri1519]. The Observational Medical Outcomes Partnership (OMOP) is another Common Data Model adopted by many CTSAs, including Albert Einstein College of Medicine – Montefiore Health, Columbia University, and Stanford [20,21]. An example of rapid NCATS informatics investment, powered by CTSA collaboration, was the National COVID Cohort Collaborative (N3C) Data Enclave. 
N3C is one of the largest collections of clinical data in the USA, drawing on over 60 healthcare institutions and containing records on over 7 million COVID-19-positive patients and related EHR data, allowing researchers to study patient outcomes since 2020 [Reference Klonoff7].
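For readers unfamiliar with how SHRINE-style cohort discovery differs from row-level data sharing, the following minimal sketch illustrates the idea: a site answers a query with only an aggregate patient count meeting inclusion/exclusion criteria. The simplified schema, concept codes, and data are hypothetical illustrations loosely modeled on an i2b2-style fact table, not ENACT or SHRINE source code.

```python
# Illustrative sketch: aggregate cohort counting against a simplified,
# hypothetical i2b2-style "observation_fact" table. Only the count leaves
# the site; no patient-level rows are returned.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE observation_fact (
        patient_num INTEGER,
        concept_cd  TEXT  -- e.g., a diagnosis code mapped to a shared ontology
    );
    INSERT INTO observation_fact VALUES
        (1, 'ICD10:E11'),  -- type 2 diabetes
        (1, 'ICD10:I10'),  -- hypertension
        (2, 'ICD10:E11'),
        (3, 'ICD10:I10');
""")

# Inclusion criterion: diabetes; exclusion criterion: hypertension.
count = conn.execute("""
    SELECT COUNT(DISTINCT patient_num) FROM observation_fact
    WHERE concept_cd = 'ICD10:E11'
      AND patient_num NOT IN (
          SELECT patient_num FROM observation_fact
          WHERE concept_cd = 'ICD10:I10')
""").fetchone()[0]
print(count)  # only patient 2 meets both criteria -> 1
```

In a federated network, each participating site would run the equivalent query locally against its own harmonized warehouse, and the querying investigator would see only the per-site aggregate counts.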

NCATS also strongly encourages CTSA hubs to utilize and leverage existing data resources, such as the PCORI-supported National Patient-Centered Clinical Research Network (PCORnet), to promote data reproducibility and generalizability across datasets and further accelerate the translational research process [22]. PCORnet is a fully integrated network that offers RWE on more than 30 million people across the USA through partnerships with Clinical Research Networks. In the private sector, several EHR database tools are also utilized by CTSA hubs, such as TriNetX, a global health research network that provides real-world data (RWD) on over 250 million patients for protocol design and feasibility, site selection, and patient recruitment [23]. Many CTSA hubs contribute data to this network because it provides an easy cohort discovery tool that outperforms other networks in ease of use and in the efficiency of the clinical trial process, utilizing RWD analysis to generate RWE [Reference Palchuk, London and Perez-Rey24,Reference Topaloglu and Palchuk25].

The proliferation of EHR data for clinical and translational research is exciting, and with growing demand, there is increased opportunity to enhance its usability. Moreover, the NIH Strategic Plan for Data Science (2024–2029) emphasizes optimizing data infrastructure, modernizing harmonization and data ecosystems, making data management analytics and tools accessible, enhancing the data-science workforce, and ensuring sustainable data stewardship [26]. However, CTSA hubs vary substantially in their capacity to address informatics challenges in data collection, quality, and harmonization; methodology for deep phenotyping (e.g., discrete and textual data); maintaining patient privacy; ontology harmonization; and ability to transfer data beyond institutional firewalls. Convenient, readily available, and easily accessible tools and solutions that address these problems are required. CTSA hubs also seek guidance on the complicated governance needed to enable data sharing and analysis of shared data (e.g., Data Use Agreements (DUAs)); many are seeking to focus limited resources on preferred networks rather than supporting all available ones. Failure to address these needs stymies collaboration and innovation among CTSA hubs, limits efficient use of funds available for advancing translational science, and ultimately impacts efficient and effective healthcare delivery.

NCATS funded the Evolve to Next-Gen ACT Network (ENACT) to make it easier for investigators at CTSA hubs to conduct impactful research and inform clinical decision-making using real-world EHR data [Reference Loomba, Wasson and Chamakuri15,Reference Morrato, Lennox and Dearing27]. Thus, aligning with the NIH NCATS Strategic Plan for Data Science goals, the ENACT dissemination and evaluation (D&E) workgroup conducted a national needs assessment among CTSAs to better understand the competitive landscape and identify recommendations for ENACT and other RWD EHR networks operating within the CTSA consortium.

Methods

Study design

A mixed-methods evaluation was conducted using business start-up principles and methods for assessing problem-solution and product-market fit for technology startups [Reference Morrato, Hamer, Sills, Kwan and Schilling28,29]. Our conceptual framework included designing-for-dissemination-and-sustainability principles from implementation science, which incorporate stakeholder feedback early, at the stages of product conceptualization and demonstration, to enhance Fit-to-Context for both CTSA hubs and EHR data users at academic research institutions [Reference Kwan, Brownson, Glasgow, Morrato and Luke30]. First, we conducted a Landscape Context analysis to understand the competitive environment, in this case, electronic health data alternatives used for conducting health services research in the USA [Reference Peteraf Margaret31]. Second, the Customer Discovery method was used to identify strong value proposition(s) for EHR data networks for different customer segments within the CTSA consortium.

For the competitive landscape analysis, electronic health data sources were categorized into health survey and surveillance data, EHR data, administrative (claims) data, combined EHR and administrative (claims) data, multidimensional linked datasets (biorepositories, genomic data, etc.), and emerging RWD sources enabled by artificial intelligence (AI) and natural language processing (NLP). Electronic health data products were subcategorized based on attributes like data governance (public or private) and cost of data acquisition (free vs. fee) (Supplement Appendices 3 and 4).

The customer discovery and value proposition methodology aimed to identify customers/stakeholders’ perceptions of (i) the most important “jobs to be done” for EHR data users; (ii) the critical “pains” and “gains” of different types of EHR data available at their institutions; and (iii) metrics of success as defined by different customer/stakeholder segments [Reference Osterwalder, Pigneur, Bernarda and Smith32]. The stakeholders included CTSA hub partners (principal investigators, informaticians, and statisticians) and local end-users (clinical and translational investigators). We employed the I-Corps™@NCATS methodology, which has informed intervention conceptualization and demonstration for numerous health innovations [Reference Morrato, Lennox and Sendro14,Reference Morrato, Lennox and Dearing27,Reference Morrato, Hamer, Sills, Kwan and Schilling28,Reference Blank33,Reference Nearing, Rainwater and Neves34]. The process yields validated message framing about the value of a product idea, a value proposition, from the customer’s perspective in the context of competing alternatives, to guide dissemination strategies and promote adoption [Reference Kwan, Brownson, Glasgow, Morrato and Luke30].

Study team

The study team was comprised of researchers representing four CTSA hubs who are part of the D&E workgroup of the ENACT network. In addition, a national I-Corps instructor, who led the development of the I-Corps™@NIH and I-Corps™@NCATS programs, independently conducted the interviews to reduce confirmation bias. Preliminary results were shared with the project’s Dissemination Advisory Board (DAB) comprised of experts in dissemination research and practice; marketing; and pharmaceutical and health informatics commercialization (See Supplement Appendix 1).

Study sample

Participants were recruited through a combination of targeted and snowball sampling. Leveraging an existing EHR network (ENACT), we solicited input via survey from the principal investigators (PIs) of each CTSA hub enrolled in the network. Participation was voluntary and requested via email by the ENACT leadership team. Survey data included the names of identified stakeholders for RWD research databases at each institution, used for soliciting interviews and relevant referrals. Additionally, we conducted a literature review to identify publications that used EHR networks like ACT, N3C, and TriNetX; authors belonging to a CTSA hub were contacted to solicit an interview.

Ethical considerations

The study was determined exempt by Loyola University Chicago’s Institutional Review Board (LU IRB #217691110723). Interviewees gave verbal consent before the interview. Participants were not compensated.

Data collection

Data were collected via survey, interview, and focus group, between February and May 2023.

Survey

The survey included public data elements such as institution name, names of the CTSA hub PI, and key personnel including leaders from informatics; learning healthcare systems; statistics; and other researchers that utilized RWD at their institution. We did not collect additional personal information.

Interviews

The main objective was to gather participants’ input on the existing EHR data sources at their institutions. The interview guide followed the customer discovery framework to learn about current practices, perceived barriers, and facilitators concerning RWD utilization for research and translational science, based on participants’ roles in the academic healthcare ecosystem. We targeted 50 interviews, based on the I-Corps™@NCATS program recommendations.

Questions were open-ended and adjusted in an iterative process (Interview Guide, Supplement Appendix 2). One-on-one interviews were conducted via phone or virtually (Zoom), with an average duration of 40 minutes. Zoom interviews were recorded and transcribed with Otter AI software (https://otter.ai/). The customer discovery team debriefed after each interview.

Focus group

A 1.5-hour focus group was conducted with informaticists and health informatics researchers interested in RWD/RWE research in the CTSA environment. Structured questions about the future of health informatics and about barriers and facilitators to achieving informatics RWD goals guided the discussion (Supplement Appendix 2).

Data analysis

Survey data informed descriptive statistics, including participating CTSA hubs and the roles of interviewees per site. Analysis was supported by Excel software.

Interview and focus group data

Transcripts and notes from interviews served as the primary data source for analysis. Interview transcripts were analyzed by three investigators (DC, NM, MM) using rapid qualitative analysis methods, which involved: (1) creating a matrix for codes, (2) establishing interrater reliability by independently coding three interviews and generating consensus, (3) independently coding the remaining interviews, and (4) summarizing themes [Reference Gale, Wu and Erhardt35]. Following the customer discovery and value proposition framework, relevant themes were identified and grouped by interviewee role [29,Reference Osterwalder, Pigneur, Bernarda and Smith32]. Qualitative analysis was supported by NVivo 12 software (QSR International Pty Ltd, 2020; lumivero.com). The research team, customer discovery team, and DAB met regularly to discuss findings.
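Step (2), establishing interrater reliability, is often quantified with a chance-corrected agreement statistic such as Cohen’s kappa. The study itself reports a consensus-based process rather than a specific statistic, so the following is only a minimal sketch of how two coders’ agreement could be checked; the code labels are hypothetical, not the study’s actual codebook.

```python
# Illustrative sketch: Cohen's kappa for two coders who independently
# applied one code per excerpt to the same set of interview excerpts.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of excerpts where the two coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, given each coder's label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical labels for five excerpts (codebook names are invented)
coder_a = ["team_science", "data_quality", "capacity", "data_quality", "unmet_need"]
coder_b = ["team_science", "data_quality", "capacity", "unmet_need", "unmet_need"]
kappa = cohens_kappa(coder_a, coder_b)
print(round(kappa, 2))
```

In practice, coders would compute agreement on the first few transcripts, discuss discrepancies to consensus, and then code the remainder independently, as the steps above describe.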

Results

Landscape Analysis

Figure 2 shows the landscape of alternative public and private EH data sources available in the USA, by type of data (administrative claims, EHR, linked data). The list of available options is large and requires expertise to know which datasets are convenient and able to answer specific research questions. The attributes considered during the competitive landscape analysis were governance (open access vs. membership), cost, and scale (national vs. international); information was obtained from the networks’ websites and existing literature (Supplement Appendix 3). We identified comparable CTSA-based EHR data networks that are public RWD sources, like ACT, N3C, and ENACT. Other relevant product attributes are specified in Supplement Appendix 4, representing the competitive alternatives for researchers using EHR RWD within the CTSA ecosystem. The product gap analysis showed that EH data products available to CTSA hubs are numerous.

Figure 2. Landscape of alternative electronic health datasets available in the USA for research.

Customer discovery findings

A total of 32 institutions with membership in 27 CTSA hubs participated in the survey and/or interview process, approximating a 57% participation rate among the 57 institutions currently participating in the ENACT network. Between April and August 2023, we conducted 54 one-on-one interviews and one focus group with seven individuals, for a total of 61 individuals contributing qualitative data for analysis. Table 1 shows the study population characteristics by CTSA hub contributing data.

Table 1. Characteristics of the study population by CTSA sites contributing to interview and/or focus group data

*CTSA PI = CTSA principal investigator.

The 61 interviewees comprised 8 CTSA PIs, 16 informaticists, 17 statisticians, and 20 researchers. Some individuals “crossed over” between groups, representing different points of view; for allocation purposes, the role reported in the descriptive analysis is the one identified by their CTSA hub.

The next section describes the findings from the data analysis obtained through the customer discovery methodology. Results are organized by five emergent themes that informed the value proposition for each customer segment. Illustrative quotes and value propositions by customer segmentation are described in detail in Tables 2 and 3.

Table 2. Summary of customer discovery findings describing value proposition by CTSA segment

RWD = Real-World Data; CTSA = Clinical and Translational Science Award; RWE = Real-World Evidence.

*CTSA hub size based on NIH NCATS award < $30M designated as small, and ≥ $30M large/medium.

Table 3. CTSA customer segmentation in relation to real-world data use for clinical and translational research

*CTSA hub size based on award <$30M designated as small, $30M to $49M as medium, and ≥ $50M large.

CTSA segmentation needs vary according to resources

We created an ecosystem map of the CTSA health informatics ecosystem (organizational structures, influencers, and incentives) to identify our target customers and “end users,” and to characterize CTSA archetypes that help target specific customer segments when developing EHR network sustainability strategies (Table 3 and Supplement Appendix 5).

The findings on customer segmentation related to RWD within the CTSA environment identified two archetypes of CTSA hubs, large/medium and small, based on NIH award size (< $30M designated small; ≥ $30M large/medium), reflecting the resources they might have to devote to EHR data acquisition, sustainment, and training. Stakeholders were categorized as: (a) End users: researchers, informaticians, and statisticians; (b) Influencers: CTSA hub PIs and cluster leaders; and (c) Decision makers: CTSA hub PIs and leads of clinical research offices.

Results show that the interest and capacity to support large, federated data networks varied between CTSAs, specifically according to institution size, system support, and funding. Highly resourced (large) CTSAs have greater capacity to “add one more network” and often intentionally belong to many rather than endorsing a preferred network. In contrast, low-resourced (small) CTSAs need funding to adopt and maintain the EHR network platform.

Small institutions focus on allocating budget and resources according to grant cycles and success, and their researchers need more resources to perform research with RWD. Consequently, small institutions learn to operate with small budgets and limited staff and resources.

“I know enough to be dangerous, but I do not know enough to analyze those large datasets.” -Clinical Researcher.

In medium-sized institutions, researchers with combined skills and roles (e.g., clinician and informatician) begin to appear. Some have an informatics team that helps support researchers; however, budgets are still constrained and grant-dependent.

“We’re still debating about no funding for “X” network. We have a lot of needs. And we have two large networks currently unfunded, this being one of them. And so, we’re trying to determine where we have the ability to provide staff from our infrastructure, baseline funding from the hospital, versus what we might have to forego because of that.” -CTSA PI.

Large institutions tend to have well-defined teams and workgroups that support others in the research process. The range of RWD researchers is wide, ranging from senior to new and skilled to novice. Support resources are available, but there are still concerns about capacity building. Local influencers-decision makers are those who have successfully secured grant dollars.

“Until we have a bigger bench of individuals who have the appropriate training and clinical bioinformatics, we need this as a core service.” -CTSA PI, researcher.

Workflow and decision-making on research activities involving RWD vary between CTSA hubs. In medium to large institutions, the process starts with the lead researcher generating a research idea and the PI assessing the feasibility of the study, which is when the cohort size for the project is first considered. The main influencers are the existing collaborator network, largely defined by the line of research; collaborators suggest an RWD network they are familiar with and reach out to the core facility to explore options or ask for suggestions, with core leaders or staff advising the decision makers on which dataset will be used. The statistician then weighs in on whether the dataset is adequate to answer the research questions or whether the data source must be re-scoped. Across all institutions, these individuals, when available, have strong influence over research design and cohort development. Some researchers take on the statistical analysis on their own and stick to existing databases. Supplemental Appendix 5 shows the ecosystem map of this workflow.

“At least for my group is a team-based effort, because I may simply not even know that database exists. But somebody on the team might know and have access and expertise.” -Researcher, Informaticist.

Team science is key to success

Many interviewees highlighted the importance of teaming to be successful with research involving RWD, with a team that at minimum includes a researcher, an informatician, and a statistician. The team science approach was a consistent concept across all CTSA sizes and underscores the importance of multidisciplinary collaboration.

“A multidisciplinary team…It’s these large groups that are the most successful at accessing [and analyzing] the data.” -CTSA PI.

Quality of data generates trust in the network

Data quality requirements, types of data needed, and expectations vary significantly among types of clinical and translational researchers.

“We [can] broadly group people into investigators who know they need data and know how to use it … the investigators who appreciate that they could probably benefit from data, but they do not know how to use it. … And there are those who do not even realize that they need data and would benefit from using it. And most often, they also do not know how to use it anyway.” -CTSA PI.

Methodologically skilled researchers (e.g., clinical/pharmaco epidemiologists, health services researchers, and informaticists) conduct sophisticated hypothesis-testing research which requires greater data granularity than an aggregated de-identified dataset provides. They are comfortable seeking and using other available datasets, so they place higher value on quality of the data than on ease of access to the data.

“The whole issue comes down to the quality of the data available, and the ability to access the data and by quality.” -CTSA PI, Informaticist.

Capacity building is defined differently by researcher’s career stage and CTSA’s existing resources

The need to build capacity and foster collaboration was a consistent theme in our interviews with potential users of federated databases. Most researchers utilize data from their own local and trusted sources or collaborators. The need lies in capacity-building catalysts, training tools, and a place to scale when a big sample size is needed (“Big N”). Interviewees often were satisficing in their dataset decisions: just enough data to publish was sufficient, and the benefit of extending that dataset to a “Big N” was not compelling even when they might have been able to access it through a national network.

“It’s the institutions and collaborations, and then you try to if you can make the data work.” -Informaticist.

Novice data science researchers either want to learn the skills (e.g., graduate students/fellows) or outsource the skills concierge-style (e.g., clinical investigators).

“[Training] It’s a substantial time investment that I don’t currently have.” -CTSA PI, researcher.

Researchers’ unmet needs

There are currently unserved researchers – e.g., public health/population health, implementation science, and learning health system investigators – who could represent a “new target segment,” an opportunity for new use cases and scale-up of existing EHR networks. A network to investigate the implementation of policies and quality indicators would also attract interest, as would networks that enable a collaborative learning health system. This taps into different communities within the CTSAs to expand their user base and reach.

“Build systems that enable new capabilities, solve problems/issues and are interoperable and reusable in other places.” – Researcher, Informaticist

A desired feature was the ability to perform descriptive statistics and explore variables on the same platform.

“We want to check a few features to see whether they are by a known bias for example, population times demographic features, whether they are bias and the longitudinal data availability.” -Researcher

A clinical decision support network emerged as a strong unmet data need – specifically, a sandbox to validate generalizability of something developed locally and test its scaling. Many seek a “GitHub”-like agnostic learning network to share informatics tools and aggregate data “add-ons,” making the process easier.

“Having access to source code is the best thing.” -Informaticist

Future desired features included the addition of clinical notes by using AI and NLP, which calls for adapting existing data networks; some institutions are already working on those applications and their integration with existing EHRs.

“But then there’s more data, meaning like …maybe you can extract content concepts from clinical notes, or link to images or other kinds of things…making the network more useful.” -Researcher.

With the insights from our customer discovery, we developed a value proposition for each customer segment. Table 4 summarizes the value proposition by attributes and customer (stakeholder) segmentation. One benefit of generating a value proposition is defining the “Minimum Viable Product” (MVP), the smallest set of product features and capabilities necessary for release, to which more functionality is slowly added as needed while collecting validated learnings in the process [Reference Blank33,Reference Ries36]. The MVP concerning EHR database networks was the same across customer segments (stakeholders).

Table 4. Value proposition by attributes and customer segment

“A supportive governance structure, data quality, efficient and easy to use, and supported by a training center that will foster skills acquisition and collaboration across the network”

However, there was a difference in how these benefits needed to be demonstrated by customer type. The respondents indicated that use cases were essential to prove the value of the RWD network available versus current practice and competing alternatives.

Discussion

A wealth of EHR data networks is available to clinical and translational researchers at US academic institutions. Over the past two decades, the number and accessibility of these networks have increased, infrastructure and processes for access and data governance have improved, and both informatics tools and informatics departments have proliferated at US academic medical centers. Trends point toward integration with imaging and laboratory results and toward harmonization among different institutions. Some networks are currently working on integrating social determinants of health (SDOH) data into existing platforms to enable large-scale data analysis for research (e.g., the Sociome Data Commons) [Reference Tilmon, Nyenhuis and Solomonides37].

Given the plethora of options, we believe our systematic evaluation of unmet needs amongst CTSA hubs and clinical and translational scientists is timely. Moreover, the development of new EHR data networks must include a sustainability plan grounded in clearly articulating a differentiating value proposition relative to existing options. The I-Corps@NCATS customer discovery approach is an implementation science methodology well-suited for quickly uncovering unmet needs, elucidating a network’s value proposition, and understanding evolving “fit-to-context” requirements necessary for adoption and sustainability [Reference Kwan, Brownson, Glasgow, Morrato and Luke30].

Some networks have published their lessons learned to inform dissemination and sustainability strategies [Reference Palchuk, London and Perez-Rey24,Reference Morrato, Lennox and Dearing27,Reference Nearing, Rainwater and Neves34,Reference Pace, Fox, White, Graham, Schilling and West38]. Their results show three prominent themes: the importance of data network maturity (governance, quality, and management), commercial viability considerations as a proxy for sustainability, and stakeholder support among data stewards, informaticists, and researchers. However, the emergence of new networks with new attributes changes the market, and as end users’ needs evolve, sustainability strategies must be reevaluated continuously.

With that spirit, our group conducted a needs assessment of EHR data networks among academic research centers within the CTSA environment to understand the current competitive landscape and unmet needs, and to identify recommendations for RWD networks.

Our key findings are that there is no need for a new dataset: major government and nonprofit players (Sentinel, PCORnet, VA, OCHIN) and private-sector players (EPIC, Oracle, IBM, TriNetX, IQVIA) are already leading, and competing, in this area. The clinical and translational science gaps lie instead in the following areas: (1) team science, with new methodological approaches to test and validate generalizability and to implement and scale findings to impact healthcare delivery and public health decision-making; (2) data governance trust and efficiency of operation (in the context of ever-evolving cybersecurity), where the field is now at the stage of “economies of scale”; and (3) enabling learning health systems (LHS) that can generate, adopt, and apply evidence to support quality improvement (QI) and high-value care (computer systems/AI, continuous QI) and RWD/RWE (FDA, pricing/reimbursement). Details are provided in Table 5.

Table 5. Expectations for EHR data networks within CTSA academic health centers

Health informatics translational research involving RWD is a team sport requiring expertise in data informatics, statistical analysis, and the subject matter (clinical or public health). Forming and sustaining research teams, especially when one area of expertise is limited at the local site, is a key translational hurdle and pain point. According to our findings, a team science approach positively impacts translational research; this multidisciplinary collaboration requires, at minimum, a researcher, an informaticist, and a statistician. Collaboration with other institutions is desirable but limited. Therefore, a “matching service” and/or investment in workforce capacity and development through centralized training and learning labs would address this need and fill the gap.

Our interviews found that centralized networks were favored over federated networks for reasons such as trust in data quality, awareness of and experience with local regulatory requirements (IRB, DUA), and familiarity within research teams. While academic research centers are interested in joining federated networks, governance is a barrier: most institutions want to retain their data within institutional boundaries, perceiving that this saves the time and effort required to obtain the necessary legal and other approvals (IRB, DUA). Elements identified as important when belonging to a federated network were (i) ease of inter-member data sharing; (ii) no-cost data visualization software and hardware for institutional use; and (iii) support and opportunity for collaboration.

Trust plays a significant role when selecting a data network. Access to large quantities of clinical data from operational EHRs holds much promise; however, a major concern is that data not collected systematically for research will be of poorer quality, introducing bias into findings generated from these data. Elements of trust included data provenance, quality control processes, data feeds, and the ability to check clinical parameters. Data quality has been addressed previously by other data networks by assessing the conformance, completeness, and plausibility of each contributing organization’s data against the network [Reference Callahan, Bauck and Bertoch39–Reference Kahn, Callahan and Barnard41]. RWD networks like PCORnet and TriNetX integrate data quality checks into their services, gaining users’ trust by using centralized checkpoints to surface site-to-site differences and optimize data quality [Reference Pfaff, Girvin and Gabriel42].
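The three data-quality dimensions named above (conformance, completeness, and plausibility, following Kahn and colleagues’ framework) can be illustrated with a minimal sketch. The field names, schema, and plausibility ranges below are invented for illustration only and do not represent the actual checks run by any network:

```python
# Illustrative sketch of Kahn-style data-quality checks on one site's records.
# All field names, types, and clinical ranges here are hypothetical.

def check_conformance(record, schema):
    """Conformance: values match the expected types declared in `schema`."""
    return all(isinstance(record.get(field), typ) for field, typ in schema.items())

def check_completeness(records, required_fields):
    """Completeness: fraction of records with no missing required fields."""
    complete = sum(
        1 for r in records
        if all(r.get(f) is not None for f in required_fields)
    )
    return complete / len(records) if records else 0.0

def check_plausibility(record, ranges):
    """Plausibility: numeric values fall within clinically believable ranges."""
    return all(
        lo <= record[field] <= hi
        for field, (lo, hi) in ranges.items()
        if record.get(field) is not None
    )

schema = {"patient_id": str, "age": int}
required = ["patient_id", "age", "systolic_bp"]
plausible = {"age": (0, 120), "systolic_bp": (50, 250)}

records = [
    {"patient_id": "A1", "age": 54, "systolic_bp": 128},
    {"patient_id": "A2", "age": 47, "systolic_bp": None},   # incomplete
    {"patient_id": "A3", "age": 230, "systolic_bp": 118},   # implausible age
]

print(check_conformance(records[0], schema))       # True
print(check_completeness(records, required))       # 0.666... (2 of 3 complete)
print(check_plausibility(records[2], plausible))   # False (age out of range)
```

In a centralized checkpoint of the kind the text describes, summaries like these would be computed per contributing site and compared across the network to surface site-to-site differences.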

Data science and health informatics are among the methods commonly used by LHS, with the overarching goal of enabling LHS scientists to design, conduct, implement, disseminate, and scale patient-centered outcomes research that improves the quality of care and patient outcomes in diverse health systems. With that, healthcare organizations are applying the evidence to support QI and high-value care. However, not all organizations have the same resources, which is why belonging to a network that amplifies data sources is important. Our findings indicate that people are excited about the concepts of “learning health systems” and “evidence-generating health systems.” These concepts were left largely undefined, but the directionality was clear: bridging the clinic and existing databases into a single system. There was recognition that the field is evolving because technical capabilities are still changing, expressed as (a) this change unlocks many research opportunities, and (b) hiring people to execute those opportunities is difficult because commercial companies pay more than academic healthcare centers.

Overall, our findings are consistent with the operational factors described for building and operating a successful data network, including (i) purpose, governance, and funding; (ii) network management oversight; (iii) analytic approach and querying technology; (iv) privacy, confidentiality, and security; and (v) transparency [Reference Topaloglu and Palchuk25]. Addressing these factors will affect the success and sustainability of a data network. Sustaining high-value RWD networks is costly and resource-consuming, and federal funding is ill-suited to sustaining these initiatives beyond the awarded grant cycle.

Academic institutions frequently face scarce resources to sustain data networks and often have competing interests and priorities, as well as other opportunities available for “free.” The NIH Strategic Plan for Data Science identifies the growing cost of managing data as a pressing issue in the data-resource ecosystem, especially in “siloed” centers that do not support data sharing; careful planning of how resources are spent is necessary to obtain the most benefit from investments [26]. One option is to partner with the private sector to advance translational technologies, bringing private-sector expertise and financial and technical resources to the ultimate goal of improving patients’ health. There is an opportunity to marry the informatics and data science research expertise available in academic networks, such as CTSA consortia, with the dissemination and user-experience resources available from private-sector EHR data networks (e.g., EPIC, TriNetX), allowing translational science to move forward effectively. However, inherent conflicts of interest may make such partnerships challenging.

The customer discovery methodology allowed us to understand the points of view of different customer segments, enabling the development of an MVP for EHR data networks. Addressing the minimal needs of end users of EHR data networks is just the beginning of a sustainable product. As described previously, sustaining a network relies on transparent network governance, trustworthy data, shareability, collaboration and training across the network, and funding for infrastructure maintenance [Reference Palchuk, London and Perez-Rey24,Reference Morrato, Hamer, Sills, Kwan and Schilling28,Reference Nearing, Rainwater and Neves34]. In addition, it is important to collectively envision short-term and long-term success metrics and to build use cases that fortify the network, secure ROI, and demonstrate worth to new users, building the community.

Conclusion

Academic research centers belonging to the national CTSA consortium are crucial to achieving the goals of the NIH NCATS Strategic Plan for Data Science, specifically, developing innovative partnerships and collaborations with a strategic array of stakeholders to achieve long-term sustainability. EHR data networks like ENACT that aim to meet the expectations of academic research centers within the CTSA consortium should consider filling the gaps identified by our study: foster team science, improve workforce capacity, achieve data governance trust and efficiency of operation, and enable LHS that can generate, adopt, and apply the evidence to support QI and high-value care.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/cts.2024.609.

Acknowledgements

We want to thank the CTSA hubs’ communications teams, principal investigators, bioinformaticians, and researchers for volunteering their time and offering valuable insights. Without their participation, our project would not have been possible. We are grateful for the wisdom shared by additional members of our Dissemination Advisory Board, including Anne T. Coughlan, James W. Dearing, Charles Friedman, Deborah Goeken, Wayne Guerra, and Jerry D. Shelton.

Author contributions

NM, MM, HP, LL, DC, and EM wrote and edited the manuscript. NM, DC, and MM collected and analyzed the data. All authors contributed to the article and approved the submitted version.

Funding statement

EHM has received funding for consulting services on issues of drug safety from Eli Lilly, Health Care Service Corporation, Inc., i2o, Humana Inc., Ipsen, Molina Healthcare, Inc., Reata Pharmaceuticals, Syneos Health, and United Healthcare Services.

Research reported in this publication was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under awards 1U24TR004111-01 and UL1TR001417-02S1. In addition, we acknowledge participation from the following CTSA Program Consortium hubs, supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under CTSA grant numbers UL1TR001430, UL1TR001873, UM1TR004408, UM1TR004402, UL1TR001450, UL1TR002369, UL1TR002014, UM1TR004404, UL1TR003107, UL1TR001881, UL1TR001442, UL1TR001872, UL1TR002389, UL1TR001425, UM1TR004399, UL1TR001427, UM1TR004405, UM1TR004406, UL1TR001857, UL1TR001439, UM1TR004409, UL1TR003015, UL1TR002319, UL1TR001436, UL1TR002373, UL1TR001420, UL1TR002345, UL1TR002384.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Competing interests

None.

References

Lite S, Gordon WJ, Stern AD. Association of the meaningful use electronic health record incentive program with health information technology venture capital funding. JAMA Netw Open. 2020;3(3):e201402.
Burke T. The health information technology provisions in the American Recovery and Reinvestment Act of 2009: implications for public health policy and practice. Public Health Rep. 2010;125(1):141–145.
Gabriel SE, Normand SL. Getting the methods right--the foundation of patient-centered outcomes research. N Engl J Med. 2012;367(9):787–790.
Simpson L. What does US$1.1 billion buy? An investment in the future. J Comp Eff Res. 2014;3(6):561–563.
Behrman RE, Benner JS, Brown JS, McClellan M, Woodcock J, Platt R. Developing the Sentinel System--a national resource for evidence development. N Engl J Med. 2011;364(6):498–499.
Sentinel System Overview [PowerPoint presentation]. 2018.
Klonoff DC. The new FDA real-world evidence program to support development of drugs and biologics. J Diabetes Sci Technol. 2020;14(2):345–349.
Goble JA. The potential effect of the 21st Century Cures Act on drug development. J Manag Care Spec Pharm. 2018;24(7):677–681.
National Institutes of Health, U.S. Food and Drug Administration (FDA). RFA-FD-23-025: Supporting the use of Real-World Data to Generate Real-World Evidence in Regulatory Decision-Making (U01) Clinical Trials Optional. 2022. https://grants.nih.gov/grants/guide/rfa-files/RFA-FD-23-025.html. Accessed June 23, 2023.
Madhavan S, Bastarache L, Brown JS, et al. Use of electronic health records to support a public health response to the COVID-19 pandemic in the United States: a perspective from 15 academic medical centers. J Am Med Inform Assoc. 2021;28(2):393–401.
National Institutes of Health, NCATS. PAR-21-293: Clinical and Translational Science Award (UM1 Clinical Trial Optional). National Center for Advancing Translational Sciences; 2021. https://grants.nih.gov/grants/guide/pa-files/PAR-21-293.html. Accessed May 23, 2023.
Informatics for Integrating Biology & the Bedside (i2b2). https://www.i2b2.org/. Accessed February 13, 2024.
SHRINE. Harvard Catalyst. https://catalyst.harvard.edu/shrine/. Accessed February 2, 2024.
Morrato EH, Lennox LA, Sendro ER, et al. Scale-up of the Accrual to Clinical Trials (ACT) network across the Clinical and Translational Science Award consortium: a mixed-methods evaluation of the first 18 months. J Clin Transl Sci. 2020;4(6):515–528.
Loomba JJ, Wasson GS, Chamakuri RKR, et al. The iTHRIV Commons: a cross-institution information and health research data sharing architecture and web application. J Am Med Inform Assoc. 2021;29(4):631–642.
Clinical and Translational Science Institute, University of Florida. OneFlorida+ Clinical Research Network. 2024. https://www.ctsi.ufl.edu/ctsa-consortium-projects__trashed/oneflorida-clinical-research-network/. Accessed February 13, 2024.
University of California. University of California Research eXchange (UC-ReX). https://www.ucbraid.org/ucrex. Accessed April 4, 2024.
Regenstrief Institute. https://www.regenstrief.org/. Accessed April 10, 2024.
The integrated Translational Health Research Institute of Virginia (iTHRIV). https://www.ithriv.org/. Accessed April 10, 2024.
STARR-OMOP. https://starr.stanford.edu/data-models/omop. Accessed April 10, 2024.
Columbia University Irving Medical Center. Observational Health Data Sciences and Informatics (OHDSI). https://www.pgh.cuimc.columbia.edu/projects/observational-health-data-sciences-and-informatics-ohdsi. Accessed April 10, 2024.
PCORnet. The National Patient-Centered Clinical Research Network. https://pcornet.org/. Accessed June 23, 2023.
TriNetX. The world’s largest, living ecosystem of real-world data and evidence. https://trinetx.com/. Accessed June 23, 2023.
Palchuk MB, London JW, Perez-Rey D, et al. A global federated real-world data and analytics platform for research. JAMIA Open. 2023;6(2):ooad035.
Topaloglu U, Palchuk MB. Using a federated network of real-world data to optimize clinical trials operations. JCO Clin Cancer Inform. 2018;2(2):1–10.
Morrato EH, Lennox L, Dearing J, et al. The ENACT network: an evolving open-access, real-world data resource primed for RWE research across the CTSA consortium. J Clin Transl Sci. 2023;7(1):e224.
Morrato EH, Hamer MK, Sills M, Kwan B, Schilling LM. Applying a commercialization-readiness framework to optimize value for achieving sustainability of an electronic health data research network and its data capabilities: the SAFTINet experience. EGEMS (Wash DC). 2019;7(1):48.
Strategyzer. The Business Model Canvas. https://www.strategyzer.com/library/the-business-model-canvas. Accessed January 26, 2024.
Kwan BM, Brownson RC, Glasgow RE, Morrato EH, Luke DA. Designing for dissemination and sustainability to promote equitable impacts on health. Annu Rev Public Health. 2022;43(1):331–353.
Peteraf MA, Bergen ME. Scanning dynamic competitive landscapes: a market-based and resource-based framework. Strateg Manag J. 2003;24(10):1027–1041.
Osterwalder A, Pigneur Y, Bernarda G, Smith A. Value Proposition Design: How to Create Products and Services Customers Want. Hoboken: John Wiley & Sons; 2014.
Blank S. Why the lean start-up changes everything. Harvard Bus Rev. 2013. https://hbr.org/2013/05/why-the-lean-start-up-changes-everything. Accessed February 2, 2024.
Nearing K, Rainwater J, Neves S, et al. I-Corps@NCATS trains clinical and translational science teams to accelerate translation of research innovations into practice. J Clin Transl Sci. 2020;5(1):e66.
Gale RC, Wu J, Erhardt T, et al. Comparison of rapid vs in-depth qualitative analytic methods from a process evaluation of academic detailing in the Veterans Health Administration. Implement Sci. 2019;14(1):11.
Ries E. Minimum Viable Product: a guide. Startup Lessons Learned. 2009. https://www.startuplessonslearned.com/2009/08/minimum-viable-product-guide.html. Accessed February 2, 2024.
Tilmon S, Nyenhuis S, Solomonides A, et al. Sociome Data Commons: a scalable and sustainable platform for investigating the full social context and determinants of health. J Clin Transl Sci. 2023;7(1):e255.
Pace WD, Fox CH, White T, Graham D, Schilling LM, West DR. The DARTNet Institute: seeking a sustainable support mechanism for electronic data enabled research networks. EGEMS (Wash DC). 2014;2(2):1063.
Callahan TJ, Bauck AE, Bertoch D, et al. A comparison of data quality assessment checks in six data sharing networks. EGEMS (Wash DC). 2017;5(1).
Kahn MG, Brown JS, Chun AT, et al. Transparent reporting of data quality in distributed data networks. EGEMS (Wash DC). 2015;3(1).
Kahn MG, Callahan TJ, Barnard J, et al. A harmonized data quality assessment terminology and framework for the secondary use of electronic health record data. EGEMS (Wash DC). 2016;4(1).
Pfaff ER, Girvin AT, Gabriel DL, et al. Synergies between centralized and federated approaches to data quality: a report from the National COVID Cohort Collaborative. J Am Med Inform Assoc. 2021;29(4):609–618.
Figure 1. Glossary of terms.

Figure 2. Landscape of alternative electronic health datasets available in the USA for research.

Table 1. Characteristics of the study population by CTSA sites contributing to interview and/or focus group data.

Table 2. Summary of customer discovery findings describing value proposition by CTSA segment.

Table 3. CTSA customer segmentation in relation to real-world data use for clinical and translational research.

Table 4. Value proposition by attributes and customer segment.

Table 5. Expectations for EHR data networks within CTSA academic health centers.
