
Distinguishing between translational science and translational research in CTSA pilot studies: A collaborative project across 12 CTSA hubs

Published online by Cambridge University Press: 18 December 2023

Margaret Schneider*
Affiliation:
The Institute for Clinical and Translational Science, University of California, Irvine, CA, USA
Amanda Woodworth
Affiliation:
The Institute for Clinical and Translational Science, University of California, Irvine, CA, USA
Marissa Ericson
Affiliation:
The Institute for Clinical and Translational Science, University of California, Irvine, CA, USA
Lindsie Boerger
Affiliation:
The Institute of Translational Health Sciences, University of Washington, Seattle, WA, USA
Scott Denne
Affiliation:
The Indiana Clinical and Translational Sciences Institute, Indiana University, Indianapolis, IN, USA
Pam Dillon
Affiliation:
The Wright Center for Clinical and Translational Research, Virginia Commonwealth University, Richmond, VA, USA
Paul Duguid
Affiliation:
The Translational Research Institute, University of Arkansas for Medical Sciences, Little Rock, AR, USA
Eman Ghanem
Affiliation:
Duke Clinical & Translational Science Institute, Duke University, Durham, NC, USA
Joe Hunt
Affiliation:
The Indiana Clinical and Translational Sciences Institute, Indiana University, Indianapolis, IN, USA
Jennifer S. Li
Affiliation:
Duke Clinical & Translational Science Institute, Duke University, Durham, NC, USA
Renee McCoy
Affiliation:
Clinical & Translational Science Institute of Southeast Wisconsin, Medical College of Wisconsin, Milwaukee, WI, USA
Nadia Prokofieva
Affiliation:
Tufts Clinical and Translational Science Institute, Tufts University, Boston, MA, USA
Vonda Rodriguez
Affiliation:
Duke Clinical & Translational Science Institute, Duke University, Durham, NC, USA
Crystal Sparks
Affiliation:
The Translational Research Institute, University of Arkansas for Medical Sciences, Little Rock, AR, USA
Jeffrey Zaleski
Affiliation:
The Indiana Clinical and Translational Sciences Institute, Indiana University, Indianapolis, IN, USA
Henry Xiang
Affiliation:
Center for Clinical and Translational Science, The Ohio State University, Columbus, OH, USA
*
Corresponding author: M. Schneider, PhD; Email: [email protected]

Abstract

Introduction:

The institutions (i.e., hubs) making up the National Institutes of Health (NIH)-funded network of Clinical and Translational Science Awards (CTSAs) share a mission to turn observations into interventions that improve public health. Recently, the focus of the CTSAs has shifted increasingly from translational research (TR) to translational science (TS). The current NIH Funding Opportunity Announcement (PAR-21-293) for CTSAs stipulates that pilot studies funded through the CTSAs must be “focused on understanding a scientific or operational principle underlying a step of the translational process with the goal of developing generalizable solutions to accelerate translational research.” This new directive places Pilot Program administrators in the position of arbiters tasked with distinguishing between TR and TS projects. The purpose of this study was to explore the utility of a set of TS principles set forth by the National Center for Advancing Translational Sciences (NCATS) for distinguishing between TR and TS.

Methods:

Twelve CTSA hubs collaborated to generate a list of Translational Science Principles questions. Twenty-nine Pilot Program administrators used these questions to evaluate 26 CTSA-funded pilot studies.

Results:

Factor analysis yielded three factors: Generalizability/Efficiency, Disruptive Innovation, and Team Science. The Generalizability/Efficiency factor explained the largest share of the variance in the questions and significantly distinguished between projects verified by an expert panel as TS or TR (t = 6.92, p < .001).

Conclusions:

The seven questions in this factor may be useful for informing deliberations regarding whether a study addresses a question that aligns with NCATS’ vision of TS.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of The Association for Clinical and Translational Science

Introduction

In the last decade of the 20th century, translational research began to gain momentum in biology and medicine [1]. While the scientific community, policymakers, and the general public value and support basic research whose primary goal is to build the scientific basis for the development of novel therapies, there have been longstanding concerns that much basic research has little immediate impact on clinical practice or public health interventions [2-4]. A previous study estimated the time lag from journal publication of a significant basic science discovery to its use in practice at 17 to 23 years [5]. This long lag was corroborated by a study that examined more than 15 million Medline articles published between 1980 and 2013 [6]. Recognizing this challenge, and in response to the push for more timely benefits to clinical practice, patient outcomes, and population health, countries around the world have initiated major programs aiming to speed the movement of promising scientific discoveries into clinical practice or public health interventions [2,3,7]. Since 2005, the National Institutes of Health (NIH) in the United States, the Medical Research Council in the United Kingdom, the Ministry of Health and Welfare in South Korea, and the Chinese Academy of Medical Sciences in China, to name a few, have established major funding streams to support translational research [2].

The National Center for Advancing Translational Sciences (NCATS) was established in 2011 by the NIH to “transform the translational process so that new treatments and cures for diseases could be delivered to patients faster” [3,4,7,8]. One major NCATS funding initiative in support of this mission is the network of academic institutions across the U.S. known as Clinical and Translational Science Award (CTSA) hubs. In fiscal year 2022, there were 63 CTSA hubs, each providing services to its institution(s) that included shared research infrastructure, collaboration tools, training and educational opportunities, administrative support (e.g., streamlining IRB approvals or data safety monitoring boards), and pilot research funding programs [9]. A key component of all CTSA hubs, the CTSA Pilot Program projects “are intended to: (1) explore possible innovative new leads or new directions for established investigators; (2) stimulate investigators from other areas to lend their expertise in research in [clinical translational science]; and (3) provide initial support to establish proof of concept” [10].

Originally, the CTSA Pilot Programs focused on translational research, which NCATS defined as the process of turning observations in the laboratory, clinic, and community into clinical practice and interventions that improve individual and public health [1,3]. Under this holistic concept, translational research is the effort to traverse specific steps of the translational process for a particular target or disease. For more than a decade, the terms translational research and translational science were used interchangeably [2,4,6,11-13]. Recently, an effort to distinguish translational research from translational science has prompted a related but different definition of translational science and stimulated the evolution of a distinct discipline [14]. NCATS currently features the following definition of translational science on its website: “Translational science is the field that generates innovations that overcome longstanding challenges along the translational research pipeline. These include scientific, operational, financial, and administrative innovations that transform the way that research is done, making it faster, more efficient, and more impactful” [15].

There are many misconceptions about the definitions of translational research and translational science [16,17]. Using the terms interchangeably, or invoking translational science to describe translational research projects, adds confusion to the field. To effectively advance the field of translational science, clear strategies are needed for distinguishing between these two distinct but related disciplines. Moreover, the current CTSA Funding Opportunity Announcement (FOA) [10] stipulates that pilot studies funded through the CTSAs must be focused on translational science. This new directive places CTSA Pilot Program administrators in the position of arbiters tasked with distinguishing between translational science and translational research. The purpose of this study was to leverage the collective knowledge and experience of the CTSA External Reviewer Exchange Consortium (CEREC) [18] to explore whether a set of TS principles set forth by NCATS might be useful in distinguishing TS from TR; this work will help the national CTSA network meet the NCATS mandate.

Methods

Procedures

Twelve CTSA hubs participated in this study (see hub descriptions in Supplementary Table 1). Pilot Program administrators (Program Directors and Program Managers) from 10 hubs submitted up to three research proposals for pilot studies that had previously been funded at their CTSA hubs. Data collection occurred before any of the participating hubs were funded under the new FOA. To ensure that projects of both types were represented, instructions to submitting administrators provided the NCATS definitions of translational science (TS) and translational research (TR) and requested up to two TS projects and up to one TR project. Submissions included the abstract and research plan (limited to five pages) and were collected using Research Electronic Data Capture (REDCap®) [19,20].

A set of questions was developed to reflect TS principles as delineated by NCATS (see details in Measures) [15]. Using these Translational Science Principles questions, the pilot study projects submitted by participating hub administrators were evaluated by 29 individuals (coders) administratively affiliated with the CTSA Pilot Programs at the 12 participating hubs. This task was accomplished using a second REDCap® survey that presented coders with each pilot study proposal and asked them to agree or disagree that the proposal met the objective of each question. A CEREC coordinator assigned projects to coders such that no project was scored by the administrator who had submitted it, and each project was scored by at least three independent coders. A minimal sketch of this assignment logic appears below.
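The following is a minimal illustrative sketch, in Python, of the two assignment constraints just described. It is not the coordinator's actual tooling; the function and variable names are hypothetical, and a real assignment would likely also balance coder workload.

```python
import random

def assign_coders(projects, coders, submitted_by, per_project=3, seed=0):
    """Assign coders to projects such that (1) no coder scores a project
    they submitted and (2) each project receives `per_project` coders.

    projects: list of project ids
    coders: list of coder ids
    submitted_by: dict mapping project id -> submitting coder id (or None)
    """
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    assignments = {}
    for project in projects:
        # Exclude the administrator who submitted this project.
        eligible = [c for c in coders if c != submitted_by.get(project)]
        assignments[project] = rng.sample(eligible, per_project)
    return assignments

# Example: three coders are drawn for each project from the eligible pool.
print(assign_coders(["P1", "P2"], ["A", "B", "C", "D"], {"P1": "A"}))
```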

To establish whether each project met the spirit of the NCATS definition of TS, projects were subsequently evaluated by a subset of the authors identified as topic experts (see Data Analysis for details). Categorization of projects as TS or TR was determined by expert consensus, which was established using another REDCap® survey.

Measures

Data source

A checklist was provided to project submitters to characterize the pilot study data source. Multiple categories could be selected, including (1) Basic Science Lab (includes research on cells, blood, and other biological products); (2) Animal Study; (3) Human Subjects Study; (4) De-identified data from human subjects (e.g., Electronic Health Record (EHR) data); and (5) Other.

Disciplines

A checklist was provided to project submitters to characterize the approaches or disciplines represented in the pilot study projects, including (1) Community-based participatory research; (2) Dissemination and Implementation; (3) Informatics; (4) Regulatory Processes; (5) Drug or Device Development; (6) Research Design/Statistics/Research Methods; (7) Team Science; (8) Recruitment/Retention; and (9) Other.

Questions relating to principles of translational science

A set of questions was constructed based on the 20 principles of TS posted on the NCATS website in September 2022 (see Table 1; full descriptions provided in Supplementary Materials). The questions were developed in three iterative steps. In step one, the first author created a pool of 44 items, comprising items worded to preserve the complexity of the TS principles as stated on the NCATS website as well as more streamlined versions consistent with best practices of survey design (e.g., avoiding double-barreled questions). These 44 items were circulated to the coauthors for feedback. In step two, items that were redundant, unclear, or did not map onto the NCATS TS principles were removed. Two items that referred to establishing research funding opportunities were also removed, as CTSA-funded pilot awards are not intended to set up research funding opportunities. In step three, the shorter set of 24 questions was again circulated to the coauthors, which resulted in replacing the five-point response scale (strongly disagree to strongly agree) with a dichotomous scale (“agree” or “disagree”) plus an option for “insufficient information to determine.” Table 1 lists the 24 questions along with the corresponding TS principles.

Table 1. Translational science principles questions, corresponding translational science principles, and percent of responses indicating not enough information within the proposal to determine (N/A)

* Excluded from factor analysis; more than 20% of responses indicated “not enough information to determine.”

Key to Translational Science Principles:

(a) Prioritize initiatives that address unmet needs (Focus on Unmet Needs).

(b) Produce cross-cutting solutions for common and persistent challenges (Generalizable Solutions).

(c) Emphasize creativity and innovation (Creativity and Innovation).

(d) Leverage cross-disciplinary team science (Cross-disciplinary Team Science).

(e) Enhance the efficiency and speed of translational research (Efficiency and Speed).

(f) Utilize boundary-crossing partnerships (Boundary-crossing Partnerships).

(g) Use bold and rigorous research approaches (Bold and Rigorous).

Data Analysis

Prior to analysis, scores of “insufficient information to determine” on the Translational Science Principles questions were coded as missing, and seven items for which 20% or more of responses were missing were excluded from the analysis (identified with an asterisk in Table 1). A Principal Component Analysis was conducted on the remaining 17 questions using SPSS Statistics version 28 (IBM Corp., Armonk, NY). Given the exploratory nature of this research, several model-fitting techniques were tested, including both orthogonal and oblique rotations: an orthogonal rotation (i.e., Varimax) minimizes the number of variables with high loadings and simplifies the solution, whereas an oblique rotation (i.e., Promax) allows components to be intercorrelated [21]. Our guiding hypothesis was that we would be able to identify at least one factor that would distinguish TS from TR and would be uncorrelated with any additional factors. Because the literature contains few comparable validation studies, and thus no factor analytic results against which to compare, we examined both Promax and Varimax rotations. Several criteria were used to determine the number of factors and the combination of items in each factor, including a scree plot of eigenvalues [22], item loadings [23], and the Kaiser criterion [24]. The cutoff for item loadings was set at ±0.35; by this criterion, each item loaded most highly on one of three distinct factors. Cronbach’s alpha values were computed for the three factors, and items were removed if they weakened the reliability of a factor. A sketch of an analogous open-source workflow follows.
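For readers without SPSS, the following is a minimal Python sketch of an analogous workflow, not the authors' actual analysis: the file name and column layout are hypothetical, and the open-source factor_analyzer package stands in for SPSS's principal component extraction and rotation.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Rows = coded proposals, columns = the 17 retained questions, values 0/1
# (disagree/agree), with "insufficient information" already recoded as missing.
responses = pd.read_csv("ts_principles_responses.csv").dropna()

# Principal component extraction with an oblique (Promax) rotation;
# rotation="varimax" gives the orthogonal alternative described above.
fa = FactorAnalyzer(n_factors=3, method="principal", rotation="promax")
fa.fit(responses)

# Eigenvalues for a scree plot and the Kaiser criterion (retain eigenvalues > 1).
eigenvalues, _ = fa.get_eigenvalues()

# Assign each item to the factor on which it loads most highly,
# applying the +/-0.35 loading cutoff used in the study.
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
keep = loadings.abs().max(axis=1) >= 0.35
item_to_factor = loadings[keep].abs().idxmax(axis=1)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Reliability of each resulting factor's item set.
for factor, items in item_to_factor.groupby(item_to_factor):
    print(factor, cronbach_alpha(responses[items.index]))
```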

To ascertain whether the factors could be useful for discriminating between TS and TR, expert consensus was used to label projects as TS or TR. Experts were members of the author team who met the following criteria: (1) affiliated with a hub that had submitted a CTSA application to NCATS under the most recent FOA; (2) reported having read the pilot study section of the most recent FOA; and (3) reported having discussed TS with colleagues at their hub “a fair amount” or “quite a bit.” Expert consensus was defined as at least seven of eight experts assigning the same label (TS or TR) to a project (i.e., a minimum of 87% agreement). Mean factor scores for the TS and TR projects on which expert consensus had been reached were compared using t-tests, and item-level comparisons of percent agreement were conducted using chi-square analyses (sketched below).
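As a hedged sketch of these group comparisons, the snippet below uses SciPy in place of SPSS; the file name, column names, and labels are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
from scipy import stats

# One row per coded proposal: expert-consensus label ("TS"/"TR"), a summed
# factor score per factor, and 0/1 agreement for each individual item.
scores = pd.read_csv("consensus_projects.csv")
ts = scores[scores["label"] == "TS"]
tr = scores[scores["label"] == "TR"]

# Factor-level comparison: independent-samples t-test on the factor scores.
t, p = stats.ttest_ind(ts["gen_efficiency"], tr["gen_efficiency"])
print(f"Generalizability/Efficiency: t = {t:.2f}, p = {p:.4f}")

# Item-level comparison: chi-square test on agree/disagree counts by type.
contingency = pd.crosstab(scores["label"], scores["q_generalizable"])
chi2, p_item, dof, _ = stats.chi2_contingency(contingency)
print(f"Item chi-square: chi2 = {chi2:.2f}, p = {p_item:.4f}")
```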

Results

Description of Research Projects

A total of 26 research projects were submitted (see Table 2 for characteristics).

Table 2. Characteristics of research projects (N = 26)

Multiple items could be selected; percentages therefore do not add up to 100%.

Description of Coders

Twenty-nine coders participated in the study. The coders can be described as follows:

  • 70% were affiliated with a CTSA hub that had submitted a grant application in response to the new FOA.

  • 93% had read the section of the FOA describing the requirements for the pilot study program.

  • Most had discussed TS with colleagues at their CTSA hub “a fair amount” (51%) or “quite a bit” (29%).

Factor Analysis

Factor analysis yielded three factors explaining 65% of the variance (see Table 3). The seven items loading on Factor One (“Generalizability/Efficiency”), which accounted for 44% of the variance, mapped onto two of the TS principles identified by NCATS: Generalizable Solutions and Efficiency and Speed. Factor Two (“Disruptive Innovation”) contained five items explaining an additional 12% of the variance and mapped onto the TS principles of Creativity and Innovation, Bold and Rigorous, and Focus on Unmet Needs. Factor Three (“Team Science”) comprised four items that mapped onto the single TS principle of Cross-disciplinary Team Science and explained an additional 9% of the variance.

Table 3. Translational science principles survey items, factor loadings, and Cronbach’s alpha values

Utility of Factors for Discriminating Between TS and TR

Prior to examining the utility of the factors for discriminating between TS and TR projects, the expert panel reviewed all 26 submitted projects and reached consensus on the project type for 12 of them (six TS and six TR; see Supplementary Materials for examples). These 12 projects were then used to examine whether each factor could discriminate between TS and TR. As shown in Table 4, the t-tests were statistically significant for Generalizability/Efficiency and Disruptive Innovation, but not for Team Science.

Table 4. Utility of factors for distinguishing between translational science and translational research

Factor scores were computed as the sum of items marked “agree.” N = 23 refers to the number of coded evaluations within each project type: of the six proposals per type, five were evaluated by four coders and one by three (5 × 4 + 1 × 3 = 23).
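As a toy illustration of this scoring rule (with hypothetical item names, not the study's actual items):

```python
# A coder's factor score is the count of that factor's items marked "agree"
# (1 = agree, 0 = disagree); shown here for one coder on one factor.
coder_responses = {"q_generalizable": 1, "q_multiple_diseases": 1, "q_speed": 0}
gen_efficiency_score = sum(coder_responses.values())
print(gen_efficiency_score)  # 2
```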

To aid interpretation of the findings, Figure 1 illustrates the percent agreement with each question in the two factors that showed promise for distinguishing between TR and TS. As might be expected given the large percentage of variance accounted for by the Generalizability/Efficiency factor, the large differences in percent agreement between TS and TR projects on each of its seven questions indicate that the principles of Generalizable Solutions and Efficiency and Speed have high utility for distinguishing TS from TR (chi-square analyses showed that all differences were statistically significant). Within the Disruptive Innovation factor, percent agreement was significantly higher for the TS projects on the questions assessing project ambition, development of a novel technology, and innovation. There was no difference in agreement on the questions that tapped into whether the project addressed an unmet need or would be impactful.

Figure 1. Percent agreement with questions (by factor) by project type. TS = translational science; TR = translational research. * p < .05; **p < .01; ***p < .001.

Discussion

The purpose of this project was to explore whether a set of TS principles set forth by NCATS might be useful in distinguishing TS from TR. The results identified seven Translational Science Principles questions that showed considerable promise for aiding CTSA Pilot Program administrators in making this determination. These items map onto two of the key TS principles defined by NCATS: (1) produce cross-cutting solutions for common and persistent challenges (Generalizable Solutions); and (2) enhance the efficiency and speed of translational research (Efficiency and Speed). Based on the comparisons of percent agreement with these seven items across TS and TR projects, we suggest that they may be useful to Pilot Program administrators both in educating investigators about the nature of TS and in making the qualitative determination of whether a particular project is aligned with TS principles. Future work building on these findings may generate a checklist that makes this determination more reliable across administrators and hubs.

It is noteworthy that, in the time since the Translational Science Principles questions were developed, NCATS has revised the organizational framework of the TS principles posted to its website. The original version was also published in the Journal of Clinical and Translational Science in 2022 [3]. The two TS principles represented in the Generalizability/Efficiency factor fell, respectively, under the two subdivisions of principles referred to as “scientific” (including Generalizable Solutions) and “operational” (including Efficiency and Speed). As a whole, the scientific principles focused on features directly related to research question selection, research approaches, and rigorous methods, while the operational principles focused on how team functioning, organizational environment, and the culture of science influence the research. The current version (as of December 2023) of the Translational Science Principles posted to the website [15] omits these higher-order categories of “scientific” and “operational.” This evolution in how the principles are depicted reflects the dynamic and still unfolding understanding of how best to communicate and utilize them. Our study suggests that a further refinement might involve creating a hierarchy of principles to distinguish between characteristics of the research that are necessary or defining features of TS (i.e., Generalizable Solutions and Efficiency and Speed) and features that are equally likely to be found in TR projects (i.e., Focus on Unmet Needs; Cross-disciplinary Team Science; Boundary-crossing Partnerships).

We note that the principles of Generalizable Solutions and Efficiency and Speed operate at the level of intended outcomes, whereas other principles set forth by NCATS address specific strategies expected to facilitate achieving those outcomes. For example, all of the questions that loaded onto the Team Science factor in our study, the only factor that failed to distinguish between TS and TR at all, mapped onto the principle of Cross-disciplinary Team Science. While there is evidence that research produced by cross-disciplinary teams has better outcomes, including greater productivity and scientific impact, than research by less distributed teams or individual scientists [25], cross-disciplinarity is not a necessary feature of TS. The invention, for example, of a more efficient cell-sorting technology may have dramatic implications for the efficiency of research across a wide range of diseases yet may not involve collaboration across multiple disciplines. Our study suggests that whether a particular research project features cross-disciplinary science is not a useful distinction when determining whether that project should be designated as TS. That said, there may be other motivations at the programmatic level for considering cross-disciplinarity when making funding decisions. Thus, it is important to distinguish between project eligibility, in terms of whether the project meets the definition of TS, and project fundability, in terms of whether it is consistent with programmatic objectives.

Whereas our data show that all of the questions in the Generalizability/Efficiency factor have utility for distinguishing between TS and TR and none of the questions in the Team Science factor do, the findings were mixed for the Disruptive Innovation factor. Of its five questions, three showed significantly higher percent agreement for TS than for TR projects: This project develops a novel method or technology; This project is ambitious; and This project is innovative in terms of the scientific approach, methods, or processes. These questions echo the statement by Christopher P. Austin, former NCATS Director, that TS studies “must develop a technology or insight or paradigm to improve the efficiency or effectiveness of a rate-limiting translational roadblock” (p. 1634) [14]. The other two questions in the Disruptive Innovation factor relate to whether the project is impactful and whether it addresses an unmet clinical need. As noted above, these qualities may also be found in TR projects which, though not focused on the science of translation, may nevertheless be ambitious in their aims and address an unmet clinical need.

We must emphasize that the results of this study should not be used to discourage research efforts or projects at specific points along the continuum of research translation. As many have advocated [1,11,26], TS principles can be applied across the translational spectrum, including research that seeks to translate findings from clinical trials into everyday clinical practice, research translating new findings into community practice, and translation of new scientific knowledge into disease prevention, population health, or global health strategies. It has further been argued that TS is not unidirectional; it can be applied to both bench-to-bedside and bedside-to-bench research [14]. Such bedside-to-bench translational research efforts have led to many biomedical breakthroughs over the past 100 years [1,26]. Nothing in the current study suggests that where a study falls along the translational spectrum should determine whether the research meets the definition of TS.

A 2008 publication reviewing the history and future trends of what the author called translational science, but in fact conflated with translational research, stated that “The formal identification of translational science can be expected over the next 10 years or so” (p. vii) [4]. This prediction has generally come true, but TS is still in its nascent stage of development [1,3]. There are still many misconceptions about the distinctions between TR and TS [16,17]. Multiple terms and meanings of TR exist in biomedical research [13], and researchers in different scientific domains have different perspectives and practices [12]. To advance the definition of TS currently endorsed by NCATS [3,27], the Pilot Program administrators within CEREC have here endeavored to identify which of the TS principles are central to the TS/TR distinction. Pilot Program administrators are in a unique position to disseminate these principles as they issue calls for proposals under the new FOA. We anticipate that the seven items in the Generalizability/Efficiency factor may prove useful not only for selecting applications that satisfy the mandate to fund TS projects but also for educating the investigator community about the differences between TS and TR. As institutional culture is often the product of a research institute’s infrastructure, policy, norms, and leadership [28], administrative leaders, including CTSA Pilot Program directors and managers, can play a critical role in advancing TS by fostering a broader understanding within the academic research community of what it is and what it is not.

This study has limitations that should be considered in generalizing the findings. First, the 26 CTSA-funded pilot studies that informed the factor analysis were funded before the release of the recent NIH FOA. Accordingly, the pilot study proposals included in this study were expressly not written to conform to the definition of TS that has since become more coherent and more widely understood. In their reviews, members of the expert panel noted that a number of the studies on which they could not reach consensus (i.e., was it TS or TR?) could have been framed in a way that made the TS nature of the project explicit. As written, however, the implications of the research for future research efficiency and/or the generalizability of the research across multiple diseases were left unstated. Future cohorts of CTSA pilot applications will no doubt be more likely to include language that highlights the TS elements of the proposed work, which may change the relative prominence of each TS principle in terms of its ability to distinguish between TS and TR. Second, a “gold standard” of what defines a TS project does not exist, so we relied on expert consensus to identify TS vs. TR projects. Our definition of consensus allowed for a single dissenter on the expert panel, so there is still room for debate regarding the classification of a few of the studies. Nevertheless, even with this potential for uncontrolled variance in our analyses, we identified a cluster of seven questions that show strong promise for distinguishing between the two project types. Third, the delineation of TS principles promoted by NCATS is still evolving. The December 2023 version includes a principle that was omitted from earlier versions, and therefore from our study: Prioritize Diversity, Equity, Inclusion, and Accessibility. We posit that this principle is equally relevant to TS and TR and would be unlikely to aid in the distinction between the two, but this hypothesis has yet to be empirically tested. A final caveat is the five-page limit that we placed on the research materials submitted for scoring. This limit was set for pragmatic reasons related to the time burden placed on coders, but as a result, materials normally included with proposals (e.g., investigator biosketches, letters of support) were omitted. These supplementary materials might have provided additional information that would have enabled coders to form an opinion on items that were checked “insufficient information to determine” in the present study. Future work building on these findings should consider including all proposal materials to inform the coding process.

In conclusion, we leveraged the collective experience of 12 CTSA Pilot Programs to identify a set of seven questions, mapping onto the two TS principles of Generalizable Solutions and Efficiency and Speed, that hold promise for informing the determination of whether a proposed research project meets the definition of TS. CTSA Pilot Program administrators may find it helpful to use these items to inform their evaluation of proposals and to determine whether proposals are eligible for funding. Operationally, it may be useful to incorporate these items into application and/or review materials. While adherence to these principles may be necessary to advance a proposal for consideration, such adherence is unlikely to be sufficient to warrant funding. Individual hubs, NCATS, and the NIH as a whole may aspire to fund projects that adhere to additional principles, such as prioritizing diversity, equity, inclusion, and accessibility and/or leveraging cross-disciplinary science. Moreover, a number of the principles listed on the NCATS website as principles of TS are hallmarks of robust health science in general: for example, prioritizing initiatives that address unmet needs and using bold and rigorous research approaches. Thus, all of the principles set forth by NCATS may be of value for evaluating the fundability of proposed research projects. Our study has identified the two principles that appear to be most useful in making the distinction between TS and TR.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/cts.2023.700.

Acknowledgments

We are grateful to the following CTSA-affiliated individuals for their assistance with project coding: Jasmine Neal and Carolyn Jones, Center for Clinical and Translational Science, The Ohio State University; Swathi Thaker, Center for Clinical and Translational Science, University of Alabama at Birmingham; Hardeep Ranu, Harvard Catalyst; James McNamara, Duke Clinical & Translational Science Institute, Duke University; David Friedland, Clinical & Translational Science Institute of Southeast Wisconsin, Medical College of Wisconsin; Richard Sterlin, The Wright Center for Clinical and Translational Research, Virginia Commonwealth University; Tim McCaffrey, Clinical and Translational Science Institute at Children’s National, George Washington University; Denise Daudelin and Aviva Must, Tufts Clinical and Translational Science Institute, Tufts Medical Center; Kristian O’Connor, Clinical & Translational Science Institute of Southeast Wisconsin, University of Wisconsin-Milwaukee; Robin Liston, formerly at The Translational Research Institute, University of Arkansas for Medical Sciences, now at the Frontiers Clinical and Translational Science Institute, Kansas University Medical Center; and two anonymous coders. We also thank the following investigators for allowing us to provide their research abstracts in the Supplementary Materials: Naomi Sage Chaytor, Ph.D., Dawn Wolfgram, M.D., and Kwang Woo Ahn, Ph.D.

Funding statement

This work was supported by the National Institutes of Health, National Center for Advancing Translational Sciences, through the Clinical and Translational Science Awards at all participating hubs: UL1TR001414, UL1TR002319, UM1TR004402, UM1TR004360, UL1TR003107, UL1TR002553, UL1TR001436, UM1TR004398, UL1TR002733, UL1TR002541, UL1TR003096, and UL1TR001876. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Competing interests

The authors have no conflicts of interest to declare.

References

1. Nakao K. Translational science: newly emerging science in biology and medicine - lessons from translational research on the natriuretic peptide family and leptin. Proc Jpn Acad Ser B Phys Biol Sci. 2019;95(9):538-567. doi: 10.2183/pjab.95.037.
2. Kim YH, Levine AD, Nehl EJ, Walsh JP. A bibliometric measure of translational science. Scientometrics. 2020;125(3):2349-2382. doi: 10.1007/s11192-020-03668-2.
3. Faupel-Badger JM, Vogel AL, Austin CP, Rutter JL. Advancing translational science education. Clin Transl Sci. 2022;15(11):2555-2566. doi: 10.1111/cts.13390.
4. Curry SH. Translational science: past, present, and future. Biotechniques. 2008;44(2):ii-viii. doi: 10.2144/000112749.
5. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510-520. doi: 10.1258/jrsm.2011.110180.
6. Ke Q. Identifying translational science through embeddings of controlled vocabularies. J Am Med Inform Assoc. 2019;26(6):516-523. doi: 10.1093/jamia/ocy177.
7. Tsevat J, Smyth SS. Training the translational workforce: expanding beyond translational research to include translational science. J Clin Transl Sci. 2020;4(4):360-362. doi: 10.1017/cts.2020.31.
8. National Center for Advancing Translational Sciences. Open opportunities. National Institutes of Health. Updated June 22, 2023. https://ncats.nih.gov/funding/open#requests-for-applications. Accessed June 22, 2023.
9. National Center for Advancing Translational Sciences. Funded activities under the NCATS clinical and translational science awards (CTSA) program with NCATS’ appropriation and additional CTSA funding from other sources. National Institutes of Health. Updated January 3, 2023. https://ncats.nih.gov/files/CTSA-Program-Funded-Activities-and-Other-Sources_508.pdf. Accessed June 23, 2023.
10. National Center for Advancing Translational Sciences. Clinical and translational science award (UM1 clinical trial optional). National Institutes of Health. Published 2021. https://grants.nih.gov/grants/guide/pa-files/PAR-21-293.html. Accessed September 5, 2023.
11. McCartney PR. The translational science spectrum. J Obstet Gynecol Neonatal Nurs. 2021;50(6):655-658. doi: 10.1016/j.jogn.2021.09.007.
12. Walker RL, Saylor KW, Waltz M, Fisher JA. Translational science: a survey of US biomedical researchers’ perspectives and practices. Lab Anim (NY). 2022;51(1):22-35. doi: 10.1038/s41684-021-00890-0.
13. Krueger AK, Hendriks B, Gauch S. The multiple meanings of translational research in (bio)medical research. Hist Philos Life Sci. 2019;41(4):57. doi: 10.1007/s40656-019-0293-7.
14. Austin CP. Opportunities and challenges in translational science. Clin Transl Sci. 2021;14(5):1629-1647. doi: 10.1111/cts.13055.
15. National Center for Advancing Translational Sciences. Translational science principles. National Institutes of Health. Updated June 29, 2023. https://ncats.nih.gov/training-education/translational-science-principles. Accessed September 1, 2023.
16. Austin CP. Translational misconceptions. Nat Rev Drug Discov. 2021;20(7):489-490. doi: 10.1038/d41573-021-00008-8.
17. Siedlecki SL. Translational science: the final step for implementing evidence-based practice. Clin Nurse Spec. 2023;37(2):54-57. doi: 10.1097/NUR.0000000000000728.
18. Schneider M, Bagaporo A, Croker JA, et al. The CTSA external reviewer exchange consortium (CEREC): engagement and efficacy. J Clin Transl Sci. 2019;3(6):325-331. doi: 10.1017/cts.2019.411.
19. Harris PA, Taylor R, Minor BL, et al. The REDCap consortium: building an international community of software platform partners. J Biomed Inform. 2019;95:103208. doi: 10.1016/j.jbi.2019.103208.
20. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap) - a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. doi: 10.1016/j.jbi.2008.08.010.
21. Gorsuch R. Factor Analysis. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1983.
22. Cattell RB. The scree test for the number of factors. Multivariate Behav Res. 1966;1(2):245-276. doi: 10.1207/s15327906mbr0102_10.
23. Watkins M. Exploratory factor analysis: a guide to best practice. J Black Psychol. 2018;44(3):219-246. doi: 10.1177/0095798418771807.
24. Kaiser HF. The application of electronic computers to factor analysis. Educ Psychol Meas. 1960;20(1):141-145. doi: 10.1177/001316446002000116.
25. Hall KL, Vogel AL, Huang GC, et al. The science of team science: a review of the empirical evidence and research gaps on collaboration in science. Am Psychol. 2018;73(4):532-548. doi: 10.1037/amp0000319.
26. Debley J, Christakis DA. Call for papers reporting pediatric translational science research. JAMA Pediatr. 2023;177(1):7-8. doi: 10.1001/jamapediatrics.2022.4823.
27. Austin CP. Translating translation. Nat Rev Drug Discov. 2018;17(7):455-456. doi: 10.1038/nrd.2018.27.
28. Narayanamurti V, Odumosu T. Cycles of Invention and Discovery: Rethinking the Endless Frontier. Cambridge, MA: Harvard University Press; 2016.