
402 Developing a rubric to distinguish translational science from translational research in CTSA pilot projects

Published online by Cambridge University Press:  24 April 2023

Pamela Dillon
Affiliation:
Virginia Commonwealth University
Renee McCoy
Affiliation:
Medical College of Wisconsin
Paul Duguid
Affiliation:
University of Arkansas for Medical Sciences
Crystal Sparks
Affiliation:
University of Arkansas for Medical Sciences
Swathi Thaker
Affiliation:
University of Alabama, Birmingham
Henry Xiang
Affiliation:
Ohio State University
Lindsie Boerger
Affiliation:
University of Washington
Joe Hunt
Affiliation:
Indiana University
Scott Denne
Affiliation:
Indiana University
Tim McCaffree
Affiliation:
Children’s National Hospital
Jennifer Lee
Affiliation:
Duke University
Margaret Schneider
Affiliation:
University of California, Irvine

Abstract


OBJECTIVES/GOALS: The goal of the CTSA consortium is to move scientific discoveries to clinical application. Translational science (TS) focuses on the process by which this happens, and NCATS supports pilot projects that propose TS questions. We are developing a rubric to help program managers discriminate between TS and translational research (TR).

METHODS/STUDY POPULATION: The CTSA External Review Exchange Consortium (CEREC) and CEREC II are reciprocal review collaborations in which CTSA hubs identify reviewers for each other's pilot grant applications. CEREC and CEREC II partners developed a 31-item rubric, based on NIH's Translational Science Principles, for discriminating pilot TS grant applications from those proposing TR. The hubs contributed proposals pre-selected as either TS or TR projects. Experienced reviewers and/or program administrators from the hubs then used the rubric to score each proposal. Reliability of the rubric will be assessed using inter-rater reliability (percent agreement and kappa). To identify which items in the rubric best discriminate between TS and TR, Item Response Theory analysis will be employed.

RESULTS/ANTICIPATED RESULTS: Ten CEREC participating hubs submitted 30 applications: 20 TS proposals and 10 TR proposals. Twenty-two reviewers from 12 CEREC hubs evaluated the applications using the scoring rubric; at least two reviewers evaluated each proposal. The analyses will describe the reliability of the rubric and identify which of the seven TS Principles are most useful for distinguishing between TS and TR pilot grant proposals. Ultimately, this work will yield a scoring rubric that will be disseminated throughout the CTSA network to facilitate the screening of TS applications.
DISCUSSION/SIGNIFICANCE: Optimizing research processes is critical to ensure that scientific discoveries are integrated into clinical practice and public health policy as rapidly, efficiently, and equitably as possible. By appropriately identifying and funding TS projects, CTSA hubs can accelerate the impact of clinical and translational research.

Type
Research Management, Operations, and Administration
Creative Commons
CC BY-NC-ND 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s), 2023. The Association for Clinical and Translational Science