
72 Bringing Neuropsychology to the Community: Adaptation of a Rey-Osterrieth Complex Figure Scoring System for Use in Large-Scale Community-Based Clinical Trials

Published online by Cambridge University Press:  21 December 2023

Rebecca Handsman*, Alyssa Verbalis, Alexis Khuu, Andrea Lopez, Lucy S McClellan, Cara E Pugliese, Lauren Kenworthy

Affiliation: Center for Autism Spectrum Disorders, Children’s National Hospital, Washington, DC, USA

*Correspondence: Rebecca Handsman, Center for Autism Spectrum Disorders, Children’s National Hospital [email protected]

Abstract

Objective:

The Rey-Osterrieth Complex Figure (ROCF) is a neuropsychological task used to measure visual-motor integration, visual memory, and executive functioning (EF) in autistic youth. The ROCF is a valued clinical tool because it provides insight into the way an individual approaches and organizes complex visual stimuli. The constructs measured by the ROCF, such as planning, organization, and working memory, are highly relevant for research in this population, but the standardized procedures for scoring the ROCF can be challenging to implement in large-scale clinical trials due to complex and lengthy scoring rubrics. We present preliminary data on an adaptation of an existing scoring system that provides quantifiable scores, can be implemented reliably, and reduces scoring time.

Participants and Methods:

Data were drawn from two large-scale clinical trials focusing on EF in autistic youth. All participants completed the ROCF following standard administration guidelines. The research team reviewed commonly used scoring systems and determined that the Boston Qualitative Scoring System (BQSS) was the best fit due to its strengths in measuring EF, the process-related variables it generates, and the available normative data. Initially, the full BQSS scoring system was used, which produced comprehensive scores but was not feasible due to the time required (approximately 1-1.5 hours per figure for research assistants to complete scoring). The BQSS short form was then used, which solved the timing problem but introduced greater subjectivity into the scores, impairing the team’s ability to become reliable. Independent reliability could not be calculated for this version because of the large number of discrepancies among scorers, who included 2 neuropsychologists and 4 research assistants. A novel checklist was then developed that combined aspects of both scoring systems to promote objectivity and reliability. In combination with this checklist, the team held weekly check-in meetings where challenging figures could be brought for discussion. Independent reliability was calculated among all research assistant team members (n=4) for the short form and the novel checklist. Reliability was calculated based on (1) whether the drawing qualified to be brought to the whole team and (2) individual scores on the checklist.
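The abstract does not specify the statistic used to quantify scorer agreement. Assuming simple pairwise percent agreement on binary checklist items, a minimal sketch of how such agreement could be computed is shown below (Python; all rater labels, figure IDs, and scores are hypothetical, not study data):

```python
# Illustrative sketch only: assumes reliability is pairwise percent agreement
# on binary checklist items. All data below are hypothetical.

# scores[rater][figure] = list of 0/1 checklist-item scores
scores = {
    "RA1": {"fig01": [1, 0, 1, 1], "fig02": [0, 1, 1, 0]},
    "RA2": {"fig01": [1, 0, 1, 0], "fig02": [0, 1, 1, 0]},
    "RA3": {"fig01": [1, 1, 1, 1], "fig02": [0, 1, 0, 0]},
    "RA4": {"fig01": [1, 0, 1, 1], "fig02": [0, 1, 1, 1]},
}

def pairwise_agreement(a, b):
    """Proportion of checklist items on which two raters gave the same score."""
    pairs = [(x, y) for fig in a for x, y in zip(a[fig], b[fig])]
    return sum(x == y for x, y in pairs) / len(pairs)

# Average each rater's agreement with every other rater.
for rater in scores:
    others = [r for r in scores if r != rater]
    mean_agree = sum(pairwise_agreement(scores[rater], scores[o]) for o in others) / len(others)
    print(f"{rater}: {mean_agree:.0%} average agreement with other raters")
```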

Results:

Independent reliability was calculated for 10 figures scored using the novel checklist by a team of 4 trained research assistants. All scorers achieved the 80% reliability threshold, with high average agreement across scorers (80-86%). Study team members reported that scoring took less time, averaging 30-45 minutes per figure.

Conclusions:

Inter-rater reliability was strong on the checklist the study team created, indicating its potential as a useful adaptation of the BQSS scoring system: it reduces time demands, making the tool feasible for use in large-scale clinical research studies, with initially positive reliability findings. The checklist was easy to use, required little training, and could be completed quickly. Future research should continue to examine the reliability of the checklist and the time it takes to complete. Additionally, the ROCF should be studied more broadly in research and examined as a potential outcome measure for large-scale research studies.

Type
Poster Session 08: Assessment | Psychometrics | Noncredible Presentations | Forensic
Copyright
Copyright © INS. Published by Cambridge University Press, 2023