
27 Green’s Word Memory Test (WMT) Immediate Recall as a Screening Tool for Performance Invalidity

Published online by Cambridge University Press:  21 December 2023

Jonathan D. Sober*, Nicholas J. Pastorek, J. Parks Fillauer, Brian I. Miller, Cheyanne C. Barba
Affiliation: Michael E. DeBakey VAMC, Houston, TX, USA
*Correspondence: Jonathan D. Sober, Michael E. DeBakey VAMC, [email protected]

Abstract

Objective:

Assessment of performance validity during neuropsychological evaluation is essential for reliably interpreting cognitive test scores. Prior studies (Webber et al., 2018; Wisdom et al., 2012) have validated abbreviated measures, such as Trial 1 (T1) of the Test of Memory Malingering (TOMM), for detecting invalid performance. Only one study known to these authors (Bauer et al., 2007) has examined the utility of Green’s Word Memory Test (WMT) immediate recall (IR) as a screening tool for invalid performance. This study examines WMT IR as an independent indicator of performance validity in a veteran population with a history of mild TBI (mTBI).

Participants and Methods:

Participants were 211 OEF/OIF/OND veterans (Mage = 32.1, SD = 7.4; Medu = 13.1, SD = 1.64; 94.8% male; 67.8% White) with a history of mTBI who completed a comprehensive neuropsychological evaluation at one of five participating VA Medical Centers. Performance validity was assessed using validated cut scores from the following measures: WMT IR and delayed recall (DR); TOMM T1; WAIS-IV Reliable Digit Span; CVLT-II Forced Choice raw score; Wisconsin Card Sorting Test failure to maintain set; and the Rey Memory for Fifteen Items Test combination score. Sensitivity and specificity were calculated for each IR cut score against failure on DR. Sensitivity and specificity were also calculated for each WMT IR cut score against failure of at least one additional performance validity test (PVT; excluding DR), failure of two or more PVTs, and failure of three or more PVTs, respectively.
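The classification metrics described above follow directly from a 2x2 contingency table: each examinee is classified as failing the candidate IR cut score (score at or below the cut) or not, and as failing the criterion (e.g., WMT DR) or not. A minimal sketch of that computation, using hypothetical data and a hypothetical helper name (nothing here reproduces the study's actual dataset or analysis code):

```python
# Minimal sketch (hypothetical data): sensitivity and specificity of a
# WMT IR cut score against a binary criterion such as WMT DR failure.

def sens_spec(ir_scores, criterion_fail, cut=82.5):
    """Classify IR <= cut as a screening failure, compare with criterion.
    Sensitivity = true positives / all criterion failures;
    specificity = true negatives / all criterion passes."""
    tp = sum(1 for s, f in zip(ir_scores, criterion_fail) if s <= cut and f)
    fn = sum(1 for s, f in zip(ir_scores, criterion_fail) if s > cut and f)
    tn = sum(1 for s, f in zip(ir_scores, criterion_fail) if s > cut and not f)
    fp = sum(1 for s, f in zip(ir_scores, criterion_fail) if s <= cut and not f)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical example: four examinees' IR percent-correct scores and
# whether each failed the DR criterion.
ir = [60.0, 85.0, 90.0, 80.0]
dr_fail = [True, False, False, True]
print(sens_spec(ir, dr_fail))  # (1.0, 1.0)
```

Sweeping `cut` over candidate values (82.5%, 65%, 60%, 57.5%, ...) and recording both metrics at each value is one straightforward way to produce the cut-score comparisons reported in the Results.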

Results:

Based on the standard WMT IR cut score (i.e., 82.5%), 46.8% of participants failed to meet the cutoff for adequate performance validity (M = 81.8%, SD = 17.7%); 50.2% of participants failed to meet criteria based on the standard WMT DR cut score (M = 79.8%, SD = 18.6%). A cut score of 82.5% or below on WMT IR correctly identified 82.4% of participants who performed below the cut score on DR (i.e., sensitivity), with a specificity of 94.2%. Examination of IR cutoffs against failure of one or more other PVTs revealed that the standard cut score of 82.5% or below had a sensitivity of 78.2% and a specificity of 72.4%, whereas a cut score of 65% or below had a sensitivity of 41% and a specificity of 91.3%. Similarly, a cut score of 60% or below had a sensitivity of 45.7% and a specificity of 93.1% against failure of two or more additional PVTs, whereas a cut score of 57.5% or below had a sensitivity of 57.9% and a specificity of 90.9% when failure of three or more PVTs was the criterion.

Conclusions:

Results indicated that a cut score of 82.5% or below on WMT IR may be sufficient to detect invalid performance when WMT DR is used as the criterion. Furthermore, WMT IR alone, with adjusted cut scores, appears to be a reasonable means of assessing performance validity relative to other PVTs. The sensitivity and specificity of WMT IR scores may have been adversely affected by the lower sensitivity of the other PVTs for independently identifying invalid performance.

Type
Poster Session 08: Assessment | Psychometrics | Noncredible Presentations | Forensic
Copyright
Copyright © INS. Published by Cambridge University Press, 2023