
Reliable Change on Neuropsychological Tests in the Uniform Data Set

Published online by Cambridge University Press:  03 August 2015

Brandon E. Gavett*
Affiliation:
University of Colorado, Colorado Springs, Department of Psychology, Colorado Springs, Colorado
Lee Ashendorf
Affiliation:
Boston University School of Medicine, Department of Psychiatry, Boston, Massachusetts
Ashita S. Gurnani
Affiliation:
University of Colorado, Colorado Springs, Department of Psychology, Colorado Springs, Colorado
*Correspondence and reprint requests to: Brandon E. Gavett, UCCS Department of Psychology, 1420 Austin Bluffs Parkway, Colorado Springs, CO 80918. E-mail: [email protected]

Abstract

Longitudinal normative data obtained from a robust elderly sample (i.e., one believed to be free from neurodegenerative disease) are sparse. The purpose of the present study was to develop reliable change indices (RCIs) that can assist with the interpretation of test score changes relative to a healthy sample of older adults (ages 50+). Participants were 4217 individuals who completed at least three annual evaluations at one of 34 past and present Alzheimer’s Disease Centers throughout the United States. All participants were diagnosed as cognitively normal at every study visit; the number of visits per participant ranged from three to nine, at approximately annual intervals. One-year RCIs were calculated for 11 neuropsychological variables in the Uniform Data Set by regressing follow-up test scores onto baseline test scores, age, education, visit number, post-baseline assessment interval, race, and sex in a linear mixed effects regression framework. In addition, the cumulative frequency distributions of raw score changes were examined to describe the base rates of test score changes. Baseline test score, age, education, and race were robust predictors of follow-up test scores across most tests. The effects of maturation (aging) were more pronounced on tests related to attention and executive functioning, whereas practice effects were more pronounced on tests of episodic and semantic memory. Interpretation of longitudinal changes on 11 cognitive test variables can be facilitated through the use of reliable change intervals and base rates of score changes in this robust sample of older adults. A Web-based calculator is provided to assist neuropsychologists with interpretation of longitudinal change. (JINS, 2015, 21, 558–567)
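The regression-based RCI and base-rate ideas described in the abstract can be sketched in a few lines. This is a deliberately simplified illustration, not the authors' method: the study regressed follow-up scores onto baseline score, age, education, visit number, assessment interval, race, and sex in a linear mixed-effects model, whereas the sketch below predicts follow-up from baseline alone with ordinary least squares, and all data are simulated.

```python
# Simplified, hypothetical sketch of a regression-based reliable change index
# (RCI) and of base rates from the cumulative distribution of raw changes.
# The actual study used a linear mixed-effects model with many more covariates.
import math
import random

random.seed(0)

# Simulated test scores: follow-up = baseline + practice effect (+1) + noise
baseline = [random.gauss(50, 10) for _ in range(500)]
followup = [b + 1 + random.gauss(0, 3) for b in baseline]

# Closed-form OLS for a single predictor (baseline score)
n = len(baseline)
mx = sum(baseline) / n
my = sum(followup) / n
sxx = sum((x - mx) ** 2 for x in baseline)
sxy = sum((x - mx) * (y - my) for x, y in zip(baseline, followup))
slope = sxy / sxx
intercept = my - slope * mx

# Residual SD of the regression sets the width of the reliable change interval
resid = [y - (intercept + slope * x) for x, y in zip(baseline, followup)]
sd_resid = math.sqrt(sum(r ** 2 for r in resid) / (n - 2))

def reliable_change_interval(baseline_score, z=1.645):
    """90% interval for the expected follow-up score; an observed score
    outside it suggests reliable decline or improvement."""
    pred = intercept + slope * baseline_score
    return pred - z * sd_resid, pred + z * sd_resid

lo, hi = reliable_change_interval(50)
print(f"90% reliable change interval for baseline=50: ({lo:.1f}, {hi:.1f})")

# Base rates: the 5th percentile of the raw-change distribution marks a
# decline larger than ~95% of the healthy sample showed
changes = sorted(f - b for b, f in zip(baseline, followup))
cut_decline = changes[int(0.05 * n)]
print(f"5th percentile of raw change: {cut_decline:.1f}")
```

The interval here reflects only regression to the mean toward the sample average plus the practice effect built into the simulated data; the published RCIs additionally adjust each prediction for demographics and retest interval.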

Type: Research Articles

Copyright © The International Neuropsychological Society 2015

