Objectives: The diagnostic accuracy of the CAGE questionnaire for alcohol abuse has not been fully established to date because of the varied and inconclusive gold standards used as diagnostic criteria. CAGE has also been reported to miss almost half of risk-drinkers because of inadequately set criteria for the positive recognition of alcohol abuse. This study aims to establish the diagnostic accuracy of CAGE in different treatment settings.
Methods: A hybrid of receiver operating characteristic (ROC) analysis and the Taguchi method was used, as this approach has been shown to evaluate diagnostic performance and accuracy in hypothetical clinical settings. Data from three clinical treatment settings, i.e., (i) general medicine outpatients, (ii) medical inpatients, and (iii) psychiatric inpatients, were analyzed by a step-wise application of a manageable number of statistical indices: the area under the ROC curve (AUC), the leveling factor (p′), and signal-to-noise ratios (S/N; standardized S/N [SS/N]).
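As an illustration of the ROC component of this hybrid approach, the AUC can be computed by the trapezoidal rule from the (1 − specificity, sensitivity) pairs obtained at each candidate CAGE cutoff. The sketch below uses hypothetical sensitivity/specificity values, not the study's data:

```python
# Sketch of the ROC/AUC step with HYPOTHETICAL sensitivity/specificity
# values at each CAGE cutoff (score >= cutoff counts as positive);
# these numbers are illustrative only, not the study's data.
cutoffs = [0, 1, 2, 3, 4]
sensitivity = [1.00, 0.90, 0.75, 0.50, 0.25]
specificity = [0.00, 0.60, 0.85, 0.95, 0.99]

# ROC points: x = false-positive rate (1 - specificity), y = sensitivity.
# The corner points (0, 0) and (1, 1) anchor the curve.
points = sorted({(0.0, 0.0), (1.0, 1.0)} |
                {(round(1 - sp, 4), se)
                 for sp, se in zip(specificity, sensitivity)})

# Area under the ROC curve by the trapezoidal rule.
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(points, points[1:]))
print(f"AUC = {auc:.3f}")  # 0.855 for these illustrative numbers
```

In the paper's design, similar AUCs across settings would then be examined further, since curves with equal areas can still cross and imply different optimal cutoffs.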
Results: The selected settings yielded similar AUCs but exhibited different trade-offs on the ROC curves, signaling the presence of different critical CAGE scores. Analysis of the sensitivity and specificity data of the three settings by p′, S/N, and SS/N and their interdependence yielded critical CAGE scores of 1, 1, and 2, with high diagnostic accuracy levels of 76.84 percent, 86 percent, and 76.84 percent, respectively.
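Accuracy levels such as those reported combine a cutoff's sensitivity and specificity with the prevalence of alcohol abuse in the setting. A minimal sketch of that standard relation, using invented illustrative numbers rather than the study's data:

```python
# Overall diagnostic accuracy from sensitivity, specificity, and prevalence:
# P(correct) = sens * prev + spec * (1 - prev).
def diagnostic_accuracy(sensitivity: float, specificity: float,
                        prevalence: float) -> float:
    """Fraction of all patients classified correctly at a given cutoff."""
    return sensitivity * prevalence + specificity * (1 - prevalence)

# Illustrative values only (not the study's data): a setting where 30%
# of patients are true alcohol abusers.
acc = diagnostic_accuracy(0.80, 0.75, 0.30)
print(f"accuracy = {acc:.1%}")  # 76.5%
```

This makes explicit why the same questionnaire can show different accuracy levels across outpatient, medical inpatient, and psychiatric inpatient settings: both the operating point and the prevalence differ.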
Conclusions: Setting these critical CAGE scores as the minimum detection levels of alcohol abuse makes early intervention possible before the onset of serious alcohol-related problems. This will lower patients' health-care costs and, in addition, reduce the psychological and social burdens that alcohol abuse imposes on both the patient and society. With its critical scores reliably identified and its diagnostic accuracy fully determined, CAGE can now substantially improve the detection rate of problem-drinking individuals.