Published online by Cambridge University Press: 24 May 2024
Background: Intraoperative testing for awake craniotomies requires a multidisciplinary team that may not be available in low-resource settings. We explored the creation of an AI tool for automated testing.

Methods: We developed a NodeJS application, EloquentAid (https://www.eloquentaid.com/), to automate language testing. The workflow is as follows: users select an image-based naming task and verbally identify the image in English; the application transcribes the response using OpenAI’s Whisper transcription service; finally, the application evaluates response correctness. Feedback is provided through auditory and color signals. To assess reliability, we tested EloquentAid against a human rater using a 57-item test based on the Boston Naming Test. Participants were neurosurgery and neurology residents from the Philippines. Qualitative surveys were obtained post-test.

Results: A total of 798 observations were recorded (N=14). Human-application agreement was 60.52%; Cohen’s kappa was 0.31 (fair agreement). There were no false positive identifications by EloquentAid. Noun type was felt to affect human error (e.g., “knocker,” “yolk,” “trellis”), while accent and pronunciation were felt to affect EloquentAid errors.

Conclusions: EloquentAid is a promising tool to facilitate intraoperative testing and brain mapping using AI for speech recognition and response evaluation. Preliminary data show fair human-application agreement. Improvements could be made to test items and pronunciation recognition.
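The response-evaluation step described in the Methods could be sketched as follows. This is a minimal illustration only: the function names (`normalize`, `evaluateResponse`) and the matching strategy are assumptions, not EloquentAid's actual implementation, and the Whisper transcription call is omitted.

```javascript
// Hypothetical sketch of the correctness-evaluation step: compare a
// Whisper transcript against accepted answers for a naming-task item.

// Normalize a transcript: lowercase, strip punctuation, collapse whitespace.
function normalize(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z\s]/g, "")
    .replace(/\s+/g, " ")
    .trim();
}

// A response is scored correct if any accepted answer appears as a word
// in the normalized transcript (so "It's a trellis." matches "trellis").
function evaluateResponse(transcript, acceptedAnswers) {
  const words = normalize(transcript).split(" ");
  return acceptedAnswers.some((answer) => words.includes(normalize(answer)));
}

console.log(evaluateResponse("It's a trellis.", ["trellis"])); // true
console.log(evaluateResponse("a ladder?", ["trellis"]));       // false
```

In a real pipeline, the transcript passed in here would come from the Whisper transcription service, and the boolean result would drive the auditory and color feedback signals.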
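The inter-rater statistic reported in the Results, Cohen's kappa, can be computed from a 2x2 agreement table of human versus application scores. The counts below are illustrative only (chosen so the application-correct/human-incorrect cell is zero, mirroring the reported absence of false positives), not the study's data.

```javascript
// Cohen's kappa for two raters scoring each trial correct/incorrect.
// counts[i][j]: trials the human scored i and the app scored j
// (0 = incorrect, 1 = correct).
function cohensKappa(counts) {
  const n = counts.flat().reduce((a, b) => a + b, 0);
  const po = (counts[0][0] + counts[1][1]) / n; // observed agreement
  // Expected chance agreement from the marginal totals.
  const humanCorrect = (counts[1][0] + counts[1][1]) / n;
  const appCorrect = (counts[0][1] + counts[1][1]) / n;
  const pe = humanCorrect * appCorrect + (1 - humanCorrect) * (1 - appCorrect);
  return (po - pe) / (1 - pe);
}

// Hypothetical table: rows = human, columns = app; app never scores
// correct when the human scores incorrect (no false positives).
const table = [
  [15, 0],
  [30, 55],
];
console.log(cohensKappa(table).toFixed(2)); // → "0.35"
```

Values in the 0.21-0.40 range are conventionally read as "fair" agreement, which is how the abstract characterizes its kappa of 0.31.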