Questionnaires for eliciting evaluation data from users of interactive question answering systems
Published online by Cambridge University Press: 01 January 2009
Abstract
Evaluating interactive question answering (QA) systems with real users can be challenging: traditional evaluation measures based on the relevance of returned items are difficult to employ because relevance judgments can be unstable in multi-user evaluations. The work reported in this paper evaluates the effectiveness of three questionnaires in distinguishing among a set of interactive QA systems: a Cognitive Workload Questionnaire (NASA TLX), and Task and System Questionnaires customized to a specific interactive QA application. The questionnaires were evaluated with four systems, seven analysts, and eight scenarios during a two-week workshop. Overall, results demonstrate that all three questionnaires are effective at distinguishing among systems, with the Task Questionnaire being the most sensitive. Results also provide initial support for the validity and reliability of the questionnaires.
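For readers unfamiliar with the NASA TLX instrument named in the abstract, the sketch below illustrates its standard weighted scoring procedure as documented in NASA's published TLX manual: six subscales rated 0-100, weighted by tallies from 15 pairwise comparisons. The sample ratings and weights are hypothetical and are not taken from this study.

```python
# Minimal sketch of standard NASA TLX weighted scoring.
# Subscale names and the weighting scheme follow NASA's TLX manual;
# the sample responses below are hypothetical.

DIMENSIONS = ["mental", "physical", "temporal",
              "performance", "effort", "frustration"]

def tlx_overall_workload(ratings: dict[str, float],
                         weights: dict[str, int]) -> float:
    """Overall workload = weighted mean of the six subscale ratings.

    ratings: each dimension rated on a 0-100 scale.
    weights: tallies from the 15 pairwise comparisons; must sum to 15.
    """
    assert sum(weights.values()) == 15, "pairwise tallies must total 15"
    return sum(ratings[d] * weights[d] for d in DIMENSIONS) / 15.0

# Hypothetical responses from one analyst after one scenario:
ratings = {"mental": 70, "physical": 10, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 35}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}

print(f"Overall workload: {tlx_overall_workload(ratings, weights):.1f}")
# -> Overall workload: 59.3
```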
- Type: Papers
- Information: Natural Language Engineering, Volume 15, Special Issue 1: Interactive Question Answering, January 2009, pp. 119-141
- Copyright: © Cambridge University Press 2008