
Introduction to the special issue on evaluating word sense disambiguation systems

Published online by Cambridge University Press:  22 January 2003

PHILIP EDMONDS
Affiliation: Sharp Laboratories of Europe, Oxford Science Park, Oxford OX4 4GB, UK. e-mail: [email protected]
ADAM KILGARRIFF
Affiliation: Information Technology Research Institute, University of Brighton, Lewes Road, Brighton BN2 4GJ, UK. e-mail: [email protected]

Abstract

Has system performance on Word Sense Disambiguation (WSD) reached a limit? Automatic systems do not perform nearly as well as humans on the task, and from the results of the SENSEVAL exercises, recent improvements in system performance appear negligible or even negative. Still, systems do perform much better than the baselines, so something is being done right. System evaluation is crucial to explain these results and to show the way forward. Indeed, the success of any project in WSD is tied to the evaluation methodology used, and especially to the formalization of the task that the systems perform. The evaluation of WSD has turned out to be as difficult as designing the systems in the first place.

Type: Research Article
Copyright: © 2002 Cambridge University Press
