
Parameter optimization for machine-learning of word sense disambiguation

Published online by Cambridge University Press: 22 January 2003

V. HOSTE
Affiliation:
CNTS Language Technology Group, University of Antwerp, Belgium
I. HENDRICKX
Affiliation:
ILK Computational Linguistics, Tilburg University, The Netherlands
W. DAELEMANS
Affiliation:
CNTS Language Technology Group, University of Antwerp, Belgium; ILK Computational Linguistics, Tilburg University, The Netherlands
A. VAN DEN BOSCH
Affiliation:
ILK Computational Linguistics, Tilburg University, The Netherlands; WhizBang! Labs – Research, Pittsburgh, PA, USA

Abstract

Various Machine Learning (ML) approaches have been demonstrated to produce relatively successful Word Sense Disambiguation (WSD) systems. Since there are still unexplained differences among the performance measurements of different algorithms, it is warranted to deepen the investigation into which algorithm has the right ‘bias’ for this task. In this paper, we show that this is not easy to accomplish, due to intricate interactions between information sources, parameter settings, and properties of the training data. We investigate the impact of parameter optimization on generalization accuracy in a memory-based learning approach to English and Dutch WSD. A ‘word-expert’ architecture was adopted, yielding a set of classifiers, each specialized in one single wordform. The experts consist of multiple memory-based learning classifiers, each taking different information sources as input, combined in a voting scheme. We optimized the architectural and parametric settings for each individual word-expert by performing cross-validation experiments on the learning material. The results of these experiments show that varying both the algorithmic parameters and the information sources available to the classifiers leads to large fluctuations in accuracy. We demonstrate that optimization per word-expert leads to an overall significant improvement in the generalization accuracies of the produced WSD systems.
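To illustrate the kind of per-word optimization the abstract describes, the sketch below shows a hypothetical word-expert built from memory-based classifiers trained on different information sources and combined by majority voting, with its parameters selected by cross-validation. The paper used TiMBL-style memory-based learners; here a scikit-learn k-NN classifier stands in, and the feature views, parameter grid, and data shapes are illustrative assumptions, not the authors' actual setup.

```python
# Hypothetical sketch of per-word parameter optimization for a WSD "word-expert".
# A scikit-learn k-NN classifier stands in for the memory-based (TiMBL-style)
# learners used in the paper; all names below are illustrative assumptions.
from collections import Counter
from itertools import product

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

# Assumed feature layout for one ambiguous wordform: columns 0-3 encode the
# local context, 4-9 keyword features, 10-12 part-of-speech features.
FEATURE_VIEWS = {
    "local_context": slice(0, 4),
    "keywords": slice(4, 10),
    "pos": slice(10, 13),
}

# Illustrative parameter grid (number of neighbours, distance weighting).
PARAM_GRID = {"n_neighbors": [1, 3, 7, 15], "weights": ["uniform", "distance"]}


def vote(predictions):
    """Majority vote over the per-view predictions for one test instance."""
    return Counter(predictions).most_common(1)[0][0]


def ensemble_accuracy(X, y, n_neighbors, weights, n_splits=10):
    """Cross-validated accuracy of the voted word-expert for one parameter setting."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    correct = 0
    for train_idx, test_idx in skf.split(X, y):
        view_preds = []
        for cols in FEATURE_VIEWS.values():
            clf = KNeighborsClassifier(n_neighbors=n_neighbors, weights=weights)
            clf.fit(X[train_idx][:, cols], y[train_idx])
            view_preds.append(clf.predict(X[test_idx][:, cols]))
        voted = [vote(p) for p in zip(*view_preds)]
        correct += sum(v == t for v, t in zip(voted, y[test_idx]))
    return correct / len(y)


def optimize_word_expert(X, y):
    """Pick the parameter setting with the best cross-validated accuracy."""
    best = None
    for k, w in product(PARAM_GRID["n_neighbors"], PARAM_GRID["weights"]):
        acc = ensemble_accuracy(X, y, n_neighbors=k, weights=w)
        if best is None or acc > best[0]:
            best = (acc, {"n_neighbors": k, "weights": w})
    return best


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((200, 13))          # toy instances for one wordform
    y = rng.integers(0, 3, size=200)   # toy sense labels
    acc, params = optimize_word_expert(X, y)
    print(f"best setting {params} with cross-validated accuracy {acc:.3f}")
```

In this sketch the search is repeated independently for every wordform, which mirrors the paper's finding that the best-performing settings differ from word to word rather than being fixed globally.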

Type
Research Article
Copyright
2002 Cambridge University Press
