
Text and speech translation by means of subsequential transducers

Published online by Cambridge University Press:  01 December 1996

J. M. VILAR
Affiliation:
Unidad Predepart. de Informática, Campus de Penyeta Roja, Universitat Jaume I, E-12071 Castellón de la Plana (SPAIN)
V. M. JIMÉNEZ
Affiliation:
Depto. de Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Camino de Vera s/n, E-46071 Valencia (SPAIN). e-mail: [email protected]
J. C. AMENGUAL
Affiliation:
Unidad Predepart. de Informática, Campus de Penyeta Roja, Universitat Jaume I, E-12071 Castellón de la Plana (SPAIN)
A. CASTELLANOS
Affiliation:
Unidad Predepart. de Informática, Campus de Penyeta Roja, Universitat Jaume I, E-12071 Castellón de la Plana (SPAIN)
D. LLORENS
Affiliation:
Depto. de Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Camino de Vera s/n, E-46071 Valencia (SPAIN). e-mail: [email protected]
E. VIDAL
Affiliation:
Depto. de Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Camino de Vera s/n, E-46071 Valencia (SPAIN). e-mail: [email protected]

Abstract

The full paper explores the possibility of using Subsequential Transducers (SSTs), a finite-state model, in limited-domain translation tasks, both for text and speech input. A distinctive advantage of SSTs is that they can be efficiently learned from sets of input-output examples by means of OSTIA, the Onward Subsequential Transducer Inference Algorithm (Oncina et al. 1993). In this work, a technique is proposed to increase the performance of OSTIA by reducing the asynchrony between the input and output sentences; the use of error-correcting parsing to increase the robustness of the models is explored; and an integrated architecture for speech-input translation by means of SSTs is described.
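To make the model concrete, the following is a minimal sketch of a subsequential transducer as a data structure: a deterministic machine that emits an output string on each transition and a final output string when the input ends, so that output can be deferred until the input disambiguates it. The class, the toy Spanish-to-English vocabulary, and the state layout are illustrative assumptions, not the paper's implementation or the transducers produced by OSTIA.

```python
class SubsequentialTransducer:
    """Minimal sketch of a subsequential transducer (SST); illustrative only."""

    def __init__(self, initial_state, transitions, final_outputs):
        # transitions: (state, input_symbol) -> (next_state, list of output words)
        # final_outputs: state -> list of output words appended when input ends there
        self.initial_state = initial_state
        self.transitions = transitions
        self.final_outputs = final_outputs

    def translate(self, input_words):
        state, output = self.initial_state, []
        for word in input_words:
            if (state, word) not in self.transitions:
                return None  # input sentence not accepted by the transducer
            state, emitted = self.transitions[(state, word)]
            output.extend(emitted)
        if state not in self.final_outputs:
            return None  # input ended in a non-accepting state
        return output + self.final_outputs[state]


# Toy example (hypothetical vocabulary): "una habitación doble" -> "a double room".
# Note how no output is emitted on "habitación"; it is delayed until the next
# word resolves the translation, which is the asynchrony issue the paper addresses.
sst = SubsequentialTransducer(
    initial_state=0,
    transitions={
        (0, "una"): (1, ["a"]),
        (1, "habitación"): (2, []),
        (2, "doble"): (3, ["double", "room"]),
        (2, "individual"): (3, ["single", "room"]),
    },
    final_outputs={3: []},
)
print(sst.translate(["una", "habitación", "doble"]))  # ['a', 'double', 'room']
```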

Type
Research Article
Copyright
© 1997 Cambridge University Press
