While categorical features seem to be the appropriate means of expressing sound patterns within languages, they do not seem adequate to describe the sounds speakers actually produce. Examination of the speech signal fails to reveal objective, discrete phonological segments. Similarly, segments are not directly observable in the flow of articulatory movements, whose details vary with an individual speaker's articulatory strategies. Because no reliable relationship holds between segments and speech sounds, a plausible transition from feature representations to the actual acoustic signal has proven elusive. This paper utilises a theory of information processing known as PARALLEL DISTRIBUTED PROCESSING (PDP), implemented in networks also called neural networks, to propose a model which begins to express this transition: translating the feature bundles indicated in a broad phonetic transcription into continuous, potentially variable articulator behaviour.
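The sketch below is not the model described in the paper; it is a minimal, illustrative example of the kind of mapping the abstract refers to: a small feed-forward network that takes a bundle of binary phonological features and produces continuous articulator parameters, with interpolation between successive segments yielding smooth trajectories. The feature inventory, articulator dimensions, and network sizes are all assumptions chosen for illustration.

```python
# Minimal sketch (assumed, not the authors' model): binary feature bundles ->
# continuous articulator parameters via a small feed-forward network.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature inventory for one segment, e.g. [+voice, -nasal, +labial, ...]
FEATURES = ["voice", "nasal", "labial", "coronal", "high", "low", "round", "continuant"]
# Hypothetical continuous articulator parameters, normalised to the 0..1 range.
ARTICULATORS = ["lip_aperture", "tongue_height", "tongue_backness", "velum", "glottis"]

def feature_vector(spec):
    """Encode a {feature: '+'/'-'} specification as a vector of 1s and 0s."""
    return np.array([1.0 if spec.get(f, "-") == "+" else 0.0 for f in FEATURES])

# Randomly initialised weights stand in for weights a trained network would have learned.
W_hidden = rng.normal(scale=0.5, size=(len(FEATURES), 6))
W_out = rng.normal(scale=0.5, size=(6, len(ARTICULATORS)))

def articulator_targets(spec):
    """Forward pass: feature bundle -> hidden layer -> articulator parameter targets."""
    hidden = np.tanh(feature_vector(spec) @ W_hidden)
    return 1.0 / (1.0 + np.exp(-(hidden @ W_out)))  # squash outputs into 0..1

def trajectory(specs, steps_per_segment=10):
    """Interpolate between successive segments' targets to give continuous,
    potentially variable articulator behaviour rather than discrete jumps."""
    targets = [articulator_targets(s) for s in specs]
    frames = []
    for a, b in zip(targets, targets[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
            frames.append((1 - t) * a + t * b)
    frames.append(targets[-1])
    return np.vstack(frames)

if __name__ == "__main__":
    # Illustrative /ba/-like sequence: a voiced labial stop followed by a low vowel.
    b = {"voice": "+", "labial": "+", "continuant": "-"}
    a = {"voice": "+", "low": "+", "continuant": "+"}
    print(trajectory([b, a]).shape)  # (frames, number of articulator parameters)
```

In this toy setup the discrete feature specification only fixes target values; the continuous trajectory between targets, and any variation introduced by different interpolation or weight settings, is what makes the resulting articulator behaviour gradient rather than categorical.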