Book contents
- Frontmatter
- Contents
- List of figures
- List of tables
- List of contributors
- Acknowledgments
- Introduction
- Part I Theories and models
- Part II Research results: components of the motor system for speech
- Part III Wider perspectives
- Part IV Instrumental techniques
- 10 Palatography
- 11 Imaging techniques
- 12 Electromagnetic articulography
- 13 Electromyography
- 14 Transducers for investigating velopharyngeal function
- 15 Techniques for investigating laryngeal articulation
- 16 Acoustic analysis
- References
- Index
16 - Acoustic analysis
Published online by Cambridge University Press: 22 September 2009
Summary
Acoustic analysis techniques
Techniques for acoustic analysis are more readily available than techniques for articulatory analysis, which explains why the former have been used more extensively than the latter in studies of coarticulation. Criteria for data analysis on spectrograms and on acoustic waveforms were established early; nowadays the availability of fast and powerful acoustic analysis techniques allows large amounts of data to be processed, which helps to improve our knowledge of the factors inducing variability in the acoustic signal. The explanatory power of coarticulatory effects may be increased through synchronous acoustic and articulatory measures; moreover, inferences about articulatory mechanisms may be drawn from acoustic data on coarticulatory effects using general principles of the acoustic theory of speech production (keeping in mind that articulatory–acoustic relationships may be non-linear).
Traditionally, formant frequency information for vowels, laterals, rhotics, glides and nasals has been obtained from broad-band spectrograms at the centre of the formant bands or from the peaks of narrow-band spectral sections; narrow-band spectral sections have mostly been used to gather frequency information during the noise period of fricatives, affricates and stop bursts. Formant detection on spectrograms may be problematic when formant bands are too weak, when two formants come close together (e.g. F2 and F3 for /i/) or when F1 has a very low frequency (e.g. in the case of /i/).
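As an illustration of the trade-off between the two representations, the sketch below (not from the chapter; the file name and parameter values are illustrative assumptions) computes a broad-band and a narrow-band spectrogram of the same recording with scipy: the short analysis window resolves formant bands in time, while the long window resolves individual harmonics and underlies narrow-band spectral sections.

```python
# Minimal sketch: broad-band vs. narrow-band spectrographic analysis.
# Assumes a mono recording "utterance.wav"; window lengths are typical
# textbook values, not values prescribed by the chapter.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("utterance.wav")   # hypothetical mono recording
x = x.astype(float)

# Broad-band: ~3 ms Hamming window -> wide analysis bandwidth (~300 Hz),
# good time resolution; formants appear as dark bands.
f_bb, t_bb, S_bb = spectrogram(x, fs, window="hamming",
                               nperseg=int(0.003 * fs),
                               noverlap=int(0.002 * fs))

# Narrow-band: ~30 ms Hamming window -> narrow analysis bandwidth (~45 Hz),
# good frequency resolution; individual harmonics are resolved, and a single
# column of S_nb corresponds to a narrow-band spectral section.
f_nb, t_nb, S_nb = spectrogram(x, fs, window="hamming",
                               nperseg=int(0.030 * fs),
                               noverlap=int(0.025 * fs))
```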
Nowadays formant frequency trajectories are usually tracked using the all-pole LPC model (Linear Predictive Coding).
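A minimal sketch of this approach, assuming a Python environment with numpy and librosa (the function name, file name and threshold values are illustrative, not taken from the chapter): an all-pole LPC polynomial is fitted to a windowed frame, its roots are converted to candidate frequencies and bandwidths, and plausible formants are retained.

```python
# Sketch of LPC-based formant estimation for one analysis frame.
# Assumes a mono float signal `x` at sampling rate `fs`.
import numpy as np
import librosa

def estimate_formants(x, fs, order=None):
    """Estimate formant frequencies (Hz) from one frame via LPC root-solving."""
    if order is None:
        order = 2 + int(fs / 1000)           # rule of thumb: 2 poles per kHz + 2
    frame = x * np.hamming(len(x))           # taper the frame
    a = librosa.lpc(frame, order=order)      # all-pole (LPC) coefficients
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]        # one root per conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    bandwidths = -(fs / np.pi) * np.log(np.abs(roots))
    # keep plausible formant candidates: above ~90 Hz, bandwidth under ~400 Hz
    return sorted(f for f, b in zip(freqs, bandwidths) if f > 90 and b < 400)

# Hypothetical usage: a 25 ms frame taken 100 ms into a vowel recording.
y, sr = librosa.load("vowel.wav", sr=None)
start = int(0.100 * sr)
print(estimate_formants(y[start:start + int(0.025 * sr)], sr))
```

Applying the same estimate frame by frame along the signal yields the formant frequency trajectories referred to above.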
- Type: Chapter
- Information: Coarticulation: Theory, Data and Techniques, pp. 322–336. Cambridge University Press. Print publication year: 1999