
350 Investigating the Architecture of Speech Processing Pathways in the Brain

Published online by Cambridge University Press:  19 April 2022

Plamen Nikolov, Skrikanth Damera, Noah Steinberg, Naama Zur, Lillian Chang, Kyle Yoon, Marcus Dreux, Peter Turkeltaub, Josef Rauschecker and Maximilian Riesenhuber
Affiliation: Georgetown University

Abstract


OBJECTIVES/GOALS: Speech production requires mapping between sound-based and motor-based neural representations of a word, accomplished by learning internal models. However, the neural bases of these internal models remain unclear. The aim of this study is to provide experimental evidence for these internal models in the brain during speech production.

METHODS/STUDY POPULATION: 16 healthy human adults were recruited for this electroencephalography (EEG) speech study. 20 English pseudowords were designed to vary in confusability along specific features of articulation (place vs. manner). All words were controlled for length and voicing. Three task conditions were performed: speech perception, covert speech production, and overt speech production. EEG was recorded using a 64-channel BioSemi ActiveTwo system. EMG was recorded over the orbicularis oris inferior and neck strap muscles. Overt productions were recorded with a high-quality microphone to determine overt production onset; EMG was used to determine covert production onset. Representational similarity analysis (RSA) was used to probe the sound- and motor-based neural representations over sensors and time for each task.

RESULTS/ANTICIPATED RESULTS: Production (motor) and perception (sound) neural representations were compared using a cross-validated squared Euclidean distance metric. The RSA results in the speech perception task show strong selectivity around 150 ms, compatible with recent electrocorticography findings in human superior temporal gyrus. Parietal sensors showed a large difference for motor-based neural representations, indicating strong encoding of production-related processes, as hypothesized by previous studies of the ventral and dorsal stream model of language. Temporal sensors, however, showed a large change for both motor- and sound-based neural representations. This is a surprising result, since temporal regions are believed to be engaged primarily in perception (sound-based) processes.

DISCUSSION/SIGNIFICANCE: This study used neuroimaging (EEG) and advanced multivariate pattern analysis (RSA) to test models of production-based (motor) and perception-based (sound) neural representations in three different speech task conditions. These results demonstrate the feasibility of this approach for mapping how perception and production processes interact in the brain.
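The cross-validated squared Euclidean distance mentioned in the results can be illustrated with a minimal sketch. The function name, array shapes, and fold structure below are illustrative assumptions, not the authors' actual analysis code; the key idea is that computing the difference vectors on independent data folds yields an unbiased distance estimate (one that averages to zero, rather than a positive value, when two conditions are truly indistinguishable):

```python
import numpy as np

def cv_sq_euclidean(x_a, x_b):
    """Cross-validated squared Euclidean distance between two conditions.

    x_a, x_b: arrays of shape (n_folds, n_sensors), one pattern per fold.
    The inner product is taken between difference vectors from different
    folds, so fold-independent noise cancels in expectation and the
    estimate can legitimately be negative for indistinguishable conditions.
    """
    n_folds = x_a.shape[0]
    dists = []
    for i in range(n_folds):
        for j in range(n_folds):
            if i == j:
                continue  # never pair a fold with itself (that reintroduces noise bias)
            d_i = x_a[i] - x_b[i]
            d_j = x_a[j] - x_b[j]
            dists.append(d_i @ d_j)
    return float(np.mean(dists))

# Two clearly separated conditions give a positive distance;
# identical conditions give (near) zero.
sep = cv_sq_euclidean(np.ones((2, 3)), np.zeros((2, 3)))
same = cv_sq_euclidean(np.ones((2, 3)), np.ones((2, 3)))
```

In an RSA pipeline, such pairwise distances over all condition pairs (here, pseudowords) at each time point and sensor group form the neural dissimilarity matrix that is then compared against model matrices for place and manner of articulation.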

Type
Valued Approaches
Creative Commons
CC BY-NC-ND
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s), 2022. The Association for Clinical and Translational Science