Journal of Cognitive Neuroscience
-
During speech communication, visual information may interact with the auditory system at various processing stages. Most notably, recent magnetoencephalography (MEG) data provided the first evidence for early and preattentive phonetic/phonological encoding of the visual data stream, prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. ⋯ Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual-phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables, disambiguated to /pa/ or /ta/ by the visual channel (a speaking face), served as test materials, alongside various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of the primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded to both speech and nonspeech motion. (ii) The inferior frontal and fusiform gyri of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) on hemodynamic activation during the presentation of speaking faces. Taken together with the previous MEG data, these results point to a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal "what" path may give rise to direct activation of "auditory objects." On the other hand, visual speech information appears to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information, such as the McGurk effect.
-
Humans commonly understand the unobservable mental states of others by observing their actions. Embodied simulation theories suggest that this ability may be based in areas of the fronto-parietal mirror neuron system, yet neuroimaging studies that explicitly investigate the human ability to draw mental state inferences point to the involvement of a "mentalizing" system consisting of regions that do not overlap with the mirror neuron system. For the present study, we developed a novel action identification paradigm that allowed us to explicitly investigate the neural bases of mentalizing observed actions. ⋯ Although areas of the mirror neuron system did show an enhanced response during action identification, their activity was not significantly modulated by the extent to which observers identified mental states. Instead, several regions of the mentalizing system, including dorsal and ventral aspects of medial pFC, the posterior cingulate cortex, and the temporal poles, were associated with mentalizing actions, whereas a single region in the left lateral occipito-temporal cortex was associated with mechanizing actions. These data suggest that embodied simulation is insufficient to account for the sophisticated mentalizing of which human beings are capable while observing another person, and that a different system, along the cortical midline and in the anterior temporal cortex, is involved in mentalizing an observed action.