Journal of Cognitive Neuroscience
-
In contrast to visual object processing, relatively little is known about how the human brain processes everyday real-world sounds, transforming highly complex acoustic signals into representations of meaningful events or auditory objects. We recently reported a fourfold cortical dissociation for representing action (nonvocalization) sounds correctly categorized as having been produced by human, animal, mechanical, or environmental sources. However, it was unclear how consistent those network representations were across individuals, given potential differences in each participant's degree of familiarity with the studied sounds. ⋯ Despite some variation in the networks for environmental sounds, our results verified the stability of a fourfold dissociation of category-specific networks for real-world action sounds both before and after familiarity training. Additionally, we identified cortical regions parametrically modulated by each of the three high-level perceptual sound attributes. We propose that these attributes contribute to the network-level encoding of category-specific acoustic knowledge representations.
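The parametric modulation referred to here is a standard fMRI GLM device: in addition to an event regressor marking each sound, a regressor scaled by a trial-wise attribute value is entered into the design, and voxels whose response scales with that attribute load on it. The sketch below is purely illustrative; the attribute values, event timings, TR, and the simplified HRF are assumptions, not the study's analysis parameters.

```python
import numpy as np

# Illustrative GLM sketch of parametric modulation (not the study's pipeline).
tr, n_scans = 2.0, 200                      # assumed TR (s) and run length (volumes)
frame_times = np.arange(n_scans) * tr

onsets = np.arange(10, 380, 20.0)           # assumed sound-onset times (s)
attribute = np.random.rand(onsets.size)     # assumed per-sound perceptual attribute value

def hrf(t):
    """Very rough canonical-HRF stand-in (gamma-shaped bump peaking ~5 s)."""
    t = np.clip(t, 0.0, None)
    return (t ** 5) * np.exp(-t) / 120.0

def convolve_events(weights):
    """Sum weighted HRF responses, one per event onset."""
    reg = np.zeros(n_scans)
    for onset, w in zip(onsets, weights):
        reg += w * hrf(frame_times - onset)
    return reg

main_effect = convolve_events(np.ones(onsets.size))
modulator = convolve_events(attribute - attribute.mean())  # mean-centred modulator

design = np.column_stack([main_effect, modulator, np.ones(n_scans)])
# Fitting `design` voxelwise, a reliably nonzero weight on `modulator`
# marks a region whose response scales with the attribute.
```

Mean-centring the modulator keeps the main event regressor interpretable as the average response, while the modulator captures attribute-driven variation around it.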
-
Cardiorespiratory fitness and the flexible modulation of cognitive control in preadolescent children.
The influence of cardiorespiratory fitness on the modulation of cognitive control was assessed in preadolescent children separated into higher- and lower-fit groups. Participants completed compatible and incompatible stimulus-response conditions of a modified flanker task, consisting of congruent and incongruent arrays, while ERPs and task performance were concurrently measured. Findings revealed decreased response accuracy for lower- relative to higher-fit participants, with a selectively larger deficit in the incompatible stimulus-response condition, which required the greatest amount of cognitive control. ⋯ Neuroelectric measures indicated that higher-fit, relative to lower-fit, participants exhibited global increases in P3 amplitude and shorter P3 latency, as well as greater modulation of P3 amplitude between the compatible and incompatible stimulus-response conditions. Similarly, higher-fit participants exhibited smaller error-related negativity (ERN) amplitudes in the compatible condition and greater modulation of the ERN between the compatible and incompatible conditions, relative to lower-fit participants, who exhibited large ERN amplitudes across both conditions. These findings suggest that lower-fit children may have more difficulty than higher-fit children in the flexible modulation of cognitive control processes to meet task demands.
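For readers unfamiliar with the neuroelectric measures, P3 amplitude/latency and ERN amplitude are typically quantified as peak measures of an averaged ERP waveform within a component-specific search window. The following is a minimal illustrative sketch; the sampling rate, time windows, and stand-in waveform are assumptions rather than this study's parameters (and in practice the ERN is measured from response-locked, not stimulus-locked, averages).

```python
import numpy as np

def peak_measure(erp, times, window, polarity=+1):
    """Return (amplitude, latency) of the most extreme point of the requested
    polarity inside a search window. Illustrative only."""
    mask = (times >= window[0]) & (times <= window[1])
    segment = polarity * erp[mask]
    idx = np.argmax(segment)
    return polarity * segment[idx], times[mask][idx]

# Assumed epoch: -200 to 1000 ms around the event, 500 Hz sampling.
times = np.arange(-0.2, 1.0, 0.002)
erp = np.random.randn(times.size) * 0.5        # stand-in averaged waveform (uV)

# P3: largest positivity, searched here in an assumed 300-600 ms window.
p3_amp, p3_lat = peak_measure(erp, times, (0.3, 0.6), polarity=+1)

# ERN: largest negativity shortly after the (response-locked) error, assumed 0-100 ms.
ern_amp, ern_lat = peak_measure(erp, times, (0.0, 0.1), polarity=-1)

print(f"P3: {p3_amp:.2f} uV at {p3_lat * 1000:.0f} ms")
print(f"ERN: {ern_amp:.2f} uV at {ern_lat * 1000:.0f} ms")
```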
-
The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of competitive neural interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. ⋯ First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals under high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
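In a frequency-tagging design, each stimulus stream flickers (or is amplitude-modulated) at its own fixed rate, and the amplitude of the EEG spectrum at that rate indexes the strength of that stream's cortical representation. The sketch below illustrates the basic extraction step on synthetic data; the tag frequencies, epoch length, and sampling rate are assumptions, not the study's values.

```python
import numpy as np

# Illustrative sketch: recover frequency-tagged steady-state amplitudes from
# a single EEG epoch. The 7.5 Hz / 12 Hz tags and 512 Hz sampling rate are
# assumptions for the example, not the study's parameters.
fs = 512                                # sampling rate (Hz), assumed
duration = 4.0                          # epoch length (s), assumed
t = np.arange(0, duration, 1 / fs)

task_tag, distractor_tag = 7.5, 12.0    # tag frequencies (Hz), assumed
eeg = (1.0 * np.sin(2 * np.pi * task_tag * t)
       + 0.4 * np.sin(2 * np.pi * distractor_tag * t)
       + np.random.randn(t.size))       # synthetic signal + broadband noise

# Amplitude spectrum; a 4 s epoch gives 0.25 Hz resolution, so both tag
# frequencies fall exactly on FFT bins.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for label, f in [("task", task_tag), ("distractor", distractor_tag)]:
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"{label} tag at {f} Hz: amplitude ~ {amp:.2f}")
```

Comparing the tagged distractor amplitude across load conditions is, in essence, how a reduction in distractor signals under high perceptual load would be quantified.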
-
Repetitive TMS (rTMS) provides a noninvasive tool for modulating neural activity in the human brain. In healthy participants, rTMS applied over language-related areas in the left hemisphere, including the left posterior temporal area of Wernicke (LTMP) and the inferior frontal area of Broca, has been shown to affect performance on word recognition tasks. To investigate the neural substrate of these behavioral effects, off-line rTMS was combined with fMRI acquired during performance of a word recognition task. ⋯ Our results showed that rTMS increased the task-related fMRI response in the homologue areas contralateral to the stimulated sites. We also found an effect of rTMS on response time for the LTMP group only. These findings provide insight into changes in neural activity in cortical regions connected to the stimulated site and are consistent with a hypothesis, raised in a previous review, that homologue areas in the contralateral hemisphere help preserve behavior after neural interference.
-
During speech communication, visual information may interact with the auditory system at various processing stages. Most notably, recent magnetoencephalography (MEG) data provided the first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. ⋯ Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual-phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables--disambiguated to /pa/ or /ta/ by the visual channel (a speaking face)--served as test materials, alongside various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded both to speech and nonspeech motion. (ii) The inferior frontal and the fusiform gyrus of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) on hemodynamic activation during presentation of speaking faces. Taken together with the previous MEG data, these results point to a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal "what" path may give rise to direct activation of "auditory objects." On the other hand, visual speech information seems to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information such as the McGurk effect.
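For context, "AV-subadditive" refers to the common multisensory criterion that compares the audiovisual response against the sum of the unimodal responses; activation is subadditive where the AV response is smaller than the auditory-only plus visual-only responses. A minimal sketch of that comparison on hypothetical per-voxel condition estimates follows; the beta values are made up for illustration.

```python
import numpy as np

# Hypothetical per-voxel response estimates for three conditions:
# audiovisual (AV), auditory-only (A), and visual-only (V).
beta_av = np.array([1.8, 2.5, 0.9])
beta_a = np.array([1.2, 1.0, 0.4])
beta_v = np.array([0.9, 0.8, 0.2])

# Subadditive: the AV response falls short of the sum of unimodal responses.
subadditive = beta_av < (beta_a + beta_v)
print(subadditive)   # [ True False False ]
```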