Neuroscience
-
Editorial Comment
Potential of Mirror Rehabilitation Therapy in Stroke Outcome.
-
Perception deals with temporal sequences of events, such as series of phonemes in audition, dynamic changes in pressure in tactile textures, or moving objects in vision. Memory processes are therefore needed to make sense of the temporal patterning of sensory information. Recently, we showed that auditory temporal patterns can be learned rapidly and incidentally through repeated exposure [Kang et al., 2017]. ⋯ Results showed that, if a random temporal pattern re-occurred at random times during an experimental block, it was rapidly learned, regardless of the sensory modality. Moreover, patterns first learned in the auditory modality showed transfer of learning to either touch or vision. This suggests that sensory systems may be exquisitely tuned to incidentally learn re-occurring temporal patterns, with possible cross-talk between the senses.
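As a concrete illustration of this kind of paradigm, the sketch below builds one experimental block in which a single randomly drawn temporal pattern re-occurs among freshly drawn ones. It is a minimal sketch only: the function names, interval ranges, and trial counts are illustrative assumptions, not parameters taken from Kang et al. [2017].

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pattern(n_events=6, min_ioi=0.1, max_ioi=0.5):
    """Draw a random temporal pattern as a sequence of inter-onset intervals (seconds)."""
    return rng.uniform(min_ioi, max_ioi, size=n_events)

def build_block(n_trials=60, p_reoccur=0.5):
    """Build one block: a single 'target' pattern re-occurs on a random subset
    of trials; all other trials use freshly drawn, never-repeated patterns."""
    target = make_pattern()
    trials = []
    for _ in range(n_trials):
        if rng.random() < p_reoccur:
            trials.append(("target", target))         # the re-occurring pattern
        else:
            trials.append(("novel", make_pattern()))  # a pattern heard only once
    return target, trials

target, trials = build_block()
print(sum(label == "target" for label, _ in trials), "re-occurrences of the target pattern")
```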
-
In everyday listening environments, a main task for the auditory system is to follow one of multiple speakers talking simultaneously. The present study was designed to find electrophysiological indicators of two central processes involved: segregating the speech mixture into distinct speech sequences corresponding to the two speakers, and then attending to one of those sequences. We generated multistable speech stimuli designed to create ambiguity as to whether one or two speakers were talking. ⋯ In the latter case, they reported which speaker was in their attentional foreground. Our data show a long-lasting event-related potential (ERP) modulation starting 130 ms after stimulus onset, which can be explained by the perceptual organization of the two speech sequences into an attended foreground and an ignored background stream. Our paradigm extends previous work with pure-tone sequences toward speech stimuli and makes it possible to obtain neural correlates of the difficulty of segregating a speech mixture into distinct streams.
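The contrast described here, attended-foreground versus ignored-background epochs and their difference wave from 130 ms onward, can be sketched as follows. This is a minimal illustration with simulated data; the array names, trial counts, and sampling rate are assumptions, not details of the actual recording or analysis pipeline.

```python
import numpy as np

# Illustrative only: these arrays stand in for baseline-corrected EEG epochs
# (trials x samples) sorted by which speaker the listener reported as being
# in the attentional foreground.
fs = 500                                   # assumed sampling rate in Hz
n_trials, n_samples = 100, 400
rng = np.random.default_rng(1)
epochs_foreground = rng.normal(size=(n_trials, n_samples))
epochs_background = rng.normal(size=(n_trials, n_samples))

erp_fg = epochs_foreground.mean(axis=0)    # average over trials -> ERP
erp_bg = epochs_background.mean(axis=0)
difference_wave = erp_fg - erp_bg          # foreground-minus-background modulation

onset_sample = int(0.130 * fs)             # 130 ms after stimulus onset
print("Mean difference from 130 ms onward:",
      difference_wave[onset_sample:].mean())
```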
-
Review
Automatic frequency-shift detection in the auditory system: A review of psychophysical findings.
The human brain has the task of binding successive sounds produced by the same acoustic source into a coherent perceptual stream, and binding must be selective when several sources are concurrently active. Binding appears to obey a principle of spectral proximity: pure tones close in frequency are more likely to be bound than pure tones with remote frequencies. It has been hypothesized that the binding process is realized by automatic "frequency-shift detectors" (FSDs), comparable to the detectors of spatial motion in the visual system. ⋯ A number of variants of this study have been performed since 2005 to confirm the existence of FSDs, to characterize their properties, and to clarify as far as possible their neural underpinnings. The results obtained so far suggest that the FSDs exploit an implicit sensory memory with substantial capacity and retention time. Tones within chords can be perceptually enhanced by small frequency shifts, in a manner suggesting that the FSDs can serve in auditory scene analysis not only as binding tools but also, to a limited extent, as segregation tools.
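The chord-plus-probe logic behind such FSD experiments can be sketched as below: a chord of random pure-tone frequencies is followed by a probe tone that is one chord component shifted slightly up or down, and the listener judges the shift direction. The component count, frequency range, and shift size here are illustrative assumptions, not the values used in the original studies.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_fsd_trial(n_components=5, shift_semitones=0.5,
                   low=300.0, high=3000.0):
    """One trial of a chord-plus-probe paradigm: a chord of random pure-tone
    frequencies is followed by a probe that is one chord component shifted
    slightly up or down in frequency."""
    # Random chord components, log-uniform between 'low' and 'high' Hz
    chord = np.exp(rng.uniform(np.log(low), np.log(high), size=n_components))
    source = rng.choice(chord)            # chord component the probe is derived from
    direction = rng.choice([-1, 1])       # shift direction (the listener's judgement)
    probe = source * 2 ** (direction * shift_semitones / 12.0)
    return chord, probe, direction

chord, probe, direction = make_fsd_trial()
print("chord (Hz):", np.round(chord), "probe (Hz):", round(probe, 1),
      "shift:", "up" if direction > 0 else "down")
```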
-
Comparative Study
Auditory and visual sequence learning in humans and monkeys using an artificial grammar learning paradigm.
Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory modality. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between humans and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. ⋯ Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans.
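A toy example of the kind of finite-state grammar underlying AGL tasks is sketched below: sequences are generated by walking the grammar's transitions, and test sequences are classified as grammatical or ungrammatical. The specific grammar and element labels are invented for illustration and are not those used in this study.

```python
import random

# A toy finite-state grammar, purely illustrative: each state maps to
# (element, next-state) transitions.
GRAMMAR = {
    "S": [("A", "X"), ("C", "Y")],
    "X": [("D", "Y"), ("C", "END")],
    "Y": [("G", "END"), ("F", "X")],
}

def generate_sequence():
    """Generate one grammatical sequence by walking the grammar's transitions."""
    state, seq = "S", []
    while state != "END":
        element, state = random.choice(GRAMMAR[state])
        seq.append(element)
    return seq

def is_grammatical(seq):
    """Check whether a sequence could have been produced by the grammar."""
    states = {"S"}
    for element in seq:
        states = {nxt for s in states
                  for el, nxt in GRAMMAR.get(s, []) if el == element}
        if not states:
            return False
    return "END" in states

print(generate_sequence())                 # e.g. ['A', 'D', 'G']
print(is_grammatical(["A", "D", "G"]))     # True
print(is_grammatical(["G", "A"]))          # False: violates the ordering rules
```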