Perceptual organization of auditory streaming-task relies on neural entrainment of the stimulus-presentation rate: MEG evidence

Authors: Chakalov, Ivan
Draganova, Rossitza
Wollbrink, Andreas
Preißl, Hubert
Pantev, Christo
Faculty/Department: FB 05: Medizinische Fakultät (Faculty of Medicine)
Document type: Article
Media type: Text
Year of publication: 2013
Published in MIAMI: 25.02.2014
Last modified: 08.09.2022
Edition: [Electronic ed.]
Source: BMC Neuroscience 14 (2013) 120
Keywords: MEG; Time-frequency spectrum; Auditory scene analysis; Task-driven entrainment
Subject (DDC): 610: Medicine and health
License: CC BY 2.0
Language: English
Notes: Funded by the Open Access Publication Fund 2013/2014 of the German Research Foundation (DFG) and the University of Münster (WWU Münster).
Format: PDF document
URN: urn:nbn:de:hbz:6-84309662484
Other identifiers: DOI: 10.1186/1471-2202-14-120
Permalink: https://nbn-resolving.de/urn:nbn:de:hbz:6-84309662484
Online access: 1471-2202-14-120.pdf

Background: Humans are able to extract regularities from complex auditory scenes in order to form perceptually meaningful elements. It has been shown previously that this process depends critically on both the temporal integration of the sensory input over time and the degree of frequency separation between concurrent sound sources. Our goal was to examine the relationship between these two aspects by means of magnetoencephalography (MEG). To this end, we combined time-frequency analysis at the sensor-space level with source analysis. Our paradigm consisted of asymmetric ABA-tone triplets in which the B-tones were presented temporally closer to the first A-tones, providing different tempi within the same sequence. Participants attended to the slowest B-rhythm while the frequency separation between tones was manipulated (0, 2, 4 and 10 semitones).

Results: The asymmetric ABA-triplets spontaneously elicited periodic sustained responses corresponding to the temporal distribution of the A-B and B-A tone intervals in all conditions. Moreover, when participants attended to the B-tones, the neural representations of both the A- and B-streams were detectable in the conditions that allow perceptual streaming (2, 4 and 10 semitones). In addition, the steady-state responses tuned to the presentation of the B-tones were significantly enhanced as the frequency separation between tones increased. In the 10-semitone condition, the B-tone-related steady-state responses dominated the A-tone responses; conversely, the representation of the A-tones dominated that of the B-tones in the 2- and 4-semitone conditions, in which greater effort was required to complete the task. Additionally, the P1 component of the evoked fields following the B-tones increased in magnitude as the inter-tonal frequency difference increased.
Conclusions: The enhancement of the evoked fields in source space, together with the B-tone-related activity in the time-frequency results, likely reflects the selective enhancement of the attended B-stream. The results also suggest that the efficiency of the temporal integration of separate streams differs depending on the degree of frequency separation between the sounds. Overall, the present findings suggest that the neural effects of auditory streaming can be captured directly in the time-frequency spectrum at the sensor-space level.