Solving the puzzle of natural auditory object perception

Neural mechanisms in humans and animal models

In natural auditory environments, most people effortlessly separate relevant sound objects from the auditory background. Yet many, such as people with hearing impairments, struggle considerably in such settings. Furthermore, in certain disease states, internal sounds (e.g., tinnitus) or external sounds or sound objects preoccupy the mind to the detriment of normal function. To characterise and treat these impairments, we ultimately need to understand their neuronal-level mechanisms. In this project, we will collect brain-imaging data (electroencephalography and functional magnetic resonance imaging) during complex naturalistic auditory scene analysis with different types of naturalistic auditory objects (e.g., human speech or animal sounds). Our work aims to establish the system-level brain mechanisms underlying auditory object analysis in humans and their compatibility with animal models, in order to ultimately understand how neurons process auditory objects in acoustic scenes.

This research is supported by the Research Council of Finland.

Current results of the project

Selective attention enables the separation of overlapping speech in noisy environments. In this study, Patrik Wikman, Viljami Salmela and co-authors used EEG-fMRI fusion with continuous audiovisual cocktail-party speech (see the pre-print of the article here) to show that attention acts by routing neural processing through recurrent feedback-feedforward loops between nodes of the speech network.

See videos of the results here and here.