Speech is uniquely human: it is the foundation of communication, knowledge dissemination, and cultural activities. Despite its complexity, human infants appear to learn their native language effortlessly during the first years of life. Developmental disorders of language may hamper this crucial phase, which can adversely affect later stages of development. Our longitudinal DyslexiaBaby study follows language development from birth until school age and determines the effects of inherited dyslexia risk with neurophysiological, neuroimaging, and neuropsychological tests, as well as questionnaires and DNA samples. Moreover, we determine whether language learning can be supported in early infancy with a music-based intervention.
Music brings joy and inspiration to our daily lives. These positive effects of listening to, playing, and singing music can be part of childhood and adolescence. In our studies, we have shown how active training and learning to play a musical instrument can also advance auditory neurocognition and higher-order cognition such as executive functions. We have also shown how weekly participation in a music play school in day care can facilitate the language functions of typically developing children; notably, these effects emerged only after the children had attended the music play school for two years (one year was not sufficient for us to draw this conclusion). Auditory learning starts already prior to birth, and its effects can be revealed by brain recordings. In children, the development of skills related to understanding speech and music is of great interest because possible impairments of hearing abilities can then be detected early. Our current projects aim at understanding the normal development of these abilities as well as developing further means to support the development of speech and music skills in children with hearing loss.
In the human brain, music is closely linked to many auditory, cognitive, verbal, motor, and emotional functions, which undergo changes during normal ageing and in many ageing-related neurological disorders. Music can be a powerful stimulus and a versatile rehabilitation tool for the ageing brain. In our research, we use a combination of behavioural, neurophysiological, and neuroimaging methods to explore (i) how ageing and neurological disorders such as stroke, aphasia, and dementia affect the ability to perceive, experience, and produce music; (ii) whether regular musical activities and music-based rehabilitation can aid cognitive, verbal, and motor functioning and emotional and social well-being both in healthy ageing and in neurological disorders; and (iii) which neural and cognitive mechanisms drive the effects of music in the ageing and recovering brain.
We have several electroencephalography (EEG) laboratories with shielded rooms, mobile EEG recording units, a transcranial magnetic stimulator integrated with EEG (TMS/EEG), and a laboratory for motion capture. The stimulus systems have been optimized and measured to meet strict requirements, e.g., for temporal accuracy. For technical questions, please contact our laboratory engineers.
EEG is a functional brain research technique that records the electrical activity generated by neurons, typically with electrodes attached to the scalp. EEG is often recorded to study ERPs (see below). It offers good temporal resolution, but the exact sources of the measured brain activity are challenging to determine. It is a safe, cost-effective, and easy method that is in heavy clinical and scientific use around the world, even in studies of newborn infants. Event-related potentials (ERPs) are electric responses of the brain calculated from the EEG signal. They are event-related, i.e., they depict the brain's activity in response to an internal or external stimulus, such as a sound. We mainly use Presentation (by Neurobehavioral Systems) and Psychtoolbox for presenting stimuli.
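To illustrate how an ERP is obtained from the continuous EEG, the sketch below (plain NumPy; the function name and arguments are hypothetical and not our actual analysis pipeline) cuts stimulus-locked epochs around each event, baseline-corrects them to the pre-stimulus interval, and averages them so that activity unrelated to the stimulus tends to cancel out.

```python
import numpy as np

def compute_erp(eeg, event_samples, sfreq, tmin=-0.1, tmax=0.5):
    """Average stimulus-locked EEG epochs into an ERP.

    eeg           : array (n_channels, n_samples), continuous EEG
    event_samples : sample indices of stimulus onsets
    sfreq         : sampling rate in Hz
    tmin, tmax    : epoch window around each onset, in seconds
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for onset in event_samples:
        lo, hi = onset + start, onset + stop
        if lo < 0 or hi > eeg.shape[1]:        # skip events too close to the recording edges
            continue
        epoch = eeg[:, lo:hi]
        baseline = epoch[:, :-start].mean(axis=1, keepdims=True)  # mean of the pre-stimulus samples
        epochs.append(epoch - baseline)        # baseline-correct each epoch
    return np.mean(epochs, axis=0)             # ERP: (n_channels, n_times)
```

In practice, filtering and artefact rejection are also applied before averaging, and dedicated EEG analysis toolboxes are used rather than a hand-written loop like this.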
In TMS, an electric current induced by a magnetic pulse is used to stimulate neural tissue. With suitable stimulation parameters, it is possible to either inhibit or excite neural function. When TMS is combined with EEG, functional connections between brain areas can be investigated. TMS even makes it possible to determine causal relations between brain and behaviour. We use a navigated TMS device by Nexstim.
We use a variety of behavioural research methods such as neuropsychological tests, interviews, and questionnaires. Neuropsychological tests assess different aspects of cognitive functioning such as memory, attention, and language skills. Assessments can be conducted at our laboratories or in online testing environments. We use Presentation and PsychoPy for on-site studies and jsPsych / JATOS for online studies.
Motion capture allows the recording of both gross and fine movements, for example in dance and gait. We use a Qualisys motion capture setup with 8x Miqus M3 cameras, 2x Miqus colour video cameras, and a camera sync unit.
An eye tracker is a camera-based device that tracks the pupil and its movements relative to the head. With an eye tracker, the point of gaze, the micro-movements of the eyes, and changes in pupil size can be measured. We currently use the EyeLink 1000 Plus eye tracker by SR Research.
With autonomic nervous system (ANS) recording devices, one can collect signals reflecting the activity of the ANS, which is influenced by, for example, emotional states. ANS measures include heart rate (and its variability), skin conductance, body temperature, and the tonus of (facial) muscles. ANS recordings can be used together with other methods such as (f)MRI.
The roots of CBRU lie in the beginning of the 1980s, when Professor Risto Näätänen (1939–2023) founded the Psychophysiology Research Group at the Department of Psychology, University of Helsinki. CBRU evolved from this group and was officially founded in 1995, when it was granted Centre of Excellence (CoE) status by the Academy of Finland. Initially, research at CBRU focused on two cerebral responses: the mismatch negativity (MMN) and the processing negativity (PN), both discovered by Näätänen et al. (1978). The MMN reflects a change-detection mechanism and indexes cortical sound discrimination accuracy, whereas the PN indicates how the brain selects relevant stimuli for further processing. Due to its broad applicability in the field of cognitive neuroscience, the MMN has become a popular tool worldwide. It can be applied to a variety of groups, including patients, infants, and even fetuses, and it can be recorded even from inattentive participants. Already by 2004, an estimated 1,000 publications in international refereed journals had reported using this brain response. During the past ten years, the research scope of CBRU has widened considerably. Currently, CBRU research covers various neurocognitive functions underlying human learning and rehabilitation.
The MMN is elicited irrespective of the subject's or patient's attention or behavioural task, which implies an automatic comparison between the current input and the representation, or memory trace, of the preceding auditory events (for a comprehensive review, see Näätänen et al., 2019). This change-detection process occurs unconsciously in the auditory cortices (generating the auditory-cortex subcomponent of the MMN). However, with a very short delay, it activates frontal-cortex mechanisms (generating the frontal subcomponent of the MMN) that control the direction of attention, leading to an attention switch to, and conscious perception of, the sound change. Thus, the MMN is also involved in initiating a cerebral warning mechanism of great biological significance.
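In terms of analysis, the MMN is typically quantified as the difference between the ERP elicited by the deviant sounds and the ERP elicited by the standard sounds. The minimal sketch below reuses the hypothetical compute_erp helper from the EEG section above and synthetic data; all variable names are illustrative assumptions.

```python
import numpy as np

# Synthetic stand-ins for a real recording (names are assumptions, not our data files).
sfreq = 500                                          # sampling rate in Hz
eeg = np.random.randn(32, 120 * sfreq) * 1e-6        # 32 channels, 2 min of fake EEG
onsets = np.arange(sfreq, 118 * sfreq, sfreq // 2)   # one stimulus every 0.5 s
is_deviant = np.random.rand(onsets.size) < 0.10      # ~10% deviants, as in an oddball block

erp_standard = compute_erp(eeg, onsets[~is_deviant], sfreq)
erp_deviant = compute_erp(eeg, onsets[is_deviant], sfreq)

# The MMN is the deviant-minus-standard difference wave, typically peaking
# roughly 100-250 ms after change onset at fronto-central electrodes.
mmn_wave = erp_deviant - erp_standard                # (n_channels, n_times)
```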
While the basic mechanism of the MMN is simple, with appropriate experimental manipulations it becomes a versatile tool for investigating various aspects of auditory perception and attention. Here are some examples of the applicability of the MMN in neuroscience studies:
Traditionally, the MMN has been recorded with an oddball paradigm, which typically includes a repetitive standard stimulus (e.g., a 1000 Hz tone, 90% probability) occasionally replaced by a deviant stimulus (e.g., 1100 Hz, 10% probability). This approach is very time-consuming and always involves a trade-off between MMN signal quality and the amount of information obtained (the number of different deviant types for which the MMN is recorded). It is not optimal especially for investigating patient groups and young children, for whom recording times should be kept minimal while EEG trial loss may be high due to, e.g., movements.
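To make the probabilities concrete, the sketch below (a hypothetical helper, not our actual stimulus scripts) generates a pseudorandom oddball trial sequence with a 90% standard and a 10% deviant, with the common constraint that each deviant is preceded by at least one standard; such a sequence would then be handed to a presentation program.

```python
import random

def oddball_sequence(n_trials=500, p_deviant=0.10, seed=1):
    """Pseudorandom oddball sequence of 'std' (e.g., 1000 Hz) and 'dev' (e.g., 1100 Hz) trials.

    Each deviant is preceded by at least one standard, so the realised
    deviant proportion ends up slightly below p_deviant.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "dev":       # enforce a standard after every deviant
            seq.append("std")
        elif rng.random() < p_deviant:
            seq.append("dev")
        else:
            seq.append("std")
    return seq

sequence = oddball_sequence()
print(sequence[:15])
print("deviant proportion:", sequence.count("dev") / len(sequence))
```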
To overcome these problems, Näätänen et al. (2004) developed a new multi-feature MMN paradigm, initially called “Optimum-1”, with which the MMN to five different types of deviant sounds can be recorded in the same time as the MMN to a single deviant type in the oddball paradigm described above. In such a multi-feature sequence, the deviants alternate with the standard stimulus, and each deviant differs from the rest of the stimuli in one feature only. The rationale of this paradigm is that besides serving as a deviant, each deviant stimulus type also strengthens the memory trace for the features it shares with the rest of the stimuli, thereby also acting as a “standard”. For example, if only the frequency of a sound changes, this sound still strengthens the memory traces for sound duration, intensity, and location.
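As an illustration of this sequence structure (standards and deviants alternating, with consecutive deviants differing in type), the sketch below generates such a trial list; the feature labels and the no-immediate-repetition constraint are illustrative assumptions rather than the published stimulus lists.

```python
import random

# Deviant categories of a multi-feature ("Optimum-1") style sequence; each is
# assumed to differ from the standard in one feature only (illustrative labels).
DEVIANT_TYPES = ["frequency", "intensity", "duration", "location", "gap"]

def optimum1_sequence(n_pairs=150, seed=1):
    """Alternating standard/deviant sequence in which consecutive deviants differ in type."""
    rng = random.Random(seed)
    seq, prev = [], None
    for _ in range(n_pairs):
        seq.append("standard")                                  # every other stimulus is the standard
        deviant = rng.choice([d for d in DEVIANT_TYPES if d != prev])
        seq.append(deviant)
        prev = deviant
    return seq

print(optimum1_sequence()[:10])
```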
Building on the multi-feature paradigm described above, even more cognitively demanding “musical” MMN stimulation paradigms were developed to probe the neural basis of musical skills. The first of these paradigms was based on the idea that multiple acoustic features are encoded in parallel and thus together underlie the generation of the MMN. There, the tones of a given triad chord were presented in a looped manner in the order “lowest, highest, middle, highest”. In this paradigm, the recording time was less than 15 min for a total of six different deviants, making data collection considerably faster than in traditional paradigms. In the melodic multi-feature paradigm developed by Prof. Huotilainen, a looped 2-s melody was used. This melody also included a total of six deviants, three of which modified the structure of the melody across its successive presentations. Here, too, data collection took less than 15 min. For further information, see Tervaniemi (2022).