Our research on multisensory perception concentrates on audiovisual interactions. An important topic is the investigation of the audiovisual nature of speech and its utility in learning. That is, the articulatory movements that produce the auditory signal are partially visible on the talker’s face, and observing them enhances speech comprehension. Even though this is well known, audiovisual speech has been used surprisingly little in speech and language training. Therefore, we aim to chart the instances in which it can offer learning benefits. We also study audiovisual semantic memory, that is, how different combinations of natural visual images and sounds, or spoken and written words, can enhance memory performance.
Principal investigator: Kaisa Tiippana
Prior knowledge affects the way we interpret incoming sensory signals, based on both long-term learning (memory colors) and short-term learning (statistical priors). In my own research, I have discovered that prior knowledge about an object’s identity affects the way we perceive its color (see e.g. Olkkonen, Hansen, & Gegenfurtner, 2008). More recently, I found that prior knowledge acquired over the short term also affects color appearance in delayed color matches (Olkkonen, McCarthy, & Allred, 2014).
The effects of long-term and short-term memory processes on color appearance are not explained by current models of color perception or memory, but they fit well within a probabilistic inference framework based on a Bayesian ideal observer. A Bayesian observer estimates the external cause of an incoming sensory signal by combining the sensory evidence with prior information about the world. Together with Toni Saarela and Sarah Allred, I have implemented a Bayesian model observer that produces interactions between perceptual constancy and short-term memory similar to those we recently observed in human observers for both lightness and hue (Olkkonen & Allred, 2014; Olkkonen, Saarela, & Allred, in prep (OSA 2014 abstract)). My next goal is to extend this model to full-color scenes and to test it with a new, independent data set.
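As a rough illustration of this idea (a minimal sketch, not the published model), the code below combines a Gaussian likelihood with a Gaussian prior: the resulting estimate is a reliability-weighted average, so noisier sensory evidence, for instance after a memory delay, pulls the percept towards the prior. All function names and parameter values are illustrative.

def bayesian_estimate(measurement, sigma_sensory, prior_mean, sigma_prior):
    # Weight on the sensory measurement: the less noisy it is relative to
    # the prior, the closer the weight is to 1.
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)
    return w * measurement + (1 - w) * prior_mean

# Illustrative values only: a hue measurement of 30 (arbitrary units) with low
# sensory noise stays near 30; with high noise (as after a memory delay) the
# estimate is drawn towards the prior mean of 20.
print(bayesian_estimate(30.0, sigma_sensory=1.0, prior_mean=20.0, sigma_prior=5.0))
print(bayesian_estimate(30.0, sigma_sensory=10.0, prior_mean=20.0, sigma_prior=5.0))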
Principal investigator: Maria Olkkonen
One of the core processes in our cognitive system is working memory, a capacity-limited, temporary memory store that requires active maintenance of the remembered material. Behavioural studies suggest that memory limitations arise from the limited precision of stored representations, and neuroimaging studies have localized visual memory representations to the same visual cortical areas that are used for perceiving those features. Most of these studies have investigated memory precision and representations of primary visual features and shapes. In this project, we study neural representations of faces and the precision of face memory.
We use psychophysics, computational modelling, neuroimaging (both EEG and fMRI), and multivariate data analysis. We measure memory precision for facial features, face identity and facial emotions using morphed faces. We use multivariate pattern analysis to study face representations during perception and memory maintenance. Finally, we will test the role of face memory precision in prosopagnosia.
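As a minimal illustration of the general multivariate pattern analysis approach (a sketch, not our actual analysis pipeline), the code below trains a linear classifier on simulated response patterns; above-chance cross-validated accuracy would indicate that the patterns carry information about the decoded feature. The data, labels and parameters are made up for illustration.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Simulated stand-in for fMRI response patterns: 80 trials x 200 voxels,
# with a weak signal separating two face identities (labels 0 and 1).
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 40)
patterns = rng.normal(size=(80, 200))
patterns[labels == 1, :20] += 0.5  # small response difference in 20 "voxels"

# Cross-validated decoding accuracy above chance (0.5) is taken as evidence
# that the patterns carry information about face identity.
accuracy = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=5).mean()
print(f"Mean decoding accuracy: {accuracy:.2f}")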
Principal investigator: Viljami Salmela
The group is interested in several topics related to motor planning processes in general, and to sensory-motor processes involved in action planning in particular. These studies have mainly used behavioural methods, but EEG, TMS and fMRI have also been used to explore these issues. The studies can be roughly divided into two main research lines: 1) interactions between sensory and hand motor processes, and 2) speech-related motor planning. Further details of the papers cited below can be found here.
These studies have shown automatic motor activation of manual responses triggered by the visual presentation of orientation affordances (Vainio, Ellis, & Tucker, 2007), size affordances (e.g., Vainio, Ellis, Tucker & Symes, 2008), grip type (Vainio, Tucker & Ellis, 2007), hand identity (Vainio & Mustonen, 2011), and gaze direction (Vainio et al., 2014), as well as by the auditory presentation of spatially oriented tones (Paavilainen et al., 2016).
Some of these studies have focused on the inhibition mechanisms related to motor activation triggered by perceived stimuli. We have shown, for example, that hand motor activation is immediately inhibited if the activation is triggered by a distractor object (Ellis et al., 2007), or if the object that triggers the activation is removed from the display prior to response onset (e.g., Vainio et al., 2013) or during the hand movement (Vainio, 2013).
This line of research has also investigated how motor planning of manual actions influences perceptual processes. We have shown, for example, that performance of precision and power grasps modulates activation in the auditory cortex (Wikman, Vainio & Rinne, 2015), and that preparation of these grasp types facilitates detection of visually presented objects whose size is congruent with the prepared grasp type (Symes et al., 2008).
This line of research has focused on investigating how the processes involved in planning articulatory gestures and hand actions are integrated. We have shown, for example, that EMG responses of hand muscles are increased by TMS applied to the hand motor area while participants pronounce meaningless syllables (Komeilipoor et al., 2016). Furthermore, we have shown that processes involved in planning certain articulatory gestures are systematically connected to processes involved in planning grasp actions (e.g., Vainio et al., 2013) and forward-backward hand movements (Vainio et al., 2015). Finally, we have shown that perception of different object shapes (round vs. sharp) can modulate processes involved in planning articulatory gestures (Vainio et al., 2016): production of rounded vowels and the voiced bilabial consonant is facilitated by round shapes, whereas production of unrounded vowels and the alveolar stop consonant is facilitated by sharp shapes.
Principal investigator: Lari Vainio
In normal human visual behaviour, stimuli are mainly brought to the visual system by eye movements. In our group, eye movements have recently been studied in relation to financial penalties (with Jan Theeuwes), coding of object edges (with Mark Georgeson), and retinal cone density (with Austin Roorda).
Principal investigator: Markku Kilpeläinen