Research

How do humans make sense of a continuous, fleeting stream of real-time speech, where what we have just heard is continually replaced by new material? We need to achieve all this within the limitations of working memory. Perception makes use of chunking in many domains, such as perceiving objects or events, and it is reasonable to assume that similar processes are at work in listening to speech. The point of departure for the project Chunking in language: units of meaning and processing (CLUMP) is that in speech, chunking orients to understanding and proceeds by dividing the input into processable units. We hypothesize that chunking is driven by a search for meaning, which is facilitated by linguistic cues at various levels of structure and sound. We further hypothesize that the perceptually most salient boundaries coincide with places where linguistic cues converge. CLUMP combines research from linguistics, cognitive science, and neuroscience.

We start from a theoretical model of online chunking, Linear Unit Grammar (LUG), a linguistic model based on the linearity of continuous speech. We use experimental methods that tap both the behavioural and the neural correlates of speech reception. For the behavioural side, a web application has been developed that enables participants to hear speech extracts and chunk them up by tapping to mark chunk boundaries. For the neural side, we use brain imaging with magnetoencephalography (MEG). With these methods, we are exploring

  1. the cognitive plausibility of the model and its neural correlates
  2. the segmentation of the speech stream in relation to meaning, and
  3. the properties of different outcomes of the chunking process (meaning, syntax, prosody).

The project is supported by a grant from the Finnish Cultural Foundation (Suomen Kulttuurirahasto, SKR).