How to cite this WIREs title:
WIREs Cogn Sci
Impact Factor: 3.476

Shared neural and cognitive mechanisms in action and language: The multiscale information transfer framework


This review compares how humans process action and language sequences produced by other humans. On the one hand, we identify commonalities between action and language processing in terms of cognitive mechanisms (e.g., perceptual segmentation, predictive processing, integration across multiple temporal scales), neural resources (e.g., the left inferior frontal cortex), and processing algorithms (e.g., comprehension based on changes in signal entropy). On the other hand, drawing on sign language with its particularly strong motor component, we also highlight what differentiates (both oral and signed) linguistic communication from nonlinguistic action sequences. We propose the multiscale information transfer framework (MSIT) as a way of integrating these insights and highlight directions in which future empirical research inspired by the framework might fruitfully evolve.

This article is categorized under:
Psychology > Language
Linguistics > Language in Mind and Brain
Psychology > Motor Skill and Performance
Psychology > Prediction
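The idea of comprehension driven by changes in signal entropy can be illustrated with a minimal toy sketch (purely illustrative, not the framework's actual model): in a stream of symbols, the entropy of the distribution over possible continuations rises at chunk boundaries, where many continuations are possible, so a segmentation point can be posited there.

```python
import math
from collections import Counter, defaultdict

def next_entropy(sequence):
    """For each position, the Shannon entropy (in bits) of the distribution
    of elements that follow the current element anywhere in the sequence."""
    followers = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        followers[a][b] += 1
    entropies = []
    for a in sequence[:-1]:
        counts = followers[a]
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies

# Toy stream built from the recurring chunks "abc" and "xy" (a made-up example):
# entropy is zero inside a chunk and rises at chunk-final "c", where several
# continuations are possible, suggesting a boundary.
stream = list("abcxyabcabcxy")
print(next_entropy(stream))
```

In this toy, every within-chunk transition is deterministic (entropy 0), while the chunk-final element "c" is followed by different symbols on different occasions, yielding positive entropy exactly where the chunk ends.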
Model detailing the multiscale information transfer framework. The sensory signal, which unfolds linearly in time, contains parameters that are processed at multiple scales of resolution in both language comprehension and action observation. The incoming sensory signal is segmented into chunks at multiple scales under the top‐down guidance of the processor's predictions.
fMRI scans showing frontal regions where more surprising base‐suffix combinations (e.g., tear + less) elicit stronger activation than less surprising ones (e.g., worth + less) in masked visual priming (e.g., tear—TEARLESS vs. worth—WORTHLESS). The positive parametric correlation between BOLD signal and surprisal is projected on sections of the canonical Montreal Neurological Institute (MNI) single‐subject template, rendered at a peak‐level threshold of p < 0.001 (uncorrected). Color bars indicate the range of the relevant voxel‐level t‐values. The crosshairs mark the peak voxel (greatest t‐statistic) of a cluster in the right inferior frontal gyrus (pars opercularis).
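Surprisal, the quantity correlated with BOLD signal here, is the negative log probability of an event given its context. A minimal sketch with made-up transition probabilities (not the study's actual estimates) shows why tear + less counts as more surprising than worth + less:

```python
import math

def surprisal(p):
    """Surprisal in bits of an event with probability p."""
    return -math.log2(p)

# Hypothetical conditional probabilities of the suffix "-less" given the base
# (illustrative values only): "worth" takes "-less" often, "tear" only rarely.
p_less_given_worth = 0.40
p_less_given_tear = 0.02

print(surprisal(p_less_given_worth))  # ≈ 1.32 bits
print(surprisal(p_less_given_tear))   # ≈ 5.64 bits: the more surprising combination
```

The lower the conditional probability of the suffix given the base, the higher its surprisal, and (per the caption) the stronger the frontal activation.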
Schematic representation illustrating that identical probabilities of syntagmatic co‐occurrence (e.g., p(verb1 + object1) = 0.20 and p(verb2 + object1) = 0.20) need not amount to equal expectedness, since paradigmatic competition between the available options (here: objects) also plays a role. Thus, object1 is the paradigmatically dispreferred competitor after verb1, but the preferred option after verb2.
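This point can be sketched numerically: an object's expectedness after a verb depends not only on its own co-occurrence probability but on the share of the verb's co-occurrence mass it claims relative to its paradigmatic competitors. The values below are hypothetical, loosely modeled on the figure's 0.20 example (not the figure's full set of numbers):

```python
def relative_expectedness(cooc, verb, obj):
    """Share of a verb's co-occurrence mass claimed by one object,
    i.e., its standing against paradigmatic competitors."""
    competitors = cooc[verb]
    return competitors[obj] / sum(competitors.values())

# Hypothetical co-occurrence probabilities: object1 has the same syntagmatic
# probability (0.20) after both verbs, but faces stronger competitors after verb1.
cooc = {
    "verb1": {"object1": 0.20, "object2": 0.50, "object3": 0.30},
    "verb2": {"object1": 0.20, "object2": 0.05, "object3": 0.05},
}

print(relative_expectedness(cooc, "verb1", "object1"))  # 0.20: dispreferred competitor
print(relative_expectedness(cooc, "verb2", "object1"))  # ≈ 0.67: preferred option
```

Despite identical syntagmatic probabilities, object1 claims two thirds of verb2's co-occurrence mass but only a fifth of verb1's, capturing the asymmetry the schematic depicts.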
ASL and action compared: optical flow spectrograms (a and b) and a power‐law model of signal variability across frequencies (c), indicative of information transfer capacity at the observed scale, which is higher for sign language than for action.
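A power-law model of a spectrum has the form power ∝ frequency^β, and the exponent β is standardly estimated as the slope of a least-squares line in log-log space. A minimal sketch on synthetic data with a known exponent (illustrative only, not the ASL or action signals from the figure):

```python
import math

def power_law_exponent(freqs, power):
    """Estimate beta in power ~ freq**beta as the least-squares
    slope of log(power) against log(freq)."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in power]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic spectrum with a known exponent of -1.5 (made-up data)
freqs = [0.5 + 0.1 * i for i in range(50)]
power = [f ** -1.5 for f in freqs]
print(power_law_exponent(freqs, power))  # ≈ -1.5
```

A shallower (less negative) exponent means variability is spread more evenly across frequencies, which is one way a signal can carry more information per unit time at the observed scale.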

